Sense and Avoid Sensor Selection

Originally, unmanned systems were developed for government and military use. However, in recent decades, there has been a significant increase in small unmanned aerial systems (sUAS) becoming commercially available for a variety of uses, including agriculture, infrastructure inspection, real estate, movie production, and hobbyist activities. With this sudden increase in availability, there has also been an increase in inexperienced operators. The development of sense and avoid systems protects not only the system but also the public from potential incidents. These systems not only assist operators with avoiding obstacles, but also allow the system to operate autonomously and avoid potential collisions when flying a predesignated route. This paper will discuss a commercially available sense and avoid system developed for a sUAS. It will give a detailed explanation of the system, including how it operates, its cost, weight, and power requirements.

The Guidance is a sense and avoid system developed by DJI in 2015 (Snow, 2016). Developed originally for the Matrice 100 platform, the Guidance was later incorporated into the Phantom 4 and then the Matrice 600 series sUAS (Snow, 2016). The Guidance uses a combination of five sets of ultrasonic sensors and stereo cameras to detect objects up to 65 feet away from the vehicle, including objects that are difficult to detect, such as trees and grass (DJI, n.d.a; DJI, n.d.b). The Guidance gives a sUAS high-precision vision positioning, accurate to within five centimeters, without the need for a Global Positioning System (GPS) (DJI, n.d.b). The Guidance has an effective sensor range of 0.2 meters to 20 meters and can detect velocity to within 0.04 meters per second when flying within 2 meters of the ground (DJI, n.d.b). This allows the Guidance to maintain a safe predesignated distance from obstacles automatically, without the assistance of the operator (Snow, 2016).
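The distance-keeping behavior described above can be sketched as a simple threshold check. This is a hypothetical illustration using only the published 0.2-20 m effective sensor range; the standoff distance, command names, and function are invented for illustration and do not represent DJI's actual flight-control logic.

```python
# Hypothetical sketch of predesignated-distance keeping. The 0.2-20 m
# effective range comes from the published specs above; everything else
# (standoff distance, command names) is an illustrative assumption.

SENSOR_MIN_M = 0.2   # below this, the reading is unreliable
SENSOR_MAX_M = 20.0  # beyond this, no obstacle is reported

def avoidance_command(range_m, standoff_m=2.0):
    """Return a simple command based on one sensor's range reading."""
    if range_m is None or range_m > SENSOR_MAX_M:
        return "continue"    # nothing detected within sensor range
    if range_m < SENSOR_MIN_M:
        return "hold"        # too close to trust the reading
    if range_m < standoff_m:
        return "back_off"    # inside the predesignated safe distance
    return "continue"

print(avoidance_command(1.5))   # obstacle inside the 2 m standoff
print(avoidance_command(12.0))  # detected, but outside the standoff
```

The key point the sketch captures is that the vehicle reacts to the standoff distance automatically, with no operator input in the loop.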

The Guidance Kit from DJI comes with several components: one Guidance Core, five Guidance Sensors, five standard VBUS cables to connect the sensors to the core, one long VBUS cable as a spare, and one CAN-Bus cable to connect the Guidance Core to the flight control system (DJI Store, n.d.a). The kit also comes with one micro-USB cable, a USB cable extender, and one UART cable to allow the system to be connected to other computer systems when necessary (DJI Store, n.d.a). The total weight of the system is approximately 337 grams: 64 grams for the Guidance Core, 215 grams for the five sensor sets, and 58 grams for the five VBUS cables (DJI, n.d.c). The system has a maximum power consumption of 12 watts, which includes operating all five sensors simultaneously (DJI, n.d.c). The Guidance is available for purchase from the DJI store for $999 on its own (DJI Store, n.d.a). The Phantom 4 includes the Guidance within its onboard systems, but the Matrice does not come with the Guidance system equipped (Snow, 2016). In order to add the Guidance to the Matrice, a Guidance Connector Kit is needed, which costs an additional $79 (DJI Store, n.d.b).

This system is an ideal choice for several reasons. One of the most significant is that it can provide precise position information without the need for GPS. This means it can act as a redundant sensor in case of a GPS failure onboard the vehicle. The system can also be put onboard any sUAS with USB and UART connection ports, according to an article from Unmanned Aerial Online and the DJI website (Lillian, 2015; DJI, n.d.c). This means that any operator wanting to design their own system can put the Guidance onto their vehicle. Another benefit of the system is its low weight, which keeps it within the payload limits of many sUAS without requiring additional changes to the payload or battery components.


DJI. (n.d.a). Guidance. DJI. Retrieved from

DJI. (n.d.b). Guidance Features. DJI. Retrieved from

DJI. (n.d.c). Guidance Specs. DJI. Retrieved from

DJI Store. (n.d.a). Guidance. DJI Store. Retrieved from

DJI Store. (n.d.b). Matrice 100 – Guidance Connector Kit. DJI Store. Retrieved from

Lillian, B. (2015, June 8). DJI Unveils M100 Drone, Collision Avoidance System. Unmanned Aerial Online. Retrieved from

Snow, C. (2016, September 22). Sense and Avoid for Drones is No Easy Feat. Skylogic Research Drone Analyst. Retrieved from

Control Station Analysis

Unmanned systems are complex combinations of technology assembled to allow humans to perform activities from a different location. This can range from flying and filming the view from several feet overhead to supporting military or research missions in other parts of the world. What makes these tasks possible is not just the unmanned vehicle, but the ground control station (GCS). No matter the domain, the GCS is what connects the operator to the vehicle. In this paper, an in-depth analysis will be provided for a GCS that controls an unmanned system in the maritime domain. This analysis will include the hardware, software, and user interface, along with the data depiction and presentation strategy of the system. The analysis will also include issues and challenges that operators currently deal with, as well as recommended changes or modifications that could improve the system's functionality in the maritime environment.

The System

The Gladius is a commercially available underwater system equipped with either an HD 1080p or 4K camera for personal or research purposes (Gladius Underwater Drone, n.d.). The vehicle is capable of diving to depths of up to 100 meters for up to 3 hours (Gladius Underwater Drone, n.d.). The system is equipped with four thrusters and is neutrally buoyant so that it neither floats nor sinks, leaving the operator full control of how the system moves in the water (Gladius Underwater Drone, n.d.).

The Hardware

The GCS for most unmanned systems is a computer that allows the operator to control the vehicle remotely. The main components of the Gladius GCS are the remote control that comes with the system and the operator's cellular device (Gladius Underwater Drone, n.d.). The remote is similar to the controls for a video game console, especially the Nintendo Switch, since it slides apart to allow the operator to secure their cellular device into the system to view the live video feed from the vehicle (Gladius Underwater Drone, n.d.). The operator is able to control the Gladius through two joysticks: one controls the direction and speed of the vessel while the other controls the camera direction (Gladius Underwater Drone, n.d.). The controller also allows the operator to control the two 1200 lumen LED lights individually by using the buttons at the top of the controller (Gladius Underwater Drone, n.d.). In order to communicate with the underwater system, two other components are needed: the tether and the buoy (Gladius Underwater Drone, n.d.). These two components allow the system to send data and images back to the GCS as well as receive commands from the controller using long-range Wi-Fi communication signals (Gladius Underwater Drone, n.d.).

The Software

The GCS software for the Gladius is a smartphone app that can be downloaded to most Android and iOS devices. This program not only allows the operator to view and control the functions of the onboard camera, but also to save the live video feed and still photos to the cellular device instead of the onboard SD card (Gladius Underwater Drone, n.d.). The screen also displays current system information, including compass heading, horizon tilt, forward and aft pitch, depth, buoyancy status, and battery life (Gladius Underwater Drone, 2017).

Issues and Recommendations

The vehicle is one of the first commercially available systems that is affordable compared to other options, but there are a few issues with the system that could be easily remedied. First, there is no alert notification on the ground controller. Such an alert could be used for several things, including obstacle avoidance, weather, and tide warnings. Part of the solution would be to equip additional sensors on the system for obstacle avoidance, or on the buoy for weather updates and warnings. The second part of the solution is the method of alerting the operator. In addition to adding an alert notification to the software so that it appears on the screen, it is recommended to add a vibration motor to the controller so that the operator feels the notification. This secondary method helps ensure that the operator receives the notification in time to perform the action needed to save the system.
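The dual-channel alert recommendation above can be sketched as a small dispatch routine: every alert is shown on screen, and urgent alert types additionally trigger the controller's vibration. The alert types, function name, and action strings are illustrative assumptions, not part of any actual Gladius software.

```python
# Illustrative sketch of dual-channel alerting: all alerts go to the
# screen; urgent ones also trigger the (hypothetical) vibration motor.
# Alert categories and action names are invented for illustration.

def dispatch_alert(alert_type, message):
    """Route an alert to the screen, and to the controller for urgent types."""
    urgent = {"obstacle", "tide_warning"}
    actions = ["show_on_screen: " + message]
    if alert_type in urgent:
        actions.append("vibrate_controller")
    return actions

print(dispatch_alert("obstacle", "Object 2 m ahead"))
print(dispatch_alert("weather", "Rain expected"))
```

The design point is redundancy: the vibration channel covers the case where the operator is not looking at the screen when the alert appears.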


Gladius Underwater Drone. (2017, April 12). Gladius Depth Test [Video file]. YouTube. Retrieved from

Gladius Underwater Drone. (n.d.). Gladius Submersible Underwater Drone. Indiegogo. Retrieved from

Unmanned Systems Data Protocol and Format

Unmanned systems provide the operator the opportunity to see the world from a completely new first-person perspective. In order to achieve this, data must be generated and sent to the operator, but also stored for later use. This paper will provide a detailed discussion of the data formats, protocols, and storage methods used on the Mavic Pro, a small unmanned aerial system (sUAS) by DJI. It will describe each of the sensors onboard the system, the power and storage they require to operate efficiently, and how they fit together into the overall data plan. Afterwards, an alternative data treatment strategy will be recommended to improve the overall operations of the system and enhance the experience of the operator.

The Mavic Pro is a foldable quadcopter sUAS and is the smallest system on the market from DJI. The Mavic Pro weighs 743 g and measures 83 mm tall, 83 mm wide, and 198 mm long when collapsed in its folded design (DJI, n.d.a). The Mavic Pro has a plethora of sensors onboard, including five cameras (four for the dual forward and downward vision sensors, plus the main camera), dual satellite positioning sensors (the Global Positioning System (GPS) and the Global Navigation Satellite System (GLONASS)), two ultrasonic rangefinders, a compass, and two inertial measurement units (IMUs), each containing an accelerometer and gyroscope (DJI, n.d.a). These sensors work together to allow the Mavic Pro to be operated manually or autonomously while successfully avoiding obstacles, using flight autonomy technology consisting of ultrasonic waves and Time of Flight (ToF) sensors to detect obstacles up to 49 ft away (DJI, n.d.a; DJI, 2016). An Intelligent Flight Battery, a 3S LiPo at 11.4 V, is the only power supply for the entire system and can power it for about 27 minutes of flight time (DJI, n.d.b).

The primary payload and visual system is a stabilized 28 mm camera equipped with a complementary metal-oxide semiconductor (CMOS) sensor to capture 4K video and still photos of 4000 x 3000 pixels in size (DJI, n.d.a). The camera is permanently fixed onto a 3-axis gimbal and comes with a removable cover that is used when the system is stored (DJI, n.d.b). When recording video or taking still photos, the files can be saved in different formats depending on the needs of the operator. Photos taken with the system can be saved in either Joint Photographic Experts Group (JPEG) or Digital Negative (DNG) raw image form (DJI, n.d.b). A JPEG is a compressed image file that has discarded unnecessary data, while a DNG contains the unprocessed sensor data in a format that many programs can read (Domeij, 2010). When recording videos with the Mavic Pro, the operator can choose between the MP4 and MOV file container formats (DJI, n.d.b). There is not a significant difference between the two formats, since they use the same lossy compression methods and can easily be converted to the other file type depending on the program in which the operator wants to view the video (Joan, 2011). The data of the Mavic Pro is saved onto a micro Secure Digital (SD) card; the system comes with a 16 GB micro SD card but supports up to a 64 GB card as long as it has a Class 10 or UHS-1 rating (DJI, 2017). The data can be viewed or downloaded onto a computer by using a Universal Serial Bus (USB) cable connected to the Micro USB port (DJI, 2017).
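As a rough illustration of how card capacity constrains the data plan, the sketch below estimates how many minutes of video a card can hold at a given bitrate. The 60 Mbps figure for 4K recording is an assumption for illustration, not a value from the sources above; actual recording time depends on the format and bitrate the operator selects.

```python
# Back-of-the-envelope estimate of recording time per SD card.
# Assumes decimal units (1 GB = 1000 MB = 8000 megabits); the 60 Mbps
# 4K bitrate used in the example is an illustrative assumption.

def recording_minutes(card_gb, bitrate_mbps):
    """Estimate minutes of video a card can hold at a given bitrate."""
    card_megabits = card_gb * 1000 * 8      # GB -> megabits
    seconds = card_megabits / bitrate_mbps  # megabits / (megabits per second)
    return seconds / 60

print(round(recording_minutes(64, 60)))  # ~142 minutes on a 64 GB card
print(round(recording_minutes(16, 60)))  # ~36 minutes on the included card
```

This kind of estimate shows why the supported 64 GB card matters: at an assumed 4K bitrate, the included 16 GB card fills in roughly half an hour of recording.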

It is also possible for the operator to receive a live video feed as the Mavic Pro is flying. In order to do this, the operator needs to download the DJI GO 4 app onto a mobile device. This app allows the operator to control the Mavic Pro completely from the mobile device, but at the cost of range, since it uses Wi-Fi rather than radio frequencies to communicate with the aircraft (DJI, n.d.c). The Mavic controller uses OcuSync transmission technology to increase the range of the system up to a distance of 4.3 mi (7 km), while using only Wi-Fi gives the operator a controllable range of 80 m (DJI, n.d.c). In order for the video data to be transmitted from the Mavic Pro, it is compressed with MPEG-4 AVC/H.264, which can offer DVD-quality video to the operator at under 1 Mbps (DJI, n.d.b; Rouse, 2005). This compression method reduces the file to a smaller size so that it can be sent at lower bit rates than previous methods allowed, while still providing the operator with high-quality video (Robertson, 2007).

One alternative data treatment strategy that could be considered for the Mavic Pro is to utilize cloud technologies. Cloud services have expanded in recent years to include computing, networking, and storage (Hassan, 2016). If DJI created a cloud storage program to store video, still photos, and even flight data, the operator could conduct several flights without needing to go to a computer to download the data until desired. The company would have the option to make cloud storage a subscription service for any registered owner and could scale the rates by how much storage the operator would like to purchase. With the availability of stronger cellular and Wi-Fi networks, this option is becoming increasingly feasible. An additional benefit of this option is that the operator would not have to select a treatment in which data could be removed and permanently lost, such as truncation or other lossy data treatment options.


DJI. (2016, December 13). Flight Controller. DJI. Retrieved from

DJI. (2017, April). Mavic Pro User Manual V1.6. DJI. Retrieved from

DJI. (n.d.a). Mavic. DJI. Retrieved from

DJI. (n.d.b). Mavic Specs. DJI. Retrieved from

DJI. (n.d.c). Mavic FAQ. Retrieved from

Domeij, U. (2010, August 10). Re: What is the difference of a DNG file Vs a JPEG file? [Web log post]. Retrieved from

Hassan, Q. F. (2016). Innovative Research and Applications in Next-Generation High Performance Computing. United States of America: Information Science Reference. Retrieved from

Joan, B. (2011, July 11). Difference Between MOV and MP4. Retrieved from

Robertson, M. R. (2007, October 23). H.264 Versus MPEG-4 – Video Encoding Formats Compared. TubularInsights. Retrieved from

Rouse, M. (2005, September). H.264 (MPEG-4 AVC). TechTarget. Retrieved from

UAS Sensor Placement

The unmanned aviation domain is growing exponentially every day and becoming available for personal and professional use by the average person. However, not all unmanned aerial systems (UAS) are built alike. Besides differences in their physical appearances, sensor placement and integration can change how effective a UAS is for a particular task. This paper will discuss the selection of two different commercially available UAS designed to complete two different tasks, comparing the sensors onboard each system. The first system must be able to provide aerial photography and full-motion video below 400 feet above ground level (AGL), and the second system needs to perform as a racing drone with a first-person view (FPV).

DJI Phantom 4 Pro

The DJI Phantom 4 Pro is the most recent edition of the Phantom series and was launched in March of 2016 by the DJI company (Perlman, 2017). Among the newest changes of the Phantom 4 are that it is lighter, can stay in the air for up to 28 minutes, has updated obstacle avoidance, and has a new camera lens (Perlman, 2017). The onboard camera is a professional-quality 4K camera with a 20 mm lens and a 94˚ field of view, capable of capturing 4K video at 30 frames per second (fps) and high-definition 1080p video at 120 fps (DJI, n.d.a). The camera has an ISO range of 100-3,200 for video and 100-1,600 for still photos, which allows it to take ideal panoramic photos (DJI, n.d.b). The camera is installed underneath the airframe on a 3-axis gimbal to eliminate vibrations and in-flight movement in order to capture clear, smooth images (DJI, n.d.a). Between the two positioning systems and the vision system onboard, the Phantom 4 can easily be flown at altitude with comfort and ease. The Vision Positioning System (VPS) onboard the Phantom 4 includes a forward vision system and a downward vision system (DJI, n.d.a). The forward vision system acts as the primary sensor for the Phantom 4 to detect and avoid obstacles in different flight modes, while the downward-facing system uses dual cameras and ultrasonic sensors to determine position accurately when flying indoors or at low altitudes (DJI, n.d.a). On top of the vision system, the Phantom 4 is also equipped with both a Global Positioning System (GPS) and a Global Navigation Satellite System (GLONASS) receiver (DJI, n.d.a). These sensors allow the Phantom 4 to stay connected to satellites in order to maintain a precise, accurate position while in the air. The Phantom 4 also has several redundant systems, including two compass modules and dual Inertial Measurement Units (IMUs), so that the system can internally check and verify the information it is receiving (DJI, n.d.a). To give more variety to the types of tasks the Phantom 4 can complete, it offers three flight modes: Position mode, Atti mode, and the new Sport mode (DJI, n.d.a).

Aerodyne RC Nimbus 195

The Nimbus 195 is one of the newest FPV racing drones on the market, made by Aerodyne RC (Aerodyne RC, n.d.a). What makes this vehicle unique is that its frame is made from a carbon fiber material that allows it to be virtually indestructible and to handle operation in inclement weather (Aerodyne RC, n.d.a). The Nimbus is available for pre-order in both Ready-to-Fly (RTF) and Bind-and-Fly (BNF) kits, as well as just the frame (Liszewski, 2017). The drone included with the kits is fully assembled and equipped with the essentials needed for an effective racing drone. Components of this system include a Typhoon 4-in-1 electronic speed controller (ESC), an Omnibus F4 flight controller with onscreen display, a 5.8 GHz Tramp HV video transmitter, an SBUS radio control receiver, and a video transmitter (VTx) antenna (Aerodyne RC, n.d.b). The key visual sensor on this system is the adjustable FPV Foxeer Arrow V3 camera (Aerodyne RC, n.d.b). It is installed on the front of the system and is adjustable to 30, 45, and 55 degree angles in order to assist with situational awareness as the vehicle races through any course (Aerodyne RC, n.d.b). The other benefit of this camera is that it allows for the smallest delay possible, which matters because the operator needs to be able to respond quickly to obstacles on the course. Another reason this system is recommended for drone racing is that it is not equipped with unnecessary sensors such as GPS, but has a gyroscope, accelerometer, and an added barometer in order to maintain awareness of the vehicle's position (Aerodyne RC, n.d.a; InvenSense, 2013; Aerodyne RC, n.d.c).

Even though both of these systems are UAS, they demonstrate that, depending on how they are built, they can serve completely different purposes. The most significant difference between the two is the camera setup. The Nimbus is equipped with an FPV camera to provide the operator a first-person experience with the least amount of drag possible, while the Phantom 4's camera is installed on a gimbal to guarantee that the camera stays stable as it takes images. The breakdowns of both systems show that the design and placement of key components can completely change how a system functions and serves a specific mission.


Aerodyne RC. (n.d.a). FPV Racing Drone NIMBUS 195. Aerodyne RC. Retrieved from

Aerodyne RC. (n.d.b). Nimbus – BNF kit. Aerodyne RC. Retrieved from

Aerodyne RC. (n.d.c). Omnibus F4 V3 Flight Controller. Aerodyne RC. Retrieved from

DJI. (n.d.a). Phantom 4. DJI. Retrieved from

DJI. (n.d.b). Phantom 4 Specs. DJI. Retrieved from

InvenSense. (2013, August 19). MPU-6000/MPU-6050 Product Specification Revision 3.4. Retrieved from

Liszewski, A. (2017, January 11). This Unbreakable Racing Drone is Perfect for Terrible Pilots. GIZMODO. Retrieved from

Perlman, A. (2017, January 3). 10 Best RC Drones with a Camera. UAV Coach. Retrieved from

Unmanned Systems Maritime Search and Rescue

Unmanned maritime operations have a unique set of challenges that no other domain faces due to the effects of water on the vehicle and its sensors. This is further complicated by the role of time in determining when a mission that began as a search and rescue operation changes into a search and recovery mission. This paper will discuss the selection of an unmanned maritime system to support search and recovery operations. Other topics of focus will include a detailed description of onboard proprioceptive and exteroceptive sensors specifically designed for the maritime environment, a recommended modification to the system, how unmanned maritime systems can be used in conjunction with unmanned aerial systems (UAS) to enhance their effectiveness, advantages of unmanned maritime systems over manned systems, and whether there are sensor suites that are more effective on unmanned systems.

Selected Vehicle

One unmanned surface vessel that has made a substantial impact on maritime search and rescue operations is the Emergency Integrated Lifesaving Lanyard (EMILY). Originally developed in 2001, the EMILY is a remotely controlled robotic lifeguard that is four feet in length, weighs 25 pounds, and can travel up to 22 miles per hour (Duffie, 2016). The EMILY has provided lifeguard support in various places, such as Los Angeles, but most recently has been tasked with supporting the rescue of refugees as they attempt to flee Syria across the Mediterranean Sea (Silverman, 2017).

Onboard Sensors

There are two variants of the EMILY vessel for search and rescue operations: the Sonar EMILY with Payload Station and the Swift Water EMILY (Hydronalix, 2016). Both versions can be operated by satellite or remote control, commanding proprioceptive components such as the steering servo, the motor control module, and the magnetic power switch box (Hydronalix, 2016). Depending on the type of mission and the version of EMILY, the exteroceptive sensor options change. The Swift Water EMILY comes with an electro-optical/infrared camera that allows the vehicle to be used during daytime or nighttime rescue operations (Hydronalix, 2016). The Sonar EMILY with Payload Station comes with a Humminbird ION scan sonar, which provides both dual beam and single imaging sonar in order to deliver high-resolution imagery beneath the water's surface, even through varying water temperatures and other water column interferences (Hydronalix, n.d.). The sonar has a 400 meter range between the vehicle and the control station and offers three color palette options for the imagery so that it remains usable in overcast conditions or direct sunlight as it is sent to a smartphone or laptop (Hydronalix, n.d.).

Recommended Modifications

To make the system more all-inclusive for lifesaving operations, the addition of two-handed manipulators with haptic sensors is recommended in order to give the operator the capability to save a victim who is unconscious. The concept of haptic hand manipulators is not new; it comes from the OceanOne system, a humanoid robotic diver created by a team at Stanford University (Ackerman, 2016). This is an alternative to an approach in which remotely operated underwater vehicles attempt to lasso the victim with a rope in order to bring them ashore.

Cross Domain Operations

UAS can also be used alongside unmanned maritime systems to further enhance search and rescue operations. In situations where the waters are too dangerous for manned vehicles to approach, unmanned systems serve as an alternative solution. A UAV equipped with appropriate visual sensors, such as visual detection and ranging (ViDAR), electro-optical, or infrared, can first locate and track the position of a victim, then relay that information down to an unmanned surface vessel (USV) at sea level. A USV such as the EMILY can be deployed from the shore or a vessel once global positioning system (GPS) coordinates are received. The development of a cross-domain network is critical for not only the operators but also the systems to be able to communicate and work together toward a common goal. One project already attempting this is the ICARUS Unmanned Search and Rescue research project, whose primary goal is to develop unmanned systems across the aerial, ground, and maritime domains that can collaborate across a wireless network for search and rescue operations (ICARUS, n.d.). In July 2015, ICARUS demonstrated its UAS and USV communicating together in a simulated maritime crisis event, where the systems performed search and rescue tasks including area scanning and searching, victim detection and approach, raft deployment, and victim rescue (ICARUS, 2015).
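The UAV-to-USV handoff described above can be sketched as a simple coordinate message: the aerial vehicle packages a detection, and the surface vehicle extracts the GPS goal to steer toward. The message fields and function names are hypothetical illustrations, not an actual ICARUS or EMILY protocol.

```python
# Hypothetical sketch of a cross-domain handoff message. The JSON
# schema and function names are invented for illustration only.

import json

def build_detection_message(lat, lon, sensor, confidence):
    """Package a victim detection for transmission from the UAV to the USV."""
    return json.dumps({
        "type": "victim_detected",
        "lat": lat,
        "lon": lon,
        "sensor": sensor,          # e.g. "ViDAR", "EO", "IR"
        "confidence": confidence,  # detection confidence, 0.0 - 1.0
    })

def usv_goal_from_message(raw):
    """Extract the navigation goal the USV should steer toward."""
    msg = json.loads(raw)
    return (msg["lat"], msg["lon"])

raw = build_detection_message(36.85, -27.12, "ViDAR", 0.9)
print(usv_goal_from_message(raw))  # (36.85, -27.12)
```

A shared, self-describing message format like this is the kind of building block a cross-domain network needs so that vehicles from different domains can act on the same detection.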

Unmanned Versus Manned Systems

Unmanned systems do have a significant number of benefits compared to manned systems, but at a cost. Unmanned systems remove the operator from the event and protect them from being at risk in a dangerous situation. However, with regard to search and rescue, removing the operator from the rescue also reduces the amount of victim assessment that can be made on site, as well as initial on-site medical treatment such as applying pressure to a wound. A sensor suite is not necessarily more effective on an unmanned system; rather, the remote operator is forced to rely entirely on the sensors and tools instead of his or her own on-scene situational awareness of an event.


Ackerman, E. (2016, April 28). Stanford's Humanoid Diving Robot Takes on Undersea Archaeology and Coral Reefs. IEEE Spectrum. Retrieved from

Black Laser Learning Latest News. (2017, February 28). Self-Propelled Rescue Robot Incorporates Humminbird® Sonar to Find Drowning Victims. Black Laser Learning. Retrieved from

Duffie, W. (2016, May 5). From Whales to Silver Foxes to Refugees: EMILY Robot is A Lifesaver. Office of Naval Research. Retrieved from

Hydronalix. (2016). EMILY Parts Catalog. Retrieved from

Hydronalix. (n.d.). Sonar EMILY and Payload Station. Retrieved from

ICARUS. (n.d.). Project Objectives. Retrieved from

ICARUS. (2015, November 12). Successful Final Demonstrations in Alfeite, Portugal (Sea Scenario) and Marche-en-Famenne, Belgium (Land Scenario). Retrieved from

Silverman, L. (2017, March 22). Meet Emily, The Lifeguard Robot That's Saving Refugees Crossing The Mediterranean Sea. Kera News. Retrieved from

UNSY 605

Module 1.5: First Blog Entry

In an article by Patrick Miller (2017), the author discusses the prospect of the U.S. Coast Guard being equipped with small unmanned aerial systems (sUAS) in order to support maritime operations. In the article, the author interviews Ron Termain, a business executive for Coast Guard Affairs with 23 years of flight experience within the Coast Guard, as he recalls a mission in which he failed to spot two divers and flew past them (Miller, 2017). Fortunately, the story had a happy ending when the divers were rescued the next day, but the event could easily have had different results. Termain admits that if he had had access to technology such as the Insitu ScanEagle, which is equipped with a visual detection and ranging (ViDAR) sensor, he would have easily been able to identify the divers (Miller, 2017). According to the article, the ViDAR sensor is capable of identifying a person from a mile away and a freighter vessel from 30 miles away, making it an ideal candidate for not only search and rescue missions, but also drug interdiction operations (Miller, 2017). ViDAR is a type of exteroceptive visual sensor that can detect and analyze pixel anomalies within a video overlay and zoom in for visual inspection and confirmation (Miller, 2017). Compared to traditional electro-optical and infrared visual sensors, ViDAR is a wide-area sensor that enhances electro-optical imagery, bringing the capabilities of larger unmanned systems down to a size suitable for smaller aircraft. The ViDAR sensor is a significant example not only of new exteroceptive sensor technology but also of how more powerful equipment is being made smaller so that it can be carried onboard smaller airframes. Making sensors smaller allows the endurance of the airframe to be increased or additional hardware to be installed.
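The pixel-anomaly idea behind ViDAR can be illustrated with a toy example that flags grid cells whose brightness deviates sharply from the scene average, which is where a real system would then zoom in for visual confirmation. The grid, threshold, and function are invented for illustration and greatly simplify the actual sensor.

```python
# Toy sketch of wide-area anomaly detection: flag cells that stand out
# from the mean brightness of the frame. Values and threshold are
# invented for illustration; a real detector is far more sophisticated.

def find_anomalies(grid, threshold=40):
    """Return (row, col) cells whose brightness deviates from the mean."""
    flat = [v for row in grid for v in row]
    mean = sum(flat) / len(flat)
    return [(r, c)
            for r, row in enumerate(grid)
            for c, v in enumerate(row)
            if abs(v - mean) > threshold]

# Mostly uniform "ocean" with one bright outlier (e.g. a person in the water).
frame = [
    [20, 22, 21, 19],
    [21, 20, 200, 22],   # bright outlier at row 1, col 2
    [19, 21, 20, 20],
]
print(find_anomalies(frame))  # [(1, 2)]
```

Even this crude version shows why the approach suits the open ocean: against a nearly uniform background, a small object produces a large statistical deviation.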


Miller, P. (2017, February 16). Insitu Hopes to Give U.S. Coast Guard Cutters the UAS Advantage. UAS Magazine. Retrieved from