Problem
- How to effectively use aerial robots to support rescue forces?
- How to achieve good flight characteristics and long flight times?
- How to enable simple and intuitive control?
- How to efficiently record image data of the environment?
- How to generate flight and image data for rescue forces?
Implementation
The flying robot was designed in Autodesk Fusion 360. To achieve high stability at low weight, the frame was milled from carbon fiber; mounts, such as those for the GPS module and the 360° camera, were 3D printed. A special feature is that the flying robot is not visible in the panoramic view of the 360° camera. The flight controller was set up with ArduPilot, and communication with the robot is done via MAVLink (UDP). To support different platforms, the software was realized as a web application: the front end was created with HTML, CSS, and JavaScript, while the back end is based on Flask-SocketIO (Python). For the intelligent recognition of motor vehicles, a microcontroller with an integrated camera is used. A pipeline was implemented to automate the post-processing of flight and video data.
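As a rough illustration of this architecture, the following is a minimal sketch that forwards MAVLink telemetry arriving over UDP to connected browsers via Flask-SocketIO. The UDP port, event name, and message type are assumptions for illustration, not the project's actual configuration:

# Minimal sketch: forward MAVLink telemetry (UDP) to browsers via Flask-SocketIO.
# Assumptions: the vehicle streams MAVLink to UDP port 14550; the 'telemetry'
# event name and ports are illustrative, not the project's actual configuration.
from flask import Flask
from flask_socketio import SocketIO
from pymavlink import mavutil

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

def telemetry_loop():
    # Listen for MAVLink packets arriving over UDP.
    conn = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
    conn.wait_heartbeat()  # block until the flight controller is seen
    while True:
        msg = conn.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
        # Push position data to all connected web clients.
        socketio.emit("telemetry", {
            "lat": msg.lat / 1e7,              # degrees
            "lon": msg.lon / 1e7,              # degrees
            "alt": msg.relative_alt / 1000.0,  # meters above takeoff
        })

if __name__ == "__main__":
    socketio.start_background_task(telemetry_loop)
    socketio.run(app, host="0.0.0.0", port=5000)

On the front end, a JavaScript Socket.IO client would subscribe to the 'telemetry' event and update the map or instrument display.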
The video shows a very high-resolution 3D point cloud of the outdoor area of the German Rescue Robotics Center. For the recording, a 25-second POI flight was performed with a DJI Mavic 3. From the 4K video footage captured during this flight, 77 images were extracted and localized within 4 minutes using COLMAP, then processed using Neural Radiance Fields (NeRF). The nerfacto model of Nerfstudio was trained on an NVIDIA RTX 4090 for 8 minutes. In summary, a high-quality 3D model is available to task forces after about 13 minutes. The computation is performed locally on site by the RobLW of the DRZ. The video shows a free camera path rendered at 60 Hz (Full HD).
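For readers who want to reproduce a comparable workflow, this is a minimal sketch of such a video-to-NeRF pipeline using Nerfstudio's command-line tools (ns-process-data runs COLMAP under the hood); the paths are placeholders, and the team's exact flags are not documented in the source:

# Minimal sketch of a video -> NeRF pipeline with Nerfstudio's CLI tools
# (ns-process-data runs COLMAP for camera localization; ns-train fits nerfacto).
# All paths are placeholders; the team's exact flags are not documented here.
import subprocess

VIDEO = "flight.mp4"          # 4K footage from the UAV
DATA_DIR = "data/flight"      # extracted frames + COLMAP poses land here

# 1. Extract frames and estimate camera poses with COLMAP.
subprocess.run(["ns-process-data", "video",
                "--data", VIDEO,
                "--output-dir", DATA_DIR], check=True)

# 2. Train the nerfacto model on the localized images.
subprocess.run(["ns-train", "nerfacto",
                "--data", DATA_DIR], check=True)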
NeRF(acto) for the 3D modeling of the Computer Science building of Westfälische Hochschule GE
(2023)
The video shows a very high-resolution 3D point cloud of the computer science building of the Westfälische Hochschule (University of Applied Sciences) Gelsenkirchen. For the recording, a 3-minute flight with a DJI M30T was performed. The 105 images taken by the wide-angle camera during this flight were localized within 3 minutes using COLMAP and processed using Neural Radiance Fields (NeRF). The nerfacto model of Nerfstudio was trained on an NVIDIA RTX 4090 for 8 minutes. Thus, a high-quality 3D model is available after about 15 minutes.
The video shows a free camera path rendered at 60 Hz (Full HD).
From the 360° images of the previous video ("German Rescue Robotics Center captured...") we now generate the 3D point cloud. The UAV needs 3 minutes to capture the outdoor scenario and the hall from inside and outside. The 3D point cloud generation takes about five times the duration of the video. It uses a VSLAM algorithm to localize the keyframes (green), and from sets of three keyframes it uses a 360° PatchMatch algorithm implemented on an NVIDIA graphics card (CUDA) to calculate the dense point clouds. The hall is about 70 x 20 meters.
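As background on the 360° processing, here is a minimal sketch of the equirectangular pixel-to-ray mapping that both 360° VSLAM and 360° PatchMatch build on; it is illustrative and not the project's actual implementation:

# Minimal sketch: map equirectangular pixel coordinates to unit ray directions,
# the geometric core that 360° VSLAM and 360° PatchMatch build on.
# Illustrative only; not the project's actual implementation.
import numpy as np

def pixel_to_ray(u, v, width, height):
    """Convert an equirectangular pixel (u, v) to a unit direction vector."""
    lon = (u / width) * 2.0 * np.pi - np.pi    # longitude in [-pi, pi]
    lat = np.pi / 2.0 - (v / height) * np.pi   # latitude in [-pi/2, pi/2]
    return np.array([
        np.cos(lat) * np.sin(lon),   # x: right
        np.sin(lat),                 # y: up
        np.cos(lat) * np.cos(lon),   # z: forward
    ])

# Example: the image center of a 5.7K frame looks straight ahead along +z.
print(pixel_to_ray(1920, 960, 3840, 1920))  # ~ [0, 0, 1]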
The video shows the first test of a small spherical UAV (35 cm) with 4 rotors for missions in complex environments such as buildings, caves, or tunnels. The spherical design protects the vehicle's internal components and allows the UAV to roll over the ground when the environment allows. The drone can land and take off in any position, can come into contact with objects without endangering the propellers, and can restart even after crashes.
Spherical UAV: Crash Test with 1/2 liter bottle from 2 meters
Gaussian Splatting: 3D Reconstruction of a Chemical Company After a Tank Explosion in Kempen 8/2023
(2023)
The video showcases a 3D model of a chemical company following a tank explosion that occurred on August 17, 2023, in Kempen, computed with the Gaussian Splatting algorithm. Captured by a compact mini drone measuring 18 cm x 18 cm and equipped with a 360° camera, these images offer an intricate perspective of the aftermath. The computation took 29 minutes and used 2,770 images (~350 equirectangular images). After a comprehensive aerial survey and inspection of the 360° images taken within the facility, authorities confirmed that it was safe for the evacuated residents to return to their homes. See also:
https://www1.wdr.de/fernsehen/aktuelle-stunde/alle-videos/video-grosser-chemieunfall-in-kempen-100.html
The video showcases a 3D model of a chemical company following a tank explosion that occurred on August 17, 2023, in Kempen, computed with the AI algorithm Neural Radiance Fields (NeRF). Captured by a compact mini drone measuring 18 cm x 18 cm and equipped with a 360° camera, these images offer an intricate perspective of the aftermath. After a comprehensive aerial survey and inspection of the 360° images taken within the facility, authorities confirmed that it was safe for the evacuated residents to return to their homes. See also:
https://www1.wdr.de/fernsehen/aktuelle-stunde/alle-videos/video-grosser-chemieunfall-in-kempen-100.html
ARGUS is a tool for the systematic acquisition, documentation, and evaluation of drone flights in rescue operations. In addition to the very fast generation of RGB and IR orthophotos, a trained AI model can automatically detect fire, people, and cars in the images captured by the drones. The video gives a short introduction to ARGUS, the Aerial Rescue and Geospatial Utility System.
Check out our GitHub repository at
https://github.com/RoblabWh/argus/
You can find the dataset on Kaggle at
https://www.kaggle.com/datasets/julienmeine/rescue-object-detection
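As an illustration of this kind of detection step (the source does not state which detector ARGUS uses internally), here is a minimal sketch with the Ultralytics YOLO API; the weights file and image name are placeholders:

# Minimal sketch: run an off-the-shelf object detector on a drone image.
# The source does not state which detector ARGUS uses; YOLO here is
# illustrative, and 'drone_frame.jpg' / the weights file are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # pretrained COCO weights as a stand-in
results = model("drone_frame.jpg")  # run inference on one aerial frame

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]   # e.g. 'person', 'car'
    conf = float(box.conf)
    print(f"{cls_name}: {conf:.2f} at {box.xyxy.tolist()}")

Note that fire is not a class in the pretrained COCO weights, so a model would have to be fine-tuned, e.g. on the rescue dataset linked above, to detect it.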
This video features a flight test conducted in our robotics lab, showcasing a custom-built thermal camera drone. We've enhanced a DJI Avata with a specialized thermal camera system. With its compact dimensions measuring 18 x 18 x 17 cm, this drone is designed to navigate and provide critical thermal information within post-fire or post-explosion environments. For more insights, be sure to check out our previous videos on this channel.
In the realm of digital situational awareness during disaster situations, accurate digital representations, such as 3D models, play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller than 30 cm, equipped with 360° cameras, with the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of any scene from 2D images and then synthesize it from arbitrary viewpoints upon request. This method is especially tailored to urban environments that have experienced significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
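To make the NeRF idea concrete, here is a minimal sketch of the volume-rendering step at the heart of every NeRF: it composites the densities and colors the network predicts at samples along a camera ray into one pixel color. The network itself is omitted, and all values are illustrative:

# Minimal sketch of NeRF volume rendering: composite per-sample densities
# (sigma) and colors along a ray into one pixel color. The neural network
# that predicts sigma and color is omitted; values here are illustrative.
import numpy as np

def render_ray(sigmas, colors, deltas):
    """sigmas: (N,) densities; colors: (N, 3) RGB; deltas: (N,) segment lengths."""
    alphas = 1.0 - np.exp(-sigmas * deltas)        # opacity of each segment
    trans = np.cumprod(1.0 - alphas)               # transmittance after each segment
    trans = np.concatenate([[1.0], trans[:-1]])    # shift: light reaching each segment
    weights = trans * alphas                       # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0) # expected color along the ray

# Example: a dense red sample close to the camera dominates the pixel.
sigmas = np.array([5.0, 0.1])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
deltas = np.array([0.5, 0.5])
print(render_ray(sigmas, colors, deltas))  # ~ [0.92, 0.00, 0.00]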
360° and IR Camera Drone Flight Test: Superimposition of Two Data Sources for Post-Fire Inspection
(2023)
This video highlights a recent flight test carried out in our cutting-edge robotics lab, unveiling the capabilities of our meticulously crafted thermal and 360° camera drone! We've ingeniously upgraded a DJI Avata with a bespoke thermal and 360° camera system. Compact yet powerful, measuring just 18 x 18 x 17 cm, this drone is strategically engineered to effortlessly navigate and deliver crucial thermal and 360° insights concurrently in post-fire or post-explosion environments.
The integration of a specialized thermal and 360° camera system enables the simultaneous capture of both data sources during a single flight. This groundbreaking approach not only reduces inspection time by half but also facilitates the seamless superimposition of thermal and 360° videos for comprehensive analysis and interpretation.
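The superimposition described above can be sketched as follows, assuming the thermal and 360° frames have already been brought into a common view; registration, which is the hard part, is stubbed out here with a placeholder homography, and the file names are illustrative:

# Minimal sketch: superimpose a registered thermal frame on a 360° RGB frame
# with OpenCV. Registration is the hard part and is only stubbed out here
# via a placeholder homography H; file names are illustrative.
import cv2
import numpy as np

rgb = cv2.imread("frame_360.jpg")            # view from the 360° camera
thermal = cv2.imread("frame_thermal.jpg", cv2.IMREAD_GRAYSCALE)

H = np.eye(3)  # placeholder: real alignment comes from calibration/feature matching
h, w = rgb.shape[:2]
thermal_warped = cv2.warpPerspective(thermal, H, (w, h))

# False-color the thermal data and alpha-blend it over the RGB frame.
thermal_color = cv2.applyColorMap(thermal_warped, cv2.COLORMAP_JET)
overlay = cv2.addWeighted(rgb, 0.6, thermal_color, 0.4, 0.0)
cv2.imwrite("overlay.jpg", overlay)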
At the integration sprint of the E-DRZ consortium in March 2023, we improved the information captured by the human spotter of the fire brigade by extending it with a 360° drone, i.e., a DJI Avata with an Insta360 mounted on top. The UAV needs 3 minutes to capture the outdoor scenario and the hall from inside and outside. The hall is about 70 x 20 meters. Once the drone has landed, all information is available in 360° at 5.7K, as you can see in the video. Furthermore, it is a perfect documentation of the deployment scenario. In the next video we will show how to spatially localize the 360° video and how to generate a 3D point cloud from it.
The dataset is used for 3D environment modeling, i.e., for the generation of dense 3D point clouds and 3D models with the PatchMatch algorithm and neural networks. Reflections from rain, water, and snow, as well as windows and vehicle surfaces, are difficult for the modeling algorithms. In addition, the lighting conditions change constantly.