Keywords
- 360° Panorama
- NeRF
- Rescue Robotics
- Small UAVs
- Visual Monocular SLAM
Problem
- How to effectively use aerial robots to support rescue forces?
- How to achieve good flight characteristics and long flight times?
- How to enable simple and intuitive control?
- How to efficiently record image data of the environment?
- How to generate flight and image data for rescue forces?
Implementation:
The flying robot was designed in Autodesk Fusion 360. To achieve high stability combined with low weight, the frame was milled from carbon fiber. Mounts, such as those for the GPS module and the 360° camera, were 3D printed. A special feature is that the flying robot is not visible in the panoramic view of the 360° camera. The flight controller was set up using ArduPilot, and communication with the robot is handled via MAVLink over UDP. To support different platforms, the software was realized as a web application. The front end was created using HTML, CSS, and JavaScript.
The back end is based on Flask-SocketIO (Python); a minimal sketch of this telemetry relay is given below. For the intelligent recognition of motor vehicles, a microcontroller with an integrated camera is used. For the post-processing of flight and video data, an automated pipeline was implemented.
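The following is a minimal sketch, not the authors' code, of how such a back end could relay ArduPilot telemetry received via MAVLink over UDP to the web front end through Flask-SocketIO. Port numbers, the "telemetry" event name, and the choice of message type are assumptions for illustration.

```python
# Sketch: Flask-SocketIO back end that listens to MAVLink telemetry over UDP
# and forwards position updates to connected web clients.
from flask import Flask
from flask_socketio import SocketIO
from pymavlink import mavutil

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

# ArduPilot commonly streams MAVLink to UDP port 14550 (assumed here).
mav = mavutil.mavlink_connection("udp:0.0.0.0:14550")

def relay_telemetry():
    """Read GLOBAL_POSITION_INT messages and push them to the front end."""
    mav.wait_heartbeat()
    while True:
        msg = mav.recv_match(type="GLOBAL_POSITION_INT", blocking=True, timeout=1)
        if msg is None:
            continue
        socketio.emit("telemetry", {
            "lat": msg.lat / 1e7,            # degrees
            "lon": msg.lon / 1e7,            # degrees
            "alt": msg.relative_alt / 1000.0  # metres above home position
        })

if __name__ == "__main__":
    socketio.start_background_task(relay_telemetry)
    socketio.run(app, host="0.0.0.0", port=5000)
```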
The video shows a very high-resolution 3D point cloud of the outdoor area of the German Rescue Robotics Center. For the recording, a 25-second point-of-interest (POI) flight was performed with a DJI Mavic 3. From the 4K video footage captured during this flight, 77 frames were extracted and localized within 4 minutes using COLMAP, then processed using Neural Radiance Fields (NeRF). The nerfacto model of Nerfstudio was trained on an Nvidia RTX 4090 for 8 minutes. In summary, a high-quality 3D model is available to task forces after about 13 minutes. The computation is performed locally on site by the RobLW of the DRZ. The video shows a free camera path rendered at 60 fps (Full HD). A rough sketch of this processing pipeline is given below.
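As an illustration only, the described steps (frame extraction and COLMAP registration, nerfacto training, camera-path rendering) could be automated roughly as follows using the Nerfstudio CLI. File names are placeholders, and the exact flags depend on the installed Nerfstudio version; the camera path JSON is typically exported from the Nerfstudio viewer.

```python
# Sketch of an automated NeRF post-processing pipeline built on the
# Nerfstudio command-line tools (ns-process-data, ns-train, ns-render).
import subprocess
from pathlib import Path

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Extract frames from the POI video and estimate camera poses with COLMAP.
run(["ns-process-data", "video",
     "--data", "poi_flight.mp4",
     "--output-dir", "processed/"])

# 2. Train the nerfacto model on the registered frames
#    (about 8 minutes on an RTX 4090 in the reported setup).
run(["ns-train", "nerfacto", "--data", "processed/"])

# 3. Render a free camera path from the most recent training run to a video.
config = sorted(Path("outputs").rglob("config.yml"))[-1]
run(["ns-render", "camera-path",
     "--load-config", str(config),
     "--camera-path-filename", "camera_path.json",
     "--output-path", "renders/flyover.mp4"])
```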
In the realm of digital situational awareness during disaster situations, accurate digital representations, such as 3D models, play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller than 30 cm, equipped with 360° cameras with the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of a scene from 2D images and then synthesize novel views from arbitrary angles upon request. This method is especially tailored for urban environments that have experienced significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as is commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
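For reference, the standard NeRF formulation (Mildenhall et al., 2020), which underlies this view synthesis, renders the colour of a pixel by integrating the emitted radiance along its camera ray, weighted by density and accumulated transmittance. This is the generic formulation, not a detail specific to this paper.

```latex
% Colour of ray r(t) = o + t d, with density sigma and view-dependent colour c:
\[
  C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma\bigl(\mathbf{r}(t)\bigr)\,
                  \mathbf{c}\bigl(\mathbf{r}(t), \mathbf{d}\bigr)\,dt,
  \qquad
  T(t) = \exp\!\Bigl(-\int_{t_n}^{t} \sigma\bigl(\mathbf{r}(s)\bigr)\,ds\Bigr)
\]
```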