This technical report describes the architecture and integration of commercial UAVs in Search and Rescue missions. The framework consists of heterogeneous UAVs, a UAV task planner, a bridge to the UAVs, an intelligent image hub, and a 3D point cloud generator. A first version of the framework was developed and tested in several training missions within the EU project TRADR.
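A minimal sketch of how the components named above (task planner, UAV bridge, image hub, point cloud generator) could be wired together; all class and method names are illustrative assumptions, not the report's actual interfaces.

# Sketch only: component names follow the abstract, the API is assumed.
from dataclasses import dataclass, field


@dataclass
class ImageHub:
    """Collects geotagged images streamed in from the UAVs."""
    images: list = field(default_factory=list)

    def add(self, image_id: str, gps_fix: tuple) -> None:
        self.images.append((image_id, gps_fix))


class PointCloudGenerator:
    """Placeholder for 3D point cloud generation from the collected images."""
    def build(self, hub: ImageHub) -> str:
        return f"point cloud built from {len(hub.images)} images"


@dataclass
class UAVBridge:
    """Forwards tasks from the planner to the heterogeneous UAVs."""
    uav_ids: list

    def dispatch(self, uav_id: str, task: str) -> None:
        print(f"sending task '{task}' to {uav_id}")


class TaskPlanner:
    """Assigns mapping tasks to the available UAVs via the bridge."""
    def __init__(self, bridge: UAVBridge):
        self.bridge = bridge

    def plan(self, tasks: list) -> None:
        for uav_id, task in zip(self.bridge.uav_ids, tasks):
            self.bridge.dispatch(uav_id, task)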
How can 3D models be generated from aerial imagery?
- Planning of circular and grid-shaped flight trajectories (see the waypoint sketch after this list).
- Autonomous flight along the trajectories and capture of the images.
- Georeferencing of the images using GPS and structure-from-motion algorithms.
- Generation of 3D models using multi-view stereo algorithms.
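The first step above amounts to generating waypoint lists for the two survey patterns. The following sketch shows one way to compute such waypoints; parameter names, units, and the local (x, y, altitude) coordinate convention are assumptions, not taken from the report.

# Sketch of the two survey patterns: a circular orbit around a point of
# interest and a lawnmower-style grid. All parameters are illustrative.
import math


def circular_waypoints(center_x, center_y, radius, altitude, n_points=24):
    """Waypoints on a circle around (center_x, center_y) at a fixed altitude."""
    return [
        (center_x + radius * math.cos(2 * math.pi * i / n_points),
         center_y + radius * math.sin(2 * math.pi * i / n_points),
         altitude)
        for i in range(n_points)
    ]


def grid_waypoints(width, height, spacing, altitude):
    """Lawnmower grid covering a width x height area with the given line spacing."""
    waypoints = []
    y = 0.0
    row = 0
    while y <= height:
        xs = [0.0, width] if row % 2 == 0 else [width, 0.0]  # alternate sweep direction
        for x in xs:
            waypoints.append((x, y, altitude))
        y += spacing
        row += 1
    return waypoints


if __name__ == "__main__":
    print(circular_waypoints(0.0, 0.0, radius=30.0, altitude=25.0)[:3])
    print(grid_waypoints(width=100.0, height=60.0, spacing=15.0, altitude=25.0)[:4])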
This technical report describes the architecture and integration of very small commercial UAVs (< 40 cm diagonal) in indoor Search and Rescue missions. One UAV is manually controlled by a single human operator and delivers live video streams and image series for later 3D scene modelling and inspection. To assist the operator, who has to observe the environment and navigate through it at the same time, we use multiple deep neural networks that provide guided autonomy, automatic object detection and classification, and local 3D scene modelling. Our methods help to reduce the cognitive load of the operator. We describe a framework for the quick integration of new Deep Learning methods, enabling rapid evaluation in real scenarios, including the interaction of methods.
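As an illustration of the kind of assistance described above, the sketch below runs a pretrained object detector on individual video frames so that detections could be overlaid for the operator. It uses torchvision's Faster R-CNN (assuming torchvision >= 0.13) purely as an example; the report's actual networks, score threshold, and frame source are not specified here and are assumptions.

# Sketch: flag objects in one RGB video frame with a pretrained detector.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()


def detect(frame, score_threshold=0.6):
    """Return (label_id, score, box) triples for one RGB frame (H x W x 3, uint8)."""
    with torch.no_grad():
        output = model([to_tensor(frame)])[0]
    return [
        (int(label), float(score), box.tolist())
        for label, score, box in zip(output["labels"], output["scores"], output["boxes"])
        if score >= score_threshold
    ]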
Panoramas combined with ORB-SLAM enable fast tracking, but yield only sparse data. By combining it with a neural network, the SLAM algorithm is to be extended to an RGB-D SLAM in order to achieve better tracking and a denser point cloud.
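One possible way to obtain the dense depth channel for such an RGB-D extension is a monocular depth estimation network. The sketch below uses the publicly available MiDaS model via torch.hub only as an example; the text does not specify which network is used, and MiDaS returns relative inverse depth, so a scale calibration would be required before the output could be fed to an RGB-D SLAM system. The SLAM hand-off itself is only indicated.

# Sketch: predict a dense (relative) depth map per RGB frame with MiDaS.
import numpy as np
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform


def estimate_depth(rgb_frame: np.ndarray) -> np.ndarray:
    """Predict a dense relative depth map for an RGB frame (H x W x 3, uint8)."""
    batch = transform(rgb_frame)
    with torch.no_grad():
        prediction = midas(batch)
        prediction = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=rgb_frame.shape[:2],
            mode="bicubic",
            align_corners=False,
        ).squeeze()
    return prediction.cpu().numpy()

# rgbd_frame = (rgb_frame, estimate_depth(rgb_frame))  # would be passed to an RGB-D SLAM front end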
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), typically during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset through an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to augment the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
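As a small illustration of a label-preserving augmentation step like those mentioned above, the sketch below flips an image horizontally and mirrors its bounding boxes accordingly; the (x_min, y_min, x_max, y_max) box format is an assumption, not the paper's actual pipeline.

# Sketch: horizontal flip augmentation that keeps bounding boxes consistent.
import numpy as np


def hflip_with_boxes(image: np.ndarray, boxes: list):
    """Flip an H x W x C image horizontally and mirror its bounding boxes."""
    flipped = image[:, ::-1].copy()
    width = image.shape[1]
    flipped_boxes = [
        (width - x_max, y_min, width - x_min, y_max)
        for (x_min, y_min, x_max, y_max) in boxes
    ]
    return flipped, flipped_boxes


if __name__ == "__main__":
    img = np.zeros((480, 640, 3), dtype=np.uint8)
    boxes = [(100, 50, 200, 150)]
    aug_img, aug_boxes = hflip_with_boxes(img, boxes)
    print(aug_boxes)  # [(440, 50, 540, 150)]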