How can 3D models be generated from aerial images?
- Planning circular and grid-shaped flight trajectories.
- Flying the trajectories autonomously and capturing images.
- Localizing the images using GPS and Structure-from-Motion algorithms.
- Generating 3D models using Multi-View Stereo algorithms.
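The trajectory-planning step can be sketched as follows; the function and parameter names are illustrative and not taken from any specific mission-planning software:

```python
import math

def grid_waypoints(width, height, spacing, altitude):
    """Lawnmower (single-grid) flight path over a width x height
    survey area, one pass every `spacing` meters, all in meters."""
    waypoints = []
    n_lanes = int(width // spacing) + 1
    for lane in range(n_lanes):
        x = lane * spacing
        # Alternate lane direction so consecutive lanes connect.
        ys = [0.0, height] if lane % 2 == 0 else [height, 0.0]
        for y in ys:
            waypoints.append((x, y, altitude))
    return waypoints

def circle_waypoints(radius, altitude, n=12):
    """Circular (point-of-interest orbit) path around the origin."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n),
             altitude) for k in range(n)]

path = grid_waypoints(width=100, height=60, spacing=20, altitude=50)
print(len(path))  # 12 waypoints: 6 lanes with 2 endpoints each
```

The lawnmower pattern covers a rectangular survey area lane by lane; the circular pattern orbits a point of interest, which is useful for oblique views of a single structure.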
Venice 2018: Tradr Review
(2018)
The video shows an orthophoto and a textured 3D model of the location. 300 images were recorded in two short flights with a Mavic Pro at 50 meters altitude: the first was a single grid with the camera facing straight down, the second a double grid with the camera tilted at a 60-degree angle. The 3D model was computed with OpenDroneMap.
Spherical UAV: Crash Test with 1/2 liter bottle from 2 meters
The video shows the first test of a small spherical UAV (35 cm) with 4 rotors for missions in complex environments such as buildings, caves or tunnels. The spherical design protects the vehicle's internal components and allows the UAV to roll over the ground when the environment allows. The drone can land and take off in any position, can come into contact with objects without endangering the propellers, and can restart even after a crash.
Nine panoramas; the first is taken from a greater altitude and contains, in the sky, a map with the positions of the recorded points (yellow). The current image is marked by a crosshair (red), along with a few details about the current point. Each panorama is shown for 10 seconds.
For viewing, select the highest resolution level and use the pause button. Hold down the left mouse button to look around within the image.
The video showcases a 3D model, computed with the AI method Neural Radiance Fields (NeRF), of a chemical plant following a tank explosion that occurred on August 17, 2023, in Kempen. Captured by a compact mini drone measuring 18 cm x 18 cm and equipped with a 360° camera, the images offer a detailed view of the aftermath. After a comprehensive aerial survey and inspection of the 360° images taken within the facility, the authorities confirmed that it was safe for the evacuated residents to return to their homes. See also:
https://www1.wdr.de/fernsehen/aktuelle-stunde/alle-videos/video-grosser-chemieunfall-in-kempen-100.html
Nerf(acto) for the 3D modeling of the Computer Science building of Westfälische Hochschule GE
(2023)
The video shows a very high resolution 3D point cloud of the computer science building of the University of Applied Sciences Gelsenkirchen. For the recording, a 3-minute flight with a M30T was performed. The 105 images taken by the wide-angle camera during this flight were localized within 3 minutes using COLMAP and processed using Neural Radiance Fields (NeRF). The nerfacto model of Nerfstudio was trained on an Nvidia RTX 4090 for 8 minutes. Thus, a high-quality 3D model is available after about 15 minutes.
The video shown here follows a free camera path rendered at 60 fps (Full HD).
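For context, nerfacto, like the original NeRF, composites each pixel by integrating density and color samples along a camera ray. A minimal numpy sketch of that quadrature (illustrative only, not Nerfstudio's actual implementation):

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Numerical quadrature of the NeRF volume rendering integral
    along one ray: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i = exp(-sum_{j<i} sigma_j * delta_j) is the transmittance."""
    alpha = 1.0 - np.exp(-densities * deltas)                      # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # T_i
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0)                 # composited RGB

# A fully opaque first sample occludes everything behind it:
sigma = np.array([1e9, 0.0])                       # density per sample
cols = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # red, then green
deltas = np.array([0.1, 0.1])                      # sample spacing
print(render_ray(sigma, cols, deltas))  # ~ [1. 0. 0.]
```

Training the network then amounts to regressing the densities and colors so that rays rendered this way match the COLMAP-registered input images.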
The video shows a very high resolution 3D point cloud of the outdoor area of the German Rescue Robotics Center. For the recording, a 25-second POI flight was performed with a Mavic 3. From the 4K video footage captured during this flight, 77 images were cropped and localized within 4 minutes using COLMAP and processed using Neural Radiance Fields (NeRF). The nerfacto model of Nerfstudio was trained on an Nvidia RTX 4090 for 8 minutes. In summary, a high-quality 3D model is available to task forces after about 13 minutes. The calculation is performed locally on site by the RobLW of the DRZ. The video shown here follows a free camera path rendered at 60 fps (Full HD).
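Extracting the 77 stills evenly over the clip can be sketched as below; the 30 fps frame rate is an assumption for illustration, not stated in the text:

```python
def frame_indices(total_frames, n_samples):
    """Return at most n_samples evenly spaced frame indices,
    e.g. for extracting stills from a video for reconstruction."""
    if n_samples <= 0 or total_frames <= 0:
        return []
    step = max(total_frames // n_samples, 1)
    return list(range(0, total_frames, step))[:n_samples]

# Assumption: 25 s of 30 fps video -> 750 frames; pick 77 stills.
idx = frame_indices(750, 77)
print(len(idx), idx[0], idx[-1])  # 77 0 684
```

The selected frames would then be fed to COLMAP for camera registration, exactly as with individually captured photos.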
Panoramas combined with ORB-SLAM enable fast tracking, but yield only sparse map data. By combining the SLAM algorithm with a neural network, it is to be extended to an RGB-D SLAM in order to achieve better tracking and a denser point cloud.
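Densifying a sparse SLAM map with network-predicted depth boils down to back-projecting each keyframe's depth image through the pinhole camera model. A minimal numpy sketch (illustrative, not the project's actual code):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) through the pinhole model
    into a dense 3D point cloud in the camera frame:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a 4x4 depth image of a flat wall 2 m in front of the camera
depth = np.full((4, 4), 2.0)
pts = backproject(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3) -- one 3D point per pixel
```

Transforming these per-keyframe clouds with the poses estimated by the SLAM tracking then merges them into one dense map.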