Informatik und Kommunikation
The dataset is used for 3D environment modeling, i.e. for the generation of dense 3D point clouds and 3D models with the PatchMatch algorithm and neural networks. Reflections from rain, water, and snow, as well as windows and vehicle surfaces, are difficult for the modeling algorithms. In addition, the lighting conditions are constantly changing.
At the beginning of the pandemic in February 2020 I had a little time and wanted to do something new, i.e. bring my 3D printer, AI, and computer science together somehow. The result is a printed portrait with a lot of computer science behind it. Using style transfer, I transferred the etching style of a Goethe portrait to a young girl I call Carolin. By means of image processing I turned it into a black-and-white picture. Then, treating the picture as an instance of the traveling salesman problem, each black point is interpreted as a city and the whole picture is drawn with a single line. Since this line is very long, it is optimized and shortened by a so-called simulated annealing algorithm. The result is printed in 5 layers on a 3D printer.
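The tour-shortening step can be sketched as follows. This is a minimal, self-contained illustration of simulated annealing with 2-opt moves, not the implementation actually used for the portrait; the function names and the cooling schedule are my own choices.

```python
import math
import random

def tour_length(points, tour):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def simulated_annealing_tsp(points, iterations=20000, t_start=1.0, t_end=1e-3, seed=0):
    """Shorten a tour through all points using 2-opt moves and simulated annealing."""
    rng = random.Random(seed)
    n = len(points)
    tour = list(range(n))
    best = tour[:]
    best_len = cur_len = tour_length(points, tour)
    for k in range(iterations):
        # exponential cooling from t_start down to t_end
        t = t_start * (t_end / t_start) ** (k / iterations)
        i, j = sorted(rng.sample(range(n), 2))
        # 2-opt move: reverse the segment between positions i and j
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(points, cand)
        # always accept improvements; accept worse tours with Boltzmann probability
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
    return best, best_len
```

For a real portrait the "cities" would be the black pixels of the image, typically many thousands, so a good initial tour (e.g. nearest neighbour) and an incremental length update would be needed for speed.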
ARGUS is a tool for the systematic acquisition, documentation, and evaluation of drone flights in rescue operations. In addition to the very fast generation of RGB and IR orthophotos, a trained AI can automatically detect fire, people, and cars in the images captured by the drones. The video gives a short introduction to the Aerial Rescue and Geospatial Utility System -- ARGUS.
Check out our GitHub repository at
https://github.com/RoblabWh/argus/
You can find the dataset on Kaggle at
https://www.kaggle.com/datasets/julienmeine/rescue-object-detection
The two churches San Francesco and Sant'Agostino in Amatrice, Italy, were hit by an earthquake on August 24, 2016. Both churches are in a state of partial collapse and in need of shoring to prevent potential further destruction and to preserve the national heritage. The video shows the mission on September 1, 2016 in 10-second clips.
The TRADR project was asked by the Italian fire brigade Vigili del Fuoco to provide 3D textured models of the two churches.
The team entered San Francesco with two UGVs (ground robots) and one UAV (drone, flown by Prof. Surmann), teleoperating them entirely out of line of sight and partially in collaboration. We entered Sant'Agostino with one UAV (also flown by Prof. Surmann), while two other UAVs provided views from different angles to facilitate maneuvering it entirely out of line of sight.
The video shows a snapshot of a 16-minute flight of a DJI Phantom 3 Professional over Schloss Birlinghoven in Sankt Augustin, Germany. The castle is located on the Fraunhofer campus at Sankt Augustin. The 3D model is generated from 400 key frames of the 4K video, which are extracted with ffmpeg. The work is part of an evaluation in the TRADR project (www.tradr-project.eu).
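Key-frame extraction like this can be done with ffmpeg's `fps` filter. The helper below only computes the sampling rate and assembles the command line; the exact options used in the project are not documented here, so treat the flags as an assumption.

```python
def ffmpeg_sample_cmd(video, out_pattern, duration_s, n_frames):
    """Build an ffmpeg command that samples n_frames evenly spaced over duration_s.

    The fps filter value is the output frame rate, i.e. n_frames / duration_s.
    out_pattern is an image-sequence pattern such as 'frame_%04d.png'.
    """
    fps = n_frames / duration_s
    return ["ffmpeg", "-i", video, "-vf", f"fps={fps:.6f}", out_pattern]
```

For the 16-minute flight, 400 frames correspond to roughly one frame every 2.4 seconds.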
Global registration of heterogeneous ground and aerial mapping data is a challenging task. This is especially difficult in disaster response scenarios when we have no prior information on the environment and cannot assume the regular order of man-made environments or meaningful semantic cues. In this work we extensively evaluate different approaches to globally register UGV-generated 3D point-cloud data from LiDAR sensors with UAV-generated point-cloud maps from vision sensors. The approaches are realizations of different selections for: a) local features: key-points or segments; b) descriptors: FPFH, SHOT, or ESF; and c) transformation estimations: RANSAC or FGR. Additionally, we compare the results against standard approaches like applying ICP after a good prior transformation has been given. The evaluation criteria include the distance which a UGV needs to travel to successfully localize, the registration error, and the computational cost. In this context, we report our findings on effectively performing the task on two new Search and Rescue datasets. Our results have the potential to help the community make informed decisions when registering point-cloud maps from ground robots to those from aerial robots.
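As a point of reference for the "standard approach" mentioned above, here is a minimal point-to-point ICP in NumPy (Kabsch/SVD update with brute-force nearest neighbours). It is a didactic sketch, not the evaluated pipeline, and, as the abstract notes for ICP, it assumes a reasonably good prior transformation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Point-to-point ICP: nearest-neighbour matching plus Kabsch update."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small clouds; use a k-d tree otherwise)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, cur
```

Without a good prior, this local refinement gets stuck in wrong minima, which is exactly why the global, feature-based approaches (FPFH/SHOT/ESF with RANSAC or FGR) are evaluated.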
From the 360° images of the previous video ("German rescue robotic center captured...") we now generate the 3D point cloud. The UAV needs 3 minutes to capture the outdoor scenario and the hall from inside and outside. The 3D point cloud generation is 5x slower than the video. A VSLAM algorithm localizes the key frames (green), and from 3 key frames a 360° PatchMatch algorithm, implemented on an NVIDIA graphics card (CUDA), calculates the dense point clouds. The hall is about 70 x 20 meters.