Informatik und Kommunikation
This technical report is about the architecture and integration of very small commercial UAVs (< 40 cm diagonal) in indoor Search and Rescue missions. One UAV is manually controlled by a single human operator, delivering live video streams and image series for later 3D scene modelling and inspection. To assist the operator, who must simultaneously observe the environment and navigate through it, we use multiple deep neural networks to provide guided autonomy, automatic object detection and classification, and local 3D scene modelling. Our methods help to reduce the operator's cognitive load. We describe a framework for the quick integration of new methods from the field of Deep Learning, enabling rapid evaluation in real scenarios, including the interaction of methods.
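A framework for quickly plugging new Deep Learning methods into a running perception pipeline can be sketched with a simple registry pattern. This is our own illustration, not the report's actual API; the names `Frame`, `register`, and `process_frame` are hypothetical.

```python
from typing import Callable, Dict

# Hypothetical sketch: a registry into which new perception methods
# (object detection, scene modelling, ...) can be dropped and then run
# together on every incoming video frame.

Frame = bytes  # stand-in for a decoded video frame

_modules: Dict[str, Callable[[Frame], dict]] = {}

def register(name: str):
    """Decorator that adds a perception method to the pipeline."""
    def wrap(fn: Callable[[Frame], dict]):
        _modules[name] = fn
        return fn
    return wrap

@register("object_detection")
def detect_objects(frame: Frame) -> dict:
    # A real module would run a deep network on the frame here.
    return {"objects": []}

def process_frame(frame: Frame) -> Dict[str, dict]:
    """Run every registered method on one frame and collect results."""
    return {name: fn(frame) for name, fn in _modules.items()}
```

Integrating an additional method then amounts to writing one decorated function, which matches the goal of rapid evaluation of new methods in real scenarios.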
In the realm of digital situational awareness during disaster situations, accurate digital representations such as 3D models play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that combines the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller than 30 cm and equipped with 360° cameras, with the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of a scene from 2D images and then synthesize views of it from arbitrary angles on request. This method is especially tailored to urban environments that have suffered significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as is commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
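The view synthesis that NeRFs perform rests on a standard volume-rendering step: densities and colours sampled along a camera ray are combined into one pixel colour by a transmittance-weighted sum. The sketch below illustrates that step only; it is our own simplified illustration, and the function and variable names are not from the paper.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Volume rendering along one ray.

    sigmas: (N,) densities, colors: (N, 3) RGB per sample,
    deltas: (N,) distances between consecutive samples.
    """
    alpha = 1.0 - np.exp(-sigmas * deltas)                 # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha                                # contribution per sample
    return (weights[:, None] * colors).sum(axis=0)         # rendered pixel colour

# A single high-density red sample renders (almost) pure red:
print(render_ray(np.array([50.0]), np.array([[1.0, 0.0, 0.0]]), np.array([1.0])))
```

During training, the network's densities and colours are optimized so that rays rendered this way reproduce the captured 2D images.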
This paper presents a novel approach to building consistent 3D maps for multi-robot cooperation in USAR environments. The sensor streams from unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) are fused into one consistent map. The UAV camera data are used to generate 3D point clouds, which are fused with the 3D point clouds produced by a rolling 2D laser scanner on the UGV. The registration method is based on matching corresponding planar segments extracted from the point clouds. Building on this registration, an approach for globally optimized localization is presented. Notably, no information beyond the structural content of the point clouds is required for the localization. Two examples demonstrate the performance of the overall registration.
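Given matched planar segments, a rigid transform between two scans can be recovered in closed form: the rotation from corresponding unit normals (via SVD, Kabsch-style) and the translation from the plane offsets by least squares. This is a simplified sketch of that general technique, not the paper's exact algorithm; it assumes at least three matched planes with non-parallel normals, each plane given as (n, d) with n·x = d.

```python
import numpy as np

def register_from_planes(src_planes, dst_planes):
    """Estimate R, t with dst ≈ R @ src + t from matched planes (n, d)."""
    N = np.array([p[0] for p in src_planes])   # source normals, shape (K, 3)
    M = np.array([p[0] for p in dst_planes])   # target normals, shape (K, 3)
    d = np.array([p[1] for p in src_planes])   # source offsets
    e = np.array([p[1] for p in dst_planes])   # target offsets

    # Rotation: minimize sum ||R n_i - m_i||^2 via SVD of M^T N (Kabsch).
    U, _, Vt = np.linalg.svd(M.T @ N)
    R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt

    # Translation: for y = R x + t, each match gives m_i · t = e_i - d_i.
    t, *_ = np.linalg.lstsq(M, e - d, rcond=None)
    return R, t
```

With more than three planes the least-squares step averages out segmentation noise, which is one reason plane-based registration is attractive for cluttered USAR scans.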