ChatGPT is a powerful chatbot that generates tailored texts in response to concrete prompts and can support developers in programming. To do so, the underlying GPT model, a "Large Language Model" (LLM), maps patterns onto a statistical model that generates an answer to the user's question. Thanks to the great media attention with which ChatGPT was launched, a large number of users have become acquainted with the potential opportunities of this technology. However, ChatGPT also carries a number of risks.
This article takes a comprehensive look at both the opportunities and the risks of ChatGPT, particularly in the area of cyber security.
Numerous innovative immersive applications have already emerged both in online and in brick-and-mortar retail, offering new cognitive and affective possibilities for interaction and information. More and more AR/VR applications can also be found in the fields of art, real estate, architecture, gaming, fashion, urban planning, and guided city tours. After a review of selected immersive projects, this article presents a concept for using AR and VR for vacant shops in a formerly attractive shopping street in Gelsenkirchen.
Desert ants (Cataglyphis spec.) monitor inclination and distance covered through force-based sensing in their legs. To transfer this mechanism to legged robots, artificial neural networks are used to determine the inclination angle of an experimental ramp from the motor data of the legs of a commercial hexapod walking robot. The inclination angle of the ramp can be determined from the leg motor data read out during a run. The result is independent of the weight and orientation of the robot on the ramp and hence robust enough to serve as an independent odometer.
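The abstract does not include code; the following is a minimal sketch of the regression idea, assuming the leg motor data arrives as a fixed-size feature vector per walking step. The feature layout, network size, and synthetic data are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: regressing ramp inclination from leg-motor data with a
# small neural network. The feature layout (18 motor readings per step,
# e.g. one reading per joint of a hexapod) and the synthetic data are
# assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: one row per walking step, 18 motor readings each,
# labeled with the ramp inclination in degrees.
X = rng.normal(size=(1000, 18))
y = X @ rng.normal(size=18) * 2.0 + rng.normal(scale=0.5, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print(f"R^2 on held-out steps: {model.score(X_test, y_test):.3f}")
```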
In the realm of digital situational awareness during disaster situations, accurate digital representations, like 3D models, play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller than 30 cm, equipped with 360° cameras, with the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of any scene from 2D images and then synthesize it from arbitrary angles upon request. This method is especially tailored to urban environments that have experienced significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
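For readers unfamiliar with NeRFs, the following is a minimal sketch of the core ingredients the abstract mentions: a positional encoding of 3D sample points and an MLP that predicts color and volume density. The layer sizes and encoding depth are illustrative assumptions, not the configuration used in the paper; a full NeRF additionally conditions color on the viewing direction and renders rays by volume integration.

```python
# Minimal sketch of the core NeRF building blocks: a positional encoding
# of 3D sample points and an MLP mapping encoded points to RGB color and
# volume density. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

def positional_encoding(x: torch.Tensor, n_freqs: int = 10) -> torch.Tensor:
    """Map coordinates to sin/cos features of increasing frequency."""
    feats = [x]
    for i in range(n_freqs):
        feats.append(torch.sin((2.0 ** i) * x))
        feats.append(torch.cos((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    def __init__(self, n_freqs: int = 10, hidden: int = 128):
        super().__init__()
        in_dim = 3 * (1 + 2 * n_freqs)  # raw xyz + sin/cos per frequency
        self.n_freqs = n_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 density channel
        )

    def forward(self, xyz: torch.Tensor):
        out = self.mlp(positional_encoding(xyz, self.n_freqs))
        rgb = torch.sigmoid(out[..., :3])  # colors in [0, 1]
        sigma = torch.relu(out[..., 3])    # non-negative volume density
        return rgb, sigma

# Query 1024 sample points along camera rays (coordinates assumed normalized).
model = TinyNeRF()
rgb, sigma = model(torch.rand(1024, 3))
print(rgb.shape, sigma.shape)  # torch.Size([1024, 3]) torch.Size([1024])
```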
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), usually during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset by implementing an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to augment the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
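As a rough illustration of such an assisted-labeling pipeline, the sketch below combines a pretrained detector (for cars and humans) with a classic HSV color threshold (for fire candidates). The detector choice, the HSV range, and the thresholds are assumptions for illustration, not the paper's parameters.

```python
# Illustrative assisted-labeling sketch: a pretrained Faster R-CNN proposes
# person/car boxes, while a classic HSV color threshold proposes candidate
# fire regions. Thresholds and ranges are assumptions, not the paper's.
import cv2
import numpy as np
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

COCO_IDS = {1: "person", 3: "car"}  # COCO category ids used by torchvision

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def propose_labels(bgr: np.ndarray, score_thr: float = 0.6) -> list:
    proposals = []

    # Neural proposals: pretrained detector for persons and cars.
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = detector([tensor])[0]
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score >= score_thr and int(label) in COCO_IDS:
            proposals.append({"cls": COCO_IDS[int(label)],
                              "box": [int(v) for v in box]})

    # Classic proposals: bright orange/red regions as fire candidates.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 180), (35, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:  # drop tiny speckles
            x, y, w, h = cv2.boundingRect(c)
            proposals.append({"cls": "fire", "box": [x, y, x + w, y + h]})
    return proposals
```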
Problem: A group of robots, called a swarm, is placed in an unknown environment and is supposed to explore it independently. The goal of the exploration is the creation of a common map.
Implementation
- Equipping six Kobuki robots with appropriate sensor technology, a large battery, a router, and a Jetson board
- Setting up the Jetson boards with self-made ROS2 nodes and configuring the mesh network
- Writing launch files for the common start of all functions (see the launch-file sketch after this list)
- Using reinforcement learning to train an AI that controls the swarm by selecting points for the robots to approach and navigating them there
- Setting up a responsive website using Angular and the Bootstrap framework
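As a hypothetical illustration of the launch-file item above, a ROS2 launch file could start one driver and one exploration node per robot plus a shared map-merging node. All package, executable, and topic names below are placeholders, not the project's actual code.

```python
# Hypothetical ROS2 launch file for the "common start of all functions":
# one driver and one exploration node per robot, plus a shared map merger.
# Package and executable names are placeholders.
from launch import LaunchDescription
from launch_ros.actions import Node

ROBOTS = [f"kobuki_{i}" for i in range(6)]

def generate_launch_description() -> LaunchDescription:
    nodes = []
    for ns in ROBOTS:
        # A per-robot namespace keeps topics like /kobuki_0/scan separated
        # on the shared mesh network.
        nodes.append(Node(package="swarm_exploration", executable="driver",
                          namespace=ns, name="driver"))
        nodes.append(Node(package="swarm_exploration", executable="explorer",
                          namespace=ns, name="explorer",
                          parameters=[{"goal_topic": "goal_pose"}]))
    # One shared node fuses the per-robot maps into the common map.
    nodes.append(Node(package="swarm_exploration", executable="map_merger",
                      name="map_merger"))
    return LaunchDescription(nodes)
```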
360° and IR Camera Drone Flight Test: Superimposition of Two Data Sources for Post-Fire Inspection
(2023)
This video highlights a recent flight test carried out in our cutting-edge robotics lab, unveiling the capabilities of our meticulously crafted thermal and 360° camera drone! We've ingeniously upgraded a DJI Avata with a bespoke thermal and 360° camera system. Compact yet powerful, measuring just 18 x 18 x 17 cm, this drone is strategically engineered to effortlessly navigate and deliver crucial thermal and 360° insights concurrently in post-fire or post-explosion environments.
The integration of a specialized thermal and 360° camera system enables the simultaneous capture of both data sources during a single flight. This groundbreaking approach not only reduces inspection time by half but also facilitates the seamless superimposition of thermal and 360° videos for comprehensive analysis and interpretation.
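As a rough sketch of what such a superimposition could look like in code, the snippet below blends a colorized thermal frame onto the corresponding 360° frame, assuming the two streams are already temporally synchronized and spatially registered. The file names, blend weight, and colormap are illustrative assumptions.

```python
# Minimal sketch: superimposing a thermal frame onto the matching 360° frame.
# Assumes both videos were captured simultaneously and the thermal image is
# already registered to the same region. Parameters are illustrative.
import cv2

def overlay_thermal(frame_360, frame_thermal, alpha: float = 0.45):
    # Colorize the single-channel thermal image and resize it to the
    # 360° frame so the two sources can be blended pixel by pixel.
    colored = cv2.applyColorMap(frame_thermal, cv2.COLORMAP_INFERNO)
    colored = cv2.resize(colored, (frame_360.shape[1], frame_360.shape[0]))
    return cv2.addWeighted(frame_360, 1.0 - alpha, colored, alpha, 0.0)

cap_360 = cv2.VideoCapture("flight_360.mp4")      # hypothetical file names
cap_ir = cv2.VideoCapture("flight_thermal.mp4")
ok1, f360 = cap_360.read()
ok2, fir = cap_ir.read()
if ok1 and ok2:
    fused = overlay_thermal(f360, cv2.cvtColor(fir, cv2.COLOR_BGR2GRAY))
    cv2.imwrite("fused_frame.png", fused)
```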
The dataset is used for 3D environment modeling, i.e. for the generation of dense 3D point clouds and 3D models with the PatchMatch algorithm and neural networks. Reflections from rain, water, and snow, as well as windows and vehicle surfaces, are difficult for the modeling algorithm. In addition, the lighting conditions change constantly.
At the integration sprint of the E-DRZ consortium in March 2023, we improved the information captured by the human spotter of the fire brigade by augmenting them with a 360° drone, i.e. a DJI Avata with an Insta360 mounted on top. The UAV needs 3 minutes to capture the outdoor scenario and the hall from inside and outside. The hall is about 70 x 20 meters. Once the drone has landed, all information is available in 360° at 5.7K, as can be seen in the video. Furthermore, it is a perfect documentation of the deployment scenario. In the next video we will show how to spatially localize the 360° video and how to generate a 3D point cloud from it.