A Robust Interface for Head Motion based Control of a Robot Arm using MARG and Visual Sensors
(2018)
Head-controlled human-machine interfaces have gained popularity over the past years, especially for restoring autonomy to severely disabled people such as tetraplegics. These interfaces need to be reliable and robust to environmental conditions in order to guarantee the safety of the user and to enable direct interaction between a human and a machine. This paper presents a hybrid MARG and visual sensor system for head orientation estimation, used here to teleoperate a robotic arm. The system combines a Magnetic Angular Rate Gravity (MARG) sensor with a Tobii eye tracker 4C. A MARG sensor consists of a tri-axis accelerometer, a gyroscope, and a magnetometer, which together enable a complete measurement of orientation relative to the direction of gravity and the Earth's magnetic field. The tri-axis magnetometer, however, is sensitive to external magnetic fields, which leads to incorrect orientation estimates from the sensor fusion process. In this work, the Tobii eye tracker 4C is used to improve head orientation estimation: although it is commonly used for eye tracking, it also features head tracking. This type of visual sensor does not suffer from magnetic drift, but it computes orientation data only while a user is detectable. We present a state machine that fuses the data of the MARG and visual sensors to improve the orientation estimate. The fusion of the orientation data from both sensors yields a robust interface that is immune to external magnetic fields and therefore increases the safety of the human-machine interaction.
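As a loose illustration of such a supervisory state machine, a minimal Python sketch follows; the states, the blending rule, and all names are hypothetical stand-ins rather than the paper's actual transition logic:

```python
from enum import Enum, auto

class FusionState(Enum):
    MARG_ONLY = auto()   # no user visible: rely on the MARG estimate alone
    FUSED = auto()       # visual head pose available: correct magnetic drift

class HeadOrientationFusion:
    """Hypothetical supervisory state machine for MARG/visual fusion."""

    def __init__(self, blend=0.05):
        self.state = FusionState.MARG_ONLY
        self.blend = blend   # per-step weight of the visual correction
        self.yaw = 0.0       # fused yaw estimate (degrees)

    def step(self, marg_yaw, visual_yaw=None):
        # Transition rule: the eye tracker reports a head pose only
        # while a user is detectable in front of it.
        self.state = (FusionState.FUSED if visual_yaw is not None
                      else FusionState.MARG_ONLY)
        if self.state is FusionState.FUSED:
            # Pull the magnetically drifting MARG estimate toward the
            # drift-free visual estimate.
            self.yaw = marg_yaw + self.blend * (visual_yaw - marg_yaw)
        else:
            self.yaw = marg_yaw
        return self.yaw

fusion = HeadOrientationFusion()
fusion.step(10.0, visual_yaw=8.0)   # user visible: fused estimate
fusion.step(10.5)                   # user lost: MARG only
```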
Degree programs in media informatics vary in their focus areas as well as in the professional profiles they prepare students for. Given the available data, a unifying curriculum as a basis for all of these programs is a major undertaking. As a first step in this direction, the Fachgruppe Medieninformatik (the media informatics special interest group) examines in this year's workshop which core competencies media informatics students should acquire during their studies. This contribution presents the current interim state of the discussion within the Fachgruppe Medieninformatik and the Arbeitskreis Curriculum, and is intended to prepare and document the path toward a specific recommendation for media informatics degree programs, both for the MI community and for everyone else interested in media informatics.
Renewable and sustainable energy production by many small and distributed producers is revolutionizing the energy landscape as we know it. Consumers now produce energy themselves, turning them into prosumers in the smart grid. The interaction between prosumers and other entities in the grid, and the optimal utilization of new smart grid components (electric cars, freezers, solar panels, etc.), are crucial for the success of the smart grid. The Power Trading Agent Competition is an open simulation platform that allows researchers to conduct low-risk studies in this new energy market. In this work we present Maxon16, an autonomous energy broker and champion of the 2016 Power Trading Agent Competition. We present the strategies the broker used in the final round and evaluate their effectiveness by analyzing the tournament results.
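Purely to illustrate what a rule-based retail strategy inside such a broker can look like, here is a toy sketch; the rule, thresholds, and function name are invented for exposition and are not Maxon16's actual strategy:

```python
def adjust_tariff_rate(rate, market_share, target_share=0.3, step=0.02):
    """Toy retail rule: undercut while below the target subscriber share,
    raise margins once it is exceeded. All numbers are illustrative."""
    if market_share < target_share:
        return rate * (1.0 - step)   # cheaper tariff to attract subscribers
    return rate * (1.0 + step)       # higher margin on a saturated base

new_rate = adjust_tariff_rate(rate=0.15, market_share=0.22)
```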
This technical report is about the architecture and integration of commercial UAVs in Search and Rescue missions. We describe a framework that consists of heterogeneous UAVs, a UAV task planner, a bridge to the UAVs, an intelligent image hub, and a 3D point cloud generator. A first version of the framework was developed and tested in several training missions in the EU project TRADR.
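As a rough, purely illustrative sketch of how such components might be wired together (all class and function names below are hypothetical, not the framework's actual interfaces):

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    uav_id: str
    image: bytes          # raw camera data relayed through the UAV bridge

@dataclass
class ImageHub:
    """Central hub that collects imagery from all UAVs for later use,
    e.g. by a 3D point cloud generator."""
    frames: list = field(default_factory=list)

    def ingest(self, frame: Frame) -> None:
        self.frames.append(frame)

def plan_tasks(uav_ids, areas):
    """Toy task planner: round-robin assignment of survey areas to UAVs."""
    return {area: uav_ids[i % len(uav_ids)] for i, area in enumerate(areas)}

hub = ImageHub()
tasks = plan_tasks(["uav-1", "uav-2"], ["north wing", "south wing", "yard"])
hub.ingest(Frame("uav-1", b""))
```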
This technical report describes the mission and the experience gained during the reconnaissance of an industrial hall containing hazardous substances after a major fire in Berlin. During this operation, only UAVs and cameras were used to obtain information about the site and the building. First, a geo-referenced 3D model of the building was created in order to plan the entry into the hall. Subsequently, the UAVs were used to fly inside the heavily damaged interior and take pictures from within the hall. A 360° camera mounted under the UAV collected images of the surrounding area, especially of sections that were difficult to fly into. Since the collected data set contained redundant as well as blurred images, it was cleaned of non-optimal images using visual SLAM, bundle adjustment, and blur detection, so that a 3D model and overview images could be computed. It turned out that the emergency services were not able to extract the necessary information from the 3D model. Therefore, an interactive panorama viewer with links between 360° images was implemented; the links are derived from the semi-dense point cloud and the localized camera positions of the visual SLAM algorithm, so that the emergency forces could explore the surroundings.
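The report does not detail its blur-detection method; a common heuristic for this cleaning step is the variance of the Laplacian, sketched below with an arbitrary threshold and hypothetical file names:

```python
import cv2

def is_blurred(path, threshold=100.0):
    """Flag an image as blurred when the variance of its Laplacian,
    a measure of high-frequency content, drops below a threshold."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

# Hypothetical file names; keep only the sharp images.
image_paths = ["hall_0001.jpg", "hall_0002.jpg"]
sharp = [p for p in image_paths if not is_blurred(p)]
```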
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), usually during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset through an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to extend the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
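A minimal sketch of one such augmentation step, assuming axis-aligned bounding-box labels (the paper's actual pipeline may differ), could look like this:

```python
import numpy as np

def hflip_with_boxes(image, boxes):
    """Horizontally flip an image together with its labels.
    Boxes are (x_min, y_min, x_max, y_max) in pixels."""
    h, w = image.shape[:2]
    flipped = image[:, ::-1].copy()
    flipped_boxes = [(w - x2, y1, w - x1, y2) for (x1, y1, x2, y2) in boxes]
    return flipped, flipped_boxes

frame = np.zeros((480, 640, 3), dtype=np.uint8)        # dummy aerial image
aug_img, aug_boxes = hflip_with_boxes(frame, [(10, 20, 110, 80)])
```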
This technical report is about the architecture and integration of very small commercial UAVs (< 40 cm diagonal) in indoor Search and Rescue missions. One UAV is manually controlled by a single human operator and delivers live video streams and image series for later 3D scene modelling and inspection. To assist the operator, who has to simultaneously observe the environment and navigate through it, we use multiple deep neural networks to provide guided autonomy, automatic object detection and classification, and local 3D scene modelling. Our methods help to reduce the cognitive load of the operator. We describe a framework for the quick integration of new methods from the field of Deep Learning, enabling rapid evaluation in real scenarios, including the interaction of methods.
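As a hedged sketch of the kind of plug-in interface that enables quick integration of new Deep Learning methods (names and signatures below are hypothetical, not the report's actual API):

```python
from abc import ABC, abstractmethod

class PerceptionModule(ABC):
    """Hypothetical common interface for swappable Deep Learning methods."""

    @abstractmethod
    def process(self, frame):
        """Consume one video frame and return a dict of results."""

class ObjectDetector(PerceptionModule):
    def process(self, frame):
        # A real module would run a neural network on the frame here.
        return {"detections": []}

def run_pipeline(frame, modules):
    # Every registered module sees each frame; results are merged so
    # that methods can also build on each other's outputs.
    results = {}
    for module in modules:
        results.update(module.process(frame))
    return results

print(run_pipeline(frame=None, modules=[ObjectDetector()]))
```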
In the realm of digital situational awareness during disaster situations, accurate digital representations, such as 3D models, play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller than 30 cm, equipped with 360° cameras, with the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of a scene from 2D images and then synthesize views of it from arbitrary angles upon request. This method is especially tailored to urban environments that have experienced significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
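At the core of a NeRF is a volume-rendering step that composites sampled densities and colors along each camera ray; the following numpy sketch shows the standard NeRF formulation, not necessarily the exact implementation used in the paper:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Alpha-composite N samples along one ray.
    sigmas: (N,) volume densities, colors: (N, 3) RGB,
    deltas: (N,) lengths of the segments between samples."""
    alphas = 1.0 - np.exp(-sigmas * deltas)            # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # transmittance
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)     # rendered RGB

rgb = render_ray(np.array([0.1, 0.5, 2.0]),
                 np.random.rand(3, 3),
                 np.full(3, 0.05))
```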
This paper presents a novel approach to building consistent 3D maps for multi-robot cooperation in USAR environments. The sensor streams from unmanned aerial vehicles (UAVs) and ground robots (UGVs) are fused into one consistent map. The UAV camera data are used to generate 3D point clouds, which are fused with the 3D point clouds generated by a rolling 2D laser scanner on the UGV. The registration method is based on matching corresponding planar segments extracted from the point clouds. Based on this registration, an approach for globally optimized localization is presented. Notably, apart from the structural information of the point clouds, no further information is required for the localization. Two examples demonstrate the performance of the overall registration.
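For the plane-based registration, the rotation between the two maps can be recovered from matched plane normals; below is a minimal sketch using the standard Kabsch/SVD alignment (the paper's full method also recovers the translation from the plane offsets, which is omitted here):

```python
import numpy as np

def rotation_from_normals(src, dst):
    """Estimate the rotation R with R @ src[i] ~ dst[i] for matched unit
    plane normals (rows of two (N, 3) arrays) via the Kabsch algorithm."""
    H = src.T @ dst                            # cross-covariance of the normals
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

normals = np.eye(3)                            # three orthogonal planes
R = rotation_from_normals(normals, normals)    # identity for identical inputs
```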