Document Type: Conference Proceeding (45)
Institute: Informatik und Kommunikation (45)
A Robust Interface for Head Motion based Control of a Robot Arm using MARG and Visual Sensors
(2018)
Head-controlled human-machine interfaces have gained popularity over the past years, especially for restoring autonomy to severely disabled people such as tetraplegics. These interfaces need to be reliable and robust to environmental conditions in order to guarantee the safety of the user and enable direct interaction between a human and a machine. This paper presents a hybrid MARG and visual sensor system for head orientation estimation, used here to teleoperate a robotic arm. The system combines a Magnetic Angular Rate Gravity (MARG) sensor with a Tobii Eye Tracker 4C. A MARG sensor consists of a tri-axis accelerometer, a gyroscope, and a magnetometer, which together provide a complete measurement of orientation relative to the direction of gravity and the Earth's magnetic field. The tri-axis magnetometer, however, is sensitive to external magnetic fields, which lead to incorrect orientation estimates from the sensor fusion process. In this work the Tobii Eye Tracker 4C is used to improve head orientation estimation: although it is commonly used for eye tracking, it also features head tracking. This type of visual sensor does not suffer from magnetic drift, but it produces orientation data only while a user is detectable. We present a state machine that fuses the data of the MARG and visual sensors to improve orientation estimation. Fusing the orientation data of the two sensors yields a robust interface that is immune to external magnetic fields and therefore increases the safety of the human-machine interaction.
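The fusion idea described in the abstract can be sketched as a small state machine: while the eye tracker detects the user, its drift-free orientation is used and the current offset between the two sensors is recorded; when the user is lost, the MARG estimate takes over, corrected by the last observed offset. This is a minimal illustrative sketch, not the paper's implementation; the class names, the `Pose` representation, and the yaw-offset correction scheme are all assumptions.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class FusionState(Enum):
    VISUAL = auto()  # user detected: use the drift-free visual estimate
    MARG = auto()    # user lost: use MARG, corrected by the last known offset

@dataclass
class Pose:
    yaw: float
    pitch: float
    roll: float

class HeadFusion:
    """Hypothetical sketch of the MARG/visual fusion state machine."""

    def __init__(self) -> None:
        self.state = FusionState.MARG
        self.yaw_offset = 0.0  # visual yaw minus MARG yaw, learned while visible

    def update(self, marg: Pose, visual: Optional[Pose]) -> Pose:
        """One fusion step; `visual` is None when no user is detectable."""
        if visual is not None:
            self.state = FusionState.VISUAL
            # While the user is visible, track how far magnetic disturbance
            # has pulled the MARG heading away from the visual heading.
            self.yaw_offset = visual.yaw - marg.yaw
            return visual
        self.state = FusionState.MARG
        # Fall back to MARG, compensating the last observed magnetic drift.
        return Pose(marg.yaw + self.yaw_offset, marg.pitch, marg.roll)
```

In this sketch the visual sensor acts as the reference whenever it is available, so an external magnetic field only ever biases the fallback path, and that bias is continuously re-estimated.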
Media informatics degree programmes vary in their focus areas as well as in the professional profiles they prepare students for. Given the available data, a unifying curriculum as a basis for all programmes is a major undertaking. As a first step in this direction, the Fachgruppe Medieninformatik uses this year's workshop to ask which core competencies media informatics students should acquire during their studies. This contribution presents the current interim state of the discussion within the Fachgruppe Medieninformatik and the Arbeitskreis Curriculum, and is intended to prepare and document the path towards a specific recommendation for media informatics degree programmes, both for the MI community and for everyone else interested in media informatics.
Renewable and sustainable energy production by many small, distributed producers is revolutionizing the energy landscape as we know it. Consumers now produce energy themselves, turning them into prosumers in the smart grid. The interaction between prosumers and other entities in the grid, and the optimal utilization of new smart grid components (electric cars, freezers, solar panels, etc.), are crucial for the success of the smart grid. The Power Trading Agent Competition is an open simulation platform that allows researchers to conduct low-risk studies in this new energy market. In this work we present Maxon16, an autonomous energy broker and champion of the 2016 Power Trading Agent Competition. We describe the strategies the broker used in the final round and evaluate their effectiveness by analyzing the tournament's results.
This technical report describes the architecture and integration of commercial UAVs in Search and Rescue missions. The framework consists of heterogeneous UAVs, a UAV task planner, a bridge to the UAVs, an intelligent image hub, and a 3D point cloud generator. A first version of the framework was developed and tested in several training missions in the EU project TRADR.
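The data flow between the listed components can be sketched roughly as follows: the bridge collects imagery from the UAVs, the image hub buffers it, and the point cloud generator consumes the hub's imagery. All class and method names here are illustrative assumptions; the report does not specify the framework's actual interfaces, and the photogrammetry step is reduced to a placeholder.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Image:
    uav_id: str
    data: bytes

@dataclass
class UAV:
    uav_id: str

    def capture(self) -> Image:
        return Image(self.uav_id, b"")  # placeholder frame

class Bridge:
    """Relays tasks to the UAVs and imagery back to the ground segment."""

    def collect(self, uavs: List[UAV]) -> List[Image]:
        return [u.capture() for u in uavs]

class ImageHub:
    """Buffers imagery for analysts and downstream processing."""

    def __init__(self) -> None:
        self.images: List[Image] = []

    def ingest(self, images: List[Image]) -> None:
        self.images.extend(images)

class PointCloudGenerator:
    """Would run photogrammetry on hub imagery; here it only counts inputs."""

    def build(self, hub: ImageHub) -> int:
        return len(hub.images)
```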
This technical report describes the mission and the experience gained during the reconnaissance of an industrial hall containing hazardous substances after a major fire in Berlin. During this operation, only UAVs and cameras were used to obtain information about the site and the building. First, a geo-referenced 3D model of the building was created in order to plan the entry into the hall. Subsequently, the UAVs were flown into the heavily damaged interior to take pictures from inside the hall. A 360° camera mounted under a UAV was used to collect images of the surroundings, especially of sections that were difficult to fly into. Since the collected data set contained similar as well as blurred images, it was cleaned of non-optimal images using visual SLAM, bundle adjustment, and blur detection, so that a 3D model and overview images could be computed. It turned out that the emergency services were not able to extract the necessary information from the 3D model. Therefore, an interactive panorama viewer with links between the 360° images was implemented, where the links depend on the semi-dense point cloud and the camera positions localized by the visual SLAM algorithm, so that the emergency forces could inspect the surroundings.
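The blur-detection step used to clean the data set can be illustrated with the common variance-of-Laplacian metric: sharp images have strong edge responses and score high, blurred ones score low. This is a generic sketch, not the report's actual filter, and the rejection threshold is a hypothetical, data-dependent parameter.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Blur score: variance of the discrete Laplacian response.

    `gray` is a 2-D array of pixel intensities. Sharp images produce
    strong edge responses and therefore a high variance."""
    g = gray.astype(np.float64)
    # 4-neighbour Laplacian evaluated on the interior pixels.
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return float(lap.var())

def drop_blurred(images, threshold):
    """Keep only images whose blur score exceeds the threshold.

    The threshold is data-dependent and would be tuned per mission."""
    return [img for img in images if laplacian_variance(img) >= threshold]
```

In practice this metric is often computed via `cv2.Laplacian(gray, cv2.CV_64F).var()`; the NumPy version above makes the stencil explicit.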
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), typically during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish assisted labeling of the dataset by implementing an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to extend the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
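A key property of such an augmentation pipeline is that the labels must be transformed together with the image. The sketch below shows this for a horizontal flip with bounding boxes; it is an illustrative example of a label-preserving augmentation, not the paper's actual pipeline, and the `(x_min, y_min, x_max, y_max)` box format is an assumption.

```python
import numpy as np

def hflip_with_boxes(image: np.ndarray, boxes):
    """Horizontally flip an image and its (x_min, y_min, x_max, y_max) boxes.

    Boxes are given in pixel coordinates; the image's last-but-one axis
    (width) is mirrored."""
    w = image.shape[1]
    flipped = image[:, ::-1].copy()
    # A box's new x_min is the mirror of its old x_max, and vice versa.
    new_boxes = [(w - x2, y1, w - x1, y2) for (x1, y1, x2, y2) in boxes]
    return flipped, new_boxes
```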