A Robust Interface for Head Motion based Control of a Robot Arm using MARG and Visual Sensors
(2018)
Head-controlled human-machine interfaces have gained popularity over the past years, especially for restoring autonomy to severely disabled people such as tetraplegics. These interfaces need to be reliable and robust to environmental conditions in order to guarantee the safety of the user and enable direct interaction between a human and a machine. This paper presents a hybrid MARG and visual sensor system for head orientation estimation, used here to teleoperate a robotic arm. The system combines a Magnetic Angular Rate Gravity (MARG) sensor with a Tobii Eye Tracker 4C. A MARG sensor consists of a tri-axis accelerometer, gyroscope, and magnetometer, which together enable a complete measurement of orientation relative to the direction of gravity and the earth's magnetic field. The tri-axis magnetometer, however, is sensitive to external magnetic fields, which lead to incorrect orientation estimates from the sensor fusion process. In this work the Tobii Eye Tracker 4C is used to improve head orientation estimation because, although commonly used for eye tracking, it also features head tracking. This type of visual sensor does not suffer from magnetic drift; however, it computes orientation data only if a user is detectable. A state machine is presented that fuses the data of the MARG and visual sensors to improve orientation estimation. The fusion of the orientation data from both sensors yields a robust interface that is immune to external magnetic fields and therefore increases the safety of the human-machine interaction.
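The switching logic described above can be illustrated with a minimal two-state machine: trust the visual sensor's head orientation whenever a user is detected, and fall back to the MARG estimate otherwise. This is only a sketch of the general idea; class and method names, and the single-angle interface, are illustrative and not taken from the paper.

```python
# Minimal sketch of a two-state fusion machine: prefer the visual sensor's
# head-orientation estimate when a user is detected, otherwise fall back to
# the MARG estimate (always available, but prone to magnetic drift).
# Names and the single-yaw interface are illustrative, not from the paper.

class FusionStateMachine:
    VISUAL = "visual"
    MARG = "marg"

    def __init__(self):
        self.state = self.MARG

    def update(self, marg_yaw, visual_yaw, user_detected):
        """Return a yaw estimate (degrees) from the currently trusted source."""
        if user_detected:
            self.state = self.VISUAL
            return visual_yaw          # immune to magnetic disturbance
        self.state = self.MARG
        return marg_yaw                # always available, may drift


fsm = FusionStateMachine()
print(fsm.update(10.0, 12.0, user_detected=True))   # 12.0 (visual trusted)
print(fsm.update(10.0, None, user_detected=False))  # 10.0 (MARG fallback)
```

A real implementation would additionally blend the two estimates on state transitions to avoid jumps in the commanded arm pose.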
Renewable and sustainable energy production by many small, distributed producers is revolutionizing the energy landscape as we know it. Consumers now produce energy themselves, turning them into prosumers in the smart grid. The interaction between prosumers and other entities in the grid, and the optimal utilization of new smart grid components (electric cars, freezers, solar panels, etc.), are crucial for the success of the smart grid. The Power Trading Agent Competition is an open simulation platform that allows researchers to conduct low-risk studies in this new energy market. In this work we present Maxon16, an autonomous energy broker and champion of the 2016 Power Trading Agent Competition. We present the strategies the broker used in the final round and evaluate their effectiveness by analyzing the tournament's results.
Media Brand Management
(2022)
The management of media brands faces particular challenges. In order to point out possible solutions, this article first explains the concept and nature of “media brands.” Subsequently, various theoretical approaches to the explanation of media brands and their management are presented. Regardless of theoretical preferences, it is important to keep in mind the brand-strategic complexity of media management, which is described next. Due to their specificity, special attention is paid to the basic strategic positioning options and to the communication management of media brands. In this way, the special features of media brand management become clear in comparison with other products and services.
This technical report is about the architecture and integration of commercial UAVs in Search and Rescue missions. We describe a framework that consists of heterogeneous UAVs, a UAV task planner, a bridge to the UAVs, an intelligent image hub, and a 3D point cloud generator. A first version of the framework was developed and tested in several training missions in the EU project TRADR.
This technical report covers the mission and the experience gained during the reconnaissance of an industrial hall containing hazardous substances after a major fire in Berlin. During this operation, only UAVs and cameras were used to obtain information about the site and the building. First, a geo-referenced 3D model of the building was created in order to plan the entry into the hall. Subsequently, the UAVs were flown into the heavily damaged interior to take pictures from inside the hall. A 360° camera mounted under a UAV was used to collect images of the surrounding area, especially in sections that were difficult to fly into. Since the collected data set contained near-duplicate as well as blurred images, it was cleaned of non-optimal images using visual SLAM, bundle adjustment, and blur detection so that a 3D model and overviews could be computed. It turned out that the emergency services were not able to extract the necessary information from the 3D model. Therefore, an interactive panorama viewer with links between the 360° images was implemented, where the links depend on the semi-dense point cloud and the camera positions located by the visual SLAM algorithm, so that the emergency forces could explore the surroundings.
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), typically during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset through an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to extend the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
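One representative step of such an augmentation pipeline is a horizontal flip, where the bounding-box labels must be mirrored along with the pixels so the augmented sample stays correctly annotated. The sketch below uses plain Python lists in place of image arrays; the box format (x_min, y_min, x_max, y_max) is an assumption, not taken from the paper.

```python
# Sketch of one automatic-labeling augmentation step: flip an image
# horizontally and mirror its bounding boxes. Nested lists stand in for
# image arrays; the (x_min, y_min, x_max, y_max) box format is assumed.

def hflip(image, boxes):
    """Flip a row-major image grid left-right and mirror box x-coordinates."""
    width = len(image[0])
    flipped = [row[::-1] for row in image]
    new_boxes = [(width - x_max, y_min, width - x_min, y_max)
                 for (x_min, y_min, x_max, y_max) in boxes]
    return flipped, new_boxes


image = [[1, 2, 3],
         [4, 5, 6]]
boxes = [(0, 0, 1, 2)]  # covers the leftmost column
aug_image, aug_boxes = hflip(image, boxes)
print(aug_image)  # [[3, 2, 1], [6, 5, 4]]
print(aug_boxes)  # [(2, 0, 3, 2)] -- box now covers the rightmost column
```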
This technical report is about the architecture and integration of very small commercial UAVs (< 40 cm diagonal) in indoor Search and Rescue missions. One UAV is manually controlled by a single human operator, delivering live video streams and image series for later 3D scene modelling and inspection. In order to assist the operator, who has to simultaneously observe the environment and navigate through it, we use multiple deep neural networks to provide guided autonomy, automatic object detection and classification, and local 3D scene modelling. Our methods help to reduce the cognitive load of the operator. We describe a framework for the quick integration of new methods from the field of deep learning, enabling rapid evaluation in real scenarios, including the interaction of methods.
In the realm of digital situational awareness during disaster situations, accurate digital representations, like 3D models, play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller than 30 cm, equipped with 360° cameras with the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of any scene from 2D images and then synthesize it from arbitrary viewpoints upon request. This method is especially tailored to urban environments that have experienced significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
This paper presents a novel approach to building consistent 3D maps for multi-robot cooperation in USAR environments. The sensor streams from unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) are fused into one consistent map. The UAV camera data are used to generate 3D point clouds that are fused with the 3D point clouds generated by a rolling 2D laser scanner on the UGV. The registration method is based on matching corresponding planar segments extracted from the point clouds. Based on the registration, an approach for globally optimized localization is presented. Notably, apart from the structural information of the point clouds, no further information is required for the localization. Two examples demonstrate the performance of the overall registration.
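The core of registration via planar segments can be sketched as pairing segments from the two maps whose surface normals are nearly parallel. The toy example below does only this orientation matching; a real system would additionally compare segment area and centroid distance and then solve for the rigid transform. All names and the threshold are illustrative, not taken from the paper.

```python
# Toy sketch of plane-based registration's first step: greedily pair planar
# segments (represented by their unit normals) whose orientations agree.
# The 0.95 cosine threshold and all names are illustrative assumptions.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def match_planes(planes_uav, planes_ugv, min_cos=0.95):
    """Greedily pair planes with similar orientation; returns index pairs."""
    matches, used = [], set()
    for i, n1 in enumerate(planes_uav):
        best, best_cos = None, min_cos
        for j, n2 in enumerate(planes_ugv):
            if j in used:
                continue
            c = abs(dot(n1, n2))  # abs(): a plane normal's sign is ambiguous
            if c > best_cos:
                best, best_cos = j, c
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches


uav = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]      # floor, wall
ugv = [(0.99, 0.1, 0.0), (0.0, 0.05, 0.999)]  # wall (noisy), floor (noisy)
print(match_planes(uav, ugv))  # [(0, 1), (1, 0)]
```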
The two churches San Francesco and Sant'Agostino in Amatrice, Italy, were hit by an earthquake on August 24, 2016. Both churches are in a state of partial collapse and in need of shoring to prevent potential further destruction and to preserve the national heritage. The video shows the mission of September 1, 2016, in clips of 10 seconds.
The TRADR project was asked by the Italian fire brigade Vigili del Fuoco to provide 3D textured models of the two churches.
The team entered San Francesco with two UGVs (ground robots) and one UAV (drone, flown by Prof. Surmann), teleoperating them entirely out of line of sight and partially in collaboration. Sant'Agostino was entered with one UAV (also flown by Prof. Surmann), while two other UAVs provided views from different angles to facilitate maneuvering entirely out of line of sight.
Venice 2018: Tradr Review
(2018)
The video shows an orthophoto and a textured 3D model of the location. 300 images were recorded in two short flights with a Mavic Pro at a height of 50 meters. The first flight was a single grid with the camera facing straight down; the second was a double grid with the camera tilted at a 60-degree angle. The 3D model was computed with OpenDroneMap.
Challenging visual localization of a UAV while flying out of a room into a snowy environment (~4:50). The UAV is equipped with a 360° camera. The localization is done with OpenVSLAM.
The video was recorded in January 2019 at the fire brigade training center in Dortmund.
To achieve near real-time conditions, the original resolution of 5K (30 fps) was reduced to 2K (ffmpeg -i video.mp4 -vf scale=1920:-1 -crf 25 video-small.mp4) with high compression (-crf 25). This reduces the original size from 3.2 GB to 93 MB (~4 Mbit/s, which could be transmitted online via a radio link). The localization shown did not use frameskip. With a frameskip above 1, the localization fails while the UAV is flying through the window. Indoor localization can be done with a frameskip of 3 in real time.
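The frameskip trade-off mentioned above can be made concrete with a small sketch: the tracker only sees every (skip+1)-th frame, so larger values save compute at the cost of robustness during fast scene changes such as flying through the window. This assumes a frameskip of n means n frames are dropped between tracked frames; the function name is illustrative and not part of OpenVSLAM.

```python
# Sketch of the frameskip trade-off: which frame indices reach the tracker.
# Assumption: frameskip n drops n frames between consecutive tracked frames.

def frames_to_track(total_frames, skip):
    """Indices of frames handed to the tracker for a given frameskip."""
    return list(range(0, total_frames, skip + 1))


print(frames_to_track(10, 0))  # [0, 1, ..., 9] every frame, most robust
print(frames_to_track(10, 3))  # [0, 4, 8] real-time indoors, per the report
```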