Informatik und Kommunikation
Internetstatistik
(2005)
This paper presents a novel approach to building consistent 3D maps for multi-robot cooperation in USAR environments. The sensor streams from unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) are fused into one consistent map. The UAV camera data are used to generate 3D point clouds, which are fused with the 3D point clouds generated by a rolling 2D laser scanner on the UGV. The registration method is based on matching corresponding planar segments extracted from the point clouds. Based on this registration, an approach for globally optimized localization is presented. Apart from the structural information in the point clouds, no further information is required for the localization. Two examples demonstrate the performance of the overall registration.
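The abstract does not spell out how the planar segments are obtained; as a rough illustration of that first step only, the sketch below extracts dominant planes from a point cloud with iterative RANSAC plane fitting using the Open3D library. The input file name, the number of planes, and all thresholds are placeholder assumptions, and the exact method names can differ between Open3D versions.

```python
# Illustrative sketch: extract planar segments from a point cloud with
# iterative RANSAC plane fitting (Open3D). File name and thresholds are
# placeholder assumptions, not taken from the paper.
import open3d as o3d

pcd = o3d.io.read_point_cloud("ugv_scan.pcd")  # hypothetical input file

segments = []
remaining = pcd
for _ in range(5):  # extract up to five dominant planes
    if len(remaining.points) < 100:
        break
    # Fit one plane to the remaining points with RANSAC.
    plane_model, inliers = remaining.segment_plane(
        distance_threshold=0.05, ransac_n=3, num_iterations=1000)
    segments.append((plane_model, remaining.select_by_index(inliers)))
    # Remove the inliers and continue with the rest of the cloud.
    remaining = remaining.select_by_index(inliers, invert=True)

for model, seg in segments:
    print("plane:", model, "points:", len(seg.points))
```

Matching such segments between the UAV and UGV clouds, as the paper describes, would then operate on these extracted planes rather than on raw points.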
Global registration of heterogeneous ground and aerial mapping data is a challenging task. It is especially difficult in disaster response scenarios, where no prior information about the environment is available and neither the regular structure of man-made environments nor meaningful semantic cues can be assumed. In this work we extensively evaluate different approaches to globally register UGV-generated 3D point-cloud data from LiDAR sensors with UAV-generated point-cloud maps from vision sensors. The approaches combine different choices of: a) local features: key-points or segments; b) descriptors: FPFH, SHOT, or ESF; and c) transformation estimation: RANSAC or FGR. Additionally, we compare the results against standard approaches such as applying ICP after a good prior transformation has been given. The evaluation criteria include the distance a UGV needs to travel to successfully localize, the registration error, and the computational cost. In this context, we report our findings on effectively performing the task on two new Search and Rescue datasets. Our results can help the community make informed decisions when registering point-cloud maps from ground robots to those from aerial robots.
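The evaluated pipelines are not detailed in the abstract; as a minimal sketch of one of the named combinations (FPFH descriptors with RANSAC-based transformation estimation, followed by an ICP refinement as in the baseline), the snippet below uses Open3D. The file names, voxel size, and distance thresholds are assumptions for illustration, and the exact RANSAC function signature varies slightly between Open3D versions.

```python
# Sketch of one feature/estimator combination from the evaluation:
# FPFH descriptors + RANSAC alignment, refined with ICP (Open3D).
# File names and all thresholds are illustrative assumptions.
import open3d as o3d

def preprocess(pcd, voxel):
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

voxel = 0.25  # assumed voxel size in metres
ugv, ugv_fpfh = preprocess(o3d.io.read_point_cloud("ugv_lidar_map.pcd"), voxel)
uav, uav_fpfh = preprocess(o3d.io.read_point_cloud("uav_vision_map.pcd"), voxel)

# Global alignment: RANSAC over FPFH feature correspondences.
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    ugv, uav, ugv_fpfh, uav_fpfh, True, 3 * voxel,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(3 * voxel)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Local refinement: ICP starting from the coarse transformation.
fine = o3d.pipelines.registration.registration_icp(
    ugv, uav, voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print("fitness:", fine.fitness, "inlier RMSE:", fine.inlier_rmse)
```

Swapping FPFH for SHOT or ESF descriptors, or RANSAC for FGR, would follow the same structure with a different feature or estimator stage.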
A Robust Interface for Head Motion based Control of a Robot Arm using MARG and Visual Sensors
(2018)
Head-controlled human-machine interfaces have gained popularity over the past years, especially for restoring the autonomy of severely disabled people, such as tetraplegics. These interfaces need to be reliable and robust with respect to the environmental conditions in order to guarantee the safety of the user and to enable direct interaction between a human and a machine. This paper presents a hybrid MARG and visual sensor system for head orientation estimation, which is used here to teleoperate a robotic arm. The system comprises a Magnetic Angular Rate Gravity (MARG) sensor and a Tobii eye tracker 4C. A MARG sensor consists of a tri-axis accelerometer, a gyroscope, and a magnetometer, which together enable a complete measurement of orientation relative to the direction of gravity and the Earth's magnetic field. The tri-axis magnetometer is sensitive to external magnetic fields, which lead to incorrect orientation estimates from the sensor fusion process. In this work the Tobii eye tracker 4C is used to improve head orientation estimation because it also provides head tracking, even though it is commonly used for eye tracking. This type of visual sensor does not suffer from magnetic drift; however, it only computes orientation data if a user is detectable. This work presents a state machine that fuses the data of the MARG and visual sensors to improve orientation estimation. The fusion of the orientation data of the MARG and visual sensors yields a robust interface that is immune to external magnetic fields and thus increases the safety of the human-machine interaction.
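The abstract only names the state machine without describing its states or transitions; the sketch below is an assumed minimal version of such a fusion logic, switching between a visual state (user detected, drift-free head orientation available) and a MARG fallback state with a drift correction learned while the visual sensor was available. All class and parameter names are hypothetical.

```python
# Minimal sketch of a MARG/visual fusion state machine as described in the
# abstract. States, correction scheme, and sensor interfaces are assumed for
# illustration; the paper's actual state machine may differ.
from enum import Enum, auto

class FusionState(Enum):
    VISUAL = auto()   # eye tracker sees the user: trust its head orientation
    MARG = auto()     # user not detected: fall back to MARG-only estimate

class HeadOrientationFusion:
    def __init__(self):
        self.state = FusionState.MARG
        self.yaw_offset = 0.0  # correction for magnetometer-induced yaw drift

    def update(self, marg_yaw, visual_yaw, user_detected):
        """Return the fused head yaw angle for one time step (degrees)."""
        if user_detected:
            self.state = FusionState.VISUAL
            # While the visual sensor is available, keep re-estimating the
            # offset between the MARG yaw and the drift-free visual yaw.
            self.yaw_offset = visual_yaw - marg_yaw
            return visual_yaw
        # Visual sensor lost the user: continue with drift-corrected MARG yaw.
        self.state = FusionState.MARG
        return marg_yaw + self.yaw_offset

# Example steps: user visible, then lost.
fusion = HeadOrientationFusion()
print(fusion.update(marg_yaw=10.0, visual_yaw=12.5, user_detected=True))
print(fusion.update(marg_yaw=11.0, visual_yaw=0.0, user_detected=False))
```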
Opportunities and Challenges in Mixed-Reality for an Inclusive Human-Robot Collaboration Environment
(2018)
This paper presents an approach to enhancing robot control using Mixed-Reality. It highlights the opportunities and challenges in the interaction design needed to achieve a Human-Robot Collaborative environment. Human-Robot Collaboration is an ideal space for social inclusion: it enables people who suffer from severe physical impairments to interact with the environment by giving them control over the movements of an external robotic arm. When discussing robot control, it is important to reduce the visual split that different input and output modalities introduce. Mixed-Reality is therefore of particular interest for easing communication between humans and robotic systems.