Informatik und Kommunikation
(1999)
Being able to eat and drink independently and in a self-determined way is one of the basic human needs and belongs to the activities of daily living (ADLs). Physical impairments that involve a loss of function in the arms and hands, and possibly of upper-body mobility, severely limit independent food intake. Those affected depend on drinks and meals being prepared, provided, and handed to them. This group includes people with spinal-cord-injury-related tetraplegia, multiple sclerosis, muscular dystrophy, and conditions with similar effects. Various assistive technologies currently exist that aim to make independent eating and drinking possible again. But how must the interaction design of a robot arm be conceived so that those affected can use it for food intake? What requirements exist, and which aspects have to be considered with regard to the acceptance of a robot arm?
Robot arms are one of many assistive technologies used by people with motor impairments. Assistive robot arms can allow people to perform activities of daily living (ADL) involving grasping and manipulating objects in their environment without the assistance of caregivers. Suitable input devices (e.g., joysticks) mostly have two Degrees of Freedom (DoF), while most assistive robot arms have six or more. This results in time-consuming and cognitively demanding mode switches to change the mapping of DoFs to control the robot. One option to decrease the difficulty of controlling a high-DoF assistive robot arm with a low-DoF input device is to assign different combinations of movement DoFs to the device's input DoFs depending on the current situation (adaptive control). To explore this method of control, we designed two adaptive control methods for a realistic virtual 3D environment. We evaluated our methods against a commonly used non-adaptive control method that requires the user to switch controls manually. This was done in a simulated remote study that used Virtual Reality and involved 39 non-disabled participants. Our results show that the number of mode switches necessary to complete a simple pick-and-place task decreases significantly when using an adaptive control type, while the task completion time and workload stay the same. A thematic analysis of our participants' qualitative feedback suggests that a longer period of training could further improve the performance of the adaptive control methods.
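To make the control problem concrete, the sketch below (hypothetical Python, not the study's implementation) contrasts classic manual mode switching, where a 2-DoF joystick is remapped onto pairs of the arm's six Cartesian DoFs, with an adaptive mapping that selects the DoF pair from the current situation; the mode layout and the distance heuristic are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not the authors' implementation) of mapping
# a 2-DoF joystick onto a 6-DoF robot arm via discrete mode switching.
from dataclasses import dataclass

# Each mode maps the joystick's two axes onto two of the arm's six
# Cartesian DoFs (x, y, z translation; roll, pitch, yaw rotation).
MODES = [
    ("x", "y"),        # mode 0: planar translation
    ("z", "roll"),     # mode 1: height and wrist roll
    ("pitch", "yaw"),  # mode 2: wrist orientation
]

@dataclass
class ModeSwitchController:
    mode: int = 0
    switch_count: int = 0  # study metric: fewer switches is better

    def switch_mode(self) -> None:
        """Manual, user-triggered switch to the next DoF pair."""
        self.mode = (self.mode + 1) % len(MODES)
        self.switch_count += 1

    def command(self, axis1: float, axis2: float) -> dict:
        """Translate raw 2-DoF input into a velocity command for two arm DoFs."""
        dof_a, dof_b = MODES[self.mode]
        return {dof_a: axis1, dof_b: axis2}

# An adaptive controller would instead pick the DoF pair from the current
# situation (here a toy heuristic: distance to the grasp target in meters),
# removing the need for manual switches:
def adaptive_mapping(distance_to_target: float) -> tuple:
    return MODES[0] if distance_to_target > 0.1 else MODES[2]
```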
A Robust Interface for Head Motion based Control of a Robot Arm using MARG and Visual Sensors
(2018)
Head-controlled human-machine interfaces have gained popularity over the past years, especially in the restoration of the autonomy of severely disabled people, such as tetraplegics. These interfaces need to be reliable and robust to the environmental conditions in order to guarantee the safety of the user and enable direct interaction between a human and a machine. This paper presents a hybrid MARG and visual sensor system for head orientation estimation, which in this case is used to teleoperate a robotic arm. The system contains a Magnetic Angular Rate Gravity (MARG) sensor and a Tobii eye tracker 4C. A MARG sensor consists of a tri-axis accelerometer, a gyroscope, and a magnetometer, which together enable a complete measurement of orientation relative to the direction of gravity and the earth's magnetic field. The tri-axis magnetometer is sensitive to external magnetic fields, which results in incorrect orientation estimates from the sensor fusion process. In this work the Tobii eye tracker 4C is used to improve head orientation estimation, because it also features head tracking even though it is commonly used for eye tracking. This type of visual sensor does not suffer from magnetic drift; however, it computes orientation data only if a user is detectable. Within this work a state machine is presented that fuses the data of the MARG and visual sensors to improve orientation estimation. The fusion of the orientation data of the MARG and visual sensors enables a robust interface that is immune to external magnetic fields and therefore increases the safety of the human-machine interaction.
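The following is a minimal sketch of the kind of state machine the abstract describes, written in Python; the two states, the yaw-only correction, and the offset heuristic are assumptions for illustration, not the paper's exact fusion logic.

```python
# Minimal sketch (assumed logic, not the paper's exact state machine) of
# fusing MARG and visual head-orientation estimates; angles in degrees.
from enum import Enum, auto

class State(Enum):
    VISUAL = auto()  # user detected: trust the drift-free visual sensor
    MARG = auto()    # no user detected: fall back to the MARG estimate

class OrientationFusion:
    def __init__(self) -> None:
        self.state = State.MARG
        self.magnetic_offset = 0.0  # yaw correction learned from the visual sensor

    def update(self, marg_yaw: float, visual_yaw: float,
               user_detected: bool) -> float:
        if user_detected:
            self.state = State.VISUAL
            # Re-estimate the magnetometer-induced yaw drift while the
            # drift-free visual reference is available.
            self.magnetic_offset = marg_yaw - visual_yaw
            return visual_yaw
        self.state = State.MARG
        # Without a visible user, return the MARG yaw corrected by the last
        # known offset, which bounds the error from external magnetic fields.
        return marg_yaw - self.magnetic_offset
```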
Global registration of heterogeneous ground and aerial mapping data is a challenging task. It is especially difficult in disaster response scenarios, where we have no prior information on the environment and cannot assume the regular structure of man-made environments or meaningful semantic cues. In this work we extensively evaluate different approaches to globally register UGV-generated 3D point-cloud data from LiDAR sensors with UAV-generated point-cloud maps from vision sensors. The approaches are realizations of different selections for: a) local features: key-points or segments; b) descriptors: FPFH, SHOT, or ESF; and c) transformation estimation: RANSAC or FGR. Additionally, we compare the results against standard approaches such as applying ICP after a good prior transformation has been given. The evaluation criteria include the distance a UGV needs to travel to successfully localize, the registration error, and the computational cost. In this context, we report our findings on effectively performing the task on two new Search and Rescue datasets. Our results can help the community make informed decisions when registering point-cloud maps from ground robots to those from aerial robots.
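One concrete realization from this design space (FPFH descriptors on a downsampled cloud, RANSAC for the global transformation estimate, ICP for local refinement) can be sketched with the open-source Open3D library; the file paths, voxel size, and thresholds below are placeholders, and this is not the evaluation code used in the paper.

```python
# Sketch of coarse-to-fine point-cloud registration with Open3D:
# FPFH features + RANSAC for the global alignment, ICP for refinement.
import open3d as o3d

VOXEL = 0.4  # downsampling resolution in meters; dataset-dependent placeholder

def preprocess(pcd):
    """Downsample, estimate normals, and compute FPFH descriptors."""
    down = pcd.voxel_down_sample(VOXEL)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * VOXEL, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=100))
    return down, fpfh

ugv = o3d.io.read_point_cloud("ugv_lidar_map.pcd")   # placeholder path
uav = o3d.io.read_point_cloud("uav_vision_map.pcd")  # placeholder path
ugv_down, ugv_fpfh = preprocess(ugv)
uav_down, uav_fpfh = preprocess(uav)

# Global registration: RANSAC over FPFH feature correspondences.
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    ugv_down, uav_down, ugv_fpfh, uav_fpfh,
    mutual_filter=True,
    max_correspondence_distance=1.5 * VOXEL,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    ransac_n=3,
    checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * VOXEL)],
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Local refinement: ICP starting from the coarse global estimate.
fine = o3d.pipelines.registration.registration_icp(
    ugv_down, uav_down, VOXEL, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("registration fitness:", fine.fitness)
```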