Robot arms are one of many assistive technologies used by people with motor impairments. Assistive robot arms can allow people to perform activities of daily living (ADL) involving grasping and manipulating objects in their environment without the assistance of caregivers. Suitable input devices (e.g., joysticks) mostly have two Degrees of Freedom (DoF), while most assistive robot arms have six or more. This results in time-consuming and cognitively demanding mode switches to change the mapping of DoFs to control the robot. One option to decrease the difficulty of controlling a high-DoF assistive robot arm with a low-DoF input device is to assign different combinations of movement-DoFs to the device’s input DoFs depending on the current situation (adaptive control). To explore this method of control, we designed two adaptive control methods for a realistic virtual 3D environment. We evaluated our methods against a commonly used non-adaptive control method that requires the user to switch controls manually. This was conducted in a simulated remote study that used Virtual Reality and involved 39 non-disabled participants. Our results show that the number of mode switches necessary to complete a simple pick-and-place task decreases significantly when using an adaptive control type, while the task completion time and workload stay the same. A thematic analysis of qualitative feedback from our participants suggests that a longer period of training could further improve the performance of adaptive control methods.
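The contrast between manual mode switching and adaptive control can be illustrated with a minimal sketch. The DoF pairs, function names, and the simple situation-based suggestion are assumptions for illustration, not the study's actual implementation:

```python
# Hypothetical sketch: mapping a 2-DoF joystick onto a 6-DoF robot arm.
# With classic mode switching, the user cycles through fixed DoF pairs;
# an adaptive controller instead proposes the pair for the current situation.

MODES = [(0, 1), (2, 3), (4, 5)]  # assumed fixed DoF pairs for manual modes

def manual_control(joystick, mode_index):
    """Non-adaptive control: the active DoF pair is selected manually."""
    command = [0.0] * 6
    a, b = MODES[mode_index]
    command[a], command[b] = joystick
    return command

def adaptive_control(joystick, suggested_dofs):
    """Adaptive control: the system suggests the DoF pair, e.g. translation
    while far from the target, wrist rotation when close to grasping."""
    command = [0.0] * 6
    a, b = suggested_dofs
    command[a], command[b] = joystick
    return command

# far from the object: the same joystick input drives an XY translation
print(manual_control((0.5, -0.2), 0))  # [0.5, -0.2, 0.0, 0.0, 0.0, 0.0]
```

In the adaptive variant, the user never cycles modes; the saved switches are what the study measures.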
Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they “see” the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot’s surroundings have been identified by its sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants with physical impairments. In a second remote experiment, we validated these findings with a broader non-specific user base. Our findings show that Line, a lower-complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as a more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception, and Line presents an easy-to-understand alternative.
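The geometric idea behind a Line-style cue can be sketched briefly: a straight line from a reference point toward an object outside the projection area, clipped to the border. The function name and the centred rectangular geometry are assumptions for illustration, not the authors' implementation:

```python
# Hypothetical sketch of a "Line"-style off-screen cue in a projection area
# modelled as a rectangle centred at (0, 0) with half-extents half_w, half_h.

def line_cue_endpoint(obj_x, obj_y, half_w, half_h):
    """Return where a line from the centre (0, 0) toward the off-screen
    object at (obj_x, obj_y) crosses the rectangular projection border."""
    # smallest scale factor that brings the object direction onto the border
    scale = min(half_w / abs(obj_x) if obj_x else float("inf"),
                half_h / abs(obj_y) if obj_y else float("inf"))
    return (obj_x * scale, obj_y * scale)

# object far to the right of the projection: the cue ends on the right edge
print(line_cue_endpoint(4.0, 1.0, half_w=2.0, half_h=1.5))  # (2.0, 0.5)
```

The line's direction alone encodes where the detected object lies, which may explain the lower task load compared with the more complex Wedge and Halo shapes.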
Recommendations for the Development of a Robotic Drinking and Eating Aid - An Ethnographic Study
(2021)
Being able to live independently and self-determined in one’s own home is a crucial factor for human dignity and preservation of self-worth. For people with severe physical impairments who cannot use their limbs for everyday tasks, living in their own home is only possible with assistance from others. The inability to move arms and hands makes it hard to take care of oneself, e.g. drinking and eating independently. In this paper, we investigate how 15 participants with disabilities consume food and drinks. We report on interviews and participatory observations, and analyze the aids they currently use. Based on our findings, we derive a set of recommendations that supports researchers and practitioners in designing future robotic drinking and eating aids for people with disabilities.
This article introduces two research projects towards assistive robotic arms for people with severe body impairments. Both projects aim to develop new control and interaction designs to promote accessibility and better performance for people with functional losses in all four extremities, e.g. due to quadriplegia or multiple sclerosis. The project MobILe concentrates on using a robotic arm as a drinking aid and controlling it with smart glasses, eye-tracking and augmented reality. A user-oriented development process with participatory methods was pursued, which brought new knowledge about the life and care situation of the future target group and the requirements a robotic drinking aid needs to meet. As a consequence, the new project DoF-Adaptiv follows an even more participatory approach, including the future target group, their family and professional caregivers from the beginning in decision-making and development processes within the project. DoF-Adaptiv aims to simplify the control modalities of assistive robotic arms to enhance the usability of the robotic arm for activities of daily living. To decide on exemplary activities, like eating or opening a door, the future target group, their family and professional caregivers are included in the decision-making process. Furthermore, all relevant stakeholders will be included in the investigation of ethical, legal and social implications as well as the identification of potential risks. This article shows the importance of participatory design for the development and research process in MobILe and DoF-Adaptiv.
In the scientific literature there are hardly any studies that examine the concrete everyday usability of smartwatches in order to understand why this class of wearables leads a rather niche existence. This contribution therefore investigates the use of a smartwatch, taking cooking as an example. For this purpose, a cooking app with recipe information was developed for a smartwatch, operable through hand and arm movements in the form of gestures. A field study with eight participants examined to what extent this form of interaction changes the cooking process. The results show that the immediate availability of the watch offers both efficiency and effectiveness advantages over classic cooking aids. Control via free-hand gestures also allowed use in a scenario in which the hands are often occupied or dirty, so that operation by finger can be problematic. The participants regarded the watch as a useful tool, even though they had no prior experience with such a device.
Being able to eat and drink independently and self-determinedly is one of the basic human needs and belongs to the activities of daily living (ADLs). Physical impairments that come with loss of function in the arms, hands, and possibly the mobility of the upper body considerably restrict independent food intake. Those affected depend on drinks and meals being prepared, provided, and served to them. This group includes people with spinal-cord-related tetraplegia, multiple sclerosis, muscular dystrophy, and conditions with similar effects. Various assistive technologies currently exist that are intended to make independent eating and drinking possible again. But how must the interaction design of a robot arm be conceived so that those affected can use it for food intake? What requirements exist, and which aspects must be considered with regard to the acceptance of a robot arm?
Opportunities and Challenges in Mixed-Reality for an Inclusive Human-Robot Collaboration Environment
(2018)
This paper presents an approach to enhance robot control using Mixed-Reality. It highlights the opportunities and challenges in the interaction design to achieve a Human-Robot Collaborative environment. In fact, Human-Robot Collaboration is an ideal space for social inclusion: it enables people with severe physical impairments to interact with the environment by providing them movement control of an external robotic arm. When discussing robot control, it is important to reduce the visual split that different input and output modalities introduce. Therefore, Mixed-Reality is of particular interest when trying to ease communication between humans and robotic systems.
A Robust Interface for Head Motion based Control of a Robot Arm using MARG and Visual Sensors
(2018)
Head-controlled human machine interfaces have gained popularity over the past years, especially in the restoration of the autonomy of severely disabled people, like tetraplegics. These interfaces need to be reliable and robust regarding the environmental conditions to guarantee safety of the user and enable a direct interaction between a human and a machine. This paper presents a hybrid MARG and visual sensor system for head orientation estimation, which is in this case used to teleoperate a robotic arm. The system contains a Magnetic Angular Rate Gravity (MARG) sensor and a Tobii eye tracker 4C. A MARG sensor consists of a tri-axis accelerometer, a gyroscope, and a magnetometer, which together enable a complete measurement of orientation relative to the direction of gravity and the earth's magnetic field. The tri-axis magnetometer is sensitive to external magnetic fields, which can result in incorrect orientation estimates from the sensor fusion process. In this work the Tobii eye tracker 4C is used to improve head orientation estimation because, although it is commonly used for eye tracking, it also features head tracking. This type of visual sensor does not suffer from magnetic drift; however, it computes orientation data only if a user is detectable. This work presents a state machine that enables data fusion of the MARG and visual sensors to improve orientation estimation. The fusion of the orientation data of the MARG and visual sensors enables a robust interface that is immune to external magnetic fields and therefore increases the safety of the human machine interaction.
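The switching logic described above can be sketched as a small state machine. The state names, the simple averaging used as a stand-in for the actual sensor fusion, and the yaw-only interface are assumptions for illustration, not the paper's implementation:

```python
# Minimal sketch of a state machine that switches between MARG-based and
# visual head-orientation estimates, following the idea described above.

class OrientationFusion:
    def __init__(self):
        self.state = "MARG_ONLY"

    def update(self, marg_yaw, visual_yaw, user_detected, magnetic_disturbance):
        # The visual tracker only yields data while a user is detectable;
        # the magnetometer is only trusted without external magnetic fields.
        if user_detected:
            self.state = "VISUAL_ONLY" if magnetic_disturbance else "FUSED"
        else:
            self.state = "MARG_ONLY"

        if self.state == "VISUAL_ONLY":
            return visual_yaw
        if self.state == "FUSED":
            return 0.5 * (marg_yaw + visual_yaw)  # placeholder for real fusion
        return marg_yaw

fusion = OrientationFusion()
# external magnetic field present, user visible: trust the visual estimate
print(fusion.update(10.0, 12.0, user_detected=True, magnetic_disturbance=True))
```

Falling back to the MARG estimate whenever the user leaves the tracker's view is what keeps the interface available, while the visual branch removes the magnetic drift.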