Robot arms are one of many assistive technologies used by people with motor impairments. Assistive robot arms can allow people to perform activities of daily living (ADL) that involve grasping and manipulating objects in their environment without the assistance of caregivers. Suitable input devices (e.g., joysticks) mostly have two Degrees of Freedom (DoF), while most assistive robot arms have six or more. This mismatch results in time-consuming and cognitively demanding mode switches to change which robot DoFs the input device controls. One option to make a high-DoF assistive robot arm easier to control with a low-DoF input device is to assign different combinations of movement DoFs to the device's input DoFs depending on the current situation (adaptive control). To explore this method of control, we designed two adaptive control methods for a realistic virtual 3D environment. We evaluated our methods against a commonly used non-adaptive control method that requires the user to switch controls manually. The evaluation was conducted as a simulated remote study in Virtual Reality with 39 non-disabled participants. Our results show that the number of mode switches necessary to complete a simple pick-and-place task decreases significantly when using an adaptive control type, while task completion time and workload stay the same. A thematic analysis of the participants' qualitative feedback suggests that a longer training period could further improve the performance of adaptive control methods.
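To illustrate the contrast drawn above between manual mode switching and adaptive DoF mapping, the following Python sketch shows one possible reading of the idea. It is a minimal, hypothetical example: the mode table, the distance threshold, and the rule that selects a mapping from the current situation are invented here for illustration and are not taken from the study.

# Illustrative sketch (not the study's implementation): a 2-DoF joystick
# driving a higher-DoF Cartesian robot command, once via manual mode
# switching and once via a hypothetical adaptive mapping that picks the
# DoF pair from the situation (here: distance to the target object).

# Each "mode" maps the joystick axes (x, y) onto two of the robot's
# Cartesian DoFs: translation (tx, ty, tz) and rotation (rx, ry, rz).
MODES = [
    ("tx", "ty"),   # mode 0: planar translation
    ("tz", "rz"),   # mode 1: height and wrist rotation
    ("rx", "ry"),   # mode 2: gripper orientation
]

def classic_control(joystick_xy, mode):
    """Non-adaptive control: the user switches `mode` manually."""
    cmd = dict.fromkeys(("tx", "ty", "tz", "rx", "ry", "rz"), 0.0)
    a, b = MODES[mode]
    cmd[a], cmd[b] = joystick_xy
    return cmd

def adaptive_control(joystick_xy, distance_to_target):
    """Adaptive control with an assumed rule: translate while far from
    the target, orient the gripper when close, without a manual switch."""
    mode = 0 if distance_to_target > 0.10 else 2   # threshold in metres, assumed
    return classic_control(joystick_xy, mode)

# The same joystick deflection is interpreted differently by context:
print(classic_control((0.5, -0.2), mode=1))
print(adaptive_control((0.5, -0.2), distance_to_target=0.05))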
Physical disabilities can restrict people to the point where an autonomous and self-determined life is no longer possible for them, despite intact mental and cognitive abilities. For people who are paralyzed from the neck down, so-called tetraplegics, every piece of regained autonomy therefore increases their quality of life. In this master's thesis, an Augmented Reality prototype is developed that allows tetraplegics, or people with a similar physical impairment, to carry out assembly tasks at a human-robot workstation and can thus enable their integration into working life. The prototype lets the user control a Kuka iiwa robot arm with the Microsoft HoloLens without using their hands. A particular focus is placed on enriching the user's field of view with dedicated virtual visualizations, so-called visual cues, in order to compensate for the disadvantages caused by the movement restrictions of the target group. These visual cues are meant to support the control of the robot arm and improve the operation of the prototype. An evaluation of the prototype showed tendencies that the concept of visual cues lets users control the robot arm more precisely and supports its operation.
Autonomy and self-determination are fundamental aspects of living in our society. Supporting people for whom this freedom is limited due to physical impairments is the fundamental goal of this thesis. Especially for people who are paralyzed, even working a desk job is often not feasible. Therefore, in this thesis a prototype of a robot assembly workstation was constructed that uses a modern Augmented Reality (AR) Head-Mounted Display (HMD) to control a robotic arm. Through object pose recognition, the objects in the working environment are detected, and this information is used to display different visual cues at the robotic arm or in its vicinity, providing the users with additional depth information and helping them determine object relations that are often not easily discernible from a fixed perspective.
To achieve this, a hands-free AR-based robot control scheme was developed that uses speech and head movement for interaction. Additionally, multiple advanced visual cues were designed that use object pose detection to provide spatial-visual support. The pose recognition system is adapted from state-of-the-art research in computer vision and allows the detection of arbitrary objects regardless of texture or shape.
Two evaluations were performed. First, a small user study that excluded the object recognition confirmed the general usability of the system and gave an impression of its performance: the participants were able to perform difficult pick-and-place tasks with a high success rate. Second, a technical evaluation of the object recognition system revealed adequate prediction precision, but the system is too unreliable for real-world scenarios, as the prediction quality varies strongly with object orientation and occlusion.
Opportunities and Challenges in Mixed-Reality for an Inclusive Human-Robot Collaboration Environment
(2018)
This paper presents an approach to enhancing robot control using Mixed-Reality. It highlights the opportunities and challenges in the interaction design required to achieve a Human-Robot Collaborative environment. Human-Robot Collaboration is, in fact, a prime space for social inclusion: it enables people with severe physical impairments to interact with their environment by giving them movement control of an external robotic arm. When discussing robot control, it is important to reduce the visual split introduced by different input and output modalities. Mixed-Reality is therefore of particular interest when trying to ease communication between humans and robotic systems.