Filter
Year of publication
Document type
Language
- English (51) (remove)
Keywords
- Robotics (8)
- Flying objects (7)
- UAV (7)
- Rescue robotics (5)
- Augmented reality <computer science> (3)
- Augmented Reality (2)
- Human-Robot Interaction (2)
- Twitter <software platform> (2)
- 360° Panorama (1)
- Alternative business models (1)
- Artificial Intelligence (1)
- Assisted living technologies (1)
- Assistive robotics (1)
- Autonomous Agents (1)
- Brand theory (1)
- Chief Executive Officer (1)
- Code generation (1)
- Communication management (1)
- Continuous Queries (1)
- Crowdfunding (1)
- Data Journalism (1)
- Datalog (1)
- Data journalism (1)
- Deductive Databases (1)
- Enterprise JavaBeans (1)
- Greek debt crisis (1)
- Hands-free Interaction (1)
- Human-centered computing (1)
- Incremental Evaluation (1)
- Journalism (1)
- Kalman filter (1)
- Machine Learning (1)
- Media Brands (1)
- Media brand characteristics (1)
- Media positioning (1)
- Mixed Reality (1)
- Multi-Agent System (1)
- NeRF (1)
- New Work, Information and Communication Industry, Innovation, Organizational Goals, Survey (1)
- Normalization (1)
- Object Recognition (1)
- Object-relational Mapping (1)
- Localization (1)
- People with disabilities (1)
- Persistence <computer science> (1)
- Political reporting (1)
- Rescue Robotics (1)
- Robot assistive drinking (1)
- Robot assistive eating (1)
- Small UAVs (1)
- Smart Grid (1)
- Social Media (1)
- Tetraplegia (1)
- Twitter (1)
- Update Propagation (1)
- Visual Monocular SLAM (1)
- State machine (1)
- assistive robotics (1)
- augmented reality (1)
- balance (1)
- cobot (1)
- composition (1)
- design process (1)
- ethics (1)
- expert interviews (1)
- gender stereotypes (1)
- gender-sensitive design (1)
- gender-specific design (1)
- human robot interaction (1)
- human-centered design (1)
- human-robot collaboration (1)
- hybrid sensor system (1)
- international comparative study (1)
- media accountability (1)
- neutrality (1)
- normalisation (1)
- participatory design (1)
- political journalism (1)
- projection (1)
- quality standards (1)
- relevance (1)
- risk management (1)
- role identity (1)
- sensor fusion (1)
- shared user control (1)
- state machine (1)
- television news coverage (1)
- user acceptance (1)
- virtual reality (1)
- visual cues (1)
- visualization techniques (1)
- watchblogs (1)
Institute
- Informatik und Kommunikation (51) (remove)
Recommendations for the Development of a Robotic Drinking and Eating Aid - An Ethnographic Study
(2021)
Being able to live independently and self-determined in one’s own home is a crucial factor for human dignity and the preservation of self-worth. For people with severe physical impairments who cannot use their limbs for everyday tasks, living in their own home is only possible with assistance from others. The inability to move arms and hands makes it hard to take care of oneself, e.g. to drink and eat independently. In this paper, we investigate how 15 participants with disabilities consume food and drinks. We report on interviews and participatory observations, and analyze the aids they currently use. Based on our findings, we derive a set of recommendations that supports researchers and practitioners in designing future robotic drinking and eating aids for people with disabilities.
Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they “see” the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot’s surroundings have been identified by its sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants with physical impairments. In a second remote experiment, we validated these findings with a broader non-specific user base. Our findings show that Line, a lower-complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as a more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception, and Line presents an easy-to-understand alternative.
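The geometry behind such an on-border cue is simple to illustrate. The sketch below is not the study's implementation; the function name, the 2D coordinates, and the center-anchored reading of Line are assumptions. It clips a ray from the center of a rectangular projection area toward an off-screen object against the area border, yielding the point where a line cue would end:

```python
def line_cue_endpoint(target, width, height):
    """Clip a ray from the projection-area center toward an off-screen
    target against the border of the area [0, width] x [0, height].

    A minimal sketch: `target` is the detected object's position in the
    same 2D coordinates as the projection area.
    """
    cx, cy = width / 2.0, height / 2.0
    dx, dy = target[0] - cx, target[1] - cy
    # Ray parameters t where center + t*(dx, dy) crosses a border line.
    crossings = []
    if dx != 0:
        crossings += [(0 - cx) / dx, (width - cx) / dx]
    if dy != 0:
        crossings += [(0 - cy) / dy, (height - cy) / dy]
    # The nearest crossing in the target's direction is the cue endpoint.
    t = min(c for c in crossings if c > 0)
    return cx + t * dx, cy + t * dy
```

For a 100 x 100 projection area and an object at (200, 50) to the right of it, the cue ends at (100, 50) on the right border, pointing toward the object.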
The two churches San Francesco and Sant'Agostino in Amatrice, Italy, were hit by an earthquake on 24 August 2016. Both churches are in a state of partial collapse and need shoring to prevent further destruction and to preserve the national heritage. The video shows the mission on 1 September 2016 in clips of 10 seconds.
The TRADR project was asked by the Italian fire brigade Vigili del Fuoco to provide textured 3D models of the two churches.
The team entered San Francesco with two UGVs (ground robots) and one UAV (drone, flown by Prof. Surmann), teleoperating them entirely out of line of sight and partially in collaboration. We entered Sant'Agostino with one UAV (also flown by Prof. Surmann), while two other UAVs provided views from different angles to facilitate maneuvering entirely out of line of sight.
Venice 2018: Tradr Review
(2018)
The video shows an orthophoto and a textured 3D model of the location. 300 images were recorded in two short flights with a Mavic Pro at a height of 50 meters. The first flight was a single grid with the camera facing straight down; the second was a double grid with the camera at a 60 degree angle. The 3D model was computed with OpenDroneMap.
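For readers who want to reproduce such a reconstruction, OpenDroneMap is typically run on a folder of drone images via its Docker image. The paths below are placeholders, and the available flags depend on the ODM version:

```shell
# Run OpenDroneMap on a folder of drone images (paths are placeholders).
# The project folder is expected to contain an "images" subdirectory.
docker run -ti --rm \
  -v /path/to/project:/datasets/code \
  opendronemap/odm --project-path /datasets
```

ODM writes its results, including the orthophoto and the textured mesh, into subfolders of the project directory.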
360° Camera on a small UAV
(2021)
Challenging visual localization of a UAV while flying out of a room into a snowy environment (~ 4:50). The UAV is equipped with a 360° camera. The localization is done with OpenVSLAM.
The video was recorded in January 2019 at the Fire Brigade training center in Dortmund.
To achieve near real-time conditions, the original 5k resolution (30 fps) was reduced to 2k (ffmpeg -i video.mp4 -vf scale=1920:-1 -crf 25 video-small.mp4) with high compression (-crf 25). This reduces the original size from 3.2 GB to 93 MB (~ 4 Mbit/s, which could be transmitted online via a radio link). The localization shown did not use frameskip. With a frameskip above 1, the localization fails while the UAV is flying through the window. Indoor localization can be done with a frameskip of 3 in real time.
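The ~4 Mbit/s figure is simply the compressed file size spread over the clip's duration. A quick sanity check (the roughly three-minute duration is an assumption inferred from the stated numbers, not metadata from the actual video):

```python
def avg_bitrate_mbit_s(size_bytes: float, duration_s: float) -> float:
    """Average bitrate in Mbit/s of a file of size_bytes over duration_s."""
    return size_bytes * 8 / duration_s / 1e6

# 93 MB over an assumed ~186 s clip gives the ~4 Mbit/s quoted above,
# a rate within reach of a typical digital radio link.
print(avg_bitrate_mbit_s(93e6, 186))  # → 4.0
```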
This paper presents a novel approach to building consistent 3D maps for multi-robot cooperation in USAR environments. The sensor streams from unmanned aerial vehicles (UAVs) and ground robots (UGVs) are fused into one consistent map. The UAV camera data are used to generate 3D point clouds, which are fused with the 3D point clouds generated by a rolling 2D laser scanner on the UGV. The registration method is based on the matching of corresponding planar segments extracted from the point clouds. Based on the registration, an approach for globally optimized localization is presented. Notably, no information beyond the structure of the point clouds is required for the localization. Two examples show the performance of the overall registration.
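The plane-based registration the abstract describes can be sketched in a few lines. Assuming each matched planar segment is given as a unit normal n and offset d with n·x = d, the rotation follows from aligning the matched normals (a Kabsch/SVD step) and the translation from the offset differences. This is a minimal reconstruction under those assumptions, not the authors' implementation:

```python
import numpy as np

def register_planes(n_src, d_src, n_dst, d_dst):
    """Rigid transform (R, t) mapping source points x to R @ x + t,
    estimated from >= 3 matched planes n . x = d in general position.

    n_src, n_dst: (k, 3) arrays of unit plane normals (rows correspond).
    d_src, d_dst: (k,) arrays of plane offsets.
    """
    # Rotation: Kabsch/SVD alignment of the matched plane normals.
    H = n_src.T @ n_dst
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
    # Translation: a point x on a source plane maps to R x + t, which must
    # satisfy n_dst . (R x + t) = d_dst; since n_dst . (R x) = d_src after
    # the rotation, this reduces to n_dst . t = d_dst - d_src.
    t, *_ = np.linalg.lstsq(n_dst, d_dst - d_src, rcond=None)
    return R, t
```

With four synthetic planes transformed by a known rotation and translation, the function recovers that transform exactly, which is a convenient way to verify the derivation.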