Data processing; Computer science
Filter
Document type
- Video (16)
- Conference publication (8)
- Scientific article (4)
- Other (1)
- Working paper (1)
Keywords
- E-Learning (5)
- Virtual university (5)
- Internet communication (3)
- Computer science (2)
- Internet (2)
- Learning space (2)
- Media informatics (2)
- Studies (2)
- 360° panorama (1)
- Accreditation (1)
- Image processing (1)
- Code generation (1)
- Competency-Oriented Exams (1)
- Continuous Assessment (1)
- Digitalization (1)
- Enterprise JavaBeans (1)
- Flipped Classroom (1)
- Formative Assessment (1)
- Interactive Voting Systems (1)
- SMEs (1)
- Mouse (1)
- Microphotography (1)
- Modeling (1)
- Sustainability reporting (1)
- NeRF (1)
- Object-relational Mapping (1)
- Online studies (1)
- Peer Assessment (1)
- Peer Instruction (1)
- Persistence <computer science> (1)
- Rescue Robotics (1)
- Small UAVs (1)
- Social Learning (1)
- Virtual 3D world (1)
- Virtual reality (1)
- Virtual learning space (1)
- Visual Monocular SLAM (1)
Institute
This article reports on development trends and first experiences with virtual learning spaces on the Internet. The author covers communication on the Internet, learning spaces on the WWW, and the distribution of roles within the learning space, together with the requirements from the perspective of learners, teachers, and administration.
This article examines the Virtual Cooperative University (organization, support, manpower), the learning space (communication on the Internet, learning spaces on the WWW, and the distribution of roles within the learning space), the federal flagship project Virtuelle Fachhochschule (student life, project data), studying online (media informatics, virtual learning module: navigation), and didactics.
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), typically during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset through an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to augment the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
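The abstract does not spell out how the classic image processing half of the assisted-labeling pipeline works; as a hedged illustration only, it could resemble a color heuristic followed by bounding-box extraction. The thresholds and the helpers `is_fire_like` and `bounding_box` below are hypothetical, not the paper's actual values:

```python
from typing import List, Optional, Tuple

# Illustrative sketch of the "classic image processing" half of an
# assisted-labeling pipeline: flag fire-like pixels with a simple RGB
# heuristic and emit a bounding box around them. The thresholds and the
# label format are assumptions made for this example.

Pixel = Tuple[int, int, int]  # (R, G, B)

def is_fire_like(p: Pixel) -> bool:
    """Crude color rule: strong red channel that dominates green and blue."""
    r, g, b = p
    return r > 180 and r > g > b

def bounding_box(image: List[List[Pixel]]) -> Optional[Tuple[int, int, int, int]]:
    """Return (x_min, y_min, x_max, y_max) around fire-like pixels, or None."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, p in enumerate(row):
            if is_fire_like(p):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

# Toy 3x3 image with one bright-red "fire" pixel in the center.
img = [[(0, 0, 0)] * 3 for _ in range(3)]
img[1] = [(0, 0, 0), (220, 120, 40), (0, 0, 0)]
print(bounding_box(img))  # (1, 1, 1, 1)
```

In a real pipeline such heuristic boxes would only be proposals, to be refined or filtered by the pretrained networks the abstract mentions.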
This paper reviews various approaches undertaken over more than two decades of teaching undergraduate programming classes at different higher education institutions to improve student activation and participation in class and, consequently, teaching and learning effectiveness.
While new technologies and the ubiquity of smartphones and internet access have brought new tools into the classroom and opened up new didactic approaches, the lessons learned from this personal long-term study show that neither technology itself nor any single new and often hyped didactic approach ensured a sustained improvement of student activation. Rather, it requires an integrated yet open approach towards a participative learning space that is supported, but not created, by new tools, technology, and innovative teaching methods.
The technology of online study
(2002)
Goals and concepts of the GI recommendations, addressees of the recommendation, degree programmes and degree titles, educational goals and curricular requirements, basic structure and categories, contents, organizational requirements, quality of teaching, equipment of teaching and study operations, accreditation.
In the realm of digital situational awareness during disaster situations, accurate digital representations, such as 3D models, play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that combines the capabilities of compact unmanned aerial vehicles (UAVs), smaller than 30 cm and equipped with 360° cameras, with the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of a scene from 2D images and then synthesize views of it from arbitrary angles upon request. The method is especially tailored to urban environments that have experienced significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
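The way a NeRF composites densities along a camera ray into a pixel can be made concrete with the standard volume rendering weights from the NeRF literature (a textbook illustration, not code from this paper): each sample contributes w_i = T_i · (1 − exp(−σ_i δ_i)), where T_i is the transmittance accumulated before sample i.

```python
import math
from typing import List

# Textbook NeRF compositing weights: given densities sigma_i sampled
# along a ray at spacings delta_i, compute
#   w_i = T_i * (1 - exp(-sigma_i * delta_i))
# where the transmittance T_i is the fraction of light that survives
# all earlier segments. The pixel color is the w-weighted sum of the
# per-sample colors predicted by the network.

def render_weights(sigmas: List[float], deltas: List[float]) -> List[float]:
    weights = []
    transmittance = 1.0  # light surviving up to the current sample
    for sigma, delta in zip(sigmas, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # absorption in this segment
        weights.append(transmittance * alpha)
        transmittance *= 1.0 - alpha
    return weights

# A dense sample in the middle of the ray captures most of the weight:
w = render_weights([0.0, 5.0, 0.1], [0.5, 0.5, 0.5])
print([round(x, 3) for x in w])
```

The densities and spacings above are invented; in practice they come from the trained network and the ray sampler.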
The video showcases a 3D model of a chemical company following a tank explosion that occurred on August 17, 2023, in Kempen, computed with the AI algorithm Neural Radiance Field (NeRF). The images, captured by a compact mini drone measuring 18 cm x 18 cm and equipped with a 360° camera, offer an intricate perspective of the aftermath. After a comprehensive aerial survey and inspection of the 360° images taken within the facility, authorities confirmed that it was safe for the evacuated residents to return to their homes. See also:
https://www1.wdr.de/fernsehen/aktuelle-stunde/alle-videos/video-grosser-chemieunfall-in-kempen-100.html
Nerf(acto) for the 3D modeling of the Computer Science building of Westfälische Hochschule GE
(2023)
The video shows a very high-resolution 3D point cloud of the computer science building of the University of Applied Sciences Gelsenkirchen. For the recording, a 3-minute flight with an M30T was performed. The 105 images taken by the wide-angle camera during this flight were localized within 3 minutes using COLMAP and processed using Neural Radiance Fields (NeRF). The nerfacto model of Nerfstudio was trained on an Nvidia RTX 4090 for 8 minutes. Thus, a high-quality 3D model is available after about 15 minutes.
The video shows a free camera path rendered at 60 Hz (Full HD).
The video shows a very high-resolution 3D point cloud of the outdoor area of the German Rescue Robotics Center. For the recording, a 25-second POI flight was performed with a Mavic 3. From the 4K video footage captured during this flight, 77 images were cropped and localized within 4 minutes using COLMAP and processed using Neural Radiance Fields (NeRF). The nerfacto model of Nerfstudio was trained on an Nvidia RTX 4090 for 8 minutes. In summary, a high-quality 3D model is available to task forces after about 13 minutes. The calculation is performed locally on site by the RobLW of the DRZ. The video shows a free camera path rendered at 60 Hz (Full HD).
This essay aims to show how the modern tools emerging around the Internet can be used to create three-dimensional virtual worlds in which physical processes take place. After a general, more philosophical introduction, the most important elements of a language for modelling these worlds are briefly described. This short introduction to the language is sufficient to reproduce the fall of a ball as an example. The treatment of this topic may encourage readers to explore the possibilities of these new techniques for representing physical processes.
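The falling-ball example mentioned in the abstract can be sketched numerically, here in Python rather than in the VRML-style modelling language the article uses, as a simple Euler integration of free fall under gravity (the step size and helper name are illustrative):

```python
# Minimal sketch of the falling-ball simulation: semi-implicit Euler
# integration of free fall, ignoring air resistance.

G = 9.81  # gravitational acceleration in m/s^2

def drop(height: float, dt: float = 0.01) -> float:
    """Return the time in seconds until a ball dropped from `height` metres lands."""
    y, v, t = height, 0.0, 0.0
    while y > 0.0:
        v += G * dt  # velocity update
        y -= v * dt  # position update using the new velocity
        t += dt
    return t

# The numeric result should be close to the analytic sqrt(2h/g).
print(round(drop(5.0), 2))
```

In a virtual world, the same update loop would drive the ball's position per animation frame instead of running to completion.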
At the integration sprint of the E-DRZ consortium in March 2023, we improved the information captured by the human spotter (of the fire brigade) by extending it with a 360° drone. The UAV needs 3 minutes to capture the outdoor scenario and the hall from inside and outside. The hall is about 70 x 20 meters. Once the drone has landed, all information is available in 360° at 5.7K, as can be seen in the video. Furthermore, it is a perfect documentation of the deployment scenario. In the next video we will show how to spatially localize the 360° video and how to generate a 3D point cloud from it.
At the integration sprint of the E-DRZ consortium in March 2023, we improved the information captured by the human spotter (of the fire brigade) by extending it with a 360° drone, i.e. a DJI Avata with an Insta360 mounted on top of it. The UAV needs 3 minutes to capture the outdoor scenario and the hall from inside and outside. The hall is about 70 x 20 meters. Once the drone has landed, all information is available in 360° at 5.7K, as can be seen in the video. Furthermore, it is a perfect documentation of the deployment scenario. In the next video we will show how to spatially localize the 360° video and how to generate a 3D point cloud from it.
Gaussian Splatting: 3D Reconstruction of a Chemical Company After a Tank Explosion in Kempen 8/2023
(2023)
The video showcases a 3D model of a chemical company following a tank explosion that occurred on August 17, 2023, in Kempen, computed with the Gaussian splatting algorithm. The images, captured by a compact mini drone measuring 18 cm x 18 cm and equipped with a 360° camera, offer an intricate perspective of the aftermath. The computation needs 29 minutes and uses 2770 images (~350 equirectangular images). After a comprehensive aerial survey and inspection of the 360° images taken within the facility, authorities confirmed that it was safe for the evacuated residents to return to their homes. See also:
https://www1.wdr.de/fernsehen/aktuelle-stunde/alle-videos/video-grosser-chemieunfall-in-kempen-100.html
The article addresses the background of projects aiming to supplement teaching content with multimedia, to offer it electronically, or to make it available over the Internet. Access options to these forms of learning and the technical stages of presentation are explained: navigator, learning space, portal. Virtual learning spaces, communication on the Internet, the selection of a learning space, prospects for the future, and e-learning examples are discussed.
This video features a flight test conducted in our robotics lab, showcasing a custom-built thermal camera drone. We've enhanced a DJI Avata with a specialized thermal camera system. With its compact dimensions measuring 18 x 18 x 17 cm, this drone is designed to navigate and provide critical thermal information within post-fire or post-explosion environments. For more insights, be sure to check out our previous videos on this channel.
Sustainability is becoming increasingly important, not least because from 2024 the obligation to produce a sustainability report is extended to many small and medium-sized enterprises (SMEs). So far this has mostly applied to large companies, which are generally well positioned, both structurally and in terms of software, to handle this task. SMEs are in a different situation: they usually lack the personnel and financial resources as well as suitable software tools. This contribution presents the results of a study by Westfälische Hochschule focusing on the sustainability reporting of SMEs. In addition, challenges from an information perspective are explained and possible support needs for SMEs are discussed. An overview of future starting points and a concluding discussion round off the article.
This report briefly describes the project, whose goal is to develop online degree programmes. It also outlines the particularities of running online degree programmes and the associated difficulties. An example shows how the didactic and multimedia implementation of the individual learning modules was realized. A detailed treatment is available on the Internet: http://194.94.127.15/Lehre/infophysik/IP-WBT-Demo/infophysik.html
The dataset is used for 3D environment modeling, i.e. for the generation of dense 3D point clouds and 3D models with the PatchMatch algorithm and neural networks. Reflections from rain, water, and snow, as well as windows and vehicle surfaces, are difficult for the modeling algorithm. In addition, the lighting conditions are constantly changing.
This paper presents a pragmatic approach for the stepwise introduction of peer assessment elements in undergraduate programming classes and discusses some lessons learned so far and directions for further work. Students are invited to challenge their peers with their own programming exercises, which are submitted through Moodle and evaluated by other students according to a predefined rubric, supervised by teaching assistants. Preliminary results show increased activation and motivation of students, leading to better performance in the final programming exams.
An EJB container can host three types of beans: session beans to model business processes, entity beans to represent business objects, and message-driven beans to provide for asynchronous method calls. This paper addresses entity beans and their mapping to persistent storage, especially relational and object-relational databases. A tool named BeanMaker is presented which can perform object mapping either automatically, by metadata analysis of a database schema, or manually, based on intrinsic real-world semantics supplied by the user. BeanMaker is a running prototype system with an intuitive GUI. This paper looks at what's behind the scenes and focuses on design issues and concepts of code generation.
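Metadata-driven code generation of the kind the abstract describes can be sketched as follows. This is a hedged illustration in the spirit of BeanMaker, not its actual implementation: the table description, the `SQL_TO_JAVA` type map, and the emitted class shape are all invented for the example.

```python
# Illustrative sketch of metadata-driven code generation: read a table
# description and emit a bean-style Java class with one property per
# column. A real tool would obtain this metadata from the database
# schema itself (e.g. via JDBC metadata) rather than from a dict.

SQL_TO_JAVA = {"INTEGER": "int", "VARCHAR": "String", "DATE": "java.util.Date"}

def generate_entity(table: str, columns: dict) -> str:
    """Generate Java source for a simple entity class from column metadata."""
    lines = [f"public class {table.capitalize()} {{"]
    for name, sql_type in columns.items():
        jtype = SQL_TO_JAVA.get(sql_type, "Object")
        prop = name.capitalize()
        lines.append(f"    private {jtype} {name};")
        lines.append(f"    public {jtype} get{prop}() {{ return {name}; }}")
        lines.append(f"    public void set{prop}({jtype} v) {{ {name} = v; }}")
    lines.append("}")
    return "\n".join(lines)

src = generate_entity("customer", {"id": "INTEGER", "name": "VARCHAR"})
print(src)
```

The interesting design questions the paper raises start exactly where this sketch stops: mapping relationships, inheritance, and object-relational database features rather than flat columns.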
ARGUS is a tool for the systematic acquisition, documentation, and evaluation of drone flights in rescue operations. In addition to the very fast generation of RGB and IR orthophotos, a trained AI can automatically detect fire, people, and cars in the images captured by the drones. The video gives a short introduction to the Aerial Rescue and Geospatial Utility System (ARGUS).
Check out our GitHub repository at
https://github.com/RoblabWh/argus/
You can find the dataset on Kaggle at
https://www.kaggle.com/datasets/julienmeine/rescue-object-detection
From the 360° images of the previous video (
• German rescue robotic center captured... ) we now generate the 3D point cloud. The UAV needs 3 minutes to capture the outdoor scenario and the hall from inside and outside. The 3D point cloud generation is 5x slower than the video. A VSLAM algorithm localizes the keyframes (green), and from 3 keyframes a 360° PatchMatch algorithm, implemented on an NVIDIA graphics card (CUDA), calculates the dense point clouds. The hall is about 70 x 20 meters.
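Before a 360° VSLAM or PatchMatch stage can triangulate depth, each pixel of an equirectangular frame has to be mapped to a viewing ray on the unit sphere. The helper below is an assumption about one standard way to do this (longitude spans the image width, latitude the height), not code from the project:

```python
import math
from typing import Tuple

# Map a pixel of an equirectangular 360° image to its unit viewing
# direction. This is the geometric building block that 360° matching
# algorithms need before computing depth.

def pixel_to_ray(u: int, v: int, width: int, height: int) -> Tuple[float, float, float]:
    """Unit viewing direction for pixel (u, v) of an equirectangular image."""
    lon = (u + 0.5) / width * 2.0 * math.pi - math.pi   # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (v + 0.5) / height * math.pi  # latitude in [pi/2, -pi/2]
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

With rays in hand, a PatchMatch-style matcher can compare image patches between localized keyframes along these directions to recover a dense point cloud.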
360° and IR Camera Drone Flight Test: Superimposition of two data sources for Post-Fire Inspection
(2023)
This video highlights a recent flight test carried out in our robotics lab, showcasing our custom-built thermal and 360° camera drone. We've upgraded a DJI Avata with a bespoke thermal and 360° camera system. Compact yet powerful, measuring just 18 x 18 x 17 cm, this drone is engineered to navigate post-fire or post-explosion environments and to deliver crucial thermal and 360° insights concurrently.
The integration of a specialized thermal and 360° camera system enables the simultaneous capture of both data sources during a single flight. This groundbreaking approach not only reduces inspection time by half but also facilitates the seamless superimposition of thermal and 360° videos for comprehensive analysis and interpretation.
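Once the two video streams are registered, the superimposition itself can be as simple as an alpha blend of aligned frames. The sketch below is an assumption about one plausible implementation; real superimposition also requires calibration and registration between the thermal and 360° cameras, which is omitted here:

```python
from typing import Tuple

# Illustrative per-pixel alpha blend: overlay an aligned thermal frame
# onto an RGB frame. `alpha` controls the opacity of the thermal layer.

Pixel = Tuple[int, int, int]

def blend(rgb: Pixel, thermal: Pixel, alpha: float = 0.4) -> Pixel:
    """Weighted overlay of a thermal pixel onto an RGB pixel."""
    return tuple(round((1 - alpha) * c + alpha * t) for c, t in zip(rgb, thermal))

print(blend((100, 100, 100), (255, 0, 0)))
```

In practice the blend would run per frame over both videos, typically after colorizing the thermal channel with a palette.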