In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), typically during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset through the implementation of an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to augment the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
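A data augmentation pipeline for object detection must transform the bounding-box labels consistently with the images, so that augmented samples remain correctly labeled. The sketch below illustrates this for a horizontal flip; the function names and the (x_min, y_min, x_max, y_max) pixel box format are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of a label-preserving augmentation step (assumed design,
# for illustration only): mirroring an image must also mirror its boxes.

def hflip_image(image):
    """Mirror an image given as a list of pixel rows."""
    return [row[::-1] for row in image]

def hflip_boxes(boxes, image_width):
    """Flip (x_min, y_min, x_max, y_max) boxes to match the mirrored image."""
    return [(image_width - x_max, y_min, image_width - x_min, y_max)
            for x_min, y_min, x_max, y_max in boxes]

# Example: a 4-pixel-wide image with one object box on the left edge.
image = [[1, 0, 0, 0],
         [1, 0, 0, 0]]
boxes = [(0, 0, 1, 2)]

aug_image = hflip_image(image)
aug_boxes = hflip_boxes(boxes, image_width=4)
# The box moves to the right edge: (3, 0, 4, 2)
```

Applying such paired transforms to images that were labeled by the assisted pipeline multiplies the training data without any additional manual annotation.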
The disruptive nature of the changing media landscape and technology-driven advances in communication have led to innovative ways of organizing work in the information and communication industry. This reorganization of work is reflected in the concept of New Work, which rethinks working concepts, styles, and employee behavior. Based on a survey among staff in the information and communication industry (n = 380), this study investigates the status quo of the implementation of New Work measures and their effectiveness in helping companies reach organizational goals. The results show that New Work measures are widely adopted although there is still unused potential. Moreover, the study demonstrates that the implementation of New Work measures supports companies in achieving New Work goals as well as overall organizational goals in the contexts of agile management, change management, internal communication, and evaluation.
This paper presents various approaches undertaken over more than two than two decades of teaching undergraduate programming classes at different higher education institutions to improve student activation and participation in class and, consequently, teaching and learning effectiveness.
While new technologies and the ubiquity of smartphones and internet access have brought new tools to the classroom and opened up new didactic approaches, the lessons learned from this personal long-term study show that neither technology itself nor any single new, often hyped didactic approach ensured a sustained improvement in student activation. Rather, it requires an integrated yet open approach towards a participative learning space, supported but not created by new tools, technology, and innovative teaching methods.
The German supply chain law (Lieferkettensorgfaltspflichtengesetz, abbreviated: LkSG), which enters into force on 1 January 2023, is part of the developing legal framework for human rights in global supply chains. Like the French vigilance law, it represents a new generation of supply chain laws which impose mandatory human rights due diligence obligations. The LkSG requires enterprises to exercise a number of due diligence obligations, from conducting risk analysis to undertaking preventive measures or remedial actions. The law is based on public enforcement via a competent authority, the Federal Office for Economic Affairs and Export Control (BAFA). The BAFA monitors and enforces compliance with the due diligence obligations. Non-compliant enterprises can be fined up to 800,000 euros and, in some cases, up to 2% of annual turnover. Whilst the LkSG is an important step towards achieving greater corporate sustainability, it also has limitations. It was a political compromise and, as such, it does not include a new civil liability for non-compliance. Moreover, by default, it only applies to the enterprise’s own business area and its direct suppliers, whereas indirect suppliers are only included where the enterprise has substantiated knowledge that an obligation has been violated.
In the realm of digital situational awareness during disaster situations, accurate digital representations, such as 3D models, play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller than 30 cm, equipped with 360° cameras, with the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of a scene from 2D images and then synthesize it from arbitrary viewpoints on request. This method is especially tailored to urban environments that have experienced significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
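A key ingredient that lets NeRF-style networks recover fine detail from 2D images is the frequency-based positional encoding applied to input coordinates before they enter the MLP. A minimal sketch of that encoding is shown below; the number of frequency bands (`num_freqs`) is a hyperparameter, and the value used here is an illustrative choice, not one taken from this paper.

```python
import math

def positional_encoding(x, num_freqs=4):
    """NeRF-style frequency encoding of a scalar coordinate:
    gamma(x) = (sin(2^0 * pi * x), cos(2^0 * pi * x), ...,
                sin(2^(L-1) * pi * x), cos(2^(L-1) * pi * x)).
    Each coordinate expands into 2 * num_freqs features, letting the
    network represent high-frequency scene detail."""
    features = []
    for k in range(num_freqs):
        freq = (2.0 ** k) * math.pi
        features.append(math.sin(freq * x))
        features.append(math.cos(freq * x))
    return features

enc = positional_encoding(0.5, num_freqs=4)
assert len(enc) == 8  # 2 features per frequency band
```

In a full NeRF, this encoding is applied per coordinate of the 5D input (3D position plus 2D viewing direction) before the MLP predicts color and density.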
Recent years have seen a sharp increase in the development of deep learning and artificial intelligence-based molecular informatics. There has been a growing interest in applying deep learning to several subfields, including the digital transformation of synthetic chemistry, extraction of chemical information from the scientific literature, and AI in natural product-based drug discovery. The application of AI to molecular informatics is still constrained by the fact that most of the data used for training and testing deep learning models are not available as FAIR and open data. As open science practices continue to grow in popularity, initiatives which support FAIR and open data as well as open-source software have emerged. It is becoming increasingly important for researchers in the field of molecular informatics to embrace open science and to submit data and software in open repositories. With the advent of open-source deep learning frameworks and cloud computing platforms, academic researchers are now able to deploy and test their own deep learning models with ease. With the development of new and faster hardware for deep learning and the increasing number of initiatives towards digital research data management infrastructures, as well as a culture promoting open data, open source, and open science, AI-driven molecular informatics will continue to grow. This review examines the current state of open data and open algorithms in molecular informatics, as well as ways in which they could be improved in future.
Measurement studies are essential for research and industry alike to better understand the Web’s inner workings and to help quantify specific phenomena. Performing such studies is demanding due to the dynamic nature and size of the Web. An experiment’s careful design and setup are complex, and many factors might affect the results. However, while several works have independently observed differences in the outcome of an experiment (e.g., the number of observed trackers) depending on the measurement setup, it is unclear what causes such deviations. This work investigates the reasons for these differences by visiting 1.7M webpages with five different measurement setups. Based on this, we build ‘dependency trees’ for each page and cross-compare the nodes in the trees. The results show that the measured trees differ considerably, that the cause of differences can be attributed to specific nodes, and that even identical measurement setups can produce different results.
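Cross-comparing dependency trees from different measurement setups amounts to comparing which nodes (requested resources) each setup observed for the same page. The sketch below shows one simple way to do this, representing a tree as (parent, child) request edges and scoring overlap with Jaccard similarity; this representation and metric are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch: comparing the node sets of two dependency trees
# recorded for the same page under different measurement setups.

def node_set(edges):
    """All nodes (requested resources) occurring in a dependency tree,
    given as (parent, child) request edges."""
    nodes = set()
    for parent, child in edges:
        nodes.add(parent)
        nodes.add(child)
    return nodes

def jaccard(a, b):
    """Overlap of two node sets; 1.0 means both setups saw the same nodes."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two hypothetical crawls of the same page with different setups.
setup_a = [("page", "cdn.example/lib.js"),
           ("cdn.example/lib.js", "tracker.example/t.gif")]
setup_b = [("page", "cdn.example/lib.js")]  # the tracker was not observed

overlap = jaccard(node_set(setup_a), node_set(setup_b))
# Only the tracker node differs, so overlap is 2/3.
```

Attributing a low score to the specific missing nodes (here, the tracker) is what lets such a comparison explain *why* two setups disagree, not just *that* they do.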