An automated pipeline for the comprehensive calculation of intermolecular interaction energies based on molecular force fields using the Tinker molecular modelling package is presented. Starting with non-optimized, chemically intuitive monomer structures, the pipeline allows the approximation of global-minimum-energy monomers and dimers, configuration sampling for various monomer-monomer distances, estimation of coordination numbers by molecular dynamics simulations, and the evaluation of differential pair interaction energies. The latter are used to derive Flory-Huggins parameters and isotropic particle-particle repulsions for Dissipative Particle Dynamics (DPD). The computational results for the force fields MM3, MMFF94, OPLSAA and AMOEBA09 are analyzed with the aid of Density Functional Theory (DFT) calculations and DPD simulations for a mixture of the non-ionic polyoxyethylene alkyl ether surfactant C10E4 with water, demonstrating the usefulness of the approach.
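The mapping from a Flory-Huggins parameter to an isotropic DPD repulsion can be sketched with the well-known Groot-Warren relation (a_ij = a_ii + 3.27 χ at reduced bead density ρ = 3). The helper below is illustrative only and not part of the Tinker-based pipeline itself:

```python
# Sketch: mapping a Flory-Huggins parameter chi to a DPD repulsion
# parameter via the Groot-Warren relation a_ij = a_ii + 3.27 * chi,
# valid for the standard reduced bead density rho = 3.
# Hypothetical helper for illustration, not part of Tinker.

def dpd_repulsion(chi, a_ii=25.0):
    """Isotropic particle-particle repulsion a_ij for a given chi."""
    return a_ii + 3.27 * chi

# For chi = 0 the cross-interaction equals the like-like repulsion.
print(dpd_repulsion(0.0))   # 25.0
print(dpd_repulsion(2.0))   # 31.54
```

A positive χ (unfavourable mixing) thus raises the cross-species repulsion above the like-like baseline, which is what drives phase separation in the DPD simulation.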
Inspired by the super-human performance of deep learning models in playing the game of Go after being presented with virtually unlimited training data, we looked into areas in chemistry where similar situations could be achieved. Encountering large amounts of training data in chemistry is still rare, so we turned to two areas where realistic training data can be fabricated in large quantities, namely a) the recognition of machine-readable structures from images of chemical diagrams and b) the conversion of IUPAC(-like) names into structures and vice versa. In this talk, we outline the challenges, technical implementation and results of this study.
Optical Chemical Structure Recognition (OCSR): Vast amounts of chemical information remain hidden in the primary literature and have yet to be curated into open-access databases. To automate the process of extracting chemical structures from scientific papers, we developed the DECIMER.ai project. This open-source platform provides an integrated solution for identifying, segmenting, and recognising chemical structure depictions in the scientific literature. DECIMER.ai comprises three main components: DECIMER-Segmentation, which utilises a Mask-RCNN model to detect and segment images of chemical structure depictions; DECIMER-Image Classifier, an EfficientNet-based classification model that identifies which images contain chemical structures; and DECIMER-Image Transformer, an encoder-decoder OCSR engine that converts the segmented chemical structure images into machine-readable formats such as the SMILES string.
DECIMER.ai is data-driven, relying solely on the training data to make accurate predictions without hand-coded rules or assumptions. The latest model was trained with 127 million structures and 483 million depictions (four different depictions per structure) on Google TPU-V4 VMs.
Name-to-Structure Conversion: The conversion of structures into IUPAC(-like) or systematic names has been solved satisfactorily by algorithmic, rule-based approaches. This, in turn, provided us with the opportunity to generate name-structure training pairs at very large scale, train a proof-of-concept transformer network, and evaluate its performance.
In this work, the largest model was trained using almost one billion SMILES strings. The Lexichem software utility from OpenEye was employed to generate the IUPAC names used in the training process. STOUT V2 was trained on Google TPU-V4 VMs. The model's accuracy was validated through one-to-one string matching, BLEU scores, and Tanimoto similarity calculations. To further verify the model's reliability, every IUPAC name generated by STOUT V2 was analysed for accuracy and retranslated using OPSIN, a widely used open-source software for converting IUPAC names to SMILES. This additional validation step confirmed the high fidelity of STOUT V2's translations.
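One of the validation metrics mentioned above, Tanimoto similarity, can be sketched in a few lines. Real workflows would compute it over RDKit fingerprints; the version below only illustrates the metric itself, with fingerprints represented as sets of "on" bit positions:

```python
# Sketch: Tanimoto similarity between two binary fingerprints,
# represented here as sets of "on" bit positions. Illustrative only;
# production pipelines would use RDKit fingerprint objects.

def tanimoto(fp_a, fp_b):
    """|A intersect B| / |A union B|; 1.0 means identical fingerprints."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

a = {1, 5, 9, 12}
b = {1, 5, 9, 30}
print(tanimoto(a, b))  # 3 shared bits out of 5 total -> 0.6
```

A round-trip that returns the exact input structure yields a Tanimoto score of 1.0, which is why the metric complements one-to-one string matching in such an evaluation.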
The DECIMER.ai Project
(2024)
Over the past few decades, the number of publications describing chemical structures and their metadata has increased significantly. Chemists have published the majority of this information as bitmap images, along with other important information as human-readable text, in the printed literature; it has never been retained and preserved in publicly available databases in machine-readable formats. Manually extracting such data from the printed literature is error-prone, time-consuming, and tedious. The recognition and translation of images of chemical structures from the printed literature into a machine-readable format is known as Optical Chemical Structure Recognition (OCSR). In recent years, deep-learning-based OCSR tools have become increasingly popular. While many of these tools claim to be highly accurate, they are either unavailable to the public or proprietary. Meanwhile, the available open-source tools are significantly time-consuming to set up. Furthermore, none of them offers an end-to-end workflow capable of detecting chemical structures, segmenting them, classifying them, and translating them into machine-readable formats.
To address this issue, we present the DECIMER.ai project, an open-source platform that provides an integrated solution for identifying, segmenting, and recognizing chemical structure depictions within the scientific literature. DECIMER.ai comprises three main components: DECIMER-Segmentation, which utilizes a Mask-RCNN model to detect and segment images of chemical structure depictions; DECIMER-Image Classifier, an EfficientNet-based classification model that identifies which images contain chemical structures; and DECIMER-Image Transformer, an encoder-decoder OCSR engine that converts the segmented chemical structure images into machine-readable formats such as the SMILES string.
A key strength of DECIMER.ai is that its algorithms are data-driven, relying solely on the training data to make accurate predictions without any hand-coded rules or assumptions. By offering this comprehensive, open-source, and transparent pipeline, DECIMER.ai enables automated extraction and representation of chemical data from unstructured publications, facilitating applications in chemoinformatics and drug discovery.
In the realm of digital situational awareness during disaster situations, accurate digital representations, like 3D models, play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller than 30 cm, equipped with 360° cameras, and the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of any scene from 2D images and then synthesize views of it from arbitrary angles upon request. This method is especially tailored to urban environments that have experienced significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
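At the core of the NeRF view synthesis mentioned above is volume rendering along camera rays: each sample contributes a weight w_i = T_i (1 − exp(−σ_i δ_i)), where T_i is the accumulated transmittance. A minimal sketch, with illustrative densities rather than a trained network:

```python
import math

# Sketch: the volume-rendering weights a NeRF uses to composite colour
# samples along a camera ray: w_i = T_i * (1 - exp(-sigma_i * delta_i)),
# where T_i = exp(-sum_{j<i} sigma_j * delta_j) is the transmittance.
# sigmas would come from the trained network; here they are made up.

def render_weights(sigmas, deltas):
    weights, transmittance = [], 1.0
    for sigma, delta in zip(sigmas, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)   # opacity of this segment
        weights.append(transmittance * alpha)
        transmittance *= 1.0 - alpha             # light remaining behind it
    return weights

# Empty space (sigma = 0) contributes nothing; a dense sample at the
# end absorbs most of the remaining transmittance.
w = render_weights([0.0, 2.0, 50.0], [0.1, 0.1, 0.1])
print([round(x, 3) for x in w])
```

The weights are non-negative and sum to at most one, so they behave like an alpha-compositing scheme over the samples along the ray.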
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), typically during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset through the implementation of an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to augment the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
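One building block of such a data augmentation pipeline is transforming bounding-box labels together with the image. A minimal sketch, assuming a horizontal flip of an image of width W and axis-aligned boxes given as (x_min, y_min, x_max, y_max); the names are illustrative, not taken from the paper:

```python
# Sketch: keeping a bounding-box label consistent under a horizontal
# image flip, one simple augmentation in a labeled-data pipeline.
# Box format assumed: (x_min, y_min, x_max, y_max) in pixels.

def hflip_box(box, width):
    """Mirror a box across the vertical centre line of an image."""
    x_min, y_min, x_max, y_max = box
    # x-coordinates swap roles: the old right edge becomes the new left.
    return (width - x_max, y_min, width - x_min, y_max)

# A 10-px-wide box at the left edge ends up at the right edge.
print(hflip_box((0, 5, 10, 15), width=100))  # (90, 5, 100, 15)
```

The same idea generalizes to rotations and crops; the key invariant is that every geometric transform applied to the image is applied to its labels as well.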
Measurement studies are essential for research and industry alike to understand the Web’s inner workings better and help quantify specific phenomena. Performing such studies is demanding due to the dynamic nature and size of the Web. An experiment’s careful design and setup are complex, and many factors might affect the results. However, while several works have independently observed differences in the outcome of an experiment (e.g., the number of observed trackers) based on the measurement setup, it is unclear what causes such deviations. This work investigates the reasons for these differences by visiting 1.7M webpages with five different measurement setups. Based on this, we build ‘dependency trees’ for each page and cross-compare the nodes in the trees. The results show that the measured trees differ considerably, that the cause of differences can be attributed to specific nodes, and that even identical measurement setups can produce different results.
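Cross-comparing the nodes of two dependency trees can be reduced to a set comparison over the resources each setup observed. A minimal sketch using Jaccard similarity, with plain URL strings standing in for the study's tree nodes:

```python
# Sketch: comparing the node sets of two 'dependency trees' recorded by
# different measurement setups via Jaccard similarity. In the study the
# nodes are loaded resources; here they are illustrative URL strings.

def jaccard(nodes_a, nodes_b):
    """|A intersect B| / |A union B|; 1.0 means identical node sets."""
    union = nodes_a | nodes_b
    return len(nodes_a & nodes_b) / len(union) if union else 1.0

setup_a = {"page.html", "app.js", "tracker.js"}
setup_b = {"page.html", "app.js", "ads.js"}
print(round(jaccard(setup_a, setup_b), 3))  # 2 shared of 4 total -> 0.5
```

A score below 1.0 for two visits with the same setup is exactly the kind of deviation the study attributes to specific nodes in the trees.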
This paper reveals various approaches undertaken over more than two decades of teaching undergraduate programming classes at different Higher Education Institutions, in order to improve student activation and participation in class and consequently teaching and learning effectiveness.
While new technologies and the ubiquity of smartphones and internet access have brought new tools to the classroom and opened up new didactic approaches, the lessons learned from this personal long-term study show that neither technology itself nor any single new, often hyped didactic approach ensured a sustained improvement in student activation. Rather, it takes an integrated yet open approach towards a participative learning space that is supported, but not created, by new tools, technology, and innovative teaching methods.
This paper presents a pragmatic approach for stepwise introduction of peer assessment elements in undergraduate programming classes, discusses some lessons learned so far and directions for further work. Students are invited to challenge their peers with their own programming exercises to be submitted through Moodle and evaluated by other students according to a predefined rubric and supervised by teaching assistants. Preliminary results show an increased activation and motivation of students leading to a better performance in the final programming exams.
In this work, a mathematical approach to calculating solar panel temperature from measured irradiance, temperature, and wind speed is applied. With the calculated module temperature, the electrical solar module characteristics are determined. A program developed in MATLAB App Designer allows importing measurement data from a weather station and calculates the module temperature based on the mathematical NOCT and stationary approaches with a time step of 5 minutes between measurements. Three commercially available solar panels with different cell and interconnection technologies are used to verify the established models. The results show a strong correlation between the measured module temperature and that predicted by the stationary model, with a coefficient of determination R2 close to 1 and a root mean square error (RMSE) of ≤ 2.5 K for a period of three months. Based on the predicted temperature, the measured irradiance in the module plane, and specific module information, the program models the electrical data as time series in 5-minute steps. Predicted versus measured power for a period of three months shows a linear correlation with an R2 of 0.99 and a mean absolute error (MAE) of 3.5, 2.7 and 4.8 for module IDs 1, 2 and 3. The energy calculated (exemplarily for module ID 2) from the measured data and by the NOCT and stationary models for this period is 118.4 kWh, 116.7 kWh and 117.8 kWh, respectively. This is equivalent to an uncertainty of 1.4% for the NOCT model and 0.5% for the stationary model.
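The NOCT approach referenced above rests on the standard approximation that the module heats above ambient in proportion to irradiance, calibrated at the nominal operating cell temperature conditions (800 W/m², 20 °C). A minimal sketch, with an illustrative NOCT value:

```python
# Sketch of the standard NOCT module-temperature approximation:
#   T_mod = T_amb + (NOCT - 20 degC) * G / 800 W/m^2
# The NOCT default below is illustrative; real values come from the
# module data sheet. The paper's stationary model adds wind effects,
# which this simple form ignores.

def module_temperature(t_ambient, irradiance, noct=45.0):
    """Module temperature in degC from ambient temperature (degC)
    and plane-of-module irradiance (W/m^2)."""
    return t_ambient + (noct - 20.0) * irradiance / 800.0

# At exactly 800 W/m^2 the module sits (NOCT - 20) K above ambient.
print(module_temperature(25.0, 800.0))  # 50.0
```

Because the formula neglects wind, it tends to overestimate module temperature on windy days, which is one motivation for the stationary model used in the paper.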
Advanced Determination of Temperature Coefficients of Photovoltaic Modules by Field Measurements
(2023)
In this work, data from outdoor measurements, acquired over the course of up to three years on commercially available solar panels, is used to determine the temperature coefficients and compare them to the information stated by the producer in the data sheets. A program developed in MATLAB App Designer allows importing the electrical and ambient measurement data. Filter algorithms for solar irradiance narrow the irradiance level down to ~1000 W/m2 before linear regression methods are applied to obtain the temperature coefficients. A repeatability investigation proves the accuracy of the determined temperature coefficients, which are in good agreement with the supplier specification if the specified values for power are not larger than -0.3%/K. Further optimization is achieved by applying wind filter techniques and selecting days with clear-sky conditions. With this large body of measurement data at hand, it was possible to determine the change of the temperature coefficients with varying irradiance. As stated in the literature, we see an increase of the temperature coefficient of voltage and a decline of the temperature coefficient of power with increasing irradiance.
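The regression step described above can be sketched as an ordinary least-squares fit of normalized power over module temperature on irradiance-filtered samples. The data points below are illustrative, not from the study:

```python
# Sketch: extracting a temperature coefficient by linear regression of
# normalized power (% of nominal) over module temperature, using
# samples already filtered to ~1000 W/m^2. Plain least squares;
# the sample data is made up for illustration.

def linear_fit(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

temps = [25.0, 35.0, 45.0, 55.0]    # module temperature in degC
power = [100.0, 96.0, 92.0, 88.0]   # power in % of nominal

slope, _ = linear_fit(temps, power)
print(slope)  # -0.4, i.e. a temperature coefficient of -0.4 %/K
```

Restricting the fit to a narrow irradiance band is what isolates the temperature dependence; without that filter, irradiance variation would dominate the scatter and bias the slope.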
The concept of “Internationalisation at Home“ has gained momentum with the increasing digitalization of education and limitations on mobility. Collaborative Online International Learning (COIL) is an innovative, cost-effective instructional method that promotes intercultural learning through online collaboration between faculty and students from different countries or locations. The benefits of COIL courses have been widely recognized, with learners developing intercultural competencies, digital skills, international education experience, and global awareness.
However, multicultural communication in project environments can be complex and demands awareness of cultural variations. The creation and development of effective cross-cultural collectivism, trust, communication, and empathy in leadership is an important ingredient for the success of remote project collaborations. This is an area that has been least explored in research on communication in virtual teams.
The GIPE projects are mainly carried out as so-called Collaborative Online International Learning (COIL) events. However, to gain “real-world“ experience abroad in an intercultural team, students from all partner universities can participate in the Spring School held for two weeks in Germany, and the German students present and hand over the results in the country of the partner university. The main objective of this research was to examine the experiences of students participating in the GIPE project and to evaluate the effectiveness of the project in enhancing intercultural competencies and fostering collaboration among students from different continents. This paper also explores the implications of the GIPE project for Education 2.0 in light of the COVID-19 pandemic and the future transformation of education delivery and administration.
Cookie notices (or cookie banners) are a popular mechanism for websites to provide (European) Internet users with a tool to choose which cookies the site may set. Banner implementations range from merely informing users that a site uses cookies, through offering the choice to accept or deny all cookies, to allowing fine-grained control of cookie usage. Users frequently get annoyed by the banners’ pervasiveness, as they interrupt “natural” browsing on the Web. As a remedy, different browser extensions have been developed to automate the interaction with cookie banners.
In this work, we perform a large-scale measurement study comparing the effectiveness of extensions for “cookie banner interaction.” We configured the extensions to express different privacy choices (e.g., accepting all cookies, accepting functional cookies, or rejecting all cookies) to understand their capabilities to execute a user’s preferences. The results show statistically significant differences in which cookies are set, how many of them are set, and which types are set—even for extensions that aim to implement the same cookie choice. Extensions for “cookie banner interaction” can effectively reduce the number of set cookies compared to no interaction with the banners. However, all extensions significantly increase the number of tracking requests, except when rejecting all cookies.
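Tallying which cookies are set requires classifying each one as first- or third-party relative to the visited site, typically by comparing registrable domains. A minimal sketch with naive eTLD handling; a real measurement would consult the Public Suffix List:

```python
# Sketch: classifying a cookie as first- or third-party by comparing
# the cookie's registrable domain with the visited site's. The naive
# "last two labels" rule below fails for suffixes like co.uk; real
# measurement code would use the Public Suffix List instead.

def registrable_domain(host):
    """Approximate eTLD+1: the last two dot-separated labels."""
    parts = host.lower().split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def is_third_party(cookie_domain, site_host):
    """True if the cookie's domain does not match the site's."""
    return (registrable_domain(cookie_domain.lstrip("."))
            != registrable_domain(site_host))

print(is_third_party(".example.com", "www.example.com"))  # False
print(is_third_party(".tracker.net", "www.example.com"))  # True
```

Aggregating this flag over all cookies observed per setup yields exactly the per-type counts such a study compares across extensions.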
Air handling units (AHU) are designed to guarantee a high indoor air quality at all times and outdoor conditions throughout the year. To do so, the AHU removes particulate matter like dust or pollen and adapts the thermophysical properties of air to the desired, seasonal indoor comfort conditions. AHU have a robust design and thus operate for more than fifteen years, sometimes even for decades. An AHU designed today must consider and anticipate the change of user needs as well as outdoor air conditions for the next twenty years. To anticipate the outdoor air conditions of coming decades, scientific models exist which allow the design of peak performance and capacities of the air treatment components. It is most likely that the ongoing climate change will lead to higher temperatures as well as higher humidity, while the comfort zone of human beings will remain at today’s values. In addition to the impact of global warming, with its average rise of mean air temperature, local effects will influence the operation of AHU. One effect investigated here is the steep temperature increase in city centres known as urban heat islands. Heating and cooling capacities as well as water consumption for humidification are investigated for a reference AHU for fifteen regional locations in Germany. These regions represent all climate zones within the country. Additionally, the urban heat island effect was investigated for Berlin Alexanderplatz compared to a nearby rural area. The AHU was chosen to operate in an intensive care unit of a hospital. This set-up leads to 24/7 operation with 8760 hours per year. The article presents the modelling of current and future weather data as well as the unit set-up. The calculated hourly performance and capacity parameters for current (reference year 2012) and future weather data (reference year 2045) yield energy consumption and peak loads of the unit for heating, cooling and humidification. The results are displayed as relative comparisons of each performance value.
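The heating and cooling capacities referenced above follow from the sensible-heat balance Q = V̇ · ρ · c_p · ΔT over the unit's volume flow and temperature lift. A minimal sketch with constant air properties and illustrative numbers, not the reference unit from the article:

```python
# Sketch: sensible peak heating capacity of an air handling unit,
#   Q = V_dot * rho * c_p * dT.
# Constant air properties are assumed; humidification (latent load)
# is not covered by this simple sensible-heat balance.

RHO_AIR = 1.2    # air density in kg/m^3 (approx., at ~20 degC)
CP_AIR = 1.005   # specific heat capacity of air in kJ/(kg*K)

def heating_capacity_kw(volume_flow_m3_h, delta_t_k):
    """Sensible heating capacity in kW for a given volume flow
    (m^3/h) and supply-air temperature lift (K)."""
    mass_flow = volume_flow_m3_h / 3600.0 * RHO_AIR   # kg/s
    return mass_flow * CP_AIR * delta_t_k             # kW

# 10,000 m^3/h heated by 30 K (e.g. -10 degC outdoor to +20 degC supply):
print(round(heating_capacity_kw(10000, 30), 1))  # 100.5
```

Running this balance for every hour of the 8760-hour year, with current versus future weather data, is what yields the annual energy consumption and peak-load comparison the article reports.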