Jdpd - An open Java Simulation Kernel for Molecular Fragment Dissipative Particle Dynamics (DPD)
Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics (DPD) with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated “all-in-one” simulation systems like MFsim.
Since version 1.6.1.0, Jdpd is available in a (basic) double-precision version and a (derived) single-precision version (JdpdSP) for all numerical calculations; the single-precision version requires about half the memory of the double-precision version.
Jdpd uses the Apache Commons Math and Apache Commons RNG libraries and is published as open source under the GNU General Public License version 3. This repository comprises the Java bytecode libraries (including the Apache Commons Math and RNG libraries), the Javadoc HTML documentation and the NetBeans source code packages including unit tests.
Jdpd has been described in the scientific literature (the final manuscript 2018 - van den Broek - Jdpd - Final Manuscript.pdf is included in the repository) and used for DPD studies (see references below).
See text file JdpdVersionHistory.txt for a version history with more detailed information.
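The core of any DPD kernel like Jdpd is the pairwise force calculation. As an illustrative sketch (this is not Jdpd's actual Java implementation), the soft-repulsive conservative DPD force between two particles can be written in a few lines of Python; the repulsion parameter `a_ij` and cutoff `r_c` below are conventional DPD defaults, not values taken from Jdpd:

```python
import math

def dpd_conservative_force(r_i, r_j, a_ij=25.0, r_c=1.0):
    """Soft-repulsive DPD conservative force on particle i exerted by particle j.

    F_C = a_ij * (1 - r/r_c) * r_hat  for r < r_c, and zero otherwise.
    """
    dx = [ri - rj for ri, rj in zip(r_i, r_j)]
    r = math.sqrt(sum(d * d for d in dx))
    if r >= r_c or r == 0.0:
        return [0.0, 0.0, 0.0]
    # Scale combines the force magnitude a_ij*(1 - r/r_c) with normalization by r.
    scale = a_ij * (1.0 - r / r_c) / r
    return [scale * d for d in dx]
```

In a real kernel this inner loop runs over all particle pairs within the cutoff, which is exactly the part Jdpd parallelizes.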
MFsim - An open Java all-in-one rich-client simulation environment for mesoscopic simulation
MFsim is an open Java all-in-one rich-client computing environment for mesoscopic simulation with Jdpd as its default simulation kernel for Molecular Fragment Dissipative Particle Dynamics (DPD). The environment integrates and supports the complete preparation-simulation-evaluation triad of a mesoscopic simulation task. Productive highlights are a SPICES molecular structure editor, a PDB-to-SPICES parser for particle-based peptide/protein representations, support for polymer definitions, a compartment editor for complex simulation box start configurations, and interactive and flexible simulation box views including analytics, simulation movie generation, and animated diagrams. As an open project, MFsim enables customized extensions for different fields of research.
MFsim uses several open libraries (see MFSimVersionHistory.txt for details and references below) and is published as open source under the GNU General Public License version 3 (see LICENSE).
MFsim has been described in the scientific literature and used for DPD studies.
A Robust Interface for Head Motion based Control of a Robot Arm using MARG and Visual Sensors
(2018)
Head-controlled human-machine interfaces have gained popularity over the past years, especially for restoring autonomy to severely disabled people, such as tetraplegics. These interfaces need to be reliable and robust against environmental conditions to guarantee the safety of the user and enable direct interaction between a human and a machine. This paper presents a hybrid MARG and visual sensor system for head orientation estimation, which is used here to teleoperate a robotic arm. The system contains a Magnetic Angular Rate Gravity (MARG) sensor and a Tobii eye tracker 4C. A MARG sensor consists of a tri-axis accelerometer, a gyroscope, and a magnetometer, which together enable a complete measurement of orientation relative to the direction of gravity and the magnetic field of the earth. The tri-axis magnetometer is sensitive to external magnetic fields, which results in incorrect orientation estimates from the sensor fusion process. In this work, the Tobii eye tracker 4C is used to improve head orientation estimation because it also features head tracking, even though it is commonly used for eye tracking. This type of visual sensor does not suffer from magnetic drift. However, it computes orientation data only if a user is detectable. We present a state machine that fuses the data of the MARG and the visual sensor to improve orientation estimation. The fusion of the orientation data of both sensors yields a robust interface that is immune to external magnetic fields and therefore increases the safety of the human-machine interaction.
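The complementary failure modes described above (magnetometer drift vs. loss of user detection) are what the state machine has to arbitrate. The following sketch illustrates the idea only; the state names and switching rules are hypothetical and not the paper's actual state machine:

```python
def select_orientation_source(marg_ok, user_detected, magnetic_disturbance):
    """Choose the orientation source for one fusion cycle (illustrative logic).

    The visual sensor (eye tracker) is immune to magnetic fields but only
    delivers data while a user is detected; the MARG sensor always delivers
    data but drifts under external magnetic disturbance.
    """
    if user_detected and magnetic_disturbance:
        return "visual"        # override the drifting magnetometer
    if not magnetic_disturbance and marg_ok:
        return "marg"          # full 9-axis sensor fusion is reliable
    if user_detected:
        return "visual"
    if marg_ok:
        return "marg_no_mag"   # fall back to accelerometer/gyroscope only
    return "hold_last"         # keep the last safe orientation estimate
```

A safety-critical implementation would additionally rate-limit transitions so the robot arm does not jump when the source switches.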
The set of transactions that occurs on the public ledger of an Ethereum network in a specific time frame can be represented as a directed graph, with vertices representing addresses and an edge indicating the interaction between two addresses.
While there exists preliminary research on analyzing Ethereum networks by means of graph analysis, most existing work focuses either on the public Ethereum Mainnet or on analyzing the different semantic transaction layers using static graph analysis in order to carve out the different network properties (such as interconnectivity, degrees of centrality, etc.) needed to characterize a blockchain network. By analyzing the consortium-run bloxberg Proof-of-Authority (PoA) Ethereum network, we show that we can identify suspicious and potentially malicious behaviour of network participants by employing statistical graph analysis. We thereby show that it is possible to identify the potentially malicious exploitation of an unmetered and weakly secured blockchain network resource. In addition, we show that Temporal Network Analysis is a promising technique to identify the occurrence of anomalies in a PoA Ethereum network.
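One simple statistical graph measure of the kind described above is flagging addresses whose out-degree is a strong outlier. A minimal sketch (the z-score rule and threshold are illustrative, not the paper's method):

```python
from collections import Counter

def degree_anomalies(transactions, z_threshold=3.0):
    """Flag sender addresses whose out-degree exceeds mean + z_threshold * std.

    transactions: iterable of (sender, receiver) address pairs, i.e. the
    directed edges of the transaction graph.
    """
    out_deg = Counter(sender for sender, _ in transactions)
    degs = list(out_deg.values())
    mean = sum(degs) / len(degs)
    var = sum((d - mean) ** 2 for d in degs) / len(degs)
    std = var ** 0.5
    return [a for a, d in out_deg.items()
            if std > 0 and (d - mean) / std > z_threshold]
```

A temporal analysis would repeat this per time window and look for sudden changes in the flagged set.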
With ongoing developments in the field of smart cities and digitalization in general, data is becoming a driving factor and value stream for new and existing economies alike. However, there exists an increasing centralization and monopolization of data holders and service providers, especially in the form of the big US-based technology companies in the western world and central technology providers with close ties to the government in the Asian regions. Self Sovereign Identity (SSI) provides the technical building blocks to create decentralized data-driven systems, which bring data autonomy back to the users. In this paper, we propose a system in which the combination of SSI and token-economy-based incentivisation strategies makes it possible to unlock the potential value of data pools without compromising the data autonomy of the users.
Proof of Existence as a blockchain service was first published in 2013 as a public notary service on the Bitcoin network and can be used to verify the existence of a particular file at a specific point in time without sharing the file or its content itself. This service is also available on the Ethereum-based bloxberg network, a decentralized research infrastructure that is governed, operated and developed by an international consortium of research facilities. Since it is desirable to integrate the creation of this proof tightly into the research workflow, namely the acquisition and processing of research data, we present an easily integrated solution based on a MATLAB extension, with the concept being applicable to other programming languages and environments as well.
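The essential step of such a proof is that only a cryptographic fingerprint of the file, never the file itself, is anchored on-chain. A minimal Python sketch of the fingerprinting step (the on-chain transaction itself is omitted; SHA-256 is a common choice, not necessarily the one used by the bloxberg service):

```python
import hashlib

def file_fingerprint(path):
    """SHA-256 digest of a file's content, read in chunks.

    Only this hex digest would be submitted to the blockchain; the file
    and its content stay private.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in fixed-size chunks so arbitrarily large files fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Verification later consists of recomputing the digest of the claimed file and comparing it to the value recorded on-chain.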
A Crypto-Token Based Charging Incentivization Scheme for Sustainable Light Electric Vehicle Sharing
(2021)
The ecological impact of shared light electric vehicles (LEVs) such as kick scooters is still widely discussed. Especially the fact that the vehicles and batteries are collected using diesel vans in order to charge empty batteries with electricity of unclear origin is perceived as unsustainable. A better option could be to let the users charge the vehicles themselves whenever necessary. For this, a decentralized, flexible and easy-to-install network of off-grid solar charging stations could bring renewable electricity where it is needed without sacrificing the convenience of a free-float sharing system. Since the charging stations are powered by solar energy, the most efficient way to utilize them is to charge the vehicles when the sun is shining. To make users charge the vehicles, it is necessary to provide some form of benefit for doing so, such as a discount or free rides. A particularly robust and well-established mechanism is controlling incentives by means of blockchain-based crypto-tokens. This paper demonstrates a crypto-token based scheme for incentivizing users to charge sharing vehicles during times of considerable solar irradiation in order to contribute to more sustainable mobility services.
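An incentive of this kind can be expressed as a reward function over charged energy and current irradiance. The following sketch is entirely hypothetical (the bonus factor, irradiance threshold, and token rate are invented for illustration and are not taken from the paper):

```python
def charging_reward(energy_kwh, irradiance_w_m2, base_rate=1.0, threshold=400.0):
    """Tokens earned for a charging session (illustrative scheme).

    base_rate: tokens per kWh charged.
    threshold: irradiance (W/m^2) above which a sunshine bonus applies,
    steering users towards charging while the solar stations produce.
    """
    bonus = 2.0 if irradiance_w_m2 >= threshold else 1.0
    return base_rate * energy_kwh * bonus
```

In a deployed system this rule would live in a smart contract, with irradiance supplied by a trusted oracle.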
In this experimental work, we present a novel electrolyzer system for the production of hydrogen and oxygen at high pressure levels without an additional mechanical compressor. Due to its control strategies, the operating conditions of this electrolyzer can be kept optimal for each load situation of the system. Furthermore, the novel system design allows for dynamic long-term operation as well as easy maintainability. The device therefore meets the requirements for prospective power-to-gas applications, especially for storing excess energy from renewable sources. A laboratory-scale device has been developed, and high-pressure operation was validated. We also studied the long-term stability of the system by applying dynamic load cycles with load changes every 30 s. After 80 h of operation, the membrane electrode assembly (MEA) used was investigated by means of SEM, EDX and XRD analysis.
The technology of polymer electrolyte membrane (PEM) electrolysis provides an efficient way to produce hydrogen. In combination with renewable energy sources, it promises to be one of the key factors towards a future carbon-free energy infrastructure. Today, PEM electrolyzers with a power consumption higher than 1 MW and a gas output pressure of 30 bar (or even higher) are already commercially available. Nevertheless, fundamental research and development towards improved efficiency is far from complete and mostly takes place on a laboratory scale. Upscaling laboratory prototypes to an industrial size usually cannot be achieved without facing further problems and/or losing efficiency. With our novel system design based on hydraulic cell compression, many of the commonly occurring problems, such as inhomogeneous temperature and current distributions, can be avoided. In this study, we present first results of an upscaling by a factor of 30 in active cell area.
Performance enhancing study for large scale PEM electrolyzer cells based on hydraulic compression
(2017)
A compact and efficient PEM electrolyser stack design based on hydraulic single cell compression
(2019)
Purpose
Although courage has generally been understood as a powerful virtue, research to establish it as a psychological construct is in its infancy. We examined courage in organizations against the backdrop of positive psychology with a design in the Grounded Theory tradition that connects Positive Organizational Behavior and Positive Organizational Scholarship.
Method
The sample consists of organizations that define courage in their mission statement and organizations without such a definition. It includes employees and executives, exploring workplace courage on the macro as well as the micro level. Eleven organizations and 23 participants contributed to the interview study.
Results
Applying Glaser's theoretical coding, specifically the C-family, we propose that courage arises from a decisional conflict in three major domains: the self, social interaction, and performance. It is located on a continuum between apathy and foolhardiness and can take on reactive, proactive, or autonomous forms. Whether and to what extent courage manifests is a dynamic process contingent upon organizational structure, culture, and communication climate as well as individual cognitive-affective personality systems.
Limitations
The model depicts the complexity of the phenomenon, rather than details of its individual components. It goes beyond pre-defined categories and prevailing definitions.
Implications
Modern organizations are characterized by volatility, uncertainty, complexity, and ambiguity (VUCA). Courage is crucial in such an environment and can be systematically fostered across the whole human resource management cycle.
Value
The study advances theory building on courage in the workplace and highlights its potential to be measured, developed and managed for more effective work performance.
Article 134 TFEU
(2023)
Article 135 TFEU
(2023)
Biomimetics is a well-known approach for technical innovation. However, most of its influence remains in the academic field. One option for increasing its application in the practice of technical design is to enhance the use of the biomimetic process with a step-by-step standard, building a bridge to common engineering procedures. This article presents the endeavor of an interdisciplinary expert panel from the fields of biology, engineering science, and industry to develop a standard that links biomimetics to the classical processes of product development and engineering design. This new standard, VDI 6220 Part 2, proposes a process description that is compatible and connectable to classical approaches in engineering design. The standard encompasses both the solution-based and the problem-driven process of biomimetics. It is intended to be used in any product development process for more biomimetic applications in the future.
Biomimetics is the interdisciplinary co-operation of various scientific disciplines and fields of innovation, and it aims to solve practical problems using biological models. Biomimetic research and its fields of application are manifold, and the community is made up of a wide range of disciplines, from biologists and engineers to designers. Guidelines and standards can build a common ground for understanding of the field, for communication across disciplines, for present and future projects, and for the implementation of biomimetic knowledge. Since 2015, three international standards have been published, defining terms and definitions as well as specific applications. The scientific literature and patents in several databases were searched for citations of the published standards. Standards and technical guidelines on biomimetics are represented both in the scientific literature and in patents. However, considering the increasing number of publications in biomimetics, the number of publications citing the international standards (52) is low. This shows that the perception of technical rules is still underrepresented in the academic field. Greater awareness and acceptance of the importance of standards for quality assurance, even in the academic environment, is discussed, and active participation in the corresponding International Organization for Standardization committee on biomimetics is called for.
As a rule, an experiment carried out at school or in undergraduate study courses is rather simple and not very informative. However, when the experiments are to be performed using modern methods, they are often abstract and difficult to understand. Here, we describe a quick and simple experiment, namely the enzymatic characterization of ptyalin (human salivary amylase) using a starch degradation assay. With the experimental setup presented here, enzyme parameters such as pH optimum, temperature optimum, chloride dependence, and sensitivity to certain chemicals can be easily determined. This experiment can serve as a good model for enzyme characterization in general, as modern methods usually follow the same principle: determination of the activity of the enzyme under different conditions. As different alleles occur in humans, a random selection of test subjects will be quite different with regard to ptyalin activities. Therefore, when the students measure their own ptyalin activity, significant differences will emerge, and this will give them an idea of the genetic diversity in human populations. The evaluation has shown that the pupils have gained a solid understanding of the topic through this experiment.
To address the question which neocortical layers and cell types are important for the perception of a sensory stimulus, we performed multielectrode recordings in the barrel cortex of head-fixed mice performing a single-whisker go/no-go detection task with vibrotactile stimuli of differing intensities. We found that behavioral detection probability decreased gradually over the course of each session, which was well explained by a signal detection theory-based model that posits stable psychometric sensitivity and a variable decision criterion updated after each reinforcement, reflecting decreasing motivation. Analysis of multiunit activity demonstrated highest neurometric sensitivity in layer 4, which was achieved within only 30 ms after stimulus onset. At the level of single neurons, we observed substantial heterogeneity of neurometric sensitivity within and across layers, ranging from nonresponsiveness to approaching or even exceeding psychometric sensitivity. In all cortical layers, putative inhibitory interneurons on average exhibited higher neurometric sensitivity than putative excitatory neurons. In infragranular layers, neurons increasing firing rate in response to stimulation featured higher sensitivities than neurons decreasing firing rate. Offline machine-learning-based analysis of videos of behavioral sessions showed that mice performed better when not moving, which at the neuronal level, was reflected by increased stimulus-evoked firing rates.
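In the equal-variance signal detection theory framework referenced above, stable sensitivity with a drifting criterion translates directly into hit and false-alarm rates. A minimal sketch of that relation (standard textbook SDT, not the paper's fitted model; the criterion is measured from the noise-distribution mean):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def hit_and_fa_rates(d_prime, criterion):
    """Hit rate and false-alarm rate for sensitivity d' and criterion c.

    Raising c (a more conservative observer, e.g. due to waning motivation)
    lowers both rates while d' stays constant.
    """
    return norm_cdf(d_prime - criterion), norm_cdf(-criterion)
```

For example, with d' = 2 and c = 1 the model predicts a hit rate of about 0.84 and a false-alarm rate of about 0.16; pushing c higher reproduces the gradual drop in detection probability at fixed sensitivity.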
In this paper, we investigate the influence of different disease groups on the size of different anatomical structures. To this end, we first modify and improve an existing anatomical segmentation model. Then, we use this model to segment 104 anatomical structures from computed tomography (CT) scans and compute their volumes from the segmentation. After correlating the results with each other, we find no new significant correlations. After correlating the volume data with known diseases for each case, we find two weak correlations, one of which has not been described before and for which we present a possible explanation.
Improved Plasma Membrane Models as Test Systems for the Membrane Disrupting Activity of Kalata B1
(2017)
Steps Towards an Open All-in-one Rich-Client Environment for Particle-Based Mesoscopic Simulation
(2018)
Because titanium and titanium alloys have poor fretting fatigue resistance and poor tribological properties, surface engineering methods are necessary to improve the service properties of these materials. One may either apply surface treatment technologies or deposit overlay coatings by thermal spraying.
The present study focuses on the properties of ceramic coatings (Al2O3 + 13 wt.% TiO2) deposited onto a titanium substrate using high velocity oxygen fuel (HVOF) spraying and plasma spraying (APS), respectively.
The effect of the deposition method on the microstructure, phase constituents, and mechanical properties of the ceramic coatings was investigated by means of scanning electron microscopy (SEM), X-ray diffraction (XRD) and nanoindentation tests. The sliding wear performance of the Al2O3–TiO2 coatings was tested using a pin-on-disk wear tester.
Web advertisements are the primary financial source for many online services, but also for cybercriminals. Successful ad campaigns rely on good online profiles of their potential customers. The financial potential of displaying ads has led to the rise of malware that injects or replaces ads on websites, in particular so-called adware. This development leads to ever more optimized and customized advertising, for which various tracking methods are used. However, only sparse work has gone into the privacy issues emerging from adware. In this paper, we investigate the tracking capabilities and related privacy implications of adware and potentially unwanted programs (PUPs). To this end, we developed a framework that allows us to analyze any network communication of the Firefox browser on the application level, circumventing encryption such as TLS. We use it to dynamically analyze the communication streams of over 16,000 adware and potentially unwanted program samples that tamper with the users' browser session. Our results indicate that roughly 37% of the requests issued by the analyzed samples contain private information and are accordingly able to track users. Additionally, we analyze which tracking techniques and services are used.
The European General Data Protection Regulation (GDPR), which went into effect in May 2018, brought new rules for the processing of personal data that affect many business models, including online advertising. The regulation’s definition of personal data applies to every company that collects data from European Internet users. This includes tracking services that, until then, argued that they were collecting anonymous information and data protection requirements would not apply to their businesses.
Previous studies have analyzed the impact of the GDPR on the prevalence of online tracking, with mixed results. In this paper, we go beyond the analysis of the number of third parties and focus on the underlying information sharing networks between online advertising companies in terms of client-side cookie syncing. Using graph analysis, our measurement shows that the number of ID syncing connections decreased by around 40 % around the time the GDPR went into effect, but a long-term analysis shows a slight rebound since then. While we can show a decrease in information sharing between third parties, which is likely related to the legislation, the data also shows that the amount of tracking, as well as the general structure of cooperation, was not affected. Consolidation in the ecosystem led to a more centralized infrastructure that might actually have negative effects on user privacy, as fewer companies perform tracking on more sites.
Advanced Persistent Threats (APTs) are one of the main challenges in modern computer security. They are planned and performed by well-funded, highly-trained and often state-based actors. The first step of such an attack is the reconnaissance of the target. In this phase, the adversary tries to gather as much intelligence on the victim as possible to prepare further actions. An essential part of this initial data collection phase is the identification of possible gateways to intrude the target.
In this paper, we aim to analyze the data that threat actors can use to plan their attacks. To do so, we first analyze 93 APT reports and find that most (80 %) of them begin by sending phishing emails to their victims. Based on this analysis, we measure the extent of openly available data on 30 entities to understand if and how much data they leak that could be used by an adversary to craft sophisticated spear phishing emails. We then use this data to quantify how many employees are potential targets for such attacks. We show that 83 % of the analyzed entities leak several attributes of users, which can all be used to craft sophisticated phishing emails.
In the modern Web, service providers often rely heavily on third parties to run their services. For example, they make use of ad networks to finance their services, externally hosted libraries to develop features quickly, and analytics providers to gain insights into visitor behavior.
For security and privacy, website owners need to be aware of the content they provide to their users. However, in reality, they often do not know which third parties are embedded, for example, when these third parties request additional content, as is common in real-time ad auctions.
In this paper, we present a large-scale measurement study to analyze the magnitude of these new challenges. To better reflect the connectedness of third parties, we measured their relations in a model we call third party trees, which reflects an approximation of the loading dependencies of all third parties embedded into a given website. Using this concept, we show that including a single third party can lead to subsequent requests from up to eight additional services. Furthermore, our findings indicate that the third parties embedded on a page load are not always deterministic, as 50 % of the branches in the third party trees change between repeated visits. In addition, we found that 93 % of the analyzed websites embedded third parties that are located in regions that might not be in line with the current legal framework. Our study also replicates previous work that mostly focused on landing pages of websites. We show that this method is only able to measure a lower bound as subsites show a significant increase of privacy-invasive techniques. For example, our results show an increase of used cookies by about 36 % when crawling websites more deeply.
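The third party tree described above is essentially a nested mapping from each embedded party to the parties it loads in turn. A minimal sketch of how such a tree yields the two quantities discussed, the number of transitively included third parties and the longest loading chain (the data structure is an illustrative approximation, not the paper's implementation):

```python
def tree_stats(tree):
    """Return (total third parties, max chain depth) of a third party tree.

    tree: nested dict mapping a party name to the subtree of parties it
    subsequently loads, e.g. {"ad-network": {"cdn": {}}}.
    """
    if not tree:
        return 0, 0
    total, depth = 0, 0
    for subtree in tree.values():
        sub_total, sub_depth = tree_stats(subtree)
        total += 1 + sub_total          # this party plus everything it loads
        depth = max(depth, 1 + sub_depth)
    return total, depth
```

Comparing the trees of repeated visits to the same page then reveals the non-determinism the study reports, since the sets of branches differ between loads.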
Renewable and sustainable energy production by many small and distributed producers is revolutionizing the energy landscape as we know it. Consumers produce energy, turning them into prosumers in the smart grid. The interaction between prosumers and other entities in the grid and the optimal utilization of new smart grid components (electric cars, freezers, solar panels, etc.) are crucial for the success of the smart grid. The Power Trading Agent Competition is an open simulation platform that allows researchers to conduct low-risk studies in this new energy market. In this work, we present Maxon16, an autonomous energy broker and champion of the 2016 Power Trading Agent Competition. We present the strategies the broker used in the final round and evaluate their effectiveness by analyzing the tournament's results.
This thesis evaluates the effects of the GDPR using a technical and human-centric approach. We assess the challenges service providers face when they want to design GDPR-compliant web applications. On the technical side, we perform two large-scale measurement studies. The first study aims to illuminate third party loading dependencies in web applications. The second study provides a detailed analysis of the information-sharing networks between online advertising companies. The human-centric analysis studies how companies implemented the Right to Access and whether users can profit from the new right.
Media Brand Management
(2022)
The management of media brands faces challenges. In order to be able to point out possible solutions, this article first explains the concept and the nature of “media brands.” Subsequently, various theoretical approaches to the explanation of media brands and their management are presented. Regardless of theoretical preferences, it is important to keep in mind the brand-strategic complexity of media management that is subsequently described. Due to their specificity, special attention is paid to the basic strategic positioning options and to the communication management of media brands. In this way, the special features of media brand management become clear in comparison with other products and services.
Hydrogen concentrations in ZnO single crystals exposing different surfaces have been determined to be in the range of (0.02–0.04) at.% with an error of ±0.01 at.% using nuclear reaction analysis. In the subsurface region, the hydrogen concentration has been determined to be higher by up to a factor of 10. In contrast to the hydrogen in the bulk, part of the subsurface hydrogen is less strongly bound, can be removed by heating to 550°C, and reaccommodated by loading with atomic hydrogen. By exposing the ZnO(10-10) surface to water above room temperature and to atomic hydrogen, respectively, hydroxylation with the same coverage of hydrogen is observed.
Social innovations «meet social needs», are «good for society» and «enhance society’s capacity to act». But what does their rising importance tell us about the current state of public policy in Europe and its effectiveness in achieving social and economic goals? Some might see social innovation as a critique of public intervention, filling the gaps left by years of policy failure. Others emphasise the innovative potential of cross-boundary collaboration between the public sector, the private sector, the third sector and the household.
This paper explores the conditions under which the state either enables or constrains effective social innovation by transcending the boundaries between different actors. We argue that social innovation is closely linked to public sector innovation, particularly in relation to new modes of policy production and implementation, and to new forms of organisation within the state that challenge functional demarcations and role definitions.
Socio-cultural dynamics in spatial policy: explaining the on-going success of cluster politics
(2013)
Solutions to empower and (re-)engage vulnerable and marginalised populations to unfold their hidden potential, allowing them to fully participate in social, economic, cultural and political life, necessarily involve institutional change. This in turn necessitates understanding the processes and mechanisms by which social innovations lead to institutional change. Considering the specific nature of social innovations as interactive, generative and contextualised phenomena, while maintaining that many practices at the micro-level can add up to patterns and regularities at the macro-level, middle-range theorising (MRT) is proposed as an appropriate method to theoretically underpin and substantiate theoretical advancements towards a multidisciplinary perspective on the economic dimensions of social innovation, identifying the direction of future empirical inquiries.
In an effort to better understand the various forms of social innovation, mapping has become a common and widely applied method for gaining insights into social innovation practices. The transdisciplinary nature of social innovation research has led to a plurality of distinct approaches and methods. Given the increasing interest in social innovation, and the apparent endeavour among policymakers to utilise social innovation to address current societal challenges, it is argued that mapping efforts need to be streamlined in order to make better use of their results. The article describes 17 ongoing or recently finalised research projects on social innovation and their methodological approaches on “mapping” social innovations. It provides a systematic overview on project objectives, SI definitions and mapping approaches for each of the scrutinised projects and ends with a synoptical analysis on methods, objectives and missing research.
This technical report is about the architecture and integration of commercial UAVs in Search and Rescue missions. We describe a framework that consists of heterogeneous UAVs, a UAV task planner, a bridge to the UAVs, an intelligent image hub, and a 3D point cloud generator. A first version of the framework was developed and tested in several training missions in the EU project TRADR.
This technical report is about the mission and the experience gained during the reconnaissance of an industrial hall with hazardous substances after a major fire in Berlin. During this operation, only UAVs and cameras were used to obtain information about the site and the building. First, a geo-referenced 3D model of the building was created in order to plan the entry into the hall. Subsequently, the UAVs were flown into the heavily damaged interior to take pictures from inside the hall. A 360° camera mounted under the UAV was used to collect images of the surrounding area, especially of sections that were difficult to fly into. Since the collected data set contained similar as well as blurred images, it was cleaned of non-optimal images using visual SLAM, bundle adjustment and blur detection, so that a 3D model and overviews could be computed. It turned out that the emergency services were not able to extract the necessary information from the 3D model. Therefore, an interactive panorama viewer with links to other 360° images was implemented, where the links depend on the semi-dense point cloud and the camera positions localized by the visual SLAM algorithm, so that the emergency forces could view the surroundings.
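A common way to implement the blur detection step mentioned in this abstract is the variance of the Laplacian: sharp images contain high-frequency content and yield a high variance. The report does not specify its method, so the following is a minimal illustrative sketch in pure Python; the threshold value is an assumption for illustration only.

```python
def laplacian_variance(img):
    """Variance of the discrete Laplacian of a grayscale image.

    img: 2D list of pixel intensities. A low variance indicates
    little high-frequency content, i.e. a likely blurred image.
    """
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour discrete Laplacian kernel
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def is_blurred(img, threshold=100.0):
    """Flag images whose Laplacian variance falls below a threshold
    (the value 100.0 is an illustrative assumption, tuned per camera)."""
    return laplacian_variance(img) < threshold
```

In a cleaning pipeline such a per-image score could be combined with the SLAM- and bundle-adjustment-based selection to discard non-optimal frames before reconstruction.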
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), typically during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset through the implementation of an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to augment the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
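When augmentation is used to grow an automatically labeled dataset, as described above, each geometric transform must be applied to the bounding boxes as well, or the labels become invalid. The paper does not spell out its transforms; a minimal sketch of one common case, a horizontal flip with axis-aligned boxes in (x_min, y_min, x_max, y_max) pixel coordinates:

```python
def hflip(img, boxes):
    """Horizontally flip an image (2D list of pixel rows) together
    with its labeled bounding boxes, so the labels stay valid.

    boxes: list of (x_min, y_min, x_max, y_max) tuples.
    """
    width = len(img[0])
    flipped_img = [row[::-1] for row in img]
    flipped_boxes = [
        # x coordinates mirror around the image width; min and max swap
        (width - x_max, y_min, width - x_min, y_max)
        for (x_min, y_min, x_max, y_max) in boxes
    ]
    return flipped_img, flipped_boxes
```

The same pattern extends to vertical flips and 90-degree rotations; photometric augmentations (brightness, contrast) leave the boxes untouched.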
This technical report is about the architecture and integration of very small commercial UAVs (< 40 cm diagonal) in indoor Search and Rescue missions. One UAV is manually controlled by a single human operator, delivering live video streams and image series for later 3D scene modelling and inspection. In order to assist the operator, who has to simultaneously observe the environment and navigate through it, we use multiple deep neural networks to provide guided autonomy, automatic object detection and classification, and local 3D scene modelling. Our methods help to reduce the cognitive load of the operator. We describe a framework for quick integration of new methods from the field of Deep Learning, enabling rapid evaluation in real scenarios, including the interaction of methods.
In the realm of digital situational awareness during disaster situations, accurate digital representations, like 3D models, play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller than 30 cm, equipped with 360° cameras, and the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of any scene using 2D images and then synthesize it from various angles upon request. This method is especially tailored for urban environments which have experienced significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
This paper presents a novel approach to building consistent 3D maps for multi-robot cooperation in USAR environments. The sensor streams from unmanned aerial vehicles (UAVs) and ground robots (UGVs) are fused into one consistent map. The UAV camera data are used to generate 3D point clouds that are fused with the 3D point clouds generated by a rolling 2D laser scanner on the UGV. The registration method is based on the matching of corresponding planar segments that are extracted from the point clouds. Based on this registration, an approach for globally optimized localization is presented. Apart from the structural information of the point clouds, no further information is required for the localization. Two examples show the performance of the overall registration.
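To give an intuition for registration from matched planar segments: each pair of corresponding plane normals constrains the relative rotation between the two maps. The sketch below reduces this to the simplest case, a single yaw rotation estimated from the normals of vertical planes (walls), using circular averaging; it is an illustrative simplification, not the paper's full method.

```python
import math

def yaw_from_plane_normals(normals_a, normals_b):
    """Estimate the relative yaw between two maps from matched
    normals of vertical planar segments, given as (nx, ny) vectors.

    Each pair contributes one angle difference; averaging via
    sin/cos sums handles the wrap-around at +/- pi.
    """
    s = c = 0.0
    for (ax, ay), (bx, by) in zip(normals_a, normals_b):
        d = math.atan2(by, bx) - math.atan2(ay, ax)
        s += math.sin(d)
        c += math.cos(d)
    return math.atan2(s, c)  # averaged rotation in radians
```

The full 3D problem additionally estimates roll, pitch, and translation from the plane parameters, but the averaging-over-correspondences idea carries over.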
The two churches San Francesco and Sant'Agostino in Amatrice, Italy, were hit by an earthquake on August 24, 2016. Both churches are in a state of partial collapse, in need of shoring to prevent potential further destruction and to preserve the national heritage. The video shows the mission on 1 September 2016 in clips of 10 seconds.
The TRADR project was asked by the Italian fire brigade Vigili del Fuoco to provide 3D textured models of the two churches.
The team entered San Francesco with two UGVs (ground robots) and one UAV (drone, flown by Prof. Surmann), teleoperating them entirely out of line of sight and partially in collaboration. We entered Sant'Agostino with one UAV (also flown by Prof. Surmann), while two other UAVs provided views from different angles to facilitate maneuvering it entirely out of line of sight.
Venice 2018: Tradr Review
(2018)
The video shows an orthophoto and a textured 3D model of the location. 300 images were recorded in two short flights with a Mavic Pro at a height of 50 meters. The first flight was a single grid with the camera facing down, and the second was a double grid with the camera at a 60-degree angle. The 3D model was computed with OpenDroneMap.
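The achievable orthophoto resolution at a given flight height follows from the ground sampling distance (GSD). A back-of-the-envelope sketch; the camera parameters below are illustrative assumptions for a small-sensor drone camera, not verified Mavic Pro values.

```python
def ground_sampling_distance(height_m, sensor_width_mm, focal_mm, image_width_px):
    """Ground sampling distance (meters per pixel) of a nadir image:
    GSD = (flight height * sensor width) / (focal length * image width).
    The mm units cancel, leaving meters per pixel.
    """
    return (height_m * sensor_width_mm) / (focal_mm * image_width_px)

# Assumed parameters: 6.3 mm sensor width, 4.7 mm focal length,
# 4000 px image width, flown at 50 m as in the abstract.
gsd = ground_sampling_distance(50.0, 6.3, 4.7, 4000)
```

With these assumed parameters the GSD comes out on the order of 1-2 cm per pixel, which is the scale of detail one can expect in the resulting orthophoto.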