Renewable and sustainable energy production by many small and distributed producers is revolutionizing the energy landscape as we know it. Consumers now produce energy, turning them into prosumers in the smart grid. The interaction between prosumers and other entities in the grid, and the optimal utilization of new smart grid components (electric cars, freezers, solar panels, etc.), are crucial for the success of the smart grid. The Power Trading Agent Competition is an open simulation platform that allows researchers to conduct low-risk studies in this new energy market. In this work we present Maxon16, an autonomous energy broker and champion of the 2016 Power Trading Agent Competition. We present the strategies the broker used in the final round and evaluate their effectiveness by analyzing the tournament results.
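The abstract only summarizes Maxon16's strategies. Purely as an illustration of what a broker's retail pricing logic might look like (the function name, target share, and step size below are hypothetical, not taken from the paper), a simple market-share feedback rule could be sketched as:

```python
def adjust_tariff_rate(rate, market_share, target_share=0.2, step=0.005):
    """Hypothetical sketch: lower the per-kWh rate to attract customers
    when the broker is below its target market share; raise it to
    improve margin when above. Not Maxon16's actual strategy."""
    if market_share < target_share:
        return round(rate - step, 4)   # undercut to gain subscribers
    return round(rate + step, 4)       # exploit the existing customer base

# Example: a broker at 10% share with a 0.15 EUR/kWh tariff would cut
# its rate, while one at 30% share would raise it.
```

Real Power TAC brokers balance several such signals (wholesale prices, demand forecasts, balancing costs) rather than a single feedback rule.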
Since the 1980s, against the backdrop of global warming and the decline of conventional energy resources, low-emission and renewable energy systems have come into the focus of politics as well as research and development. In order to decrease greenhouse gas emissions, Germany intends to generate 80% of its electrical energy from renewable and low-emission sources by 2050. Hydrogen-operated fuel cells are a potential solution for low-emission electricity generation. However, although fuel cell technology has been well known since the 19th century, cost-effective materials are needed to achieve a breakthrough in the market.
Proton Exchange Membrane Fuel Cells with Carbon Nanotubes as Electrode Material
At the Westphalian Energy Institute of the Westphalian University of Applied Sciences, one main focus is the research of proton exchange membrane fuel cells (PEMFC). PEMFC membrane electrode assemblies (MEA) consist of a polymer membrane with electrolytic properties covered on both sides by a catalyst layer (CL) as well as a porous and electrically conductive gas diffusion layer (GDL).
For PEMFCs, carbon nanotubes (CNTs) have ideal properties as an electrode material with respect to electrical conductivity, oxidation resistance and media transport. CNTs are suitable as catalyst support material within the CL due to their large surface area in comparison to conventional carbon supports. Furthermore, oxygen-plasma-treated CNTs show electrochemical activity with respect to hydrogen adsorption and desorption, as demonstrated by cyclic voltammetry in 0.5 M sulfuric acid solution. With regard to the PEMFC anode, a GDL coated with oxygen-plasma-activated CNTs has promising potential to significantly reduce the catalyst content (e.g. platinum) of the anodic CL.
In the realm of digital situational awareness during disaster situations, accurate digital representations, such as 3D models, play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller than 30 cm, equipped with 360° cameras, with the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of a scene from 2D images and then synthesize views of it from arbitrary angles upon request. This method is especially tailored for urban environments that have experienced significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
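The abstract does not detail how a NeRF turns per-point predictions into pixel colors. As background, the standard NeRF volume-rendering quadrature composites sampled densities and colors along each camera ray; a minimal NumPy sketch (not the authors' implementation) is:

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite per-sample densities (sigma_i), RGB colors (c_i) and
    segment lengths (delta_i) along one ray using the standard NeRF
    quadrature: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    alphas = 1.0 - np.exp(-densities * deltas)                       # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))   # accumulated transmittance T_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                   # expected RGB of the ray

# A fully opaque first sample dominates the ray color; transparent
# samples contribute nothing.
```

In a full NeRF, the densities and colors come from a neural network queried at 3D positions and view directions; this sketch only shows the compositing step.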
This technical report is about the architecture and integration of very small commercial UAVs (< 40 cm diagonal) in indoor Search and Rescue missions. One UAV is manually controlled by a single human operator, delivering live video streams and image series for later 3D scene modelling and inspection. In order to assist the operator, who must simultaneously observe the environment and navigate through it, we use multiple deep neural networks to provide guided autonomy, automatic object detection and classification, and local 3D scene modelling. Our methods help to reduce the cognitive load of the operator. We describe a framework for quick integration of new methods from the field of Deep Learning, enabling rapid evaluation in real scenarios, including the interaction of methods.
This paper presents various approaches undertaken over more than two decades of teaching undergraduate programming classes at different higher education institutions in order to improve student activation and participation in class and, consequently, teaching and learning effectiveness.
While new technologies and the ubiquity of smartphones and internet access have brought new tools to the classroom and opened new didactic approaches, lessons learned from this personal long-term study show that neither technology itself nor any single new and often hyped didactic approach ensured sustained improvement of student activation. Rather, it needs an integrated yet open approach towards a participative learning space supported, but not created, by new tools, technology and innovative teaching methods.
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), usually during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset through an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to extend the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
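The abstract mentions combining classic image processing with pretrained networks for assisted labeling, without giving details. As one hedged illustration of the classic-image-processing stage (the thresholds and function name below are hypothetical, not from the paper), a cheap color rule can propose flame-colored regions that a pretrained detector then verifies:

```python
import numpy as np

def fire_candidate_mask(rgb):
    """Hypothetical first-stage proposal: flag pixels whose color
    resembles flames (strong red, moderate green, weak blue).
    Candidate regions would then be passed to a pretrained neural
    network for verification and labeling."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 180) & (g > 60) & (b < 100) & (r > g) & (g > b)

# Orange flame pixels pass the rule; sky-blue pixels do not.
```

Such a two-stage scheme keeps the expensive network inference focused on a small set of candidate regions, which is one plausible reading of "combines classic image processing techniques with pretrained neural networks".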