n-type silicon modules
(2023)
The photovoltaic industry has seen exponential growth in recent years, fostered by a dramatic decrease in installation prices. This cost reduction is achieved through several mechanisms: first, the optimization of the design and installation process of current PV projects, and second, performance optimization of the manufacturing techniques and material combinations within the modules, which also has an impact on both the installation process and the levelized cost of electricity (LCOE).
One popular trend is to increase the power delivered by photovoltaic modules, either by using larger wafer sizes or by combining more cells within the module unit. This significantly increases the size of these devices, but it enables an optimized design of photovoltaic plants. The resulting reduction in installation cost translates into a lower LCOE.
However, this solution does not represent a breakthrough in addressing the technology's real challenge, which concerns the module requirements. Innovation efforts must focus on improving the modules' capability to produce energy without enlarging the harvesting area. This challenge can be tackled by addressing some of the module characteristics summarized in this chapter.
This paper presents various approaches undertaken over more than two decades of teaching undergraduate programming classes at different higher education institutions to improve student activation and participation in class and, consequently, teaching and learning effectiveness.
While new technologies and the ubiquity of smartphones and internet access have brought new tools to the classroom and opened new didactic approaches, the lessons learned from this personal long-term study show that neither technology itself nor any single new and often hyped didactic approach ensured a sustained improvement of student activation. Rather, it takes an integrated yet open approach towards a participative learning space, supported but not created by new tools, technology and innovative teaching methods.
This paper presents a pragmatic approach for stepwise introduction of peer assessment elements in undergraduate programming classes, discusses some lessons learned so far and directions for further work. Students are invited to challenge their peers with their own programming exercises to be submitted through Moodle and evaluated by other students according to a predefined rubric and supervised by teaching assistants. Preliminary results show an increased activation and motivation of students leading to a better performance in the final programming exams.
In this work, a mathematical approach to calculate solar panel temperature from measured irradiance, ambient temperature and wind speed is applied. With the calculated module temperature, the electrical solar module characteristics are determined. A program developed in MATLAB App Designer imports measurement data from a weather station and calculates the module temperature based on the mathematical NOCT and stationary approaches with a time step of 5 minutes between measurements. Three commercially available solar panels with different cell and interconnection technologies are used to verify the established models. The results show a strong correlation between the measured module temperature and that predicted by the stationary model, with a coefficient of determination R² close to 1 and a root mean square error (RMSE) of ≤ 2.5 K over a period of three months. Based on the predicted temperature, the measured irradiance in the module plane and specific module information, the program models the electrical data as a time series in 5-minute steps. Predicted versus measured power over three months shows a linear correlation with an R² of 0.99 and a mean absolute error (MAE) of 3.5, 2.7 and 4.8 for module IDs 1, 2 and 3, respectively. The energy calculated for this period (exemplarily for module ID 2) from the measured data, the NOCT model and the stationary model is 118.4 kWh, 116.7 kWh and 117.8 kWh, respectively. This is equivalent to an uncertainty of 1.4 % for the NOCT model and 0.5 % for the stationary model.
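The NOCT approach referred to in the abstract is a standard textbook formula; the sketch below is a minimal Python illustration of it, not the paper's MATLAB implementation (which is not public), and the NOCT value used is an assumed typical one.

```python
def module_temp_noct(t_ambient_c, irradiance_w_m2, noct_c=45.0):
    """Estimate module temperature with the standard NOCT formula:
    T_mod = T_amb + (NOCT - 20) / 800 * G,
    where NOCT is specified at 800 W/m2 irradiance and 20 degC ambient.
    noct_c=45.0 is an assumed typical value, not from the paper."""
    return t_ambient_c + (noct_c - 20.0) / 800.0 * irradiance_w_m2

# Example: 25 degC ambient at 1000 W/m2 with NOCT = 45 degC
t_mod = module_temp_noct(25.0, 1000.0)  # 25 + 25/800 * 1000 = 56.25 degC
```

The stationary model mentioned in the abstract additionally accounts for wind speed and is not reproduced here.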
Advanced Determination of Temperature Coefficients of Photovoltaic Modules by Field Measurements
(2023)
In this work, data from outdoor measurements acquired over up to three years on commercially available solar panels is used to determine the temperature coefficients and compare them to the values stated by the producer in the data sheets. A program developed in MATLAB App Designer imports the electrical and ambient measurement data. Filter algorithms for solar irradiance narrow the irradiance level down to ~1000 W/m², before linear regression methods are applied to obtain the temperature coefficients. A repeatability investigation confirms the accuracy of the determined temperature coefficients, which are in good agreement with the supplier specifications if the specified values for power are not larger than -0.3 %/K. Further optimization is achieved by applying wind filters and restricting the analysis to days with clear-sky conditions. With the large body of measurement data at hand, it was possible to determine how the temperature coefficients change with varying irradiance. As stated in the literature, we see an increase of the temperature coefficient of voltage and a decline of the temperature coefficient of power with increasing irradiance.
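The regression step described above can be sketched in a few lines; this is a minimal Python illustration with synthetic voltage readings (the function name and data are hypothetical, not the paper's code, which works on filtered ~1000 W/m² field data).

```python
def temp_coefficient(temps_c, values, reference):
    """Least-squares slope of an electrical quantity vs. module
    temperature, normalized to a reference value and given in %/K."""
    n = len(temps_c)
    mean_t = sum(temps_c) / n
    mean_v = sum(values) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in zip(temps_c, values))
    var = sum((t - mean_t) ** 2 for t in temps_c)
    slope = cov / var                 # e.g. V/K for a voltage series
    return 100.0 * slope / reference  # normalized to %/K

# Synthetic open-circuit voltage falling by 0.3 %/K around 40 V
temps = [25, 35, 45, 55]
voltages = [40.0 - 0.003 * 40.0 * (t - 25) for t in temps]
beta = temp_coefficient(temps, voltages, reference=40.0)  # -0.30 %/K
```

In practice the irradiance and wind filters described in the abstract are applied before the regression so that the temperature is the only varying influence.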
As a rule, an experiment carried out at school or in undergraduate study courses is rather simple and not very informative. However, when the experiments are to be performed using modern methods, they are often abstract and difficult to understand. Here, we describe a quick and simple experiment, namely the enzymatic characterization of ptyalin (human salivary amylase) using a starch degradation assay. With the experimental setup presented here, enzyme parameters such as pH optimum, temperature optimum, chloride dependence, and sensitivity to certain chemicals can be easily determined. This experiment can serve as a good model for enzyme characterization in general, as modern methods usually follow the same principle: determination of the activity of the enzyme under different conditions. As different alleles occur in humans, a random selection of test subjects will be quite different with regard to ptyalin activities. Therefore, when the students measure their own ptyalin activity, significant differences will emerge, and this will give them an idea of the genetic diversity in human populations. The evaluation has shown that the pupils have gained a solid understanding of the topic through this experiment.
With ongoing developments in the field of smart cities and digitalization in general, data is becoming a driving factor and value stream for new and existing economies alike. However, there is an increasing centralization and monopolization of data holders and service providers, especially in the form of the big US-based technology companies in the western world and central technology providers with close ties to the government in the Asian regions. Self-Sovereign Identity (SSI) provides the technical building blocks to create decentralized data-driven systems, which bring data autonomy back to the users. In this paper, we propose a system in which the combination of SSI and token-economy-based incentivisation strategies makes it possible to unlock the potential value of data pools without compromising the data autonomy of the users.
The European General Data Protection Regulation (GDPR), which went into effect in May 2018, brought new rules for the processing of personal data that affect many business models, including online advertising. The regulation’s definition of personal data applies to every company that collects data from European Internet users. This includes tracking services that, until then, argued that they were collecting anonymous information and data protection requirements would not apply to their businesses.
Previous studies have analyzed the impact of the GDPR on the prevalence of online tracking, with mixed results. In this paper, we go beyond the analysis of the number of third parties and focus on the underlying information sharing networks between online advertising companies in terms of client-side cookie syncing. Using graph analysis, our measurement shows that the number of ID syncing connections decreased by around 40 % around the time the GDPR went into effect, but a long-term analysis shows a slight rebound since then. While we can show a decrease in information sharing between third parties, which is likely related to the legislation, the data also shows that the amount of tracking, as well as the general structure of cooperation, was not affected. Consolidation in the ecosystem led to a more centralized infrastructure that might actually have negative effects on user privacy, as fewer companies perform tracking on more sites.
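Client-side cookie syncing lends itself to the graph model described above: trackers are vertices and each ID-forwarding event is a directed edge, and the edge count is the quantity the study measures over time. A minimal sketch with hypothetical tracker domains (not the paper's tooling):

```python
from collections import defaultdict

def sync_graph(sync_events):
    """Build a directed ID-syncing graph: an edge (a, b) means
    tracker a forwarded a user ID to tracker b."""
    graph = defaultdict(set)
    for source, target in sync_events:
        graph[source].add(target)
    return graph

def connection_count(graph):
    """Number of distinct ID-syncing connections (directed edges)."""
    return sum(len(targets) for targets in graph.values())

# Hypothetical observations from crawled pages
events = [("adx.example", "dsp.example"),
          ("adx.example", "ssp.example"),
          ("dsp.example", "ssp.example")]
g = sync_graph(events)  # connection_count(g) == 3
```

Comparing this edge count across crawls before and after May 2018 is, in essence, how a "40 % decrease" in syncing connections can be quantified.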
In the modern Web, service providers often rely heavily on third parties to run their services. For example, they make use of ad networks to finance their services, externally hosted libraries to develop features quickly, and analytics providers to gain insights into visitor behavior.
For security and privacy, website owners need to be aware of the content they provide to their users. However, in reality, they often do not know which third parties are embedded, for example, when these third parties request additional content, as is common in real-time ad auctions.
In this paper, we present a large-scale measurement study to analyze the magnitude of these new challenges. To better reflect the connectedness of third parties, we measured their relations in a model we call third party trees, which reflects an approximation of the loading dependencies of all third parties embedded into a given website. Using this concept, we show that including a single third party can lead to subsequent requests from up to eight additional services. Furthermore, our findings indicate that the third parties embedded on a page load are not always deterministic, as 50 % of the branches in the third party trees change between repeated visits. In addition, we found that 93 % of the analyzed websites embedded third parties that are located in regions that might not be in line with the current legal framework. Our study also replicates previous work that mostly focused on landing pages of websites. We show that this method is only able to measure a lower bound as subsites show a significant increase of privacy-invasive techniques. For example, our results show an increase of used cookies by about 36 % when crawling websites more deeply.
Advanced Persistent Threats (APTs) are one of the main challenges in modern computer security. They are planned and performed by well-funded, highly trained and often state-backed actors. The first step of such an attack is the reconnaissance of the target. In this phase, the adversary tries to gather as much intelligence on the victim as possible to prepare further actions. An essential part of this initial data collection phase is the identification of possible gateways for intruding into the target.
In this paper, we aim to analyze the data that threat actors can use to plan their attacks. To do so, we first analyze 93 APT reports and find that most of them (80 %) begin by sending phishing emails to their victims. Based on this analysis, we measure the extent of openly available data on 30 entities to understand if and how much data they leak that could be used by an adversary to craft sophisticated spear-phishing emails. We then use this data to quantify how many employees are potential targets for such attacks. We show that 83 % of the analyzed entities leak several attributes of users, which can all be used to craft sophisticated phishing emails.
The set of transactions that occurs on the public ledger of an Ethereum network in a specific time frame can be represented as a directed graph, with vertices representing addresses and an edge indicating the interaction between two addresses.
While there exists preliminary research on analyzing an Ethereum network by means of graph analysis, most existing work focuses either on the public Ethereum Mainnet or on analyzing the different semantic transaction layers using static graph analysis in order to carve out the different network properties (such as interconnectivity, degrees of centrality, etc.) needed to characterize a blockchain network. By analyzing the consortium-run bloxberg Proof-of-Authority (PoA) Ethereum network, we show that we can identify suspicious and potentially malicious behaviour of network participants by employing statistical graph analysis. We thereby show that it is possible to identify the potentially malicious exploitation of an unmetered and weakly secured blockchain network resource. In addition, we show that Temporal Network Analysis is a promising technique to identify the occurrence of anomalies in a PoA Ethereum network.
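The transaction-graph view described above can be made concrete with a toy example: vertices are addresses, edges are sender-to-receiver interactions, and an address whose out-degree is far above the mean is a candidate anomaly. This is a deliberately simple stand-in for the statistical graph analysis in the paper, with made-up addresses and an arbitrary threshold.

```python
from collections import Counter

def out_degrees(transactions):
    """Out-degree per sender address in the directed transaction graph."""
    return Counter(sender for sender, _receiver in transactions)

def flag_outliers(degrees, factor=2):
    """Flag addresses whose out-degree exceeds `factor` times the mean
    out-degree. The factor is an illustrative choice, not the paper's."""
    mean = sum(degrees.values()) / len(degrees)
    return {addr for addr, d in degrees.items() if d > factor * mean}

# One address flooding the network stands out against normal participants.
txs = [("0xA", "0xB")] * 50 + [("0xB", "0xC"), ("0xC", "0xA")]
suspicious = flag_outliers(out_degrees(txs))  # {"0xA"}
```

Temporal network analysis extends this by computing such statistics per time window and looking for sudden shifts.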
Software updates take an essential role in keeping IT environments secure. If service providers delay or do not install updates, it can cause unwanted security implications for their environments. This paper conducts a large-scale measurement study of the update behavior of websites and their utilized software stacks. Across 18 months, we analyze over 5.6M websites and 246 distinct client- and server-side software distributions. We found that almost all analyzed sites use outdated software. To understand the possible security implications of outdated software, we analyze the potential vulnerabilities that affect the utilized software. We show that software components are getting older and more vulnerable because they are not updated. We find that 95 % of the analyzed websites use at least one product for which a vulnerability existed.
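Deciding whether a detected software version is outdated reduces to comparing version identifiers; the sketch below shows the core idea for simple dotted version strings (the version numbers are hypothetical and not taken from the study, which also has to handle vendor-specific schemes).

```python
def parse_version(version):
    """Split a dotted version string into a comparable tuple of ints.
    Real-world schemes (suffixes like '-rc1') need extra handling."""
    return tuple(int(part) for part in version.split("."))

def is_outdated(installed, latest):
    """True if the installed release lags behind the latest known release."""
    return parse_version(installed) < parse_version(latest)

# Hypothetical client-side library versions
is_outdated("1.12.4", "3.6.0")  # True: site ships an old major release
is_outdated("3.6.0", "3.6.0")   # False: up to date
```

Cross-referencing the installed version against vulnerability databases (as the study does) then tells whether the lag is merely old or actually exploitable.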
A Crypto-Token Based Charging Incentivization Scheme for Sustainable Light Electric Vehicle Sharing
(2021)
The ecological impact of shared light electric vehicles (LEVs) such as kick scooters is still widely discussed. In particular, the fact that the vehicles and batteries are collected using diesel vans in order to charge empty batteries with electricity of unclear origin is perceived as unsustainable. A better option could be to let the users charge the vehicles themselves whenever necessary. For this, a decentralized, flexible and easy-to-install network of off-grid solar charging stations could bring renewable electricity where it is needed without sacrificing the convenience of a free-floating sharing system. Since the charging stations are powered by solar energy, the most efficient way to utilize them is to charge the vehicles when the sun is shining. To make users charge the vehicles, it is necessary to provide some form of benefit for doing so, such as a discount or free rides. A particularly robust and well-established mechanism is controlling incentives via blockchain-based crypto-tokens. This paper demonstrates a crypto-token based scheme for incentivizing users to charge sharing vehicles during times of considerable solar irradiation in order to contribute to more sustainable mobility services.
Third-party tracking is a common and widely used technique on the Web. Different defense mechanisms have emerged to counter these practices (e.g., browser vendors banning all third-party cookies). However, these countermeasures only target third-party trackers and ignore the first party because the narrative is that such monitoring is mostly used to improve the utilized service (e.g., analytical services). In this paper, we present a large-scale measurement study that analyzes tracking performed by the first party but utilized by a third party to circumvent standard tracking-prevention techniques. We visit the top 15,000 websites to analyze first-party cookies used to track users and a technique called “DNS CNAME cloaking”, which can be used by a third party to place first-party cookies. Using this data, we show that 76 % of sites effectively utilize such tracking techniques. In a long-running analysis, we show that the usage of such cookies increased by more than 50 % over 2021.
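CNAME cloaking, as described above, can be flagged by comparing the registrable domain of a first-party subdomain with that of its CNAME target. The sketch below is a naive illustration with made-up domains: it takes the last two labels as the registrable domain, whereas a real measurement would consult the Public Suffix List (e.g. to handle `co.uk`).

```python
def registrable_domain(host):
    """Naive eTLD+1: last two DNS labels. Real studies use the
    Public Suffix List instead of this simplification."""
    return ".".join(host.split(".")[-2:])

def is_cname_cloaking(first_party_host, cname_target):
    """A first-party subdomain whose CNAME points to a different
    registrable domain is a candidate for cloaked third-party tracking."""
    return registrable_domain(first_party_host) != registrable_domain(cname_target)

# Hypothetical DNS records observed during a crawl
is_cname_cloaking("metrics.shop.example", "collect.tracker-cdn.example")  # True
is_cname_cloaking("metrics.shop.example", "cdn.shop.example")             # False
```

Any cookie set under `metrics.shop.example` in the first case is technically first-party, yet ends up controlled by the tracker's infrastructure, which is exactly why banning third-party cookies does not stop it.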
Measurement studies are essential for research and industry alike to understand the Web’s inner workings better and help quantify specific phenomena. Performing such studies is demanding due to the dynamic nature and size of the Web. An experiment’s careful design and setup are complex, and many factors might affect the results. However, while several works have independently observed differences in the outcome of an experiment (e.g., the number of observed trackers) based on the measurement setup, it is unclear what causes such deviations. This work investigates the reasons for these differences by visiting 1.7M webpages with five different measurement setups. Based on this, we build ‘dependency trees’ for each page and cross-compare the nodes in the trees. The results show that the measured trees differ considerably, that the cause of differences can be attributed to specific nodes, and that even identical measurement setups can produce different results.
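Cross-comparing dependency trees largely reduces to comparing their node sets; a minimal sketch of that step, with a hypothetical tree shape (nested dict of node to children) and made-up domains, not the paper's data model:

```python
def node_set(tree):
    """Flatten a nested {node: children} dependency tree into the set
    of all nodes it contains."""
    nodes = set()
    def walk(subtree):
        for node, children in subtree.items():
            nodes.add(node)
            walk(children)
    walk(tree)
    return nodes

def jaccard(a, b):
    """Jaccard similarity of two node sets: 1.0 means identical trees."""
    return len(a & b) / len(a | b)

# Two crawls of the same page with differing third-party loads
run_a = {"site.example": {"cdn.example": {}, "ads.example": {"sync.example": {}}}}
run_b = {"site.example": {"cdn.example": {}, "ads.example": {}}}
similarity = jaccard(node_set(run_a), node_set(run_b))  # 3/4 = 0.75
```

A similarity well below 1.0 between identically configured runs is the kind of deviation the study attributes to specific (often ad-related) nodes.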
Proof of Existence as a blockchain service was first published in 2013 as a public notary service on the Bitcoin network and can be used to verify the existence of a particular file at a specific point in time without sharing the file or its content. This service is also available on the Ethereum-based bloxberg network, a decentralized research infrastructure that is governed, operated and developed by an international consortium of research facilities. Since it is desirable to integrate the creation of this proof tightly into the research workflow, namely the acquisition and processing of research data, we present a simple-to-integrate solution based on a MATLAB extension, with the concept being applicable to other programming languages and environments as well.
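The client-side part of such a proof is just a cryptographic digest that gets anchored on-chain; only the digest leaves the researcher's machine. A Python sketch of that idea (function names are illustrative; the bloxberg service's actual API is not shown):

```python
import hashlib

def existence_proof(data: bytes) -> str:
    """SHA-256 digest of the research data; this is the value that
    would be anchored on-chain, while the data itself stays private."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, anchored_digest: str) -> bool:
    """Recompute the digest and compare it with the anchored one."""
    return existence_proof(data) == anchored_digest

digest = existence_proof(b"research data v1")
verify(b"research data v1", digest)  # True; any change to the data breaks it
```

The blockchain's contribution is the tamper-evident timestamp for the digest; the hashing shown here is what an acquisition-workflow extension has to perform locally.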
The number of publications describing chemical structures has increased steadily over the last decades. However, the majority of published chemical information is currently not available in machine-readable form in public databases. It remains a challenge to automate the process of information extraction in a way that requires less manual intervention - especially the mining of chemical structure depictions. As an open-source platform that leverages recent advancements in deep learning, computer vision, and natural language processing, DECIMER.ai (Deep lEarning for Chemical IMagE Recognition) strives to automatically segment, classify, and translate chemical structure depictions from the printed literature. The segmentation and classification tools are the only openly available packages of their kind, and the optical chemical structure recognition (OCSR) core application yields outstanding performance on all benchmark datasets. The source code, the trained models and the datasets developed in this work have been published under permissive licences. An instance of the DECIMER web application is available at https://decimer.ai.
Cookie notices (or cookie banners) are a popular mechanism for websites to provide (European) Internet users with a tool to choose which cookies the site may set. Banner implementations range from merely informing users that a site uses cookies, via offering the choice to accept or deny all cookies, to allowing fine-grained control of cookie usage. Users frequently get annoyed by the banners’ pervasiveness as they interrupt “natural” browsing on the Web. As a remedy, different browser extensions have been developed to automate the interaction with cookie banners.
In this work, we perform a large-scale measurement study comparing the effectiveness of extensions for “cookie banner interaction.” We configured the extensions to express different privacy choices (e.g., accepting all cookies, accepting functional cookies, or rejecting all cookies) to understand their capability to execute a user’s preferences. The results show statistically significant differences in which cookies are set, how many are set, and which types are set, even for extensions that aim to implement the same cookie choice. Extensions for “cookie banner interaction” can effectively reduce the number of set cookies compared to no interaction with the banners. However, all extensions significantly increase tracking requests, except when rejecting all cookies.