The 50 most recently published documents
Computational methods for the accurate prediction of protein folding based on amino acid sequences have been researched for decades. The field has been significantly advanced in recent years by deep learning-based approaches such as AlphaFold, RoseTTAFold, and ColabFold. Although these tools are available to the scientific community in various, mostly free and open, ways, they are not yet widely used by bench scientists in relevant fields such as protein biochemistry or molecular biology, who are often not familiar with software tools such as scripting notebooks, command-line interfaces or cloud computing. In addition, visual inspection functionalities such as protein structure displays, structure alignments, and specific protein hotspot analyses are required as a second step to interpret and apply the predicted structures in ongoing research studies.
PySSA (Python rich client for visual protein Sequence to Structure Analysis) is an open Graphical User Interface (GUI) application combining the protein sequence-to-structure prediction capabilities of ColabFold with the open-source variant of the molecular structure visualisation and analysis system PyMOL to make both available to the scientific end-user. PySSA enables the creation of managed and shareable projects with defined protein structure prediction and corresponding alignment workflows that scientists without specialised computer skills or programming knowledge can conveniently run on their local computers. Thus, PySSA can help make protein structure prediction more accessible for end-users in protein chemistry and molecular biology, and it can also be used for educational purposes. It is openly available on GitHub, alongside a custom graphical installer executable for the Windows operating system: https://github.com/urban233/PySSA/wiki/Installation-for-Windows-Operating-System.
To demonstrate the capabilities of PySSA, its usage in a protein mutation study on the protein drug Bone Morphogenetic Protein 2 (BMP2) is described: the structure prediction results indicate that the previously reported BMP2-2Hep-7M mutant, which is intended to be less prone to aggregation, does not exhibit significant spatial rearrangements of amino acid residues interacting with the receptor.
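As a minimal illustration of the alignment step that PySSA wraps in its GUI, the following sketch drives the Python API of open-source PyMOL directly; the file and object names are illustrative assumptions, not part of PySSA itself.

```python
# Minimal sketch (assumed file names): superpose a predicted mutant structure
# onto the wild type with open-source PyMOL, the engine PySSA builds on.
import pymol
from pymol import cmd

pymol.finish_launching(["pymol", "-qc"])  # start PyMOL quietly, without a GUI

cmd.load("BMP2_wildtype.pdb", "wildtype")
cmd.load("BMP2_2Hep_7M_predicted.pdb", "mutant")

# cmd.align returns a tuple whose first element is the RMSD after refinement.
rmsd = cmd.align("mutant", "wildtype")[0]
print(f"RMSD after alignment: {rmsd:.2f} Å")
```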
An automated pipeline for the comprehensive calculation of intermolecular interaction energies based on molecular force fields using the Tinker molecular modelling package is presented. Starting with non-optimized, chemically intuitive monomer structures, the pipeline allows the approximation of global minimum energy monomers and dimers, configuration sampling for various monomer-monomer distances, estimation of coordination numbers by molecular dynamics simulations, and the evaluation of differential pair interaction energies. The latter are used to derive Flory-Huggins parameters and isotropic particle-particle repulsions for Dissipative Particle Dynamics (DPD). The computational results for the force fields MM3, MMFF94, OPLS-AA and AMOEBA09 are analyzed with Density Functional Theory (DFT) calculations and DPD simulations for a mixture of the non-ionic polyoxyethylene alkyl ether surfactant C10E4 with water to demonstrate the usefulness of the approach.
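For context, a widely used mapping from Flory-Huggins parameters to isotropic DPD repulsions is the Groot-Warren relation; the sketch below shows this standard relation (for bead density rho = 3), as an assumption about the final pipeline step rather than the authors' exact parameterisation.

```python
# Standard Groot-Warren mapping at bead density rho = 3: an illustrative
# assumption about the chi -> a_ij step, not the pipeline's actual code.
def dpd_repulsion(chi: float, a_like: float = 25.0) -> float:
    """Isotropic DPD repulsion a_ij derived from a Flory-Huggins chi."""
    return a_like + 3.27 * chi

# Example: chi = 1.5 between two bead types gives a_ij of about 29.9.
print(dpd_repulsion(1.5))
```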
Advancements in Hand-Drawn Chemical Structure Recognition through an Enhanced DECIMER Architecture
(2024)
Accurate recognition of hand-drawn chemical structures is crucial for digitising hand-written chemical information found in traditional laboratory notebooks or for facilitating stylus-based structure entry on tablets or smartphones. However, the inherent variability in hand-drawn structures poses challenges for existing Optical Chemical Structure Recognition (OCSR) software. To address this, we present an enhanced Deep lEarning for Chemical ImagE Recognition (DECIMER) architecture that leverages a combination of Convolutional Neural Networks (CNNs) and Transformers to improve the recognition of hand-drawn chemical structures. The model incorporates an EfficientNetV2 CNN encoder that extracts features from hand-drawn images, followed by a Transformer decoder that converts the extracted features into Simplified Molecular Input Line Entry System (SMILES) strings. Our models were trained using synthetic hand-drawn images generated by RanDepict, a tool for depicting chemical structures with different style elements. To evaluate the model's performance, a benchmark was performed using a real-world dataset of hand-drawn chemical structures. The results indicate that our improved DECIMER architecture exhibits a significantly enhanced recognition accuracy compared to other approaches.
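To make the encoder-decoder idea concrete, here is a deliberately small Keras sketch: a CNN backbone yields a feature sequence, and a single Transformer-style decoder block attends over it to emit SMILES tokens. All sizes and the vocabulary are illustrative assumptions; DECIMER's actual configuration is considerably larger.

```python
import tensorflow as tf

VOCAB, DIM = 512, 256  # illustrative SMILES vocabulary size and model width

# Encoder: EfficientNetV2 backbone turning an image into a feature sequence.
backbone = tf.keras.applications.EfficientNetV2B0(
    include_top=False, weights=None, input_shape=(224, 224, 3))
image = tf.keras.Input((224, 224, 3))
feats = backbone(image)                                        # (7, 7, 1280)
feats = tf.keras.layers.Reshape((-1, feats.shape[-1]))(feats)  # (49, 1280)

# Decoder block: causal self-attention over the SMILES tokens emitted so far,
# cross-attention into the image features, then a token classifier.
tokens = tf.keras.Input((None,), dtype="int32")
emb = tf.keras.layers.Embedding(VOCAB, DIM)(tokens)
x = tf.keras.layers.MultiHeadAttention(4, DIM // 4)(emb, emb, use_causal_mask=True)
x = tf.keras.layers.LayerNormalization()(x + emb)
cross = tf.keras.layers.MultiHeadAttention(4, DIM // 4)(x, feats)
x = tf.keras.layers.LayerNormalization()(x + cross)
logits = tf.keras.layers.Dense(VOCAB)(x)          # next-token scores

model = tf.keras.Model([image, tokens], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```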
Inspired by the super-human performance of deep learning models in playing the game of Go after being presented with virtually unlimited training data, we looked into areas in chemistry where similar situations could be achieved. Encountering large amounts of training data in chemistry is still rare, so we turned to two areas where realistic training data can be fabricated in large quantities, namely a) the recognition of machine-readable structures from images of chemical diagrams and b) the conversion of IUPAC(-like) names into structures and vice versa. In this talk, we outline the challenges, technical implementation and results of this study.
Optical Chemical Structure Recognition (OCSR): Vast amounts of chemical information remain hidden in the primary literature and have yet to be curated into open-access databases. To automate the process of extracting chemical structures from scientific papers, we developed the DECIMER.ai project. This open-source platform provides an integrated solution for identifying, segmenting, and recognising chemical structure depictions in scientific literature. DECIMER.ai comprises three main components: DECIMER-Segmentation, which utilises a Mask-RCNN model to detect and segment images of chemical structure depictions; DECIMER-Image Classifier, an EfficientNet-based classification model that identifies which images contain chemical structures; and DECIMER-Image Transformer, an OCSR engine that uses an encoder-decoder model to convert the segmented chemical structure images into machine-readable formats such as SMILES strings.
DECIMER.ai is data-driven, relying solely on the training data to make accurate predictions without hand-coded rules or assumptions. The latest model was trained with 127 million structures and 483 million depictions (4 different depictions per structure) on Google TPU-V4 VMs.
Name to Structure Conversion: The conversion of structures to IUPAC(-like) or systematic names has been solved in satisfactory ways by algorithmic, rule-based approaches. This, in turn, provided us with an opportunity to generate name-structure training pairs at very large scale in order to train a proof-of-concept transformer network and evaluate its performance.
In this work, the largest model was trained using almost one billion SMILES strings. The Lexichem software utility from OpenEye was employed to generate the IUPAC names used in the training process. STOUT V2 was trained on Google TPU-V4 VMs. The model's accuracy was validated through one-to-one string matching, BLEU scores, and Tanimoto similarity calculations. To further verify the model's reliability, every IUPAC name generated by STOUT V2 was analysed for accuracy and retranslated using OPSIN, a widely used open-source software for converting IUPAC names to SMILES. This additional validation step confirmed the high fidelity of STOUT V2's translations.
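A sketch of the round-trip validation idea described above: OPSIN converts a generated name back to a structure, and RDKit scores the agreement. The py2opsin wrapper and all inputs below are illustrative assumptions; the abstract only states that OPSIN and Tanimoto similarity were used.

```python
# Hedged sketch: name -> structure via OPSIN (here through the py2opsin
# wrapper, an assumption), then Tanimoto comparison with RDKit.
from py2opsin import py2opsin
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

original_smiles = "CCO"      # illustrative input structure (ethanol)
generated_name = "ethanol"   # name produced by the translation model

back_translated = py2opsin(generated_name)   # IUPAC name -> SMILES

fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2)
       for s in (original_smiles, back_translated)]
print(DataStructs.TanimotoSimilarity(*fps))  # 1.0 for a perfect round trip
```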
The DECIMER.ai Project
(2024)
Over the past few decades, the number of publications describing chemical structures and their metadata has increased significantly. Chemists have published the majority of this information as bitmap images, along with other important information as human-readable text, in the printed literature; this information has never been retained and preserved in publicly available databases in machine-readable formats. Manually extracting such data from the printed literature is error-prone, time-consuming, and tedious. The recognition and translation of images of chemical structures from the printed literature into a machine-readable format is known as Optical Chemical Structure Recognition (OCSR). In recent years, deep-learning-based OCSR tools have become increasingly popular. While many of these tools claim to be highly accurate, they are either unavailable to the public or proprietary. Meanwhile, the available open-source tools are significantly time-consuming to set up. Furthermore, none of them offers an end-to-end workflow capable of detecting chemical structures, segmenting them, classifying them, and translating them into machine-readable formats.
To address this issue, we present the DECIMER.ai project, an open-source platform that provides an integrated solution for identifying, segmenting, and recognizing chemical structure depictions within the scientific literature. DECIMER.ai comprises three main components: DECIMER-Segmentation, which utilizes a Mask-RCNN model to detect and segment images of chemical structure depictions; DECIMER-Image Classifier, an EfficientNet-based classification model that identifies which images contain chemical structures; and DECIMER-Image Transformer, an OCSR engine that uses an encoder-decoder model to convert the segmented chemical structure images into machine-readable formats such as SMILES strings.
A key strength of DECIMER.ai is that its algorithms are data-driven, relying solely on the training data to make accurate predictions without any hand-coded rules or assumptions. By offering this comprehensive, open-source, and transparent pipeline, DECIMER.ai enables automated extraction and representation of chemical data from unstructured publications, facilitating applications in chemoinformatics and drug discovery.
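The pipeline described above can be sketched end to end with the published DECIMER packages; the entry points below follow the projects' documented examples, but exact signatures may differ between released versions.

```python
# Hedged sketch of an end-to-end DECIMER.ai run on a scanned page.
from decimer_segmentation import segment_chemical_structures_from_file
from DECIMER import predict_SMILES
from PIL import Image

# 1) Detect and crop chemical structure depictions from a page or PDF.
segments = segment_chemical_structures_from_file("page_from_paper.pdf")

# 2) Translate each cropped depiction into a SMILES string.
for i, segment in enumerate(segments):
    path = f"segment_{i}.png"
    Image.fromarray(segment).save(path)   # predict_SMILES expects a file path
    print(path, predict_SMILES(path))
```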
An Augmented Multiphase Rail Launcher With a Modular Design: Extended Setup and Muzzle Fed Operation
(2024)
Bifacial photovoltaic (PV) modules are able to utilize light from both sides and can therefore significantly increase the electric yield of PV power plants, thus reducing the cost and improving profitability. Bifacial PV technology has a huge potential to reach a major market share, in particular when considering utility scale PV plants. Accordingly, bifacial PV is currently attracting increasing attention from involved engineers, scientists and investors. There is a lack of available, structured information about this topic. A book that focuses exclusively on bifacial PV thus meets an increasing need. Bifacial Photovoltaics: Technology, applications and economics provides an overview of the history, status and future of bifacial PV technology with a focus on crystalline silicon technology, covering the areas of cells, modules, and systems. In addition, topics like energy yield simulations and bankability are addressed. It is a must-read for researchers and manufacturers involved with cutting-edge photovoltaics.
MFsim - An open Java all-in-one rich-client simulation environment for mesoscopic simulation
MFsim is an open Java all-in-one rich-client computing environment for mesoscopic simulation, with Jdpd as its default simulation kernel for Molecular Fragment Dissipative Particle Dynamics (DPD). The environment integrates and supports the complete preparation-simulation-evaluation triad of a mesoscopic simulation task. Productive highlights are the SPICES molecular structure editor, a PDB-to-SPICES parser for particle-based peptide/protein representations, support for polymer definitions, a compartment editor for complex simulation box start configurations, and interactive, flexible simulation box views including analytics, simulation movie generation and animated diagrams. As an open project, MFsim enables customized extensions for different fields of research.
MFsim uses several open libraries (see MFSimVersionHistory.txt for details and references below) and is published as open source under the GNU General Public License version 3 (see LICENSE).
MFsim has been described in the scientific literature and used for DPD studies.
Jdpd - An open Java Simulation Kernel for Molecular Fragment Dissipative Particle Dynamics (DPD)
Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics (DPD) with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface- and factory-pattern-driven design that simplifies code changes and may help to avoid the problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The kernel may be utilized in different simulation environments, ranging from flexible scripting solutions up to fully integrated “all-in-one” simulation systems like MFsim.
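For reference, the pair forces evaluated in each DPD step are usually the standard Groot-Warren forces; this is a textbook summary, not a statement about Jdpd's internal implementation.

```latex
% Standard Groot-Warren DPD pair forces within the cutoff r_c:
\begin{aligned}
\mathbf{F}^{C}_{ij} &= a_{ij}\left(1 - r_{ij}/r_c\right)\hat{\mathbf{r}}_{ij}
  && \text{(conservative)} \\
\mathbf{F}^{D}_{ij} &= -\gamma\, w^{D}(r_{ij})\,
  (\hat{\mathbf{r}}_{ij}\cdot\mathbf{v}_{ij})\,\hat{\mathbf{r}}_{ij}
  && \text{(dissipative)} \\
\mathbf{F}^{R}_{ij} &= \sigma\, w^{R}(r_{ij})\,\theta_{ij}\,
  \Delta t^{-1/2}\,\hat{\mathbf{r}}_{ij}
  && \text{(random)}
\end{aligned}
\qquad \text{with } w^{D} = \left(w^{R}\right)^{2},\quad
\sigma^{2} = 2\gamma k_{B}T .
```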
Since version 1.6.1.0, Jdpd is available in a (basic) double-precision version and a (derived) single-precision version (JdpdSP) for all numerical calculations; the single-precision version needs about half the memory of the double-precision version.
Jdpd uses the Apache Commons Math and Apache Commons RNG libraries and is published as open source under the GNU General Public License version 3. This repository comprises the Java bytecode libraries (including the Apache Commons Math and RNG libraries), the Javadoc HTML documentation and the Netbeans source code packages including Unit tests.
Jdpd has been described in the scientific literature (the final manuscript 2018 - van den Broek - Jdpd - Final Manucsript.pdf is added to the repository) and used for DPD studies (see references below).
See text file JdpdVersionHistory.txt for a version history with more detailed information.
This guide is aimed primarily at students who want to know how to shape their own digital identity in a self-determined way. But it also addresses everyone else who has always wanted to know what a digital identity comprises and what one has to do to shape it in one's own interest and protect it from misuse. Almost all of us are on the Internet and the so-called social media every day. We use this digital world to look things up, to meet acquaintances and friends, to present our strengths to potential employers, and much more. But we are also used by these media. The data we enter is a valuable commodity, and we should not share it carelessly or give it away. We all know this in theory, yet we often do not behave as would be appropriate, whether out of convenience, out of ignorance, or because the consequences are not really clear to us or too abstract. This guide is therefore intended, first of all, to raise awareness of the dangers, but above all of the opportunities, that arise from self-presentation on the World Wide Web. Its subject is thus the conscious shaping of one's own digital identity. Topics such as secure authentication on the Internet are not covered.
We would therefore like to invite you to find out how to present yourself appropriately on the Internet, create your own digital identity, and keep it under your control. To this end, the first part of this guide provides background information on digital identity, and in the second part we give recommendations for advantageous online self-presentation.
Stereo Camera Setup for 360° Digital Image Correlation to Reveal Smart Structures of Hakea Fruits
(2024)
About forty years after its first application, digital image correlation (DIC) has become an established method for measuring surface displacements and deformations of objects under stress. To date, DIC has been used in a variety of in vitro and in vivo studies to biomechanically characterise biological samples in order to reveal biomimetic principles. However, when surfaces of samples strongly deform or twist, they cannot be thoroughly traced. To overcome this challenge, different DIC setups have been developed to provide additional sensor perspectives and, thus, capture larger parts of an object’s surface. Herein, we discuss current solutions for this multi-perspective DIC, and we present our own approach to a 360° DIC system based on a single stereo-camera setup. Using this setup, we are able to characterise the desiccation-driven opening mechanism of two woody Hakea fruits over their entire surfaces. Both the breaking mechanism and the actuation of the two valves in predominantly dead plant material are models for smart materials. Based on these results, an evaluation of the setup for 360° DIC regarding its use in deducing biomimetic principles is given. Furthermore, we propose a way to improve and apply the method for future measurements.
Owing to the energy transition and the increasing requirements for building services, the operation of heat pumps in buildings is becoming ever more important. A multitude of heat pump systems now exists, each with different advantages, disadvantages and fields of application. When the installation of a heat pump is considered for the residential building sector, it must be determined which system makes the most sense for the project, both ecologically and economically. For this purpose, an assessment tool was developed that evaluates the use of the different heat pump systems and offers decision support even to users with little expertise. For an assessment that is as holistic as possible, various scenarios can be examined with the tool: indicators such as site data, building data, parameters for domestic hot water heating, the system temperatures of the heating circuit, and the operating mode of the heat pump can be varied. The results of the assessment tool show how the different usage requirements affect the seasonal performance factor and the energy demand. In addition, investment and operating costs are estimated and calculated for the different scenarios. The ecological assessment focuses on the TEWI value in order to account for the influence of different refrigerants over the heat pump's life cycle.
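Since the ecological assessment centres on the TEWI value, a compact sketch of the standard TEWI formula (as given, e.g., in EN 378) may help; all parameter names and values below are illustrative assumptions, not results from the tool.

```python
# Hedged sketch of the standard TEWI formula in kg CO2-equivalent:
# direct emissions from refrigerant leakage and end-of-life losses,
# plus indirect emissions from the electricity consumed.
def tewi(gwp, charge_kg, leak_rate, years, recovery, annual_kwh, grid_co2):
    direct = gwp * charge_kg * leak_rate * years       # operating leakage
    end_of_life = gwp * charge_kg * (1.0 - recovery)   # disposal losses
    indirect = annual_kwh * grid_co2 * years           # electricity use
    return direct + end_of_life + indirect

# Illustrative example: R32 heat pump (GWP 675), 2 kg charge, 3 %/a leakage,
# 20 years of operation, 95 % recovery, 3500 kWh/a at 0.4 kg CO2/kWh.
print(tewi(675, 2.0, 0.03, 20, 0.95, 3500, 0.4))
```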
Introduction and research question:
Abusive supervision is associated with deliberate withholding of effort, reduced motivation, increased experience of stress, psychosomatic complaints, and burnout among employees. Given the high prevalence of destructive leadership, the question of which protective resources can buffer these relationships remains open.
Theoretical background:
Abusive supervision refers to the extent of a supervisor's hostile verbal and non-verbal behaviours. Based on the job demands-resources model, we assume that personal resources that employees build up during off-work time mitigate the negative relationship between destructive leadership and employee health. We focus on generalised self-efficacy, which, in line with social-cognitive theory and numerous empirical findings, has emerged as a health-relevant resource for coping with demands across domains. It should be fostered by mastery experiences during off-work time; mastery experience in leisure time means the opportunity to experience competence and expertise.
Method:
The moderation analysis was tested in a cross-sectional survey of a convenience sample of N = 305 persons. The variables were measured with the Abusive Supervision Scale (Tepper, 2000), the REQ (Sonnentag & Fritz, 2007), and the emotional exhaustion subscale of the MBI (Büssing & Perrar, 1992).
Results:
In this study, mastery experiences show the hypothesised buffering effect, whereas the other recovery strategies that were also tested do not. There is thus a tendency for employees to be able to protect themselves against the health-damaging effects of destructive leadership by learning new competencies and building self-efficacy. However, the correlation pattern presumably also points to problematic aspects of this recovery strategy.
Discussion:
As a limitation, it must be noted that we did not explicitly measure self-efficacy, the presumed mediating variable, and that future studies need to replicate the effect in the form of a mediated moderation.
n-type silicon modules
(2023)
The photovoltaic industry has experienced exponential growth in recent years, fostered by a dramatic decrease in installation prices. This cost reduction is achieved through several mechanisms: first, the optimization of the design and installation process of current PV projects, and second, the performance optimization of the manufacturing techniques and material combinations within the modules, which also has an impact on both the installation process and the levelized cost of electricity (LCOE).
One popular trend is to increase the power delivered by photovoltaic modules, either by using larger wafer sizes or by combining more cells within the module unit. Although this solution significantly increases the size of these devices, it allows the design of photovoltaic plants to be optimized, which reduces installation costs and in turn lowers the LCOE.
However, this solution does not represent a breakthrough with respect to the real challenge of the technology, which concerns the module requirements: innovation efforts must focus on improving the modules' capability to produce energy without enlarging the harvesting area. This challenge can be addressed through several module characteristics, which are summarized in this chapter.
This paper describes various approaches undertaken over more than two decades of teaching undergraduate programming classes at different higher education institutions in order to improve student activation and participation in class and, consequently, teaching and learning effectiveness.
While new technologies and the ubiquity of smartphones and internet access have brought new tools into the classroom and opened up new didactic approaches, the lessons learned from this personal long-term study show that neither technology itself nor any single new and often hyped didactic approach ensured a sustained improvement in student activation. Rather, it takes an integrated yet open approach towards a participative learning space that is supported, but not created, by new tools, technology and innovative teaching methods.
This paper presents a pragmatic approach for the stepwise introduction of peer assessment elements in undergraduate programming classes and discusses lessons learned so far as well as directions for further work. Students are invited to challenge their peers with their own programming exercises, which are submitted through Moodle, evaluated by other students according to a predefined rubric, and supervised by teaching assistants. Preliminary results show an increased activation and motivation of students, leading to better performance in the final programming exams.
Sustainability of intelligent buildings - a look at legislation and practical options
(2023)
Through their construction and operation, buildings are responsible for a considerable share of CO2 emissions in Europe. The EU and Germany aim to reduce these emissions to zero by 2045 (Germany) and 2050 (EU), respectively, through packages of measures worth billions. Besides the building envelope as the decisive factor in the thermal balance for heating and cooling, building automation plays an important role. How buildings are becoming more intelligent and smarter, and how this affects energy efficiency, is examined in the following.
In this work, a mathematical approach is applied to calculate solar panel temperature from measured irradiance, ambient temperature and wind speed. With the calculated module temperature, the electrical characteristics of the solar module are determined. A program developed in MATLAB App Designer imports measurement data from a weather station and calculates the module temperature based on the mathematical NOCT and stationary approaches, with a time step of 5 minutes between measurements. Three commercially available solar panels with different cell and interconnection technologies are used to verify the established models. The results show a strong correlation between the measured module temperature and that predicted by the stationary model, with a coefficient of determination R2 close to 1 and a root mean square error (RMSE) of ≤ 2.5 K over a period of three months. Based on the predicted temperature, the measured irradiance in the module plane and specific module information, the program models the electrical data as a time series in 5-minute steps. Predicted versus measured power over the three-month period shows a linear correlation with an R2 of 0.99 and a mean absolute error (MAE) of 3.5, 2.7 and 4.8 for module IDs 1, 2 and 3, respectively. The energy calculated for this period (exemplarily for module ID 2) is 118.4 kWh based on the measured data, 116.7 kWh based on the NOCT model, and 117.8 kWh based on the stationary model. This is equivalent to an uncertainty of 1.4 % for the NOCT model and 0.5 % for the stationary model.
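For orientation, the simple NOCT temperature model mentioned above can be written in a few lines; the stationary model used in the work is more involved, so this sketch covers only the NOCT case, with an assumed NOCT value.

```python
# Hedged sketch of the standard NOCT cell temperature model. NOCT is defined
# at 800 W/m2 irradiance and 20 °C ambient temperature; the default NOCT
# value below is an illustrative assumption, not a value from the paper.
def module_temperature_noct(t_ambient_c, irradiance_w_m2, noct_c=45.0):
    """Module temperature in °C from the NOCT approximation."""
    return t_ambient_c + (noct_c - 20.0) / 800.0 * irradiance_w_m2

print(module_temperature_noct(25.0, 1000.0))  # 56.25 °C for NOCT = 45 °C
```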
Advanced Determination of Temperature Coefficients of Photovoltaic Modules by Field Measurements
(2023)
In this work, data from outdoor measurements, acquired over up to three years on commercially available solar panels, is used to determine the temperature coefficients and compare them with the information stated by the producers in the data sheets. A program developed in MATLAB App Designer imports the electrical and ambient measurement data. Filter algorithms narrow the irradiance level down to ~1000 W/m2 before linear regression methods are applied to obtain the temperature coefficients. A repeatability investigation confirms the accuracy of the determined temperature coefficients, which are in good agreement with the supplier specifications as long as the specified values for power are not larger than -0.3 %/K. Further optimization is achieved by applying wind filter techniques and selecting days with clear-sky conditions. With this large body of measurement data, it was possible to determine how the temperature coefficients change with varying irradiance. As stated in the literature, we see an increase in the temperature coefficient of voltage and a decline in the temperature coefficient of power with increasing irradiance.
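A minimal sketch of the filter-and-regress step described above, assuming arrays of synchronized measurements; the irradiance window and the data layout are illustrative assumptions, not the paper's exact filter settings.

```python
# Hedged sketch: keep near-STC samples, fit power against module temperature,
# and normalise the slope to rated power to obtain a coefficient in %/K.
import numpy as np

def power_temp_coefficient(irradiance, temperature, power, p_stc, window=50.0):
    near_stc = np.abs(irradiance - 1000.0) < window   # filter to ~1000 W/m2
    slope, _ = np.polyfit(temperature[near_stc], power[near_stc], 1)
    return 100.0 * slope / p_stc                      # temperature coefficient
```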
The new task of internal communication: recognising difficult corporate personalities
(2023)
As a rule, an experiment carried out at school or in undergraduate study courses is rather simple and not very informative. However, when the experiments are to be performed using modern methods, they are often abstract and difficult to understand. Here, we describe a quick and simple experiment, namely the enzymatic characterization of ptyalin (human salivary amylase) using a starch degradation assay. With the experimental setup presented here, enzyme parameters such as pH optimum, temperature optimum, chloride dependence, and sensitivity to certain chemicals can easily be determined. This experiment can serve as a good model for enzyme characterization in general, as modern methods usually follow the same principle: determination of the activity of the enzyme under different conditions. As different alleles occur in humans, a random selection of test subjects will differ considerably in ptyalin activity. Therefore, when students measure their own ptyalin activity, significant differences will emerge, giving them an idea of the genetic diversity in human populations. The evaluation has shown that the pupils gained a solid understanding of the topic through this experiment.
With ongoing developments in the field of smart cities and digitalization in general, data is becoming a driving factor and value stream for new and existing economies alike. However, there is increasing centralization and monopolization among data holders and service providers, especially in the form of the big US-based technology companies in the Western world and central technology providers with close ties to the government in Asian regions. Self-Sovereign Identity (SSI) provides the technical building blocks to create decentralized data-driven systems that bring data autonomy back to the users. In this paper, we propose a system in which the combination of SSI and token-economy-based incentivisation strategies makes it possible to unlock the potential value of data pools without compromising the data autonomy of the users.
The European General Data Protection Regulation (GDPR), which went into effect in May 2018, brought new rules for the processing of personal data that affect many business models, including online advertising. The regulation's definition of personal data applies to every company that collects data from European Internet users. This includes tracking services that, until then, had argued that they were collecting anonymous information and that data protection requirements would not apply to their businesses.
Previous studies have analyzed the impact of the GDPR on the prevalence of online tracking, with mixed results. In this paper, we go beyond the analysis of the number of third parties and focus on the underlying information-sharing networks between online advertising companies in terms of client-side cookie syncing. Using graph analysis, our measurement shows that the number of ID syncing connections decreased by roughly 40 % when the GDPR went into effect, but a long-term analysis shows a slight rebound since then. While we can show a decrease in information sharing between third parties, which is likely related to the legislation, the data also shows that the amount of tracking, as well as the general structure of cooperation, was not affected. Consolidation in the ecosystem led to a more centralized infrastructure that might actually have negative effects on user privacy, as fewer companies perform tracking on more sites.
In the modern Web, service providers often rely heavily on third parties to run their services. For example, they make use of ad networks to finance their services, externally hosted libraries to develop features quickly, and analytics providers to gain insights into visitor behavior.
For security and privacy, website owners need to be aware of the content they provide their users. However, in reality, they often do not know which third parties are embedded, for example, when these third parties request additional content, as is common in real-time ad auctions.
In this paper, we present a large-scale measurement study to analyze the magnitude of these new challenges. To better reflect the connectedness of third parties, we measured their relations in a model we call third party trees, which reflects an approximation of the loading dependencies of all third parties embedded into a given website. Using this concept, we show that including a single third party can lead to subsequent requests from up to eight additional services. Furthermore, our findings indicate that the third parties embedded on a page load are not always deterministic, as 50 % of the branches in the third party trees change between repeated visits. In addition, we found that 93 % of the analyzed websites embedded third parties that are located in regions that might not be in line with the current legal framework. Our study also replicates previous work that mostly focused on landing pages of websites. We show that this method is only able to measure a lower bound as subsites show a significant increase of privacy-invasive techniques. For example, our results show an increase of used cookies by about 36 % when crawling websites more deeply.
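As a rough illustration of the third party tree model, the following sketch builds the parent-child relation from (initiator, requested) pairs; the domain names and data layout are assumptions, not the authors' implementation.

```python
# Hedged sketch: approximate loading dependencies by recording which party
# initiated each request, yielding the children of every embedded third party.
from collections import defaultdict

def build_tree(requests):
    """requests: iterable of (initiator_domain, requested_domain) pairs."""
    children = defaultdict(set)
    for initiator, requested in requests:
        children[initiator].add(requested)
    return children

tree = build_tree([("news.example", "ads.example"),
                   ("ads.example", "sync.example"),  # ad network pulls in more
                   ("ads.example", "cdn.example")])
print(tree["ads.example"])  # parties loaded only because ads.example is embedded
```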
Advanced Persistent Threats (APTs) are one of the main challenges in modern computer security. They are planned and performed by well-funded, highly trained and often state-based actors. The first step of such an attack is the reconnaissance of the target. In this phase, the adversary tries to gather as much intelligence on the victim as possible to prepare further actions. An essential part of this initial data collection phase is the identification of possible gateways to intrude the target.
In this paper, we aim to analyze the data that threat actors can use to plan their attacks. To do so, we first analyze 93 APT reports and find that most of them (80 %) begin by sending phishing emails to their victims. Based on this analysis, we measure the extent of openly available data on 30 entities to understand whether and how much data they leak that an adversary could use to craft sophisticated spear-phishing emails. We then use this data to quantify how many employees are potential targets for such attacks. We show that 83 % of the analyzed entities leak several user attributes that can all be used to craft sophisticated phishing emails.
The set of transactions that occurs on the public ledger of an Ethereum network in a specific time frame can be represented as a directed graph, with vertices representing addresses and an edge indicating the interaction between two addresses.
While there exists preliminary research on analyzing an Ethereum network by means of graph analysis, most existing work focuses either on the public Ethereum Mainnet or on analyzing the different semantic transaction layers using static graph analysis in order to carve out the network properties (such as interconnectivity, degrees of centrality, etc.) needed to characterize a blockchain network. By analyzing the consortium-run bloxberg Proof-of-Authority (PoA) Ethereum network, we show that suspicious and potentially malicious behaviour of network participants can be identified by employing statistical graph analysis, and that it is possible to identify the potentially malicious exploitation of an unmetered and weakly secured blockchain network resource. In addition, we show that temporal network analysis is a promising technique for identifying anomalies in a PoA Ethereum network.
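A minimal sketch of the graph representation described above, using networkx; the transaction fields and addresses are illustrative.

```python
# Hedged sketch: addresses become nodes, each transaction a directed edge,
# so standard graph measures (degree, centrality) feed the statistical analysis.
import networkx as nx

def transaction_graph(transactions):
    """Directed multigraph: one edge per transaction, from sender to receiver."""
    g = nx.MultiDiGraph()
    for tx in transactions:
        g.add_edge(tx["from"], tx["to"], value=tx["value"])
    return g

g = transaction_graph([{"from": "0xA", "to": "0xB", "value": 1},
                       {"from": "0xA", "to": "0xC", "value": 2}])
print(g.out_degree("0xA"))  # unusually high degrees can flag suspicious actors
```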
This paper analyses the status quo of large-scale decision making combined with the possibility of blockchain as an underlying decentralized architecture to govern common pool resources (CPRs) in a collective manner, and evaluates the approaches according to their requirements and features (technical and non-technical). Given the increasing distribution of knowledge and the increasing amount of information, the combination of these decentralized technologies and approaches can not only be beneficial for consortial governance using blockchain but can also help communities to govern common goods and resources. Blockchain and its trust-enhancing properties can potentially be a catalyst for more collaborative behavior among participants and may lead to new insights about collective action and CPRs.
Digitalisation is the basis for the prosperity of our modern, global information and knowledge society, and it is advancing ever faster. Across all industries and company sizes, it opens up enormous growth opportunities and leads to ever better processes that increase efficiency and reduce costs. The digitalisation process is accelerating at all levels, and the share of IT in the value creation of all products and solutions keeps growing. The possible success factors of digitalisation are manifold: communication speeds and qualities that make new applications possible with 5G and fibre-optic networks; the smartness of end devices such as smartwatches, smartphones, tablets and IoT devices, which brings many new positive possibilities; but also ever more powerful central IT systems, such as cloud offerings, hyperscalers and AI applications, which create innovations with great potential.
Modern user interfaces, such as voice and gesture control, simplify operation for users. The optimization of processes creates an enormous rationalization potential that must be tapped in order to remain competitive and to exploit the growth opportunities for our prosperity. Video conferencing, cloud applications and the like create new possibilities to work from home, thereby reducing personal mobility and protecting the environment.