Ni-based alloys are among the materials of choice in developing high-quality coatings for ambient and high temperature applications that require protection against intense wear and corrosion. The current study aims to develop and characterize NiCrBSi coatings with high wear resistance and improved adhesion to the substrate. Starting with nickel-based feedstock powders, thermally sprayed coatings were initially fabricated. Prior to deposition, the powders were characterized in terms of microstructure, particle size, chemical composition, flowability, and density. For comparison, three types of powders with different chemical compositions and characteristics were deposited onto a 1.7227 tempered steel substrate using oxyacetylene flame spraying, and subsequently, the coatings were inductively remelted. Ball-on-disc sliding wear testing was chosen to investigate the tribological properties of both the as-sprayed and induction-remelted coatings. The results reveal that, in the case of as-sprayed coatings, the main wear mechanisms were abrasive, independent of powder chemical composition, and correlated with intense wear losses due to the poor intersplat cohesion typical of flame-sprayed coatings. The remelting treatment improved the wear performance of the coatings compared to that of the as-sprayed ones, and the higher density and lower porosity achieved during the induction post-treatment played a significant positive role in this behavior.
Without proper post-processing (often using flame, furnace, laser remelting, or induction) or the addition of reinforcements, Ni-based flame-sprayed coatings generally manifest moderate adhesion to the substrate, high porosity, unmelted particles, undesirable oxides, or weak wear resistance and mechanical properties. The current research aimed to investigate the addition of ZrO2 as reinforcement to self-fluxing alloy coatings. Mechanically mixed NiCrBSi-ZrO2 powders were thermally sprayed onto an industrially relevant high-grade steel. After thermal spraying, the samples were post-processed in two different ways: with a flame gun and in a vacuum furnace. Scanning electron microscopy showed a porosity reduction for the vacuum-heat-treated samples compared to the flame-post-processed ones. X-ray diffraction measurements showed differences in the main peaks of the patterns for the thermally processed samples compared to the as-sprayed ones, which have a direct influence on the mechanical behavior of the coatings. Although a slight microhardness decrease was observed in the case of vacuum-remelted samples, the overall low porosity and the phase differences helped the coating to perform better during wear-resistance testing, realized using a ball-on-disk arrangement, compared to the as-sprayed reference samples.
Among the FDM process variables, one of the least addressed in previous research is the filament color. Moreover, if not explicitly targeted, the filament color is usually not even mentioned.
Aiming to point out if, and to what extent, the color of PLA filaments influences the dimensional precision and the mechanical strength of FDM prints, the authors of the present research carried out experiments on tensile specimens. The variable parameters were the layer height (0.05 mm, 0.10 mm, 0.15 mm, 0.20 mm) and the material color (natural, black, red, grey). The experimental results clearly showed that the filament color is an influential factor for the dimensional accuracy as well as for the tensile strength of FDM-printed PLA parts. Moreover, the two-way ANOVA test performed revealed that the strongest effect on the tensile strength was exerted by the PLA color (η² = 97.3%), followed by the layer height (η² = 85.5%) and the interaction between the PLA color and the layer height (η² = 80.0%). Under the same printing conditions, the best dimensional accuracy was ensured by the black PLA (0.17% width deviations and 5.48% height deviations, respectively), whilst the grey PLA showed the highest ultimate tensile strength values (between 57.10 MPa and 59.82 MPa).
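The η² (eta-squared) effect sizes quoted above can be computed from the sums of squares of a balanced two-factor design. The following Python sketch uses synthetic tensile-strength data (not the paper's measurements) with a deliberately dominant "color" factor; all names and numbers are illustrative assumptions.

```python
import numpy as np

def partial_eta_squared(data):
    """Partial eta-squared for a balanced two-factor design.

    data: array of shape (levels_A, levels_B, replicates)
    Returns eta^2_p for factor A, factor B, and the A x B interaction.
    """
    a, b, n = data.shape
    grand = data.mean()
    mean_a = data.mean(axis=(1, 2))   # marginal means of factor A
    mean_b = data.mean(axis=(0, 2))   # marginal means of factor B
    mean_ab = data.mean(axis=2)       # cell means

    ss_a = b * n * np.sum((mean_a - grand) ** 2)
    ss_b = a * n * np.sum((mean_b - grand) ** 2)
    ss_ab = n * np.sum(
        (mean_ab - mean_a[:, None] - mean_b[None, :] + grand) ** 2)
    ss_err = np.sum((data - mean_ab[:, :, None]) ** 2)

    return tuple(ss / (ss + ss_err) for ss in (ss_a, ss_b, ss_ab))

# Synthetic example: 4 colors x 4 layer heights, 3 replicates each,
# with a stronger color effect than layer-height effect and no interaction.
rng = np.random.default_rng(0)
color_effect = np.array([0.0, 2.0, 4.0, 6.0])
height_effect = np.array([0.0, 1.0, 2.0, 3.0])
data = (55.0 + color_effect[:, None, None] + height_effect[None, :, None]
        + rng.normal(0.0, 0.5, (4, 4, 3)))
eta_color, eta_height, eta_inter = partial_eta_squared(data)
```

With these synthetic effects the ordering mirrors the one reported in the abstract: the color factor dominates, followed by the layer height, with a small interaction term.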
Among all additive manufacturing processes, Directed Energy Deposition-Arc (DED-Arc) shows significantly shorter production times and is particularly suitable for large-volume components of simple to medium complexity. To exploit the full potential of this process, the microstructural, mechanical, and corrosion behavior has to be studied. High stickout distances lead to a large offset, which causes an unstable electric arc and thus defects such as lack of fusion. Since corrosion preferentially occurs at such defects, the main objective of this work is to investigate the influence of the stickout distance on the corrosion behavior and microstructure of stainless steel manufactured by DED-Arc.
Within the heterogeneous structure of the manufactured samples, lack-of-fusion defects were detected. The quantity of such defects was reduced by applying a shorter stickout distance. The corrosion behavior of the additively manufactured specimens was investigated by means of potentiodynamic polarization measurements. The semi-logarithmic current density potential curves showed a similar course and thus similar corrosion resistance to that of the conventionally forged sample. The polarization curve of the reference material shows numerous current peaks, both in the anodic and cathodic regions. This metastable behavior is induced by the presence of manganese sulfides. On the sample surface, a local attack by pitting corrosion was identified.
MFsim - An open Java all-in-one rich-client simulation environment for mesoscopic simulation
MFsim is an open Java all-in-one rich-client computing environment for mesoscopic simulation with Jdpd as its default simulation kernel for Molecular Fragment Dissipative Particle Dynamics (DPD). The environment integrates and supports the complete preparation-simulation-evaluation triad of a mesoscopic simulation task. Productive highlights are a SPICES molecular structure editor, a PDB-to-SPICES parser for particle-based peptide/protein representations, support for polymer definitions, a compartment editor for complex simulation box start configurations, and interactive and flexible simulation box views including analytics, simulation movie generation, and animated diagrams. As an open project, MFsim enables customized extensions for different fields of research.
MFsim uses several open libraries (see MFSimVersionHistory.txt for details and references below) and is published as open source under the GNU General Public License version 3 (see LICENSE).
MFsim has been described in the scientific literature and used for DPD studies.
Jdpd - An open Java Simulation Kernel for Molecular Fragment Dissipative Particle Dynamics (DPD)
Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics (DPD) with parallelizable force calculation, efficient caching options, and fast property calculations. It is characterized by an interface- and factory-pattern-driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control, as well as internal logging capabilities for debugging purposes are supported. The kernel may be utilized in different simulation environments, ranging from flexible scripting solutions up to fully integrated “all-in-one” simulation systems like MFsim.
Since version 1.6.1.0, Jdpd is available in a (basic) double-precision version and a (derived) single-precision version (JdpdSP) for all numerical calculations, where the single-precision version needs about half the memory of the double-precision version.
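The stated factor of two in memory consumption follows directly from the sizes of the floating-point types (8 bytes for double precision, 4 for single precision). A small Python/NumPy illustration (the array layout is an assumption for illustration, not Jdpd's actual internal representation):

```python
import numpy as np

n_particles = 100_000

# Positions, velocities, and forces: three 3-component vectors per particle,
# typically the dominant per-particle state in a DPD simulation.
double_state = [np.zeros((n_particles, 3), dtype=np.float64) for _ in range(3)]
single_state = [np.zeros((n_particles, 3), dtype=np.float32) for _ in range(3)]

double_bytes = sum(a.nbytes for a in double_state)
single_bytes = sum(a.nbytes for a in single_state)
# float32 state needs exactly half the memory of the float64 state
```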
Jdpd uses the Apache Commons Math and Apache Commons RNG libraries and is published as open source under the GNU General Public License version 3. This repository comprises the Java bytecode libraries (including the Apache Commons Math and RNG libraries), the Javadoc HTML documentation and the Netbeans source code packages including Unit tests.
Jdpd has been described in the scientific literature (the final manuscript 2018 - van den Broek - Jdpd - Final Manucsript.pdf is added to the repository) and used for DPD studies (see references below).
See text file JdpdVersionHistory.txt for a version history with more detailed information.
n-type silicon modules
(2023)
The photovoltaic industry has seen exponential growth in recent years, fostered by a dramatic decrease in installation prices. This cost reduction is achieved by means of several mechanisms: first, the optimization of the design and installation process of current PV projects, and second, performance optimization of the manufacturing techniques and material combinations within the modules, which also has an impact on both the installation process and the levelized cost of electricity (LCOE).
One popular trend is to increase the power delivered by photovoltaic modules, either by using larger wafer sizes or by combining more cells within the module unit. This solution means a significant increase in the size of these devices, but it enables an optimization of the design of photovoltaic plants. This results in a reduction of installation costs, which translates into a decrease in the LCOE.
However, this solution does not represent a breakthrough in addressing the real challenge of the technology, which affects the module requirements. The innovation efforts must be focused on improving the modules' capability to produce energy without enlarging the harvesting area. This challenge can be faced by addressing some of the module characteristics, which are summarized in this chapter.
This paper reveals various approaches undertaken over more than two decades of teaching undergraduate programming classes at different Higher Education Institutions, in order to improve student activation and participation in class and consequently teaching and learning effectiveness.
While new technologies and the ubiquity of smartphones and internet access have brought new tools to the classroom and opened new didactic approaches, lessons learned from this personal long-term study show that neither technology itself nor any single new and often hyped didactic approach ensured sustained improvement of student activation. Rather, it needs an integrated yet open approach towards a participative learning space supported, but not created, by new tools, technology, and innovative teaching methods.
This paper presents a pragmatic approach for stepwise introduction of peer assessment elements in undergraduate programming classes, discusses some lessons learned so far and directions for further work. Students are invited to challenge their peers with their own programming exercises to be submitted through Moodle and evaluated by other students according to a predefined rubric and supervised by teaching assistants. Preliminary results show an increased activation and motivation of students leading to a better performance in the final programming exams.
In this work, a mathematical approach to calculate solar panel temperature based on measured irradiance, temperature, and wind speed is applied. With the calculated module temperature, the electrical solar module characteristics are determined. A program developed in MATLAB App Designer allows importing measurement data from a weather station and calculates the module temperature based on the mathematical NOCT and stationary approaches with a time step of 5 minutes between the measurements. Three commercially available solar panels with different cell and interconnection technologies are used for the verification of the established models. The results show a strong correlation between the measured module temperature and that predicted by the stationary model, with a coefficient of determination R² close to 1 and a root mean square error (RMSE) of ≤ 2.5 K for a time period of three months. Based on the predicted temperature, the measured irradiance in the module plane, and specific module information, the program models the electrical data as time series in 5-minute steps. Predicted versus measured power for a time period of three months shows a linear correlation with an R² of 0.99 and a mean absolute error (MAE) of 3.5, 2.7, and 4.8 for module IDs 1, 2, and 3, respectively. The energy calculated for this time period (exemplarily for module ID 2) from the measurements, the NOCT model, and the stationary model is 118.4 kWh, 116.7 kWh, and 117.8 kWh, respectively. This is equivalent to an uncertainty of 1.4% for the NOCT model and 0.5% for the stationary model.
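The NOCT approach mentioned above is conventionally formulated as T_module = T_ambient + (NOCT − 20 °C) / (800 W/m²) · G. A minimal Python sketch of this standard formula follows; the paper's stationary model, which additionally uses wind speed, is not reproduced here, and the default NOCT value is an illustrative assumption.

```python
def noct_module_temperature(t_ambient, irradiance, noct=45.0):
    """Standard NOCT estimate of the module temperature (deg C).

    t_ambient  : ambient temperature in deg C
    irradiance : plane-of-module irradiance in W/m^2
    noct       : nominal operating cell temperature of the module, measured
                 at 800 W/m^2 irradiance, 20 deg C ambient, 1 m/s wind
    """
    return t_ambient + (noct - 20.0) / 800.0 * irradiance

# Sanity check: at NOCT reference conditions (20 deg C, 800 W/m^2)
# the model reproduces the NOCT value itself.
t_ref = noct_module_temperature(20.0, 800.0, noct=45.0)
```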
Advanced Determination of Temperature Coefficients of Photovoltaic Modules by Field Measurements
(2023)
In this work, data from outdoor measurements, acquired over the course of up to three years on commercially available solar panels, is used to determine the temperature coefficients and compare them to the information stated by the producer in the data sheets. A program developed in MATLAB App Designer allows importing the electrical and ambient measurement data. Filter algorithms for solar irradiance narrow the irradiance level down to ~1000 W/m² before linear regression methods are applied to obtain the temperature coefficients. A repeatability investigation proves the accuracy of the determined temperature coefficients, which are in good agreement with the supplier specification if the specified values for power are not larger than -0.3%/K. Further optimization is achieved by applying wind filter techniques and selecting days with clear-sky conditions. With the large amount of measurement data at hand, it was possible to determine the change of the temperature coefficients for varying irradiance. As stated in the literature, we see an increase of the temperature coefficient of voltage and a decline of the temperature coefficient of power with increasing irradiance.
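The described procedure (irradiance filtering to ~1000 W/m², then linear regression of power against module temperature) can be sketched as follows in Python. The synthetic module data and the normalisation to rated power are illustrative assumptions, not the paper's data or exact method.

```python
import numpy as np

def temperature_coefficient(temp_c, power_w, p_stc):
    """Relative temperature coefficient of power in %/K via linear regression.

    temp_c  : module temperatures (deg C), pre-filtered to ~1000 W/m^2
    power_w : measured power (W) at those temperatures
    p_stc   : rated power at STC (25 deg C), used for normalisation
    """
    slope, _intercept = np.polyfit(temp_c, power_w, 1)  # slope in W/K
    return slope / p_stc * 100.0                        # %/K

# Synthetic module: 300 W at STC, true coefficient -0.35 %/K, small noise
rng = np.random.default_rng(1)
temps = rng.uniform(25.0, 60.0, 200)
power = 300.0 * (1.0 - 0.0035 * (temps - 25.0)) + rng.normal(0.0, 0.5, 200)
gamma = temperature_coefficient(temps, power, 300.0)  # close to -0.35 %/K
```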
Measurement studies are essential for research and industry alike to understand the Web's inner workings better and help quantify specific phenomena. Performing such studies is demanding due to the dynamic nature and size of the Web. An experiment's careful design and setup are complex, and many factors might affect the results. However, while several works have independently observed differences in the outcome of an experiment (e.g., the number of observed trackers) based on the measurement setup, it is unclear what causes such deviations. This work investigates the reasons for these differences by visiting 1.7M webpages with five different measurement setups. Based on this, we build ‘dependency trees’ for each page and cross-compare the nodes in the trees. The results show that the measured trees differ considerably, that the cause of differences can be attributed to specific nodes, and that even identical measurement setups can produce different results.
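Cross-comparing the nodes of two dependency trees can be done, for example, with a set-based similarity measure. The sketch below uses a Jaccard index over node sets; the adjacency-dict representation and the example URLs are illustrative assumptions, not the paper's actual method.

```python
def node_jaccard(tree_a, tree_b):
    """Jaccard similarity of the node sets of two page dependency trees.

    Trees are given as {node: [children]} adjacency dicts; nodes could be,
    for instance, the resource URLs requested during a page load.
    """
    nodes_a = set(tree_a) | {c for cs in tree_a.values() for c in cs}
    nodes_b = set(tree_b) | {c for cs in tree_b.values() for c in cs}
    union = nodes_a | nodes_b
    return len(nodes_a & nodes_b) / len(union) if union else 1.0

# Two hypothetical loads of the same page under different measurement setups:
# setup 1 observes an extra tracker script that setup 2 does not.
setup1 = {"page": ["cdn/app.js", "tracker/a.js"], "cdn/app.js": ["cdn/font.woff"]}
setup2 = {"page": ["cdn/app.js"], "cdn/app.js": ["cdn/font.woff"]}
similarity = node_jaccard(setup1, setup2)  # 3 shared nodes of 4 total
```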
The number of publications describing chemical structures has increased steadily over the last decades. However, the majority of published chemical information is currently not available in machine-readable form in public databases. It remains a challenge to automate the process of information extraction in a way that requires less manual intervention - especially the mining of chemical structure depictions. As an open-source platform that leverages recent advancements in deep learning, computer vision, and natural language processing, DECIMER.ai (Deep lEarning for Chemical IMagE Recognition) strives to automatically segment, classify, and translate chemical structure depictions from the printed literature. The segmentation and classification tools are the only openly available packages of their kind, and the optical chemical structure recognition (OCSR) core application yields outstanding performance on all benchmark datasets. The source code, the trained models and the datasets developed in this work have been published under permissive licences. An instance of the DECIMER web application is available at https://decimer.ai.
Cookie notices (or cookie banners) are a popular mechanism for websites to provide (European) Internet users a tool to choose which cookies the site may set. Banner implementations range from merely providing information that a site uses cookies, through offering the choice to accept or deny all cookies, to allowing fine-grained control of cookie usage. Users frequently get annoyed by the banners' pervasiveness, as they interrupt “natural” browsing on the Web. As a remedy, different browser extensions have been developed to automate the interaction with cookie banners.
In this work, we perform a large-scale measurement study comparing the effectiveness of extensions for “cookie banner interaction.” We configured the extensions to express different privacy choices (e.g., accepting all cookies, accepting functional cookies, or rejecting all cookies) to understand their capabilities to execute a user’s preferences. The results show statistically significant differences in which cookies are set, how many of them are set, and which types are set—even for extensions that aim to implement the same cookie choice. Extensions for “cookie banner interaction” can effectively reduce the number of set cookies compared to no interaction with the banners. However, all extensions increase the tracking requests significantly except when rejecting all cookies.
In the realm of digital situational awareness during disaster situations, accurate digital representations, like 3D models, play an indispensable role. To ensure the safety of rescue teams, robotic platforms are often deployed to generate these models. In this paper, we introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller than 30 cm, equipped with 360° cameras, and the advances of Neural Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D representation of any scene using 2D images and then synthesize it from various angles upon request. This method is especially tailored for urban environments which have experienced significant destruction, where the structural integrity of buildings is compromised to the point of barring entry, as commonly observed after earthquakes and severe fires. We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments characterized by water, snow, varying light conditions, and reflective surfaces.
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), usually during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset through the implementation of an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to augment the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
The video shows a very high resolution 3D point cloud of the outdoor area of the German Rescue Robotics Center. For the recording, a 25-second POI flight was performed with a Mavic 3. From the 4K video footage captured during this flight, 77 images were cropped and localized within 4 minutes using COLMAP and processed using Neural Radiance Fields (NeRF). The nerfacto model of Nerfstudio was trained on an Nvidia RTX 4090 for 8 minutes. In summary, a high-quality 3D model is available to task forces after about 13 minutes. The calculation is performed locally on site by the RobLW of the DRZ. The video shows a free camera path rendered at 60 Hz (Full HD).
In this paper, we investigate the influence of different disease groups on the size of different anatomical structures. To this end, we first modify and improve an existing anatomical segmentation model. Then, we use this model to segment 104 anatomical structures from computed tomography (CT) scans and compute their volumes from the segmentation. After correlating the results with each other, we find no new significant correlations. After correlating the volume data with known diseases for each case, we find two weak correlations, one of which has not been described before and for which we present a possible explanation.
Problem:
- How to effectively use aerial robots to support rescue forces?
- How to achieve good flight characteristics and long flight times?
- How to enable simple and intuitive control?
- How to efficiently record image data of the environment?
- How to generate flight and image data for rescue forces?
Implementation:
The flying robot was designed in Autodesk Fusion 360. In order to achieve high stability as well as low weight, the frame was milled from carbon. Mounts, e.g. for the GPS and the 360° camera, were 3D printed. A special feature is that the flying robot is not visible in the panoramic view of the 360° camera. The flight controller of the robot was set up using ArduPilot. The communication with the robot is done via MAVLink (UDP). To support different platforms, the software was realized as a web application. The front end was created using HTML, CSS, and JavaScript.
The back end is based on Flask-SocketIO (Python). For the intelligent recognition of motor vehicles, a microcontroller with an integrated camera is used. For the post-processing of flight and video data, a pipeline was implemented for automation.
Design and Development of a Bioreactor System for Mechanical Stimulation of Musculoskeletal Tissue
(2023)
We report on the development of a bioreactor system for mechanical stimulation of musculoskeletal tissues. The ultimate objective is to improve the quality of medical treatment following injuries of the enthesis tissue. To this end, the tissue formation process under the effect of mechanical stimulation is investigated. A six-well system was designed, 3D printed, and tested. An integrated actuator creates strain by applying a force. A contactless position sensor monitors the travel. An electronic circuit controls the bioreactor using a microcontroller. An IoT platform connects the microcontroller to a smartphone, enabling the user to alter variables, trigger actions, and monitor the system. The system was stabilised by implementing two PID controllers and safety measures. The results show that the bioreactor design is suited to execute mechanical stimulation and to investigate the tissue formation and regeneration process …
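The PID control mentioned above can be illustrated with a minimal discrete controller driving a toy first-order plant. The gains, time step, and plant model below are illustrative assumptions, not the bioreactor's actual parameters.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains, not the
    values used in the bioreactor)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order plant: the actuator position responds to the applied force
# and relaxes back with a small restoring term.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
position = 0.0
for _ in range(5000):                              # 50 s of simulated time
    force = pid.update(1.0, position)              # target position: 1.0
    position += (force - 0.5 * position) * 0.01    # crude Euler plant step
```

The integral term drives the steady-state error to zero, which is why the loop settles at the setpoint despite the plant's restoring term.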
Dephasing in quantum systems is typically the result of their interaction with environmental degrees of freedom. We investigate within a spin-boson model the influence of a super-Ohmic environment on the dynamics of a quantum two-state system. A super-Ohmic environment thereby models typical bulk phonons, which are a common disturbance for solid-state quantum systems such as, for example, nitrogen-vacancy centers. By applying the numerically exact quasiadiabatic path-integral approach, we show that for strong system-bath coupling, pseudocoherent dynamics emerges, i.e., oscillatory dynamics at short times due to slaving of the quantum system to the bath dynamics. We extend the phase diagram known for sub-Ohmic and Ohmic environments into the super-Ohmic regime and observe a pronounced nonmonotonic behavior. Super-Ohmic purely dephasing fluctuations strongly suppress the amplitude of coherent dynamics at very short times with no subsequent further decay at later times. Nevertheless, they render the dynamics overdamped. The corresponding phase separation line also shows a nonmonotonic behavior, very similar to the pseudocoherent dynamics.
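The sub-Ohmic/Ohmic/super-Ohmic classification used above refers to the exponent s of the bath spectral density at low frequencies. One common parametrization (prefactor and cutoff conventions vary between papers, so this is a representative form rather than the one necessarily used in this work) is:

```latex
J(\omega) \;=\; 2\pi\,\alpha\,\frac{\omega^{s}}{\omega_c^{\,s-1}}\,e^{-\omega/\omega_c},
\qquad
\begin{cases}
0 < s < 1 & \text{sub-Ohmic},\\[2pt]
s = 1 & \text{Ohmic},\\[2pt]
s > 1 & \text{super-Ohmic (e.g.\ } s = 3 \text{ for 3D bulk phonons)},
\end{cases}
```

with coupling strength α and cutoff frequency ω_c.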
We propose a quantum-mechanical model to calculate the current through a single molecular junction immersed in a solvent and surrounded by a thin shell of bound water under an applied ac voltage. The solvent plus hydration shell are captured by a dielectric continuum model for which the resulting spectral density is determined. Here the dielectric properties, e.g., the Debye relaxation time and the dielectric constant, of the bulk solvent and the hydration shell as well as the shell thickness directly enter. We determine the charge current through the molecular junction under an ac voltage in the sequential tunneling regime where we solve a quantum master equation by a real-time diagrammatic technique. Interestingly, the Fourier components of the charge current show an exponential-like decline when the hydration shell thickness increases. Finally, we apply our findings to binary solvent mixtures with varying volume fractions and find that the current is highly sensitive to both the hydration shell thickness as well as the volume fraction of the solvent mixture, giving rise to possible applications as shell and concentration sensors on the molecular scale.
This chapter is a commentary on Principle 21 of the United Nations Guiding Principles on Business and Human Rights (UNGPs). The UNGPs, endorsed by the United Nations Human Rights Council in 2011, are the first universally accepted framework for addressing business responsibilities for human rights. They outline State obligations to protect human rights, businesses’ responsibility to respect human rights, and the importance of both States and businesses offering adequate remedies for human rights breaches.
Article 135 TFEU
(2023)
Article 134 TFEU
(2023)
This chapter is a commentary on Principle 20 of the United Nations Guiding Principles on Business and Human Rights (UNGPs). The UNGPs, endorsed by the United Nations Human Rights Council in 2011, are the first universally accepted framework for addressing business responsibilities for human rights. They outline State obligations to protect human rights, businesses’ responsibility to respect human rights, and the importance of both States and businesses offering adequate remedies for human rights breaches.
The German supply chain law (Lieferkettensorgfaltspflichtengesetz, abbreviated: LkSG), which enters into force on 1 January 2023, is part of the developing legal framework for human rights in global supply chains. Like the French vigilance law, it represents a new generation of supply chain laws which impose mandatory human rights due diligence obligations. The LkSG requires enterprises to exercise a number of due diligence obligations, from conducting risk analysis to undertaking preventive measures or remedial actions. The law is based on public enforcement via a competent authority, the Federal Office for Economic Affairs and Export Control (BAFA). The BAFA monitors and enforces compliance with the due diligence obligations. Non-compliant enterprises can be fined up to 800,000 euros and, in some cases, up to 2% of the annual turnover. Whilst the LkSG is an important step towards achieving greater corporate sustainability, it also has limitations. It was a political compromise and, as such, it does not include a new civil liability for non-compliance. Moreover, by default, it only applies to the enterprise's own business area and its direct suppliers, whereas indirect suppliers are only included where the enterprise has substantiated knowledge that an obligation has been violated.
The concept of “Internationalisation at Home” has gained momentum with the increasing digitalization of education and limitations on mobility. Collaborative Online International Learning (COIL) is an innovative, cost-effective instructional method that promotes intercultural learning through online collaboration between faculty and students from different countries or locations. The benefits of using COIL courses have been widely recognized, with learners developing intercultural competencies, digital skills, international education experience, and global awareness.
However, multicultural communication in project environments can be complex and demands awareness of cultural variations. The creation and development of effective cross-cultural collectivism, trust, communication, and empathy in leadership is an important ingredient for the success of remote project collaborations. This is an area that has been least explored in research on communication in virtual teams.
The GIPE projects are mainly carried out as so-called Collaborative Online International Learning (COIL) events. However, to gain a “real world” experience abroad in an intercultural team, students from all partner universities can participate in the Spring School held for two weeks in Germany, and the German students present and hand over the results in the country of the partner university. The main objective of this research was to examine the experiences of students participating in the GIPE project and to evaluate the effectiveness of the project in enhancing intercultural competencies and fostering collaboration among students from different continents. This paper will also explore the implications of the GIPE project for Education 2.0 considering the COVID-19 pandemic and the future of education delivery and administration transformation.
The disruptive nature of the changing media landscape and technology-driven advances in communication have led to innovative ways of organizing work in the information and communication industry. This reorganization of work is reflected in the concept of New Work, which rethinks working concepts, styles, and employee behavior. Based on a survey among staff in the information and communication industry (n = 380), this study investigates the status quo of the implementation of New Work measures and their effectiveness in helping companies reach organizational goals. The results show that New Work measures are widely adopted although there is still unused potential. Moreover, the study demonstrates that the implementation of New Work measures supports companies in achieving New Work goals as well as overall organizational goals in the contexts of agile management, change management, internal communication, and evaluation.
Recent years have seen a sharp increase in the development of deep learning and artificial intelligence-based molecular informatics. There has been a growing interest in applying deep learning to several subfields, including the digital transformation of synthetic chemistry, extraction of chemical information from the scientific literature, and AI in natural product-based drug discovery. The application of AI to molecular informatics is still constrained by the fact that most of the data used for training and testing deep learning models are not available as FAIR and open data. As open science practices continue to grow in popularity, initiatives which support FAIR and open data as well as open-source software have emerged. It is becoming increasingly important for researchers in the field of molecular informatics to embrace open science and to submit data and software in open repositories. With the advent of open-source deep learning frameworks and cloud computing platforms, academic researchers are now able to deploy and test their own deep learning models with ease. With the development of new and faster hardware for deep learning and the increasing number of initiatives towards digital research data management infrastructures, as well as a culture promoting open data, open source, and open science, AI-driven molecular informatics will continue to grow. This review examines the current state of open data and open algorithms in molecular informatics, as well as ways in which they could be improved in future.
The influence of molecular fragmentation and parameter settings on a mesoscopic dissipative particle dynamics (DPD) simulation of lamellar bilayer formation for a C10E4/water mixture is studied. A “bottom-up” decomposition of C10E4 into the smallest fragment molecules (particles) that satisfy chemical intuition leads to convincing simulation results which agree with experimental findings for bilayer formation and thickness. For integration of the equations of motion, Shardlow's S1 scheme proves to be a favorable choice with the best overall performance. Increasing the integration time steps above the common setting of 0.04 DPD units leads to increasingly unphysical temperature drifts, but also to increasingly rapid formation of bilayer superstructures without significantly distorted particle distributions up to an integration time step of 0.12. A scaling of the mutual particle–particle repulsions that guide the dynamics has negligible influence within a considerable range of values but exhibits apparent lower thresholds beyond which a simulation fails. Repulsion parameter scaling and molecular particle decomposition show a mutual dependence. For mapping concentrations to molecule numbers in the simulation box, particle volume scaling should be taken into account. A repulsion parameter morphing investigation suggests not overstretching repulsion parameter accuracy considerations.
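The particle–particle repulsions scaled in the study have the standard soft DPD conservative form F_C = a_ij (1 − r/r_c) r̂ for r < r_c and zero beyond the cutoff, where a_ij is the repulsion parameter. A minimal Python sketch of this standard force (illustrative code, not MFsim/Jdpd source):

```python
import numpy as np

def dpd_conservative_force(r_i, r_j, a_ij, r_c=1.0):
    """Soft-repulsive DPD conservative force on particle i from particle j.

    F_C = a_ij * (1 - r/r_c) * r_hat  for r < r_c, zero otherwise.
    a_ij is the repulsion parameter (DPD units), r_c the cutoff radius.
    """
    r_vec = r_i - r_j
    r = np.linalg.norm(r_vec)
    if r >= r_c or r == 0.0:
        return np.zeros(3)
    return a_ij * (1.0 - r / r_c) * (r_vec / r)

# The repulsion decays linearly with distance and vanishes at the cutoff r_c.
f_half = dpd_conservative_force(np.array([0.5, 0.0, 0.0]), np.zeros(3), a_ij=25.0)
f_cut = dpd_conservative_force(np.array([1.0, 0.0, 0.0]), np.zeros(3), a_ij=25.0)
```

Because the force stays finite even at full overlap (maximum a_ij at r = 0), much larger time steps are possible than with hard-core potentials, which is what makes the time-step study described above meaningful.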
Developing and implementing computational algorithms for the extraction of specific substructures from molecular graphs (in silico molecule fragmentation) is an iterative process. It involves repeated sequences of implementing a rule set, applying it to relevant structural data, checking the results, and adjusting the rules. This requires a computational workflow with data import, fragmentation algorithm integration, and result visualisation. The described workflow is normally unavailable for a new algorithm and must be set up individually. This work presents an open Java rich client Graphical User Interface (GUI) application to support the development of new in silico molecule fragmentation algorithms and make them readily available upon release. The MORTAR (MOlecule fRagmenTAtion fRamework) application visualises fragmentation results of a set of molecules in various ways and provides basic analysis features. Fragmentation algorithms can be integrated and developed within MORTAR by using a specific wrapper class. In addition, fragmentation pipelines with any combination of the available fragmentation methods can be executed. Upon release, three fragmentation algorithms are already integrated: ErtlFunctionalGroupsFinder, Sugar Removal Utility, and Scaffold Generator. These algorithms, as well as all cheminformatics functionalities in MORTAR, are implemented based on the Chemistry Development Kit (CDK).