The German supply chain law (Lieferkettensorgfaltspflichtengesetz, abbreviated LkSG), which enters into force on 1 January 2023, is part of the developing legal framework for human rights in global supply chains. Like the French vigilance law, it represents a new generation of supply chain laws which impose mandatory human rights due diligence obligations. The LkSG requires enterprises to exercise a number of due diligence obligations – from conducting risk analysis to undertaking preventive measures or remedial actions. The law is based on public enforcement via a competent authority, the Federal Office for Economic Affairs and Export Control (BAFA). The BAFA monitors and enforces compliance with the due diligence obligations. Non-compliant enterprises can be fined up to 800,000 euros and, in some cases, up to 2% of the annual turnover. Whilst the LkSG is an important step towards achieving greater corporate sustainability, it also has limitations. It was a political compromise and, as such, it does not include a new civil liability for non-compliance. Moreover, by default, it only applies to the enterprise’s own business area and its direct suppliers, whereas indirect suppliers are only included where the enterprise has substantiated knowledge that an obligation has been violated.
Recent years have seen a sharp increase in the development of deep learning and artificial intelligence-based molecular informatics. There has been a growing interest in applying deep learning to several subfields, including the digital transformation of synthetic chemistry, extraction of chemical information from the scientific literature, and AI in natural product-based drug discovery. The application of AI to molecular informatics is still constrained by the fact that most of the data used for training and testing deep learning models are not available as FAIR and open data. As open science practices continue to grow in popularity, initiatives which support FAIR and open data as well as open-source software have emerged. It is becoming increasingly important for researchers in the field of molecular informatics to embrace open science and to submit data and software in open repositories. With the advent of open-source deep learning frameworks and cloud computing platforms, academic researchers are now able to deploy and test their own deep learning models with ease. With the development of new and faster hardware for deep learning and the increasing number of initiatives towards digital research data management infrastructures, as well as a culture promoting open data, open source, and open science, AI-driven molecular informatics will continue to grow. This review examines the current state of open data and open algorithms in molecular informatics, as well as ways in which they could be improved in future.
The influence of molecular fragmentation and parameter settings on a mesoscopic dissipative particle dynamics (DPD) simulation of lamellar bilayer formation for a C10E4/water mixture is studied. A “bottom-up” decomposition of C10E4 into the smallest fragment molecules (particles) that satisfy chemical intuition leads to convincing simulation results which agree with experimental findings for bilayer formation and thickness. For integration of the equations of motion Shardlow’s S1 scheme proves to be a favorable choice with best overall performance. Increasing the integration time steps above the common setting of 0.04 DPD units leads to increasingly unphysical temperature drifts, but also to increasingly rapid formation of bilayer superstructures without significantly distorted particle distributions up to an integration time step of 0.12. A scaling of the mutual particle–particle repulsions that guide the dynamics has negligible influence within a considerable range of values but exhibits apparent lower thresholds beyond which a simulation fails. Repulsion parameter scaling and molecular particle decomposition show a mutual dependence. For mapping of concentrations to molecule numbers in the simulation box particle volume scaling should be taken into account. A repulsion parameter morphing investigation suggests to not overstretch repulsion parameter accuracy considerations.
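The temperature-drift diagnostic mentioned above can be illustrated with a minimal sketch (not code from the study): in reduced DPD units (k_B = 1, unit mass), the instantaneous kinetic temperature of the particle ensemble is T = Σ m|v|² / 3N, and a drift is detected by comparing it against the thermostat target of T = 1.

```python
import random

def kinetic_temperature(velocities, mass=1.0):
    """Instantaneous kinetic temperature in reduced DPD units (k_B = 1).

    T = sum(m * |v|^2) / (3 * N) for N particles in 3D.
    """
    n = len(velocities)
    twice_ke = sum(mass * (vx * vx + vy * vy + vz * vz) for vx, vy, vz in velocities)
    return twice_ke / (3.0 * n)

# Draw velocities from a Maxwell-Boltzmann distribution at the target T = 1;
# a simulation whose measured T creeps above this value exhibits the
# unphysical temperature drift described in the abstract.
random.seed(42)
target_t = 1.0
vels = [tuple(random.gauss(0.0, target_t ** 0.5) for _ in range(3)) for _ in range(10000)]
print(round(kinetic_temperature(vels), 2))
```

In a production DPD code this check would run periodically during the integration loop, flagging time steps (such as those above 0.04 DPD units) whose measured temperature departs from the thermostat target.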
Developing and implementing computational algorithms for the extraction of specific substructures from molecular graphs (in silico molecule fragmentation) is an iterative process. It involves repeated sequences of implementing a rule set, applying it to relevant structural data, checking the results, and adjusting the rules. This requires a computational workflow with data import, fragmentation algorithm integration, and result visualisation. The described workflow is normally unavailable for a new algorithm and must be set up individually. This work presents an open Java rich client Graphical User Interface (GUI) application to support the development of new in silico molecule fragmentation algorithms and make them readily available upon release. The MORTAR (MOlecule fRagmenTAtion fRamework) application visualises fragmentation results of a set of molecules in various ways and provides basic analysis features. Fragmentation algorithms can be integrated and developed within MORTAR by using a specific wrapper class. In addition, fragmentation pipelines with any combination of the available fragmentation methods can be executed. Upon release, three fragmentation algorithms are already integrated: ErtlFunctionalGroupsFinder, Sugar Removal Utility, and Scaffold Generator. These algorithms, as well as all cheminformatics functionalities in MORTAR, are implemented based on the Chemistry Development Kit (CDK).
Central air-handling units (AHUs) are designed for operating periods of fifteen years or more. Not infrequently, the units are kept in operation even beyond 25 years thanks to retrofitting. What remains unconsidered is whether the future climatic conditions will still correspond to the original design. So-called test reference years (TRY – Test Reference Year) can be used to assess these climatic changes. For today's design they are based on local, hourly weather conditions for the reference year 2012 and, in addition, on model-based weather data for the reference year 2045.
The central air-handling unit of a hospital intensive-care ward was examined for the 15 weather stations of VDI 4710 Part 3 in Germany with regard to today's performance requirements and to those for the year 2045. In addition, the actual weather records of summer 2020 were considered for the Berlin site. This allows conclusions to be drawn on how urban heat islands (UHI – Urban Heat Islands) will affect the future energy and power demand for building air conditioning.
The effects on peak heating and cooling loads and on the cumulative energy demand are analysed, as is the humidification demand. From this, the potential performance reserves can be estimated and the climate resilience of the plant technology assessed.
The concept of molecular scaffolds as defining core structures of organic molecules is utilised in many areas of chemistry and cheminformatics, e.g. drug design, chemical classification, or the analysis of high-throughput screening data. Here, we present Scaffold Generator, a comprehensive open library for the generation, handling, and display of molecular scaffolds, scaffold trees and networks. The new library is based on the Chemistry Development Kit (CDK) and highly customisable through multiple settings, e.g. five different structural framework definitions are available. For display of scaffold hierarchies, the open GraphStream Java library is utilised. Performance snapshots with natural products (NP) from the COCONUT (COlleCtion of Open Natural prodUcTs) database and drug molecules from DrugBank are reported. The generation of a scaffold network from more than 450,000 NP can be achieved within a single day.
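The core idea behind Murcko-type structural frameworks – stripping acyclic side chains while keeping rings and linkers – can be sketched in a few lines of plain Python. The library itself builds on the CDK; the graph representation and helper below are illustrative only:

```python
def prune_to_scaffold(adjacency, ring_atoms):
    """Iteratively remove terminal (degree-1) atoms that are not part of a ring.

    adjacency: dict atom_id -> set of neighbour atom_ids
    ring_atoms: set of atom_ids belonging to rings (assumed precomputed)
    Returns the set of atom ids remaining in the scaffold.
    """
    adj = {a: set(nb) for a, nb in adjacency.items()}
    changed = True
    while changed:
        changed = False
        for atom in list(adj):
            if len(adj[atom]) <= 1 and atom not in ring_atoms:
                for nb in adj[atom]:
                    adj[nb].discard(atom)
                del adj[atom]
                changed = True
    return set(adj)

# Toluene-like graph: benzene ring (atoms 0-5) with a methyl substituent (6).
# Pruning removes the methyl group and keeps the ring as the scaffold.
ring = {0, 1, 2, 3, 4, 5}
adjacency = {
    0: {1, 5, 6}, 1: {0, 2}, 2: {1, 3},
    3: {2, 4}, 4: {3, 5}, 5: {4, 0}, 6: {0},
}
print(sorted(prune_to_scaffold(adjacency, ring)))  # [0, 1, 2, 3, 4, 5]
```

Because only degree-1 atoms are removed, chains that connect two rings (linkers) survive the pruning, which is exactly the behaviour a Murcko-style framework definition requires.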
No state legislative competence for a blanket ban on wind energy installations in forest areas
(2022)
Short Selling
(2022)
The development of deep learning-based optical chemical structure recognition (OCSR) systems has led to a need for datasets of chemical structure depictions. The diversity of the features in the training data is an important factor for the generation of deep learning systems that generalise well and are not overfit to a specific type of input. In the case of chemical structure depictions, these features are defined by the depiction parameters such as bond length, line thickness, label font style and many others. Here we present RanDepict, a toolkit for the creation of diverse sets of chemical structure depictions. The diversity of the image features is generated by making use of all available depiction parameters in the depiction functionalities of the CDK, RDKit, and Indigo. Furthermore, there is the option to enhance and augment the image with features such as curved arrows, chemical labels around the structure, or other kinds of distortions. Using depiction feature fingerprints, RanDepict ensures diversely picked image features. Here, the depiction and augmentation features are summarised in binary vectors and the MaxMin algorithm is used to pick diverse samples out of all valid options. By making all resources described herein publicly available, we hope to contribute to the development of deep learning-based OCSR systems.
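The MaxMin picking step described above can be sketched as follows. This is a generic greedy implementation over binary fingerprints represented as sets of on-bits, not RanDepict's actual code:

```python
def tanimoto_distance(a, b):
    """1 - Tanimoto similarity for binary fingerprints given as sets of on-bits."""
    union = len(a | b)
    return 1.0 - (len(a & b) / union if union else 1.0)

def maxmin_pick(fingerprints, n_picks, first=0):
    """Greedy MaxMin diversity picking.

    Starts from `first`, then repeatedly picks the candidate whose minimum
    distance to the already-picked set is largest.
    """
    picked = [first]
    while len(picked) < n_picks:
        best, best_score = None, -1.0
        for i, fp in enumerate(fingerprints):
            if i in picked:
                continue
            score = min(tanimoto_distance(fp, fingerprints[j]) for j in picked)
            if score > best_score:
                best, best_score = i, score
        picked.append(best)
    return picked

# Four toy depiction-feature fingerprints; index 2 shares no bits with
# index 0, so it is picked as the most diverse second sample.
fps = [{0, 1, 2}, {0, 1, 3}, {7, 8, 9}, {0, 2, 3}]
print(maxmin_pick(fps, 2))  # [0, 2]
```

The same routine generalises to the binary depiction/augmentation-feature vectors described in the abstract: any hashable bit representation works as long as a distance between two fingerprints is defined.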
The Ukraine crisis and coronavirus-related supply chain problems are currently driving up raw material, commodity and food prices. Inflation expectations are rising as well; second-round effects in the wake of higher wage demands and wage settlements are looming. In the long term, further factors could drive inflation in the eurozone, e.g. on the supply side the shortage of skilled workers and global food scarcities, and on the policy side the intended effects of climate policy. Against this background, the article discusses monetary policy implications.
Planning and budgeting form a central element of management control. Meaningful budgeting results are indispensable for steering the resources available in a company. In line with the high importance of budgeting, an intensive methodological willingness to innovate in planning instruments has been observable in recent years. In the meantime, the areas of application of zero-based budgeting are again being discussed more intensively (= reloaded), the peculiarity here being that this instrument had already received greater attention in business theory and practice for a few years in the 1980s.
The use of molecular string representations for deep learning in chemistry has been steadily increasing in recent years. The complexity of existing string representations, and the difficulty in creating meaningful tokens from them, led to the development of new string representations for chemical structures. In this study, the translation of chemical structure depictions in the form of bitmap images to corresponding molecular string representations was examined. An analysis of the recently developed DeepSMILES and SELFIES representations in comparison with the most commonly used SMILES representation is presented, where the ability to translate image features into string representations with transformer models was specifically tested. The SMILES representation exhibits the best overall performance, whereas SELFIES guarantees valid chemical structures. DeepSMILES performs in between SMILES and SELFIES; InChIs are not appropriate for the learning task. All investigations were performed using publicly available datasets, and the code used to train and evaluate the models has been made available to the public.
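Creating meaningful tokens from SMILES strings, the difficulty mentioned above, is commonly approached with a regular expression. The pattern below is a simplified illustration (not the tokeniser used in the study) that handles bracket atoms, two-letter elements, and ring-closure digits:

```python
import re

# Simplified SMILES tokenisation pattern: bracket atoms first, then
# two-letter elements, single-letter atoms, ring-closure digits and bonds.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|Se|[BCNOPSFIbcnops]|%\d{2}|[=#$/\\\+\-\(\)\.]|\d)"
)

def tokenize_smiles(smiles):
    """Split a SMILES string into chemically meaningful tokens."""
    tokens = SMILES_TOKEN.findall(smiles)
    # Round-trip check: every character must be covered by some token
    assert "".join(tokens) == smiles, "untokenisable characters present"
    return tokens

print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```

Token sequences like this are what a transformer model actually consumes; the abstract's observation is that the choice of representation (SMILES vs. DeepSMILES vs. SELFIES) changes how well such sequences can be predicted from image features.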
A large share of the corporate acquisitions undertaken in recent years was largely justified by attractive synergy expectations. On closer inspection, these synergies can often be quantified only imprecisely, and the point of their realisation can only be dated vaguely. This article shows the significance of synergies in connection with goodwill, delineates cost and revenue synergies in terms of content and, on the basis of numerous studies, addresses the current state of knowledge regarding the preparation and realisation of cost and revenue synergies.
Until a few years ago, the phenomenon of shareholder activism, i.e. activist investors, was known primarily from the Anglo-American sphere. For some time now, European and German companies have increasingly been the target of activist shareholders as well. This article outlines the objectives of this investor group and the strategies pursued and measures employed, thereby also providing a description of the business model of financially driven shareholder activism.
The translation of images of chemical structures into machine-readable representations of the depicted molecules is known as optical chemical structure recognition (OCSR). There has been a lot of progress over the last three decades in this field, but the development of systems for the recognition of complex hand-drawn structure depictions is still at the beginning. Currently, there is no data for the systematic evaluation of OCSR methods on hand-drawn structures available. Here we present DECIMER — Hand-drawn molecule images, a standardised, openly available benchmark dataset of 5088 hand-drawn depictions of diversely picked chemical structures. Every structure depiction in the dataset is mapped to a machine-readable representation of the underlying molecule. The dataset is openly available and published under the CC-BY 4.0 licence which applies very few limitations. We hope that it will contribute to the further development of the field.
Different charge treatment approaches are examined for cyclotide-induced plasma membrane disruption by lipid extraction studied with dissipative particle dynamics. A pure Coulomb approach with truncated forces tuned to avoid individual strong ion pairing still reveals hidden statistical pairing effects that may lead to artificial membrane stabilization or distortion of cyclotide activity depending on the cyclotide’s charge state. While qualitative behavior is not affected in an apparent manner, more sensitive quantitative evaluations can be systematically biased. The findings suggest a charge smearing of point charges by an adequate charge distribution. For large mesoscopic simulation boxes, approximations for the Ewald sum to account for mirror charges due to periodic boundary conditions are of negligible influence.
“Digital gestützte Lehrveranstaltungen” (digitally supported courses) within the meaning of § 1a (2) LVV (NRW) – a first approach
(2022)
Data journalism is closely observed in the news industry and equally reflected upon in journalism research. This article first describes the phenomenon in the context of the megatrend of the automation of journalism. It then presents the first trend study on data journalism in Germany: the occupational field study was in the field in 2012 and 2019. The chosen items allow a longitudinal comparison of the development of data journalism. A comparison with the national data of the “Worlds of Journalism Study” reveals further commonalities and differences. The results show that data journalism in Germany has become increasingly institutionalised and that data journalists feel strongly committed to investigative political journalism.
In the second coronavirus winter, schools are to remain open. Alongside window and mechanical ventilation, mobile air purifiers are regarded as sensible measures to reduce the risk of infection. This raises the question of how their safe and reliable operation must be designed. Are there preferred placement positions in the room, and how does the air movement affect comfort? The available data on this is still insufficient (cf. HLH interview with Dr. Gommel, HLH 10/2021). These questions are examined in more detail for a typical seminar room and classroom.
Robot arms are one of many assistive technologies used by people with motor impairments. Assistive robot arms can allow people to perform activities of daily living (ADL) involving grasping and manipulating objects in their environment without the assistance of caregivers. Suitable input devices (e.g., joysticks) mostly have two Degrees of Freedom (DoF), while most assistive robot arms have six or more. This results in time-consuming and cognitively demanding mode switches to change the mapping of DoFs to control the robot. One option to decrease the difficulty of controlling a high-DoF assistive robot arm using a low-DoF input device is to assign different combinations of movement-DoFs to the device’s input DoFs depending on the current situation (adaptive control). To explore this method of control, we designed two adaptive control methods for a realistic virtual 3D environment. We evaluated our methods against a commonly used non-adaptive control method that requires the user to switch controls manually. This was conducted in a simulated remote study that used Virtual Reality and involved 39 non-disabled participants. Our results show that the number of mode switches necessary to complete a simple pick-and-place task decreases significantly when using an adaptive control type. In contrast, the task completion time and workload stay the same. A thematic analysis of qualitative feedback of our participants suggests that a longer period of training could further improve the performance of adaptive control methods.
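The idea of adaptive control – assigning different movement-DoF combinations to the input DoFs depending on the situation – can be illustrated with a toy heuristic. The thresholds and DoF names below are invented for illustration and are not the mapping used in the study:

```python
def select_mapping(gripper_to_target_mm, holding_object):
    """Pick which two robot DoFs the 2-DoF joystick axes control.

    A toy situation-dependent heuristic: far from the target, translate in
    the horizontal plane; close to it, refine height and wrist orientation;
    once an object is grasped, combine lifting with rotation.
    """
    if holding_object:
        return ("lift_z", "rotate_wrist")
    if gripper_to_target_mm > 150:
        return ("translate_x", "translate_y")
    return ("translate_z", "rotate_wrist")

print(select_mapping(400, False))   # approach phase
print(select_mapping(80, False))    # fine positioning
print(select_mapping(80, True))     # carry phase
```

The point of such a scheme is visible even in this sketch: the user never issues an explicit mode switch, because the mapping changes as a function of observable state, which is exactly the mechanism the study evaluates against manual mode switching.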
Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they “see” the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot’s surrounding have been identified by their sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants suffering from physical impairments. In a second remote experiment, we validated these findings with a broader non-specific user base. Our findings show that Line, a lower complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as a more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception and Line presents an easy-to-understand alternative.
We study the nonequilibrium dynamics of a quantum system under the influence of two noncommuting fluctuation sources, i.e., purely dephasing fluctuations and relaxational fluctuations. We find that increasing purely dephasing fluctuations suppress increasing relaxation in the quantum system. This effect is further enhanced when both fluctuation sources are fully correlated. For medium to strong primary fluctuations, these effects arise even when the secondary fluctuations are weak, owing to their noncommuting coupling to the quantum system. Dephasing, in contrast, is increased by increasing either of the two fluctuations. Fully correlated fluctuations result in overdamping at much lower system-bath coupling than uncorrelated noncommuting fluctuations. In total, we observe that treating subdominant secondary environmental fluctuations perturbatively leads, just as neglecting them entirely does, to erroneous conclusions.
Welding and joining of components processed by additive manufacturing (AM) to other AM as well as conventionally produced components is of high importance for industry, as this allows combining the advantages of either technique and producing large-scale structures, respectively. One of the key influencing factors with respect to weldability and mechanical properties of AM components was found to be the inherent microstructural anisotropy of these components. In the present work, the precipitation-hardenable AlSi10Mg was fabricated in different build orientations using selective laser melting (SLM) and subsequently joined by friction stir welding (FSW) in different combinations. Microstructural analysis showed considerable grain refinement in the friction stir zone; however, pronounced softening occurred in this area. The latter can be mainly attributed to changes in the morphology and size of Si particles. Upon combination of different build orientations, a remarkable influence on the tensile strength of FSW joints was seen. Cyclic deformation responses of SLM and FSW samples were examined in depth. Fatigue properties of this alloy in the low-cycle fatigue (LCF) regime imply that SLM samples with the building direction parallel to the loading direction show superior performance under cyclic loading as compared to the other conditions and the FSW joints. From the results presented, solid process-microstructure-property relationships are drawn.
Cone-beam computed tomography (CBCT) has become the most important component of modern radiotherapy for positioning tumor patients directly before treatment. In this work we investigate alterations to the standard acquisition protocol, called a preset, for patients with a tumor in the thoracic region. The effects of the changed acquisition parameters on image quality are evaluated using the Catphan phantom and the image analysis software Smári. The weighted CT dose index (CTDIw) is determined in each case, and the effects of the different acquisition protocols on the patient dose are classified accordingly. Additionally, the clinical suitability of alternative presets is tested by investigating the correctness of image registration using the CIRS thorax phantom. The results show that a significant dose reduction can be achieved: by adjusting the gantry speed, the dose for a full rotation can be reduced by 51%.
Flying insects employ elegant optical-flow-based strategies to solve complex tasks such as landing or obstacle avoidance. Roboticists have mimicked these strategies on flying robots with only limited success, because optical flow (1) cannot disentangle distance from velocity and (2) is less informative in the highly important flight direction. Here, we propose a solution to these fundamental shortcomings by having robots learn to estimate distances to objects by their visual appearance. The learning process obtains supervised targets from a stability-based distance estimation approach. We have successfully implemented the process on a small flying robot. For the task of landing, it results in faster, smooth landings. For the task of obstacle avoidance, it results in higher success rates at higher flight speeds. Our results yield improved robotic visual navigation capabilities and lead to a novel hypothesis on insect intelligence: behaviours that were described as optical-flow-based and hardwired actually benefit from learning processes.
Background: By reviewing image quality and diagnostic perception, the suitability of a statistical model-based iterative reconstruction algorithm in conjunction with low-dose computed tomography for lung cancer screening is investigated.
Methods: Artificial lung nodules shaped as spheres and spiculated spheres made from material with calibrated Hounsfield units were attached on marked positions in the lung structure of anthropomorphic phantoms. The phantoms were scanned using standard high contrast, and two low-dose computed tomography protocols: low-dose and ultra-low-dose. For the reconstruction, the filtered back projection and the iterative reconstruction algorithm ADMIRE at different strength levels (S1–S5) and the kernels Bl57, Br32, Br69 were used. Expert radiologists assessed image quality by performing 4-field-ranking tests and reading all image series to examine the aptitude for the detectability of lung nodules. Signal-to-noise ratio was investigated as objective image quality parameter.
Results: In ranking tests for lung foci detection, expert radiologists preferred medium to high iterative reconstruction strength levels. For the standard clinical kernel Bl57 and varying phantom diameters, a noticeable preference for S4 was detected. Experienced radiologists graded filtered back projection reconstructed images with the highest perceptibility. Less experienced readers assessed filtered back projection and iterative reconstruction equally with the highest grades for the Bl57 kernel. Independently of the dose protocol, the signal-to-noise ratio increases with the iterative reconstruction strength level, specifically for Br69 and Bl57.
Conclusions: Subjective image perception does not significantly correlate with the experience of the radiologist, which presumably mirrors reader’s training and accustomed reading adjustments. Regarding signal-to-noise ratio, iterative reconstruction outperforms filtered back projection for spheres and spiculated spheres. Iterative reconstruction matters. It promises to be an alternative to filtered back projection allowing for lung-cancer screening at markedly decreased radiation exposure but comparable or even improved image quality.
Cardiac and liver computed tomography (CT) perfusion has not been routinely implemented in the clinic and requires high radiation doses. The purpose of this study is to examine the radiation exposure and technical settings for cardiac and liver CT perfusion scans at different CT scanners. Two cardiac and three liver CT perfusion protocols were examined with the N1 LUNGMAN phantom at three multi-slice CT scanners: a single-source (I) and second- (II) and third-generation (III) dual-source CT scanners. Radiation doses were reported for the CT dose index (CTDIvol) and dose–length product (DLP) and a standardised DLP (DLP10cm) for cardiac and liver perfusion. The effective dose (ED10cm) for a standardised scan length of 10 cm was estimated using conversion factors based on the International Commission on Radiological Protection (ICRP) 110 phantoms and tissue-weighting factors from ICRP 103. The proposed total lifetime attributable risk of developing cancer was determined as a function of organ, age and sex for adults. Radiation exposure for CTDIvol, DLP/DLP10 cm and ED10 cm during CT perfusion was distributed as follows: for cardiac perfusion (II) 144 mGy, 1036 mGy·cm/1440 mGy·cm and 39 mSv, and (III) 28 mGy, 295 mGy·cm/279 mGy·cm and 8 mSv; for liver perfusion (I) 225 mGy, 3360 mGy·cm/2249 mGy·cm and 54 mSv, (II) 94 mGy, 1451 mGy·cm/937 mGy·cm and 22 mSv, and (III) 74 mGy, 1096 mGy·cm/739 mGy·cm and 18 mSv. The third-generation dual-source CT scanner applied the lowest doses. Proposed total lifetime attributable risk increased with decreasing age. Even though CT perfusion is a high-dose examination, we observed that new-generation CT scanners could achieve lower doses. There is a strong impact of organ, age and sex on lifetime attributable risk. Further investigations of the feasibility of these perfusion scans are required for clinical implementation.
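The effective-dose estimates above follow the usual pattern ED = DLP × k with a region-specific conversion factor k. The small sketch below merely back-calculates the factors implied by the reported numbers; the resulting k ≈ 0.024–0.029 mSv/(mGy·cm) values are derived from this abstract, not tabulated reference data:

```python
def effective_dose(dlp_mgy_cm, k_msv_per_mgy_cm):
    """Estimate effective dose (mSv) from a dose-length product (mGy*cm)
    using a region-specific conversion factor k."""
    return dlp_mgy_cm * k_msv_per_mgy_cm

# (DLP10cm, ED10cm) pairs as reported in the abstract; dividing them out
# recovers the implied conversion factors.
pairs = {
    "cardiac (II)": (1440, 39),
    "cardiac (III)": (279, 8),
    "liver (I)": (2249, 54),
    "liver (II)": (937, 22),
    "liver (III)": (739, 18),
}
for name, (dlp, ed) in pairs.items():
    print(f"{name}: k = {ed / dlp:.4f} mSv/(mGy*cm)")
```

Standardising the DLP to a 10 cm scan length, as done in the study, is what makes these factors comparable across scanners and protocols despite differing native scan lengths.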
The aim of this phantom study is to examine radiation doses of dual- and single-energy computed tomography (DECT and SECT) in the chest and upper abdomen for three different multi-slice CT scanners. A total of 34 CT protocols were examined with the phantom N1 LUNGMAN. Four different CT examination types of different anatomic regions were performed both in single- and dual-energy technique: chest, aorta, pulmonary arteries for suspected pulmonary embolism and liver. Radiation doses were examined for the CT dose index CTDIvol and dose-length product (DLP). Radiation doses of DECT were significantly higher than doses for SECT. In terms of CTDIvol, radiation doses were 1.1–3.2 times higher, and in terms of DLP, these were 1.1–3.8 times higher for DECT compared with SECT. The third-generation dual-source CT applied the lowest dose in 7 of 15 different examination types of different anatomic regions.
This introduction to a special issue about concepts and facets of entrepreneurial diversity serves as a starting point for further discussion and research in this field. For this purpose, we provide information about the roots of the study of diversity and current trends in entrepreneurship research and present a frame for (researching) entrepreneurial diversity. Additionally, we briefly summarize the three papers selected for inclusion in this special issue. Together, they offer insights into the intersections of different diversity dimensions, personality as a deep dimension of team composition, and a general critical reflection on the conceptualization of entrepreneurial diversity. Taken together, the papers in this special issue present new findings and contribute to further advancing the long overdue research on and discussion about diversity in the field of entrepreneurship.
On the concept of the special levy (Sonderabgabe)
(2021)
As vaccination campaigns are in progress in most countries, hopes of regaining more normality are rising. However, the exact path from a pandemic to an endemic virus remains uncertain. While in the pre-vaccination phase many critical indoor situations were avoided by strict control measures, for the transition phase a certain mitigation of the effect of indoor situations seems advisable.
To better understand the mechanisms of indoor airborne transmissions, we present a new time-discrete model to calculate the level of exposure towards infectious SARS-CoV-2 aerosol and carry out a sensitivity analysis for the level of SARS-CoV-2 aerosol exposure in indoor settings. Time limitations and the use of any kind of mask were found to be strong mitigation measures, while it remains unclear how far the effort of strictly using professional face pieces instead of simple masks can be justified by the additional reduction in exposure dose. Very good ventilation of indoor spaces is mandatory. The definition of sufficient ventilation with regard to airborne SARS-CoV-2 transmission follows different rules than the standards of ventilation design. This means that especially smaller rooms most likely require a significantly greater fresh air supply than usual. Further research on 50% group models in schools is suggested. The benefits of a model in which the students come to school every day, but for a limited time, should be investigated. In terms of window ventilation, it has been found that many short opening periods are not only thermally beneficial, they also reduce the exposure dose. The fresh air supply is driven by the temperature gradient and wind speed. However, the sensitivity towards these parameters is not very high, and in times of low wind and temperature gradients, there are no arguments against keeping windows open in order to make up for the reduced air flow rate. Long total opening periods and large window surfaces will strongly reduce the exposure. Additionally, the results underline the expectable fact that exposure doses will increase when hygiene and control measures are reduced. It seems advisable to investigate what this means for the infection rate and the fatality of infections in populations with partial immunity.
Very basic considerations suggest that the value of aerosol reduction measures may be reduced with very infectious variants such as delta.
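The general mechanism of such a time-discrete exposure model can be sketched as follows. This is a generic well-mixed room balance in the spirit of Wells-Riley-type models, with invented parameter names, and not the model developed in the article:

```python
def exposure_dose(minutes, emission_quanta_per_h, room_m3, ach,
                  breathing_m3_per_h=0.5, mask_in=1.0, mask_out=1.0, dt_s=10):
    """Time-discrete, well-mixed room model of inhaled aerosol dose.

    `mask_in`/`mask_out` are penetration factors (1.0 = no mask);
    `ach` is the air-change rate in 1/h. Each step updates the airborne
    concentration from the source term minus ventilation removal, then
    accumulates the susceptible person's inhaled dose.
    """
    c = 0.0                      # airborne concentration (quanta / m3)
    dose = 0.0                   # inhaled quanta
    dt_h = dt_s / 3600.0
    for _ in range(int(minutes * 60 / dt_s)):
        # emission (reduced by the infector's mask) minus ventilation removal
        c += dt_h * (emission_quanta_per_h * mask_out / room_m3 - ach * c)
        dose += breathing_m3_per_h * dt_h * c * mask_in
    return dose

# 90-minute lesson in a 200 m3 room: doubling the air-change rate
# substantially lowers the accumulated dose.
low_ach = exposure_dose(90, 10, 200, 1.0)
high_ach = exposure_dose(90, 10, 200, 2.0)
print(round(low_ach, 3), round(high_ach, 3))
```

Even this crude balance reproduces the qualitative findings quoted above: time limits, masks on either side, and higher air-change rates all enter the dose multiplicatively or via the removal term, while the source strength of a more infectious variant scales the whole result up.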
The novel coronavirus SARS-CoV-2 is transmitted primarily indoors. Aerosols, i.e. tiny suspended particles, play an important role here. Longer stays in rooms increase the probability of transmission, even over a distance of more than 1.5 m.
One way to remove these suspended particles from the room air is air purifiers. These come in various designs and operating principles. This document is intended to help find the right type of device for the respective application. It addresses, on the one hand, large rooms with high occupancy density (e.g. school classes) and, on the other, restaurants and leisure facilities in public spaces. Last but not least, the use of these devices can also make sense in private settings.
For all devices the following applies: they help avoid high virus concentrations in the room. However, this is no substitute for regular airing and the supply of “fresh air”, and thus more oxygen, to the room.
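How to match a purifier to a room follows from a simple balance: the clean-air delivery rate must make up the difference between the targeted equivalent air-change rate and the ventilation already present. The sketch below uses an assumed target of 5 equivalent air changes per hour, a figure not taken from this document:

```python
def required_purifier_flow(room_m3, target_equiv_ach, natural_ach=0.0):
    """Clean-air delivery rate (m3/h) needed to reach a target equivalent
    air-change rate in a room, given any pre-existing ventilation."""
    return max(0.0, (target_equiv_ach - natural_ach) * room_m3)

# Example: 60 m2 classroom with 3 m ceilings, aiming for 5 equivalent air
# changes per hour, with window airing already providing about 1 ACH.
print(required_purifier_flow(60 * 3, 5, natural_ach=1))  # 720.0 m3/h
```

A figure like 720 m³/h makes the selection problem concrete: it immediately rules out small domestic units for a classroom and explains why device type and placement matter for large, densely occupied rooms.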