Web advertisements are the primary financial source for many online services, but also for cybercriminals. Successful ad campaigns rely on good online profiles of their potential customers. The financial potential of displaying ads has led to the rise of malware that injects or replaces ads on websites, in particular so-called adware. This development leads to ever more optimized and customized advertising. For these customizations, various tracking methods are used. However, only sparse work has gone into the privacy issues emerging from adware. In this paper, we investigate the tracking capabilities and related privacy implications of adware and potentially unwanted programs (PUPs). To this end, we developed a framework that allows us to analyze any network communication of the Firefox browser on the application level to circumvent encryption like TLS. We use it to dynamically analyze the communication streams of over 16,000 adware or potentially unwanted program samples that tamper with the user's browser session. Our results indicate that roughly 37% of the requests issued by the analyzed samples contain private information and are accordingly able to track users. Additionally, we analyze which tracking techniques and services are used.
This article presents an alert system for online banking that is intended to raise the level of protection against social engineering attacks on both the client and the server side. To this end, the alert system continuously compiles a situational picture of the current threat landscape in online banking. When there is a concrete need, the user is selectively warned about current fraud schemes and given targeted information on protective measures and recommended actions. Different off-the-shelf machine learning algorithms were used and compared to compute the current threat level. The effectiveness of the alert system was evaluated on real fraud cases that occurred at a banking group in Germany. In addition, the system's usability was examined in a user study with 50 participants. The first results show that the methods used are suitable for assessing the threat situation in online banking and that such an alert system meets with high acceptance among users.
In the modern information society, online transactions are an important part of our daily lives. In this work, we present a user-centric protocol that allows users to carry out trustworthy and secure transactions even when they are using an untrusted or malware-infected device. The protocol uses a CAPTCHA-like approach that prevents an attacker from modifying a transaction without the server or the client noticing. To this end, we present the user with a challenge that contains context-sensitive information about the transaction. The challenge is constructed so that it is easy for humans to solve but hard to solve automatically. To evaluate the system, we conducted a user study (n=30) and computed the probability with which an attacker can successfully guess the correct answer to the challenge. We show that the vast majority of transactions (> 94%) can be protected while the system itself remains usable.
Renewable and sustainable energy production by many small and distributed producers is revolutionizing the energy landscape as we know it. Consumers produce energy, turning them into prosumers in the smart grid. The interaction between prosumers and other entities in the grid and the optimal utilization of new smart grid components (electric cars, freezers, solar panels, etc.) are crucial for the success of the smart grid. The Power Trading Agent Competition is an open simulation platform that allows researchers to conduct low-risk studies in this new energy market. In this work, we present Maxon16, an autonomous energy broker and champion of the 2016 Power Trading Agent Competition. We present the strategies the broker used in the final round and evaluate their effectiveness by analyzing the tournament's results.
This thesis evaluates the effects of the GDPR using a technical and human-centric approach. We assess the challenges service providers face when they want to design GDPR-proof web applications. On the technical side, we perform two large-scale measurement studies. The first study aims to illuminate third-party loading dependencies in web applications. The second study provides a detailed analysis of the information-sharing networks between online advertising companies. The human-centric analysis studies how companies implemented the Right to Access and whether users can benefit from the new right.
Healthcare in Germany, in Europe, and worldwide is only at the beginning of a necessary and profound wave of digitalization. An important step in this digitalization will be to make all medical data easily available across providers. This enables new treatment methods, for example through AI approaches, and helps avoid duplicate treatments. To achieve these goals, it is essential that modern medical IT devices are networked with each other and that the resulting data is processed and stored securely. However, this process also creates new attack vectors, and the risks increase considerably.
This work first describes fundamental cybersecurity strategies that help to minimize the existing risks and to deal with the remaining ones. In addition, it discusses the concrete security needs and requirements for networking medical devices and for processing data in the cloud. Finally, it presents an overall architecture that implements these security needs.
This work presents and assesses a comprehensive threat to business chat applications: chishing, i.e., phishing via business chats. The threat has its origins in the early days of today's networked world, and the underlying problem is spoofing in its simplest form. In four of the six business chat tools analyzed in this work, it is possible to successfully forge display names, profile pictures, and other personal information. This poses a threat to companies, as it facilitates insider threats and may even invite external entities to impersonate internal employees.
Advanced Persistent Threats (APTs) are one of the main challenges in modern computer security. They are planned and performed by well-funded, highly trained, and often state-based actors. The first step of such an attack is the reconnaissance of the target. In this phase, the adversary tries to gather as much intelligence on the victim as possible to prepare further actions. An essential part of this initial data collection phase is the identification of possible gateways to intrude into the target.
In this paper, we aim to analyze the data that threat actors can use to plan their attacks. To do so, we first analyze 93 APT reports and find that most of the described attacks (80 %) begin with phishing emails sent to the victims. Based on this analysis, we measure the extent of openly available data on 30 entities to understand if and how much data they leak that an adversary could potentially use to craft sophisticated spear-phishing emails. We then use this data to quantify how many employees are potential targets for such attacks. We show that 83 % of the analyzed entities leak several attributes of users, all of which can be used to craft sophisticated phishing emails.
Web measurement studies can shed light on not yet fully understood phenomena and thus are essential for analyzing how the modern Web works. This often requires building new and adjusting existing crawling setups, which has led to a wide variety of analysis tools for different (but related) aspects. If these efforts are not sufficiently documented, the reproducibility and replicability of the measurements may suffer—two properties that are crucial to sustainable research. In this paper, we survey 117 recent research papers to derive best practices for Web-based measurement studies and specify criteria that need to be met in practice. When applying these criteria to the surveyed papers, we find that the experimental setup and other aspects essential to reproducing and replicating results are often missing. We underline the criticality of this finding by performing a large-scale Web measurement study on 4.5 million pages with 24 different measurement setups to demonstrate the influence of the individual criteria. Our experiments show that slight differences in the experimental setup directly affect the overall results and must be documented accurately and carefully.
Third-party tracking is a common and broadly used technique on the Web. Different defense mechanisms have emerged to counter these practices (e.g., browser vendors that ban all third-party cookies). However, these countermeasures only target third-party trackers and ignore the first party because the narrative is that such monitoring is mostly used to improve the utilized service (e.g., analytical services). In this paper, we present a large-scale measurement study that analyzes tracking performed by the first party but utilized by a third party to circumvent standard tracking prevention techniques. We visit the top 15,000 websites to analyze first-party cookies used to track users and a technique called “DNS CNAME cloaking”, which can be used by a third party to place first-party cookies. Using this data, we show that 76% of sites effectively utilize such tracking techniques. In a long-running analysis, we show that the usage of such cookies increased by more than 50% over 2021.
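The core of CNAME cloaking detection can be illustrated with a small sketch (not the paper's actual pipeline): a subdomain of the visited site counts as "cloaked" tracking if its CNAME record resolves into a known tracker's domain. The tracker list, the pre-resolved DNS records, and all domain names below are hypothetical.

```python
# Toy CNAME-cloaking check: a first-party subdomain whose CNAME target
# belongs to a known tracker is flagged as cloaked tracking.

def etld1(host: str) -> str:
    # Naive eTLD+1 extraction; real studies use the Public Suffix List.
    return ".".join(host.split(".")[-2:])

def find_cloaked(site: str, cnames: dict[str, str], trackers: set[str]) -> list[str]:
    """Return first-party subdomains whose CNAME target is a known tracker."""
    cloaked = []
    for sub, target in cnames.items():
        if etld1(sub) == etld1(site) and etld1(target) in trackers:
            cloaked.append(sub)
    return cloaked

# Hypothetical, pre-resolved DNS data for one site visit.
records = {
    "metrics.example.com": "example.trackercdn.net",  # cloaked tracker
    "img.example.com": "cdn.example.com",             # genuine first party
}
print(find_cloaked("example.com", records, {"trackercdn.net"}))
# → ['metrics.example.com']
```

To the browser, `metrics.example.com` looks first-party, so cookies set under it survive third-party cookie bans; only the DNS resolution reveals the tracker behind it.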
In the modern Web, service providers often rely heavily on third parties to run their services. For example, they make use of ad networks to finance their services, externally hosted libraries to develop features quickly, and analytics providers to gain insights into visitor behavior.
For security and privacy, website owners need to be aware of the content they provide their users. However, in reality, they often do not know which third parties are embedded, for example, when these third parties request additional content as it is common in real-time ad auctions.
In this paper, we present a large-scale measurement study to analyze the magnitude of these new challenges. To better reflect the connectedness of third parties, we measured their relations in a model we call third party trees, which reflects an approximation of the loading dependencies of all third parties embedded into a given website. Using this concept, we show that including a single third party can lead to subsequent requests from up to eight additional services. Furthermore, our findings indicate that the third parties embedded on a page load are not always deterministic, as 50 % of the branches in the third party trees change between repeated visits. In addition, we found that 93 % of the analyzed websites embedded third parties that are located in regions that might not be in line with the current legal framework. Our study also replicates previous work that mostly focused on landing pages of websites. We show that this method is only able to measure a lower bound as subsites show a significant increase of privacy-invasive techniques. For example, our results show an increase of used cookies by about 36 % when crawling websites more deeply.
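The "third party tree" concept can be sketched as follows: every request is linked to the resource that initiated it, and chains of such edges approximate the loading dependencies. The edge list and domain names below are made up for illustration.

```python
# Sketch of a third party tree: initiator -> child edges, plus the length
# of the longest chain of subsequent third-party loads.

from collections import defaultdict

def build_tree(edges: list[tuple[str, str]]) -> dict[str, list[str]]:
    tree = defaultdict(list)
    for initiator, child in edges:
        tree[initiator].append(child)
    return tree

def depth(tree: dict[str, list[str]], node: str) -> int:
    """1 + longest chain below `node` (a lone node has depth 1)."""
    children = tree.get(node, [])
    return 1 + max((depth(tree, c) for c in children), default=0)

edges = [
    ("site.example", "adnetwork.example"),
    ("adnetwork.example", "bidder-a.example"),
    ("adnetwork.example", "bidder-b.example"),
    ("bidder-a.example", "cdn.example"),
]
print(depth(build_tree(edges), "site.example"))  # → 4
```

Here, embedding a single ad network pulls in three further services the site owner never referenced directly, which is exactly the effect the tree model makes visible.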
The European General Data Protection Regulation (GDPR), which went into effect in May 2018, brought new rules for the processing of personal data that affect many business models, including online advertising. The regulation’s definition of personal data applies to every company that collects data from European Internet users. This includes tracking services that, until then, argued that they were collecting anonymous information and data protection requirements would not apply to their businesses.
Previous studies have analyzed the impact of the GDPR on the prevalence of online tracking, with mixed results. In this paper, we go beyond the analysis of the number of third parties and focus on the underlying information sharing networks between online advertising companies in terms of client-side cookie syncing. Using graph analysis, our measurement shows that the number of ID syncing connections decreased by around 40 % around the time the GDPR went into effect, but a long-term analysis shows a slight rebound since then. While we can show a decrease in information sharing between third parties, which is likely related to the legislation, the data also shows that the amount of tracking, as well as the general structure of cooperation, was not affected. Consolidation in the ecosystem led to a more centralized infrastructure that might actually have negative effects on user privacy, as fewer companies perform tracking on more sites.
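Client-side cookie syncing leaves a simple fingerprint in crawl data: an identifier set by one tracker reappears in a request URL sent to a different tracker, yielding one edge of the sharing graph. The following sketch uses made-up domains and IDs and is far simpler than the paper's measurement.

```python
# Simplified ID-syncing detection: if the cookie value set by tracker A
# appears in a URL requested from tracker B, record a sync edge (A, B).

def sync_edges(cookies: dict[str, str], requests: list[tuple[str, str]]) -> set[tuple[str, str]]:
    """cookies: setter-domain -> ID value; requests: (receiver-domain, url)."""
    edges = set()
    for setter, value in cookies.items():
        for receiver, url in requests:
            if receiver != setter and value in url:
                edges.add((setter, receiver))
    return edges

cookies = {"dsp.example": "uid-4711"}
requests = [
    ("ssp.example", "https://ssp.example/sync?partner_uid=uid-4711"),
    ("cdn.example", "https://cdn.example/lib.js"),
]
print(sync_edges(cookies, requests))
# → {('dsp.example', 'ssp.example')}
```

Counting such edges over time is what allows the before/after GDPR comparison of the sharing network described above.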
Measurement studies are essential for research and industry alike to understand the Web’s inner workings better and help quantify specific phenomena. Performing such studies is demanding due to the dynamic nature and size of the Web. An experiment’s careful design and setup are complex, and many factors might affect the results. However, while several works have independently observed differences in the outcome of an experiment (e.g., the number of observed trackers) based on the measurement setup, it is unclear what causes such deviations. This work investigates the reasons for these differences by visiting 1.7M webpages with five different measurement setups. Based on this, we build ‘dependency trees’ for each page and cross-compare the nodes in the trees. The results show that the measured trees differ considerably, that the cause of differences can be attributed to specific nodes, and that even identical measurement setups can produce different results.
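One minimal way to cross-compare dependency trees from different setups is the Jaccard similarity of their node sets; the study's actual comparison is node-level and more involved, and the data below is purely illustrative.

```python
# Jaccard similarity between the node sets of two dependency trees
# measured with different crawling setups (hypothetical data).

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b)

setup_a = {"site.example", "tracker.example", "cdn.example"}
setup_b = {"site.example", "tracker.example", "other-cdn.example"}
print(round(jaccard(setup_a, setup_b), 2))  # → 0.5
```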
For medical care to remain universally available and for exploding costs to be contained, the healthcare system of the future must be based on digital technologies. The criticality of the corresponding health services calls for cybersecurity; the sensitivity of the data processed in healthcare calls for data protection. A sustainable healthcare system needs a stringent legal framework; a modern, cloud-based telematics infrastructure that can be implemented in different models depending on the required security level; a restrictive approach to global public cloud providers; a particularly well-secured, high-performance research data infrastructure, for example to optimize AI capabilities; secure health applications; and more. This article provides an outlook.
Software updates play an essential role in keeping IT environments secure. If service providers delay updates or do not install them at all, this can cause unwanted security implications for their environments. This paper conducts a large-scale measurement study of the update behavior of websites and their utilized software stacks. Across 18 months, we analyze over 5.6M websites and 246 distinct client- and server-side software distributions. We found that almost all analyzed sites use outdated software. To understand the possible security implications of outdated software, we analyze the potential vulnerabilities that affect the utilized software. We show that software components are getting older and more vulnerable because they are not updated. We find that 95 % of the analyzed websites use at least one product for which a vulnerability existed.
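How "old" a deployed software component is can be quantified by comparing the release date of the observed version with that of the newest available release. The product, version numbers, and dates below are invented for illustration.

```python
# Update lag of an observed software version, in days behind the newest
# release (hypothetical release history).

from datetime import date

releases = {
    "1.0.0": date(2019, 3, 1),
    "1.1.0": date(2020, 6, 15),
    "1.2.0": date(2021, 1, 10),
}

def lag_days(observed: str) -> int:
    newest = max(releases.values())
    return (newest - releases[observed]).days

print(lag_days("1.0.0"))  # → 681
print(lag_days("1.2.0"))  # → 0
```

Aggregating this lag over many sites and products yields the "components are getting older" trend reported above.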
Four years ago, the General Data Protection Regulation (GDPR) entered the stage and brought changes for companies and users alike. Yet, especially in the dynamic environment of online marketing, new and often tricky questions keep arising. These questions have now been examined more closely in a scientific study.
Cookie notices (or cookie banners) are a popular mechanism for websites to provide (European) Internet users with a tool to choose which cookies the site may set. Banner implementations range from merely informing visitors that a site uses cookies, to offering the choice to accept or deny all cookies, to allowing fine-grained control of cookie usage. Users frequently get annoyed by the banners’ pervasiveness, as they interrupt “natural” browsing on the Web. As a remedy, different browser extensions have been developed to automate the interaction with cookie banners.
In this work, we perform a large-scale measurement study comparing the effectiveness of extensions for “cookie banner interaction.” We configured the extensions to express different privacy choices (e.g., accepting all cookies, accepting functional cookies, or rejecting all cookies) to understand their capabilities to execute a user’s preferences. The results show statistically significant differences in which cookies are set, how many of them are set, and which types are set—even for extensions that aim to implement the same cookie choice. Extensions for “cookie banner interaction” can effectively reduce the number of set cookies compared to no interaction with the banners. However, all extensions increase the tracking requests significantly except when rejecting all cookies.
This paper explores how emergent technologies such as 6G and the tactile Internet can potentially enhance cognitive personal informatics (CPI) in participatory healthcare, promoting patient-centered healthcare models through high-speed, reliable communication networks. It highlights the transition to improved patient engagement and better health outcomes facilitated by these technologies, underscoring the importance of ultra-reliable low-latency communications (URLLC) and realizing the tactile Internet’s potential in healthcare. This innovation could dramatically transform telemedicine and mobile health (mHealth) by enabling remote healthcare delivery while providing a better understanding of the inner workings of the patient. While these developments generate many advantages, they also come with disadvantages and risks. Therefore, this study addresses the critical security and privacy concerns related to the digital transformation of healthcare. Our work focuses on the challenges of managing and understanding cognitive data within the CPI and the potential threats arising from analyzing such data. It proposes a comprehensive analysis of potential vulnerabilities and cyber threats, emphasizing the need for robust security frameworks designed with resilience in mind to protect sensitive cognitive data. We present scenarios for reward and punishment systems and their impacts on users. In conclusion, we outline a vision for the future of secure, resilient, and patient-centric digital healthcare systems that leverage 6G and the tactile Internet to enhance the CPI. We offer policy recommendations and strategic directions for stakeholders to create a secure, empowering environment for patients to manage their cognitive health information.
This paper discusses the transformative potential of 6G technology and the tactile Internet in reshaping participatory healthcare models while architecting these digital healthcare systems with security and resilience by design. As healthcare continues to advance towards more inclusive and patient-centered approaches, the role of emerging technologies like mobile health, 6G, and the tactile Internet will become increasingly significant in facilitating these interactions while ensuring the security and privacy of patient data. Furthermore, the organizations providing healthcare to patients must ensure compliance with different regulations, which increasingly focus on cybersecurity issues.
Abstract
Filter lists are used by various users, tools, and researchers to identify tracking technologies on the Web. These lists are created and maintained by dedicated communities. Aside from popular blocking lists (e.g., EasyList), the communities create region-specific blocklists that account for trackers and ads that are only common in these regions. The lists aim to keep the size of a general blocklist minimal while protecting users against region-specific trackers.
In this paper, we perform a large-scale Web measurement study to understand how different region-specific filter lists (e.g., a blocklist specifically designed for French users) protect users when visiting websites. We define three privacy scenarios to understand when and how users benefit from these regional lists and what effect they have in practice. The results show that although the lists differ significantly, the number of rules they contain is unrelated to the number of blocked requests. We find that the lists’ overall efficacy varies notably. Filter lists also do not meet the expectation that they increase user protection in the regions for which they were designed. Finally, we show that the majority of the rules on the lists were not used in our experiment and that only a fraction of the rules would provide comparable protection for users.
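The rule-usage analysis can be sketched with a heavily simplified model: rules are treated as plain substrings (real lists use the much richer Adblock Plus syntax), and we record which rules ever fire and how many requests they block. All rules and URLs below are invented.

```python
# Toy filter-list evaluation: count blocked requests and report rules
# that never matched any request ("unused" rules).

def evaluate(rules: list[str], requests: list[str]):
    used = {r for r in rules if any(r in u for u in requests)}
    blocked = [u for u in requests if any(r in u for r in rules)]
    return len(blocked), sorted(set(rules) - used)

rules = ["/track.js", "ads.regional.example", "legacy-beacon"]
requests = [
    "https://site.example/app.js",
    "https://cdn.example/track.js",
    "https://ads.regional.example/bid",
]
print(evaluate(rules, requests))
# → (2, ['legacy-beacon'])
```

Scaled up, the second return value is what reveals that the majority of rules on regional lists never fire in practice.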
This paper challenges the conventional assumption in cybersecurity that users act as rational actors. Despite numerous technical solutions, awareness campaigns, and organizational strategies aimed at bolstering cybersecurity, these often overlook the prevalence of non-rational user behavior. Our study, involving a survey of 208 participants, empirically demonstrates this aspect. We found that a significant portion of users (55.3%) would accept a substantial risk (35%) to click on a potentially malicious link or attachment. This propensity increases to 61% when users are led to believe there is a 65% chance of facing no adverse consequences. To address this irrationality, we explored the efficacy of nudging mechanisms within email systems. Our qualitative user study revealed that incorporating a simple colored nudge in the email client can notably enhance the ability of users to discern malicious emails, improving decision-making accuracy by an average of 10%.
For years, researchers have been analyzing mobile Android apps to investigate diverse properties such as software engineering practices, business models, security, privacy, or usability, as well as differences between marketplaces. While similar studies on iOS have been limited, recent work has started to analyze and compare Android apps with those for iOS. To obtain the most representative analysis results across platforms, the ideal approach is to compare their characteristics and behavior for the same set of apps, e.g., to study a set of apps for iOS and their respective counterparts for Android. Previous work has only attempted to identify and evaluate such cross-platform apps to a limited degree, mostly comparing sets of apps independently drawn from app stores, manually matching small sets of apps, or relying on brittle matches based on app and developer names. This results in (1) comparing apps whose behavior and properties significantly differ, (2) limited scalability, and (3) the risk of matching only a small fraction of apps.
In this work, we propose a novel approach to create an extensive dataset of cross-platform apps for the iOS and Android ecosystems. We describe an analysis pipeline for discovering, retrieving, and matching apps from the Apple App Store and Google Play Store that we used to create a set of 3,322 cross-platform apps out of 10,000 popular apps for iOS and Android, respectively. We evaluate existing and new approaches for cross-platform app matching against a set of reference pairs that we obtained from Google's data migration service. We identify a combination of seven features from app store metadata and the apps themselves to match iOS and Android apps with high confidence (95.82 %). Compared to previous attempts that identified 14 % of apps as cross-platform, we are able to match 34 % of apps in our dataset. To foster future research in the cross-platform analysis of mobile apps, we make our pipeline available to the community.
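A feature-based matcher of this kind can be sketched in a few lines: several metadata features are compared, and a pair counts as a cross-platform match if enough of them agree. The feature names, weights-as-counts, and threshold below are illustrative, not the paper's tuned seven-feature model.

```python
# Toy cross-platform app matcher: an iOS/Android pair matches if at least
# `min_agree` of the compared metadata features are identical and non-empty.

FEATURES = ["title", "developer", "website", "icon_hash"]

def matches(ios: dict, android: dict, min_agree: int = 3) -> bool:
    agree = sum(1 for f in FEATURES if ios.get(f) and ios.get(f) == android.get(f))
    return agree >= min_agree

ios_app = {"title": "FooChat", "developer": "Foo Inc.",
           "website": "foo.example", "icon_hash": "abc123"}
android_app = {"title": "FooChat", "developer": "Foo Inc.",
               "website": "foo.example", "icon_hash": "zzz999"}
print(matches(ios_app, android_app))  # → True
```

Requiring agreement on several independent features is what makes such matching more robust than the name-only heuristics criticized above: a shared title alone would not clear the threshold.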
In this paper, we shed light on shared hosting services’ security and trust implications and measure their attack surfaces. To do so, we analyzed 30 shared hosters and found that all of them might leak relevant information, which could be abused unnoticed. An adversary could use this attack surface to covertly extract data from various third parties registered with a shared hoster. Furthermore, we found that most hosters suffer from vulnerabilities that can be used by an internal attacker (i.e., someone using the service) to compromise other hosted services or the entire system.