The European General Data Protection Regulation (GDPR), which went into effect in May 2018, brought new rules for the processing of personal data that affect many business models, including online advertising. The regulation's definition of personal data applies to every company that collects data from European Internet users. This includes tracking services that, until then, argued that they were collecting anonymous information and that data protection requirements therefore would not apply to their businesses.
Previous studies have analyzed the impact of the GDPR on the prevalence of online tracking, with mixed results. In this paper, we go beyond the analysis of the number of third parties and focus on the underlying information sharing networks between online advertising companies in terms of client-side cookie syncing. Using graph analysis, our measurement shows that the number of ID syncing connections decreased by around 40 % around the time the GDPR went into effect, but a long-term analysis shows a slight rebound since then. While we can show a decrease in information sharing between third parties, which is likely related to the legislation, the data also shows that the amount of tracking, as well as the general structure of cooperation, was not affected. Consolidation in the ecosystem led to a more centralized infrastructure that might actually have negative effects on user privacy, as fewer companies perform tracking on more sites.
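The client-side cookie-syncing measurement described above can be illustrated with a minimal sketch: each observed ID sync is modeled as a directed edge between two third parties, and the edge counts of two crawls are compared. The company names and sync events below are purely hypothetical illustration data, not results from the study.

```python
# Minimal sketch of an ID-syncing graph: each observed client-side
# cookie sync is a directed edge (sender -> receiver). All names and
# events are hypothetical, not measurement data.
from collections import defaultdict

def build_sync_graph(sync_events):
    """Return adjacency sets: sender -> set of receivers."""
    graph = defaultdict(set)
    for sender, receiver in sync_events:
        graph[sender].add(receiver)
    return graph

def edge_count(graph):
    """Number of distinct ID-syncing connections in the graph."""
    return sum(len(receivers) for receivers in graph.values())

# Hypothetical crawls before and after the GDPR took effect.
pre_gdpr = [("adtech-a", "adtech-b"), ("adtech-a", "adtech-c"),
            ("adtech-b", "adtech-c"), ("adtech-c", "adtech-a"),
            ("adtech-d", "adtech-a")]
post_gdpr = [("adtech-a", "adtech-b"), ("adtech-b", "adtech-c"),
             ("adtech-c", "adtech-a")]

before = edge_count(build_sync_graph(pre_gdpr))   # 5
after = edge_count(build_sync_graph(post_gdpr))   # 3
change = (after - before) / before                # -0.4, i.e. a 40 % drop
print(f"sync connections: {before} -> {after} ({change:+.0%})")
```

On real crawl data, the same adjacency structure also supports the paper's graph-level questions (e.g., degree distributions as an indicator of centralization), but the sketch only covers edge counting.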
Measurement studies are essential for research and industry alike to better understand the Web's inner workings and to help quantify specific phenomena. Performing such studies is demanding due to the dynamic nature and size of the Web. An experiment's careful design and setup are complex, and many factors might affect the results. However, while several works have independently observed differences in the outcome of an experiment (e.g., the number of observed trackers) based on the measurement setup, it is unclear what causes such deviations. This work investigates the reasons for these differences by visiting 1.7M webpages with five different measurement setups. Based on this, we build 'dependency trees' for each page and cross-compare the nodes in the trees. The results show that the measured trees differ considerably, that the causes of the differences can be attributed to specific nodes, and that even identical measurement setups can produce different results.
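The node cross-comparison step can be sketched as follows: each page's 'dependency tree' is reduced to a set of node labels (here, hostnames of loaded resources) and the sets from two setups are compared, e.g. via Jaccard similarity. The nested-dict tree format and the example hostnames are assumptions for illustration, not the study's actual data model.

```python
# Hedged sketch of cross-comparing 'dependency trees' from two
# measurement setups. Trees are hypothetical nested dicts
# {node: {child: {...}}}; the hostnames are made up.

def node_set(tree):
    """Flatten a nested dict tree into the set of its node labels."""
    nodes = set()
    stack = [tree]
    while stack:
        subtree = stack.pop()
        for node, children in subtree.items():
            nodes.add(node)
            stack.append(children)
    return nodes

def jaccard(a, b):
    """Similarity of two node sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

setup_a = {"page.example": {"cdn.example": {}, "tracker-one.example": {}}}
setup_b = {"page.example": {"cdn.example": {}, "tracker-two.example": {}}}

overlap = jaccard(node_set(setup_a), node_set(setup_b))
print(f"node overlap between setups: {overlap:.2f}")  # 0.50
```

A low overlap for the same page across setups is exactly the kind of deviation the study attributes to specific nodes.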
Software updates play an essential role in keeping IT environments secure. If service providers delay or skip updates, this can have unwanted security implications for their environments. In this paper, we conduct a large-scale measurement study of the update behavior of websites and their utilized software stacks. Over a period of 18 months, we analyze over 5.6M websites and 246 distinct client- and server-side software distributions. We find that almost all analyzed sites use outdated software. To understand the possible security implications of outdated software, we analyze the potential vulnerabilities that affect the utilized software. We show that software components are getting older and more vulnerable because they are not updated. We find that 95 % of the analyzed websites use at least one product for which a vulnerability existed.
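The core check behind such an audit can be sketched as follows: compare a site's detected component versions against the latest release and against the first version that fixes a known vulnerability. The products, version numbers, and "first fixed" table below are hypothetical illustration data, not the paper's vulnerability database.

```python
# Sketch of flagging outdated, potentially vulnerable software.
# All version data below is illustrative, not real advisory data.

def parse(version):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

# Hypothetical latest release and first patched version per product.
latest = {"jquery": "3.7.1", "nginx": "1.25.3"}
first_fixed = {"jquery": "3.5.0", "nginx": "1.25.0"}

def audit(detected):
    """Return (product, version, outdated?, vulnerable?) per component."""
    findings = []
    for product, version in detected.items():
        outdated = parse(version) < parse(latest[product])
        vulnerable = parse(version) < parse(first_fixed[product])
        findings.append((product, version, outdated, vulnerable))
    return findings

site = {"jquery": "1.12.4", "nginx": "1.25.3"}
for product, version, outdated, vulnerable in audit(site):
    print(f"{product} {version}: outdated={outdated}, vulnerable={vulnerable}")
```

Real-world version schemes are messier (pre-releases, distro backports), so a production audit needs a proper version-comparison library and per-CVE affected ranges; the tuple comparison here only illustrates the principle.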
Advanced Persistent Threats (APTs) are one of the main challenges in modern computer security. They are planned and performed by well-funded, highly trained, and often state-sponsored actors. The first step of such an attack is the reconnaissance of the target. In this phase, the adversary tries to gather as much intelligence on the victim as possible to prepare further actions. An essential part of this initial data collection phase is the identification of possible gateways into the target.
In this paper, we aim to analyze the data that threat actors can use to plan their attacks. To do so, we first analyze 93 APT reports and find that most attacks (80 %) began with phishing emails sent to the victims. Based on this analysis, we measure the extent of openly available data on 30 entities to understand if and how much data they leak that could be used by an adversary to craft sophisticated spear-phishing emails. We then use this data to quantify how many employees are potential targets for such attacks. We show that 83 % of the analyzed entities leak several attributes of users, all of which can be used to craft sophisticated phishing emails.
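The target-quantification step can be sketched in a few lines: an employee counts as a potential spear-phishing target once enough personal attributes (e.g., name and e-mail address) are openly available. The attribute threshold and the records below are hypothetical assumptions, not the paper's methodology or data.

```python
# Sketch of counting potential spear-phishing targets from leaked
# attributes. The required-attribute set and records are hypothetical.

REQUIRED = {"name", "email"}

def potential_targets(records):
    """Employees for whom all required attributes leaked openly."""
    return [record for record in records if REQUIRED <= set(record)]

employees = [
    {"name": "A. Example", "email": "a@example.org", "role": "admin"},
    {"name": "B. Example"},                        # too little data leaked
    {"name": "C. Example", "email": "c@example.org"},
]
targets = potential_targets(employees)
print(f"{len(targets)} of {len(employees)} employees are potential targets")
```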
Web measurement studies can shed light on not yet fully understood phenomena and thus are essential for analyzing how the modern Web works. This often requires building new and adjusting existing crawling setups, which has led to a wide variety of analysis tools for different (but related) aspects. If these efforts are not sufficiently documented, the reproducibility and replicability of the measurements may suffer—two properties that are crucial to sustainable research. In this paper, we survey 117 recent research papers to derive best practices for Web-based measurement studies and specify criteria that need to be met in practice. When applying these criteria to the surveyed papers, we find that the experimental setup and other aspects essential to reproducing and replicating results are often missing. We underline the criticality of this finding by performing a large-scale Web measurement study on 4.5 million pages with 24 different measurement setups to demonstrate the influence of the individual criteria. Our experiments show that slight differences in the experimental setup directly affect the overall results and must be documented accurately and carefully.
This article presents an alert system for online banking that is intended to raise the level of protection against social-engineering attacks on both the client and the server side. To this end, the alert system compiles a continuous situational picture of the current threat landscape in online banking. When there is a concrete need, the user is selectively warned about current fraud schemes and given targeted information on protective measures and recommended actions. To assess the current threat level, various off-the-shelf machine-learning algorithms were applied and compared. The effectiveness of the alert system was evaluated on real fraud cases that occurred at a banking group in Germany. In addition, the system's usability was examined in a user study with 50 participants. The first results show that the methods used are suitable for assessing the threat situation in online banking and that such an alert system meets with high acceptance among users.
To ensure that comprehensive medical care can continue to be provided nationwide and that exploding costs are contained, the healthcare system of the future must be based on digital technologies. The criticality of the corresponding health services calls for cyber security; the sensitivity of the data processed in healthcare calls for data protection. A sustainable healthcare system needs a stringent legal framework; a modern, cloud-based telematics infrastructure that can be implemented in different models depending on security requirements; a restrictive approach to global public-cloud providers; a particularly well-secured, high-performance research data infrastructure, for example to optimize AI capabilities; secure health applications; and more. This article provides an outlook.