DTL 2016 Travel Grants

The following proposals are among the top third of all proposals submitted, and their authors are invited to share their ideas and work with other members of the DTL community. We have reserved a special place for them to present their projects at the forthcoming DTL Conference in November 2016 at Columbia University (New York), along with a travel grant to attend.


Split-test: Can Your Metadata Reconcile Multiple Identities?

Augustin Chaintreau (co-PI) (Computer Science Department, Columbia University); Oana Goga (co-PI) (Max Planck Institute for Software Systems); Christopher Riederer (Columbia University)

Pseudonyms, pen names and other non-identifying signatures have ancestral roots; recently they have become so common as to be a fixture of our online life. The social web binds each of our activities to immediate and attributed exposure, as Facebook does via its real-name policy. But outside social networking boundaries, pseudonyms are widely used to split the trails of our digital footprints. As many have experienced, the lifestyle, activities or opinions you promote in your photo stream, your microblog, or any of your profiles are sometimes better kept at bay from your everyday life.

Unfortunately, as we proved in our past research, many among us are eager to share metadata in a way that renders pseudonyms de facto obsolete. The craft in the choice of two innocuous usernames makes no difference when a few glances at your locations or other features reveal that those profiles are jointly owned. In this project, we build Split-test, which allows everyone to immediately and safely detect when two of their accounts are at risk of identity reconciliation. Split-test turns recent research on profile matching and mobility models (including some conducted by the PIs) upside down. Through Split-test, owners of Instagram, Twitter, Facebook and other online accounts submit a pair of accounts and receive an estimate of their similarity, together with a list of their most identifying features. Split-test is designed to (1) leverage social network profile overlap and maximum likelihood prediction in a conservative manner, (2) exploit multiple datasets gathered by the PIs across social media domains for initial validation and self-improvement, and (3) provide privacy by comparing data and metadata of both accounts via local access.
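For illustration only, here is a minimal sketch (not the project's implementation) of the kind of conservative linkability signal such a tool could compute from two accounts, reduced here to username similarity and overlap between posted locations. The fixed weights stand in for the maximum-likelihood model the abstract describes, and all names are hypothetical.

```python
import difflib

def username_similarity(u1: str, u2: str) -> float:
    """String similarity of two usernames, in [0, 1]."""
    return difflib.SequenceMatcher(None, u1.lower(), u2.lower()).ratio()

def location_overlap(locs1: set, locs2: set) -> float:
    """Jaccard overlap of the location sets attached to each account's posts."""
    if not locs1 or not locs2:
        return 0.0
    return len(locs1 & locs2) / len(locs1 | locs2)

def reconciliation_risk(account_a: dict, account_b: dict):
    """Return a crude linkability score plus the most identifying features.

    Illustrative weights; a real system would replace them with a
    maximum-likelihood model trained on ground-truth matched pairs.
    """
    signals = {
        "username": username_similarity(account_a["username"], account_b["username"]),
        "locations": location_overlap(account_a["locations"], account_b["locations"]),
    }
    score = 0.4 * signals["username"] + 0.6 * signals["locations"]
    ranked = sorted(signals.items(), key=lambda kv: kv[1], reverse=True)
    return score, ranked

# Example: two pseudonymous accounts that share check-in locations.
a = {"username": "alice_nyc", "locations": {"40.73,-73.99", "40.75,-73.98"}}
b = {"username": "al1ce",     "locations": {"40.73,-73.99", "40.80,-73.96"}}
print(reconciliation_risk(a, b))
```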


A Platform for Contextual Mobile Privacy

Serge Egelman (ICSI / UC Berkeley); Primal Wijesekera (University of British Columbia, Canada)

Mobile platforms now include privacy controls that allow users to control how third-party applications access potentially sensitive data, such as personal information (e.g., address book contacts) or sensor data (e.g., location). However, previous research has shown us that these systems are failing users due to poor usability. For instance, users often do not notice the indicators or do not understand them. As a result, both Apple's iOS and Google's Android have recently shifted towards an ask-on-first-use model: when applications first access potentially sensitive information, the user is prompted with a runtime warning. However, this model is still likely failing users because the context in which the user first grants access may be substantially different than the contexts in which applications subsequently access the data (without having to ask permission from the user). Our goal is to give mobile device users more control over how applications are accessing their data by designing systems that are contextually aware, such that users are only confronted with privacy decisions in situations that they are likely to find concerning. In this proposal, we outline two very specific deliverables: instrumentation to monitor how often (and under what circumstances) sensitive user data is being accessed, and new privacy controls to allow users to more effectively regulate access to their information.
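As a sketch of the contextual idea (the deliverables above are not specified at code level, so everything here is an assumption for illustration), a permission manager could re-prompt only when a sensitive access occurs in a context that no longer matches the one in which the user first granted it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    """Deliberately coarse notion of context: the requesting app and
    whether it was visible to the user at the time of access."""
    app: str
    foreground: bool

class ContextualPermissionManager:
    """Prompt only when a sensitive access happens in a context that
    differs from the one in which the user originally granted it."""

    def __init__(self):
        self._grants = {}  # (app, permission) -> Context at first grant

    def check(self, app: str, permission: str, ctx: Context, prompt) -> bool:
        key = (app, permission)
        if key not in self._grants:
            # Ask-on-first-use: record the granting context.
            if prompt(f"{app} requests {permission} ({ctx})"):
                self._grants[key] = ctx
                return True
            return False
        # Re-prompt if the context no longer matches the granting one,
        # e.g. a location read while the app is in the background.
        if ctx != self._grants[key]:
            return prompt(f"{app} uses {permission} in a new context ({ctx})")
        return True

mgr = ContextualPermissionManager()
allow = lambda msg: (print("PROMPT:", msg), True)[1]
mgr.check("maps", "location", Context("maps", foreground=True), allow)
mgr.check("maps", "location", Context("maps", foreground=False), allow)  # re-prompts
```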

Cross-device Tracking Canaries: Automated analysis of personally identifiable information flows

George Danezis (University College London); Vasilios Mavroudis (University College London)

Cross-device Tracking (XDT) is currently the Holy Grail for marketers, allowing them to track a user's activities across different devices and provide more targeted content. Unfortunately, XDT comes with numerous security and privacy shortcomings which have been widely neglected by advertisers. More specifically, the XDT ecosystem is usually treated as a walled garden in which only benign actors are assumed to participate. Additionally, in most cases users participate inadvertently, without any clearly usable option to opt out. One prime example is XDT based on ultrasounds, where inaudible beacons are used, unbeknownst to the user, to link her across her devices. Existing tools (e.g., Adblock Plus, AdAway, Ghostery) are aimed at blacklisting all traffic from advertising companies. However, to mitigate the negative effects of XDT the user needs to install a different application on each of her devices, and in many cases go through a technically challenging process (e.g., Android OS rooting). Moreover, some of these applications maintain whitelists with opaque entry procedures.

We argue that blacklisting is not a sustainable security practice; instead we focus on tools that give end users full control over their personal data, their profile, and their participation in the XDT ecosystem. Such an approach will protect the user, raise awareness, and promote the use of privacy-preserving practices in the advertising industry. Towards this goal, we will first develop a testbed to study the cross-device tracking techniques most commonly used in practice, leveraging our existing work on cross-device tracking using ultrasounds and extending our current testbed to support multiple XDT techniques. Surprisingly, no systematic analysis of the techniques used in the wild has been conducted until now, and there is very little understanding of the complex device-linking methods used by advertisers. Subsequently, based on our findings, we will design and develop a set of tools that regulate the flow of personal data and the XDT process. These tools will identify interesting portions of source code in websites and applications (e.g., Windows binaries, Android apps), and either automatically monitor and filter the flow of personal data or, in the case of compiled binaries, notify the user and highlight areas of interest for a human analyst to study.
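To make the ultrasound example concrete, here is a minimal, hypothetical detector for near-ultrasonic beacons: it measures what fraction of an audio buffer's spectral energy falls in the 18-20 kHz band, a range inaudible to most adults but usable by XDT beacons. The band and the score are illustrative choices, not taken from the authors' testbed.

```python
import numpy as np

def ultrasonic_beacon_score(samples: np.ndarray, sample_rate: int,
                            band=(18_000, 20_000)) -> float:
    """Fraction of spectral energy in the near-ultrasonic band.

    Audible audio concentrates its energy well below 18 kHz, so a high
    fraction here suggests an embedded tracking beacon.
    """
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return float(spectrum[in_band].sum() / total) if total > 0 else 0.0

# Example: a 19 kHz tone buried in broadband noise (one second of audio).
sr = 48_000
t = np.arange(sr) / sr
audio = 0.1 * np.sin(2 * np.pi * 19_000 * t) + 0.05 * np.random.randn(sr)
print(f"ultrasonic energy fraction: {ultrasonic_beacon_score(audio, sr):.2f}")
```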


Choose Your Own Tw-adventure: A Game for Reverse-Engineering

Libby Hemphill (Illinois Institute of Technology); Carol Elizabeth Schmitz (Illinois Institute of Technology)

In March 2016, ten years after its initial launch, Twitter changed its timeline display behavior from reverse-chronological order to an algorithmically curated order based on what Twitter believes users are “likely to care about most.” [1] Twitter claims to select those tweets “based on accounts you interact with most, Tweets you engage with, and much more.” [1] In this project, we will:

1. reverse engineer how Twitter constructs these curated timelines (see the measurement sketch after this list), and
2. develop a game that teaches users how their behaviors influence the content they are shown.
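As an illustration of step 1 (the project does not specify a method at this level of detail), one could measure, for each tweet, how far the curated timeline displaces it from strict reverse-chronological order and check whether that displacement tracks engagement features. The field names below are hypothetical.

```python
def rank_displacement(displayed, tweets_by_id):
    """For each tweet, how far the curated timeline moved it relative to
    reverse-chronological order (positive = promoted)."""
    chrono = sorted(displayed, key=lambda tid: tweets_by_id[tid]["created_at"],
                    reverse=True)
    chrono_pos = {tid: i for i, tid in enumerate(chrono)}
    return {tid: chrono_pos[tid] - i for i, tid in enumerate(displayed)}

def feature_effect(displacements, tweets_by_id, feature):
    """Crude signal: mean displacement of the top half vs. bottom half of
    tweets when sorted by a candidate feature (e.g. like_count)."""
    ranked = sorted(displacements, key=lambda tid: tweets_by_id[tid][feature])
    half = len(ranked) // 2
    low = sum(displacements[t] for t in ranked[:half]) / max(half, 1)
    high = sum(displacements[t] for t in ranked[half:]) / max(len(ranked) - half, 1)
    return high - low  # > 0 suggests the feature is rewarded by the algorithm

tweets = {
    "t1": {"created_at": 3, "like_count": 120},
    "t2": {"created_at": 2, "like_count": 4},
    "t3": {"created_at": 1, "like_count": 300},
}
displayed = ["t3", "t1", "t2"]  # curated order as shown to the user
d = rank_displacement(displayed, tweets)
print(d, feature_effect(d, tweets, "like_count"))
```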

Twitter’s end users stand to gain the most from our project, but the impacts of the curation algorithm are likely useful to activists as well. For instance, if the algorithm privileges certain types of content at the expense of others (e.g., dominant voices over dissenting views), exposing the algorithm will make it easier to counteract its effects.


Outcomes-Based Evaluation of Web Tracking Defenses

Franziska Roesner (University of Washington); Tadayoshi Kohno (University of Washington); Paul Vines (University of Washington)

Much of the content on the web today is funded by advertising. While this phenomenon represents a windfall for users in terms of available content and services, it comes at a price in terms of privacy. The primary goal of advertisers is to advertise as effectively as possible, which has led to an entire data economy in which advertisers attempt to gather as much data about users as possible in order to target them with advertisements more effectively.

Sophisticated targeting of advertisements can be potentially harmful to end users in several ways. First, effectively targeting ads requires that advertisers collect and algorithmically infer significant amounts of information about users, including possibly private and/or sensitive information, such as health conditions or financial situations (e.g., debt). Second, advertisers can (perhaps unintentionally) take advantage of this information to target ads that manipulate users (e.g., targeting people prone to debt or depression) or discriminate on pricing.

These privacy and other concerns surrounding the practices of Internet advertising companies have led to a variety of defenses aimed at preventing these types of tracking and targeted advertising. For example, browser extensions like AdBlock directly block ads; browser extensions like Ghostery and Privacy Badger aim to block trackers; and browser vendors have implemented a “Do Not Track” option that sets a header on all outgoing requests. However, to date there has not been a large-scale empirical evaluation of the effectiveness of these defenses, leaving end users without clear options for protecting their privacy. Indeed, a recent EU report [2] on online privacy tools identified the need for rigorous evaluation of tracking defenses.
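A sketch of what an outcomes-based measurement could look like: record browsing sessions with and without a defense (e.g., as HAR logs) and compare how many requests reach known tracker domains. The tracker list and toy sessions below are placeholders, not the study's methodology.

```python
from urllib.parse import urlparse

# Placeholder list; a real evaluation would use a curated tracker database.
TRACKER_DOMAINS = {"doubleclick.net", "google-analytics.com"}

def tracker_requests(har: dict) -> int:
    """Count requests to known tracker domains in a recorded browsing
    session (HAR format), so runs with and without a defense can be
    compared on outcomes rather than on claimed behavior."""
    count = 0
    for entry in har["log"]["entries"]:
        host = urlparse(entry["request"]["url"]).hostname or ""
        if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
            count += 1
    return count

# Toy sessions; in practice these would be HAR files exported from a
# browser run with and without the defense under test.
baseline = {"log": {"entries": [
    {"request": {"url": "https://news.example.com/"}},
    {"request": {"url": "https://ads.doubleclick.net/pixel"}},
    {"request": {"url": "https://www.google-analytics.com/collect"}},
]}}
defended = {"log": {"entries": [
    {"request": {"url": "https://news.example.com/"}},
]}}
b, d = tracker_requests(baseline), tracker_requests(defended)
print(f"tracker requests blocked: {1 - d / max(b, 1):.0%}")
```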

Transparency in Ranking

Julia Stoyanovich (Drexel University); Vera Zaychik Moffitt (Drexel University)

Algorithmic decisions often result in scoring and ranking individuals, to determine credit worthiness, desirability for college admissions and employment, and attractiveness as dating partners. As is often the case with algorithmic processes, rankers can, and do, misbehave: they discriminate against individuals and violate privacy. Rankers tend to be opaque, making discrimination and privacy violations difficult to identify. Despite the ubiquity of rankers, there is, to the best of our knowledge, no technical work that focuses on making rankers transparent. The goal of our project is to fill this gap.

In this project we propose to lay the foundations for enabling transparency for algorithmic rankers. We will do so by developing methods for explaining ranked outputs to a user, making ranked results less opaque. We will also develop methods for reverse-engineering rankers, which will demystify the ranking process, and will help support auditing for fairness and non-discrimination. We will implement and evaluate all methods developed as part of this work in a user-facing prototype called TranspaRank.
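For the simplest case of a linear ranker, both goals admit a compact sketch: explain a ranked item by its per-feature score contributions, and reverse-engineer unknown weights from observed (item, score) pairs by least squares. TranspaRank is not described at this level of detail, so the features and weights below are illustrative.

```python
import numpy as np

FEATURES = ["credit_history", "income", "debt_ratio"]

def explain(item: np.ndarray, weights: np.ndarray):
    """Explain one item's score as per-feature contributions,
    sorted by how much each feature moved the score."""
    contrib = item * weights
    order = np.argsort(-np.abs(contrib))
    return [(FEATURES[i], float(contrib[i])) for i in order]

def reverse_engineer(items: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Recover a linear ranker's weights from (item, score) observations
    by least squares; auditors can then inspect the recovered weights."""
    w, *_ = np.linalg.lstsq(items, scores, rcond=None)
    return w

true_w = np.array([0.5, 0.3, -0.2])   # hidden weights of the opaque ranker
X = np.random.rand(50, 3)             # observed items
y = X @ true_w                        # their observed scores
w_hat = reverse_engineer(X, y)
print("recovered weights:", np.round(w_hat, 2))
print("explanation for item 0:", explain(X[0], w_hat))
```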

This work will make critical contributions to computer science, and will give transparency tools to everyday Web users and regulators. All outcomes of this work will be made publicly available as open source.

