
DTL Grantees 2017

45 projects from 18 different countries were submitted to our 2017 Grants Program. After several intensive days reviewing all the submissions, the DTL Research Committee Chairs, Nikolaos Laoutaris and Claude Castelluccia, are proud to announce the projects awarded a €50,000 grant each.

 

A note from the DTL Research Committee Chairs, Nikolaos Laoutaris and Claude Castelluccia:

All 45 submissions received were discussed extensively online, and 11 of them were further vetted at the live PC meeting.

To make it to the finalist stage, a proposal had to describe novel, working end-user software or a collection platform that improves data/algorithm transparency and/or privacy.

Proposals got bonus points if they targeted priority areas for this year or if they built constructively upon earlier DTL tools.

Nine of the 11 submissions discussed in the PC meeting were presented, in no particular order, to the DTL board on Friday, June 23. The board selected six submissions to fund.

We thank the PC for their hard work reviewing, their extensive online discussions, and their participation in the PC meeting. We thank the board for their pointed questions and for selecting submissions that we all hope will produce transparency software. We thank all the submitters for their time.

We expect the grant awardees to complete their software, make the code and data available, acknowledge the support from DTL, and present their results with a demo at DTL next year.

FA*IR: A tool for fair rankings in search

Meike Zehlike (Technische Universität Berlin); Francesco Bonchi (ISI Foundation // Eurecat); Carlos Castillo (Eurecat // UPF); Sara Hajian (Eurecat); Ricardo Baeza-Yates (UPF); Odej Kao (Technische Universität Berlin)

People search engines are increasingly common for job recruiting, for finding a freelancer, and even for finding companionship or friendship. As in similar settings, a top-k ranking algorithm is used to shortlist and order the items (persons, in this case), since when the number of candidates matching a query is large, most users will not scan the entire list. Conventionally, these lists are ranked in descending order of some measure of the relative quality of items (e.g., years of experience or education, up-votes, or inferred attractiveness). Unsurprisingly, the results of these ranking and search algorithms potentially have an impact on the people who are ranked, and contribute to shaping the experience of everybody online and offline. Given this importance and impact, our aim is to develop the first fair open-source search API. This fair ranking tool will enforce ranked group fairness, ensuring that all prefixes of the ranking have a fair share of items across the groups of interest, and ranked individual fairness, reducing the number of cases in which a less qualified or lower-scoring item is placed above a more qualified or higher-scoring item. We will create this fair search API by extending a popular, well-tested open-source search engine: Apache Solr. We will develop this search API considering both the specific use case of people search and a general-purpose search engine with fairness criteria. Taking a long-term view, we believe the use of this tool will be an important step towards achieving diversity and reducing inequality and discrimination in the online world, and consequently in society as a whole.
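As an illustration of the ranked group fairness condition described above, the following minimal Python sketch checks whether every prefix of a ranking contains a minimum share of protected-group items. The fixed proportion p and the simple floor-based threshold are our simplifying assumptions; the actual FA*IR test derives the per-prefix minimum statistically.

```python
from math import floor

def satisfies_ranked_group_fairness(ranking, is_protected, p):
    """Check that every prefix of `ranking` contains at least
    floor(p * prefix_length) protected items.

    Simplified sketch: FA*IR itself derives the per-prefix minimum
    from a binomial model with a significance correction.
    """
    protected_seen = 0
    for k, item in enumerate(ranking, start=1):
        if is_protected(item):
            protected_seen += 1
        if protected_seen < floor(p * k):  # prefix of length k fails the quota
            return False
    return True

# Hypothetical usage: candidates as (id, score, protected_flag) tuples
candidates = [("a", 0.9, False), ("b", 0.8, True), ("c", 0.7, False)]
ranking = sorted(candidates, key=lambda c: c[1], reverse=True)
print(satisfies_ranked_group_fairness(ranking, lambda c: c[2], p=0.4))
```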

Transparency via Automated Dynamic Analysis at Scale

Serge Egelman (UC Berkeley); Primal Wijesekera (UC Berkeley)

We propose a new transparency tool—in the form of a website and API—that allows end-users, regulators, and developers to examine the privacy behaviors of mobile applications. This tool will display the results of our automated application analysis, providing transparency into data-sharing behaviors. Our ultimate goal is to create an end-to-end testbed that allows us to offer analytics-as-a-service at scale: we take a mobile application binary (i.e., Android APK) as input, automatically execute it in a virtualized environment that is monitored by our instrumentation, perform a broad exploration of code branches via a combination of simulated user input and crowdsourced real user input, and then generate reports of relevant application security and privacy behaviors as output.
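As a rough illustration, a single record in the proposed report database/API might look like the following. All field names and values here are hypothetical placeholders for the kinds of behaviors described above, not the project's actual schema.

```python
import json

# Hypothetical example of one analysis report record; the schema is
# assumed for illustration only.
report = {
    "apk": "com.example.flashlight",
    "analysis": "automated dynamic analysis in an instrumented emulator",
    "observed_behaviors": [
        {"type": "location_access",
         "destination": "tracker.example.net",
         "trigger": "simulated user input"},
        {"type": "pii_leak",
         "field": "device_identifier",
         "destination": "analytics.example.com",
         "trigger": "crowdsourced user input"},
    ],
}
print(json.dumps(report, indent=2))
```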

Our tools will allow us to detect how applications access and share sensitive data, thereby providing transparency into location tracking, device and user fingerprinting, PII leakage, and even various anti-competitive business practices (e.g., privacy policy and legal violations). The results of this automated and reproducible analysis will be structured in a database and made available to the research community, to detect emergent security threats; to relevant regulatory and enforcement authorities, who can act on violations of regulations and/or privacy policies; and to the general public, who can find more information about the applications they or their family members might use. Through the automatic generation and dissemination of this data using our tools, we will provide transparency into mobile application behaviors.

Increasing transparency of data aggregation by Facebook and partners

Alan Mislove (Northeastern University); Krishna P. Gummadi (Max Planck Institute for Software Systems); Giridhari Venkatadri (Northeastern University)

In this proposal, we aim to develop a tool for end users to explore how their data is being aggregated by Facebook and its partners. In other words, our goal is to allow users to see what data Facebook knows about them and is making available as a targeting parameter to its advertisers, and whether that data comes from Facebook itself or a data broker. Our proposal has three primary tasks:

1) Measurement study of data brokers. We first plan on collecting a list of all of the Facebook partner data brokers, and the categories of data that they provide, by crawling Facebook’s ad submission web pages in countries around the world. Our preliminary data crawl shows that there are at least five different data brokers partnering with Facebook, which collectively provide 35 categories of data (covering 1,111 different potential attributes), and that the data they provide varies significantly by country (a simplified sketch of this grouping follows the list below).

2) End-user tool to reveal Facebook and data broker information. We propose to then build a tool that will (a) show Facebook users the attributes that Facebook has on them and makes available to advertisers for targeting, and (b) reveal the data partner from which each attribute came.

3) User survey for data accuracy. Finally, we propose to implement a user survey as part of the end-user tool. This survey will allow users to report back to us (in an anonymized fashion) about the completeness and accuracy of the data that Facebook and its partners have collected. The results of this survey will allow us to understand how much data Facebook is collecting worldwide.
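As a minimal illustration of the grouping performed in task 1 above, the sketch below aggregates crawled targeting attributes by the data broker that supplies them. The sample records and field names are illustrative assumptions, not real crawl output.

```python
from collections import defaultdict

# Hypothetical records from a crawl of ad-targeting attribute pages.
crawled_attributes = [
    {"attribute": "Household income: top 10%", "source": "BrokerA", "country": "US"},
    {"attribute": "Frequent traveler",          "source": "BrokerB", "country": "US"},
    {"attribute": "New vehicle buyer",          "source": "BrokerA", "country": "DE"},
]

# Group attributes by the broker that provides them.
by_broker = defaultdict(list)
for record in crawled_attributes:
    by_broker[record["source"]].append(record["attribute"])

for broker, attrs in sorted(by_broker.items()):
    print(f"{broker}: {len(attrs)} attributes, e.g. {attrs[0]}")
```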

Facing the hard truth: showing users what mobile apps can learn about them from the location data they collect

Albert Banchs (University Carlos III of Madrid / IMDEA Networks); Marco Fiore (Consiglio Nazionale delle Ricerche); Marco Gramaglia (University Carlos III of Madrid); Dario Bega (University Carlos III of Madrid / IMDEA Networks)

While most mobile applications nowadays fulfil basic privacy-preserving properties, they still leave significant surfaces open for privacy breaches that leverage subtle features of the collected data. The focus of this proposal is to analyse this problem for trajectory data. Such data is the foundation of Location-Based Services (LBS), which represent a significant portion of today’s most popular mobile services, but it may also be collected by applications that offer opt-in geo-referencing, or even by the mobile operator itself. While the user is usually informed at install time that the service will access positioning data, she is given absolutely no information about the frequency with which data is collected or how precisely it is used upon collection, including purposes that go beyond the primary objective of the application. The objective of this project is to raise user awareness of the privacy leakage of trajectory data. To this end, we will provide end-users with (i) clear, intuitive visualizations of the precise spatiotemporal trajectory information gathered by each mobile application, (ii) an equivalent visualization of the localization data possibly gathered by the operator from their mobile network activity, and (iii) the indirect knowledge that each party may infer from the gathered trajectory data using data mining techniques, including, e.g., home address, employer’s name, commuting patterns, religion, and health issues.
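To make point (iii) concrete, here is a minimal sketch of one such inference: estimating a user’s home area as the most frequent nighttime location among collected position samples. The sample data, the night-hour window, and the coordinate-rounding granularity are all illustrative assumptions.

```python
from collections import Counter
from datetime import datetime

# Hypothetical (timestamp, latitude, longitude) samples collected by an app.
samples = [
    ("2017-06-01T01:13:00", 41.3902, 2.1540),
    ("2017-06-01T02:40:00", 41.3901, 2.1542),
    ("2017-06-01T14:05:00", 41.3888, 2.1590),
    ("2017-06-02T00:30:00", 41.3903, 2.1541),
]

def cell(lat, lon, precision=3):
    # Round coordinates to a coarse grid cell (~100 m at 3 decimals).
    return (round(lat, precision), round(lon, precision))

def is_night(ts):
    hour = datetime.fromisoformat(ts).hour
    return hour < 6 or hour >= 22

# Count nighttime samples per cell; the busiest cell hints at "home".
night_cells = Counter(cell(lat, lon) for ts, lat, lon in samples if is_night(ts))
home_cell, visits = night_cells.most_common(1)[0]
print(f"Likely home cell: {home_cell} ({visits} nighttime samples)")
```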

Where the Insecure Things Are: Easy-to-use identification of insecure and privacy leaking IoT devices

Sascha Fahl (Head of Information Security Chair, Leibniz University of Hanover); Yasemin Acar (Leibniz University of Hanover); Dominik Wermke (Leibniz University of Hanover)

The ever-increasing number of smart devices enriches users’ lives in various ways. Smart door locks grant or deny access to people’s houses, medical devices such as insulin pumps have Internet-connected remote controls, toys for kids have integrated video cameras, heating can be scheduled to match the season, coffee can be brewed before leaving the bed, and a light’s hue can be changed according to the mood, all by applying some settings in an app.

But this convenience also comes with some hefty drawbacks: security and privacy protections for Internet of Things (IoT) devices and their associated control apps are often lax, to say the least. End-users inexperienced in information security and privacy are especially exposed to a large number of risks by vulnerable devices.

In recent times, this has led to a wide-reaching integration of IoT devices into botnets, turning the individually weak computational power of these devices into devastating denial-of-service attacks. Vulnerable devices have also led to critical leaks of private information, such as voice messages in kids’ toys. While existing solutions allow end-users to identify insecure and malicious devices, they mostly rely on specific hardware in the form of routers, or only detect devices that are likely already infected. Purchasing specific hardware requires considerable effort from end-users and requires them to be very much aware of the possible risks stemming from IoT devices. Given that previous research showed that information security and privacy are almost never end-users’ primary concerns, we argue that a promising solution to this problem needs to be easy to use and fully integrated into the user’s existing device infrastructure.

Therefore, we believe that end-users would be empowered to protect their own privacy if they did not need specialized hardware or have to research tech-savvy articles on the Internet. We want to help end-users identify vulnerable IoT devices in their network, offering them comprehensive risk assessment and easy-to-apply countermeasures.

For this, we propose a mobile app called IoTdroid that (a) scans IoT devices in a network for known vulnerabilities such as insecure network connections or authentication, (b) acts as a WiFi hotspot to investigate network traffic between IoT devices and remote cloud servers for insecurities and privacy leaks, and (c) identifies vulnerable IoT control apps installed on the user’s mobile device. End-users will be able to quickly investigate the security and privacy status of their IoT devices. We will distribute IoTdroid as a free Android application on Google Play.
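As a toy illustration of the kind of check described in (a), the sketch below probes a device for a handful of ports whose exposure commonly signals insecure IoT firmware. The host address, the port list, and the equation of "open port" with "risk" are simplifying assumptions; a real scanner would match devices against vulnerability fingerprints.

```python
import socket

# Ports whose exposure often indicates insecure IoT firmware (illustrative).
RISKY_PORTS = {23: "telnet", 2323: "telnet-alt", 80: "http (unencrypted)"}

def scan_device(host, timeout=1.0):
    """Return a list of findings for open, commonly risky ports on `host`."""
    findings = []
    for port, service in RISKY_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port accepted a connection
                findings.append(f"{host}:{port} open ({service})")
    return findings

# Hypothetical usage against a device on the local network.
print(scan_device("192.168.1.50"))
```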

Exposing Demographic Biases of News Publishers and Promoters on Social Media

Krishna P. Gummadi (Max Planck Institute for Software Systems); Niloy Ganguly (Indian Institute of Technology Kharagpur); Abhijnan Chakraborty (Indian Institute of Technology Kharagpur)

Online social media sites like Facebook and Twitter have emerged as popular destinations for users to receive, share, and discuss news about the world around them. Unlike with traditional media, today there is no mechanism available for social media users to learn the biases of news publishers. In this proposal, we argue for bringing transparency to news production and dissemination on social media. More specifically, our aim is to build a tool for news readers which makes the biases of news publishers and promoters transparent.
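As a hypothetical sketch of how such a tool might quantify bias, the snippet below compares the demographic mix of a publisher’s promoters (users sharing its stories) against a platform-wide baseline using total variation distance. The distributions and group labels are illustrative assumptions, not derived from the project.

```python
# Illustrative demographic distributions (fractions summing to 1).
baseline = {"18-24": 0.30, "25-44": 0.45, "45+": 0.25}
publisher_promoters = {"18-24": 0.55, "25-44": 0.35, "45+": 0.10}

# Total variation distance: 0 = same mix as the platform, 1 = fully disjoint.
bias_score = 0.5 * sum(
    abs(publisher_promoters[g] - baseline[g]) for g in baseline
)
print(f"Promoter demographic bias score: {bias_score:.2f}")
```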

DTL2017

You can now check the Program and confirmed speakers of DTL Conference 2017, taking place 11-12 December in Barcelona. If you are a student or academic, you can now get your “Student” ticket. Join our Newsletter to stay up to date!
