DTL 2018

Hackathon BD4SG

Challenge – Introduction and Objectives

Machine Learning and Artificial Intelligence are increasingly applied across our society, and this is predicted to grow exponentially in the coming years. Machines will make more and more decisions for us, yet recent research and publications have demonstrated several undesired consequences of AI, generating concerns about bias, unfair discrimination, and black-box algorithms. Much has been said about data privacy and security, but the debate around the ethical use of data is only starting. Quite often, the discussion about data ethics happens on a case-by-case basis; in this challenge we want to promote a more structural approach.

The objective is twofold:

1) To find out whether those concerns are limited to a few highly visible cases, or whether they are potentially happening on a much larger scale

We all know cases such as COMPAS, the crime-predicting algorithm that was more likely to incorrectly categorize black defendants as being at high risk of reoffending. The real question, however, is whether these are isolated incidents or whether this happens at scale. Topics of interest include (the support information section provides additional explanation for each):

  • Detect, explain and visualize cases of unfair discrimination due to improper use or implementation of AI systems
  • Identify and visualize Open Data sets that contain undesired bias potentially affecting protected groups
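As a minimal illustration of the kind of dataset check the first objective calls for, the sketch below applies the "four-fifths rule" (a common red-flag threshold for disparate impact) to a toy set of records. The column names, groups, and data are invented for the example:

```python
# Toy bias check: compare favorable-outcome rates between a protected
# group and a reference group. All names and data here are hypothetical.

def selection_rate(rows, group):
    """Fraction of members of `group` with a favorable outcome."""
    members = [r for r in rows if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

def disparate_impact(rows, protected, reference):
    """Ratio of selection rates; values below 0.8 fail the
    'four-fifths rule' often used as a red flag for bias."""
    return selection_rate(rows, protected) / selection_rate(rows, reference)

# Hypothetical loan-approval records: group A approved 50%, group B 30%.
records = (
    [{"group": "A", "approved": 1}] * 50 + [{"group": "A", "approved": 0}] * 50 +
    [{"group": "B", "approved": 1}] * 30 + [{"group": "B", "approved": 0}] * 70
)

print(disparate_impact(records, protected="B", reference="A"))  # 0.3 / 0.5 = 0.6
```

A ratio of 0.6 falls below the 0.8 threshold, so this toy dataset would warrant closer inspection; a real submission would of course need to control for legitimate explanatory variables before drawing conclusions.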

The second objective of this challenge is:

2) To develop tools and/or algorithms that help detect and mitigate the concerns 

The ethical dimension of artificial intelligence is quickly gaining traction in the press, in research, and among think tanks. Consequently, some of the top companies in the industry are developing tools for detecting and removing bias from AI. Companies such as Accenture, Microsoft and Facebook have built proprietary tools, and open-source tools have also been released: for instance, IBM launched AI Fairness 360, an open-source library that helps detect and remove bias in machine-learning models and datasets. Topics of interest for this objective include, but are not limited to (the support information section contains additional information for each):

  • Tools for explaining the conclusions reached by an AI algorithm towards mitigating the fear of “unexplainable” AI
  • Tools to detect bias in data sets related to sensitive data (impacting protected groups)
  • Tools to detect correlations in data sets between non-sensitive variables and sensitive variables
  • Tools to re-identify anonymized data of public data sets
  • Tools to detect unbalanced outcomes of algorithms across population subgroups, e.g. differing false-positive and false-negative rates
  • Methods & tools for providing an “ethical” score of data sets
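To illustrate the topic of unbalanced outcomes across subgroups, here is a minimal sketch in plain Python, in the spirit of an equalized-odds check; the groups, labels, and predictions are invented for the example:

```python
# Illustrative check for unbalanced classifier error rates across
# subgroups: compares false positive and false negative rates between
# two hypothetical groups. Data and group names are invented.

def error_rates(examples):
    """Return (FPR, FNR) for a list of (y_true, y_pred) pairs."""
    fp = sum(1 for y, p in examples if y == 0 and p == 1)
    fn = sum(1 for y, p in examples if y == 1 and p == 0)
    negatives = sum(1 for y, _ in examples if y == 0)
    positives = sum(1 for y, _ in examples if y == 1)
    return fp / negatives, fn / positives

def rate_gap(by_group):
    """Largest between-group difference in FPR and in FNR; a large gap
    suggests the classifier violates equalized odds."""
    rates = [error_rates(ex) for ex in by_group.values()]
    fprs = [r[0] for r in rates]
    fnrs = [r[1] for r in rates]
    return max(fprs) - min(fprs), max(fnrs) - min(fnrs)

# Hypothetical (label, prediction) pairs per group: group A has 20%
# error rates, group B has 40% error rates.
groups = {
    "A": [(1, 1)] * 40 + [(1, 0)] * 10 + [(0, 0)] * 40 + [(0, 1)] * 10,
    "B": [(1, 1)] * 30 + [(1, 0)] * 20 + [(0, 0)] * 30 + [(0, 1)] * 20,
}

fpr_gap, fnr_gap = rate_gap(groups)
print(fpr_gap, fnr_gap)  # a 0.2 gap in both error rates between A and B
```

Open-source libraries such as the AI Fairness 360 toolkit mentioned above provide production-grade versions of metrics like these; the point of the sketch is only to show the shape of the computation a submission might build on.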

Logistics & Rules

1. Rules

  • Maximum number of participants per team: 4
  • Projects should be related to one, or both, of the two objectives listed
  • Submitted projects must be original: neither previously published nor previously awarded
  • The hackathon starts at the opening of the registration and lasts until the deadline for delivery of the projects
  • Apart from the award ceremony, where prizes are delivered to the winners, the hackathon requires no physical presence
  • The winners receive the prize in exchange for granting Telefonica a worldwide, perpetual, non-exclusive license to use the projects
  • The challenge is open worldwide

2. Deadlines

  • Opening of the registration: 11/14/2018
  • Delivery of the projects: 12/15/2018
  • Winners notified and announced prior to 12/31/2018
  • Ceremony for winners: early 2019 (TBC)

3. Prizes

  1. First prize: €1500
  2. Second prize: €1000
  3. Third prize: €750
  4. Joker: €250

4. Contact platform

  • Each team that signs up for the hackathon will be added to a Basecamp repository where they can upload documents and communicate with the event organizers
