I’m a PhD student at the Technical University of Berlin, and I’m working on algorithmic fairness together with people from Eurecat and from UPF, both in Barcelona.
How would you define fairness?
I would define fairness, in general, in terms of equality of opportunity. This is how philosophers usually approach it nowadays: fairness means that, in a state, people should have equal chances in life to achieve a certain good or advantage. There is also a more specific definition, substantive equality of opportunity, which does not only require that people are judged based on their qualifications, but also that the probability of acquiring such qualifications is equally distributed amongst groups. And this should be reflected in algorithms as well.
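As a loose illustration of how such a notion can be checked computationally, one could compare the rate at which qualified members of each group actually receive a positive decision. This is only a minimal sketch; the toy data and the function name are hypothetical, not from any particular study:

```python
# Hypothetical toy data: (group, is_qualified, got_positive_decision)
decisions = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", False, False),
]

def positive_rate_among_qualified(records, group):
    """Share of qualified members of `group` who received a positive decision."""
    outcomes = [hired for g, qualified, hired in records if g == group and qualified]
    return sum(outcomes) / len(outcomes)

# Under an equality-of-opportunity reading, these two rates should be close.
print(positive_rate_among_qualified(decisions, "A"))  # 2 of 3 qualified hired
print(positive_rate_among_qualified(decisions, "B"))  # 1 of 3 qualified hired
```

A large gap between the two rates would indicate that qualified candidates from one group are systematically disadvantaged, even if each individual decision looks qualification-based.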
Do companies really care about data fairness? And users?
I would say no, as long as there is no legislation, or unless it gains them some profit, and I’m not sure it will. But society, I think, does care a lot, and so, of course, does the user who is affected by it, provided they know about it, which is not necessarily the case.
What do you think could be the most significant challenges and developments in the field of online fairness?
I think one of the biggest challenges in algorithmic fairness is that all the approaches we know today require some knowledge, not about protected attributes, but about sensitive attributes of the users, and this stands in strong contradiction to the privacy issues that we face. So all the approaches nowadays rely on some kind of knowledge about the sensitive attributes, but at the same time you want to remove this knowledge from the databases. And this is something that we don’t really know how to deal with yet.
“We are planning to develop plug-ins for a very commonly used search API, Lucene, to provide mechanisms that allow fair search results in web search”
Which are your current projects involving data privacy and transparency?
One of the projects is certainly the one funded by the DTL. We are planning to develop plug-ins for a very commonly used search API, Lucene, and for Elasticsearch, to provide mechanisms that allow fair search results in web search. There are also other issues I want to address within the fairness community. Much of the work is hard to compare because we don’t really have a common understanding of what fairness is and what bias is. People tend to take their own definitions, wherever they come from, and say “I do something against it,” but you are not really able to compare the approaches at the moment, because people in the community don’t really understand each other when they talk about fairness and bias.
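To give a feel for what a fair-ranking mechanism can look like, here is a minimal sketch of a greedy re-ranker that keeps every prefix of the result list above a minimum share of protected candidates. The interleaving strategy, the function name, and the toy data are my own illustrative assumptions, not the project’s actual plug-in implementation:

```python
# Hypothetical sketch: greedily re-rank search results so that every prefix
# of the ranking contains at least `min_share` protected candidates.
def fair_rerank(results, min_share=0.5):
    """results: list of (doc_id, score, is_protected), sorted by score descending."""
    protected = [r for r in results if r[2]]
    others = [r for r in results if not r[2]]
    ranking = []
    while protected or others:
        n_protected = sum(1 for r in ranking if r[2])
        required = int(min_share * (len(ranking) + 1))  # floor of the target count
        # Force a protected item when the prefix would fall below the target,
        # or when no unprotected candidates remain.
        if (protected and n_protected < required) or not others:
            ranking.append(protected.pop(0))
        else:
            ranking.append(others.pop(0))
    return ranking

results = [("d1", 0.9, False), ("d2", 0.8, False), ("d3", 0.7, True),
           ("d4", 0.6, False), ("d5", 0.5, True)]
print([doc for doc, _, _ in fair_rerank(results)])  # ['d1', 'd3', 'd2', 'd5', 'd4']
```

Within each group the original score order is preserved, so the re-ranking trades as little relevance as possible for the prefix constraint; this is one simple way to make a fairness definition operational inside a search pipeline.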
What do you think of the Data Transparency Lab?
Well, I am very happy that the Data Transparency Lab gave me the opportunity to let my research evolve, so I’m very happy to be a grantee. I also very much like the focus on developing tools, because we can do a lot of theoretical work, but if it never comes into practice, then it’s not going to help anyone.