AI Ethics

Fairness in AI – How Mature Are We to Introduce It?

Fairness in AI is a topic that is gaining traction very fast. Looking at it holistically, we don’t yet have a proper definition of fairness in AI. What is fairness? Something which ensures that there is no bias. The bias can be racial, it can be economic, it can be age-related, or it can be anything else.
How is this bias introduced? There are a few standard factors that can cause it:

  1. Incorrect labels: The classic case is the “unprofessional hairstyles” search. A search with that string returns a bunch of African women with their traditional hairstyles. This can come from a historical dataset used for the purpose, or from the personal bias of the person who labelled the images as such.
  2. Use of Proxy Data: If, for some reason, there is no direct data available, how do you approach the question? A good example of proxy data usage is a job eligibility criterion. “A person with a degree gets the job” sounds like a straightforward requirement. But what if society is structured so that only a certain segment of it is traditionally educated, as in 1960s America, where education was almost always the preserve of whites? Without race ever entering the model, we have created one that discriminates against Black applicants (see the sketch after this list).
  3. Incomplete Data: Incomplete data also introduces bias. For example, how do you assess the creditworthiness of a person who has never taken a loan? Or of a person who was arrested, released for good conduct, and wants to work hard to lead a changed life?
  4. Societal Bias: The inherent bias of society itself is also a considerable factor. A singer, by default, makes one visualize a woman; a surgeon, a man. Can you visualize a woman truck driver or a woman plumber? If a society doesn’t have women plumbers, how do you expect a dataset drawn from that society to handle women applying for plumbing jobs?
  5. Accuracy vs Fairness Tradeoff: At some point while training a model, you need to call it quits – a model can never have 100% accuracy. It can learn from a false prediction, but it can never be perfect from the word go. If the error that inherently exists in a model can introduce data discrepancies, there is no need to think twice about whether it also introduces bias.
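
To see how a proxy can smuggle bias into a rule that never touches the protected attribute, here is a minimal sketch in Python. The population, group names, and degree-attainment rates are all invented for illustration.

```python
import random

random.seed(42)

# Hypothetical population (all rates invented for illustration):
# group membership is never shown to the eligibility rule below, but
# degree attainment, the proxy, is historically skewed across groups.
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    degree_rate = 0.60 if group == "A" else 0.15  # unequal access to education
    population.append((group, random.random() < degree_rate))

# The "fair-looking" rule: hire anyone with a degree. Group is never consulted.
def is_eligible(has_degree):
    return has_degree

for g in ("A", "B"):
    members = [has_degree for grp, has_degree in population if grp == g]
    rate = sum(is_eligible(h) for h in members) / len(members)
    print(f"Group {g}: eligibility rate = {rate:.0%}")

# Prints roughly 60% for A and 15% for B. The rule never mentions group
# membership, yet the proxy carries the historical skew straight through.
```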

The problem is that all of these need to be addressed. The fundamental question, then, is who decides what is more or less fair, and how do you quantify bias in a model? It is possible that the models we have today are fairly accurate but are themselves biased against, say, a person who thinks he is a deer and roams around dressed like one. There are demands to institutionalize bias reduction, but then, regulation itself is not always fair!
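To put a number on “quantify bias”, here is a minimal sketch of one widely used metric, the disparate impact ratio: the rate of favourable outcomes for one group divided by the rate for another. US employment law’s “four-fifths rule” treats a ratio below 0.8 as a red flag. The predictions and groups below are invented for illustration.

```python
def selection_rate(predictions):
    """Share of favourable outcomes (1 = approved, 0 = denied)."""
    return sum(predictions) / len(predictions)

# Invented model outputs for two groups of applicants.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approved
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # 25% approved

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33, far below the 0.8 rule of thumb
```

Even a crude check like this exposes the “who decides” problem: the ratio is easy to compute, but choosing the metric, the groups, and the threshold is a policy decision, not a modelling one.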
Take, for example, GDPR compliance. A regulation like GDPR creates two problems at the outset. It raises questions over a smaller entity’s capability to comply with the norms. Small company, small fine, right? But, on the other hand, what stops a big company, for which the fine is affordable, from carrying on with targeted advertising?
Fairness in AI is a philosophical quagmire. What should be the first focus: a fully mature AI over which standards can be created easily, or an AI still in its inception being stifled by impossible rules? A level of bias can be reduced by inspection, or by using models that identify bias. But it should end there; there shouldn’t be any mandate making it a legal compulsion.
