
Data Society

The rise of the black box society

A recommendation algorithm is intrinsically discriminating in the sense that its role is to distinguish, discern and treat individuals differently according to certain characteristics. We need assurance that these classifications rest on objective and socially acceptable criteria, and that they do not slide into name registries such as bank blacklists built on meaningless criteria, or lists of potential terrorists and supposedly dangerous individuals. These mechanisms must be able to be held up to scrutiny when necessary, without business secrecy and competitive fairness being constantly invoked to close off access to the workings behind these systems.


In his book The Black Box Society, Frank Pasquale, a law professor at the University of Maryland, challenges the power of the hidden algorithms that influence our choices, determine our creditworthiness, and judge our employability. He asks: “Can a credit card company raise a couple’s interest rate because they are seeing a marriage counselor? If so, should the cardholders be informed?”


If algorithms are neutral technological tools, their uses are not. As an example, Frank Pasquale explains that three credit bureaus, Experian, TransUnion, and Equifax, routinely score millions of individuals, but not always in the same way. In one study of 500,000 files, “29% of consumers [had] credit scores that differ by at least fifty points between credit bureaus.” Fifty points can mean tens of thousands of dollars in extra payments over the life of a mortgage; unless the aims of the different bureaus diverge in undisclosed ways, so much variation suggests that the assessment process is more than a little arbitrary.
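
To make that order of magnitude concrete, here is a minimal illustrative sketch in Python using the standard fixed-rate amortization formula. The loan amount ($300,000 over 30 years) and the assumption that a fifty-point score gap moves the offered rate from 6.5% to 7.0% are hypothetical figures chosen for the example, not data from the study quoted above.

```python
# Illustrative only: how a credit-score gap that shifts a mortgage rate
# adds up over the life of the loan. All figures are assumptions chosen
# for the example, not data from the study quoted in the text.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortization (annuity) formula."""
    r = annual_rate / 12                     # monthly interest rate
    n = years * 12                           # number of monthly payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

principal = 300_000                          # hypothetical loan amount
years = 30

payment_good_score = monthly_payment(principal, 0.065, years)  # better rate
payment_bad_score = monthly_payment(principal, 0.070, years)   # worse rate

extra_paid = (payment_bad_score - payment_good_score) * years * 12
print(f"Extra paid over {years} years: ${extra_paid:,.0f}")    # roughly $36,000
```

Under these assumptions, even a half-point difference in the offered rate adds up to roughly $36,000 in extra payments, which is why opaque and inconsistent scoring is anything but a trivial matter.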


In one report, the FTC showed how algorithms can deny rights to certain individuals on the basis of actions committed by other people who share their characteristics. One credit card company, for instance, lowered a customer’s credit limit after analyzing other customers who frequented the same places and had poor credit histories. Another credit card company reached a settlement with the FTC over its consumer scoring practices, which varied according to the choices revealed by card transactions, such as paying for marriage counseling. These practices contribute to a form of de-individualization: people are treated according to characteristics shared by the group they are assigned to rather than their own behavior and actions.


These behavioral profiling techniques sketch out the future of risk management. The criteria used to determine essential rights, precontractual relations, access to social benefits, credit, insurance or employment, are applied without regard to the individuals concerned. The economist and Nobel laureate Jean Tirole underlines the danger of this form of risk management, already present in group insurance contracts, most notably to the detriment of the unemployed and of elderly people with health problems. Internet companies will select the “good risks” by offering contracts based on data collected from the web that far exceeds, in volume and accuracy, anything currently held by insurance companies. He even adds that “the future competitors of Axa, Generali or Nippon Life will be called Google, Amazon or Facebook”.


Behavioral profiling also has legal consequences. The American company Northpointe, for example, developed a software system named Compas, used ahead of sentencing to evaluate a defendant’s risk of recidivism. A Wisconsin resident named Eric L. Loomis was sentenced to six years in prison partly on the basis of the software’s assessment, which stated that the accused presented a “high risk of violence and a high risk of recidivism”. The judge followed the prediction made by Compas, noting that Loomis had been “identified by the Compas software as a high-risk person for his community”. The accused challenged the sentence, arguing that his rights had been violated by the use of an algorithm he could not examine.


The company behind the software maintains that the algorithm at the heart of the system is protected by trade secrecy, and consequently refuses access to it. The Wisconsin Supreme Court sidestepped this embarrassing question and upheld the sentence, reasoning that the accused would have received a similar sentence through traditional methods. The example raises important ethical questions. A study by four journalists, comparing the trajectories of 10,000 arrestees with the predictions made by the algorithm, revealed significant racial bias: Black defendants were systematically rated more likely to reoffend. We must not underestimate the ethical blindness of algorithms. There is a strong chance that these algorithms extract discriminatory categories from everyday life.


In the field of automobile insurance, an important decision of the Court of Justice of the European Union firmly rejected the criterion of gender. The decision followed a reference for a preliminary ruling from the Belgian Constitutional Court in a case brought by Test-Achats, a consumer association that challenged the gender-based differences in car insurance premiums, which treated women as more prudent drivers. A 2004 European directive had laid down the principle of excluding gender as a criterion for calculating insurance rates and premiums. In 2011, the Court of Justice settled the question: “Taking into account the gender of the insured as a risk factor in insurance contracts constitutes discrimination.”


Meanwhile, in 2013, a British company launched an offer dubbed “Drive like a girl”, which proposed installing black boxes in cars to monitor driving behavior. A reduced premium would be granted to drivers who drove with the prudence commonly attributed to women. The example shows how easily we can slide from identity-based segmentation to behavioral segmentation measured by big data algorithms.


Algorithms are not immune to discriminatory practices and can even contribute to them. Whether the discrimination is religious, ethnic or gender-based, algorithms can breathe new life into it. However objective they may appear, we must be able to monitor this space and keep a debate open around it, in order to distinguish cause and effect from the many correlations tracked in the observed phenomena.


Adrien Basdevant
