Can an algorithm be racist? No, but it can still carry the germ of discrimination. Hence the question of "bias", a word that has become commonplace just as society, mirrored in its digital representation, has come to recognize the clear, distorted features of its own image. A study published in Science has shown how an algorithm can move enormous amounts of capital on the basis of precisely these distortions, harming society as a whole through the superficiality (which we may soon be entitled to call incompetence) with which decision-making algorithms are built.
When the algorithm discriminates by skin color
At the center of the study (which analyzed 50,000 medical records) is an algorithm developed to calibrate public-health interventions in the United States: a process that sets priorities on a statistical basis and thus directs the budget toward the most complex cases. While it is hard to imagine that a machine can be racist, simple evidence shows that this process heavily discriminates against people on the basis of skin color. The finding may seem paradoxical (especially for systems that calibrate the health care of millions of people, examined in the third millennium, and built on machine-learning processes), but the plain facts speak for themselves:
It is striking that people who self-identify as Black are generally assigned a lower risk score than white people in the same state of health.
The problem is that all this has serious repercussions on how patients are treated and how aid is allocated. According to the study's estimates, skin color alone reduces annual per-capita medical expenditure by $1,800; moreover, while only 17.7% of Black patients receive additional treatment under the current algorithm, a balanced assessment would raise that share to as much as 46.5%.
Although the real costs are comparable, the health of Black patients is worse: diabetes, anemia, liver problems and high blood pressure make their overall condition poorer, and thus, in theory, deserving of greater attention. That this is not the case invites a closer look, which has revealed that the algorithm is certainly not driven by any autonomous racist impulse, but rests on polluted parameters and evaluations (and thus on flawed underlying assumptions).
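The mechanism described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the patient records, field names and dollar figures are invented for the example, not taken from the study): when risk is ranked by predicted *cost* rather than clinical *need*, a patient who generates less spending for structural reasons is deprioritized despite identical health status.

```python
# Two hypothetical patients with identical clinical burden (same number of
# chronic conditions), but different historical spending, e.g. due to
# unequal access to care. Data is invented for illustration only.
patients = [
    {"id": "A", "chronic_conditions": 4, "annual_cost_usd": 7200},
    {"id": "B", "chronic_conditions": 4, "annual_cost_usd": 5400},  # lower spend
]

def risk_by_cost(p):
    """Cost as a proxy for need: the kind of scoring the study criticizes."""
    return p["annual_cost_usd"]

def risk_by_need(p):
    """A need-based alternative: rank by clinical burden instead of spending."""
    return p["chronic_conditions"]

# Ranked by cost, patient B appears less risky despite identical health status.
by_cost = sorted(patients, key=risk_by_cost, reverse=True)

print([p["id"] for p in by_cost])                                # ['A', 'B']
print(risk_by_need(patients[0]) == risk_by_need(patients[1]))    # True: equal need
```

The point of the sketch is that neither scoring function mentions skin color at all: the discrimination enters entirely through the choice of proxy variable.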
The system's developer, Optum, based in Eden Prairie, thanks the researchers for the evidence but rejects the objections: the cost model is only one of several possible analysis systems and, in the company's view, not the most balanced, since physicians' experience and other clinical factors must also be brought into play. The researchers, for their part, although they focused on a single system, report having found the same distortions in competing systems as well. The problem would therefore be widespread, a serious stain on the US health-care system.
The underlying question is therefore deliberately provocative: can an algorithm be racist? That question mark opens a new frontier of study and skills: what anthropological abilities must developers cultivate if they intend to study human beings through statistics, AI and machine learning? Can we delegate certain decisions to an algorithm without having duly examined the balance with which those assessments are reached? An algorithm cannot be racist, but the way it is put in place can be superficial. And superficiality does nothing but act as a megaphone for the distortions of society, which, in turn, most often carry prejudice with them.