Algorithmic Bias


[Relevant for Public Ethics, Integrity and Aptitude]


In the present era of Artificial Intelligence (AI) and Machine Learning (ML), AI systems can analyse large data sets and produce workable answers to almost any query put to them. Through ML, these systems keep evolving over time and become more and more adept at answering questions.

But there is a dark spot in this process: what we call Algorithmic Bias.

This bias has made it increasingly apparent that the promises of AI are not distributed equally. AI risks exacerbating social and economic disparities, particularly across demographic lines such as race, and in India against certain denotified castes, or even against rural people.

Business and government leaders are being called on to ensure that the benefits of AI-driven advancements are accessible to all. Yet with each passing day there seems to be some new way in which AI creates inequality, resulting in a reactive patchwork of solutions, or often no response at all.

Algorithmic bias occurs when algorithms make decisions that systematically disadvantage certain groups of people.

But why does this happen in the first place?

Consider India's criminal data banks (covering both the convicted and the suspected), and suppose a machine is made to learn from the profiles in those data sets. Naturally, the machine learning process would absorb all the biases present in the data. So, the next time someone feeds in a new profile and asks about its probability of criminal antecedents, the machine, having learnt from biased data, would throw up a biased result.

To understand it simply, take an example: most undertrials and convicts come from certain social backgrounds with certain educational backgrounds (mostly non-matriculate). A machine that has learnt from such data sets would carry an inbuilt bias, tending to flag any new profile with a similar socio-educational background as criminal, even if that person has committed only a minor offence such as jumping a red light. Over time this bias would grow within the algorithm, making it prejudiced against certain castes, localities, or educational backgrounds. A minimal illustration of this mechanism is sketched below.
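The following is a minimal, hypothetical sketch in Python (using NumPy and scikit-learn) of how a model reproduces the bias in its training labels. The data set, the "non-matriculate" feature, and all the probabilities here are invented purely for illustration; no real policing data or deployed system is being modelled.

```python
# Hypothetical sketch: a toy "criminal antecedent" data set in which the label
# is correlated with a socio-educational feature only because of how the data
# was collected. A model trained on it simply reproduces that correlation.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 0: "non-matriculate" flag (1 = did not complete matriculation)
# Feature 1: an unrelated attribute, e.g. age normalised to [0, 1]
non_matric = rng.integers(0, 2, n)
age = rng.random(n)
X = np.column_stack([non_matric, age])

# Biased historical labels: the over-policed group was recorded as "criminal"
# far more often, regardless of actual behaviour (rates are made up).
y = (rng.random(n) < np.where(non_matric == 1, 0.60, 0.05)).astype(int)

model = LogisticRegression().fit(X, y)

# Two otherwise identical profiles, differing only in the biased feature
profile_matric     = np.array([[0, 0.5]])
profile_non_matric = np.array([[1, 0.5]])

print("P(criminal | matriculate)     =", model.predict_proba(profile_matric)[0, 1])
print("P(criminal | non-matriculate) =", model.predict_proba(profile_non_matric)[0, 1])
# The second probability comes out far higher even though nothing else differs:
# the model has learnt the bias baked into the training labels.
```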

Algorithmic bias often occurs because certain populations are underrepresented in the data used to train AI algorithms or because pre-existing societal prejudices are baked into the data itself.

It can have disastrous consequences when applied to key areas such as healthcare, criminal justice, and credit scoring. Scientists investigating a widely used healthcare algorithm found that it severely underestimated the needs of rural women when it came to breast cancer screening, leading to significantly less care. This is not just unfair; it is profoundly harmful.

While minimizing algorithmic bias is an important piece of the puzzle, unfortunately it is not sufficient for ensuring equitable outcomes. Complex social processes and market forces lurk beneath the surface, giving rise to a landscape of winners and losers that cannot be explained by algorithmic bias alone. To fully understand this uneven landscape, we need to understand how AI shapes the supply and demand for goods and services in ways that perpetuate and even create inequality.

After all, have we forgotten that even after all this development in ML and AI, machines still run on GIGO: Garbage In, Garbage Out?

It is thus important that, before the State gives too much discretion to such algorithms in interpreting data that touches on civil liberties, privacy, and other ethical dimensions of individual liberty, the underlying ML models are trained on neutral data sets; and where that is not possible, provisions are built into the algorithm itself so that such bias is minimised (one simple example of such a provision is sketched below). But the bigger question remains: is it even possible?
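As one hedged illustration of a "provision in the algorithm itself", the sketch below continues the hypothetical variables from the earlier example and applies sample re-weighting, which is only one of many possible mitigation techniques and does not repair the underlying data-collection bias.

```python
# Continuing the hypothetical variables (X, y, non_matric, profile_matric,
# profile_non_matric) from the earlier sketch: re-weight training samples so
# that each (group, label) combination contributes equally, instead of letting
# the over-policed group dominate the learnt pattern.

from sklearn.utils.class_weight import compute_sample_weight

# Weight each (group, label) combination inversely to its frequency
group_label = [f"{g}-{lbl}" for g, lbl in zip(non_matric, y)]
weights = compute_sample_weight("balanced", group_label)

mitigated = LogisticRegression().fit(X, y, sample_weight=weights)

print("Mitigated P(criminal | matriculate)     =",
      mitigated.predict_proba(profile_matric)[0, 1])
print("Mitigated P(criminal | non-matriculate) =",
      mitigated.predict_proba(profile_non_matric)[0, 1])
# The two probabilities now come out nearly equal: the biased group feature no
# longer drives the prediction. The bias in how the data was collected,
# however, is untouched, which is why minimising algorithmic bias alone is
# necessary but not sufficient.
```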

(Reference: Static portion)

