Written by: Talha Jamil
“AI is good at describing the world as it is today with all of its biases, but it does not know how the world should be.” (Joanne Chen, Partner, Foundation Capital, at SXSW 2018).
Artificial intelligence (AI) has taken off in inspiring new directions, from chatbots powered by conversational AI to algorithms that can identify abnormalities in patient X-rays. AI is transforming every industry and providing new opportunities for businesses to improve efficiency, reduce costs, and make more informed decisions.
However, with this transformative technology comes a responsibility to ensure that its use is ethical and aligned with human values. Ethical considerations in AI are critical to ensuring that the technology benefits society as a whole and does not perpetuate or exacerbate existing inequalities.
Potential For Biased or Discriminatory Decision-Making
One of the most significant ethical considerations in AI is the potential for biased or discriminatory decision-making. AI depends on human programming and is only as good as the data it is trained on: if that data is biased or discriminatory, the models built from it will reflect those biases in their decisions.
The data used to “teach” AI can foster racial and gender profiling, as well as other hidden and normalized biases. If a society has a history of discriminatory practices, AI trained on that history will project it forward, lending past inequities a veneer of legitimacy and limiting progress toward equity.
For example, if facial recognition systems are trained on datasets that are predominantly made up of white faces, they may not perform as well on faces of other races, resulting in discriminatory outcomes.
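The mechanism behind this kind of disparity can be made concrete with a minimal sketch. The numbers below are entirely hypothetical: we simulate match scores for genuine face pairs in two demographic groups, where the model was "trained" mostly on Group A, so its scores for Group B skew lower. A single acceptance threshold tuned for the majority group then produces very different recognition rates for the two groups.

```python
import random

random.seed(0)

# Hypothetical match scores for genuine face pairs in two demographic groups.
# Group B's scores skew lower, standing in for the effect of a training set
# dominated by Group A faces. All distributions here are illustrative.
group_a_scores = [random.gauss(0.80, 0.05) for _ in range(1000)]
group_b_scores = [random.gauss(0.65, 0.05) for _ in range(1000)]

THRESHOLD = 0.70  # acceptance threshold chosen to work well for Group A

def true_positive_rate(scores, threshold=THRESHOLD):
    """Fraction of genuine pairs the system correctly accepts."""
    return sum(s >= threshold for s in scores) / len(scores)

tpr_a = true_positive_rate(group_a_scores)
tpr_b = true_positive_rate(group_b_scores)
print(f"Group A recognition rate: {tpr_a:.2f}")  # high
print(f"Group B recognition rate: {tpr_b:.2f}")  # far lower at the same threshold
```

The point of the sketch is that nothing in the decision rule mentions race: the disparity emerges purely from the mismatch between the data the threshold was calibrated on and the population the system is applied to.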
Potential For Technologies to Exacerbate Inequality
Another ethical consideration in AI and machine learning (ML) is the potential for technologies to exacerbate inequality. If AI and ML algorithms are used in hiring processes, they may inadvertently discriminate against certain groups of people, such as those from underrepresented communities or with non-traditional backgrounds.
Additionally, if these technologies are only accessible to those with the means to develop or purchase them, they may widen the gap between the haves and have-nots.
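One common way such hiring disparities are screened for is the "four-fifths rule" used in US employment-discrimination guidance: if one group's selection rate is less than 80% of the highest group's rate, the process warrants scrutiny. The sketch below uses hypothetical applicant counts (the group names and numbers are illustrative, not from any real system):

```python
# Hypothetical outcomes of an automated hiring screen; all numbers illustrative.
outcomes = {
    # group: (applicants, selected)
    "group_x": (200, 60),
    "group_y": (200, 30),
}

def selection_rate(applicants, selected):
    return selected / applicants

rates = {g: selection_rate(*counts) for g, counts in outcomes.items()}

# Disparate-impact ratio: lowest selection rate over highest.
ratio = min(rates.values()) / max(rates.values())
print(f"selection rates: {rates}")
print(f"impact ratio: {ratio:.2f}")
# A ratio below 0.8 fails the common "four-fifths" screening rule.
print("passes four-fifths rule" if ratio >= 0.8 else "fails four-fifths rule")
```

A check like this is only a coarse screen, but it shows that auditing an AI-driven process for disparate impact can start with simple arithmetic over its outputs.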
Privacy – Breaches of Confidentiality
Privacy is also a critical ethical consideration in the use of AI and ML. As the technology becomes more pervasive, the platforms using it may collect and store large amounts of personal data, and a lack of proper security measures could lead to breaches and hacking.
The misuse or unauthorized access to sensitive personal information could have severe consequences for individuals and society as a whole.
Real-World Examples of Ethical Considerations in AI/ML
Several real-world cases illustrate these ethical considerations in AI and ML:
- The American Civil Liberties Union (ACLU) found that Amazon’s facial recognition technology, Rekognition, was much less accurate in identifying darker-skinned individuals, raising concerns about disproportionate harm to people of color or marginalized groups.
- Amazon scrapped a recruiting tool that used AI to screen job applicants after it was found to be biased against women.
- Additionally, the UK’s National Health Service (NHS) was criticized for giving Palantir access to patient data in exchange for assistance in managing the COVID-19 pandemic, as the deal gave a private company access to sensitive health data and raised concerns about exacerbating existing inequalities in healthcare access.
- The COMPAS algorithm, widely used in US courts to guide sentencing by predicting the likelihood of reoffending, was reported to be racially biased and could exacerbate racial disparities through the feedback loop it creates.
- Similarly, PredPol, an algorithm designed to predict when and where crimes will take place, could lead police to unfairly target certain neighborhoods, as it was found to repeatedly send officers to neighborhoods with a high proportion of people from racial minorities, regardless of the true crime rate in those areas.
- Finally, a study found that three of the latest gender-recognition AIs could correctly identify a person’s gender from a photograph 99% of the time for white men, while error rates climbed to nearly 35% for dark-skinned women, increasing the risk of misidentification of women and minorities. The study showed that the data on which algorithms are trained directly shapes their accuracy, producing biased results.
The development and use of AI and machine learning technologies have immense potential to benefit society in various ways. However, the ethical considerations surrounding their use must be given paramount importance.
The potential for biased and discriminatory decision-making, the exacerbation of inequality, and the threat to privacy must be taken seriously. The examples cited above illustrate how AI can perpetuate existing inequalities and lead to harmful outcomes for marginalized groups.
As the use of AI becomes increasingly pervasive, it is crucial to ensure that its development and implementation align with human values, ethical principles, and social responsibility. By doing so, we can leverage the full potential of AI while minimizing the risks of perpetuating discrimination and inequality.