Should ChatGPT be shut down? And Other Ethical Issues Facing Artificial Intelligence
Updated: Apr 13
ChatGPT, a free tool that generates text, code, and more, was released in November 2022. The language model quickly gained popularity on social media. Within five days it had one million users, and by January it had 100 million monthly active users, making it the fastest-growing consumer application in Internet history.
For a deeper dive into how it all started, you can read the first part of our blog here.
In this blog we will talk about AI ethics and what happens when such systems are developed without close scrutiny of the important ethical questions they raise.
ChatGPT is a fascinating tool to interact with and it has made the world a much more interesting place already. However, there is a dark side to this magical technology we get to play with.
Open Letter Calling for Pausing ChatGPT Development
Recently, an open letter was signed by hundreds of industry and tech business leaders, including Elon Musk, Apple co-founder Steve Wozniak, and many engineers from Amazon, Google, Meta, and Microsoft, calling for a six-month pause on the development of AI systems more powerful than the model behind ChatGPT.
The New York Times wrote: "Elon Musk, the chief executive of Twitter and Tesla, and other tech leaders have criticized an “out-of-control race” to develop more advanced artificial intelligence."
Although parts of the letter and its list of signatures were later discredited (some listed signatories said they had never signed it, while others walked back their initial support), the letter was enough to start a conversation about the next steps for this provocative technology.
This is not the first controversy surrounding the development of AI technologies like ChatGPT.
If you google ChatGPT and Kenyan workers, you will see numerous news articles on how OpenAI exploited a group of Kenyan content moderators, paid less than $2 an hour, to filter traumatizing and disturbing content out of ChatGPT.
There are always at least two sides to every story, especially when it comes to new technology. Beyond the pristine Silicon Valley face of this groundbreaking AI system, there is the familiar face of inequality.
Humans' Role in Machine Learning
When I asked ChatGPT to comment, the language model acknowledged that companies use crowdsourcing platforms to collect large amounts of data to train machine learning models and to fine-tune ChatGPT's output so that it emulates natural language more closely.
One straightforward but difficult problem to solve is fact-checking. People quickly started to use ChatGPT the way they use a search engine: to get information. However, ChatGPT and other AI systems have been found to give users false or made-up information, so human input is necessary to fact-check the AI.
Other times the problems are a little more emotionally taxing.
“In some cases, this work has involved tasks such as content moderation, which can be emotionally challenging and may expose workers to disturbing or traumatizing material.”
As these technologies continue to evolve, it’s important that those driving their development are aware of the potential risks and ethical concerns, and proactively address them early on. Otherwise, we risk vulnerable individuals falling between the cracks of innovation.
Interestingly, ChatGPT stated that people, including the exploited Kenyan workers, “have willingly participated in crowdsourcing projects as a means of earning money and gaining valuable work experience. These opportunities provide important income and training opportunities for people in areas with high unemployment rates or limited access to other forms of work.”
This is one of the problems with Artificial Intelligence
Dry data without context or human interpretation and analysis is bound to lead to wrong conclusions. To view any situation involving vulnerable individuals being exploited as willful participation is crass and misguided at best - and ill-intentioned at worst.
This is not the first time something like this has happened. The book Behind the Screen: Content Moderation in the Shadows of Social Media, by scholar Sarah T. Roberts, looks at how major social media companies offshore this work to poorer countries and disadvantaged communities to pay less than they would domestically, and without any benefits.
In some cases these workers had to watch psychologically scarring content, from animal cruelty to child pornography, to filter these out of our now-clean feeds.
Are Algorithms Spreading Hate and Fake News?
Another issue you are likely to encounter with ChatGPT is its propensity to produce false information, made-up citations, and fake sources. OpenAI warns about this, and offers a simple thumbs-up/thumbs-down rating system that users can use to rate each generated response and help improve future answers.
Before ChatGPT, there were chatbots trained on whatever data was available on the internet, and we watched those AI chatbots turn racist and sexist very quickly. Microsoft's Tay, for example, was taken offline within a day of its 2016 launch after users taught it to post offensive content.
Algorithms of Oppression
In her book Algorithms of Oppression, academic Safiya Noble examines Google's algorithms and how biases creep in, such as search results presenting white women when you type “beautiful” or white men when you type “doctor”.
In 2013, UN Women partnered with Memac Ogilvy & Mather Dubai on a campaign highlighting sexism on the internet. They used genuine Google search data from March 9th, 2013, when the search bar's autocomplete was suggesting sexist completions for phrases beginning “women shouldn't” or “women cannot”, based on previous searches by real users.
In another example, Noble considers how gender-neutral languages such as Turkish get caught up in gender biases. Turkish has a single, gender-neutral third-person pronoun, “o”, with no distinction between “he” and “she”.
Google Translate, however, will render the Turkish for “they are a doctor” as “he is a doctor”, and “they are a nurse” as “she is a nurse”.
It is argued that this bias in AI is a mere reflection of humanity’s own biases, as machine learning is fed with our own historical data.
The Illusion of Algorithmic Objectivity
Despite all these present problems with bias, AI is still being developed at lightning speed and with little ethical oversight. Some are concerned that AI will progress faster than we can keep up, leaving ethics in the dust.
The Harvard Gazette quotes political philosopher Michael Sandel: “Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice, but we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing … replicate and embed the biases that already exist in our society.”
Ethical engineering and use cases for technology will be an ongoing negotiation in the near future. In the meantime, these tools and technologies will challenge our careers and shape entire industries with every passing day.
To keep up with an evolving professional landscape, you need some strategies. What can you do so that you don't become disposable, or your skill set obsolete, in a post-automation world? There may be some hope.
Although some people, like the signatories of the open letter, suggest that it is best to put the brakes on these intelligent systems until we figure out the ethics and other surrounding issues, that is almost impossible in practice.
We have to simultaneously be working on the ethics, regulation, and legislation surrounding these emerging technologies as they are being developed if we want to have ethical AI systems.
Algorithmic Justice League
Luckily, there are people working on these issues. One notable example is the Algorithmic Justice League (AJL), an organization founded by Joy Buolamwini, an MIT-trained computer scientist and digital activist.
Its goal is to address the problems of bias in artificial intelligence (AI) and machine learning (ML) algorithms.
Buolamwini discovered the issue of algorithmic bias through her research on facial recognition technology: she found that facial recognition algorithms were less accurate at identifying women and people with darker skin tones. This led her to found the AJL in 2016.
Her findings were later substantiated: facial recognition algorithms, including some used by law enforcement, showed dramatically higher error rates for darker-skinned women than for white men, a disparity corroborated by a federal government study and observed in systems from IBM, Microsoft, and Amazon alike.
Some of the AJL's notable projects include the Gender Shades project, which tested the accuracy of facial recognition algorithms in identifying gender and skin tone, and the Safe Face Pledge, which calls on companies to ensure their facial recognition technology is not used for harmful purposes.
Race After Technology
Another pioneer in the field of ethical technology is Princeton University professor Ruha Benjamin who wrote a book called Race After Technology.
The book explores the intersection of race and technology, arguing that technology is not neutral and often perpetuates racial biases and inequalities. Benjamin also discusses how technology can be used to challenge and subvert these biases, and emphasizes the importance of diverse representation within the tech industry.
As intelligent technologies that make life easier for some of us reach further into our world, it is important to stay vigilant about the different experiences that different communities have with them.
The future is now, but keep your wits about you so that you are always critical no matter how mesmerizing the next hot tech may be.