Ever since artificial intelligence (AI) was just a science fiction concept, we have been looking to mimic and even replace what humans do. As of 2024,
we have achieved this through RPA applications, Generative AI (GenAI), and even a human-like robot (we all remember Sophia the Robot, right?).
As more and more AI-based platforms started being integrated into our society, ethical concerns also began to rise.
Yes, artificial intelligence, especially generative AI platforms like ChatGPT and Midjourney, has generated (no pun intended) some excitement among many people – me included. At the same time, it has also created many AI concerns regarding the potential harm from its use – intended or not.
There are many ways in which bad actors can use AI software to do harm. And there are also many ways in which well-intentioned people can screw something up using this software. This is why some people worry about the potential misuse of AI, while others question the ethical implications regarding bias and discrimination, copyright protection, privacy, governance, and even accountability.
This is why the need for ethical AI regulations is increasing. Ethical AI represents the idea of AI-based solutions designed and implemented with high standards, transparency, and, above all, without bias. In every bad or potentially bad situation, there should be a certain level of ethical governance to ensure that AI technologies won’t go off the rails. After all, we wouldn’t want to end up in the same scenario as in Terminator 2.
So, here are the trends of ethical issues that we might see in 2024 while using artificial intelligence technologies.
1. Ethical Concerns of Generative AI
After its public release at the end of 2022, OpenAI’s ChatGPT opened the door to whole new possibilities in the digital world. However, it also brought AI ethics concerns that need to be addressed.
Generative models predict and replicate patterns in the data they’re fed, so when they are trained on biased datasets, they end up perpetuating those same biases at scale. As these solutions become increasingly integrated into our daily lives, it’s necessary to create a robust governance framework and address these ethical implications.
At the same time, these models can produce content that includes false or misleading information. For example, I was playing around with one platform, and it claimed that Will Smith had announced the release of his new album on Twitter. The thing is, Will Smith isn't dropping any new albums, and he doesn't even have a Twitter account.
Moreover, legal implications could also arise, since GenAI works by mirroring existing, often copyrighted, material. Think about the music industry! There is a growing trend of generating songs with an artist’s voice. If GenAI can produce a song that resembles an artist’s copyrighted work, it could result in an expensive legal battle.
2. Dangerous AI Malfunctions
One of humanity’s biggest fears is being attacked by robots. While robots are far from making such decisions consciously, they can malfunction when their systems fail, with potentially catastrophic consequences.
In the last couple of years, we have witnessed such incidents. At the end of 2023, an incident came to light in which a Tesla robot had attacked an engineer back in November 2021. Fortunately, the engineer survived with injuries. Still, the accident sparked many concerns about the safety protocols and ethical considerations surrounding AI-powered robotics. Tesla reported zero robot-related accidents in the following years, suggesting that it had since implemented such safeguards.
The problem for AI ethics is that similar accidents are relatively common. In South Korea, an industrial robot crushed a worker to death. Meanwhile, a growing number of automated warehouse robots at Amazon have been injuring employees. Unfortunately, if we don’t do something about it, these numbers might climb even higher in 2024.
3. Privacy Issues
We already know that data has become the most valuable resource in the world (which is a little scary). Privacy has long been a topic of discussion in AI ethics, as artificial intelligence technologies continuously store and process information in their systems. In 2024, there is growing concern about who has access to this data. If all that personal data falls into the wrong hands, it could be used for horrible things.
Artificial intelligence tools have the potential to indirectly uncover private details from data we might think is harmless – a phenomenon known as “predictive harm.” Their algorithms can infer deeply personal characteristics like sexual orientation, relationship status, political views, and even health conditions, and that information can then be used against us. The Facebook–Cambridge Analytica scandal showed how personal data can be exploited to influence people into taking specific actions.
4. The Use of Deepfakes
One of the biggest AI concerns is deepfakes. Nowadays, deepfakes can replicate a person’s voice and face convincingly enough to outsmart voice and facial recognition systems, potentially breaching security protocols. In fact, research indicates that a Microsoft API was deceived in over 75% of cases by simple deepfake generations.
Beyond that, there are many ethical implications regarding impersonation. Deceptive practices, such as swaying public sentiment on political matters, can become highly problematic. Unfortunately, it doesn’t stop there! Deepfakes also affect public figures and the entertainment world.
Black Mirror’s “Joan Is Awful” illustrates precisely the reality we’re approaching. In the episode, Salma Hayek appears as a character (playing a version of herself). But here’s the catch: it’s not actually her performing; it’s a digital replica, and the entire production is CGI-generated. This is what’s called “performance cloning.”
While this might not sound as concerning as the other trends, the problem arises when the technology is used to clone people without their consent. Recently, Michel Janse (@michel.c.janse), a woman in the US, stated on TikTok that a company had used artificial intelligence and deepfake technologies to create an ad, stealing her identity and likeness. Allegedly, the only difference in the ad was her voice.
Yes, it’s an exciting time for artificial intelligence technologies! But remember: our greatest strength lies in how we use technology. So, it’s our responsibility to use it ethically and responsibly.