Opinion | AI does more harm than good – The News Record



AI should’ve been regulated as soon as it began to gain traction.

Artificial intelligence (AI) has made the news again after sexually explicit AI-generated images of Taylor Swift went viral on X (formerly Twitter). The images caused outrage so widespread that it reached Congress.

The situation has made people question AI's place in our society and how people can be protected from its negative effects. AI should have been regulated as soon as it began to gain traction, because Taylor Swift isn't the first person this has happened to.

President Biden finally began an attempt at regulation back in October. However, many people have been asking why this subject wasn't approached years ago, when advocates first warned that situations like this would happen.

AI deepfakes like Swift's are another form of sexual harassment and revenge pornography. Even if the image isn't real, using a person's likeness to create a nude image of them qualifies as visual harassment.

Additionally, revenge porn is illegal in 46 states and D.C. Creating a pornographic image of a person without their consent is a crime, and that should include AI deepfakes.

Social media services' failure to enforce rules that protect people, especially young women and girls, from AI deepfakes is also to blame for the rise in AI-generated pornography and propaganda.

The images of Taylor Swift originated on 4chan, where a community of users created nude images of female celebrities using AI. The images then spread to X, where a post containing them stayed up for 17 hours before Elon Musk finally deleted it and suspended the account.

Social media platforms need to be held accountable for their failure to create effective rules that keep users safe. Moreover, AI has been repeatedly used for propaganda and other forms of disinformation.

Sometimes it may just be an obviously fake meme image of the president doing something silly, but this can escalate into realistic-looking deepfakes of politicians or organizations saying hateful or false things. These images and videos have real consequences.

Additionally, AI bots have proliferated online, accelerating the spread of misinformation and propaganda. AI art has also been used to create propaganda.

It’s crucial to stop AI’s spread of misinformation, and the spread of misinformation online in general. 

Recent advances in artificial intelligence make the problem all the more alarming. Sora, OpenAI's new video-generation model, allows users to create photorealistic videos from just a text prompt or an image.

While obvious mistakes in certain videos can tip users off that they were made by AI, most of them are scarily realistic. Sora only allows users to create videos up to a minute long, but the possibilities of a program like this are terrifying to think about.

While OpenAI assesses its programs for risks, the fact that this technology can generate video using real people's faces is already harmful.

Image-based AI has been shown to have biases when it comes to gender and race. It feeds off images and captions scraped from the internet and plays into Western stereotypes, rendering prompts for "prisoners" mainly as Black men and "a productive person" mostly as white men.

AI is normalizing these stereotypes, which harms the affected communities even more. Yet companies don't seem to be doing enough to improve their training data.

AI art is another big concern for creatives. As AI improves, it feeds off human-made art to refine its output, and people have begun using AI programs instead of paying artists for their work.

In other words, the technology trains on artists' work while undercutting their livelihoods, effectively stealing their labor. Artists should be paid to create art, which is not only better but more genuine. AI cannot create anything original; humans can.

Governments' slow reactions to AI are not surprising, though, because generative AI isn't the only problem.

AI surveillance technology is used by at least 75 countries around the world, led by China. Facial recognition technology has long exhibited racist biases, especially in the U.S., leading to false arrests, and heightened surveillance can enable repression.

Israel has also used facial recognition to track and restrict Palestinians. The advancement of AI has made surveillance easier, and while it sometimes keeps us safer, it can also be a danger, which may explain why governments didn't attempt to regulate the technology until its harms began to escalate.

It's important to be aware of the dangers of AI, even as it proves helpful and drives many practical advances.

However, at this point, it is far too unregulated and dangerous to celebrate without fearing its implications. We need to be careful about the content we consume and be wary of using generative AI, even if it's just for a meme.
