Regulate artificial intelligence, don’t fear it

Cartoon by Brianna Moreno-Angel

By Connor Fleck

Technology is constantly evolving, and artificial intelligence (AI) is one of the latest advancements to capture the public’s attention.

Although AI has many benefits, recent videos surfacing online show how it can be used to manipulate and fake the voices of famous people. This has led to concerns about the potential harm that AI could cause.

Einstein’s theories about matter and energy were never intended to be used to create nuclear bombs. AI is meant to assist us, but when will it become a threat?
All it takes for AI to mimic a person’s voice with minimal flaws is a three-second recording of that voice.

The ways a person’s voice can be weaponized and manipulated are endless. AI can fake politicians’ voices to spread misinformation and propaganda, further dividing political groups or swaying people to vote for things they would not have supported without the influence of the manipulated videos. AI could also be used to make it seem as if someone had used profanities or slurs, which could damage that person’s reputation and career.

This threat is especially dangerous in a world dominated by technology. With the majority of people putting their entire lives online, it wouldn’t be difficult for AI to find three seconds’ worth of video of someone’s voice.

Deep-faked videos make it challenging to distinguish between what is real and what is not. This combination of voice mimicry and false video makes the Internet a dangerous place to find information.

Another concern regarding AI is ChatGPT, which many people fear could spell the end of knowledge for the human race. With its ability to write essays on virtually any topic, people worry that ChatGPT answers and essays will replace student work in the future. This would result in students being less willing to think critically and present ideas of their own.

Although this resource can be used to our advantage, if it is not used in moderation, we may face serious consequences for abusing ChatGPT. Those consequences might include a zero in the gradebook and a trip to the Dean’s Office, but worse still is the lack of skills down the road.

One way to protect yourself from this spread of misinformation is cross-source examination: consulting multiple sources to reach a consensus on a topic. That way, you can determine whether the information you find is legitimate and accurate.

Despite the concerns about AI, it should not be feared. AI has many useful applications that can benefit society. However, it is crucial to be aware of the potential dangers and take action against those who use it for the wrong reasons. We must continue to develop ethical standards and policies to regulate AI use and ensure that it benefits humanity.