Are Deepfakes Online Helping or Hurting?

An Overview of Deepfakes Online
Deepfake content is spreading rapidly online and reshaping how people experience the internet. A deepfake is a piece of digital media, such as a video or audio clip, in which AI is used to make a person's face or voice appear convincingly real. By making it possible to fabricate what looks like the truth, the technology excites some people and frightens others.
Deepfakes have become far more widespread in recent years. In 2019, researchers counted 7,964 deepfake videos online; according to Sensity AI, by 2023 the number of deepfake videos circulating online had surpassed 500,000. The issue is especially significant in the United States, where elections, celebrity news and privacy are constant topics of public debate. Today, deepfakes online raise concerns about misinformation, privacy and cybersecurity.
How Deepfake Videos Are Produced and Posted on the Internet
The Tech Used to Produce Deepfakes
Generative adversarial networks (GANs) are the AI technique most often associated with deepfakes. A GAN pits two neural networks against each other: a generator that produces fake data and a discriminator that judges whether the data is genuine. As the two networks train against each other, the resulting video or audio becomes realistic enough that telling it apart from the real thing can be difficult.
Tools such as DeepFaceLab and FaceSwap are freely available, so almost anyone can produce a deepfake. What worries many observers is how quickly that content can spread across social media platforms, messaging apps and video-sharing sites such as TikTok and YouTube.
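To make the generator-versus-discriminator idea concrete, here is a minimal, illustrative PyTorch sketch of a GAN training loop. The network sizes, random stand-in data and hyperparameters are assumptions chosen for brevity; production face-swap tools are built on far larger models and real training data, but the adversarial loop follows the same pattern.

```python
# Minimal GAN sketch (illustrative only): a generator learns to fool a
# discriminator that is simultaneously learning to spot the fakes.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),          # outputs a fake "image"
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),             # probability the input is real
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.rand(32, IMG_DIM) * 2 - 1       # stand-in for real training images
    fake = generator(torch.randn(32, LATENT_DIM))

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label its fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each side improves by exploiting the other's weaknesses, which is why the output keeps getting harder to distinguish from authentic footage.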
Real-World Examples and Consequences
Political Manipulation
Manipulated videos have been used to spread false information in politics. In 2020, a doctored video of House Speaker Nancy Pelosi circulated widely, altered to make her appear drunk. Although it was a crude edit rather than a true deepfake, it previewed how more sophisticated deepfake manipulation could spread online.
Even with the 2024 U.S. presidential election over, election integrity remains under close scrutiny. Experts warn that deepfakes can be used to confuse voters or damage candidates' reputations during campaigns.
Financial Fraud and Scams
A Hong Kong company lost $25 million after a deepfake video call convinced staff that a company executive had approved a transfer. In the U.S., similar scams have been aimed at banks and top company officials.
As deepfakes become more convincing, they are increasingly used for scams, blackmail and manipulation.
Pop Culture and Entertainment
The Dual-Use Dilemma
Not every use of deepfakes is harmful. Hollywood now uses AI to give actors new looks or to digitally recreate performers who have died. In The Mandalorian, a young Luke Skywalker appeared on screen through deepfake-style visual effects and voice synthesis. Handled responsibly, deepfake media can clearly support genuine creativity.
Fan-made deepfake videos that rack up millions of views show both the technology's popularity and the fine line it walks between creative and harmful use.
Legal and Ethical Considerations
No Comprehensive Federal Regulation of Deepfakes
There is currently no federal law in the U.S. that broadly prohibits producing or sharing deepfake videos. Some states, however, have acted. In 2019, California made it illegal to distribute deepfakes of political candidates during the 60 days before an election and to create deepfake pornography without the subject's consent.
Virginia and Texas have also passed laws criminalizing malicious uses of deepfakes, and many experts argue that a unified federal approach is needed to deal with this fast-growing threat.
Consent, Defamation and Digital Identity
Deepfake technology raises serious ethical questions, mainly around consent and the theft of someone's identity. Using a person's image without permission can be deeply unsettling and can damage their reputation.
According to Deeptrace, roughly 96% of deepfake videos found online were non-consensual pornography, overwhelmingly targeting women.
Addressing the Problem of Deepfakes
Technologies Used for Detection and AI Defense
Researchers and technology companies are constantly seeking better ways to detect deepfakes. Microsoft has introduced its Video Authenticator tool, and services such as Deepware Scanner help platforms screen new content. These systems look for telltale signs such as inconsistent facial expressions and unnatural blinking to flag deepfake videos online.
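As a rough illustration of the kind of behavioral cue detectors can exploit, the Python sketch below flags clips with an implausibly low blink rate. The function names, thresholds and the per-frame eye-openness scores (which would come from a separate face-landmark step, not shown) are all assumptions for the example; real detectors such as Video Authenticator rely on far more sophisticated signals.

```python
# Toy heuristic: humans blink roughly 15-20 times per minute, while some
# synthetic faces blink rarely or not at all. This is an illustrative
# sketch, not how any named detection product actually works.

def count_blinks(eye_openness, closed_threshold=0.2):
    """Count closed->open transitions in a sequence of per-frame eye-openness scores."""
    blinks, eyes_closed = 0, False
    for score in eye_openness:
        if score < closed_threshold:
            eyes_closed = True
        elif eyes_closed:          # eye reopened after being closed: one blink
            blinks += 1
            eyes_closed = False
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate falls far below the human norm."""
    minutes = len(eye_openness) / fps / 60
    return count_blinks(eye_openness) / max(minutes, 1e-9) < min_blinks_per_minute

# Example: a 10-second clip (300 frames) in which the eyes never close is flagged.
print(looks_suspicious([0.35] * 300))   # True
```

Simple cues like this are easy for newer generators to overcome, which is why detection research keeps shifting toward deeper forensic analysis.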
The U.S. Defense Advanced Research Projects Agency (DARPA) is funding the Semantic Forensics (SemaFor) program to develop systems that can reliably spot falsified images and videos.
Public Awareness and Media Literacy
Education is one of the most effective defenses against deepfakes. The more people understand how they are made, the better they can judge what they see online.
Media literacy classes in schools matter, and fact-checking platforms such as Snopes and PolitiFact help people verify news and claims online.
Deepfakes and the Future of Online Security
As artificial intelligence advances, deepfakes will only become more convincing. The coming years will likely bring more powerful synthetic media, along with stronger detection systems and, ideally, better regulation. If innovation and responsibility are kept in balance, deepfakes can still be put to positive use.
Conclusion
Deepfakes mark a turning point in how digital information is created and consumed. For all their artistic and entertainment value, they raise major questions about truth, privacy and democracy. Meeting those challenges will require the U.S. to combine strong laws, broader education and better detection technology.