
The Creepy New Internet Trend That’s Seriously Threatening People’s Privacy

With enough effort, you can find virtually anything on the Internet. While that might seem like a good thing when you’re trying to dig up a specific funny cat video, not everything out there is so benign. Sometimes, it’s better to avoid delving too deeply into your Google search results; you never know what things are better off unseen.

In fact, there’s one creepy online threat that’s hiding in plain sight. Not only is it unsettling, but it’s also proving to be a serious threat to both people’s privacy and the future of world politics.

Imagine that you were absentmindedly browsing the Internet one evening after work. After checking your emails, maybe you decided to click over to Facebook. There, you saw something strange in your newsfeed.

There was a video of a celebrity, in this case Mark Zuckerberg, playing. At first, he seemed to be giving a standard interview about Facebook’s business practices. But then his presentation took a strange turn.

Before long, he was explaining that he was the “one man with total control of billions of people’s stolen data, all their secrets, their lives, their futures.” He was, essentially, Big Brother from 1984. It sounded like a conspiracy theory; what was happening?

If the video sounds unbelievable, that's because it wasn't really Zuckerberg speaking. In fact, the clip is emblematic of a troubling new digital trend that realistically alters videos and takes aim at celebrities, placing everyone's personal security at risk.

These videos, commonly known as deepfakes, sound like something out of a sci-fi movie. How could someone have created a fake video showing a public figure making outlandish claims without futuristic technology?

These videos follow the same principle as using Photoshop to manipulate pictures, just applied to a moving image. There is one crucial, dangerous difference between the two media, however.

Since manipulated images are static, there’s a limit to what they can depict; anything too outlandish makes the fake obvious. Doctored videos, however, can literally place words in someone’s mouth.

The idea of unsavory characters using your image as a digital puppet is undeniably creepy, but it still gets worse. These manipulated videos can be dangerous in a couple of concrete ways.

Even if a video is debunked as fake, a moving, talking image still has some staying power. That’s coming to the forefront in politics, where deepfakes can damage someone’s reputation with the voting public.

In May 2019, for example, someone created a spoof video of US Speaker of the House Nancy Pelosi slurring her words during a press conference. The creator had dramatically slowed down an actual clip of her talking, and it spread far and wide.

The video was even shared on Donald Trump’s Twitter account; that post alone was retweeted over 30,000 times. Even though the clip has been debunked, it still lodged itself in people’s memories.
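Notably, the Pelosi clip required no artificial intelligence at all. As a rough illustration of how trivial that kind of edit is, here is a minimal Python sketch using the OpenCV library (the file names and the 75 percent speed factor are hypothetical): it simply rewrites a clip's frames at a lower frame rate, which is enough to make anyone's speech sound slow and slurred.

```python
import cv2  # OpenCV: pip install opencv-python

# Re-time a clip to 75% speed by rewriting its frames at a lower
# frame rate. No frames are edited or synthesized; playback is
# merely stretched. "input.mp4" and "slowed.mp4" are hypothetical.
SLOWDOWN = 0.75

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

out = cv2.VideoWriter(
    "slowed.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    fps * SLOWDOWN,  # lower frame rate means slower playback
    (width, height),
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)  # every frame is copied unchanged

cap.release()
out.release()
```

(OpenCV ignores the audio track, which a real manipulation would also slow down, but the point stands: the edit takes a dozen lines and no special skill.)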

The manipulated media took on extra importance with elections approaching. “Deepfakes are one of the most alarming trends I have witnessed as a Congresswoman to date,” wrote US Congresswoman Yvette Clarke.

“If the American public can be made to believe and trust altered videos,” she continued, “our democracy is in grave danger. We need to work together to stop deepfakes from becoming the defining feature of the 2020 elections.”

But the videos’ manipulation of truth can also affect other facets of life. In fact, some legal experts are concerned that they will set a dangerous precedent in the courtroom.

If criminals can easily manipulate moving images, video footage may become inadmissible as evidence in a court of law. Unfortunately, many technology companies haven't taken a stand to stop the spread of these dangerous deepfakes.

One common video-manipulation program, FakeApp, is readily available for download online. While it's not the easiest tool to use, anyone can theoretically learn to create convincing deepfakes if they're willing to practice.

Samsung engineers have also created their own algorithm capable of turning a single still photo into a clip of that person speaking. They tested it on photos of celebrities with chilling success.

While that algorithm isn’t public yet, the message is clear: the technology to create deepfakes exists and is uncomfortably accessible. There is one digital hope to fight against them, however.

While poorly made deepfakes are easy to identify, higher-quality ones are less obvious. That's why computer scientists at the University at Albany came up with a new plan to fight fire with fire.

They created their own algorithm, built to identify instances in which manipulation software has digitally transformed a video. This allows the program to flag deepfakes, ideally preventing their proliferation.
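The Albany team's exact model isn't detailed here, but most detection efforts share the same backbone: a binary classifier trained to score video frames as real or manipulated. The toy PyTorch sketch below illustrates that general recipe; the tiny network, the 64x64 face crops, and the 0.5 flagging threshold are all illustrative assumptions, not the researchers' actual design.

```python
import torch
import torch.nn as nn

# Toy deepfake detector: a small CNN that scores a face crop as
# real (low) or manipulated (high). This sketches the general
# approach only, not the University at Albany algorithm.
class FakeFrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),  # single "fake" logit per frame
        )

    def forward(self, x):
        return self.head(self.features(x))

model = FakeFrameClassifier()

# A clip is flagged when the average per-frame "fake" probability
# crosses a threshold. The random tensors below stand in for real
# 64x64 face crops extracted from a video.
frames = torch.rand(10, 3, 64, 64)
with torch.no_grad():
    probs = torch.sigmoid(model(frames))
print("flag as deepfake:", probs.mean().item() > 0.5)
```

In practice, a detector like this would be trained on large labeled collections of authentic and synthesized faces, and it would need constant retraining as generation tools improve.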
