We’ve been warning you for years about the rise of online censorship by the establishment elite in Silicon Valley and today that threat goes to the next level with the US Defense Department launching an assault on online freedom to stop “disinformation.” We are rapidly approaching a dystopian reality that will stifle all forms of dissent against those who rule over us. The question is, why do you continue to allow it by giving all of your data, time, and energy to big tech platforms?
Fake news and social media posts are such a threat to U.S. security that the Defense Department is launching a project to repel “large-scale, automated disinformation attacks,” as the top Republican in Congress blocks efforts to protect the integrity of elections.

The Defense Advanced Research Projects Agency wants custom software that can unearth fakes hidden among more than 500,000 stories, photos, video and audio clips.
If successful, the system may, after four years of trials, expand to detect malicious intent and prevent viral fake news from polarizing society.

“A decade ago, today’s state-of-the-art would have registered as sci-fi — that’s how fast the improvements have come,” said Andrew Grotto at the Center for International Security at Stanford University.
“There is no reason to think the pace of innovation will slow any time soon.”

U.S. officials have been working on plans to prevent outside hackers from flooding social channels with false information ahead of the 2020 election. The drive has been hindered by Senate Majority Leader Mitch McConnell’s refusal to consider election-security legislation. Critics have labeled him #MoscowMitch, saying he left the U.S. vulnerable to meddling by Russia, prompting his retort of “modern-day McCarthyism.”
President Donald Trump has repeatedly rejected allegations that dubious content on platforms like Facebook, Twitter and Google aided his election win. Hillary Clinton supporters claimed a flood of fake items may have helped sway the results in 2016.

“The risk factor is social media being abused and used to influence the elections,” Syracuse University assistant professor of communications Jennifer Grygiel said in a telephone interview. “It’s really interesting that Darpa is trying to create these detection systems, but good luck is what I say. It won’t be anywhere near perfect until there is legislative oversight.
There’s a huge gap and that’s a concern.”

False news stories and so-called deepfakes are increasingly sophisticated, making them more difficult for data-driven software to spot. AI imagery has advanced in recent years and is now used by Hollywood, the fashion industry and facial recognition systems. Researchers have shown that generative adversarial networks — or GANs — can be used to create fake videos.
Famously, Oscar-winning filmmaker Jordan Peele created a fake video of former President Barack Obama talking about the Black Panthers, Ben Carson, and making an alleged slur against Trump, to highlight the risk of trusting material online.

After the 2016 election, Facebook Chief Executive Officer Mark Zuckerberg played down fake news as a challenge for the world’s biggest social media platform. He later signaled that he took the problem seriously and would let users flag content and enable fact-checkers to label stories in dispute.
These judgments subsequently prevented flagged stories from being turned into paid advertisements, one key avenue toward viral promotion.

In June, Zuckerberg said Facebook made an “execution mistake” when it didn’t act fast enough to identify a doctored video of House Speaker Nancy Pelosi in which her speech was slurred and distorted.
“Where things get especially scary is the prospect of malicious actors combining different forms of fake content into a seamless platform,” Grotto said. “Researchers can already produce convincing fake videos, generate persuasively realistic text, and deploy chatbots to interact with people. Imagine the potential persuasive impact on vulnerable people that integrating these technologies could have: an interactive deepfake of an influential person engaged in AI-directed propaganda on a bot-to-person basis.”

By increasing the number of algorithmic checks, the military research agency hopes it can spot fake news with malicious intent before it goes viral.

“A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies,” the agency said in its Aug. 23 concept document for the Semantic Forensics program.
The agency added: “These SemaFor technologies will help identify, deter, and understand adversary disinformation campaigns.”

Current surveillance systems are prone to “semantic errors.” An example, according to the agency, is software not noticing mismatched earrings in a fake video or photo.
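The asymmetry the agency describes — a falsifier must get every semantic detail right, while a defender needs only one slip — can be sketched in a few lines. This is purely an invented toy illustration, not DARPA's actual software: the attribute names and checks below are hypothetical stand-ins for the kind of cues (mismatched earrings, inconsistent lighting, audio that doesn't match lip movement) a real detector would extract from pixels and audio.

```python
# Toy sketch of the "one inconsistency is enough" asymmetry from the
# SemaFor concept document. All attribute names are invented; real
# detectors operate on raw media, not hand-labeled attributes.

def semantic_checks(clip):
    """Yield (check_name, passed) pairs for a dict of extracted attributes."""
    yield ("earrings match", clip.get("left_earring") == clip.get("right_earring"))
    yield ("lighting consistent", clip.get("shadow_dir") == clip.get("light_dir"))
    yield ("audio matches lips", clip.get("audio_phonemes") == clip.get("lip_phonemes"))

def failed_checks(clip):
    """A defender flags the clip as soon as any single check fails."""
    return [name for name, ok in semantic_checks(clip) if not ok]

# A fake that got everything right except one detail:
fake = {
    "left_earring": "hoop", "right_earring": "stud",
    "shadow_dir": "left", "light_dir": "left",
    "audio_phonemes": "abc", "lip_phonemes": "abc",
}
print(failed_checks(fake))  # ['earrings match']
```

The point of the sketch is the cost structure: each added detector multiplies the falsifier's burden, while the defender's verdict needs only one non-empty result.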
Source: Cybersecurity – Bloomberg