According to the Washington Post, Congress is drafting a bill to create a federal social media task force to police speech on the internet and target pro-free speech websites like Gab. They are framing this around "protecting users from harmful online content," when in reality users already have all the tools they need as individuals to protect themselves. The federal government doesn't belong in the position of policing online speech any more than Big Tech companies do.
What these lawmakers fail to realize is that the signal cannot be stopped. The open source and decentralized version of the internet is already here. Anyone, anywhere in the world can now create and host their own Gab Social server with their own rules. We can't stop them. The government can't stop them. Big Tech can't stop them.
More speech is always the answer. So long as Gab exists and so long as the First Amendment is law in the United States, there is absolutely nothing that can be done to force us or any other platform to police WrongThink, "offensive" speech, and dissent.
Congressional lawmakers are drafting a bill to create a “national commission” at the Department of Homeland Security to study the ways that social media can be weaponized — and the effectiveness of tech giants’ efforts to protect users from harmful content online.
The draft House bill obtained by The Washington Post is slated to be introduced and considered next week. If passed, the commission would be empowered — with the authority to hold hearings and issue subpoenas — to study the way social media companies police the Web and to recommend potential legislation. It also would create a federal social media task force to coordinate the government’s response to security issues.
The effort reflects a growing push by members of Congress to combat online hate speech, disinformation and other harmful content online, including a hearing held Wednesday where Senate lawmakers questioned Facebook, Google and Twitter executives to probe whether their platforms have become conduits for real-world violence.
All three tech giants told lawmakers at the Wednesday hearing that they have made progress in combating dangerous posts, photos and videos — improvements they attributed largely to advancements in their artificial-intelligence tools. But some Democrats and Republicans in Congress still contend the companies haven’t acted aggressively enough.
“I would suggest even more needs to be done, and it needs to be better, and you have the resources and technological capability to do more and better,” Democratic Sen. Richard Blumenthal (Conn.) said at the hearing.
Lawmakers have grown increasingly concerned about the use of social media sites as conduits for violence and extremism, pointing to recent attacks including the mass shooting in Christchurch, New Zealand. Users uploaded videos of the deadly incidents at two mosques earlier this year, evading tech giants’ censors and then proving difficult to scrub.
But the most vile content has appeared on sites such as Gab, a haven for the alt-right, and 8chan, an anonymous message board. The latter site has been taken down in the aftermath of a shooting in El Paso this year that left 22 people dead. The suspect there is believed to have posted a manifesto to 8chan before carrying out his attack.
Lawmakers led by Rep. Bennie Thompson (D-Miss.), chairman of the House Homeland Security Committee, grilled the owner of 8chan at a private session this year. Thompson later said he had plans for a bill that would create the social media commission.
“One thing’s for sure — the challenge of preventing online terrorism content is one of the greatest post-9/11 homeland security challenges,” he said in a statement Wednesday.
In the Senate, the tech giants faced similar concerns from lawmakers. “In today’s Internet-connected society, misinformation, fake news, deep fakes and viral online conspiracy theories have become the norm,” said Republican Sen. Roger Wicker (Miss.), the chairman of the Senate Commerce Committee, to open the Wednesday hearing.
In response, Facebook, Google and Twitter said during their testimony that they had seen success in deploying automated tools to police for hate, violence and terrorist propaganda.
YouTube said nearly 90 percent of the 9 million videos it had removed in the second quarter of the year had been flagged by automated tools. Those tools played a major role in removing videos, comments and channels flagged for hate speech, which the company said had spiked in recent months.
Facebook said this week it would begin using police training videos to help its automated tools better detect first-person shooting videos like the one recorded in Christchurch. The company said its detection system, which was designed to automatically flag and remove videos showing violence, sex or objectionable content, now finds a rule violation on its live-streaming system in an average of 12 seconds. Also this week, Facebook announced updates to its efforts to stop and remove hate speech, including unveiling a roughly 40-person independent board that will oversee content decisions and shape company policy.