Live-streamed Carnage: Tech’s Hard Lessons from Mass Killings

Mass shooters like the one who carried out the attack at a Buffalo, NY supermarket don’t stop at planning their brutal assaults. They also develop marketing plans, arranging to live-stream their massacres on social platforms in hopes of fomenting more violence.

Sites like Twitter, Facebook and now the game-streaming platform Twitch have learned painful lessons about dealing with the violent videos that often accompany such shootings. But experts are calling for a broader discussion of live streaming, including whether it should exist at all, since once these videos come online they are nearly impossible to erase completely.

The self-described white supremacist gunman who police say killed 10 people, most of them Black, at a Buffalo supermarket on Saturday mounted a GoPro camera on his helmet to stream his attack live on Twitch, the video game streaming platform also used by a gunman who killed two people at a synagogue in Halle, Germany, in 2019.

He had already outlined his plan in a detailed but disjointed set of online diary entries that were apparently posted publicly before the attack, although it is unclear how people might have seen them. His goal: to inspire imitators and spread his racist beliefs. After all, he was a copycat himself.

He decided not to stream on Facebook, as another mass shooter did when he killed 51 people at two mosques in Christchurch, New Zealand, three years ago. Unlike Twitch, Facebook requires users to sign up for an account to watch live streams.

Still, not everything went according to plan. By most accounts, the platforms responded faster to stop the spread of the Buffalo video than they did after the 2019 Christchurch shooting, said Megan Squire, a senior researcher and technology expert at the Southern Poverty Law Center.

Another Twitch user watching the live video likely flagged it to the platform’s content moderators, she said, which would have helped Twitch stop the stream less than two minutes after the first shots, according to a company spokesperson. Twitch did not say how the video was flagged.

“In this case, they did very well,” Squire said. “The fact that the video is so hard to find now is proof of that.”

In 2019, the Christchurch shooting was streamed live on Facebook for 17 minutes and quickly spread to other platforms. This time around, the platforms generally seemed to coordinate better, most notably sharing digital “signatures” of the video used to detect and remove copies.

But platforms’ algorithms may have a harder time identifying a copied video if someone has edited it. That has created problems, as when some internet forum users recut the Buffalo video with twisted attempts at humor. Tech companies would need to use “more sophisticated algorithms” to detect those fuzzy matches, Squire said.
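The “signatures” in question are typically perceptual hashes: compact fingerprints that stay nearly identical when a frame is re-encoded or lightly altered, so near-duplicates can be caught by comparing fingerprints rather than raw pixels. Here is a minimal sketch of the idea using a difference hash (“dHash”) over individual frames; the function names and the match threshold are illustrative assumptions, not any platform’s actual system:

```python
from PIL import Image

def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Difference hash: fingerprint a frame by comparing adjacent pixels."""
    # Grayscale and shrink to (hash_size + 1) x hash_size, discarding the
    # fine detail that re-encoding or recompression tends to change.
    small = image.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left < right)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits on which two fingerprints disagree."""
    return bin(a ^ b).count("1")

# Illustrative threshold: frames whose 64-bit hashes differ by only a few
# bits are probably the same footage despite recompression or watermarks.
MATCH_THRESHOLD = 10

def is_probable_copy(frame_a: Image.Image, frame_b: Image.Image) -> bool:
    return hamming_distance(dhash(frame_a), dhash(frame_b)) <= MATCH_THRESHOLD
```

Because the comparison tolerates a few flipped bits rather than demanding exact equality, a recompressed or watermarked copy can still match; heavier edits such as cropping, mirroring or overlaid commentary push the distance past any fixed threshold, which is the gap the “more sophisticated algorithms” Squire describes would need to close.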

“It feels darker and more cynical,” she said of the attempts to spread the footage in recent days.

Twitch has more than 2.5 million viewers at any given moment; roughly 8 million content creators stream video on the platform each month, according to the company. The site uses a combination of user reports, algorithms and moderators to detect and remove violent content. The company said it quickly removed the gunman’s broadcast, but has shared few details about what happened on Saturday, including whether the stream was reported or how many people watched the shooting live.

A Twitch spokesperson said the company shared the live stream with the Global Internet Forum to Counter Terrorism, a nonprofit group created by tech companies to help other platforms monitor their own sites for rebroadcasts. But clips of the video still made their way to other platforms, including the site Streamable, where it was viewed millions of times. A spokesperson for Hopin, the company that owns Streamable, said Monday that it was working to remove the videos and terminate the accounts of those who uploaded them.

Looking ahead, platforms could face new moderation complications from a Texas law, reinstated by an appeals court last week, that prohibits large social media companies from “censoring” users’ viewpoints. The gunman “had a very specific point of view,” and the law is ambiguous enough to create risk for platforms that moderate people like him, said Jeff Kosseff, an associate professor of cybersecurity law at the US Naval Academy. “It really puts a thumb on the scale of keeping up harmful content,” he said.

Alexa Koenig, executive director of the Human Rights Center at the University of California, Berkeley, said there has been a shift in how tech companies respond to these events. In particular, Koenig said, coordination among the companies to create shared repositories of digital fingerprints of extremist videos, so they cannot be re-uploaded to other platforms, “has been an incredibly important development.”

A Twitch spokesperson said the company will review how it responded to the shooter’s live stream.

Experts suggest sites like Twitch could exercise more control over who can live-stream and when, for example by adding broadcast delays or by allowing only vetted users to stream while banning rule-breakers. More broadly, Koenig said, “there’s also a general societal conversation that needs to happen around the usefulness of live streaming and when it’s valuable, when it’s not, and how we put safe norms around how it’s used and what happens if you use it.”
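As a rough illustration of how such controls might fit together, here is a minimal sketch of a pre-broadcast gate that combines an allowlist with a delay buffer. Everything in it (the class name, the 30-second delay, the moderator kill switch) is a hypothetical assumption for illustration, not how Twitch or any other platform actually works:

```python
import collections
import time

class StreamGate:
    """Hypothetical pre-broadcast gate: only allowlisted users may go live,
    and output is buffered for a fixed delay so a moderator can cut a
    stream before its newest frames ever reach viewers."""

    def __init__(self, allowlist: set[str], delay_seconds: float = 30.0):
        self.allowlist = allowlist
        self.delay = delay_seconds
        self.buffer: collections.deque = collections.deque()
        self.killed: set[str] = set()

    def can_go_live(self, user: str) -> bool:
        # Vetted users only; banned rule-breakers stay out.
        return user in self.allowlist and user not in self.killed

    def ingest(self, user: str, frame: bytes) -> None:
        # Frames enter a time-stamped buffer instead of going straight out.
        self.buffer.append((time.monotonic(), user, frame))

    def kill(self, user: str) -> None:
        # A moderator decision drops the stream and all its buffered frames.
        self.killed.add(user)
        self.buffer = collections.deque(
            (t, u, f) for t, u, f in self.buffer if u != user
        )

    def release(self) -> list[bytes]:
        # Emit only frames older than the delay window.
        cutoff = time.monotonic() - self.delay
        out = []
        while self.buffer and self.buffer[0][0] <= cutoff:
            _, user, frame = self.buffer.popleft()
            if user not in self.killed:
                out.append(frame)
        return out
```

The trade-off such a design makes explicit is latency against safety: the longer the delay window, the more time moderators have to act before footage goes public, but the less “live” the stream feels to viewers.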

Another option, of course, would be to end live streaming altogether. But that is almost impossible to imagine, given how heavily tech companies rely on live streams to attract and keep users engaged, and to make money.

Freedom of expression, Koenig said, is often the reason tech platforms give for enabling this kind of technology, alongside the unspoken profit motive. But that has to be balanced “with privacy rights and some of the other issues that come up in this case,” Koenig said.
