Can Technology Stop Mass Shootings?

We are all devastated and disgusted by the events of this past week. As with most human problems, people want to turn to technology to solve them: better background checks, artificial intelligence to flag high-risk individuals, and other Big Brother-style initiatives. Here’s a great piece by James Rundle at The Wall Street Journal on why it’s not as simple as all that…

Can Technology Stop Mass Shootings?

President Trump’s call for law enforcement and social-media companies to develop technology to prevent mass shootings cuts to the heart of several enormous challenges, experts say.

“I am directing the Department of Justice to work in partnership with local state and federal agencies, as well as social-media companies, to develop tools that can detect mass shooters before they strike,” Mr. Trump said at a press conference Monday.

His comments came after shootings over the weekend left at least 31 dead in El Paso, Texas, and Dayton, Ohio.

The suspect in the El Paso shootings had allegedly posted a manifesto to the online forum 8chan about 20 minutes before the attack. The forum had hosted such material before, including remarks from a shooter who attacked a synagogue in Poway, Calif., in April. The fact that 8chan has become a gathering place for extremists who have turned violent raises questions about whether law enforcement could have detected the potential threat of the El Paso shooter.

Police departments and federal agencies have used software that monitors social-media activity for several years. But these capabilities can vary wildly between locations, said Brian Jackson, a senior physical scientist at the Rand Corp. who focuses on homeland security, criminal justice and emergency preparedness.

“Most law-enforcement agencies in the United States, particularly at the state and local level, don’t have a whole lot of capability and technical people to manage and respond to digital evidence more generally, much less real-time detection,” he said.

The technical problem lies in part with the scale of social-media data and the speed with which law enforcement has to respond to threats. Around 4.75 billion pieces of content are shared on Facebook Inc.’s social-media platform per day, while about 126 million people are active on Twitter Inc.’s website daily, according to company filings. Identifying specific threats in a post or tweet challenges technology experts and law enforcement. Detecting them in time to stop a planned attack may be impossible.

“It’s a lot easier said than done,” said James Hayes, a vice president at security consulting firm Guidepost Solutions LLC who earlier worked as a special agent in charge of Homeland Security Investigations’ New York field office.

Still, law-enforcement agencies know that social-media monitoring is essential, he said: “The reality is that it’s going to be a significant challenge to be quick enough to identify a specific location and take action to prevent it, but you absolutely have to do everything you can to detect that.”

Mass shooters in recent years who have published material online have tended to do so an hour or less before the attacks, giving law enforcement a narrow window in which to respond to threat intelligence, Mr. Jackson said.

In the case of the Christchurch, New Zealand, attacks in 2019, the perpetrator live-streamed his assault via Facebook. The live video wasn’t detected by the company’s artificial-intelligence systems and wasn’t reported by a user until 29 minutes after it began. The shooters in El Paso and in Poway both allegedly wrote that they were inspired by the Christchurch attacker.

Monitoring potential attackers now yields less information than in the past, Mr. Hayes said. Today’s attackers are aware that they are being listened to, and can turn that to their advantage.

“I worked [in counter] terrorism from 2002 to 2007, and you’d hear people talking months in advance, talking about who they were coordinating with, and the trail was a lot fresher,” he said. “That’s not done so much anymore; these individuals that are planning these attacks aren’t sharing that information on the web.”

In a hearing before the House Intelligence and Counterterrorism subcommittee in June, academics and policy specialists outlined some of the limitations of modern technology when it comes to monitoring terrorist content online. Alex Stamos, an adjunct professor at Stanford University’s Freeman-Spogli Institute and a former chief security officer at Facebook, told the hearing that current AI technology was “more artificial than intelligent,” even at the cutting edge.

