Facebook to train AI systems using police firearms training videos


FILE – In this March 29, 2018 file photo, the Facebook logo appears on a screen at Nasdaq in Times Square, New York. (AP Photo/Richard Drew, file)

(CBS News) – Facebook will work with law enforcement organizations to train its artificial intelligence systems to recognize videos of violent events, the company said Tuesday. The social media giant’s AI systems failed to detect a live-streamed video of a mass shooting at a mosque in Christchurch, New Zealand, in March.

The effort will use body-cam footage of firearms training provided by U.S. and U.K. government and law enforcement agencies. The aim is to develop systems that can automatically detect first-person violent events without also flagging similar footage from movies and video games.

The AI training is part of a broader push to curb extremism on Facebook’s platforms, an effort that has so far met with mixed success. In March, the company expanded its definition of prohibited content to ban U.S. white nationalist and white separatist material as well as content from international terrorist groups. It says it has banned 200 white supremacist organizations and removed 26 million pieces of content related to global terrorist groups like ISIS and al Qaeda.

Extremist videos are just one item in a long list of troubles Facebook faces. It was fined $5 billion by U.S. regulators over its privacy practices. A group of state attorneys general has launched its own antitrust investigation into Facebook, and the company is also part of broader investigations into “big tech” by Congress and the U.S. Justice Department.

More regulation might be needed to deal with the problem of extremist material, said Dipayan Ghosh, a former Facebook employee and White House tech policy adviser who is currently a Harvard fellow.

“Content takedowns will always be highly contentious because of the platforms’ core business model to maximize engagement,” Ghosh told the Associated Press. “And if the companies become too aggressive in their takedowns, then the other side — including propagators of hate speech — will cry out.”

Copyright 2020 Nexstar Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
