
Facebook’s artificial-intelligence software, which is supposed to detect violence in livestreams on the platform, did not react to the video of the Christchurch massacre. “To do that, we first have to provide our systems with large amounts of data from precisely this kind of content, which is difficult, as such events are thankfully rare,” a company spokesperson said today.
Another challenge for the software is distinguishing real violence from broadcasts of video-game footage. “For example, if our systems flagged thousands of hours of livestreamed video games, our reviewers could miss the important real-world videos” through which Facebook could alert first responders.
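The spokesperson’s concern is, at its core, a base-rate problem. A rough back-of-the-envelope sketch with entirely hypothetical numbers (Facebook has published no such figures) shows how quickly false positives swamp a review queue when real incidents are rare:

```python
# Entirely hypothetical numbers; for illustration only.
real_incidents = 10            # assumed real violent livestreams per day
game_streams = 1_000_000       # assumed video-game livestreams per day

true_positive_rate = 0.99      # assumed: 99% of real violence gets flagged
false_positive_rate = 0.01     # assumed: 1% of game streams are misflagged

flagged_real = real_incidents * true_positive_rate    # ~10 correct flags
flagged_games = game_streams * false_positive_rate    # ~10,000 false alarms

precision = flagged_real / (flagged_real + flagged_games)
print(f"flags per day: {flagged_real + flagged_games:,.0f}")
print(f"share that is real violence: {precision:.2%}")  # roughly 0.1%
```

Under these assumed rates, fewer than one in a thousand flagged streams would show real violence, which is the scenario the spokesperson describes: reviewers buried in game footage while the rare real-world video waits in the queue.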
1.2 million upload attempts
The attacker, who killed 50 people in attacks on two mosques in Christchurch, New Zealand, on Friday, broadcast the assault in real time on Facebook Live. The company reiterated earlier figures: the 17-minute livestream was seen by fewer than 200 users, and the first user report reached the network 12 minutes after the broadcast ended. After a livestream ends, a recording remains available on the platform.
It is still unclear how long the attacker’s original video stayed online before Facebook removed it. A spokesman said that had someone reported the video during the livestream, the report would have been processed faster. The original video was viewed around 4,000 times; it later served as the source for copies that several users uploaded to other services.
While Facebook’s software blocked 1.2 million attempts to re-upload the video within the first 24 hours, about 300,000 uploads nevertheless slipped through. One reason, among others, is that more than 800 modified versions of the video were in circulation.
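Why would 800 variants defeat an upload filter? One plausible explanation, offered here without knowledge of Facebook’s actual pipeline: upload filters commonly compare a fingerprint (hash) of each new file against fingerprints of known banned content, and an exact cryptographic hash changes completely after even a trivial edit. The sketch below uses placeholder data rather than real video to illustrate the difference between exact and perceptual fingerprints:

```python
import hashlib

# Sketch only, not Facebook's actual system: placeholder byte strings
# stand in for the known video and a slightly re-encoded copy of it.
original = b"...video bytes..."
reencoded = b"...video bytes!.."

# An exact cryptographic hash changes completely after any edit, so every
# watermark, crop, or re-encode would need its own blocklist entry; one
# plausible reason 800+ variants each had to be fingerprinted separately.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(reencoded).hexdigest())  # entirely different digest

# Perceptual hashes instead summarize visual content, e.g. by comparing
# neighboring pixel brightness, so small edits yield similar fingerprints.
def dhash(pixels):
    """Toy difference hash over one row of grayscale pixel values."""
    return tuple(a < b for a, b in zip(pixels, pixels[1:]))

frame = [12, 40, 41, 90, 88, 90, 200, 180]  # hypothetical brightness values
brightened = [p + 10 for p in frame]        # a simple global edit
print(dhash(frame) == dhash(brightened))    # True: fingerprint survives
```

Real matching systems are far more sophisticated, but the trade-off is the same: the more robust the fingerprint is to edits, the more innocent near-matches it risks catching.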