Susan Wojcicki, the head of Google-owned YouTube, has said Google will increase the number of employees who police extreme content across its internet services to more than 10,000.
According to Wojcicki, YouTube spent the past year "testing new systems to combat emerging and evolving threats" and invested in "powerful new machine learning technology", and is now ready to apply that expertise to tackling "problematic content". The company has come under increasing pressure from politicians, law enforcement and advertisers to remove content promoting terrorism, child pornography and other illegal activity on the video-sharing site.
Reports of such content led several big brands, including Mars and Adidas, to pull advertising from the site.
It is hard to know at this stage whether machine learning can adequately flag disturbing content aimed at children; much of this material may be difficult for an algorithm to recognise as disturbing or creepy, which is why human content reviewers remain necessary.
Google also promised more manual checks to ensure that ads do not run alongside content that reflects badly on advertisers.
Following the furore over unsuitable videos, YouTube announced measures such as removing adverts from videos depicting family-entertainment characters engaged in violent behaviour, and blocking all comments on videos aimed at minors when inappropriate comments are posted.
Ms Wojcicki moved to reassure video-makers that they won't be adversely affected by any changes, saying: "We've heard loud and clear from creators that we have to be more accurate when it comes to reviewing content, so we don't demonetise videos by mistake".
Now, Wojcicki has explained how the platform plans to keep a closer eye on the videos it hosts by applying the lessons it learned fighting violent extremist content. She says the company has begun training its algorithms to improve child safety on the platform and to better detect hate speech.