
Moderating reviews with the help of machine learning

As soon as someone posts a review, we send it to our moderation system to make sure the review doesn’t violate any of our policies. You can think of our moderation system as a security guard that stops unauthorized people from getting into a building - but instead, our team is stopping bad content from being posted on Google.

Given the volume of reviews we regularly receive, we’ve found that we need both the nuanced understanding that humans offer and the scale that machines provide to help us moderate contributed content. They have different strengths, so we continue to invest tremendously in both.
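
The post doesn’t spell out how the hand-off between machines and people works, but the division of labor it describes (machines for scale, humans for nuance) maps onto a familiar pattern: let a classifier resolve the confident cases automatically and route everything ambiguous to a human queue. The sketch below is a hypothetical Python illustration of that pattern; the moderate function, score_fn callable, and thresholds are assumptions for illustration, not Google’s implementation.

```python
# A minimal, hypothetical sketch of a machine-first moderation gate that
# escalates uncertain cases to human operators. It is NOT Google's pipeline;
# the function names, thresholds, and scoring callable are assumptions made
# for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModerationResult:
    decision: str           # "publish", "reject", or "human_review"
    violation_score: float  # model's estimate that the review violates policy

def moderate(review_text: str,
             score_fn: Callable[[str], float],
             reject_above: float = 0.9,
             publish_below: float = 0.1) -> ModerationResult:
    """Route a review using a policy-violation score in [0, 1]."""
    score = score_fn(review_text)
    if score >= reject_above:      # machine is confident the review violates policy
        return ModerationResult("reject", score)
    if score <= publish_below:     # machine is confident the review is acceptable
        return ModerationResult("publish", score)
    # The ambiguous middle is where human nuance matters most.
    return ModerationResult("human_review", score)

if __name__ == "__main__":
    dummy_model = lambda text: 0.5  # stand-in for a trained classifier
    print(moderate("Great service and friendly staff!", dummy_model))
```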

Training a machine on the difference between acceptable and policy-violating content is a delicate balance. For example, sometimes the word “gay” is used as a derogatory term, and that’s not something we tolerate in Google reviews. But if we teach our machine learning models that it’s only used in hate speech, we might erroneously remove reviews that promote a gay business owner or an LGBTQ+ safe space. Our human operators regularly run quality tests and complete additional training to remove bias from the machine learning models.
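
To make the over-removal risk concrete, here is a deliberately naive, hypothetical sketch: a context-blind keyword blocklist flags a supportive review and a derogatory one identically because it only sees the word, not how it is used. The term list, filter, and example reviews are assumptions made for illustration and reflect nothing about Google’s actual policies, models, or code.

```python
# Illustrative only: why a word list can't make the distinction described
# above. Both reviews contain the same word, but only one violates policy;
# a pure blocklist flags them identically. The term list and reviews are
# hypothetical and do not reflect Google's actual rules or code.
FLAGGED_TERMS = {"gay"}  # hypothetical blocklist, shown only to expose the failure mode

def naive_keyword_filter(review: str) -> bool:
    """True if a context-blind blocklist would remove this review."""
    text = review.lower()
    return any(term in text for term in FLAGGED_TERMS)

reviews = [
    "Proud to support this gay-owned business and LGBTQ+ safe space.",  # should stay up
    "That's so gay, worst shop on the street.",                         # derogatory use; should come down
]
for review in reviews:
    print(naive_keyword_filter(review), "-", review)
# Both lines print True: the blocklist would remove the supportive review too,
# which is the over-removal the models (and the human quality tests behind
# them) have to avoid by weighing context rather than single words.
```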
