Global Head of Trust & Safety Matt Halprin and Vice President, Product Management, Jennifer Flannery O’Connor recently shared insights into policy development processes at the company. Here are some key takeaways:
📌 Top Goal: preventing egregious real-world harm
📍 Consistently reviewing and updating policy as laws, regulations, and information/data change and evolve.
📍 Collaboration with NGOs, academics, and relevant experts across different viewpoints and different countries to inform policy review.
📍 Thinking ahead and being proactive, instead of reactive.
🛑 Once a policy update or new policy is deemed necessary, the Trust & Safety team makes it happen. Here’s how:
✅ Analysis starts with: 1) how common is the harmful content found on YT? and 2) how does existing policy address the subject?
✅ Analysis is done across all relevant videos, not just a single video.
✅ Options for a new or updated policy are generated, including scenarios for how each option would impact existing content and which enforcement actions could apply (age restriction, removal, etc.)
✅ Options are teased out, and one top choice is sent through rounds of assessment.
✅ Key goals to be achieved include: 1) Mitigate egregious real-world harm while balancing a desire for freedom of expression; and 2) Allow for consistent enforcement by thousands of content moderators across the globe.
✅ Final sign-offs are needed from various leads, and then from the highest levels: the Chief Product Officer (Neal Mohan) and CEO (Susan Wojcicki).
✅ Any disagreement starts the process over.
There is also a process, led by the Intelligence Desk team within Trust & Safety, focused on identifying emerging issues and the potential risks they bring to the platform.
Lastly, enforcement of policies is driven by both people and machine learning (ML) technology implementing guidelines. New guidelines are first given to a small group of human content moderators to gauge how successfully, or not, the policy can be enforced.
If successful, the guidelines are rolled out to a larger group of moderators, across different languages, backgrounds, and experience levels. Then the guidelines are improved upon (which can take a few months) before ML technologies are trained to implement the guidelines. The ML models are then tested. Once testing is complete, the policy is ready to launch with both human and machine review working together (humans reviewing the machine flags).
To help measure success, YouTube: 1) released a metric called the “violative view rate,” which looks at how many views on YouTube come from violative material; 2) tracks the number of appeals by creators; and 3) tracks the number of reinstatements.
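To make the metric concrete, here is a minimal sketch of the idea behind the violative view rate: the share of total views that land on content later found to violate policy. The data structure and sample numbers are hypothetical, for illustration only; YouTube’s published figure is estimated from sampled videos, not computed this directly.

```python
def violative_view_rate(videos):
    """Return the fraction of total views that came from violative videos.

    videos: iterable of dicts with 'views' (int) and 'violative' (bool).
    These field names are assumptions for this sketch.
    """
    total_views = sum(v["views"] for v in videos)
    if total_views == 0:
        return 0.0
    violative_views = sum(v["views"] for v in videos if v["violative"])
    return violative_views / total_views


# Hypothetical sample: 10,000 total views, 50 of them on violative content.
sample = [
    {"views": 9_000, "violative": False},
    {"views": 950, "violative": False},
    {"views": 50, "violative": True},
]
print(f"VVR: {violative_view_rate(sample):.2%}")  # → VVR: 0.50%
```

Expressing the metric view-weighted (rather than counting violative videos) reflects the stated goal: a violative video that few people ever saw matters less than one that racked up views before removal.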
As language and expression continue to evolve, so will the Community Guidelines.
Read the full blog post from Google.