Social media platforms drive the massive volume of user-generated content (UGC) shared online. The global UGC platform market is expected to reach $18.65 billion by 2028¹. This exponential growth reinforces the need for more stringent, proactive content moderation from these platforms.
A world-leading video and photo-sharing social media platform partnered with TaskUs to improve the accuracy, efficiency, and performance of its Machine Learning (ML) model's text and image classification capabilities, helping ensure the integrity and security of the platform for all its users.
The client had developed an ML model with a previous outsourcing partner to automatically block sensitive text content on its platform, but the model could not identify the nuances of certain colloquial words and phrases.
TaskUs’ #RidiculouslyGood Teammates established a critical human review and data classification initiative to identify gaps and potential improvements in the client’s ML model:
Improved the ML models across seven languages to understand emojis and text trickery.
Implemented an intensive two-week-long training program to master the client’s various policies and review tooling processes.
Established a proactive, two-way real-time and weekly communication process.
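The “text trickery” noted above, such as character substitutions and emoji obfuscation, is typically countered with a normalization pass before text reaches a classifier. The sketch below is a minimal, hypothetical illustration of that idea; the substitution map and function names are illustrative assumptions, not the client’s actual pipeline.

```python
import unicodedata

# Illustrative leetspeak map: characters commonly swapped in to evade
# keyword filters. A production map would be far larger and per-language.
LEET_MAP = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"}
)

def normalize(text: str) -> str:
    """Lowercase, undo common character substitutions, fold accents,
    and drop emoji/symbols so the classifier sees plain text."""
    text = text.lower().translate(LEET_MAP)
    # Decompose accented characters, then keep only ASCII letters,
    # digits, and whitespace (this drops emoji and stray symbols).
    text = unicodedata.normalize("NFKD", text)
    return "".join(
        c for c in text if c.isascii() and (c.isalnum() or c.isspace())
    ).strip()

print(normalize("fr33 m0n3y 💰"))  # → "free money"
```

A real moderation system would pair normalization like this with per-language models and human review, since obfuscation tactics evolve faster than any static mapping.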
Social media platforms should continuously improve their ML capabilities, as the exponential increase in user numbers makes protecting people from harmful content an ever-growing challenge.
TaskUs leverages years of in-depth experience in both data classification and ML to help create a safer internet for all.
Download our case study, Text and Image Classification for a Social Media Company, to learn more about our critical human review initiative and how to use this framework to deliver a better, safer user experience.