Dr. Alex Taylor (third from left) and Dr. Jina Suh (second from left) joined by Christopher Fernandes (Division Vice President for Learning & Development), JC de Villa (Quantitative Research Scientist, Wellness & Resiliency Division of Research) and Ana Terrese Junio (Sr. Manager for AI Services) during the site visit.

The rapid growth of AI and the complex user risks that accompany it make systematic research into AI safety operations critically important. Specifically, we need to identify and clearly define the processes by which human agents create and enhance AI safety. Current terminology, such as “datawork” or “annotation,” is too narrow and fails to capture the full scope of this work.

At TaskUs, we’re committed to operational excellence and to supporting new, rapidly expanding technology. As such, we collaborated on a project with the University of Edinburgh and Microsoft Research to give researchers direct access to real-world settings and to the people who perform or support AI safety work. The study focused on understanding how data enrichment and safety practices are operationalized to make AI responsible.

Research through immersive fieldwork

This joint research undertaking was a natural expansion of our previous work, specifically our white paper on AI safety frontliners. This year’s study centered on site visits to TaskUs offices for an immersive look at the organizational setup and culture in which AI safety work operates at scale.

The researchers, Dr. Alex Taylor (University of Edinburgh) and Dr. Jina Suh (Microsoft), conducted focus group discussions, individual interviews, observations and participatory engagements with the operations, learning experience and wellness teams.

Discovery focused on generating rich, first-person accounts of lived experience, covering not just work processes and tools but also the values, ethics, sociocultural dynamics and support provisions that influence how AI systems are set up and secured. Priority was placed on centering the human role, both direct and indirect, in shaping the AI safety loop.

TaskUs site VPs guided the research team through the sections of our facilities dedicated to creating an engaging and thriving workplace: job training spaces, an on-site nursery, a gym and recuperative areas. The researchers engaged with employees across all organizational levels to observe and document the flow of strategies, feedback and practices essential for operationalizing AI safety tasks.

The field observations also included live demonstrations of the TaskUs Resiliency Studio, a proactive resilience skill-building program shown to help protect frontline staff against the risks of exposure to egregious content and demanding workflows.

Drs. Alex Taylor and Jina Suh participating in a wellness session alongside TaskUs employees.

A preview of our findings

While detailed reports and publications are in the works, early findings from the study shed light on three key aspects:

  • Humans and their subjectivity drive the AI frontline: AI responsibility involves more than structured model training and fine-tuning based on broad principles. Humans in the loop, operating at the frontier of AI deployment, continually contribute tacit subject matter expertise and contextual ethics, bringing nuance and sensitivity to evolving AI tools and services. 
  • Systemic thinking is central to emerging tech work: Given the high work volume and evolving operating protocols driven by AI’s proliferation, systemic thinking is necessary to build a stable, wellness-focused work environment. 
  • A sense of agency and influence promotes psychological safety: AI operators find purpose in their work when they understand its impact on user safety, its contribution to responsible technology, and how it helps fine-tune AI safety policies.

A new approach to AI safety

The findings call for rethinking the linear approach to AI safety: strengthening frontline capacity building and wellness support to foster skill development, critical thinking and tangible recognition of impact.

The present study lays essential groundwork to shape future operational strategies and training standards for AI safety operations.