AI is a powerful tool, but in the wrong hands it can become a dangerous weapon. Among the most egregious misuses: AI can be prompted to generate or respond to child sexual abuse material (CSAM), and bad actors can use coded language to slip past traditional detection.
Three leading AI developers brought in TaskUs to prevent model misuse without triggering false positives or compromising performance. Our AI and Trust & Safety experts built a rigorous adversarial testing program to probe the models' limits and expose blind spots, including: