AI-s on the Prize: Navigating AI Risk Through Trust + Safety and Risk + Response

What are the risks of AI? Experts convene to talk about ChatGPT and more, discussing the challenges ahead and the role of TaskUs in today’s landscape.

Published on April 4, 2023
Last Updated on October 31, 2024

Are computers becoming too intelligent for our own good?

AI-powered engines like ChatGPT and DALL-E have been making waves, showing the (unsettling) potential of technologies like natural language processing (NLP) and machine learning (ML). Today, such programs are used by virtually everyone for almost anything, from generating code to answering school assignments. Surprisingly (and frighteningly), their results are accurate enough that people have started questioning their job security. Moreover, the possibility of being replaced by robots is not the only AI risk these applications present; other issues, such as fraud and deepfakes, arise as well.

TaskUs’ Risk + Response Global Head Jeff Chugg, Risk + Response Director Joe McNamara, Trust + Safety Division Vice President Phil Tomlinson, and Trust + Safety Senior Director Siva Raghava recently convened in a fireside chat to talk about the risks of AI and their take on the future of safety and security amid its rapid evolution.

Are Computers Going to Take Over Our Jobs?

One of the biggest AI risks is the chance of businesses replacing human work with AI-generated output. Joe kicked off the conversation with a couple of big questions: “Is my job going away? How is this technology going to essentially become self-aware and turn into Skynet?”

The World Economic Forum predicts that AI will displace some 85 million jobs by 2025. A scary thought, especially when you consider how capable a technology like ChatGPT already is. Phil, however, had an interesting take on the issue, and on the risk of AI as a whole:

“I'm kind of an AI skeptic. And what I mean by that is the technology is incredible. And it's doing amazing things. Is it going to replace the things that we as humans grasp on like the true soul food of life, like music and art? And great writing? Is it going to be able to emulate Hemingway, Vermeer, or Bob Dylan? Absolutely not. Because all of those things come from a place of human suffering, and pain, and a journey.”

While sometimes eerily effective and accurate, AI will always have its limitations. “[AI] can do a lot of things that are cool. It can help you write press releases, debug your code, or maybe cheat on your exams,” Phil continues. “And this is where we're kind of leaning into—this technology is just a tool.” 

The same World Economic Forum report estimates that around 97 million new human jobs will be created due to AI, with all industries benefiting from this boom. Today, we’re already seeing the advantages of utilizing the technology: 

  • Nearly 30% of US professionals say they’ve used AI tools for work. 
  • A study by MIT graduate students found that people completed office tasks in 17 minutes with ChatGPT versus 27 minutes without it. 

Instead of fearing them, we ought to view the rapid progress of AI-powered applications with a more optimistic perspective.

Diving into the Dark Depths of AI

As the (now) old adage goes, “With great power comes great responsibility.” Such tools also bring a host of risks and attract bad actors. Circling back to AI risk, Phil explained the role of an organization like TaskUs in mitigating these issues:

“There are two ways that we're thinking about it at TaskUs. One of them is how do we train the models so that they are better and less likely to do something weird, like recommend you to jump off a bridge or tell you how to make a bomb… there's a whole element of how do we train the data that feeds these models? How do we do some adversarial testing of the models so that they don't do bad things… and we find out how to make them better? This is all work that TaskUs is actually doing today.”

Talking about the risks of AI, Jeff Chugg brought up the controversy surrounding AI art and deepfakes. The conversation shifted to how to tell the difference between AI-generated and human-made content. Phil emphasized the need for human and tech collaboration as a solution: 

“Like any problem of bad content or bad actors on the internet, it'll be an intersection of technology and humans. You will need the technology to do the scale, to do the sort of lower-hanging stuff, but you'll need humans to correct where the technology goes wrong.”

We already know that AI can produce super photo-realistic images and video overlays. But where do we set the boundaries? What should the technology be able to do, and how automated should we allow it to be? What are other AI risks that we have to account for? 

Phil brought back the point of AI being just a tool at the end of the day. He explained, “It's what you do with it… take a hammer, and you can build a house, or you can take a hammer, and you can do something terrible with it. I think it's less about putting boundaries in place for the hammer and more about making sure society understands what are the benefits and what are the risks of the hammer, and how do I encourage, motivate, and incentivize broader society to use the hammer in the right way?”

AI Risks and Challenges: What’s Next?

What are the biggest challenges in the Trust and Safety space? When Joe asked ChatGPT this question, it listed five things: content moderation, scalability, false positive and negative rates, privacy, and adapting to changing threats. While the answer wasn’t wrong, Siva emphasized the need to set rules: 

“At the intersection of technology and humans, you need policies. You need lexicons; you need guidelines.” 

As an expert in online security, Siva stressed the importance of going back to basics: setting frameworks that establish technological boundaries and make users feel safe despite the risks of AI. “The biggest challenges are, how fast can we develop these guidelines, the boundaries and the policies in this space, and how fast and accurately we implement and enforce them, be it in the AI space or the human space.”

That being said, AI risk is clearly going to be an ongoing and growing challenge for every industry. For a tech-enabled company like TaskUs, going back to the human element is key. To cap off the conversation, Siva summed it up nicely: “Collaboration, identifying the right need of hands… And two, being agile, fast, and deploying solutions at scale in a most cost-effective manner will be very, very critical for us to be differentiated in this space.”

Face the Future of AI with Us

A future with hyper-intelligent chatbots and automated machines is as exciting as it is terrifying. One thing is certain: the risks of AI posed by bad actors aren’t going away. You’ll need a trusted and experienced partner to help you navigate the uncertainties and challenges of emerging technologies.

And that’s why we’re here.

Recognized as Everest Group’s World’s Fastest Business Process (Outsourcing) Service Provider in 2022 and highly rated in the Gartner Peer Review, TaskUs is known for providing Ridiculously Good AI Solutions, Trust & Safety services, and Risk Management solutions to companies all over the world.

We understand the need to stay a step ahead and constantly innovate in technology, techniques, and training methodologies. Through the combination of highly capable humans and purpose-built technology, we deliver the strongest mix of tools, training, and processes to deter, combat, and address AI risk.


Siva Raghava
Senior Director, Trust + Safety
As a Senior Director, he and his team focus primarily on scaling Trust & Safety practices globally. He developed his expertise helping organizations with Product Operations, Content Moderation and Management, Project Management, Global Solutioning and Vendor Management, Digital Marketing Operations, and Content Policy Creation and Enforcement. A "truly diversified" Trust and Safety professional, Siva has driven platform safety for online communities at large for over 17 years, working with some of the premier brands in this space and building deep domain expertise globally.