
OpenAI Wants Your Help Devising Creepy Ways to Use Generative AI

By figuring out how bad actors might use DALL-E, ChatGPT, and other generative AI programs for evil, OpenAI can hopefully prepare for the worst.
By Adrianna Nine

If the proliferation of generative AI has kept you up at night with its numerous potential consequences, you can put that anxiety to good use. To stay one step ahead of online creeps, OpenAI is asking the public to help it identify ways bad actors might misuse generative AI.

The company behind DALL-E, ChatGPT, Whisper, and other AI models has formed a new “preparedness team” aimed at mitigating both real and hypothetical threats. “We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” the company wrote in a blog post Thursday. “But they also pose increasingly severe risks.” According to OpenAI, these risks span categories from basic cybersecurity to chemical, biological, radiological, and nuclear threats. 

To temper these hazards, OpenAI hopes to get into nefarious users’ heads. This is where you come in. As part of the new team’s “AI Preparedness Challenge,” OpenAI is asking members of the public to share ways in which bad actors might commit “catastrophic misuse” of its programs. For their efforts, up to ten people will share in $25,000 of API credits, see their work published (though it’s unclear where), and even be considered for employment on the actual preparedness team.


Submitting an idea to the team involves a single online form. After offering your name, email, and a link to your LinkedIn profile (or resume), you’ll need to get into the headspace of a malicious web user who’s been given unrestricted access to all of OpenAI’s assets. This, according to OpenAI, will help the company prepare for the potential theft of its frontier AI model weights. You’ll have to share what type of misuse the bad actor would commit, describe that persona’s evil step-by-step process, and offer a few ways in which damage might be prevented. OpenAI has even gone so far as to ask that part of the case study is 

That’s a good chunk of labor without any guaranteed compensation. OpenAI doesn’t say how it might use the suggestions of entrants who aren’t named among the challenge’s winners, either, meaning your homework assignment might not receive the recognition it deserves. Nonetheless, if you’re confident in your ability to help one of the world’s biggest generative AI makers dodge unspeakable catastrophe, it could be worth a try.
