AI at Work

What not to share with ChatGPT if you use it for work

TL;DR: A lot of things.
By Cecily Mauran
How to not overshare with ChatGPT. Credit: Mashable composite; Shutterstock

We examine how AI is changing the future of work — and how, in many ways, that future is already here.


The question is no longer "What can ChatGPT do?" It's "What should I share with it?"

Internet users are generally aware of the risks of data breaches and the ways our personal information is used online. But ChatGPT's seductive capabilities seem to have created a blind spot around hazards we normally take precautions to avoid. OpenAI only recently announced a new privacy feature that lets ChatGPT users disable chat history, preventing conversations from being used to improve and refine the model.

"It's a step in the right direction," said Nader Henein, a privacy research VP at Gartner who has two decades of experience in corporate cybersecurity and data protection. "But the fundamental issue with privacy and AI is that you can't do much in terms of retroactive governance after the model is built."

Henein says to think of ChatGPT as an affable stranger sitting behind you on the bus recording you with a camera phone. "They have a very kind voice, they seem like nice people. Would you then go and have the same conversation with that? Because that's what it is." He continued, "It's well-intentioned, but if it hurts you — it's like a sociopath, they won't think about it twice."

Even OpenAI's CEO Sam Altman has acknowledged the risks of relying on ChatGPT. "It's a mistake to be relying on it for anything important right now. We have lots of work to do on robustness and truthfulness," he tweeted in December 2022.

Essentially, treat ChatGPT prompts as you would anything else you publish online. "The best assumption is that anyone in the world can read anything you put on the internet — emails, social media, blogs, LLMs — do not ever post anything you do not want someone else to read," said Gary Smith, Fletcher Jones Professor of Economics at Pomona College and author of Distrust: Big Data, Data-Torturing, and the Assault on Science. ChatGPT can be used as an alternative to Google Search or Wikipedia, as long as it's fact-checked, he said. But it shouldn't be relied on for much else.

The bottom line is that there are still risks, made even more precarious because of ChatGPT's allure. Whether you're using ChatGPT in your personal life or to boost work productivity, consider this your friendly reminder to think twice about what you share with ChatGPT.

Understand the risks of using ChatGPT

First, let's look at what OpenAI tells users about how it uses their data. Not everyone's privacy priorities are the same, but it's important to know the fine print for the next time you open up ChatGPT.

1. Hackers might infiltrate the super popular app

First and foremost, there's the possibility of someone outside of OpenAI hacking in and stealing your data. There's always an inherent risk of data exposure from bugs and hackers when using a third-party service, and ChatGPT is no exception. In March 2023, a ChatGPT bug was discovered to have exposed titles, the first message of new conversations, and payment information from ChatGPT Plus users.

"All this information you're pushing into it is highly problematic, because there's a good chance it might be susceptible to machine learning attacks. That's number one," said Henein. "Number two, it's probably sitting in clear text somewhere in the log. Whether or not somebody is going to look at it, I don't know, neither do you. That's the problem."

2. Your conversations are stored somewhere on a server

While it's unlikely anyone will read your chats, certain OpenAI employees do have access to user content. On the ChatGPT FAQs page, OpenAI says user content is stored on its systems and other "trusted service providers' systems in the US." So while OpenAI removes identifiable personal information, before it's "de-identified," your content exists in raw form on its servers. Some authorized OpenAI personnel have access to user content for four explicit reasons, one of which is to "fine-tune" their models, unless users opt out.


3. Your conversations are used to train the model (unless you opt out)

We'll get to opting out later, but unless you do, your conversations are used to train ChatGPT. According to its data usage policy, which is scattered across several different articles on its site, OpenAI says, "we may use the data you provide us to improve our models." On another page, OpenAI says it may "aggregate or de-identify Personal Information and use the aggregated information to analyze the effectiveness of our Services." This means that, theoretically, the public could become aware of something like a business secret through whatever the model "learns."

Previously, users could only opt out of sharing their data with the model through a Google Form linked on the FAQs page. Now, OpenAI has introduced a more explicit way of disabling data sharing: a toggle setting within your ChatGPT account. But even with this new "incognito mode," conversations are stored on OpenAI's servers for 30 days. And the company has relatively little to say about how it keeps your data secure.

4. Your data won't be sold to third parties, the company says

OpenAI says it does not share user data with third parties for marketing or advertising purposes, so that's one less thing to worry about. But it does share user data with vendors and service providers for maintenance and operation of the site.

What might happen if you use ChatGPT at work?

ChatGPT and generative AI tools have been touted as the ultimate productivity hack. ChatGPT can draft articles, emails, social media posts, and summaries of long chunks of text. "There isn't an example that you can possibly think of that hasn't been done," said Henein.

But when Samsung employees used ChatGPT to check their code, they inadvertently revealed trade secrets. The electronics company has since banned the use of ChatGPT and threatened employees with disciplinary action if they fail to adhere to the new restrictions. Financial institutions like JPMorgan, Bank of America, and Citigroup have also banned or restricted the use of ChatGPT due to strict financial regulations about third-party messaging. Apple has also banned employees from using the chatbot.

The temptation to cut mundane work down to seconds seems to overshadow the fact that users are essentially publishing this information online. "You're thinking of it in the same way that you think of a calculator; you're thinking of it like Excel," Henein said. "You're not thinking that this information is going into the cloud and that it's going to be there in perpetuity, either in a log somewhere or in the model itself."

So if you want to use ChatGPT at work to break down concepts you don't understand, write copy, or analyze publicly available data, and there's no rule against it, proceed with caution. But be very careful before you, for example, ask it to evaluate the code for the top-secret missile guidance system you're working on, or have it summarize your boss's meeting with a corporate spy embedded at a competing company. That could cost you your job, or worse.

What might happen if you use ChatGPT as a therapist?

A survey conducted by healthtech company Tebra revealed that one in four Americans is more likely to talk to an AI chatbot than to attend therapy. Instances have already popped up of people using ChatGPT as a form of therapy, or seeking help for substance abuse. These examples were shared as exciting use cases for how ChatGPT can be a helpful, non-judgmental, and anonymous conversation partner. But your deepest, darkest admissions are stored somewhere in a server.

People tend to think their ChatGPT sessions are like a "walled garden," said Henein. "At the end, when I log out, everything inside of that [session] flushes down the toilet, and that's the end of the conversation. But that's not the case."

If you're a person on the internet, your personal data is already all over the place. But little of it was volunteered in a conversational medium like ChatGPT's, where you might feel compelled to divulge intimate and personal thoughts. "LLMs are an illusion—a powerful illusion, but still an illusion reminiscent of the Eliza computer program that Joseph Weizenbaum created in the 1960s," said Smith.

Smith is referring to the "Eliza effect," or the human tendency to anthropomorphize things that are inanimate. "Even though users knew they were interacting with a computer program, many were convinced that the program had human-like intelligence and emotions and were happy to share their deepest feelings and most closely held secrets."

So, given how OpenAI stores your conversations, try not to give yourself over to the illusion that ChatGPT is a mental health wizard. Don't blurt out your innermost thoughts unless you're prepared for the world to read them.

How to protect your data on ChatGPT

There's a way to go incognito when using ChatGPT. That means your conversations are still stored for 30 days, but they won't be used to train the model. By navigating to your account name, you can open up settings, then click on "Data Controls." From here you can toggle off "Chat History & Training." You can also clear past conversations by clicking on "General" and then "Clear all chats."

Navigate to the settings page to disable your chat history. Credit: OpenAI
Cecily Mauran

Cecily is a tech reporter at Mashable who covers AI, Apple, and emerging tech trends. Before getting her master's degree at Columbia Journalism School, she spent several years working with startups and social impact businesses for Unreasonable Group and B Lab. Before that, she co-founded a startup consulting business for emerging entrepreneurial hubs in South America, Europe, and Asia. You can find her on Twitter at @cecily_mauran.

