How to Use Generative AI Tools While Still Protecting Your Privacy

Here's how to take some control of your data while using artificial intelligence tools and apps.
Photograph: akinbostanci/Getty Images

The explosion of consumer-facing tools that offer generative AI has created plenty of debate: These tools promise to transform the ways in which we live and work while also raising fundamental questions about how we can adapt to a world in which they're extensively used for just about anything.

As with any new technology riding a wave of initial popularity and interest, it pays to be careful in the way you use these AI generators and bots—in particular, in how much privacy and security you're giving up in return for being able to use them.

It's worth putting some guardrails in place right at the start of your journey with these tools, or indeed deciding not to deal with them at all, based on how your data is collected and processed. Here's what you need to look out for and the ways in which you can get some control back.

Always Check the Privacy Policy Before Use

Make sure AI tools are honest about how data is used.

OpenAI via David Nield

Checking the terms and conditions of apps before using them is a chore but worth the effort—you want to know what you're agreeing to. As is the norm everywhere from social media to travel planning, using an app often means giving the company behind it the rights to everything you put in, and sometimes everything they can learn about you and then some.

OpenAI's privacy policy, for example, is published on the company's website, alongside a separate explainer on data collection. By default, anything you talk to ChatGPT about could be used to help its underlying large language model (LLM) “learn about language and how to understand and respond to it,” although personal information is not used “to build profiles about people, to contact them, to advertise to them, to try to sell them anything, or to sell the information itself.”

Personal information may also be used to improve OpenAI's services and to develop new ones. In short, OpenAI has access to everything you do in DALL-E or ChatGPT, and you're trusting the company not to do anything shady with it (and to effectively protect its servers against hacking attempts).
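If you're comfortable with a little code, it's worth knowing that OpenAI's developer API works under different defaults: at the time of writing, OpenAI says data submitted through the API is not used to train its models unless you opt in, unlike regular ChatGPT conversations. Here's a minimal sketch using the openai Python package as it worked at the time of writing; the model name and prompt are placeholders, not recommendations:

```python
import os

import openai

# Read the key from an environment variable so it never sits in a
# shared notebook or chat transcript.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Per OpenAI's stated policy at the time of writing, prompts sent via
# the API are not used for model training unless you opt in.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder; pick whichever model you use
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet."}],
)

print(response["choices"][0]["message"]["content"])
```

The trade-off is convenience: you lose the polished chat interface, and the policy could change, so this still isn't an excuse for pasting in secrets.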

It's a similar story with Google's privacy policy, which is available on the company's site, along with some extra notes for Google Bard: The information you input into the chatbot will be collected “to provide, improve, and develop Google products and services and machine learning technologies.” As with any data Google collects about you, Bard data may be used to personalize the ads you see.

Watch What You Share

It's maybe not a great idea to upload your own face for the AI treatment.

OpenAI via David Nield

Essentially, anything you input into or produce with an AI tool is likely to be used to further refine the AI and then to be used as the developer sees fit. With that in mind, and given that a data breach can never be fully ruled out, it pays to be circumspect about what you enter into these engines.

When it comes to the tools that produce AI-enhanced versions of your face, for example, which seem to keep multiplying, we wouldn't recommend using them unless you're happy with the possibility of AI-generated faces like your own showing up in other people's creations.

As far as text goes, steer completely clear of any personal, private, or sensitive information: We've already seen portions of chat histories leaked due to a bug. As tempting as it might be to get ChatGPT to summarize your company's quarterly financial results or write a letter with your address and bank details in it, this is information that's best left out of these generative AI engines, not least because, as Microsoft admits, some AI prompts are manually reviewed by staff to check for inappropriate behavior.

To be fair, this is something the AI developers themselves caution against. “Don’t include confidential or sensitive information in your Bard conversations,” warns Google, while OpenAI encourages users “not to share any sensitive content” that could find its way out to the wider web through the shared-links feature. If you don't want it to ever appear in public or be used in an AI output, keep it to yourself.
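If you regularly paste longer documents into these chatbots, a rough pre-filter can catch the most obvious identifiers before they ever leave your machine. This is a minimal Python sketch of our own, not a feature of any of these tools, and the patterns are illustrative rather than exhaustive:

```python
import re

# Illustrative patterns only; real-world scrubbing needs far more
# coverage (names, addresses, account numbers, and so on).
# Card numbers come before phone numbers so long digit runs get the
# more specific label.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def scrub(text: str) -> str:
    """Replace anything matching the patterns above with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text


prompt = "Email jane.doe@example.com or call 555-867-5309 about the invoice."
print(scrub(prompt))
# Email [EMAIL REDACTED] or call [PHONE REDACTED] about the invoice.
```

Redacting before you paste doesn't make a prompt safe; it just narrows what you're trusting the provider with.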

Change the Settings

Google Bard data can be auto-deleted, if required.

Google via David Nield

You've decided you're OK with the privacy policy, you're making sure you're not oversharing—the final step is to explore the privacy and security controls you get inside your AI tools of choice. The good news is that most companies make these controls relatively visible and easy to operate.

Google Bard follows the lead of other Google products like Gmail or Google Maps: You can choose to have the data you give it automatically erased after a set period of time, delete the data manually yourself, or let Google keep it indefinitely. To find the controls for Bard, head to the Bard Activity page in your Google Account and make your choice.

Like Google, Microsoft rolls its AI data management options in with the security and privacy settings for the rest of its products. Head to the privacy dashboard for your Microsoft account to find the privacy options for everything you do with Microsoft products, then click Search history to review (and if necessary delete) anything you've chatted with Bing AI about.

When it comes to ChatGPT on the web, click your email address (bottom left), then choose Settings and Data controls. You can stop ChatGPT from using your conversations to train its models here, but you'll lose access to the chat history feature at the same time. Conversations can also be wiped from the record individually, by clicking the trash can icon next to them on the main screen, or all at once, by clicking your email address, then Clear conversations and Confirm clear conversations.