A laptop equipped with Microsoft Copilot+ is displayed at a Best Buy store on June 18, 2024 in Miami, Florida. Best Buy has begun selling Microsoft’s new line of AI-centric Copilot+ PCs to customers.
Joe Raedle | Getty Images News | Getty Images
Many consumers are fascinated by generative AI and are using new tools for all sorts of personal or professional matters.
But many are unaware of the potential privacy implications, which can be significant.
From OpenAI’s ChatGPT to Google’s Gemini to Microsoft’s Copilot to the new Apple Intelligence, consumer AI tools are readily available and growing in number. However, these tools have different privacy policies regarding the use and retention of user data. In many cases, consumers are not aware of how their data is or could be used.
This is where being an informed consumer becomes incredibly important. What you can control, and at what level of granularity, varies from tool to tool, said Jodi Daniels, managing director and privacy consultant at Red Clover Advisors, which advises companies on privacy issues. “There’s no one-size-fits-all opt-out system for all tools,” Daniels said.
The proliferation of AI tools and their integration into much of what consumers do on their personal computers and smartphones makes these questions even more pertinent. A few months ago, for example, Microsoft released its first Surface PCs with a dedicated Copilot button on the keyboard for quick access to the chatbot, fulfilling a promise it made months earlier. Apple, meanwhile, last month unveiled its vision for AI, which centers on several smaller models running on Apple’s devices and chips. Apple executives have spoken publicly about the company’s focus on privacy, which can be a challenge with AI models.
Here are several ways consumers can protect their privacy in the age of generative AI.
Ask the privacy questions any AI tool should be able to answer
Before choosing a tool, consumers should carefully read the associated privacy policies. How is your information used, and how could it be used? Is there an option to opt out of data sharing? Is there a way to limit what data is used and how long it is retained? Can the data be deleted? Do users have to jump through hoops to find the opt-out settings?
If you can’t easily answer these questions, or find the answers in the provider’s privacy policies, that should raise red flags, privacy professionals say.
“A tool that cares about privacy will tell you that,” Daniels said.
And if it doesn’t, “you have to own it,” Daniels added. “You can’t just assume the company is going to do the right thing. Every company has different values, and every company makes money differently.”
She cited the example of Grammarly, an editing tool used by many consumers and businesses, as a company that clearly explains in multiple places on its website how data is used.
Keep sensitive data out of large language models
Some people feed sensitive data into generative AI models without a second thought, but Andrew Frost Moroz, founder of the privacy-focused Aloha Browser, recommends against it because people don’t really know how that data could be used or potentially misused.
This is true for any type of information people might input, whether personal or professional. Many companies have expressed serious concerns about employees using AI models on the job, since workers may not realize how that information is used by the model for training purposes. If you input a confidential document, the AI model now has access to it, which can raise all sorts of concerns. Many companies will only approve the use of custom versions of generative AI tools that maintain a firewall between proprietary information and large language models.
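To make that firewall idea concrete, here is a minimal sketch of the kind of redaction layer such setups can apply before text ever reaches a model. It is an illustration, not any vendor’s actual implementation: the send_to_llm function is a hypothetical stand-in for a real API call, and the patterns are simplistic placeholders rather than production-grade PII detection.

```python
import re

# Simplistic placeholder patterns for obvious identifiers; real PII
# detection is far more involved than these examples.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[A-Za-z]{2,})+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labeled placeholder before it leaves the machine."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever API call a given tool uses.
    print("Sending to model:", prompt)
    return "<model response>"

def summarize(document: str) -> str:
    # Only the redacted text is ever sent to the third-party model.
    return send_to_llm("Summarize this document:\n" + redact(document))

print(redact("Contact jane.doe@example.com or 555-123-4567."))
# -> Contact [EMAIL REDACTED] or [PHONE REDACTED].
```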
Individuals should also be cautious about using AI models with anything that isn’t public or that they wouldn’t want shared with others in any capacity, Frost Moroz said. It’s important to be aware of how you’re using AI. If you’re using it to summarize a Wikipedia article, that may not be a problem. But if you’re using it to summarize a personal legal document, for example, that’s not advisable. Or say you have an image of a document and want to copy a particular paragraph. You can ask the AI to read the text so you can copy it, but the AI model will then know the contents of the document, so consumers should keep that in mind, he said.
Use the opt-out options offered by OpenAI and Google
Each generative AI tool has its own privacy policies and may offer opt-out options. Gemini, for example, lets users set a retention period and delete certain data, among other activity controls.
Users can opt out of having their data used for ChatGPT’s model training. To do so, they must go to the profile icon in the bottom left of the page, select Data Controls under the Settings heading, and disable the “Improve the model for everyone” feature. Once this option is disabled, new conversations will not be used to train ChatGPT’s models, according to an FAQ on OpenAI’s website.
There is no real benefit to consumers in allowing generative AI to train on their data, and there are risks that are still being studied, said Jacob Hoffman-Andrews, a senior technologist at the Electronic Frontier Foundation, an international nonprofit digital rights group.
If personal data is inappropriately posted on the web, consumers may be able to have it removed so that it disappears from search engines. But untraining AI models is another matter entirely, he said. There may be ways to mitigate the use of certain information once it’s in a model, but doing so isn’t foolproof, and how to do it effectively is an area of active research, he said.
Opt in, as with Microsoft Copilot, only for the right reasons
Businesses are integrating generative AI into the everyday tools people use in their personal and professional lives. Copilot for Microsoft 365, for example, works with Word, Excel, and PowerPoint to help users with tasks like analysis, ideation, organization, and more.
For these tools, Microsoft says it does not share consumer data with any third party without permission and does not use customer data to train Copilot or its AI features without consent.
However, users who wish to opt in can do so by signing in to the Power Platform admin center, selecting Settings, then Tenant settings, and turning on data sharing for Dynamics 365 Copilot and Power Platform Copilot AI features, which enables data sharing and saving.
One benefit of this option is the ability to make existing features more effective. The downside, however, is that consumers lose control over how their data is used, which is an important consideration, privacy professionals say.
The good news is that consumers who have opted in can withdraw their consent at any time. They can do so by going to the Tenant settings page under Settings in the Power Platform admin center and turning off data sharing for Dynamics 365 Copilot and Power Platform Copilot AI features.
Set a short retention period for generative AI search
Consumers may not think much before searching for information with AI, using it much like a search engine to generate information and insights. But even searching for certain types of information with AI can be intrusive to a person’s privacy, so there are best practices for using these tools for that purpose, too.

If possible, set a short retention period for the AI tool, Hoffman-Andrews said. And delete conversations, where possible, once you’ve obtained the information you’re looking for. Companies still keep server logs, but this can reduce the risk of a third party gaining access to your account, he said. It can also reduce the risk of sensitive information becoming part of the model’s training. “It really depends on the privacy settings of the site in question,” he said.