They’re going AI-ncognito.
Don’t like the idea of IT people peeping on your private ChatGPT messages?
Never fear: ChatGPT creator OpenAI has unveiled a new stealth mode that shields users' conversations from prying employees' eyes, as well as from potential leaks.
“We’ll be moving more and more in this direction of prioritizing user privacy,” Mira Murati, the Silicon Valley firm’s chief technology officer, told Reuters of the privacy-promoting measure.
Similar to a web browser’s “incognito mode,” the new feature, released Tuesday, allows AI diehards to toggle off “Chat History & Training” in settings.
That means, in theory, that ChatGPT will no longer store a compendium of the user's previous chats, nor will those conversations be reviewed by "AI trainers to improve our systems," per the policy posted on the site.
Murati said that the measure was part of a months-long privacy campaign geared toward putting users “in the driver’s seat” regarding data collection.
“It’s completely eyes off and the models are super aligned: they do the things that you want to do,” the tech expert insisted.
This measure comes amid an uptick in privacy concerns surrounding ChatGPT.
Last month, Italian regulators banned the Microsoft-backed bot following a reported data breach, setting off alarm bells over user privacy and children's safety.
The agency said that OpenAI had “no legal basis” for harvesting user data that was being gathered “to train the algorithms that power the platform.”
Around the same time, OpenAI acknowledged that a glitch had caused ChatGPT to accidentally expose random users' conversation histories, PC Magazine reported.
GPTers caught wise to the snafu after noticing unfamiliar conversations from apparent strangers in their chat histories.
"We feel awful about this," OpenAI CEO Sam Altman tweeted about the leak, which thankfully didn't disclose users' identities.