On Thursday, OpenAI launched Advanced Account Security, an optional feature that gives ChatGPT users stronger protection against digital threats. The launch responds to the growing use of ChatGPT for sensitive and high-stakes tasks.
“As individuals increasingly rely on AI for personal inquiries and significant work, a ChatGPT account may accumulate sensitive data related to both personal and professional spheres,” OpenAI explained in its announcement. “For those such as journalists, politicians, activists, researchers, and security-focused users, the implications are particularly pronounced.”
The feature centralizes security and privacy controls in the web account settings for ChatGPT and Codex accounts that share a login. It requires passkeys or physical security keys in place of passwords and limits account recovery to those methods, ruling out email and SMS. As a result, OpenAI cannot help recover an account if those methods are unavailable.
“Physical security keys, like YubiKeys, offer robust defenses against phishing attempts,” the company noted. “To facilitate access to this level of protection, we’ve partnered with Yubico, a leader in hardware-based authentication and account defense, to provide our users discounted rates on a specialized bundle of top-tier security keys.”
The discounted bundles include two keys, one for everyday use and one as a backup. Users can also use other FIDO-compliant security keys or software-based passkeys instead.
The feature also shortens sign-in sessions to limit exposure if a device is compromised, notifies users of new logins, and lets them review active sessions across devices. In addition, it changes how user data is handled: conversations from accounts with Advanced Account Security enabled are excluded from model training.
OpenAI did not respond immediately to Decrypt’s request for comment.
This announcement coincides with ongoing phishing attacks that increasingly employ sophisticated scams. In March, an OpenClaw developer fell victim to a scam targeting cryptocurrency wallets via a fake GitHub account. The same month saw the hijacking of the Bonk.fun domain by scammers who deployed wallet-draining prompts. Earlier this month, a counterfeit Ledger app defrauded more than $9 million from over 50 users.
The rollout also affects users in OpenAI’s “Trusted Access for Cyber” program, who must enable Advanced Account Security starting June 1. Alternatively, organizations can verify that they use phishing-resistant authentication through single sign-on.
“Privacy and security are core to our product development philosophy, and we will persistently enhance protections to offer users greater control and stronger safeguards over time,” OpenAI stated. “We anticipate extending these efforts to other user groups, including enterprise environments where robust account security is equally crucial.”