A hacker claimed to have stolen private details from millions of OpenAI accounts, but researchers are doubtful and the company is investigating.
OpenAI says it is investigating after a hacker claimed to have stolen login credentials for 20 million of the AI firm's user accounts and put them up for sale on a dark web forum.
The pseudonymous attacker posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they claimed was sample data containing email addresses and passwords. As reported by GBHackers, the full dataset was being sold "for just a few dollars."
"I have over 20 million access codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out. This is a goldmine, and Jesus agrees."
If genuine, this would be the third major security incident for the AI company since the public release of ChatGPT. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, a simpler bug involving jailbreaking prompts allowed hackers to obtain the private data of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack occurred. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence [suggests] this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."
No evidence this alleged OpenAI breach is legitimate.
Contacted every email address from the supposed sample of login credentials.
At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP
- Mikael Thalen (@MikaelThalen) February 6, 2025
OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the spokesperson said, adding: "We have not seen any evidence that this is linked to a compromise of OpenAI systems to date."
The scope of the alleged breach sparked concern because of OpenAI's massive user base. Millions of users worldwide rely on the company's tools, including ChatGPT, for business operations, educational purposes, and content generation. A genuine breach could expose private conversations, commercial projects, and other sensitive information.
Until there's a final report, some precautionary measures are always advisable:
- Go to the "Settings" tab, log out from all connected devices, and enable two-factor authentication (2FA). This makes it practically impossible for a hacker to access the account, even if the login and password are compromised.
- If your bank supports it, create a virtual card number to handle subscription payments. This way, it is easier to detect and prevent fraud.
- Always keep an eye on the conversations stored in the chatbot's memory, and be alert to any phishing attempts. OpenAI does not ask for personal details, and any payment update is always handled through the official OpenAI.com site.