One thing is certain: whatever you type into ChatGPT, your privacy matters.

Since its debut last year, ChatGPT has grappled with vulnerabilities that could allow unauthorized access to sensitive data. The most recent incident underscores that issues can linger even after a patch.

According to a report by Bleeping Computer, OpenAI has taken steps to resolve a flaw that led to ChatGPT inadvertently sharing user data, including conversations, user IDs, and session details, with unauthorized parties.

Although a fix has been implemented, security researcher Johann Rehberger, who uncovered the vulnerability, points out that significant security gaps remain in OpenAI's solution.

Uncovering the ChatGPT Data Leak

Rehberger used OpenAI's custom GPTs feature to create a GPT that extracted data from ChatGPT. This discovery raises concerns, as custom GPTs are being promoted as AI applications, much as the App Store transformed mobile applications for the iPhone. If Rehberger could build such a GPT, it's foreseeable that malicious actors could exploit the flaw and create custom GPTs designed to steal data from their targets.

Rehberger initially reported the “data exfiltration technique” to OpenAI in April and provided detailed information on the process in November. Recently, he noted on his website that OpenAI has patched the leak vulnerability, though not completely.

Rehberger explained, “The fix is not perfect, but a step in the right direction.” Unfortunately, the flaw still exists, allowing ChatGPT to be manipulated into sharing data.

“Some quick tests show that bits of info can still leak,” Rehberger wrote, clarifying that “it only leaks small amounts this way, is slow and more noticeable to a user.” Despite the remaining concerns, Rehberger believes it’s a “step in the right direction for sure.”

However, the security flaw still persists in the ChatGPT apps for iOS and Android, which have yet to receive the fix.

Until then, users are advised to exercise caution when using custom GPTs and to avoid AI apps from unfamiliar third parties.

Topics: Artificial Intelligence, Cybersecurity, ChatGPT, OpenAI
