One of the best things about ChatGPT is the huge library of third-party plugins that can make the AI chatbot do far more than OpenAI originally designed it for, from plugins that make running computations easier to those that pull information from your outside accounts. But a cybersecurity firm has issued a stark warning about trusting plugins, pointing to security flaws that could give bad actors access to your other accounts.
The new research, led by Salt Labs, warns that security flaws found directly within ChatGPT, as well as within the AI’s ecosystem, could give attackers the opportunity to install malicious plugins without your consent. This would effectively allow bad actors to hijack your account and gain access to third-party websites like GitHub.
The good news here, of course, is that OpenAI is already winding down the use of ChatGPT plugins, writing in a post that it will end the installation of new plugins on March 19, 2024. Any currently in-use plugins will no longer be available after April 9, 2024. While it might seem counterintuitive given how useful plugins can be, OpenAI has used the information gleaned from plugins to create GPTs, which let you custom-tailor the AI to specific use cases.
In this photo illustration, the ChatGPT (OpenAI) logo is displayed on a smartphone screen. Image source: Rafael Henrique/SOPA Images/LightRocket via Getty Images
It’s a good thing, too, because Salt Labs says one of the biggest flaws it discovered let attackers abuse the OAuth workflow and trick users into installing an arbitrary plugin. This was possible because ChatGPT doesn’t validate that the user actually initiated the plugin installation, giving bad actors an opening to intercept any data the victim shares.
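To see why that validation matters, here is a minimal sketch of the safeguard Salt Labs found missing: binding an OAuth-style approval flow to the user who actually started it via a one-time state token. The function names (`start_install`, `finish_install`) and the in-memory store are purely illustrative assumptions, not OpenAI’s real API.

```python
import secrets

# Hypothetical in-memory store: one-time state token -> user who began the flow.
_pending_states: dict[str, str] = {}

def start_install(user_id: str) -> str:
    """The user explicitly starts a plugin install; issue a one-time state token."""
    state = secrets.token_urlsafe(16)
    _pending_states[state] = user_id
    return state

def finish_install(user_id: str, state: str) -> bool:
    """Approval callback: accept the plugin only if this same user began the flow.

    Popping the token makes it single-use, so a forged or replayed link
    (the attack described above) is rejected.
    """
    initiator = _pending_states.pop(state, None)
    return initiator == user_id
```

In this model, an attacker who sends a victim a crafted approval link fails the check, because the victim never generated a matching state token. The vulnerability Salt Labs describes amounts to skipping exactly this kind of check.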
Beyond that exploit, Salt Labs also unearthed issues with PluginLab, stating that bad actors could weaponize them to mount zero-click account takeover attacks using ChatGPT plugins as a launching point. This would allow those threat actors to gain access to connected third-party websites, like GitHub.
AI language models like those powering ChatGPT can be exceptionally helpful if you use them correctly. However, the exploits found in ChatGPT’s plugin options showcase just how important it remains to stay vigilant about your online protection and to always be aware of what you are installing when you’re working with these systems.