Salt Labs researchers identified generative AI ecosystems as a new and interesting attack vector. Vulnerabilities found during this research into the ChatGPT ecosystem could have granted attackers access to users' connected accounts, including GitHub repositories, in some cases via 0-click attacks.
I still love how stupid 'hacking' these things is. Like the poem shit. That's the future. Tell a bot to say something a bunch of times and it spits out someone's address.
Not related to the article at all, mate.
This article is about how many plugins have been discovered to have implemented OAuth in a very insecure way, and how simply using them can expose the sensitive info you have linked to your ChatGPT account.
For example:
1. You connect your GitHub account to your ChatGPT account (so you can ask ChatGPT questions about your private codebase).
2. You install and use one of the many other weakly implemented, compromisable plugins.
3. An attacker uses the weak plugin to compromise your whole account and can now access anything you attached to it, i.e. they can now access the private Git repos you hooked up in step 1 (see the sketch below).
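To make the impact of step 3 concrete, here's a rough sketch (not from the Salt Labs write-up; the token value is obviously made up) of what an attacker can do once they've walked off with an OAuth token tied to your GitHub connection:

```python
# Sketch only: what a stolen GitHub OAuth token lets an attacker do.
# The token value is fake; the API endpoint and headers are the standard
# GitHub REST API ones.
import requests

STOLEN_TOKEN = "gho_attacker_obtained_this_via_the_weak_plugin"  # hypothetical

resp = requests.get(
    "https://api.github.com/user/repos",
    headers={
        "Authorization": f"Bearer {STOLEN_TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    params={"visibility": "private"},  # list only the victim's private repos
    timeout=10,
)
resp.raise_for_status()

for repo in resp.json():
    print(repo["full_name"])
```

No exploit left to run at that point; it's just a normal, authenticated API call with someone else's credentials.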
Most of the attack vectors involve a basic (but hard to notice) phishing attack on weak OAuth URLs.
The tricky part is that the URLs genuinely are, and look, legit. It isn't a fake URL; it actually links to the legit page, but the attacker adds some query parameters (the part after the ? in the URL) that change how it behaves.
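Roughly like this (a minimal sketch; the domain, client_id, and callback path are all hypothetical, not the actual ones from the research): the host and path are the plugin's real authorization page, and only the query string differs.

```python
# Hypothetical illustration of the query-parameter trick described above.
# Both links point at the *same* legitimate authorization endpoint.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://auth.example-plugin.com/oauth/authorize"  # legit page

# What the plugin developer intended: the auth code goes back to ChatGPT.
legit = AUTH_ENDPOINT + "?" + urlencode({
    "client_id": "plugin-123",
    "response_type": "code",
    "redirect_uri": "https://chat.openai.com/oauth/callback",
})

# What a phishing link might look like: same legit host and path,
# but redirect_uri now points at a server the attacker controls,
# so the authorization code is delivered to them instead.
phishy = AUTH_ENDPOINT + "?" + urlencode({
    "client_id": "plugin-123",
    "response_type": "code",
    "redirect_uri": "https://attacker.example/steal",
})

print(legit)
print(phishy)
```

If the plugin's OAuth implementation doesn't strictly validate redirect_uri against an allowlist, the second link works just as well as the first, and the victim never sees a suspicious domain in their address bar.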
Yeah, it’s a legit exploit.
But it could also be mitigated by not giving your sensitive data to ChatGPT.
I think that I understand what you mean, but what you said was kinda obvious, and also not particularly useful to the overall conversation.
I think you mean “don’t give OpenAI access to your personal data”, which I personally agree with.
I read what you wrote as being analogous to a patient coming in to a doctor and saying “It hurts when I do this” only to have the doctor say “well, don’t do that then”.
Considering the company's garbage track record when it comes to treating our data with any real respect, I would also strongly recommend against giving them more information about yourself or your work. I also think the people who have decided, against that advice, to use these plugins should be made aware of the issue with them, and that OpenAI should fix it promptly.