[[{“value”:”
AI is encroaching into every app, and in some cases that could be a problem.
Take Slack, for example. The workplace instant messenger app has an optional suite of AI features you can pay extra for, but according to the security firm PromptArmor, it’s full of potential holes. The feature exists to help create quick summaries of conversations, but per PromptArmor, it does so with access to private DMs, and can be tricked into phishing other users.
The technical nitty-gritty details are all in the PromptArmor blog post, but the problem here is essentially twofold. For starters, Slack recently updated its AI system to be able to scrape data from private user DMs and file uploads on purpose. Beyond that, using a technique called “prompt injection,” PromptArmor proved you can use Slack AI to create malicious links that could potentially phish members of said Slack channel.
Mashable has reached out to Slack for comment on this. Per PromptArmor’s blog, the issue was raised to Slack ahead of the publication of its blog post. A spokesperson for Slack’s parent company SalesForce told The Register that the problem has been addressed, but did not go into specifics.
“When we became aware of the report, we launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for sensitive data,” the SalesForce spokesperson said. “We’ve deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data.”
If nothing else, it’s probably worth looking up the stated AI policies for every app that you use regularly.
“}]] Mashable Read More
Slack’s AI summary features have access to user DMs and files, and can be used to do malicious things, per a new report.