Adversarial Attacks On ๐๐ ๐๐ด๐ฒ๐ป๐ ๐๐ผ๐บ๐ฝ๐๐๐ฒ๐ฟ ๐๐ป๐๐ฒ๐ฟ๐ณ๐ฎ๐ฐ๐ฒ๐ (๐๐๐) ๐๐ถ๐ฎ ๐ฃ๐ผ๐ฝ-๐จ๐ฝ๐
๐๐จ๐ฆ๐ฏ๐ต๐ช๐ค ๐๐ฑ๐ฑ๐ญ๐ช๐ค๐ข๐ต๐ช๐ฐ๐ฏ๐ด ๐ช๐ฏ๐ต๐ฆ๐ณ๐ข๐ค๐ต๐ช๐ฏ๐จ ๐ธ๐ช๐ต๐ฉ ๐ฐ๐ฑ๐ฆ๐ณ๐ข๐ต๐ช๐ฏ๐จ ๐ด๐บ๐ด๐ต๐ฆ๐ฎ๐ด ๐ท๐ช๐ข ๐ต๐ฉ๐ฆ ๐๐๐ ๐ช๐ด ๐ด๐ถ๐ค๐ฉ ๐ข ๐ฏ๐ฆ๐ธ ๐ค๐ฐ๐ฏ๐ค๐ฆ๐ฑ๐ต...
๐๐จ๐ฆ๐ฏ๐ต๐ช๐ค ๐๐ฑ๐ฑ๐ญ๐ช๐ค๐ข๐ต๐ช๐ฐ๐ฏ๐ด ๐ช๐ฏ๐ต๐ฆ๐ณ๐ข๐ค๐ต๐ช๐ฏ๐จ ๐ธ๐ช๐ต๐ฉ ๐ฐ๐ฑ๐ฆ๐ณ๐ข๐ต๐ช๐ฏ๐จ ๐ด๐บ๐ด๐ต๐ฆ๐ฎ๐ด ๐ท๐ช๐ข ๐ต๐ฉ๐ฆ ๐๐๐ ๐ช๐ด ๐ด๐ถ๐ค๐ฉ ๐ข ๐ฏ๐ฆ๐ธ ๐ค๐ฐ๐ฏ๐ค๐ฆ๐ฑ๐ต ๐ข๐ฏ๐ฅ ๐บ๐ฆ๐ต ๐ข ๐ณ๐ฆ๐ค๐ฆ๐ฏ๐ต ๐ด๐ต๐ถ๐ฅ๐บ ๐ช๐ด ๐ค๐ฐ๐ฏ๐ด๐ช๐ฅ๐ฆ๐ณ๐ช๐ฏ๐จ ๐ธ๐ข๐บ๐ด ๐ช๐ฏ ๐ธ๐ฉ๐ช๐ค๐ฉ ๐ด๐ถ๐ค๐ฉ ๐ด๐บ๐ด๐ต๐ฆ๐ฎ๐ด ๐ค๐ข๐ฏ ๐ฃ๐ฆ ๐ข๐ต๐ต๐ข๐ค๐ฌ๐ฆ๐ฅ.
As developers increasingly use AI agents for automating tasks, understanding potential vulnerabilities becomes criticalโฆ
This study reveals that introducing adversarial pop-ups into Agentic Environments achieves an attack success rate of 86%, reducing task completion rates byย 47%.
Basic defences, like instructing agents to ignore pop-ups, proved ineffective, underscoring the need for more robust protection mechanisms against suchย attacks.
In Short
Just like human users need to undergo training to recognise phishing emails, so Language Models with vision capabilities need to be trained to recognise adversarial attacks via the GUI.
Language Models and agentic applications will need to undergo a similar process to ignore environmental noises and recognise prioritise legitimate instructions.
This also applies to embodied agents since many distractors in the physical environment might also be absent from the trainingย data.
There is also a case to be made for human users to oversee the automated agent workflow carefully to manage the potential risks from the environment.
The future might see work on effectively leveraging human supervision and intervention for necessary safety concerns.
There are also a case to be made for the user environment and safeguarding the digital world within which the AI Agent lives. If this digital world is a PC Operating System, then popups and undesired web content can be filtered on an OS level.
This, together with Language Model training can serve as a double check.
Some Background
The paper demonstrates vulnerabilities in Language Models / Foundation Models and AI Agents that interact with graphical user interfaces, showing that these agents can be manipulated by adversarial pop-ups, leading them to click on misleading elements instead of completing intended tasks.
The researchers introduced carefully crafted pop-ups in environments like OSWorld and VisualWebArena. Basic defence strategies, such as instructing the agent to ignore pop-ups, were ineffective, revealing the need for robust protections against such attacks in VLM-based automation.
Below the attack sequence and construct of the pop-up:
Conclusion
For language model based AI Agents to carry out tasks on the web, effective interaction with graphical user interfaces (GUIs) is crucial.
These AI Agents must interpret and respond to interface elements on webpages or screenshots, executing actions like clicking, scrolling, or typing based on commands such as โfind the cheapest productโ or โset the default search engine.โ
Although state-of-the-art Language Models / Foundation Models are improving in performing such tasks, they remain vulnerable to deceptive elements, such as pop-ups, fake download buttons, and misleading countdown timers.
While these visual distractions are typically intended to trick human users, they pose similar risks for AI Agents, which can be misled into dangerous actions, like downloading malware or navigating to fraudulent sites.
As AI Agents are perform tasks on behalf of users, addressing these risks becomes essential to protect both agents and end-users from unintended consequences.
Chief Evangelist @ Kore.ai | Iโm passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.
COBUS GREYLING
Where AI Meets Language | Language Models, AI Agents, Agentic Applications, Development Frameworks & Data-Centricโฆcobusgreyling.com
Attacking Vision-Language Computer Agents via Pop-ups
Autonomous agents powered by large vision and language models (VLM) have demonstrated significant potential inโฆarxiv.org