AI continues to take over more and more of our day-to-day activities: Anthropic recently announced a Chrome extension that allows Claude AI to see browser activity and run actions on behalf of users, while Perplexity's Comet is an AI-powered browser that the company calls both a "personal assistant" and a "thinking partner."
A prompt injection attack is when hackers disguise malicious inputs to AI as legitimate ones, so generative models are tricked into divulging sensitive data or taking harmful action.
In practice, this may involve hiding malicious prompts on a webpage the LLM is likely to read in order to carry out an action. The content, which could be plain text or embedded in an image or PDF, may look harmless or be invisible to users (employing white text on a white background, for example). Hackers don't need code to carry out a prompt injection attack—just the right words in the right place.
How prompt injection compromises agentic browsers
An example from Malwarebytes Labs: You ask your agentic browser to find and book a cheap flight for your next vacation. If it has all of your passenger and payment information available (because you've provided it), AI can complete this request without any additional action from you. But if the cheapest flight is found on a malicious website set up for this purpose, the browser could hand your credit card number and other sensitive data directly to the scammers.
How to safely use agentic browsers
Mitigating prompt injection attack risks falls largely on the developers of agentic browsers rather than the user, with security experts recommending higher standards for user interaction and distinguishing between a user's request and other content consumed to carry out an task.
That said, while Perplexity and Anthropic and others address these issues on their end, you can put guardrails in place against prompt injection, such as limiting the data and accounts your agentic browser can access and requiring manual review for high-stakes tasks, such as authorizing payments. Malwarebytes Labs also recommends enabling multi-factor authentication on all accounts connected to agentic browsers, regularly reviewing account and browser activity, and keeping software updated to ensure security flaws are patched in a timely manner.
Hence then, the article about your ai browser may be vulnerable to prompt injection attacks was published today ( ) and is available on Live Hacker ( Middle East ) The editorial team at PressBee has edited and verified it, and it may have been modified, fully republished, or quoted. You can read and follow the updates of this news or article from its original source.
Read More Details
Finally We wish PressBee provided you with enough information of ( Your AI Browser May Be Vulnerable to 'Prompt Injection' Attacks )
Also on site :