Skip to main content
9 – 17 UHR +49 8031 3508270 LUITPOLDSTR. 9, 83022 ROSENHEIM
DE / EN

Why We Still Avoid AI Browsers: The Concrete Danger of Prompt Injection

Tobias Jonas Tobias Jonas | | 5 min read

The tech world is excited about a new generation of browsers. Tools like Perplexity’s or the new ChatGPT Atlas from OpenAI promise to revolutionize our browsing experience. They no longer act as passive display tools but as proactive, “agentic” assistants that can summarize web pages, perform actions, and be deeply integrated into our workflows.

But amid all the excitement, many overlook a fundamental security problem that is as new as the technology itself. For this reason, we have decided not to install these new AI browsers for the time being and advise caution, especially in enterprise environments. The reason has a name: Indirect Prompt Injection.

What is an “Indirect Prompt Injection”?

Forget complex malicious code or viruses. In an “Indirect Prompt Injection,” the artificial intelligence itself becomes the gateway. The attack does not occur by exploiting a technical vulnerability in the traditional sense, but by manipulating the instructions (prompts) that the AI processes from sources it trusts but that are controlled by third parties.

Think of it as Phishing 2.0 on Autopilot. Instead of tricking a user into clicking a link and manually entering their data, the AI is directly and invisibly manipulated into performing harmful actions. These attacks are also called “Man-in-the-Prompt” attacks because the attacker positions themselves between the user’s intent and the AI’s execution.

“CometJacking”: A Wake-Up Call with Technical Details

Researchers from the browser manufacturer Brave have recently impressively demonstrated this danger. [1][2] In their investigations, they named a specific attack method “CometJacking”, named after Perplexity’s AI browser Comet. [3]

The security researchers showed how easy it is to compromise an AI browser. They used various vectors:

  • Hidden instructions in web page content: Malicious prompts were placed directly in the HTML code of web pages. These instructions can be completely invisible to the human viewer – hidden in tiny font sizes, invisible markup, or elements concealed by CSS. When the AI analyzes such a page, it executes the hidden commands. [4]
  • Attack via URL parameters: An even more direct method, uncovered by security researchers at LayerX, uses manipulated links. [5] An attacker can create a link that contains a malicious prompt directly in the URL parameter (collection). [6] When the user clicks on it, the prompt instructs the AI agent not to search the web but to access its internal storage or connected services like Google Mail, steal the data, and send it to a server controlled by the attacker. [7]

The consequences are alarming and concretely proven. The researchers were able to make the AI browser:

  • Exfiltrate private data, such as email contents or calendar entries. [8]
  • Extract sensitive data like login credentials from the agent’s memory.
  • Encode data (e.g., with Base64) to bypass security mechanisms for detecting data leaks, and then send it unnoticed. [7]

OpenAI’s Atlas: The Danger Goes Mainstream

What was uncovered in a research study with Perplexity’s Comet is not an isolated problem. It is a systemic challenge for the entire category of AI-powered browsers. With the new ChatGPT Atlas, OpenAI is now potentially bringing similar functionality to millions of users.

Tellingly, even OpenAI explicitly warns in its own release notes: “Do not use Atlas for regulated or production data.” This warning is not an empty phrase but an admission that even the developers know the current risk. The technology is not yet mature for use in environments where sensitive or business-critical data is processed.

A New Era of Cybersecurity: Protecting AI from Context

We are entering a new era of IT security. Previously, the browser’s primary task was to protect the user from malicious websites. Now, the browser must additionally protect the AI from malicious instructions that come from the context of these websites.

The threat is shifting from pure code execution (exploits) to context manipulation. It’s no longer just about whether a program is malicious, but whether the intent of an AI agent can be manipulated through misleading information. A single successful prompt injection attack could be enough to compromise entire enterprise systems once AI agents begin to autonomously manage finances, workflows, and transactions.

Our Conclusion: Potential Yes, But Not Secure Enough Yet

Agentic AI browsers are undoubtedly a fascinating glimpse into the future of the internet. Their potential to automate complex tasks is enormous. From a professional security perspective, however, this future is not here yet.

As long as there are no robust and standardized mechanisms that protect an AI from executing malicious, hidden instructions, these tools pose an incalculable risk. The attack surface is too large and the potential damage is too high.

We will closely monitor developments, but for now we are sticking with proven and secure solutions and advise every company to do the same.


Sources

  1. Brave: “Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet” (August 20, 2025)
  2. Red-Team News: “CometJacking: How Indirect Prompt Injection Compromised Perplexity’s AI Browser”
  3. The Hacker News: “CometJacking: One Click Can Turn Perplexity’s Comet AI Browser Into a Data Thief” (October 4, 2025)
  4. Red-Team News: “CometJacking: How Indirect Prompt Injection Compromised Perplexity’s AI Browser”
  5. LayerX: “CometJacking: How One Click Can Turn Perplexity’s Comet AI Browser Against You” (October 4, 2025)
  6. BleepingComputer: “CommetJacking attack tricks Comet browser into stealing emails” (October 3, 2025)
  7. LayerX: “CometJacking: How One Click Can Turn Perplexity’s Comet AI Browser Against You” (October 4, 2025)
  8. The Hacker News: “CometJacking: One Click Can Turn Perplexity’s Comet AI Browser Into a Data Thief” (October 4, 2025)
Tobias Jonas
Written by

Tobias Jonas

Co-CEO, M.Sc.

Tobias Jonas, M.Sc. ist Mitgründer und Co-CEO der innFactory AI Consulting GmbH. Er ist ein führender Innovator im Bereich Künstliche Intelligenz und Cloud Computing. Als Co-Founder der innFactory GmbH hat er hunderte KI- und Cloud-Projekte erfolgreich geleitet und das Unternehmen als wichtigen Akteur im deutschen IT-Sektor etabliert. Dabei ist Tobias immer am Puls der Zeit: Er erkannte früh das Potenzial von KI Agenten und veranstaltete dazu eines der ersten Meetups in Deutschland. Zudem wies er bereits im ersten Monat nach Veröffentlichung auf das MCP Protokoll hin und informierte seine Follower am Gründungstag über die Agentic AI Foundation. Neben seinen Geschäftsführerrollen engagiert sich Tobias Jonas in verschiedenen Fach- und Wirtschaftsverbänden, darunter der KI Bundesverband und der Digitalausschuss der IHK München und Oberbayern, und leitet praxisorientierte KI- und Cloudprojekte an der Technischen Hochschule Rosenheim. Als Keynote Speaker teilt er seine Expertise zu KI und vermittelt komplexe technologische Konzepte verständlich.

LinkedIn