“We now have Claude in the EU — via AWS Bedrock.” We hear this sentence at innFactory AI several times a week in consulting conversations. The meaning: the company has successfully connected Claude Sonnet or Claude Opus through a European hyperscaler, all compliance checkboxes are ticked, the data protection officer is happy. The expectation from the business unit is then often: “Great, so our employees can finally use claude.ai — in a GDPR-compliant way.”
This is exactly where disappointment sets in. Because what’s provided through AWS Bedrock, Google Vertex AI or Azure OpenAI Service is the model — not the product. It’s as if you bought an engine and were surprised that you didn’t get a car.
This article clears up the most common misconception in current enterprise AI adoption. We explain what really distinguishes Claude and ChatGPT as models, why the app experiences you know from personal use don’t migrate to the EU through cloud providers, and how European providers — both open source and commercial — close this gap. In particular, we show why CompanyGPT — our enterprise fork of LibreChat — is the pragmatic answer to this problem for many companies.
Model vs. product: the distinction almost no one draws cleanly
When we talk about AI in strategy workshops, we like to draw the following line:
The model is the AI itself. At Anthropic the models are called Claude Haiku, Sonnet and Opus. At OpenAI they are called GPT-5, GPT-5 mini, the o-series and Codex. At Google they are called Gemini 2.5 Pro and Flash. At its core, a model consists of trained weights and is addressed via an API.
The app (or product) is everything built around the model to make it usable for humans. This includes the web interface, memory, project workspaces, file uploads, third-party connectors, voice modes, image generation, agent frameworks, billing logic, SSO, audit logs, mobile apps, IDE plugins and much more.
What you see at claude.ai or chatgpt.com is the app. What Anthropic offers via AWS Bedrock and OpenAI offers via Azure OpenAI Service is the API to the model. These are two completely different products with different contracts, different data protection regimes and different feature sets.
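To make this concrete: the hyperscaler route gives you a raw model endpoint that you address programmatically, nothing more. The sketch below builds a request body in the shape Bedrock expects for Anthropic models; the model ID and region are illustrative placeholders, and the actual network call (shown commented out) would additionally require AWS credentials and boto3.

```python
import json

# What a hyperscaler gives you: a raw model endpoint. The app layer
# (chat UI, memory, projects, file handling) is entirely up to you.
MODEL_ID = "anthropic.claude-sonnet-example-v1"  # hypothetical Bedrock model ID
REGION = "eu-central-1"                          # Frankfurt

def build_claude_request(user_message: str, max_tokens: int = 512) -> str:
    """Build the JSON body Bedrock expects for Anthropic models."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(body)

# Sending it requires an AWS client, e.g. with boto3 (not run here):
#   client = boto3.client("bedrock-runtime", region_name=REGION)
#   response = client.invoke_model(modelId=MODEL_ID,
#                                  body=build_claude_request("Hi"))

payload = build_claude_request("Summarise our Q3 report.")
print(payload)
```

Everything a user would recognise as "Claude" beyond this raw request/response cycle belongs to the app layer.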
| Layer | Examples | Who operates it? | Where does it live? |
|---|---|---|---|
| App / product | claude.ai, Claude Code, Claude for Work, ChatGPT, Codex, ChatGPT Enterprise | Anthropic / OpenAI directly | Primarily USA |
| Model API | Claude via AWS Bedrock, GPT via Azure OpenAI, Gemini via Vertex AI | AWS / Microsoft / Google, on behalf of the model vendor | EU region selectable |
| Custom app on model API | LibreChat, OpenWebUI, CompanyGPT | You or your partner | Your choice (incl. EU) |
This table explains 80% of the confusion we see in projects.
What actually distinguishes Claude and ChatGPT as models?
Before we get to the app topic, a sober look at the model layer is worthwhile — because plenty of half-truths circulate here too.
Claude (Anthropic)
Since 2024, Claude has ranked among the strongest models on the market in many disciplines. Anthropic follows a research approach called Constitutional AI, which results in Claude hallucinating comparatively rarely, clearly admitting when it doesn’t know something, and remaining stable across long reasoning chains. The current family includes Claude Haiku 4.5 (fast, cheap), Claude Sonnet 4.7 (all-rounder) and Claude Opus 4.7 (highest quality). Strengths:
- Coding — Claude is the preferred model of many professional software teams
- Long context — stable up to 200,000 tokens, significantly more in the enterprise tier
- Structured output — clean JSON, XML and Markdown generation
- Low hallucination rate — important for legal, medical, compliance use
ChatGPT / GPT (OpenAI)
The GPT family is broader and more focused on multimodality. GPT-5 is on par with Claude Opus in reasoning, but has its own strengths:
- Multimodality — image, voice (Advanced Voice Mode), video understanding are very mature
- Tool use & agents — early and deep ecosystem around function calling
- Codex — dedicated coding agent with cloud sandbox
- Broad world knowledge — through a huge training dataset
Gemini (Google) as the third pole
For completeness: Gemini 2.5 Pro has the longest productive context window (1M+ tokens), is natively multimodal, and integrates deeply with Google Workspace.
The key insight
All these differences are model properties. They transfer fully when you use the model through a hyperscaler. Claude via AWS Bedrock is the same Claude in answer quality as on claude.ai. But — and this is the decisive point — the app is a different one.
The big misconception: “We have Claude on Bedrock, so we have Claude.ai”
Let’s look concretely at what the three big hyperscaler offerings actually deliver — and what they don’t.
What you get from AWS Bedrock, Google Vertex AI and Azure OpenAI
- Access to the model via API (Claude, GPT, Gemini, depending on provider)
- Selectable EU region (Frankfurt, Dublin, Paris)
- Data Processing Agreement (DPA) with the hyperscaler
- Zero data retention can be contractually assured (content is not used for training)
- Industrial scaling, SLAs, monitoring
- Integration into existing cloud security and identity structures
That’s a lot — and it’s the correct path to embed a model into enterprise processes in a GDPR- and AI-Act-compliant way.
What you do not get from hyperscalers
- No claude.ai web interface — that exists exclusively on Anthropic infrastructure
- No Claude Code — the CLI and IDE coding agent runs against the Anthropic account, not via Bedrock
- No Claude for Work / Projects / Artifacts UI — the collaborative interface with memory and project containers stays at Anthropic
- No ChatGPT web app, no Custom GPT marketplace — the GPT Store logic is OpenAI-exclusive
- No Codex Cloud — OpenAI’s cloud coding agent is its own product, not replicable through Azure OpenAI
- No ChatGPT memory — the persistent, account-wide memory function only exists in the OpenAI app
- No Operator, no Sora, no Advanced Voice Mode — all standalone products
- No mobile apps with push notifications, voice input, context persistence
The reason is simple: these apps are operated by Anthropic and OpenAI themselves. If you buy ChatGPT Enterprise or Claude for Work directly, you get those experiences — under the respective vendor’s data protection terms, typically with a US connection that many European companies cannot or will not accept (public sector, critical infrastructure, regulated industries, § 203 StGB for professionals bound by confidentiality).
So if you want app experience plus EU data residency plus integration with your own data, you have to provide the app layer yourself. Anthropic and OpenAI know this — and that’s exactly why they deliberately offer their models through third-party clouds: so that companies or specialised service providers can build their own solutions on top.
And “business GPTs” don’t automatically solve the problem either
A second common confusion: “We bought ChatGPT Enterprise” or “We use Microsoft Copilot Chat — that’s GDPR-compliant ChatGPT, right?” This too isn’t quite accurate.
ChatGPT Enterprise is indeed a separate product with better data protection terms than the consumer version. But it remains hosted at OpenAI, in their US infrastructure, under their contract structure. GDPR compliance is feasible under EU standard contractual clauses, but not acceptable for every industry.
Microsoft 365 Copilot and Copilot Chat use GPT models but are tied to the Microsoft ecosystem. Your M365 data is included when requests are processed, which helps for pure Office workflows — but becomes limiting for company-wide AI strategies (multi-model, custom RAG sources, custom agents).
Other “business GPTs” like Langdock, Neuland.ai or similar platforms are often SaaS solutions that replicate the app experience but force you to work in their cloud — with their own token markups, their own multi-tenancy and limited extensibility. We’ve covered this in detail in CompanyGPT vs. Langdock and CompanyGPT vs. Neuland.ai.
The honest answer is: a truly clean solution — app experience, your own cloud, your own data, your own choice of model — is not available off the shelf. It’s built with open-source components or specialised enterprise forks.
The market’s answer: custom web interfaces that clone the app experience
The European and global open-source community recognised this problem early and developed platforms that do exactly that: replicate the ChatGPT/Claude experience while keeping the model swappable and enabling hosting in your own cloud.
OpenWebUI
OpenWebUI is probably the best-known self-hosted chat interface. Originally built around Ollama for local models, today it supports all common cloud models. Strengths:
- Very active community, fast releases
- Focus on simple self-hosting (Docker, Kubernetes)
- Good integration for local models via Ollama
- Plugin system
Weaknesses in the enterprise context: Office file processing is rudimentary, RAG pipelines are basic, professional SSO/audit requirements often need to be added separately.
LibreChat
LibreChat is the second major open-source platform — and noticeably closer to enterprise needs. Strengths:
- Multi-LLM natively — OpenAI, Anthropic, Google, Bedrock, Azure, Mistral, local models in parallel
- MCP integration (Model Context Protocol) for tool connections
- Agents framework included
- Plugins and very flexible file processing
- Modular and well maintainable
LibreChat is the foundation we at innFactory AI chose as the starting point for our product CompanyGPT.
The limits of “stock installation”
Both platforms are excellent open-source projects — but for productive use in mid-market and enterprise environments, a stock installation lacks functionality that community projects on their own rarely deliver:
- Deep Office integration (Excel formulas, Word structures, PowerPoint layouts, PDF analysis with tables)
- Robust RAG pipelines with SharePoint, Confluence, file servers
- Diagram and chart generation with enterprise-grade outputs
- Audit logs, GDPR reporting, deletion workflows
- 24/7 support, SLAs, maintenance across version jumps
- Corporate branding, SSO via Azure AD/Entra ID, permission models
CompanyGPT: our enterprise-ready fork of LibreChat
This is exactly the gap CompanyGPT closes. We forked LibreChat and have systematically extended it over the past two years for German and European mid-market and enterprise requirements. Key extensions our fork brings:
CompanyFiles — file processing at enterprise level
Standard LibreChat can read files — but for daily business that’s not enough. CompanyFiles offers:
- Excel processing with understanding of formulas, multiple sheets, pivot tables
- Word documents with structure preservation (headings, lists, tracked changes)
- PowerPoint analysis including speaker notes and slide order
- PDF with table extraction and layout recognition — even for scanned documents via OCR
- Generation back into real Office formats, not just Markdown
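Why plain file upload falls short: modern Office files are not flat text but ZIP packages containing XML parts, so any serious processing has to be format-aware from the first byte. A minimal, illustrative sketch of detecting the Office type from the package contents (the detection rule is a deliberate simplification, not CompanyFiles' actual implementation):

```python
import io
import zipfile

# Office Open XML files (.docx/.xlsx/.pptx) are ZIP packages whose main
# content lives under a format-specific folder. This simplified detector
# is an illustration only, not the CompanyFiles implementation.
_MARKERS = {
    "word/document.xml": "Word document",
    "xl/workbook.xml": "Excel workbook",
    "ppt/presentation.xml": "PowerPoint presentation",
}

def detect_office_type(data: bytes) -> str:
    if not zipfile.is_zipfile(io.BytesIO(data)):
        return "not an Office Open XML package"
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        names = set(zf.namelist())
        for marker, label in _MARKERS.items():
            if marker in names:
                return label
    return "unknown OOXML package"

# Build a minimal fake .xlsx-like package in memory to demonstrate:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("xl/workbook.xml", "<workbook/>")
print(detect_office_type(buf.getvalue()))  # Excel workbook
```

Extracting formulas, tracked changes or speaker notes then means parsing the XML parts inside these packages, which is exactly the depth a stock chat UI does not provide.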
CompanyRAG — deep knowledge integration
The central question in nearly every AI project: “How do I get our internal data into the chat?” CompanyRAG delivers:
- SharePoint connector with permission propagation
- Confluence, file share, wiki integration
- Hybrid search (semantic + keyword)
- Source citations with deep links back into the source system
- Document scope per user, team, permission group
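Hybrid search in this sense means merging two independently produced result lists, one semantic and one keyword-based. A common technique for this merge step is reciprocal rank fusion (RRF); the sketch below shows the idea with toy document IDs and is an illustration, not CompanyRAG's internals.

```python
# Reciprocal rank fusion (RRF): merge a semantic and a keyword ranking
# into one combined list. Toy data; illustrative only.
def rrf_merge(semantic: list[str], keyword: list[str], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in (semantic, keyword):
        for rank, doc_id in enumerate(ranking, start=1):
            # Documents appearing in both rankings accumulate score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A document found by both retrievers ranks first:
semantic_hits = ["doc-intranet-42", "doc-contract-7", "doc-wiki-3"]
keyword_hits = ["doc-contract-7", "doc-policy-9"]
print(rrf_merge(semantic_hits, keyword_hits))
```

The appeal of RRF is that it needs no score calibration between the two retrievers, only their rank order, which makes it robust when mixing very different search backends.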
Search and analysis capabilities over enterprise data
Beyond CompanyRAG, CompanyGPT offers true analysis modes: data extraction across document sets, comparative analyses across multiple contracts, trend evaluation across report series — use cases not covered by plain chat.
Diagram and visualisation generation
Mermaid diagrams, flowcharts, organisational charts, simple charts — generated directly from chat and exportable. Important for consulting, strategy and process use cases.
Multi-model from EU clouds
CompanyGPT integrates all relevant models in parallel — through GDPR-compliant routes:
- Claude via AWS Bedrock (EU Frankfurt)
- GPT via Azure OpenAI (Sweden Central, Germany West Central)
- Gemini via Google Vertex AI (europe-west3)
- Mistral, Llama, local models via STACKIT, OVH or on-premise
The user switches models with a click in chat — the app layer stays the same, only the model engine is swapped.
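Under the hood, switch-per-click boils down to a routing table in the app layer: a friendly model name maps to a provider route and an EU region, while the chat logic stays identical. A simplified, hypothetical sketch (provider names, regions and model IDs are placeholders, not CompanyGPT's actual configuration):

```python
from dataclasses import dataclass

# Hypothetical routing table: the app layer stays constant, only the
# model route behind a friendly name changes. All values are placeholders.
@dataclass(frozen=True)
class ModelRoute:
    provider: str   # which hyperscaler API to call
    model_id: str   # provider-specific model identifier
    region: str     # EU region the request is pinned to

ROUTES = {
    "Claude": ModelRoute("aws-bedrock", "anthropic.claude-example", "eu-central-1"),
    "GPT": ModelRoute("azure-openai", "gpt-example", "swedencentral"),
    "Gemini": ModelRoute("google-vertex", "gemini-example", "europe-west3"),
}

def resolve(model_name: str) -> ModelRoute:
    """Look up the route for the model a user picked in the chat UI."""
    try:
        return ROUTES[model_name]
    except KeyError:
        raise ValueError(f"No EU route configured for model '{model_name}'")

route = resolve("Claude")
print(route.provider, route.region)  # aws-bedrock eu-central-1
```

Because the route is resolved per request, conversations, permissions and file context live in the app layer and survive a model switch unchanged.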
Hosted in your cloud, not ours
This is the key difference to many commercial “business GPTs”: CompanyGPT runs in your cloud subscription — Azure, AWS, GCP or STACKIT. We deliver the platform, you keep data sovereignty. No token markups, no multi-tenancy with other customers’ data, no licence fees — you pay model tokens directly to the hyperscaler.
More details on architecture, features and pricing are available on the CompanyGPT product page and in our post CompanyGPT — using your own AI models in a GDPR-compliant way in your enterprise.
Decision matrix: which path is right for you?
| Requirement | Recommended path |
|---|---|
| I need claude.ai or ChatGPT 1:1 for a few power users, US data flow is acceptable | Buy Claude for Work / ChatGPT Enterprise directly |
| I only need the model for a custom, technical integration (backend service, agent, custom software) | Hyperscaler API: Bedrock / Azure OpenAI / Vertex AI |
| I need the app experience for many employees, with EU data residency and integration with our data (SharePoint, Office, etc.) | CompanyGPT (Enterprise) or LibreChat / OpenWebUI self-managed |
| I want local models on our own servers, no cloud | OpenWebUI + Ollama, or CompanyGPT on-premise |
| I need coding agents like Claude Code or Codex, but GDPR-compliant | Currently a market gap — alternatives like OpenCode, Cline or custom agent frameworks on Bedrock/Vertex |
The last row is honestly one of the most interesting open points: coding agents in EU cloud variants are not yet as mature as the direct vendor products. There’s a lot happening here — we’re watching the field closely and will cover it in detail in a separate post.
Conclusion
The most important takeaway from this article is really just one sentence: cloud availability of a model does not mean app availability of a product.
If you connect Claude via AWS Bedrock, you have the model — not claude.ai, not Claude Code, not Claude for Work. If you use GPT via Azure OpenAI, you have the model — not ChatGPT, not Custom GPTs, not Codex. This distinction is not academic; it’s the basis of every serious AI strategy and every honest AI compliance assessment.
If you want the app experience in the EU with your own data, you need your own app layer. Open source provides excellent building blocks with OpenWebUI and LibreChat. For enterprise requirements — Office depth, RAG, SharePoint, diagrams, SSO, support — we’ve been building exactly the layer between hyperscaler API and end user with CompanyGPT for two years.
If you want to know what this means concretely for your company, talk to us: Contact us or request a CompanyGPT demo.
FAQ
Is Claude via AWS Bedrock the same as claude.ai?
No. Via Bedrock you get the Claude model as an API. claude.ai is the web app operated by Anthropic itself, with memory, projects, artifacts, computer use and other features. That app runs on Anthropic infrastructure and is not available through Bedrock.
Can I get Claude Code via Vertex AI or Bedrock?
No. Claude Code is a standalone Anthropic product (CLI and IDE agent) that runs against the Anthropic API directly and requires an Anthropic account. It is not currently usable via Bedrock or Vertex AI. Alternatives for GDPR-compliant coding are tools like OpenCode, Cline or custom agent frameworks that use Bedrock models.
Is ChatGPT Enterprise GDPR-compliant?
ChatGPT Enterprise has significantly better data protection terms than the consumer version, but runs on OpenAI infrastructure in the USA. With current standard contractual clauses, GDPR compliance can be achieved for many industries — but not for regulated areas such as professionals bound by § 203 StGB, critical infrastructure or certain public sector clients.
What’s the difference between LibreChat and CompanyGPT?
LibreChat is an excellent open-source project providing the base app layer for multi-LLM chat. CompanyGPT is our enterprise fork with additional components: deep Office file integration (CompanyFiles), SharePoint and RAG integration (CompanyRAG), diagram generation, audit logs, corporate branding, support SLAs and productive multi-model integration with EU hyperscalers.
Can I use Codex via Azure OpenAI?
Codex as a standalone product (Codex Cloud, Codex CLI) is an OpenAI offering and is not available via Azure OpenAI. The underlying code-capable models (e.g. GPT-5) are usable via Azure OpenAI as an API — but the Codex app experience must either be purchased directly from OpenAI or rebuilt with alternative coding agent frameworks.
Is it worth running OpenWebUI or LibreChat yourself?
For smaller teams, technically savvy organisations or as a pilot project: yes, absolutely. For productive enterprise use with hundreds or thousands of users, regulated requirements and deep business integration, we recommend an enterprise fork like CompanyGPT or an experienced implementation partner. The difference isn’t in the open-source code, but in operations, data connectors, support and adaptation to industry requirements.
Which model is “better” — Claude or GPT?
It depends on the use case. Claude is often the first choice today for coding, long documents, reasoning and sensitive content with low hallucination tolerance. GPT scores on multimodality (voice, image, video) and the broad tool ecosystem. Gemini is strong for very long contexts and Workspace integration. A multi-model platform like CompanyGPT lets you choose the best model per use case — without locking into a single vendor. More on this in our guide Claude for Enterprise: Anthropic in Business Use.
