In the fast-paced world of artificial intelligence, trust is the most important currency. Companies implementing AI solutions must be able to rely on their sensitive data being protected at all times. A recent security incident at the AI provider Localmind.ai on October 5, 2025, serves as a stark warning: without a fundamentally solid security strategy, even the best intentions can end in serious data leaks.
This incident is not an isolated case but a lesson for the entire industry. It underscores how crucial a well-thought-out architecture, strict control mechanisms, and complete transparency are. Instead of pointing fingers, we want to use the occasion to show, objectively, the fundamentally different path we at innFactory take with our CompanyGPT to ensure the security and data sovereignty of our customers.
The Incident: A Misconfigured Instance with Far-Reaching Consequences
At Localmind.ai, a misconfigured beta test environment gave a third party extensive administrator rights. This led to a data leak that potentially affected company and customer data. Even the company's immediate decision to take the systems offline could not prevent a significant loss of trust, especially among customers from the government and enterprise sectors.
The incident reveals two central vulnerabilities that must be avoided in modern software architectures: inadequately isolated test environments and incomplete access control.
The innFactory Approach: Security as a Design Principle of CompanyGPT
In developing CompanyGPT, we treated security not as an add-on feature but as the foundation of the entire architecture. Our approach differs in three essential respects:
1. Transparency Through Open Source as a Security Guarantor
While proprietary, closed systems are a black box, we deliberately build on proven open-source components. CompanyGPT is based on a fork of LibreChat, a leading open-source solution that is actively developed and reviewed by a global community of around 300 developers. Leading technology companies such as Shopify also rely on this foundation.
- The Advantage: The source code is fully open to inspection. Potential vulnerabilities are scrutinized by hundreds of eyes and fixed quickly. This transparency creates verifiable trust instead of merely promising it.
2. Absolute Data Sovereignty Through Completely Isolated Tenants
At the core of our security strategy is the customer's complete data sovereignty within their own Microsoft Azure cloud environment. Unlike architectures that may rely on shared resources or poorly isolated environments, with us every CompanyGPT instance is a separate, completely isolated tenant.
- The Advantage: There are no shared databases, computing resources, or configurations between customers. An incident at one customer could technically never affect another. Your data remains exclusively in your dedicated and secure environment.
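To make the idea of per-customer isolation concrete, here is a minimal sketch using the Azure SDK for Python. The subscription ID, resource group name, and region are illustrative placeholders, and real provisioning would typically run through infrastructure-as-code, but the principle is the same: every deployment lives entirely inside the customer's own subscription.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Hypothetical value: every customer brings their own Azure subscription.
CUSTOMER_SUBSCRIPTION_ID = "<customer-subscription-id>"

# Authenticate against the customer's tenant (managed identity in Azure,
# developer credentials locally).
credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, CUSTOMER_SUBSCRIPTION_ID)

# All resources for this deployment are created inside the customer's own
# subscription -- no database, compute, or configuration is shared with anyone else.
client.resource_groups.create_or_update(
    "rg-companygpt-prod",  # illustrative name
    {
        "location": "germanywestcentral",  # EU region for GDPR-compliant residency
        "tags": {"workload": "companygpt", "isolation": "dedicated-tenant"},
    },
)
```

Because the boundary is the subscription itself, Azure RBAC, billing, and audit logs are also scoped per customer by construction.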
3. Seamless Access Control Through Established Enterprise Systems
We do not impose a new, separate user management system. Instead, CompanyGPT integrates seamlessly into your existing IT infrastructure. Access is controlled via proven systems such as Microsoft Entra ID (formerly Azure AD).
- The Advantage: Your IT department retains full control. You can manage access rights granularly, revoke an employee's access, or terminate sessions centrally, all with the tools you already know and trust. The same applies to custom-connected MCP servers, which we can additionally secure via OAuth if needed.
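As an illustration of what this integration looks like on the API side, the following sketch validates an incoming Entra ID access token before a request is served. The tenant ID and audience are hypothetical placeholders and the actual CompanyGPT configuration may differ, but the pattern of verifying every token against your directory is the same.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# Hypothetical placeholders for the customer's directory and the protected API.
TENANT_ID = "<your-entra-tenant-id>"
AUDIENCE = "api://companygpt"  # illustrative application ID URI

# Entra ID publishes its token-signing keys at a per-tenant JWKS endpoint.
jwks_client = PyJWKClient(
    f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"
)

def verify_entra_token(bearer_token: str) -> dict:
    """Reject any request whose token was not issued by the customer's tenant."""
    signing_key = jwks_client.get_signing_key_from_jwt(bearer_token)
    return jwt.decode(
        bearer_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=f"https://login.microsoftonline.com/{TENANT_ID}/v2.0",
    )
```

Because tokens are issued centrally by Entra ID, disabling an account in the directory means no new tokens are granted, and access lapses with the short lifetime of any token already issued.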
Why Microsoft Azure? The Cloud as a Security Fortress
Our choice of Microsoft Azure as the technological foundation is a central pillar of our security promise. Azure offers a level of security that on-premises solutions could only reach with extremely high effort.
- GDPR-Compliant Data Storage: Your data is guaranteed to be stored and processed in European data centers that hold the strictest certifications and meet rigorous physical security standards.
- End-to-End Encryption: Data is protected at all times: in transit, at rest, and during processing. Even with direct access to a physical drive, the data would be unreadable.
- Strict Identity and Access Management: Azure offers advanced tools for monitoring and controlling access. Every action is logged, and unusual activities can be automatically detected and blocked.
- Zero-Trust Architecture: Azure follows the “never trust, always verify” principle. No access, even within the network, is considered trustworthy by default. Every request must be authenticated and authorized.
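A small example of what “never trust, always verify” means in day-to-day code: instead of storing static API keys, the application requests short-lived credentials from Azure for every access. The sketch below uses the azure-identity and Key Vault SDKs; the vault and secret names are illustrative assumptions, not the actual CompanyGPT configuration.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# No static secrets in code or config: the app authenticates via its managed
# identity (in Azure) or the developer's own login (locally).
credential = DefaultAzureCredential()

# Illustrative vault name; each customer would use their own Key Vault.
client = SecretClient(
    vault_url="https://companygpt-vault.vault.azure.net",
    credential=credential,
)

# Each call is authenticated with a short-lived token, authorized via Azure RBAC,
# and recorded in the audit logs -- nothing is trusted just because it is "inside".
api_key = client.get_secret("model-api-key").value  # illustrative secret name
```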
Direct Comparison of the Approaches
| Criterion | Localmind.ai (according to the incident) | CompanyGPT (by innFactory AI) |
|---|---|---|
| Software Base | Proprietary, non-transparent codebase. | Transparent, reviewed open-source base (LibreChat). |
| Architecture | Shared or inadequately isolated test/beta systems. | Completely isolated tenants in dedicated cloud environments. |
| Data Storage | Unclear separation between production & test data. | Strict separation, data sovereignty with the customer in their own Azure subscription. |
| Access Management | Compromise possible through weak configuration. | Integration into enterprise systems (Microsoft Entra ID, Google) following the zero-trust principle. |
| Control & Audit | Lack of transparency led to undetected vulnerability. | Verifiable security through open source & comprehensive Azure security tools. |
Conclusion: Trust is Created Through Design, Not Promises
The incident at Localmind.ai is a valuable lesson for every company that relies on AI technology. Marketing promises of “local and secure” solutions are worthless if the technical implementation disregards fundamental security principles.
With CompanyGPT, innFactory offers a solution designed with security from the ground up. The combination of transparency (open source), control (Azure and enterprise login), and isolation (dedicated tenants) creates a robust and resilient architecture. It ensures that our customers can reap the benefits of AI without giving up control over their most valuable asset: their data.
