As AI becomes increasingly integrated into various sectors, concerns about data privacy and security are growing. AI companies such as OpenAI, Anthropic, xAI, and Google accumulate user data to improve their models and to support safety and security efforts. But this has raised apprehensions about how that data is stored, accessed, and used, particularly in highly regulated industries such as healthcare, finance, and government.
San Francisco-based startup Confident Security is addressing these concerns with its end-to-end encryption tool, CONFSEC. The company aims to become "the Signal for AI" by ensuring that prompts and metadata are not stored, seen, or used for AI training, even by the model provider or any third party.
Confident Security's CEO, Jonathan Mortensen, emphasizes the importance of maintaining privacy when using AI tools. "The second that you give up your data to someone else, you’ve essentially reduced your privacy," he told TechCrunch. "And our product’s goal is to remove that trade-off."
The company recently emerged from stealth mode with $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx. Confident Security aims to serve as an intermediary between AI providers and their customers, which include hyperscalers, governments, and enterprises.
Mortensen believes AI companies could benefit from offering Confident Security's tool to enterprise clients as a way to unlock that market. CONFSEC is also well suited to new AI browsers, such as Perplexity's Comet, where it ensures customers' sensitive data is never stored on a server that the company or malicious actors could access.
CONFSEC is modeled after Apple's Private Cloud Compute (PCC) architecture, which Mortensen says is "10x better than anything out there in terms of guaranteeing that Apple cannot see your data" when running certain AI tasks securely in the cloud.
Confident Security's system first encrypts and anonymizes prompts, routing them through relay services such as Cloudflare or Fastly so that no single server ever sees both a request's source and its content. It then applies encryption that can be unlocked only under strict conditions.
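To make that pattern concrete, here is a minimal sketch of the idea, assuming an OHTTP-style split between a relay and the inference service. The endpoints, keys, and prompt are illustrative, not CONFSEC's actual API (the example uses the PyNaCl library):

```python
# Illustrative sketch: the client encrypts a prompt to the inference
# service's public key, then hands the ciphertext to an independent relay.
# The relay sees who is asking but not what; the service sees what is
# asked but not by whom. The keypair is generated locally only so the
# example is self-contained; in practice the service publishes its key.
from nacl.public import PrivateKey, SealedBox

service_key = PrivateKey.generate()           # held inside the inference node
client_box = SealedBox(service_key.public_key)

ciphertext = client_box.encrypt(b"summarize this patient record ...")
# ... a relay such as Cloudflare or Fastly forwards `ciphertext`,
# stripping the client's network identity along the way ...

server_box = SealedBox(service_key)           # only the service can decrypt
prompt = server_box.decrypt(ciphertext)
print(prompt.decode())
```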
"You can say you’re only allowed to decrypt this if you are not going to log the data, and you’re not going to use it for training, and you’re not going to let anyone see it," Mortensen explained.
Finally, the software running the AI inference is publicly logged and open to review, allowing experts to verify its guarantees.
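That public review step resembles a transparency log: before trusting a server, a client checks that the exact software build it is talking to has been published for auditors to inspect. A toy version of the check, with made-up entries:

```python
import hashlib

# Toy transparency-log check (entries are made up): a client refuses to send
# data unless the measurement (hash) of the inference binary it is attested
# to be running appears in a public, append-only log reviewed by auditors.
PUBLIC_LOG = {
    # measurement (sha256 hex) -> audited release it corresponds to
    hashlib.sha256(b"inference-server-build").hexdigest(): "v1.0, audited",
}

def is_audited_build(binary: bytes) -> bool:
    return hashlib.sha256(binary).hexdigest() in PUBLIC_LOG

assert is_audited_build(b"inference-server-build")
assert not is_audited_build(b"tampered-build")
```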
Jess Leão, a partner at Decibel, praised Confident Security's approach, stating, "Confident Security is ahead of the curve in recognizing that the future of AI depends on trust built into the infrastructure itself. Without solutions like this, many enterprises simply can’t move forward with AI."
Although Confident Security is still in its early days, Mortensen said CONFSEC has been tested and externally audited and is production-ready. The team is in talks with banks, browsers, and search engines, among other potential clients, about integrating CONFSEC into their infrastructure stacks.
Confident Security's CONFSEC aims to address growing concerns about data privacy and security in AI adoption. By offering an end-to-end encryption tool that keeps prompts and metadata private even from the model provider, the company is paving the way for trust-based AI infrastructure across industries.