2026 AI Privacy Report: How Leading Platforms Handle Your Data

Table of Contents
- Why this report exists
- Methodology
- What we evaluated
- Platform-by-platform findings
- The training question: who uses your data?
- Memory and data retention policies
- Export and portability
- The open-source alternative
- Key findings and scorecard
- Privacy-enhancing technologies
- The regulatory landscape
- The broader provider landscape
- What users should do
- About this report
Why this report exists
AI tools now hold some of the most personal data in your digital life. Your job title, your health questions, your financial plans, your writing style, your relationships, your goals, your anxieties. As AI memory becomes standard across every major platform, these tools are no longer just answering questions. They are building detailed, persistent profiles of who you are.
The privacy stakes have never been higher. In 2025, AI memory was a novelty feature available on a handful of platforms. In 2026, it is the default. ChatGPT, Claude, and Gemini all offer persistent memory that carries context across conversations. The data these systems hold about individual users is more intimate than what sits in most email inboxes or browser histories, because AI memory is distilled. It is not raw data buried in thousands of messages. It is a structured, machine-readable profile of your life.
Most users have no idea how this data is handled. Privacy policies are long. Settings menus are buried. Defaults favor the platform, not the user. We produced this report because we believe people deserve a clear, factual picture of how each major AI platform treats their data, their memory, and their privacy.
This is an independent analysis of the privacy practices of the five most widely used AI platforms in 2026. We examined their publicly documented policies, evaluated their product features, and compared them across seven key dimensions. The goal is simple: give you the information you need to make informed choices about where you put your most personal data.
Methodology
We evaluated the publicly available privacy policies, terms of service, help center documentation, and product features of each platform as of April 2026. Our focus was on what happens to consumer and individual users, not enterprise plans. Enterprise agreements generally include stronger data protections, dedicated infrastructure, and contractual guarantees that consumer plans do not offer. Since the vast majority of AI users are on consumer plans, that is where we concentrated our analysis.
We did not test internal data handling practices. We cannot verify what happens behind closed doors. This report reflects only what each company has publicly documented and what is observable through their products. Where policies are ambiguous, we note the ambiguity rather than speculating.
Each platform was evaluated across seven dimensions: default training policy, data retention period, memory storage location, memory export capability, encryption approach, human review practices, and private mode availability. These dimensions were chosen because they represent the most material factors in determining how private your AI experience actually is.
What we evaluated
We assessed each platform across seven dimensions. These represent the core questions that determine whether your data is genuinely protected or merely described as protected.
- Default training policy (opt-in vs opt-out). Does the platform use your conversations to train future models? If so, is this the default, or do you have to opt in? The distinction matters enormously: defaults determine what happens to the majority of users who never change their settings.
- Data retention period. How long does the platform keep your data after you generate it? Is there a defined retention window, or is data kept indefinitely? What happens when you delete something?
- Memory storage location (cloud vs device). Where does your AI memory physically live? On the company's servers, on your device, or some combination? Cloud storage means your data exists on infrastructure you do not control. Device storage means you hold it physically.
- Memory export capability. Can you export your AI memory in a usable format? Can you take it to another platform? Exportability is the test of ownership: if you cannot take your data with you, you do not truly own it.
- Encryption approach. Is your data encrypted in transit and at rest? Who holds the encryption keys? Encryption where the company holds the keys is better than no encryption, but it still means the company can decrypt your data if they choose to.
- Human review practices. Can company employees or contractors read your conversations? Under what circumstances? How is reviewed data handled afterward?
- Incognito or private mode availability. Does the platform offer a mode where conversations are not saved, not used for training, and not added to your memory? What are the limitations of that mode?
Platform-by-platform findings
Below is a detailed breakdown of how each major platform handles your data. We have tried to be fair and factual. Where a platform does something well, we say so. Where there are concerns, we explain them clearly.
ChatGPT (OpenAI)
Plans: Free, Go ($8/mo), Plus ($20/mo), Pro ($100/mo), Pro Max ($200/mo), Business ($25/user/mo), Enterprise (custom). Note: OpenAI launched the $100/mo Pro tier on April 9, 2026, filling the gap between Plus and the original $200/mo tier.
Memory: Available on all plans, with limited capacity on Free and full memory on paid plans. ChatGPT remembers facts across conversations: your name, preferences, job, projects, and other details you share over time. You can view and manage individual memories in Settings.
Training: On Free, Go, Plus, and Pro plans, your conversations are used to train future models by default. An opt-out is available in Settings, but training stays on unless you switch it off. This means that unless you actively find and toggle the setting, your personal conversations contribute to model training. Business and Enterprise plans do not use conversations for training.
Retention: Active conversations are retained indefinitely unless you delete them; deleted conversations are purged after roughly 30 days, per OpenAI's retention policy. Even with chat history disabled, conversations are still retained for 30 days for abuse monitoring before being permanently deleted.
Export: You can export your full conversation history via Settings as a downloadable archive. However, your structured memories (the facts ChatGPT has learned about you) are not included in the export. These must be manually copied or extracted through workaround prompts. This is a significant gap: the most sensitive data, the distilled profile of who you are, is the hardest to export.
Encryption: Encrypted in transit via TLS. Encrypted at rest on OpenAI's servers. OpenAI holds the encryption keys.
Human review: Conversations may be reviewed by OpenAI staff for safety and product improvement. There is no public documentation specifying how long reviewed conversations are retained separately from standard retention policies.
Private mode: No dedicated incognito mode. You can disable chat history, but as noted, conversations are still retained for 30 days. There is no way to use ChatGPT with zero retention.
Claude (Anthropic)
Plans: Free, Pro ($20/mo), Max 5x ($100/mo), Max 20x ($200/mo), Team ($25/user/mo annual, $30/mo month-to-month), Enterprise (custom).
Memory: Persistent memory rolled out in March 2026 across all plans, including Free. Claude auto-summarizes your conversations every 24 hours and carries context across sessions. Each project maintains a separate memory, which is a thoughtful design choice for users who want different contexts for different areas of their life.
Training: Since October 2025, consumer accounts (Free, Pro, Max) are used for model training by default. An opt-out is available in Settings. Notably, leaving training enabled extends data retention from 30 days to up to 5 years, a detail that is easy to miss. Enterprise, Work, and Education plans are never used for training.
Retention: 30 days with training disabled. Up to 5 years with training enabled. This is one of the starkest retention differences in the industry, and it is tied directly to a training toggle that most users never touch.
Export: Chat history can be exported via Settings > Privacy > Export Data, delivered as a ZIP file with conversations in JSON format. For memory specifically, you can view and export your memories from Settings > Capabilities, or ask Claude directly to write out its memories of you. You can also import memories from other AI services like ChatGPT and Gemini.
Encryption: Encrypted in transit and at rest on Anthropic's servers. Anthropic holds the encryption keys.
Human review: Conversations may be reviewed by Anthropic staff for safety purposes.
Private mode: Claude offers an incognito mode where conversations are not saved to your history, not added to your memory, and not used for training regardless of your account settings. This is a meaningful feature. However, even incognito conversations are stored for a minimum of 30 days for safety and legal purposes. And incognito mode does not have access to your saved memories, so it is effectively a separate, context-free experience.
Gemini (Google)
Plans: Free, Google AI Pro ($19.99/mo), Google AI Ultra (~$42/mo, billed quarterly at $124.99), Workspace add-ons (Gemini Business $20/user/mo, Gemini Enterprise $30/user/mo).
Memory: Launched in March 2026, initially for Pro and Ultra subscribers before expanding to all plans including the free tier. Gemini memory includes Past Chats (conversation memory) and Personal Intelligence (context from Gmail, Calendar, Drive, and other Google services). Both are disabled by default. A notable feature is import capability: you can bring in memories and chat history from ChatGPT and Claude (up to 5 files per day, 5GB each). Your Gemini memory is tied to your Google account, which means it sits alongside your email, search history, location data, and other Google services.
Training: On consumer plans, conversations can be used for model training unless you opt out or disable Gemini activity saving. Workspace and Enterprise accounts are never used for training.
Retention: Default retention period is 18 months, configurable by Workspace admins to 3, 18, or 36 months. Consumer users are subject to the default.
Export: Available via Google Takeout, which is more comprehensive than most competitors offer. Gemini also provides import tools for other platforms, making it one of the easier platforms to move data into (though moving data out of Gemini and into a non-Google platform remains limited by format).
Encryption: Google's standard encryption: in transit and at rest. Google holds the encryption keys. Given the breadth of data Google holds across its services, the implications of a breach or internal access are broader than with a single-purpose AI tool.
Human review: Conversations are reviewed by specially trained teams. Reviewed data is retained for up to 3 years, disconnected from your account. The 3-year retention for reviewed data is worth noting: even data that has been disconnected from your identity persists for years in Google's systems.
Private mode: No dedicated incognito mode for Gemini specifically. You can pause Gemini activity saving, but this is a different mechanism than a true private mode and interacts with Google's broader activity controls.
DeepSeek
Plans: Free for consumers (message allowances that reset daily, no paid tiers). Open-source model weights available for self-hosting. API available at low cost ($0.28-0.42 per million tokens).
Memory: No persistent cross-session memory natively. DeepSeek processes your prompt and generates a response, but does not build a profile of you over time.
Training: DeepSeek's model weights are open source. When the model is run locally or through a privacy-first platform, your prompts are never transmitted to a corporate server, so they cannot be stored, logged, or used for training. Use of DeepSeek's own hosted interface is subject to its terms (see Retention below).
Retention: Depends entirely on where and how the model is run. On platforms like Anuma: zero retention. Self-hosted: zero retention. Through DeepSeek's own web interface: subject to their terms.
Export: Not applicable. Since no memory is stored, there is nothing to export.
Encryption: Depends on the implementation. When run locally, your data never leaves your device. When run through a platform, the platform's encryption policies apply.
Private mode: The model itself can run entirely on local hardware. This is the strongest form of privacy available: your data never touches someone else's server.
Grok (xAI)
Plans: Free (limited), X Premium ($8/mo), X Premium+ ($40/mo), SuperGrok ($30/mo standalone), Grok Business ($30/seat/mo), Enterprise (custom).
Memory: Limited conversation memory within sessions. Grok does not currently offer the same kind of persistent, cross-session memory as ChatGPT or Claude.
Training: Free, X Premium, X Premium+, and individual SuperGrok plans may use interactions for model training. Grok Business and Enterprise plans do not use conversations to train xAI's models by default.
Retention: Governed by xAI's data retention policy, which is documented in less detail than the policies of the other platforms reviewed here.
Export: No dedicated export functionality for conversations or any stored context.
Encryption: Standard encryption in transit and at rest.
Private mode: No documented incognito or private mode.
The training question: who uses your data?
This is the finding that matters most: every major closed-source AI platform trains on consumer conversations by default. An opt-out exists, but training remains on until you turn it off. This means millions of users are contributing their most personal conversations to model training without actively choosing to do so.
The implications are significant. When you ask an AI about a health concern, discuss a salary negotiation, draft a breakup message, or process a family conflict, that conversation may be used to train the next version of the model. It becomes part of a dataset that engineers, researchers, and potentially contractors can access. It persists in training infrastructure long after you have moved on from the conversation.
| Platform | Default training (consumer) | Opt-out available | Enterprise exempt |
|---|---|---|---|
| ChatGPT | Yes | Yes | Yes |
| Claude | Yes (since Oct 2025) | Yes | Yes |
| Gemini | Yes | Yes | Yes |
| DeepSeek | No (open-source) | N/A | N/A |
| Grok | Yes | Yes | Yes |
The pattern is consistent. If you are paying $0 to $200 per month as an individual, your conversations are training data by default. The only exceptions are open-source models and enterprise agreements that typically cost significantly more and require organizational procurement.
This creates a two-tier privacy system. Companies and their employees get genuine data protection through enterprise plans. Individual users, including the people having the most personal conversations with AI, get opt-out toggles buried in settings menus.
To be fair to these platforms: training on user data is how they improve their models, and improved models benefit everyone. The question is not whether training is inherently wrong. It is whether the default should force users to act to protect their own privacy, rather than asking them to act if they want to contribute. Most industries handling sensitive data have moved toward opt-in consent; AI remains the notable exception.
Memory and data retention policies
How long does your data persist? The answer varies dramatically across platforms, and in some cases, the answer changes depending on settings you may not know about.
The most striking example is Claude. If you have training disabled, your data is retained for 30 days. If training is enabled (the default), your data can be retained for up to 5 years. That is a 60x difference in retention, controlled by a single toggle that most users have never touched. The practical impact: millions of Claude users are on a 5-year retention track without realizing it.
Gemini's 18-month default retention is more moderate, but the interaction with Google's broader data ecosystem raises its own questions. Your Gemini data sits alongside your Gmail, your Google Search history, your Maps timeline, and your YouTube watch history. A single breach or legal demand could expose an extraordinarily complete picture of your digital life.
ChatGPT retains data per OpenAI's retention policy, with a 30-day minimum even when chat history is disabled. There is no documented path to zero retention for any ChatGPT user.
It is also important to understand what "deletion" means in practice. When you delete a conversation or a memory on most platforms, it is removed from the user-facing interface. But it may persist in backups, training datasets, safety logs, or legal compliance archives. True deletion, where no copy of your data exists anywhere in the company's infrastructure, is rarely guaranteed and difficult to verify.
| Platform | Standard retention | Minimum retention | Maximum retention |
|---|---|---|---|
| ChatGPT | Indefinite (active chats) | 30 days (after deletion) | Indefinite (unless deleted) |
| Claude | 30 days (training off) | 30 days | 5 years (training on); 7 years (flagged content) |
| Gemini | 18 months | 3 months (user or admin configurable) | 36 months (configurable); 3 years (human-reviewed) |
| DeepSeek | Zero | Zero | Zero |
| Grok | 30 days (after deletion) | 30 days | Indefinite (active chats); subject to EU DSA orders |
The takeaway: if you care about how long your data persists, check your training settings on every platform you use. The defaults may not align with your expectations.
Export and portability
Data portability is the test of ownership. If you cannot take your data with you when you leave a platform, you do not truly own it. By this standard, AI memory portability is the biggest unaddressed problem in the industry. As detailed in our platform findings above, no major platform makes memory fully portable: ChatGPT excludes structured memories from its export, Claude lets you view and copy a structured memory profile but has no one-click file download, and Gemini's Takeout format is Google-specific.
The portability gap creates lock-in. The more an AI knows about you, the harder it is to switch, because you lose all that accumulated context. This is not accidental. It is the same dynamic that keeps people on platforms across every industry: your data becomes the moat. The fact that no major platform prioritizes true portability tells you something about their incentives.
The open-source alternative
Open-source large language models offer a structurally different privacy model, and it is one that deserves more attention than it currently receives.
With open-source LLMs like DeepSeek, Kimi, MiniMax, and others, the privacy equation changes fundamentally. These models can run locally on your own hardware or through platforms that commit to zero data retention. Your prompts are processed and a response is generated, but nothing is stored, logged, or sent to a corporate server. The conversation exists only while it is happening.
This is not privacy by policy. It is privacy by architecture. There is no toggle to find, no setting to configure, no policy to read. The data simply does not persist because the system is not designed to hold it.
The key advantages of the open-source approach:
- Zero data retention by design. No server stores your conversations. No dataset includes your prompts. No training pipeline processes your personal context.
- No corporate servers processing your data. When run locally, your data never leaves your device. When run through a zero-retention platform, the data is processed and discarded.
- Fully auditable code. Anyone can inspect the model's code to verify that it does what it claims. This level of transparency is impossible with closed-source models where you must trust the company's word.
- Can run entirely on local hardware. For maximum privacy, you can run the model on your own computer. No internet connection required for inference. No third party involved at any point.
The tradeoff is real: open-source models generally do not offer persistent memory natively. They process each conversation independently, without the cross-session context that makes ChatGPT and Claude feel like they know you. For many users, this is a meaningful sacrifice.
But this tradeoff is not inevitable. A platform can add a memory layer on top of open-source models, giving you persistent context and continuity while keeping the zero-retention privacy of the underlying model. This is the approach Anuma takes: your memory lives on your device, encrypted and under your control, while the conversation processing happens through zero-retention models. You get both privacy and continuity without choosing between them.
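To make the architectural difference concrete, here is a minimal sketch of the on-device pattern: a memory file that lives locally and is encrypted with a key only the user holds. This is an illustration of the approach, not Anuma's actual implementation; the file paths and the choice of the `cryptography` library are assumptions.

```python
# pip install cryptography -- a minimal sketch of locally encrypted memory,
# not any platform's actual implementation.
import json
from pathlib import Path
from cryptography.fernet import Fernet

KEY_PATH = Path("memory.key")      # illustrative paths
MEMORY_PATH = Path("memory.enc")

def load_key() -> bytes:
    """Create or load an encryption key that never leaves this device."""
    if not KEY_PATH.exists():
        KEY_PATH.write_bytes(Fernet.generate_key())
    return KEY_PATH.read_bytes()

def save_memory(memories: dict) -> None:
    """Encrypt the memory profile and write it to local disk as ciphertext."""
    token = Fernet(load_key()).encrypt(json.dumps(memories).encode())
    MEMORY_PATH.write_bytes(token)

def load_memory() -> dict:
    """Decrypt and return the locally stored memory profile."""
    if not MEMORY_PATH.exists():
        return {}
    token = MEMORY_PATH.read_bytes()
    return json.loads(Fernet(load_key()).decrypt(token))

# The profile sits on disk as ciphertext; only the local key can read it.
save_memory({"name": "Alex", "goal": "switch careers into UX design"})
print(load_memory())
```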
The open-source alternative is not just for technical users. As platforms make these models accessible through consumer-friendly interfaces, the choice between cloud-based memory on someone else's server and local memory on your own device becomes a choice anyone can make.
Key findings and scorecard
After evaluating each platform across all seven dimensions, here are the key findings from our analysis:
- Every major closed-source AI platform trains on consumer data by default. The opt-out exists, but the default favors the platform. Millions of users are contributing personal conversations to training datasets without actively choosing to do so.
- Memory portability is effectively nonexistent. No major platform offers a complete, usable export of your AI memory. Conversation exports exist in some cases, but the structured memory (the most valuable and sensitive data) is locked in.
- Retention periods vary wildly and are often tied to training settings. Claude's 30-day vs 5-year retention gap depending on a single toggle is the most dramatic example, but every platform has nuances that most users are unaware of.
- Cloud storage is universal among closed-source platforms. Your memory lives on their servers, encrypted with their keys. This means your privacy ultimately depends on their decisions, their security, and their compliance with legal demands.
- Open-source models offer structurally stronger privacy. Zero retention by design, auditable code, and the ability to run locally. The tradeoff of no native memory is solvable at the platform level.
The following scorecard summarizes how each platform performs on the dimensions that matter most for user privacy. We have included Anuma for comparison, though we acknowledge our bias and encourage you to verify our claims independently.
| Privacy dimension | ChatGPT | Claude | Gemini | DeepSeek | Anuma |
|---|---|---|---|---|---|
| Default no-training | ✗ | ✗ | ✗ | ✓ | ✓ |
| Memory on device | ✗ | ✗ | ✗ | N/A | ✓ |
| Memory exportable | Limited | Manual + chat export | Via Takeout | N/A | ✓ |
| Incognito mode | ✗ | ✓ | ✗ | N/A | ✗ |
| Zero retention option | ✗ | 30-day min | ✗ | ✓ | ✓ |
| Memory across all models | ✗ | ✗ | ✗ | ✗ | ✓ |
A few notes on this scorecard. We give Claude credit for its incognito mode, which is a genuine privacy feature that ChatGPT and Gemini do not match. Claude also deserves credit for offering both chat history export and memory export (manual but functional), plus the ability to import from other platforms. We note Gemini's export via Takeout as partial credit, since it exists but is not as portable as a truly open format. And we are honest that DeepSeek's "N/A" entries for memory-related features reflect the fact that the model itself does not store memory, rather than a failing of the model.
The scorecard is not meant to suggest that any platform is doing everything wrong. Each has made deliberate choices, and some of those choices reflect genuine engineering tradeoffs. The purpose is to make those choices visible so you can decide which tradeoffs you are comfortable with.
Privacy-enhancing technologies
Beyond platform policies, a set of technical approaches is emerging that can protect user data at the infrastructure level. These privacy-enhancing technologies (PETs) are worth understanding because they represent where the industry is heading.
Differential privacy adds mathematical noise to data sets so that patterns can be analyzed without identifying any individual. Think of it as blurring individual faces in a crowd photograph while still being able to count how many people are in the room. Google and Apple use this in some of their products. It allows companies to learn from user behavior without knowing what any specific user did.
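For readers who want to see the mechanism, here is a minimal Python sketch of the core idea, using illustrative data and an illustrative epsilon value: calibrated Laplace noise is added to an aggregate count so the released number stays useful while any single person's contribution is masked.

```python
import numpy as np

def dp_count(values, threshold, epsilon=0.5, rng=None):
    """Release a differentially private count of values above a threshold.

    Laplace noise with scale 1/epsilon is added because a single user can
    change the true count by at most 1 (sensitivity = 1).
    """
    rng = rng or np.random.default_rng()
    true_count = sum(v > threshold for v in values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative data: how many users asked about a sensitive topic this week.
user_flags = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # 1 = asked, 0 = did not
noisy = dp_count(user_flags, threshold=0, epsilon=0.5)
print(f"Noisy count released to analysts: {noisy:.1f}")  # true count is 6
```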
Federated learning trains AI models across many devices without ever collecting the raw data in one place. Instead of sending your conversations to a server for training, the model learns locally on your device and only sends the abstract improvements (not your data) back to the central system. This approach keeps personal data on your device while still allowing the model to improve.
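A toy sketch of the pattern, with simulated devices and illustrative data: each client fits a small model on data that stays local, and only the resulting weights are sent back and averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train locally on one device's private data; return updated weights only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Three simulated devices, each holding private data that never leaves it.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    # Each client improves the model on-device; only weights are uploaded.
    client_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(client_weights, axis=0)  # server averages the updates

print("Learned weights:", np.round(global_w, 2), "- raw data never left the devices")
```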
Homomorphic encryption allows computations to be performed on encrypted data without decrypting it first. The AI can process your request and return a result without ever seeing the unencrypted content. This technology is still maturing but represents the strongest possible privacy guarantee: useful output with zero data exposure.
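Fully homomorphic libraries are still heavyweight, but the flavor of the idea can be shown with the Paillier scheme, which is partially homomorphic (it supports addition of ciphertexts and multiplication by a plaintext). The sketch below uses the third-party `phe` library and illustrative numbers; it demonstrates computing on ciphertexts, not a production-grade system.

```python
# pip install phe  (python-paillier, a partially homomorphic scheme)
from phe import paillier

# The user generates keys and keeps the private key; only the public key
# and ciphertexts ever reach the server.
public_key, private_key = paillier.generate_paillier_keypair()

monthly_spending = [1200, 950, 1430]  # sensitive values, illustrative
encrypted = [public_key.encrypt(x) for x in monthly_spending]

# The "server" computes a sum and a scaled projection without decrypting.
encrypted_total = sum(encrypted[1:], encrypted[0])
encrypted_projection = encrypted_total * 12  # multiply by a plaintext scalar

# Only the key holder can read the results.
print("Total:", private_key.decrypt(encrypted_total))                  # 3580
print("Yearly projection:", private_key.decrypt(encrypted_projection))  # 42960
```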
On-device processing is the simplest and most immediately available PET. Open-source LLMs that run entirely on your hardware never send data anywhere. No server. No network request. No retention. This is what open-source LLMs on Anuma provide today.
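As an illustration of what on-device processing looks like in practice, the sketch below assumes a locally running Ollama server with an open-weight model already pulled (the model name is an assumption). The request goes only to localhost, so the prompt and the response never leave the machine.

```python
import requests

# Assumes Ollama (https://ollama.com) is running locally with an open-weight
# model already pulled, e.g. `ollama pull deepseek-r1` -- names are illustrative.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

response = requests.post(
    LOCAL_ENDPOINT,
    json={
        "model": "deepseek-r1",
        "prompt": "Summarize my notes on negotiating a raise.",
        "stream": False,
    },
    timeout=120,
)
response.raise_for_status()

# The prompt and answer never leave this machine: no cloud server,
# no retention, no training pipeline.
print(response.json()["response"])
```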
The gap between what is technically possible and what major platforms actually implement is significant. Most of these technologies exist. Most platforms have not adopted them for consumer products. The question is when, not if, these approaches become standard.
The regulatory landscape
AI privacy regulation in 2026 is fragmented and racing to keep up with the technology. Here is where things stand.
The EU AI Act is the most comprehensive framework, classifying AI systems by risk level and imposing transparency requirements. High-risk AI systems must meet strict data governance standards. The Act also addresses foundation models (the large language models behind ChatGPT, Claude, and Gemini), requiring transparency about training data and practices.
The US has no single federal AI privacy law. Instead, a patchwork of state laws (California's CCPA/CPRA being the strongest) and sector-specific regulations govern AI data handling. Executive orders have established guidelines, but enforcement varies widely.
Other regions are developing their own frameworks. China has implemented AI-specific regulations focused on algorithmic transparency and deepfakes. Brazil, India, and other markets are drafting or updating data protection laws to address AI.
Key challenges in the regulatory landscape:
- Cross-border data flows: AI services are global, but privacy laws are national. A conversation with ChatGPT crosses jurisdictions instantly.
- Pace of change: AI capabilities are evolving faster than regulators can draft rules. By the time a law is passed, the technology it addresses may have changed fundamentally.
- Enforcement gaps: Even where regulations exist, enforcement is inconsistent. Opt-out defaults, buried privacy settings, and complex terms of service persist despite regulatory intent.
- Algorithmic transparency: Closed-source LLMs are inherently opaque. Regulators are pushing for explainability, but the technology does not easily support it.
For individual users, the regulatory reality is simple: you cannot rely on laws to protect your AI privacy today. Regulations are improving, but they lag behind the technology. Until enforcement catches up, the most effective privacy protection is choosing tools that are private by architecture, not just by policy.
The broader provider landscape
Our platform-by-platform analysis focused on the five most widely used AI assistants. But the AI privacy landscape extends well beyond ChatGPT, Claude, Gemini, DeepSeek, and Grok. Below is a broader overview of how major AI providers handle privacy, based on their publicly stated policies as of April 2026.
Important distinctions that appear across providers:
- Consumer chat vs enterprise/API: Most providers describe different default rules for whether user content may be used to improve models depending on the plan tier.
- Training vs safety monitoring: Some policies distinguish using data to "train/improve" models from using data for security, abuse prevention, and quality assurance (including human review).
- Retention and deletion: Retention windows and deletion workflows vary, and may be impacted by legal holds, security logs, and backup systems.
- Third-party sharing: Many providers describe sharing with subprocessors (cloud hosting, analytics, customer support) and require contractual controls.
Implementation guidance for privacy programs
For organizations evaluating AI tools, four practices help translate privacy policies into actual protection:
- Map policy claims to technical controls. Confirm where data is processed, what is logged, what is retained, and which subprocessors are involved. A privacy policy is a statement of intent. Technical controls are the implementation.
- Prefer enterprise/API tiers for sensitive data. Most providers offer stronger contractual controls (DPAs, audit terms, retention controls) in paid business tiers. Consumer plans rarely offer the same guarantees.
- Establish a model intake checklist. Before adopting any AI tool, verify: default training settings, retention windows, opt-out mechanisms, human review practices, and cross-border transfer mechanisms. One way to structure this is sketched after this list.
- Handle open-source/open-weight models separately. When deploying open models, the privacy policy governing user data is primarily the deployer's responsibility, not the model publisher's. Your infrastructure, your rules.
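One way to make the intake checklist concrete is to record the same fields for every vendor under evaluation. The structure below is an illustrative sketch, not a compliance standard; the field names and the example vendor are assumptions.

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelIntakeRecord:
    """Illustrative intake checklist fields; adapt to your own program."""
    vendor: str
    plan_tier: str                  # consumer vs enterprise/API
    trains_on_data_by_default: bool
    opt_out_available: bool
    retention_window: str           # e.g. "30 days", "18 months", "indefinite"
    human_review_possible: bool
    cross_border_transfers: str     # e.g. "EU SCCs", "US only", "unknown"
    subprocessors_reviewed: bool

record = ModelIntakeRecord(
    vendor="ExampleAI",             # hypothetical entry
    plan_tier="consumer",
    trains_on_data_by_default=True,
    opt_out_available=True,
    retention_window="30 days after deletion",
    human_review_possible=True,
    cross_border_transfers="unknown",
    subprocessors_reviewed=False,
)
print(asdict(record))
```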
What users should do
Based on our findings, here are practical steps every AI user should take to protect their privacy. These apply regardless of which platform you use.
- Check your training settings on every platform you use. Log in to ChatGPT, Claude, Gemini, and any other AI tool you use regularly. Find the data and privacy settings. See whether your conversations are being used for training. The setting is typically in Settings > Data Controls or Settings > Privacy. Do this today.
- Opt out of training if you do not want your conversations used. If the platform allows it, disable training on your conversations. Be aware of the tradeoffs: on Claude, opting out reduces your retention period from 5 years to 30 days, which is actually a privacy benefit. On other platforms, the implications of opting out may differ.
- Use incognito or private modes for sensitive conversations. If you are discussing health, finances, legal matters, relationships, or anything you would not want stored on a corporate server, use the most private mode available. On Claude, that means incognito mode. On other platforms, investigate what options exist.
- Audit what your AI has memorized about you. Go through your AI's memory store periodically. On ChatGPT, you can view and delete individual memories. On Claude, you can review your memory summaries. You may be surprised by what has been retained. Delete anything you are not comfortable with the platform holding.
- Consider open-source LLMs for privacy-sensitive work. If you are working with confidential information, processing sensitive personal data, or simply want the strongest possible privacy guarantee, open-source models running locally or through zero-retention platforms are the gold standard.
- Look for platforms that offer encrypted, on-device memory. The architectural choice of where memory lives matters more than any policy. Memory on your device, encrypted with your keys, is fundamentally more private than memory on a corporate server, encrypted with their keys. This is not a minor distinction. It is the difference between privacy as a guarantee and privacy as a promise.
- Export your data regularly where possible. If the platform offers data export, use it. Keep local copies. Do not assume your data will always be accessible. Platforms change policies, shut down features, or experience outages. Your data is safest when you have your own copy. A short sketch for checking what an export archive actually contains follows this list.
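Exports typically arrive as a ZIP of JSON files, but the exact layout varies by platform and changes over time. The sketch below deliberately makes no schema assumptions; it just lists what is inside an archive so you can confirm your local copy is complete. The filename is an assumption.

```python
import json
import zipfile
from pathlib import Path

EXPORT_PATH = Path("ai-data-export.zip")  # illustrative filename

with zipfile.ZipFile(EXPORT_PATH) as archive:
    for name in archive.namelist():
        if not name.endswith(".json"):
            continue
        with archive.open(name) as f:
            data = json.load(f)
        # Print only the shape of each file, not its contents, since the
        # schema differs across platforms and export versions.
        if isinstance(data, list):
            print(f"{name}: list of {len(data)} records")
        elif isinstance(data, dict):
            print(f"{name}: keys -> {sorted(data.keys())[:10]}")
```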
These steps are not exhaustive, and they should not be necessary. In an ideal world, the defaults would protect you. But we are not in that world yet. Until the industry moves toward privacy by default, informed users need to take these steps themselves.
About this report
This report was produced by the Anuma team. We want to be transparent about our position: Anuma is a privacy-first AI platform. We are not a neutral third party. We have a product, a perspective, and a business interest in people caring about AI privacy.
That said, we have tried to be factual and fair in this analysis. Every claim in this report is based on publicly available documentation as of April 2026. We have given credit to competitors where it is due. Claude's incognito mode is a genuine privacy feature. Gemini's import tools and Google Takeout export are more comprehensive than most competitors. ChatGPT's conversation export is functional and useful. These are real strengths.
We have also tried to be honest about what we do not know. We cannot verify internal data handling practices. We cannot confirm what happens to data inside corporate infrastructure beyond what is publicly documented. Where we are uncertain, we say so.
This report reflects publicly available policies as of April 2026. AI platforms update their terms frequently, and some details may have changed since publication. We recommend verifying current policies directly with each provider before making decisions based on this report.
We welcome corrections. If any AI platform believes we have misrepresented their practices, we encourage them to contact us at hi@anuma.ai with specific corrections and we will update the report accordingly.
For further reading on the topics covered in this report, see our guides on what is AI memory, how Anuma's memory works, and open-source AI models.