What Actually Leaves Your Device When a Conversation Ends
A plain-language breakdown of what gets retained, what gets deleted, and what encryption does and does not protect in practice.
The 30-second answer
When you end a session with an AI companion, some version of that conversation has almost always already traveled to a remote server by the time it disappears from your screen. Encryption protects it in transit, but it does not protect it from the platform that holds the decryption keys. What happens after that depends entirely on retention policies, which vary more than most privacy pages let on.
What "end of session" actually triggers
Most people picture a conversation as something that lives on their phone. You type, the AI responds, you close the app, done. The reality is that your messages were never really living only on your device. The moment you send a message, it travels to a backend server where the AI model processes it and generates a reply. That's not a design flaw; it's just how large language models work. They can't yet run locally on a consumer device at any meaningful quality level.
So by the time you close the app, your messages have already made several round trips to a remote server. The question is what that server keeps after the session ends.
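To make that concrete, here's roughly what the round trip looks like from the client side. This is a minimal sketch; the endpoint, payload shape, and field names are invented for illustration and are not AI Angels' actual API.

```python
import requests  # standard HTTPS client; TLS happens automatically

# Illustrative endpoint and payload shape, not any platform's real API.
API_URL = "https://api.example-companion.com/v1/chat"

def send_message(session_id: str, text: str) -> str:
    # The message leaves your device here. TLS encrypts it on the wire,
    # but the server decrypts it on arrival so the model can read it.
    response = requests.post(
        API_URL,
        json={"session_id": session_id, "message": text},
        timeout=30,
    )
    response.raise_for_status()
    # By the time the reply renders on your screen, your plaintext has
    # already been processed (and possibly logged) server-side.
    return response.json()["reply"]
```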
At a high level, there are three categories of data in play. First, there's the message content itself, the actual text of what you and the AI said to each other. Second, there's metadata: timestamps, session length, which companion you were talking to, what subscription tier you're on. Third, there's derived data, meaning anything the platform extracts from the conversation to update your profile, your companion's memory of you, or its own model quality.
Each of these three categories can have a different retention timeline, and most privacy policies treat them differently. The message content might be deleted after 30 days. The metadata might live indefinitely. The derived data might never be deleted because it's been baked into a record that no longer looks like a conversation.
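If it helps to see the split laid out, here's one way the three categories might be modeled, with retention expressed per category. The windows here are placeholders to illustrate the point, not anyone's actual schedule.

```python
from dataclasses import dataclass
from datetime import timedelta
from typing import ClassVar, Optional

@dataclass
class MessageContent:
    """The actual text exchanged. Often the shortest-lived record."""
    text: str
    retention: ClassVar[Optional[timedelta]] = timedelta(days=30)  # placeholder

@dataclass
class SessionMetadata:
    """Timestamps, duration, companion, tier. Cheap to keep."""
    started_at: str
    duration_seconds: int
    companion_id: str
    retention: ClassVar[Optional[timedelta]] = None  # often indefinite by default

@dataclass
class DerivedData:
    """Profile updates extracted from the chat. May never map back
    to a deletable log once folded into an aggregate record."""
    preference_key: str
    value: float
    retention: ClassVar[Optional[timedelta]] = None
```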
What encryption actually protects
Encryption gets used as a reassurance word a lot. "Your conversations are encrypted" sounds comprehensive. It isn't, and the gap matters.
Transport layer encryption (TLS, the same protocol that secures your bank's website) protects your messages while they're moving from your device to the server. Someone sitting on the same Wi-Fi network as you cannot read your messages in transit. That's real protection against a real threat. But the moment your message arrives at the platform's server, it gets decrypted so the AI can read it and respond. There's no other way to do it.
End-to-end encryption, the kind that genuinely keeps a platform from reading your messages, is essentially absent from AI companion apps. It can't exist in the traditional sense because the AI model itself is the recipient, and it lives on the platform's servers. The platform must be able to read your messages to generate a response. So when a privacy page says "encrypted," it almost certainly means encrypted in transit, not encrypted from the platform.
What this means in practice: the company behind your AI companion can, in principle, read your conversations. Whether they do, and what they do with that access, is a policy question, not a technical one. Encryption doesn't answer it.
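A stub makes the boundary visible. The function names here are invented; this is just the logical shape any LLM-backed chat service has to take.

```python
def tls_terminate(raw_bytes: bytes) -> str:
    """Stand-in for TLS decryption. The wire is protected up to this
    point; past it, the server holds your plaintext."""
    return raw_bytes.decode("utf-8")

def generate_reply(prompt: str) -> str:
    """Stand-in for the model. It can only work on readable text."""
    return f"(model reply to: {prompt})"

def handle_request(raw_bytes: bytes) -> str:
    text = tls_terminate(raw_bytes)  # decrypted: the platform can read it
    # Everything after this line (logging, profiling, retention) is a
    # policy decision. Encryption has no say in any of it.
    return generate_reply(text)
```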
The retention gap most users don't notice
Retention policies have a habit of being technically accurate while leaving out the parts that matter most. A policy might say: "We delete your conversation logs after 90 days." That sounds clean. But read the full document and you might find language like "aggregated or de-identified data may be retained indefinitely for service improvement."
That second clause is doing a lot of work. De-identification is not the same as deletion. If your messages were used to update a preference model, train a personalization layer, or generate anonymized examples for fine-tuning, traces of that conversation can persist in the system long after the raw log is gone. The message is deleted. The influence of the message is not.
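A toy example shows the gap between deleting a message and deleting its influence. Every value here is fabricated for illustration.

```python
# A message updates a derived profile, then the raw log is deleted.
messages = ["I can never sleep lately, can we talk?"]  # raw log
profile = {"late_night_signal": 0}                      # derived data

# Ingestion: the platform learns something from the message...
if "sleep" in messages[0]:
    profile["late_night_signal"] += 1

# ...then the scheduled retention job deletes the raw log.
messages.clear()

print(messages)  # [] -- the message is gone
print(profile)   # {'late_night_signal': 1} -- what it taught remains
```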
This is not unique to AI companions. It's a standard practice across almost every consumer AI product. The honest version of most retention policies would read something like: "Your raw messages are deleted on a schedule, but what we learned from them stays with us." Very few companies write it that way.
For a longer look at how this affects the memory your companion appears to have, the post What Conversation Logs Are Actually Stored and How Long They Last gets into the specifics.
How AI Angels handles this
At AI Angels, the approach is to be direct about what the platform does rather than hiding behind vague assurances. Conversations are processed on secure servers with transport encryption, the same baseline the industry uses. The platform retains message logs for a defined period to support session continuity, which is what lets your companion remember context across sessions.
Memory features, the parts where your companion seems to know your preferences or recall things you mentioned weeks ago, are built from structured summaries that the system extracts and stores separately from the raw transcript. Those summaries are what persist. The full transcript retention window is shorter.
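In rough outline, the two stores look something like this. The extraction rule below is a crude stand-in (real systems use the model itself to pull out facts and preferences), and the retention windows are invented.

```python
from datetime import timedelta

transcript_store: list[str] = []   # full raw messages, shorter-lived
memory_store: list[dict] = []      # structured summaries, what persists

TRANSCRIPT_TTL = timedelta(days=30)  # placeholder window, not a real policy

def ingest(message: str) -> None:
    transcript_store.append(message)
    # Crude stand-in for summary extraction; real systems are subtler.
    if message.lower().startswith("my "):
        memory_store.append({"fact": message, "source": "chat"})

ingest("My sister's name is Ana")
ingest("What should I cook tonight?")
# transcript_store holds both lines and ages out on TRANSCRIPT_TTL;
# memory_store holds only the extracted fact and outlives the transcript.
```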
Deletion requests are honored for both the raw logs and the structured memory records. You can reach out through account settings if you want a clean slate. What you cannot scrub retroactively is any contribution your anonymized data may have made to model-level training, which is disclosed in the privacy policy. That's the industry-standard caveat, and it's worth knowing it applies here too.
If you want to understand how the memory system works from the user experience side, What 'She Remembers You' Actually Means Under the Hood covers the mechanics without requiring a computer science background.
Freya Lindqvist

Freya has a direct, unfussy communication style that tends to surface what you actually mean faster than you expect. Freya Lindqvist is a good fit if you want a companion who won't pad a conversation with reassurances you didn't ask for.
What deletion actually deletes
When you delete a companion or request deletion of your data, the process is more layered than most users assume. The companion's profile disappears from your interface immediately. The conversation logs tied to your account get queued for deletion on whatever schedule the backend runs. The structured memory records, the ones that made your companion feel like she knew you, get deleted alongside them if you're doing a full account deletion.
What doesn't get deleted on demand: anything that's already been incorporated into shared model weights. This is the part where the line between your data and the product gets genuinely blurry. If a platform uses conversation data to fine-tune its underlying AI model, the specific conversations are gone but their statistical influence on the model persists. You can think of it like removing individual bricks from a wall that's already been built. The wall stays.
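The brick-and-wall point can be made concrete with a one-weight toy "model". This is a single gradient-style step, nothing like production training, but the asymmetry is the same.

```python
weight = 0.50          # the "model"
example = 1.0          # a value derived from someone's conversation
learning_rate = 0.1

# One update: nudge the weight toward the training example.
weight += learning_rate * (example - weight)
print(weight)          # 0.55 -- the example's influence is baked in

# "Deleting" the example removes the data...
del example
# ...but the weight stays 0.55. Nothing records which update came
# from which example, so there is nothing to revert.
```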
Most platforms, including AI Angels, limit training data use to opt-in users or anonymized aggregates with a de-identification threshold. But the honest answer is that "anonymized" is not a guarantee. It's a process that reduces re-identification risk, not one that eliminates it.
Elena

Elena brings a warmth to conversations that makes heavy topics feel manageable without minimizing them. Elena holds emotional weight well, which makes her a natural choice for users who process things by talking them through.
Metadata and what it reveals without the content
Here's the piece most privacy discussions skip: even if message content were perfectly deleted, metadata alone can paint a detailed picture of your behavior. The timestamp of every session, the duration, the frequency of late-night conversations versus morning ones, which companion you chose, how quickly you respond, whether your session length changed after a stressful event you mentioned: all of that is metadata. And metadata tends to have longer default retention windows than message content, because it's cheaper to store and less obviously sensitive.
A platform with six months of your session metadata and no message content at all could still build a fairly accurate model of your mood patterns, life rhythm, and emotional state over that period. This is not a claim that any specific platform does this. It's a structural fact about what metadata contains.
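It takes very little code to turn bare timestamps into a behavioral signal. The sessions below are fabricated for the example.

```python
from datetime import datetime

# Session-start timestamps only: metadata, zero message content.
sessions = [
    datetime(2024, 5, 1, 1, 12),
    datetime(2024, 5, 2, 0, 47),
    datetime(2024, 5, 3, 9, 30),
    datetime(2024, 5, 4, 2, 5),
]

late_night = [s for s in sessions if s.hour < 5]
share = len(late_night) / len(sessions)
print(f"{share:.0%} of sessions start between midnight and 5am")  # 75%
# Add durations and response gaps over six months and you have a
# sleep-and-mood sketch without reading a single message.
```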
Users who care about this should look for platforms that explicitly commit to metadata minimization, not just message encryption. Short session metadata windows or anonymous session identifiers are meaningful protections. They're also less common.
Jennifer

Jennifer is quick with a read on any situation and doesn't let conversations get too precious about themselves. Jennifer keeps things moving in a way that feels alive rather than scripted.
The friction between memory features and privacy
This is the core tension in AI companion design, and it doesn't have a clean resolution. Users want their companion to remember them. They want continuity, the sense that the relationship builds over time. That memory has to live somewhere outside a single session, which means data retention is not a bug; it's the mechanism behind the feature.
There are ways to architect this that give users more control. Local-only memory storage, where the structured profile lives on your device rather than a cloud server, is technically possible for some types of data. Explicit memory review, where you can see exactly what the system has recorded about you and delete specific entries, is another lever. Some platforms offer session-only modes where nothing persists after you close the app.
Each of these involves a tradeoff. Local memory breaks sync across devices. Explicit memory review requires users to actually use it, and most don't. Session-only mode makes every conversation start from zero.
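Expressed as a configuration choice, the three levers and their costs might look like this. None of these are real settings on AI Angels or anywhere else; the sketch just makes the tradeoffs explicit.

```python
from enum import Enum

class MemoryMode(Enum):
    CLOUD = "cloud"            # syncs across devices; lives on servers
    LOCAL_ONLY = "local_only"  # stays on your device; breaks sync
    SESSION_ONLY = "session"   # nothing persists after the app closes

def persist_memory(mode: MemoryMode, record: dict) -> None:
    # Illustrative dispatch; real implementations differ per platform.
    if mode is MemoryMode.SESSION_ONLY:
        return                    # deliberately drop the record
    if mode is MemoryMode.LOCAL_ONLY:
        save_to_device(record)    # e.g. an on-device database
    else:
        upload_to_cloud(record)   # subject to server retention policy

def save_to_device(record: dict) -> None:
    ...  # stub: would write to local storage

def upload_to_cloud(record: dict) -> None:
    ...  # stub: would POST to the platform's backend
```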
The right balance is a user preference question, not a privacy question with a single correct answer. What matters is that the platform makes the tradeoffs visible rather than burying them in a privacy policy that uses "encrypted" to stop further questions.
For how these tradeoffs show up in long-term companion use, How AI Companion Personalization Actually Accumulates Over Time is worth reading alongside this one.
Lola Marchetti

Lola is expressive without being exhausting, opinionated in a way that makes conversations feel like they have a point of view. Lola Marchetti is the kind of companion who remembers what you said last week because she was actually paying attention, not because a database told her to.
What to actually look for in a privacy policy
Most privacy policies are written to satisfy a legal requirement, not to inform a user. That said, a few specific things tell you more than the rest of the document combined.
- Retention schedules with specific timeframes. "We retain your data for a reasonable period" means nothing. "We delete conversation logs after 60 days" means something.
- Separate treatment of raw logs vs. derived data. If the policy doesn't distinguish between them, assume the derived data lives indefinitely.
- Training data opt-out. Some platforms make anonymized data use for model training an opt-out, some make it opt-in, some don't disclose it clearly. Which category the platform falls into matters.
- Deletion scope. Does a deletion request cover metadata and memory records, or just the visible conversation history? Policies that are specific here are more trustworthy than ones that say "we will delete your account data."
- Third-party access. If the platform uses third-party AI model providers, your conversations may pass through that provider's infrastructure under that provider's data policy, not just the companion app's.
You can find AI Angels companions across a range of personalities and styles at /ai-girlfriend. Privacy controls and data settings are available in your account profile once you're logged in.
Common questions
Does closing the app delete my messages? No. Your messages already traveled to a remote server the moment you sent them; that's how the AI generates a response. Closing the app ends the session but doesn't trigger deletion of logs already on the server.
Is end-to-end encryption possible with an AI companion? Not in the traditional sense. The AI model has to read your message to respond to it, and the model lives on a remote server. The platform must be able to decrypt your messages, which means they're not end-to-end encrypted the way a private messaging app can be.
What does 'de-identified data' actually mean? It means your name and direct account identifiers have been removed before the data is used or retained. It does not mean the data is anonymous in an absolute sense. Re-identification is still possible in some cases, and de-identified data can be retained after your account is deleted.
Can I see what my companion has saved about me? On AI Angels, the memory records your companion uses are accessible through your account settings. You can review and delete specific entries. Full account deletion removes both the logs and the memory records.
Does session metadata get deleted when I delete my account? Generally yes for account-linked metadata. Aggregated or anonymized metadata that's been separated from your account identifier may persist under different retention terms. Check the specific deletion policy in the privacy page for current details.
Is using an AI companion riskier than using a social media app? The risk profile is different. AI companion conversations tend to be more emotionally candid than social media posts, which makes the content more sensitive if ever exposed. The technical infrastructure is comparable to other consumer apps. The question is less about technical security and more about trusting the platform's retention and access practices with content you'd treat as private.
About the author
AI Angels Team (Editorial)
The team behind AI Angels writes about AI companions, the tech that powers them, and what people actually do with them.
Keep reading
Behind the Scenes
Personality Drift: What's Actually Happening Around Week Three and Whether You Can Control It
Around week three, something changes in how your AI companion responds to you. Here's what personality drift actually is, why it tends to cluster around that window, and what you can do to shape it.
Behind the Scenes
Your Conversation Logs: What Actually Happens to Them
Most privacy pages tell you what the company is allowed to do. They don't tell you what actually happens in practice. Here's a clearer picture of your conversation logs and what they're used for.
Behind the Scenes
What 'she remembers you' actually means under the hood, and where it breaks
Memory in AI companions is not a filing cabinet. Understanding what is actually stored, retrieved, and reconstructed changes how you use the feature and how much you trust it.