the privacy filter removes personal info from your screen data before the AI you’re chatting with sees it. names, emails, phone numbers, addresses, account numbers — all replaced with placeholders like [PERSON], [EMAIL], [PHONE] before the text ever leaves the filter. it runs inside a confidential-compute enclave — a virtual machine whose memory is encrypted by the CPU, so even the cloud provider can’t read what’s inside.

how it works

1. you flip the shield in chat

   in the chat composer, click the shield icon above the send button. the toggle is saved between sessions.

2. screenpipe runs on your machine

   when the AI asks for screen data (OCR text, audio transcripts, etc.), screenpipe-server on your computer sends that text to our privacy filter at pii.screenpipe.containers.tinfoil.dev.

3. the enclave strips personal info

   a token-classification model reads the text and replaces personal info with tagged placeholders. the enclave has no disk, no logs, and encrypted memory — nothing persists past the request.

4. the AI gets the redacted version

   the filtered text comes back with names/emails/etc replaced. that’s what reaches your AI chat — the AI never sees your personal info.
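the round trip above can be sketched in a few lines. the endpoint is the one named in step 2; the request body, response shape, and `post` transport are illustrative assumptions, not the real API:

```python
# sketch of the redaction round trip. FILTER_URL is from step 2;
# the request/response field names and `post` transport are assumptions.
FILTER_URL = "https://pii.screenpipe.containers.tinfoil.dev"

def redact(text: str, post) -> str:
    # raw text goes only to the enclave; screenpipe cloud is not in the path
    response = post(FILTER_URL, {"text": text})
    return response["text"]  # comes back with [PERSON]/[EMAIL]/... substituted

# stand-in transport so the sketch runs without a real enclave
def fake_post(url, body):
    return {"text": body["text"].replace("louis@screenpi.pe", "[EMAIL]")}

print(redact("email louis@screenpi.pe", fake_post))
# → email [EMAIL]
```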

what gets removed

  • names: people names in any context — “Louis”, “Mr. Beaumont”, “louis.b@…”
  • emails: email addresses in any format
  • phones: phone numbers, including international formats
  • addresses: physical addresses and postal codes
  • account numbers: SSNs, credit card numbers, bank accounts, IDs
  • secrets: API keys and tokens that look like secrets
  • urls: URLs that look personal
  • dates: dates that look personal (birthdates etc.)

example:

Before:  "email louis@screenpi.pe about the invoice, call me at 555-1234"
After:   "email [EMAIL] about the invoice, call me at [PHONE]"

why not just a regex

most “PII scrubbers” are pattern matchers: find anything with an @ and a domain, replace it. that catches obvious stuff and misses everything else — names with no context, addresses with no zip code, account numbers formatted oddly, dates. our filter uses openai/privacy-filter, a 1.5B-parameter token-classification model fine-tuned specifically for this. it reads the whole sentence and decides, token by token, what is and isn’t personal.
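to make the gap concrete, here is a minimal regex-only scrubber (deliberately not our filter). it catches the patterned stuff and lets a bare name sail through, because a name has no surface pattern for a regex to hook onto:

```python
import re

# a typical regex-only scrubber: obvious surface patterns, no context
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[- ]\d{4}\b"),
}

def regex_scrub(text: str) -> str:
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

out = regex_scrub("ask Louis to email louis@screenpi.pe, call 555-1234")
print(out)
# → ask Louis to email [EMAIL], call [PHONE]
# the email and phone are caught, but "Louis" is still there
```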

why it’s confidential

we run the filter inside a confidential-compute enclave hosted by Tinfoil. confidential compute uses CPU features (AMD SEV-SNP / Intel TDX) to encrypt the virtual machine’s memory at the hardware level. even the cloud provider running the physical hardware can’t read the memory. what that gives you:

attested code

the enclave publishes a signed hash of the container image it’s running. you (or your client) can fetch the hash and check it against the open-source code. if the hashes don’t match, the enclave is compromised and the client refuses to talk to it.
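the client-side check amounts to comparing the enclave’s published measurement against the hash of the audited build, and refusing to talk on mismatch. a hedged sketch, where the function names and the measurement source are placeholders rather than the real attestation client:

```python
import hashlib

# illustrative: stand-in for the hash of the audited open-source image
EXPECTED_HASH = hashlib.sha256(b"audited-container-image-bytes").hexdigest()

def verify_enclave(published_hash: str) -> bool:
    # both hashes are public, so plain equality is enough
    return published_hash == EXPECTED_HASH

def connect(published_hash: str) -> str:
    if not verify_enclave(published_hash):
        # mismatch means the enclave is not running the audited code: refuse
        raise ConnectionRefusedError("attestation failed, not sending data")
    return "ok"
```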

no disk

the enclave has no persistent storage. nothing written during a request survives past the request — by design, not policy.

no logs

the server code doesn’t log request bodies. anyone can audit it.

encrypted transport

HTTPS from your machine to the enclave. TLS termination happens inside the attested boundary, so the decrypted text only exists in enclave memory.

who sees what

  • you: you flip the toggle, you see the original data on your own screen.
  • screenpipe-server (local): sends raw text to the enclave, gets redacted text back. runs on your computer.
  • the enclave: decrypts the text, runs the model, returns the redacted version. destroys the memory immediately.
  • screenpipe cloud: not in the path. our backend can’t see the raw text — we built it this way deliberately.
  • the AI (claude, gpt, gemini, your local llama, whichever): only ever sees the redacted version.

where it applies

toggle the shield icon in the chat composer. the app adds filter_pii=1 to every search the AI runs on your screen data. the setting persists across sessions.
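the flag itself is just a query parameter on the search call. a sketch, assuming a hypothetical local search endpoint; only the `filter_pii=1` parameter name comes from this page:

```python
from urllib.parse import urlencode

# illustrative: base URL and endpoint path are placeholders,
# filter_pii is the parameter named in the docs
def build_search_url(base: str, query: str, shield_on: bool) -> str:
    params = {"q": query}
    if shield_on:
        params["filter_pii"] = 1  # every search the AI runs gets this flag
    return f"{base}/search?{urlencode(params)}"

print(build_search_url("http://localhost:3030", "invoice", True))
# → http://localhost:3030/search?q=invoice&filter_pii=1
```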

limits

  • latency. adds ~1-2 seconds per search on first hit. responses are cached by content hash for 1 hour so repeated data (same email thread, same IDE file) is nearly free.
  • model imperfection. >99% on common categories in our tests, but not perfect. don’t rely on it as your only line of defense for critical secrets — combine with the “ignored windows” filter and don’t record password managers.
  • text only. the filter operates on text already extracted from screenshots (OCR, accessibility). raw image frames aren’t uploaded. if you enable cloud archive that uploads frames, those are separate and encrypted end-to-end by your own key.
  • pro tier. the shield icon is a Pro feature. compute inside a confidential enclave isn’t free. non-pro users see the shield with an upgrade link.
  • fail closed. if the enclave is unreachable, your search call fails with a clear error rather than silently returning unredacted text. you never get redaction that didn’t happen.
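the content-hash cache from the latency note can be sketched like this (structure assumed; only the one-hour TTL and hash-keyed lookup come from this page):

```python
import hashlib
import time

# sketch: results keyed on a hash of the input text, kept for one hour,
# so identical data (same email thread, same file) is redacted once
TTL_SECONDS = 3600
_cache: dict[str, tuple[float, str]] = {}

def redact_cached(text: str, redact_fn) -> str:
    key = hashlib.sha256(text.encode()).hexdigest()
    hit = _cache.get(key)
    if hit and time.monotonic() - hit[0] < TTL_SECONDS:
        return hit[1]                        # repeated data is nearly free
    result = redact_fn(text)                 # cache miss: round-trip to the enclave
    _cache[key] = (time.monotonic(), result)
    return result
```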

open source

all three layers are auditable. if you find a way to make the filter leak data, please tell us — we take this one seriously.