# give AI memory of your screen
Source: https://docs.screenpi.pe/ai-memory
Give your AI assistant memory of everything on your screen. Use screenpipe with Claude, Cursor, Ollama, and other AI tools.
screenpipe acts as a memory layer for AI — it gives LLMs context about what you've been doing on your computer.
## why AI needs memory
LLMs are stateless — they don't know what you were working on 5 minutes ago. screenpipe bridges this gap by:
* capturing everything on your screen 24/7
* extracting text via OCR
* making it searchable via REST API on `localhost:3030`
## connect to AI
### MCP (recommended)
screenpipe has a built-in MCP server that works with Claude Desktop, Cursor, and other MCP-compatible tools:
```json theme={null}
{
  "mcpServers": {
    "screenpipe": {
      "command": "npx",
      "args": ["-y", "screenpipe-mcp"]
    }
  }
}
```
see [MCP server setup](/mcp-server) for details.
### pipes (scheduled agents)
[pipes](/pipes) are AI agents that run on a schedule and act on your screen data automatically — like syncing to Obsidian, tracking time in Toggl, or sending daily summaries.
### direct API
any tool that can make HTTP requests can query screenpipe:
```bash theme={null}
# get recent screen activity
curl "http://localhost:3030/search?content_type=ocr&limit=20"
# search for specific content
curl "http://localhost:3030/search?q=meeting+notes&app_name=Slack&limit=10"
```
## use cases
| use case | how |
| ---------------------------- | -------------------------- |
| "what was I working on?" | search by time range |
| "summarize today's meetings" | query audio transcriptions |
| "find that code snippet" | search OCR text |
| "auto-track my time" | toggl-sync pipe |
| "sync activity to notes" | obsidian-sync pipe |
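the first row of the table maps to a single REST call. a minimal sketch, assuming screenpipe on its default port and a GNU `date` (on macOS, compute the start with `date -u -v-1H +%Y-%m-%dT%H:%M:%SZ` instead):

```bash theme={null}
# "what was I working on?": search the last hour of screen text
START=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
END=$(date -u +%Y-%m-%dT%H:%M:%SZ)
URL="http://localhost:3030/search?content_type=ocr&start_time=${START}&end_time=${END}&limit=20"
curl -s "$URL" || echo "no response: is screenpipe running?"
```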
## privacy-first
* all data stays on your device
* use local LLMs (Ollama, LMStudio) for complete privacy
* filter what gets captured with `--ignored-windows` and `--included-windows`
* no data sent to cloud unless you explicitly choose cloud providers
## next steps
* [set up MCP server](/mcp-server) — connect to Claude, Cursor
* [set up pipes](/pipes) — scheduled AI agents
* [API reference](/cli-reference) — search parameters and API endpoints
* [join our discord](https://discord.gg/screenpipe) — get help from the community
## get screenpipe
screenpipe gives your AI assistants memory of everything on your screen. lifetime plans available — no subscriptions needed.
[download screenpipe →](https://screenpi.pe/onboarding)
# Get raw audio pipeline metrics snapshot
Source: https://docs.screenpi.pe/api-reference/audio-control/get-raw-audio-pipeline-metrics-snapshot
openapi.yaml get /audio/metrics
# List available audio devices
Source: https://docs.screenpi.pe/api-reference/audio-control/list-available-audio-devices
openapi.yaml get /audio/list
# Start audio processing
Source: https://docs.screenpi.pe/api-reference/audio-control/start-audio-processing
openapi.yaml post /audio/start
# Start recording a specific audio device
Source: https://docs.screenpi.pe/api-reference/audio-control/start-recording-a-specific-audio-device
openapi.yaml post /audio/device/start
# Stop audio processing
Source: https://docs.screenpi.pe/api-reference/audio-control/stop-audio-processing
openapi.yaml post /audio/stop
# Stop recording a specific audio device
Source: https://docs.screenpi.pe/api-reference/audio-control/stop-recording-a-specific-audio-device
openapi.yaml post /audio/device/stop
# Download and import data from other devices
Source: https://docs.screenpi.pe/api-reference/cloud-sync/download-and-import-data-from-other-devices
openapi.yaml post /sync/download
# Get cloud sync status
Source: https://docs.screenpi.pe/api-reference/cloud-sync/get-cloud-sync-status
openapi.yaml get /sync/status
# Initialize cloud sync with credentials
Source: https://docs.screenpi.pe/api-reference/cloud-sync/initialize-cloud-sync-with-credentials
openapi.yaml post /sync/init
# Lock sync for exclusive access
Source: https://docs.screenpi.pe/api-reference/cloud-sync/lock-sync-for-exclusive-access
openapi.yaml post /sync/lock
# Trigger a sync operation
Source: https://docs.screenpi.pe/api-reference/cloud-sync/trigger-a-sync-operation
openapi.yaml post /sync/trigger
# Add content (frames or transcriptions) to database
Source: https://docs.screenpi.pe/api-reference/content-management/add-content-frames-or-transcriptions-to-database
openapi.yaml post /add
# Add tags to content
Source: https://docs.screenpi.pe/api-reference/content-management/add-tags-to-content
openapi.yaml post /tags/{content_type}/{id}
# Execute raw SQL query
Source: https://docs.screenpi.pe/api-reference/content-management/execute-raw-sql-query
openapi.yaml post /raw_sql
# Remove tags from content
Source: https://docs.screenpi.pe/api-reference/content-management/remove-tags-from-content
openapi.yaml delete /tags/{content_type}/{id}
# Keyword-based search with optional grouping
Source: https://docs.screenpi.pe/api-reference/context-retrieval/keyword-based-search-with-optional-grouping
openapi.yaml get /search/keyword
# Search screen and audio content with various filters
Source: https://docs.screenpi.pe/api-reference/context-retrieval/search-screen-and-audio-content-with-various-filters
openapi.yaml get /search
Query Screenpipe for content based on various filters.
## Use Cases
- Search for specific text across all applications
- Find content from a specific application or window
- Get screenshots from a particular time period
- Retrieve all visits to a specific website
## Examples
### Basic search for recent browser activity:
```js
const githubActivity = await pipe.queryScreenpipe({
browserUrl: "github.com",
contentType: "ocr",
limit: 20,
includeFrames: true
});
```
### Search for specific text with date filters:
```js
const searchResults = await pipe.queryScreenpipe({
q: "authentication",
browserUrl: "auth0.com",
appName: "Chrome",
contentType: "ocr",
startTime: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString(),
endTime: new Date().toISOString(),
limit: 50
});
```
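the SDK calls above translate to plain HTTP. a hedged curl equivalent of the second example, assuming the camelCase options map to the snake_case query parameters documented in the API reference:

```bash theme={null}
# text + app + time-range search over OCR content from the last 7 days
START=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)
OUT=$(curl -s "http://localhost:3030/search?q=authentication&app_name=Chrome&browser_url=auth0.com&content_type=ocr&start_time=${START}&limit=50" \
  || echo "screenpipe not reachable on :3030")
echo "$OUT"
```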
# Merge multiple frames into a video
Source: https://docs.screenpi.pe/api-reference/experimental/merge-multiple-frames-into-a-video
openapi.yaml post /experimental/frames/merge
# Validate a media file
Source: https://docs.screenpi.pe/api-reference/experimental/validate-a-media-file
openapi.yaml get /experimental/validate/media
# Find next frame with valid video file on disk
Source: https://docs.screenpi.pe/api-reference/frames/find-next-frame-with-valid-video-file-on-disk
openapi.yaml get /frames/next-valid
Allows the frontend to skip directly to a valid frame instead of trying each one when frames fail to load.
# Get frame image data
Source: https://docs.screenpi.pe/api-reference/frames/get-frame-image-data
openapi.yaml get /frames/{frame_id}
Returns the JPEG image for a given frame. Supports optional PII redaction.
# Get frame metadata (timestamp) for deep link navigation
Source: https://docs.screenpi.pe/api-reference/frames/get-frame-metadata-timestamp-for-deep-link-navigation
openapi.yaml get /frames/{frame_id}/metadata
# Get OCR text positions for a frame
Source: https://docs.screenpi.pe/api-reference/frames/get-ocr-text-positions-for-a-frame
openapi.yaml get /frames/{frame_id}/ocr
Returns OCR text positions with bounding boxes for text selection overlay on screenshots.
# Delete a pipe
Source: https://docs.screenpi.pe/api-reference/pipes/delete-a-pipe
openapi.yaml delete /pipes/{id}
# Enable or disable a pipe
Source: https://docs.screenpi.pe/api-reference/pipes/enable-or-disable-a-pipe
openapi.yaml post /pipes/{id}/enable
# Get pipe execution history from DB
Source: https://docs.screenpi.pe/api-reference/pipes/get-pipe-execution-history-from-db
openapi.yaml get /pipes/{id}/executions
# Get recent pipe run logs (in-memory)
Source: https://docs.screenpi.pe/api-reference/pipes/get-recent-pipe-run-logs-in-memory
openapi.yaml get /pipes/{id}/logs
# Get single pipe detail
Source: https://docs.screenpi.pe/api-reference/pipes/get-single-pipe-detail
openapi.yaml get /pipes/{id}
# Install a pipe from URL or local path
Source: https://docs.screenpi.pe/api-reference/pipes/install-a-pipe-from-url-or-local-path
openapi.yaml post /pipes/install
# List all pipes with status
Source: https://docs.screenpi.pe/api-reference/pipes/list-all-pipes-with-status
openapi.yaml get /pipes
Re-scans disk so pipes installed externally (e.g. via CLI) are picked up.
# Stop a running pipe
Source: https://docs.screenpi.pe/api-reference/pipes/stop-a-running-pipe
openapi.yaml post /pipes/{id}/stop
# Trigger a manual pipe run
Source: https://docs.screenpi.pe/api-reference/pipes/trigger-a-manual-pipe-run
openapi.yaml post /pipes/{id}/run
# Update pipe config fields
Source: https://docs.screenpi.pe/api-reference/pipes/update-pipe-config-fields
openapi.yaml post /pipes/{id}/config
# Delete a speaker and associated audio chunks
Source: https://docs.screenpi.pe/api-reference/speaker-management/delete-a-speaker-and-associated-audio-chunks
openapi.yaml post /speakers/delete
# Get speakers similar to a given speaker
Source: https://docs.screenpi.pe/api-reference/speaker-management/get-speakers-similar-to-a-given-speaker
openapi.yaml get /speakers/similar
# Get unnamed speakers
Source: https://docs.screenpi.pe/api-reference/speaker-management/get-unnamed-speakers
openapi.yaml get /speakers/unnamed
# Mark a speaker as hallucination
Source: https://docs.screenpi.pe/api-reference/speaker-management/mark-a-speaker-as-hallucination
openapi.yaml post /speakers/hallucination
# Merge two speakers into one
Source: https://docs.screenpi.pe/api-reference/speaker-management/merge-two-speakers-into-one
openapi.yaml post /speakers/merge
# Reassign an audio chunk to a different speaker
Source: https://docs.screenpi.pe/api-reference/speaker-management/reassign-an-audio-chunk-to-a-different-speaker
openapi.yaml post /speakers/reassign
# Search speakers by name
Source: https://docs.screenpi.pe/api-reference/speaker-management/search-speakers-by-name
openapi.yaml get /speakers/search
# Undo a speaker reassignment
Source: https://docs.screenpi.pe/api-reference/speaker-management/undo-a-speaker-reassignment
openapi.yaml post /speakers/undo-reassign
# Update speaker name or metadata
Source: https://docs.screenpi.pe/api-reference/speaker-management/update-speaker-name-or-metadata
openapi.yaml post /speakers/update
# Stream frames via SSE
Source: https://docs.screenpi.pe/api-reference/streaming/stream-frames-via-sse
openapi.yaml get /stream/frames
Server-Sent Events stream for real-time frame data. Not fully representable in OpenAPI.
# WebSocket for health monitoring
Source: https://docs.screenpi.pe/api-reference/streaming/websocket-for-health-monitoring
openapi.yaml get /ws/health
# WebSocket for real-time events
Source: https://docs.screenpi.pe/api-reference/streaming/websocket-for-real-time-events
openapi.yaml get /ws/events
WebSocket endpoint for streaming OCR, audio, and UI events in real-time.
# WebSocket for video export
Source: https://docs.screenpi.pe/api-reference/streaming/websocket-for-video-export
openapi.yaml get /frames/export
WebSocket endpoint for exporting frames as video.
# Health check with pipeline metrics
Source: https://docs.screenpi.pe/api-reference/system/health-check-with-pipeline-metrics
openapi.yaml get /health
# Get UI event statistics grouped by app and event type
Source: https://docs.screenpi.pe/api-reference/ui-events/get-ui-event-statistics-grouped-by-app-and-event-type
openapi.yaml get /ui-events/stats
# Search UI events (clicks, keystrokes, clipboard, etc.)
Source: https://docs.screenpi.pe/api-reference/ui-events/search-ui-events-clicks-keystrokes-clipboard-etc
openapi.yaml get /ui-events
# Get raw vision pipeline metrics snapshot
Source: https://docs.screenpi.pe/api-reference/vision-control/get-raw-vision-pipeline-metrics-snapshot
openapi.yaml get /vision/metrics
# Get vision system status and permissions
Source: https://docs.screenpi.pe/api-reference/vision-control/get-vision-system-status-and-permissions
openapi.yaml get /vision/status
# List available monitors
Source: https://docs.screenpi.pe/api-reference/vision-control/list-available-monitors
openapi.yaml get /vision/list
# Apple Intelligence — on-device daily summaries & auto reminders
Source: https://docs.screenpi.pe/apple-intelligence
screenpipe uses Apple Intelligence (on-device Foundation Models) to generate daily summaries and automatically extract action items into Apple Reminders.
screenpipe uses Apple's on-device Foundation Models to analyze your recorded activity — no data leaves your Mac. requires macOS 26+ with Apple Intelligence enabled on Apple Silicon.
## daily summaries
screenpipe automatically generates a summary of your day at **6 PM** (when plugged in). the summary appears in your **timeline** view and includes:
* **one-liner** — casual summary of your day
* **key moments** — up to 5 important events from audio transcriptions
* **action items** — things mentioned in conversations, ranked by urgency
* **people mentioned** — names and context from your conversations
you can also click the Apple Intelligence icon in the timeline to generate a summary manually at any time.
### how it works
1. fetches your audio transcriptions for the day
2. sends them to Apple's on-device Foundation Models (never leaves your Mac)
3. returns a structured summary with key moments, action items, and people
4. caches the result locally so it loads instantly next time
### enable/disable
open **settings** → **connections** → toggle **Apple Intelligence** on or off.
## auto reminders
screenpipe can scan your activity every 30 minutes and automatically create action items in **Apple Reminders**.
### setup
1. open **settings** → **connections** → find the **Apple Intelligence** card
2. click **"Connect Apple Reminders"** (grants Reminders permission)
3. toggle on **auto-scanning**
once enabled, screenpipe will:
* scan your screen & audio every 30 minutes
* extract action items using Apple Intelligence
* create reminders in a **"Screenpipe"** list in Apple Reminders
* auto-deduplicate so you don't get repeat reminders
### options
* **audio only** — skip screen data, only analyze audio transcriptions (on by default)
* **custom prompt** — add instructions like "focus on coding tasks" or "ignore social media"
* **scan now** — manually trigger a scan anytime
## requirements
* **macOS 26+** (Tahoe) with Apple Intelligence enabled
* **Apple Silicon** (M1 or later)
* screenpipe running on localhost:3030
if Apple Intelligence shows as "not available", check that it's enabled in **System Settings → Apple Intelligence & Siri** and that your Mac meets the requirements.
## vs Apple's built-in screen time
| feature | screenpipe | Apple Screen Time |
| ------------------- | --------------------------------- | ----------------- |
| full text search | ✅ OCR on every frame | ❌ app usage only |
| audio transcription | ✅ local Whisper | ❌ |
| AI summaries | ✅ daily summary with action items | ❌ |
| auto reminders | ✅ action items → Apple Reminders | ❌ |
| API access | ✅ full REST API | ❌ |
| cross-platform | ✅ macOS, Windows, Linux | ❌ Apple only |
questions? [join our discord](https://discord.gg/screenpipe).
# architecture
Source: https://docs.screenpi.pe/architecture
How screenpipe uses event-driven capture, accessibility tree extraction, and OCR fallback to build a searchable local memory.
## overview
screenpipe is a Rust application that captures your screen and audio using an event-driven architecture, processes them locally, and stores everything in a SQLite database. instead of recording every second, it listens for meaningful OS events and captures only when something actually changes — pairing each screenshot with accessibility tree data for maximum quality at minimal cost.
```mermaid theme={null}
graph LR
subgraph trigger["event triggers"]
E1[app switch]
E2[click / scroll]
E3[typing pause]
E4[idle timer]
end
subgraph capture["paired capture"]
SS[screenshot]
A11Y[accessibility tree]
OCR[OCR fallback]
end
subgraph audio["audio"]
MIC[microphone]
SYS[system audio]
STT[speech-to-text]
end
subgraph store["storage"]
DB[(SQLite)]
FS[JPEG files]
end
subgraph serve["API · localhost:3030"]
REST[REST API]
MCP[MCP server]
end
E1 & E2 & E3 & E4 --> SS
SS --> A11Y
A11Y -->|empty?| OCR
A11Y --> DB
OCR --> DB
SS --> FS
MIC & SYS --> STT --> DB
DB --> REST
DB --> MCP
FS --> REST
REST --> P[pipes / AI agents]
MCP --> AI[Claude · Cursor · etc.]
```
## data flow
```mermaid theme={null}
sequenceDiagram
participant OS as OS Events
participant Capture
participant A11Y as Accessibility
participant OCR as OCR (fallback)
participant Audio
participant SQLite
participant API
participant AI
OS->>Capture: meaningful event (click, app switch, typing pause...)
Capture->>Capture: screenshot
Capture->>A11Y: walk accessibility tree
alt accessibility data available
A11Y->>SQLite: structured text + metadata
else accessibility empty (remote desktop, games)
A11Y->>OCR: fallback
OCR->>SQLite: extracted text + metadata
end
Capture->>SQLite: JPEG frame
loop every 30s chunk
Audio->>SQLite: transcription + speaker
Audio->>SQLite: audio file
end
AI->>API: search query
API->>SQLite: SQL lookup
SQLite-->>API: results
API-->>AI: JSON response
```
## crates
screenpipe is a Rust workspace with specialized crates:
```mermaid theme={null}
graph TD
APP[screenpipe-app-tauri
desktop app]
SERVER[screenpipe-server
REST API · routes]
DB[screenpipe-db
SQLite · types]
VISION[screenpipe-vision
screen capture · OCR]
AUDIO[screenpipe-audio
audio capture · STT]
CORE[screenpipe-core
pipes · config]
EVENTS[screenpipe-events
event system]
A11Y[screenpipe-accessibility
UI events · macOS, Windows]
AI[screenpipe-apple-intelligence
Foundation Models]
INT[screenpipe-integrations
MCP · reminders]
APP --> SERVER
APP --> AI
SERVER --> DB
SERVER --> VISION
SERVER --> AUDIO
SERVER --> CORE
SERVER --> EVENTS
AUDIO --> DB
VISION --> DB
CORE --> DB
A11Y --> DB
INT --> SERVER
```
## layers
### 1. event-driven capture
screenpipe listens for meaningful OS events instead of polling at a fixed FPS. when an event fires, it captures a screenshot and walks the accessibility tree together — same timestamp, same frame.
| trigger | description |
| ------------------ | ----------------------------------------------------- |
| **app switch** | user switched to a different application |
| **window focus** | a new window gained focus |
| **click / scroll** | user interacted with the UI |
| **typing pause** | user stopped typing (debounced) |
| **clipboard copy** | content copied to clipboard |
| **idle fallback** | periodic capture every \~5s when nothing is happening |
| what | how | crate |
| ------------------- | -------------------------------------------------------------------------- | -------------------------- |
| **screen** | event-triggered screenshot of the active monitor | `screenpipe-vision` |
| **text extraction** | accessibility tree walk (structured text: buttons, labels, fields) | `screenpipe-accessibility` |
| **OCR fallback** | when accessibility data is empty (remote desktops, games, some Linux apps) | `screenpipe-vision` |
| **audio** | multiple input/output devices in configurable chunks (default 30s) | `screenpipe-audio` |
### 2. processing
| engine | type | platform | when used |
| ------------------ | --------------- | -------------------- | -------------------------------------- |
| accessibility tree | text extraction | macOS, Windows | primary — used for every capture |
| Apple Vision | OCR | macOS | fallback when accessibility is empty |
| Windows native | OCR | Windows | fallback when accessibility is empty |
| Tesseract | OCR | Linux | primary (accessibility support varies) |
| Whisper | speech-to-text | local, all platforms | audio transcription |
| Deepgram | speech-to-text | cloud API | optional cloud audio |
additional processing: speaker identification, PII redaction, frame deduplication (skips identical frames).
### 3. storage
all data stays local on your machine:
* **SQLite** at `~/.screenpipe/db.sqlite` — metadata, accessibility text, OCR text, transcriptions, speakers, tags, UI elements
* **media** at `~/.screenpipe/data/` — JPEG screenshots (event-driven frames), audio chunks
### 4. API
REST API on `localhost:3030`:
| endpoint | description |
| ---------------------- | ------------------------------------------------------- |
| `/search` | filtered content retrieval (OCR, audio, accessibility) |
| `/search/keyword` | keyword search with text positions |
| `/elements` | lightweight UI element search (accessibility tree data) |
| `/frames/{id}` | access captured frames |
| `/frames/{id}/context` | accessibility text + URLs + OCR fallback for a frame |
| `/health` | system status and metrics |
| `/raw_sql` | direct database queries |
| `/ai/chat/completions` | Apple Intelligence (macOS 26+) |
see [API reference](/cli-reference) for the full endpoint list.
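`/raw_sql` opens the whole schema to ad-hoc queries. a hedged sketch: the JSON body shape (a single `query` field) is an assumption here, so check the interactive API reference for the exact schema:

```bash theme={null}
# count audio transcriptions via raw SQL (request body shape is an assumption)
OUT=$(curl -s -X POST "http://localhost:3030/raw_sql" \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT COUNT(*) AS n FROM audio_transcriptions;"}' \
  || echo "screenpipe not reachable on :3030")
echo "$OUT"
```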
### 5. pipes
[pipes](/pipes) are AI agents defined as `.md` prompt files that run on your screen data: an agent reads the prompt, queries the screenpipe API, and takes action.
pipes live in `~/.screenpipe/pipes/{name}/` and run on cron-like schedules.
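to make the shape concrete, a hypothetical pipe prompt (the file name and wording here are illustrative, not a confirmed format; see [pipes](/pipes) for real examples):

```markdown
<!-- ~/.screenpipe/pipes/daily-recap/pipe.md (hypothetical path) -->
every evening, query the screenpipe /search API for today's OCR text and
audio transcriptions, group the results by app, and write a short recap of
what I worked on. keep it under 10 bullets and skip private windows.
```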
### 6. desktop app
the desktop app is built with **Tauri** (Rust backend) + **Next.js** (React frontend):
```mermaid theme={null}
graph LR
subgraph tauri["Tauri shell"]
RS[Rust backend
commands · permissions · tray]
WV[WebView]
end
subgraph frontend["Next.js frontend"]
PAGES[pages
chat · timeline · settings]
STORE[Zustand stores]
UI[shadcn/ui components]
end
subgraph backend["screenpipe-server"]
API[REST API :3030]
end
RS --> WV
WV --> PAGES
PAGES --> STORE
STORE --> UI
PAGES --> API
```
## database schema
key tables:
| table | stores |
| ---------------------- | -------------------------------------------------------------------------------------------------- |
| `frames` | captured screen frame metadata (includes `snapshot_path`, `accessibility_text`, `capture_trigger`) |
| `ocr_text` | OCR fallback text extracted from frames |
| `elements` | UI elements from accessibility tree (buttons, labels, text fields) with FTS5 search |
| `audio_chunks` | audio recording metadata |
| `audio_transcriptions` | text from audio |
| `speakers` | identified speakers |
| `ui_events` | keyboard, mouse, clipboard events |
| `tags` | user-applied tags on content |
inspect directly:
```bash theme={null}
sqlite3 ~/.screenpipe/db.sqlite .schema
```
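building on the schema above, a quick sanity check; `capture_trigger` is a `frames` column per the table, and the default db path is assumed:

```bash theme={null}
# how many frames each capture trigger produced
OUT=$(sqlite3 ~/.screenpipe/db.sqlite \
  "SELECT capture_trigger, COUNT(*) FROM frames GROUP BY capture_trigger;" 2>&1 \
  || echo "no db yet: is screenpipe running?")
echo "$OUT"
```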
## resource usage
runs 24/7 on a MacBook Pro M3 (32 GB) or a \$400 Windows laptop:
| metric | typical value |
| ------- | -------------------------------------------------------------------------------- |
| RAM | \~600 MB |
| CPU | \~5-10% |
| storage | \~5-10 GB/month (event-driven capture only stores frames when something changes) |
## source code
| component | path |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| API server | [screenpipe-server/src/](https://github.com/screenpipe/screenpipe/tree/main/crates/screenpipe-server/src) |
| screen capture | [screenpipe-vision/src/core.rs](https://github.com/screenpipe/screenpipe/blob/main/crates/screenpipe-vision/src/core.rs) |
| audio capture | [screenpipe-audio/src/](https://github.com/screenpipe/screenpipe/tree/main/crates/screenpipe-audio/src) |
| database | [screenpipe-db/src/db.rs](https://github.com/screenpipe/screenpipe/blob/main/crates/screenpipe-db/src/db.rs) |
| pipes | [screenpipe-core/src/pipes/](https://github.com/screenpipe/screenpipe/tree/main/crates/screenpipe-core/src/pipes) |
| MCP server | [screenpipe-mcp/src/index.ts](https://github.com/screenpipe/screenpipe/blob/main/crates/screenpipe-integrations/screenpipe-mcp/src/index.ts) |
| desktop app | [screenpipe-app-tauri/](https://github.com/screenpipe/screenpipe/tree/main/apps/screenpipe-app-tauri) |
# ChatGPT — use your subscription with screenpipe
Source: https://docs.screenpi.pe/chatgpt
Connect your ChatGPT Plus or Pro subscription to power screenpipe's AI chat and pipes — no API key needed.
if you have a ChatGPT Plus or Pro subscription, you can connect it directly to screenpipe. this lets you use OpenAI's latest models for chat, summaries, and pipes without managing API keys or paying extra.
## what you get
* **AI chat** — ask questions about your screen history, meetings, and activity using GPT models
* **pipes** — run scheduled or on-demand automations (summaries, time tracking, etc.) powered by your subscription
* **no API key needed** — sign in with your OpenAI account, that's it
## how to connect
1. open screenpipe
2. click the **model selector** next to the chat input (shows "OAI", "Ollama", etc.)
3. select **ChatGPT** as your provider
4. click **sign in with ChatGPT**
5. a browser window opens — log in with your OpenAI account and approve access
6. done — you'll see a green checkmark and your available models
## choosing a model
after signing in, you can pick from your available models in the dropdown. these depend on your subscription tier:
| subscription | models available |
| ---------------- | ----------------------------------------- |
| **ChatGPT Plus** | GPT-4o, GPT-4o mini, and more |
| **ChatGPT Pro** | GPT-4o, GPT-4o mini, o1, o1-pro, and more |
the model list updates automatically based on what your subscription includes.
## using it for pipes
pipes are automations that run on a schedule or on demand. when you connect your ChatGPT subscription, pipes use it automatically.
examples:
* **toggl sync** — auto-track time in Toggl based on your screen activity, every 30 minutes
* **day recap** — one-click summary of what you accomplished today
* **meeting summary** — summarize meeting transcripts with action items
to set up a pipe:
1. go to **pipes** in the sidebar
2. pick a pipe or create your own
3. in the pipe settings, select your ChatGPT preset
4. enable the pipe
## how it's different from MCP
screenpipe also supports [MCP integrations](/mcp-server) with ChatGPT Desktop, Claude, and other apps. that's a different thing:
| | ChatGPT subscription | MCP integration |
| ------------------ | -------------------------------------------------- | ------------------------------------------------------------------ |
| **what it does** | powers screenpipe's built-in AI chat and pipes | lets external apps (ChatGPT Desktop, Claude) query screenpipe data |
| **setup** | sign in with your OpenAI account inside screenpipe | add screenpipe as an MCP server in the external app |
| **where you chat** | in screenpipe's chat window | in the external app (ChatGPT Desktop, Claude, etc.) |
| **runs pipes** | yes | no |
you can use both at the same time.
## signing out
to disconnect your ChatGPT account:
1. open the model selector
2. click the **sign out** button next to your ChatGPT connection
this removes the stored tokens from your machine. your data stays local.
## troubleshooting
**sign-in window doesn't open?**
* make sure screenpipe is running
* try restarting the app and signing in again
**"could not get token" error?**
* your session may have expired — sign out and sign back in
* check that your ChatGPT subscription is active at [openai.com](https://openai.com)
**models not showing up?**
* sign out and sign back in to refresh your token
* some models are only available on higher subscription tiers
**pipes failing with auth errors?**
* tokens refresh automatically, but if the errors persist, sign out and back in
still stuck? [ask in our discord](https://discord.gg/screenpipe).
# Claude Code - AI coding with screen context
Source: https://docs.screenpi.pe/claude-code
Use screenpipe with Claude Code CLI to give Claude access to your screen history, meeting transcriptions, and app context while coding.
[Claude Code](https://code.claude.com) is Anthropic's official CLI for agentic coding. with screenpipe integration, Claude can reference what you've been working on, recall information from your screen, and access meeting transcriptions.
## setup
```bash theme={null}
claude mcp add screenpipe --transport stdio -- npx -y screenpipe-mcp
```
to make it available across all your projects:
```bash theme={null}
claude mcp add screenpipe --transport stdio --scope user -- npx -y screenpipe-mcp
```
## verify connection
```bash theme={null}
# list MCP servers
claude mcp list
# or inside Claude Code, use
/mcp
```
## available tools
once connected, Claude Code has access to these tools:
| tool | description |
| ---------------- | --------------------------------------------------------------------------- |
| `search-content` | search screen OCR, audio transcriptions, input events with time/app filters |
| `export-video` | create video exports from screen recordings |
## usage examples
ask Claude Code to use screenpipe naturally:
```
> what was I looking at in my browser an hour ago?
> find mentions of "kubernetes" from my screen today
> show me audio transcriptions from my last meeting
> what code was I reading in VS Code yesterday about async?
> export a video of my screen from 2-3pm today
> what did I type in Slack today?
> show me my app usage stats for the past 2 hours
> what did I copy to clipboard recently?
> which apps did I switch between most today?
```
## search parameters
### search-content (vision + audio + input)
| parameter | description |
| ---------------- | ---------------------------------------------------------------------- |
| `q` | search query (optional - omit to get all content) |
| `content_type` | `vision`, `audio`, `accessibility`, `input`, or `all` (default: `all`) |
| `limit` | max results (default: 10) |
| `offset` | pagination offset |
| `start_time` | ISO 8601 UTC start time |
| `end_time` | ISO 8601 UTC end time |
| `app_name` | filter by app (e.g., "Chrome", "Slack") |
| `window_name` | filter by window title |
| `include_frames` | include base64 screenshots |
| `speaker_ids` | comma-separated speaker IDs for audio filtering |
| `speaker_name` | filter audio by speaker name |
## example workflows
**recall context from earlier:**
```
> I was reading a blog post about rust async earlier today,
> search screenpipe and summarize the key points
```
**reference meeting discussion:**
```
> search my audio transcriptions for what was discussed in standup
> about the API refactor, then help me implement it
```
**debug with screen history:**
```
> I saw an error message flash on screen, search screenpipe
> to find it and help me fix the issue
```
**find code examples:**
```
> search screenpipe for the python code I was looking at
> in the browser yesterday about asyncio patterns
```
**track what you typed:**
```
> what did I write in Notion this morning? search for my
> keyboard input using content_type=input
```
**recall clipboard history:**
```
> I copied something important earlier, search clipboard
> events to find it
```
## requirements
* screenpipe running on localhost:3030
* Claude Code CLI installed
* Node.js >= 18.0.0
## troubleshooting
**MCP not connecting?**
* verify screenpipe is running: `curl http://localhost:3030/health`
* check MCP status in Claude Code: `/mcp`
* remove and re-add: `claude mcp remove screenpipe && claude mcp add screenpipe --transport stdio -- npx -y screenpipe-mcp`
**queries returning empty?**
* check screenpipe has data: `curl "http://localhost:3030/search?limit=1"`
* ensure screen recording permissions are granted
* verify the time range you're querying
**permission errors?**
* macOS: check System Settings > Privacy & Security > Screen Recording
* ensure screenpipe app is listed and enabled
still stuck? [ask in our discord](https://discord.gg/screenpipe).
# API reference
Source: https://docs.screenpi.pe/cli-reference
REST API reference for screenpipe: search, frames, audio, tags, and more.
screenpipe serves a REST API on `localhost:3030`. use this to integrate with any tool or build custom automations.
for the full interactive API reference with request/response schemas, see the [API reference tab](/api-reference).
## endpoints
| method | endpoint | description |
| ------ | ------------------- | ----------------------------- |
| GET | `/search` | search screen & audio content |
| GET | `/health` | server health check |
| GET | `/audio/list` | list audio devices |
| GET | `/vision/list` | list monitors |
| GET | `/frames/{id}` | get frame data |
| GET | `/frames/{id}/ocr` | get frame OCR text |
| POST | `/tags/{type}/{id}` | add tags |
| DELETE | `/tags/{type}/{id}` | remove tags |
| POST | `/raw_sql` | execute raw SQL |
| POST | `/add` | add content to database |
| GET | `/search/keyword` | keyword search |
| POST | `/audio/start` | start audio recording |
| POST | `/audio/stop` | stop audio recording |
## search example
```bash theme={null}
curl "http://localhost:3030/search?q=meeting&limit=10&content_type=vision"
```
## search parameters
| param | type | description |
| -------------- | -------- | --------------------------------- |
| `q` | string | search query |
| `limit` | int | max results |
| `offset` | int | pagination offset |
| `content_type` | string | `vision`, `audio`, `input`, `all` |
| `start_time` | ISO 8601 | filter start |
| `end_time` | ISO 8601 | filter end |
| `app_name` | string | filter by app |
| `window_name` | string | filter by window title |
| `browser_url` | string | filter by browser URL |
| `min_length` | int | minimum text length |
| `max_length` | int | maximum text length |
need help? [join our discord](https://discord.gg/screenpipe).
# cline - VS Code agent with screen memory
Source: https://docs.screenpi.pe/cline
Connect screenpipe to Cline to give this autonomous VS Code agent access to your screen history and context.
[Cline](https://github.com/cline/cline) is an autonomous AI coding agent for VS Code with 30k+ GitHub stars. it supports MCP servers, so you can connect screenpipe to give Cline context about what you've been working on across all your apps.
## setup
1. open VS Code with Cline installed
2. open Cline settings (gear icon in Cline panel)
3. go to **MCP Servers**
4. add screenpipe:
```json theme={null}
{
"mcpServers": {
"screenpipe": {
"command": "npx",
"args": ["-y", "screenpipe-mcp"]
}
}
}
```
5. click the refresh icon to reload MCP servers
## usage
once configured, Cline can search your screen history while coding:
```
> I was reading documentation about async/await patterns earlier,
> find it and help me apply those patterns here
> what error messages have I seen in my terminal today?
> find the API response format I was looking at in the browser
```
## plan mode with context
Cline's "Plan" mode works great with screenpipe:
1. switch to Plan mode
2. ask Cline to find relevant context from your screen history
3. let it create a plan based on what you've been working on
4. switch to Act mode to execute
```
> [Plan mode] I was researching authentication patterns earlier,
> find what I was looking at and plan how to implement it here
```
## available tools
screenpipe provides:
* **search-content** - search screen OCR, audio transcriptions, input events
## requirements
* screenpipe running on localhost:3030
* VS Code with Cline extension
* Node.js >= 18.0.0
need help? [join our discord](https://discord.gg/screenpipe).
# cloud archive - free disk space automatically
Source: https://docs.screenpi.pe/cloud-archive
Encrypt and archive old screenpipe data to the cloud to free disk space while keeping your history accessible.
cloud archive encrypts your old screenpipe data and uploads it to the cloud, then deletes the local copy to free disk space. it uses the same zero-knowledge encryption as [cloud sync](/cloud-sync) — we cannot read your data.
## how it works
1. **choose a retention period** — keep 7, 14, 30, 60, or 90 days of data locally. data older than this will be archived.
2. **encrypt locally** — your token is used to derive an encryption key locally using argon2id, and data is encrypted with chacha20-poly1305 before upload — the same encryption used by cloud sync.
3. **upload in batches** — screenpipe uploads data in small batches (up to 500 records at a time) every 5 minutes: screen captures, ocr text, audio transcriptions, accessibility data, and ui events.
4. **delete local copies** — after each chunk is confirmed uploaded, the corresponding local data and media files are deleted to free disk space.
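from those numbers, a back-of-the-envelope bound on how fast a backlog drains — assuming one batch of up to 500 records per 5-minute run:

```bash theme={null}
records_per_batch=500
runs_per_hour=$(( 60 / 5 ))                                # one archive run every 5 minutes
max_records_per_hour=$(( records_per_batch * runs_per_hour ))
echo "at most ~$max_records_per_hour records archived per hour"   # → 6000
```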
## what gets archived
* **screen captures** — frame metadata, app names, window titles, browser urls
* **ocr text** — extracted text from screen recordings
* **audio** — transcribed speech with speaker and device info
* **accessibility data** — ui text captured via accessibility apis
* **input events** — keyboard and mouse activity metadata
* **cleanup** — orphaned video/audio files are cleaned up after upload
## enabling cloud archive
1. open **settings → cloud archive**
2. select your retention period (how many days to keep locally)
3. toggle **enable cloud archive**
you can also trigger an immediate archive run by clicking **archive now** in the status card.
## encryption details
cloud archive reuses the same `SyncManager` and encryption pipeline as cloud sync:
* **key derivation**: argon2id with a password derived from your auth token
* **cipher**: chacha20-poly1305 (authenticated encryption)
* **zero-knowledge**: the encryption key never leaves your device
if you already have cloud sync enabled, archive uses the exact same encryption key to avoid any conflicts.
review the rust implementation of cloud archive
## watermark-based tracking
instead of marking individual records as uploaded, cloud archive uses a single **watermark timestamp**. all data before the watermark has been securely uploaded and can be safely deleted locally.
this is simpler and more efficient than per-record tracking — one timestamp tells the system exactly where it left off, even if the app restarts.
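the idea can be sketched in a few lines of shell (hypothetical filenames and dates for illustration — screenpipe's actual implementation is in rust):

```bash theme={null}
# everything timestamped before the watermark has been uploaded and is safe to delete locally
watermark="2026-02-01T00:00:00Z"
for day in 2026-01-15 2026-01-31 2026-02-03; do
  # ISO 8601 timestamps compare correctly as plain strings
  if [[ "${day}T00:00:00Z" < "$watermark" ]]; then
    echo "delete $day.mp4"   # before the watermark: already archived
  else
    echo "keep $day.mp4"     # after the watermark: not yet uploaded
  fi
done
```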
## storage & limits
your cloud storage usage is shown in the archive status card. storage limits depend on your screenpipe pro plan.
## important notes
archived data is currently upload-only. download/restore is not yet available — local data older than your retention period will be permanently deleted from your device after upload.
cloud archive requires a screenpipe pro subscription. the archive process runs automatically in the background every 5 minutes when enabled.
questions? [join our discord](https://discord.gg/screenpipe).
# cloud sync - zero-knowledge encryption
Source: https://docs.screenpi.pe/cloud-sync
Learn how screenpipe cloud sync keeps your data encrypted and private with zero-knowledge encryption.
cloud sync lets you access your screenpipe data across all your devices with end-to-end encryption. we use zero-knowledge encryption, meaning we physically cannot read your data.
## zero-knowledge encryption
your data is encrypted on your device before it ever leaves. we use your password to derive an encryption key locally - this key never leaves your device and we never see it.
this means we physically cannot read your data, even if we wanted to. only you (and anyone you share your password with) can decrypt your data.
## how it works
1. **set a password** — your password is used to derive your encryption key using argon2id (a secure key derivation function).
2. **encrypt locally** — before any data leaves your device, it's encrypted using chacha20-poly1305 (a fast, secure cipher).
3. **upload** — only encrypted blobs are sent to our servers. we store them but cannot read them.
4. **decrypt on your devices** — when you access data from another device, it's downloaded and decrypted on that device using your password.
## what we store
* encrypted blobs (we cannot read these)
* your account email
* encrypted master key (encrypted with your password - we cannot decrypt it)
* salt for key derivation (not secret, needed to derive the same key)
* device metadata (device name, os, last sync time)
## what we cannot see
* **screen recordings** — all visual data is encrypted
* **audio** — all audio data is encrypted
* **ocr text** — all extracted text is encrypted
* **your password** — never leaves your device
* **your encryption key** — derived locally, never transmitted
* **your content** — we only see encrypted blobs
## verify it yourself
screenpipe is open source. you can audit the encryption code yourself:
review the rust implementation of our zero-knowledge encryption
## important notes
if you forget your encryption password, your cloud data cannot be recovered. we recommend using a password manager.
cloud sync requires a screenpipe pro subscription. sync happens automatically in the background and uses approximately 0.8 gb per monitor per day.
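that figure makes sync volume easy to estimate; a quick sketch (the monitor count is an example value):

```bash theme={null}
# ~0.8 GB per monitor per day; bash only does integer math, so count in tenths of a GB
monitors=2
days=30
total_tenths=$(( monitors * days * 8 ))   # 0.8 GB = 8 tenths
echo "estimated sync volume: $(( total_tenths / 10 )).$(( total_tenths % 10 )) GB over $days days"
# → estimated sync volume: 48.0 GB over 30 days
```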
questions? [join our discord](https://discord.gg/screenpipe).
# continue - open-source copilot with screen context
Source: https://docs.screenpi.pe/continue
Connect screenpipe to Continue to give this open-source AI coding assistant access to your screen history.
[Continue](https://github.com/continuedev/continue) is an open-source AI coding assistant with 20k+ GitHub stars. it works in VS Code and JetBrains IDEs, and supports MCP servers for external context.
## setup
### VS Code
1. install Continue from the VS Code marketplace
2. open Continue settings
3. edit `~/.continue/config.json`:
```json theme={null}
{
"mcpServers": [
{
"name": "screenpipe",
"command": "npx",
"args": ["-y", "screenpipe-mcp"]
}
]
}
```
4. restart VS Code
### JetBrains
1. install Continue from JetBrains marketplace
2. open Continue settings
3. add the same MCP configuration
4. restart the IDE
## usage
once configured, Continue can access your screen history:
```
> @screenpipe what was I working on this morning?
> find the error I saw in my terminal and help me fix it
> what documentation was I reading about react hooks?
```
## context providers
Continue also supports context providers. you can add screenpipe as a custom context provider for more control:
```json theme={null}
{
"contextProviders": [
{
"name": "screenpipe",
"params": {
"apiUrl": "http://localhost:3030"
}
}
]
}
```
## example workflows
**code with context:**
```
> I was looking at how another project handles authentication,
> find that code and help me implement something similar
```
**recall errors:**
```
> there was a typescript error earlier about types,
> find it and explain how to fix it
```
**meeting follow-up:**
```
> what did we discuss in the standup about the API changes?
> help me implement those changes
```
## available tools
via MCP, Continue gets access to:
* **search-content** - search screen OCR, audio transcriptions, and input events
* **export-video** - create video exports from screen recordings
## requirements
* screenpipe running on localhost:3030
* Continue extension in VS Code or JetBrains
* Node.js >= 18.0.0
need help? [join our discord](https://discord.gg/screenpipe).
# contribute to screenpipe - open source screen recording
Source: https://docs.screenpi.pe/contributing
Contribute to screenpipe: report bugs, submit pull requests, join bounty programs, and help build the open source screen recording platform.
for detailed contribution guidelines, build instructions, and development setup, please see our [contributing guide on github](https://github.com/screenpipe/screenpipe/blob/main/CONTRIBUTING.md).
### quick links
* [report a bug](https://github.com/screenpipe/screenpipe/issues/new?labels=bug)
* [request a feature](https://github.com/screenpipe/screenpipe/issues/new?labels=enhancement)
* [join our discord](https://discord.gg/screenpipe)
* [schedule a call](https://cal.com/team/screenpipe/chat)
# FAQ
Source: https://docs.screenpi.pe/faq
Frequently asked questions about screenpipe.
**what are pipes?**
pipes are scheduled AI agents that run on your screen data. each pipe is a `.md` file with a prompt and a schedule. an AI agent reads the prompt, queries the screenpipe API, and takes action.
examples: sync to Obsidian, track time in Toggl, send daily summaries.
see the [pipes guide](/pipes) for details.
**what are the system requirements?**
* **minimum**: dual-core CPU, 2GB RAM, 20GB disk
* **recommended**: quad-core, 4GB+ RAM, 50GB+ SSD
* runs 24/7 on a MacBook Pro M3 or a \$400 Windows laptop
* \~600 MB RAM, \~10% CPU, \~30 GB/month storage at 1 FPS
**where is my data stored?**
all data is in `~/.screenpipe/`:
* `db.sqlite` — metadata, OCR text, transcriptions
* `data/` — MP4 screen recordings, audio chunks
* `pipes/` — installed pipes
to backup, copy `~/.screenpipe/` to another location.
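the copy can be scripted; a minimal sketch (quit screenpipe first so files aren't written mid-copy; the destination path is just an example):

```bash theme={null}
src="$HOME/.screenpipe"
dest="$HOME/backups/screenpipe-$(date +%Y-%m-%d)"
mkdir -p "$HOME/backups"
if [ -d "$src" ]; then
  # copies db.sqlite, data/, and pipes/ in one go
  cp -r "$src" "$dest"
fi
```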
**how can I get help or contribute?**
* check [github issues](https://github.com/screenpipe/screenpipe/issues)
* join [discord](https://discord.gg/screenpipe)
* follow [contribution guidelines](https://github.com/screenpipe/screenpipe/blob/main/CONTRIBUTING.md)
**which OCR and speech-to-text engines are available?**
**OCR**: `apple-native` (macOS), `windows-native` (Windows), `tesseract` (Linux), `unstructured` (cloud), `custom`
**STT**: `whisper-large-v3-turbo-quantized` (default, local), `whisper-tiny`, `whisper-tiny-quantized`, `whisper-large`, `whisper-large-quantized`, `whisper-large-v3-turbo`, `deepgram` (cloud)
**is my data private?**
yes. everything runs locally. no data leaves your machine unless you explicitly choose a cloud provider (deepgram, unstructured). you control what's captured with `--ignored-windows` and `--included-windows`.
**how do I connect screenpipe to AI?**
three ways:
1. **MCP server** — works with Claude Desktop, Cursor ([guide](/mcp-server))
2. **pipes** — scheduled AI agents ([guide](/pipes))
3. **REST API** — `curl http://localhost:3030/search?q=your+query`
# screenpipe for developers
Source: https://docs.screenpi.pe/for-developers
Give your AI coding assistant memory of your screen. Use screenpipe with Claude Code, Cursor, Cline, Continue, and Gemini CLI.
screenpipe gives developers a superpower: AI that knows what you've been working on. it captures your screen and audio 24/7 and makes it available to coding assistants via MCP and REST API.
## what developers use screenpipe for
* **code search across time** — find that code snippet you saw in a PR review last week, even if you closed the tab
* **meeting recall** — search what was said in standups, design reviews, or pair programming sessions
* **context for AI coding** — give Cursor, Claude Code, or Cline memory of what's on your screen right now and what you worked on earlier
* **automated workflows** — pipes that auto-track time in Toggl, sync daily activity to Obsidian, or generate standup reports
## integrations
screenpipe works with any AI tool that supports MCP or HTTP APIs:
| tool | integration | guide |
| --------------- | ---------------------------------------------------------------------------------------------------------- | --------------------------- |
| **Claude Code** | MCP server — Claude Code can search your screen history, find code you saw earlier, recall meeting context | [setup guide](/claude-code) |
| **Cursor** | MCP server — add screenpipe as a context source for Cursor's AI | [setup guide](/mcp-server) |
| **Cline** | MCP server — give Cline access to your full screen history | [setup guide](/cline) |
| **Continue** | MCP server — add screen context to Continue's AI completions | [setup guide](/continue) |
| **Gemini CLI** | MCP server — use screenpipe with Google's Gemini CLI | [setup guide](/gemini-cli) |
| **OpenCode** | MCP server — search screen history from OpenCode | [setup guide](/opencode) |
| **Ollama** | local AI — use any local model with screenpipe, 100% private | [setup guide](/ollama) |
## quick start
1. [download screenpipe](https://screenpi.pe/onboarding)
2. add the MCP server to your coding tool:
```json theme={null}
{
"mcpServers": {
"screenpipe": {
"command": "npx",
"args": ["-y", "screenpipe-mcp"]
}
}
}
```
3. ask your AI assistant: "what was I working on in the last hour?" or "find the code snippet I saw in that PR review"
## example prompts
once screenpipe is connected to your coding assistant:
* "find the error message I saw in the terminal 30 minutes ago"
* "what did we discuss in the standup about the auth refactor?"
* "show me the API endpoint I was looking at in the browser"
* "summarize what I worked on today for my standup"
## automate with pipes
[pipes](/pipes) are scheduled AI agents that run on your screen data. developers use them for:
* **time tracking** — auto-log coding time to Toggl based on active apps
* **daily journals** — sync screen activity to Obsidian
* **standup reports** — generate daily summaries of what you worked on
## get screenpipe
screenpipe includes everything developers need — MCP server, pipes, cloud AI, and more. lifetime plans available starting at \$400.
[download screenpipe →](https://screenpi.pe/onboarding)
# gemini CLI - google's AI with screen context
Source: https://docs.screenpi.pe/gemini-cli
Use screenpipe with Gemini CLI to give Google's AI access to your screen history and audio transcriptions.
[Gemini CLI](https://github.com/google/gemini-cli) is Google's terminal-based AI coding assistant. it supports MCP servers, so you can connect screenpipe to give Gemini context about what you've been working on.
## setup
Gemini CLI uses MCP for external tools. add screenpipe to your config:
```bash theme={null}
# add screenpipe MCP server
gemini config mcp add screenpipe "npx -y screenpipe-mcp"
```
or edit your config file directly:
```json theme={null}
{
"mcpServers": {
"screenpipe": {
"command": "npx",
"args": ["-y", "screenpipe-mcp"]
}
}
}
```
restart Gemini CLI after configuration.
## usage
once configured, Gemini can search your screen history:
```bash theme={null}
gemini
> what was I working on this morning?
> find the error message I saw in my terminal earlier
> summarize the documentation I was reading about kubernetes
```
## available tools
screenpipe provides these MCP tools to Gemini:
* **search-content** - search screen OCR, audio transcriptions, and input events
* **export-video** - create video exports from screen recordings
## example workflows
**context-aware coding:**
```
> I was looking at a react component earlier that handled
> form validation, find it and help me implement something similar
```
**debug from memory:**
```
> there was an error in my build output, find it and help me fix it
```
**recall documentation:**
```
> what did that API documentation say about rate limits?
```
## requirements
* screenpipe running on localhost:3030
* Gemini CLI installed
* Node.js >= 18.0.0
need help? [join our discord](https://discord.gg/screenpipe).
# install screenpipe
Source: https://docs.screenpi.pe/getting-started
Install screenpipe on macOS, Windows, or Linux. Record everything on screen and search screen history on Mac, Windows, and Linux. Start recording in minutes.
## desktop app
download the [desktop app](https://screenpi.pe/onboarding) and follow the installation instructions. works on macOS, Windows, and Linux.
the app manages recording, settings, and search — no terminal needed.
## CLI
```bash theme={null}
npx screenpipe@latest record
```
## verify it's running
once screenpipe starts, it serves an API on `localhost:3030`:
```bash theme={null}
# check health
curl http://localhost:3030/health
# search your screen history
curl "http://localhost:3030/search"
```
## connect to AI
screenpipe works with any AI that supports MCP or HTTP APIs:
| integration | how |
| ------------------ | ---------------------------------------------------------- |
| **claude desktop** | add screenpipe as MCP server ([guide](/mcp-server)) |
| **cursor** | add screenpipe MCP to your project ([guide](/mcp-server)) |
| **claude code** | use screenpipe MCP or curl the API ([guide](/claude-code)) |
| **ollama** | configure in app settings, use any local model |
## what's next?
* [search your screen history](/search-screen-history) — find anything you've seen
* [set up pipes](/pipes) — scheduled AI agents on your data
* [API reference](/cli-reference) — REST API endpoints and parameters
* [join our discord](https://discord.gg/screenpipe) — get help from the community
## get the desktop app
screenpipe includes everything you need out of the box — 24/7 screen recording, AI-powered search, pipes, MCP server, and more. lifetime plans available starting at \$400.
[download screenpipe →](https://screenpi.pe/onboarding)
# screenpipe — your screen's memory layer
Source: https://docs.screenpi.pe/home
screenpipe captures your screen 24/7, extracts text with OCR, and lets you search everything with AI. open source rewind alternative. local screen recorder with AI. local-first, privacy-focused.
screenpipe records your screen continuously and makes everything searchable. it runs locally, stores data on your machine, and provides APIs for AI integration.
## what it does
* **captures screen 24/7** — OCR extracts text from every frame
* **local storage** — sqlite database + media files on your machine
* **search API** — find anything you've seen via `localhost:3030/search`
* **pipes** — scheduled AI agents that act on your screen data ([learn more](/pipes))
* **MCP server** — connect to Claude, Cursor, and other AI assistants ([learn more](/mcp-server))
## quick start
[download the desktop app](https://screenpi.pe/onboarding) — works on macOS, Windows, and Linux
## what's next?
install screenpipe on macOS, Windows, or Linux
find anything you've seen on your screen
scheduled AI agents on your screen data
connect screenpipe to AI assistants
## community
need help? have ideas? [join our discord](https://discord.gg/screenpipe) — 10k+ members building with screenpipe.
# MCP apps - create interactive UIs for screenpipe
Source: https://docs.screenpi.pe/mcp-apps
Build interactive UI components that render in Claude, ChatGPT, VS Code, and other AI assistants using the MCP Apps protocol.
MCP Apps let you create interactive UIs that render directly inside AI chat interfaces. Instead of plain text responses, your tools can return dashboards, forms, visualizations, and more.
## how it works
```
┌─────────────────────────────────────────────────────────────┐
│ 1. Tool declares a UI resource │
│ { _meta: { ui: { resourceUri: "ui://search" } } } │
│ │
│ 2. Host fetches the HTML from the MCP server │
│ GET ui://search → Returns HTML/JS bundle │
│ │
│ 3. Host renders in sandboxed iframe │
│ │
│ │
│ 4. Bidirectional communication via postMessage │
│ Host ◄──── JSON-RPC ────► App │
└─────────────────────────────────────────────────────────────┘
```
your UI works in:
* Claude Desktop & Web
* ChatGPT
* VS Code
* Goose
* screenpipe's built-in chat
## creating an MCP app
### 1. create your HTML UI
create a single HTML file with embedded CSS and JavaScript. a minimal sketch (it listens for the `tool/result` message described in the api reference below and renders the tool output):
```html theme={null}
<!DOCTYPE html>
<html>
<head>
  <style>
    /* dark theme to match screenpipe's aesthetic */
    body { background: #0a0a0a; color: #fafafa; font-family: system-ui, sans-serif; padding: 16px; }
  </style>
</head>
<body>
  <pre id="results">waiting for results…</pre>
  <script>
    // the host delivers tool output to the app via postMessage
    window.addEventListener("message", (event) => {
      const msg = event.data;
      if (msg && msg.method === "tool/result") {
        document.getElementById("results").textContent =
          JSON.stringify(msg.params.result, null, 2);
      }
    });
  </script>
</body>
</html>
```
### 2. register the UI resource
in your MCP server, add the UI as a resource:
```typescript theme={null}
// add to RESOURCES array
{
uri: "ui://my-app",
name: "My App",
description: "Interactive search interface",
mimeType: "text/html",
}
```
### 3. serve the HTML
in the `ReadResourceRequestSchema` handler:
```typescript theme={null}
case "ui://my-app":
// read from file or embed directly
const html = fs.readFileSync(
path.join(__dirname, 'ui/my-app.html'),
'utf-8'
);
return {
contents: [{
uri,
mimeType: "text/html",
text: html,
}],
};
```
### 4. declare UI in tool response
when a tool returns results, include the UI reference:
```typescript theme={null}
return {
content: [{ type: "text", text: "Search results" }],
_meta: {
ui: { resourceUri: "ui://my-app" }
}
};
```
## contributing an MCP app
want to add your UI to screenpipe? here's how:
### file structure
```
crates/screenpipe-integrations/screenpipe-mcp/
├── src/
│ └── index.ts # MCP server
├── ui/
│ ├── search.html # search dashboard UI
│ ├── timeline.html # timeline viewer UI
│ └── your-app.html # your new UI
└── package.json
```
### submission checklist
1. **create your HTML file** in the `ui/` directory
2. **keep it self-contained** - all CSS/JS should be inline
3. **use dark theme** - match screenpipe's aesthetic
4. **test locally** with MCP Inspector
5. **submit a PR** with:
* your HTML file
* updates to `index.ts` to register and serve it
* a screenshot in the PR description
### design guidelines
| aspect | guideline |
| ---------- | -------------------------------- |
| background | `#0a0a0a` (dark) |
| cards | `#1a1a1a` with `#333` border |
| text | `#fafafa` (light) |
| accent | `#fff` buttons, `#666` secondary |
| font | `system-ui, sans-serif` |
| radius | `6-8px` for cards and buttons |
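as a starting point, the guidelines above translate into a drop-in style block (the class names are just examples):

```html theme={null}
<style>
  body { background: #0a0a0a; color: #fafafa; font-family: system-ui, sans-serif; }
  .card { background: #1a1a1a; border: 1px solid #333; border-radius: 8px; padding: 12px; }
  button { background: #fff; color: #0a0a0a; border: none; border-radius: 6px; }
  .secondary { color: #666; }
</style>
```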
### example UIs to build
* **timeline viewer** - scroll through your day visually
* **meeting notes** - display transcriptions with speakers
* **activity chart** - visualize app usage over time
* **search dashboard** - rich search with filters
* **memory cards** - google photos-style flashbacks
## testing your app
use MCP Inspector to test:
```bash theme={null}
cd crates/screenpipe-integrations/screenpipe-mcp
npm run build
npx @modelcontextprotocol/inspector node dist/index.js
```
1. open the inspector at [http://localhost:5173](http://localhost:5173)
2. navigate to **resources**
3. click your `ui://` resource
4. verify the HTML renders correctly
## api reference
### host → app messages
```typescript theme={null}
// tool result delivered to app
{
jsonrpc: "2.0",
method: "tool/result",
params: {
toolName: "search-content",
result: { /* tool output */ }
}
}
```
### app → host messages
```typescript theme={null}
// call an MCP tool
{
jsonrpc: "2.0",
method: "tools/call",
params: { name: "search-content", arguments: { q: "meeting" } }
}
// send message to chat
{
jsonrpc: "2.0",
method: "message/send",
params: { content: "Found 5 results" }
}
// open external link
{
jsonrpc: "2.0",
method: "link/open",
params: { url: "https://example.com" }
}
```
## security
MCP Apps run in sandboxed iframes with:
* no access to parent DOM
* no cookies or localStorage from host
* restricted permissions (`allow-scripts` only)
* all communication via postMessage
## resources
* [MCP Apps specification](https://blog.modelcontextprotocol.io/posts/2026-01-26-mcp-apps/)
* [ext-apps SDK](https://github.com/modelcontextprotocol/ext-apps)
* [screenpipe MCP source](https://github.com/screenpipe/screenpipe/tree/main/crates/screenpipe-integrations/screenpipe-mcp)
need help? [join our discord](https://discord.gg/screenpipe).
# screenpipe MCP server - connect to Claude, Cursor & AI assistants
Source: https://docs.screenpi.pe/mcp-server
Set up screenpipe as an MCP server to give Claude, Cursor, and other AI assistants access to your screen history and context.
screenpipe provides an MCP (Model Context Protocol) server that lets AI assistants like Claude and Cursor search your screen recordings and audio transcriptions, and control your computer.
## claude desktop
open the **screenpipe app** → **settings** → **connections** → click **"install extension"**
Claude will open and prompt you to confirm. click **install** — done!
try asking Claude: *"what did I do in the last 5 minutes?"*
make sure screenpipe is running when you use Claude with screenpipe features.
## claude code
one command:
```bash theme={null}
claude mcp add screenpipe --transport stdio -- npx -y screenpipe-mcp
```
to make it available across all your projects:
```bash theme={null}
claude mcp add screenpipe --transport stdio --scope user -- npx -y screenpipe-mcp
```
verify with `claude mcp list` or `/mcp` inside Claude Code.
## cursor
[click here to install in cursor](cursor://anysphere.cursor-deeplink/mcp/install?name=screenpipe\&config=eyJ0eXBlIjoic3RkaW8iLCJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsInNjcmVlbnBpcGUtbWNwIl19)
or manually: open **cursor settings** → **mcp** → **add new global mcp server** → set command to `npx` with args `["-y", "screenpipe-mcp"]`.
## other editors (cline, continue, windsurf, etc.)
any editor that supports MCP works with screenpipe. the server command is:
```bash theme={null}
npx -y screenpipe-mcp
```
add this as a stdio MCP server in your editor's MCP settings. see also:
* [continue setup](/continue)
* [cline setup](/cline)
## available tools
### search-content (all platforms)
search through recorded screen content, audio transcriptions, and user input events:
| parameter | type | description |
| ---------------- | ------- | ------------------------------------------------------------------- |
| `q` | string | search query (optional - omit to get recent content) |
| `content_type` | string | `all`, `ocr`, `audio`, `input`, or `accessibility` (default: `all`) |
| `limit` | integer | max results (default: 10) |
| `offset` | integer | pagination offset (default: 0) |
| `start_time` | string | ISO 8601 UTC start time (e.g., `2024-01-15T10:00:00Z`) |
| `end_time` | string | ISO 8601 UTC end time |
| `app_name` | string | filter by app (e.g., `Google Chrome`, `Slack`) |
| `window_name` | string | filter by window title |
| `min_length` | integer | minimum content length |
| `max_length` | integer | maximum content length |
| `include_frames` | boolean | include base64 screenshots (OCR only) |
| `speaker_ids` | string | comma-separated speaker IDs for audio filtering (e.g., `1,2,3`) |
| `speaker_name` | string | filter audio by speaker name (case-insensitive partial match) |
### export-video
create video exports from screen recordings for a specific time range:
| parameter | type | description |
| ------------ | ------ | ---------------------------------- |
| `start_time` | string | ISO 8601 UTC start time (required) |
| `end_time` | string | ISO 8601 UTC end time (required) |
| `fps` | number | frames per second (default: 1.0) |
## example queries
try these in Claude or Cursor:
* "search for any mentions of 'project' in my screen recordings"
* "find audio transcriptions from the last hour"
* "show me what was on my screen in VS Code yesterday"
* "export a video of my screen from 10am to 11am today"
* "find what John said in our meeting about the API"
* "what did I type in Slack today?" (uses content\_type=input)
* "what did I copy to my clipboard recently?" (uses content\_type=input)
## testing
test your setup with MCP Inspector:
```bash theme={null}
npx @modelcontextprotocol/inspector npx screenpipe-mcp
```
## requirements
* screenpipe must be running on localhost:3030
* Node.js >= 18.0.0
## manual config (advanced)
if the one-click methods above don't work, you can manually edit config files.
**claude desktop** — edit `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%AppData%\Claude\claude_desktop_config.json` (Windows):
```json theme={null}
{
"mcpServers": {
"screenpipe": {
"command": "npx",
"args": ["-y", "screenpipe-mcp"]
}
}
}
```
**cursor** — create `.cursor/mcp.json` in your project root:
```json theme={null}
{
"mcpServers": {
"screenpipe": {
"command": "npx",
"args": ["-y", "screenpipe-mcp"]
}
}
}
```
**from source:**
```bash theme={null}
git clone https://github.com/screenpipe/screenpipe
cd screenpipe/crates/screenpipe-integrations/screenpipe-mcp
npm install && npm run build
```
then point your editor to `node /path/to/screenpipe-mcp/dist/index.js`.
## troubleshooting
**mcp not connecting?**
* ensure screenpipe is running (`screenpipe` in terminal)
* check it's accessible at [http://localhost:3030/health](http://localhost:3030/health)
* restart Claude Desktop / Cursor after config changes
**macos automation not working?**
* grant accessibility permissions in System Settings > Privacy & Security > Accessibility
still stuck? [ask in our discord](https://discord.gg/screenpipe) — the community can help debug MCP issues.
# meeting transcription
Source: https://docs.screenpi.pe/meeting-transcription
Automatically transcribe meetings with screenpipe. Local speech-to-text using Whisper, speaker identification. No cloud required.
screenpipe automatically transcribes all audio from your meetings, calls, and conversations. everything runs locally using Whisper.
## setup
audio recording is enabled by default in the desktop app. configure audio devices and transcription engine in **settings**.
* **audio devices**: select which microphones and system audio to capture
* **transcription engine**: choose between local Whisper (private) or Deepgram (faster, cloud)
## search transcriptions
```bash theme={null}
# find discussions about a topic
curl "http://localhost:3030/search?q=budget+review&content_type=audio&limit=10"
# get today's meetings
curl "http://localhost:3030/search?content_type=audio&start_time=2026-02-11T00:00:00Z"
# filter by speaker
curl "http://localhost:3030/search?content_type=audio&speaker_ids=1,2"
curl "http://localhost:3030/search?content_type=audio&speaker_name=John"
```
## speaker identification
screenpipe automatically identifies different speakers. manage them via API:
```bash theme={null}
# get unnamed speakers for labeling
curl "http://localhost:3030/speakers/unnamed?limit=10"
# update a speaker's name
curl -X POST http://localhost:3030/speakers/update \
-H "Content-Type: application/json" \
-d '{"id": 1, "name": "John Smith"}'
# search speakers by name
curl "http://localhost:3030/speakers/search?name=john"
# merge duplicate speakers
curl -X POST http://localhost:3030/speakers/merge \
-H "Content-Type: application/json" \
-d '{"speaker_to_keep_id": 1, "speaker_to_merge_id": 2}'
# find similar speakers
curl "http://localhost:3030/speakers/similar?speaker_id=1"
```
## tips
* use a good microphone
* reduce background noise
* `whisper-large-v3-turbo` gives best accuracy
* set language to English in settings if you only speak English (faster)

## privacy
* all transcription runs locally on your device
* audio files stored in `~/.screenpipe/data/`
* no audio sent to cloud unless you choose deepgram
* audio recording can be disabled entirely in app settings
questions? [join our discord](https://discord.gg/screenpipe).
# obsidian - sync screen history to your notes
Source: https://docs.screenpi.pe/obsidian
Connect screenpipe to Obsidian to automatically capture what you've seen and heard into your knowledge base.
[Obsidian](https://obsidian.md) is a powerful knowledge base that works on local markdown files. with screenpipe, you can query your screen history and meeting transcriptions directly from Obsidian using AI plugins.
## setup with copilot plugin
the [Obsidian Copilot](https://github.com/logancyang/obsidian-copilot) plugin supports MCP servers, letting you query screenpipe from within Obsidian.
1. install the Copilot plugin from Obsidian community plugins
2. open Copilot settings → MCP Servers
3. add screenpipe:
```json theme={null}
{
"screenpipe": {
"command": "npx",
"args": ["-y", "screenpipe-mcp"]
}
}
```
4. restart Obsidian
now you can ask Copilot things like:
* "what was I reading about yesterday?"
* "find my meeting notes from this morning"
* "what code was I looking at in VS Code?"
## manual workflow
if you prefer not to use plugins, you can query screenpipe's API and paste results into notes:
```bash theme={null}
# search recent screen content
curl "http://localhost:3030/search?q=meeting&limit=10" | jq '.data[].content.text'
# get audio transcriptions from today
curl "http://localhost:3030/search?content_type=audio&limit=20"
```
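the manual steps above can be scripted. a minimal sketch, assuming the response shape shown above (text at `.data[].content.text`) and that `jq` is installed:

```bash theme={null}
# format_results: turn a screenpipe /search response (stdin) into markdown bullets
format_results() {
  jq -r '.data[].content.text | "- " + .'
}

# usage against a live instance:
#   curl -s "http://localhost:3030/search?q=meeting&limit=10" | format_results >> daily-note.md
```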
## use cases
* **daily notes**: automatically pull what you worked on into daily notes
* **meeting notes**: capture transcriptions and screen context from meetings
* **research**: recall articles and documentation you've read
* **project logs**: track what you've done across different apps
## requirements
* screenpipe running on localhost:3030
* Obsidian with Copilot plugin (for MCP integration)
* Node.js >= 18.0.0
need help? [join our discord](https://discord.gg/screenpipe).
# Ollama — run AI locally with screenpipe
Source: https://docs.screenpi.pe/ollama
Use Ollama to run AI models locally with screenpipe. Free, private, no API keys needed.
[Ollama](https://ollama.com) lets you run AI models locally on your machine. screenpipe integrates natively with Ollama — no API keys, no cloud, completely private.
## setup
### 1. install Ollama & pull a model
```bash theme={null}
# install from https://ollama.com then:
ollama run ministral-3
```
this downloads the model and starts Ollama. you can use any model — `ministral-3` is a good starting point (fast, works on most machines).
### 2. select Ollama in screenpipe
1. open the **screenpipe app**
2. click the **AI preset selector** (top of the chat/timeline)
3. click **Ollama**
4. pick your model from the dropdown (screenpipe auto-detects pulled models)
5. start chatting
that's it. screenpipe talks to Ollama on `localhost:11434` automatically.
## recommended models
| model | size | best for |
| ---------------- | ------ | --------------------------------------------- |
| `ministral-3` | \~2 GB | fast, general use, recommended starting point |
| `gemma3:4b` | \~3 GB | strong quality for size, good for summaries |
| `qwen3:4b` | \~3 GB | multilingual, good reasoning |
| `deepseek-r1:8b` | \~5 GB | strong reasoning, needs 16 GB+ RAM |
pull any model with:
```bash theme={null}
ollama pull <model-name>
```
## requirements
* [Ollama](https://ollama.com) installed and running
* at least one model pulled
* screenpipe running
## troubleshooting
**"ollama not detected"**
* make sure Ollama is running: `ollama serve`
* check it's responding: `curl http://localhost:11434/api/tags`
**model not showing in dropdown?**
* pull it first: `ollama pull ministral-3`
* you can also type the model name manually in the input field
**slow responses?**
* try a smaller model (`ministral-3`)
* close other GPU-heavy apps
* ensure you have enough free RAM (model size + \~2 GB overhead)
need help? [join our discord](https://discord.gg/screenpipe) — get recommendations on models and configs from the community.
# OpenClaw - AI assistant with screenpipe memory
Source: https://docs.screenpi.pe/openclaw
Connect screenpipe to OpenClaw to give your personal AI assistant access to everything you've seen and heard.
[OpenClaw](https://openclaw.ai) is a self-hosted personal AI assistant that connects to your messaging apps (WhatsApp, Telegram, Discord, iMessage, etc.) and can take actions on your behalf.
With screenpipe, OpenClaw can recall what you've seen on screen, reference past conversations, and answer questions about your digital history.
## same machine
If OpenClaw and screenpipe run on the same machine, setup is straightforward.
### MCP (recommended)
Add screenpipe to your OpenClaw MCP config:
```json theme={null}
{
"mcpServers": {
"screenpipe": {
"command": "npx",
"args": ["-y", "screenpipe-mcp"]
}
}
}
```
Restart OpenClaw — it will now have access to your screen history, audio transcriptions, and more.
You can test the MCP server independently:
```bash theme={null}
npx @modelcontextprotocol/inspector npx screenpipe-mcp
```
### custom skill (alternative)
Create `~/openclaw/skills/screenpipe/skill.md`:
````markdown theme={null}
---
name: screenpipe
description: Search screen recordings and audio transcriptions from the user's computer
tools:
- Bash
---
# screenpipe skill
Query the user's screen history via the local API at http://localhost:3030.
## search content
```bash
curl -s "http://localhost:3030/search?q=QUERY&limit=20"
```
## get recent activity
```bash
curl -s "http://localhost:3030/search?limit=10&content_type=ocr"
```
````
Restart OpenClaw to load the skill.
## different machines
If OpenClaw runs on a remote server (e.g., a VPS) and screenpipe runs on your laptop/desktop, you need to expose screenpipe to the remote machine. There are two approaches: **MCP over HTTP** (recommended) and **REST API with custom skills** (fallback).
### MCP over HTTP (recommended)
Screenpipe's MCP server supports [Streamable HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http), so OpenClaw can connect to it as a proper MCP server over the network — all tools, fully described, structured calls. No curl workarounds needed.
**Step 1:** On your computer (where screenpipe runs), start the HTTP MCP server:
```bash theme={null}
npx screenpipe-mcp-http --port 3031
```
This starts the MCP server on `http://localhost:3031/mcp`.
**Step 2:** Expose port 3031 to your OpenClaw server using one of:
**via Tailscale:**
[Tailscale](https://tailscale.com) creates a private network between your devices — no port forwarding, no config files.
1. Install Tailscale on both machines
2. Run `tailscale up` on both
3. Find your computer's Tailscale IP: `tailscale ip`
4. Verify: `curl http://TAILSCALE_IP:3031/health`
Your MCP URL will be `http://TAILSCALE_IP:3031/mcp`.
**via SSH tunnel:**
Forward the MCP port over SSH:
```bash theme={null}
# on your computer (where screenpipe runs)
ssh -R 3031:localhost:3031 user@openclaw-server
# or persistent with autossh
autossh -M 0 -f -N -R 3031:localhost:3031 user@openclaw-server
```
Your MCP URL will be `http://localhost:3031/mcp` (on the OpenClaw server).
**via Cloudflare Tunnel:**
```bash theme={null}
# on your computer
cloudflared tunnel --url http://localhost:3031
```
Your MCP URL will be the `https://` URL cloudflared gives you, with `/mcp` appended.
**Step 3:** Configure OpenClaw to use the remote MCP server:
```json theme={null}
{
"mcpServers": {
"screenpipe": {
"url": "http://ADDRESS:3031/mcp"
}
}
}
```
Replace `ADDRESS` with the appropriate address from step 2 (the Tailscale IP, `localhost` for the SSH tunnel, or the Cloudflare URL).
Restart OpenClaw — it now has full MCP access to screenpipe with all tools, parameters, and structured responses. No custom skills needed.
The HTTP MCP transport means OpenClaw gets the same experience remotely as it does locally — tool discovery, typed parameters, structured results. This is the same approach [Pieces uses](https://github.com/pieces-app/pro_tips/blob/main/guides/MCP/Connecting%20to%20PiecesOS%20from%20the%20Outside%20World%20via%20Ngrok.md) to expose MCP over the network.
### REST API with custom skills (fallback)
If OpenClaw doesn't support HTTP MCP transport, you can fall back to custom skills that call screenpipe's REST API directly. This works but is less structured — OpenClaw won't get MCP tool discovery or typed parameters.
#### option A: Tailscale + REST
1. Install [Tailscale](https://tailscale.com) on both machines, run `tailscale up` on both
2. Find your Tailscale IP: `tailscale ip`
3. Verify: `curl http://TAILSCALE_IP:3030/health`
4. Create `~/openclaw/skills/screenpipe/skill.md`:
````markdown theme={null}
---
name: screenpipe
description: Search screen recordings and audio transcriptions from the user's computer
tools:
- Bash
---
# screenpipe skill
Query the user's screen history via their screenpipe REST API at http://TAILSCALE_IP:3030.
## search content
```bash
curl -s "http://TAILSCALE_IP:3030/search?q=QUERY&limit=20"
```
## filter by type
```bash
# screen content (accessibility + OCR)
curl -s "http://TAILSCALE_IP:3030/search?q=QUERY&content_type=all"
# audio transcriptions
curl -s "http://TAILSCALE_IP:3030/search?q=QUERY&content_type=audio"
# user input (keyboard, clicks, clipboard)
curl -s "http://TAILSCALE_IP:3030/search?q=QUERY&content_type=input"
```
## filter by app
```bash
curl -s "http://TAILSCALE_IP:3030/search?q=QUERY&app_name=Chrome"
```
## filter by time
```bash
curl -s "http://TAILSCALE_IP:3030/search?q=QUERY&start_time=2024-01-15T10:00:00Z&end_time=2024-01-15T18:00:00Z"
```
## activity summary (lightweight overview)
```bash
curl -s "http://TAILSCALE_IP:3030/activity-summary?start_time=2024-01-15T10:00:00Z&end_time=2024-01-15T18:00:00Z"
```
## list meetings
```bash
curl -s "http://TAILSCALE_IP:3030/meetings?limit=20"
```
## search UI elements
```bash
curl -s "http://TAILSCALE_IP:3030/elements?q=QUERY&limit=50"
```
````
Replace `TAILSCALE_IP` with your actual Tailscale IP (e.g., `100.64.x.x`).
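to avoid editing the file by hand, the placeholder can be substituted with `sed` (a sketch; `fill_tailscale_ip` is a hypothetical helper, not part of screenpipe):

```bash theme={null}
# fill_tailscale_ip: replace the literal TAILSCALE_IP placeholder in a skill file
fill_tailscale_ip() {
  # $1 = your tailscale address, $2 = path to skill.md
  sed -i.bak "s/TAILSCALE_IP/$1/g" "$2"
}

# usage:
#   fill_tailscale_ip "$(tailscale ip -4 | head -n1)" ~/openclaw/skills/screenpipe/skill.md
```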
#### option B: SSH tunnel + REST
```bash theme={null}
# on your computer (where screenpipe runs)
ssh -R 3030:localhost:3030 user@openclaw-server
# or persistent
autossh -M 0 -f -N -R 3030:localhost:3030 user@openclaw-server
```
This makes screenpipe available at `localhost:3030` on your OpenClaw server. Then use the same skill as the "same machine" custom skill setup, replacing the URL accordingly.
#### option C: Cloudflare Tunnel + REST
```bash theme={null}
# on your computer
cloudflared tunnel --url http://localhost:3030
```
Then point your OpenClaw skill at the tunnel URL.
## available MCP tools
When connected via MCP (stdio or HTTP), OpenClaw gets access to these tools:
| Tool | Description |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| **search-content** | Search screen text (accessibility/OCR), audio transcriptions, user input. Supports time range, app, window, and speaker filters. |
| **activity-summary** | Lightweight overview of app usage, audio speakers, and recent texts for a time range (\~200 tokens). |
| **search-elements** | Search structured UI elements (buttons, links, text fields) from the accessibility tree. |
| **frame-context** | Get full accessibility tree, URLs, and text for a specific frame. |
| **list-meetings** | List detected meetings with duration, app, and attendees. |
| **export-video** | Export screen recordings as MP4 for a time range. |
The MCP server also provides **resources** (`screenpipe://context` for current time, `screenpipe://guide` for search strategy) and **prompts** (search-recent, find-in-app, meeting-notes).
## example prompts
Once configured, message OpenClaw from any chat app:
* "what was I reading about yesterday afternoon?"
* "find the slack message from john about the deployment"
* "what code was I looking at in cursor this morning?"
* "summarize my meetings from last week"
* "what tabs did I have open when researching that bug?"
* "when did I last see the budget spreadsheet?"
* "what did I copy to clipboard recently?"
* "show me what buttons I clicked in Figma today"
## troubleshooting
**MCP not connecting?**
* Test the server: `npx @modelcontextprotocol/inspector npx screenpipe-mcp`
* Check screenpipe is running: `curl http://localhost:3030/health`
**remote machine can't reach screenpipe?**
* Check Tailscale is connected: `tailscale status`
* Check SSH tunnel is up: `curl http://localhost:3030/health` on the remote
* Make sure screenpipe is running on your computer
**no results from queries?**
* Verify screenpipe is running: `curl http://localhost:3030/health`
* Ensure screenpipe has screen recording permissions
# OpenCode - terminal AI with screen memory
Source: https://docs.screenpi.pe/opencode
Integrate screenpipe with OpenCode to give your terminal AI assistant access to your screen history, audio transcriptions, and app context.
[OpenCode](https://github.com/opencode-ai/opencode) is a powerful terminal-based AI coding assistant written in Go. it implements the [Agent Skills](https://opencode.ai/docs/skills) open standard, which means screenpipe skills work out of the box.
## setup
OpenCode discovers skills from multiple locations. copy the screenpipe skills to any of these:
### option 1: project-level
```bash theme={null}
# copy to current project
mkdir -p .opencode/skills
cp -r /path/to/screenpipe/.claude/agents/* .opencode/skills/
```
### option 2: user-level (global)
```bash theme={null}
# copy to home directory for all projects
mkdir -p ~/.opencode/skills
cp -r /path/to/screenpipe/.claude/agents/* ~/.opencode/skills/
```
### option 3: clone directly
```bash theme={null}
# clone screenpipe and symlink skills
git clone https://github.com/screenpipe/screenpipe ~/screenpipe
ln -s ~/screenpipe/.claude/agents ~/.opencode/skills/screenpipe
```
OpenCode uses the same Agent Skills format as Claude Code. files in `.claude/agents/` work in `.opencode/skills/` and vice versa.
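a quick way to sanity-check an install is to verify each skill file begins with a YAML frontmatter fence (a sketch; `check_skills` is a hypothetical helper):

```bash theme={null}
# check_skills: verify every skill .md in a directory starts with a `---` frontmatter fence
check_skills() {
  find "$1" -name '*.md' | while read -r f; do
    if head -n 1 "$f" | grep -qx -- '---'; then
      echo "ok: $f"
    else
      echo "missing frontmatter: $f"
    fi
  done
}

# usage: check_skills ~/.opencode/skills
```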
## available skills
| skill | description |
| ------------------- | -------------------------------------------------------------------------------------------------------- |
| `screenpipe-query` | search screen OCR, audio transcriptions, and UI events (keyboard input, clicks, app switches, clipboard) |
| `screenpipe-health` | check status, diagnose issues, verify permissions |
| `screenpipe-logs` | retrieve and analyze screenpipe logs |
## usage
OpenCode automatically discovers installed skills. invoke them in your prompts:
```bash theme={null}
# start OpenCode
opencode
> @screenpipe-query find what I was reading about docker compose
> @screenpipe-health check if recording is working
> @screenpipe-logs show me errors from today
```
or let OpenCode choose automatically:
```bash theme={null}
> what was I working on in VS Code this morning?
# OpenCode will invoke screenpipe-query
> is my screen recording working?
# OpenCode will invoke screenpipe-health
```
## MCP alternative
OpenCode also supports MCP servers. if you prefer MCP over skills:
```bash theme={null}
# add to your opencode config
opencode config mcp add screenpipe "npx -y screenpipe-mcp"
```
skills are more token-efficient than MCP. the OpenCode team recommends skills for most use cases, with MCP for external API integrations.
## example workflows
**context-aware coding:**
```
> I was looking at a react component earlier that had a cool
> animation effect, use screenpipe to find it and help me
> implement something similar
```
**recall documentation:**
```
> use screenpipe to find that kubernetes docs page I was
> reading about pod scheduling
```
**meeting follow-up:**
```
> what did we discuss in the team call about the database
> migration? use screenpipe to find it
```
**debug from memory:**
```
> there was an error in my terminal earlier, use screenpipe
> to find it and help me fix it
```
## skill format reference
screenpipe skills follow the Agent Skills standard:
```markdown theme={null}
---
name: screenpipe-query
description: Query screen recordings and audio transcriptions
tools:
- Bash
- WebFetch
---
# Screenpipe Query Agent
Instructions for querying screenpipe data...
```
you can customize these skills or create your own following the same format.
## requirements
* screenpipe running on localhost:3030
* OpenCode installed (`go install github.com/opencode-ai/opencode@latest`)
* skills copied to `.opencode/skills/` or `~/.opencode/skills/`
## troubleshooting
**skills not discovered?**
* run `opencode skills list` to see available skills
* verify files are in correct location with valid yaml frontmatter
* check skill file ends in `.md`
**queries returning no data?**
* verify screenpipe is running: `curl http://localhost:3030/health`
* check data exists: `curl "http://localhost:3030/search?limit=1"`
* ensure screenpipe has screen recording permissions
**OpenCode not using skills?**
* mention the skill explicitly: `@screenpipe-query find...`
* check skill description matches your query intent
still stuck? [ask in our discord](https://discord.gg/screenpipe).
# pipes
Source: https://docs.screenpi.pe/pipes
scheduled AI agents that run on your screen data
## quick start — paste this into claude code
copy this prompt into [claude code](https://docs.anthropic.com/en/docs/claude-code), [cursor](https://cursor.com), or any AI coding assistant:
```text create a pipe theme={null}
create a screenpipe pipe that [DESCRIBE WHAT YOU WANT].
## what is screenpipe?
screenpipe is a desktop app that continuously records your screen (OCR) and audio (transcription).
it runs a local API at http://localhost:3030 that lets you query everything you've seen, said, or heard.
## what is a pipe?
a pipe is a scheduled AI agent defined as a single markdown file: ~/.screenpipe/pipes/{name}/pipe.md
every N minutes, screenpipe runs a coding agent (like pi or claude-code) with the pipe's prompt.
the agent can query your screen data, write files, call external APIs, send notifications, etc.
## pipe.md format
the file starts with YAML frontmatter, then the prompt body:
---
schedule: every 30m
enabled: true
---
Your prompt instructions here...
## context header
before execution, screenpipe prepends a context header to the prompt with:
- time range (start/end timestamps based on the schedule interval)
- current date
- user's timezone
- screenpipe API base URL
- output directory
the AI agent uses this context to query the right time range. no template variables needed in the prompt.
## screenpipe search API
the agent queries screen data via the local REST API:
curl "http://localhost:3030/search?limit=20&content_type=all&start_time=<start>&end_time=<end>"
### query parameters
- q: text search query (optional)
- content_type: "all" | "ocr" | "audio" | "input" | "accessibility"
- limit: max results (default 20)
- offset: pagination offset
- start_time / end_time: ISO 8601 timestamps
- app_name: filter by app (e.g. "chrome", "cursor")
- window_name: filter by window title
- browser_url: filter by URL (e.g. "github.com")
- min_length / max_length: filter by text length
- speaker_ids: filter audio by speaker IDs
### vision results (what was on screen)
each result contains:
- text: the extracted text visible on screen
- app_name: which app was active (e.g. "Arc", "Cursor", "Slack")
- window_name: the window title
- browser_url: the URL if it was a browser
- timestamp: when it was captured
- file_path: path to the video frame
- focused: whether the window was focused
### audio results (what was said/heard)
each result contains:
- transcription: the spoken text
- speaker_id: numeric speaker identifier
- timestamp: when it was captured
- device_name: which audio device (mic or system audio)
- device_type: "input" (microphone) or "output" (system audio)
### accessibility results (accessibility tree text)
each result contains:
- text: text from the accessibility tree
- app_name: which app was active
- window_name: the window title
- timestamp: when it was captured
### input results (user actions)
query via: curl "http://localhost:3030/search?content_type=input&app_name=Slack&limit=50&start_time=<start>&end_time=<end>"
event types: text (keyboard input), click, app_switch, window_focus, clipboard, scroll
## secrets
store API keys in a .env file next to pipe.md (never in the prompt itself):
echo "API_KEY=your_key" > ~/.screenpipe/pipes/my-pipe/.env
reference in prompt: source .env && curl -H "Authorization: Bearer $API_KEY" ...
## after creating the file
use the desktop app: go to **settings → pipes** to install, enable, run, and view logs.
or use the REST API:
install: curl -X POST http://localhost:3030/pipes/install -H "Content-Type: application/json" -d '{"source": "~/.screenpipe/pipes/my-pipe"}'
enable: curl -X POST http://localhost:3030/pipes/my-pipe/enable -H "Content-Type: application/json" -d '{"enabled": true}'
test: curl -X POST http://localhost:3030/pipes/my-pipe/run
logs: curl http://localhost:3030/pipes/my-pipe/logs
```
replace `[DESCRIBE WHAT YOU WANT]` with your use case — e.g. "tracks my time in toggl based on what apps I'm using", "writes daily summaries to obsidian", "sends me a slack message if I've been on twitter for more than 30 minutes".
***
## what are pipes?
pipes are automated workflows that run on your screenpipe data at regular intervals. each pipe is a markdown file with a prompt and a schedule. under the hood, screenpipe runs a coding agent (like [pi](https://github.com/badlogic/pi-mono)) that can query your screen data, call APIs, write files, and take actions.
**a pipe is just one file: `pipe.md`**
```
~/.screenpipe/pipes/
├── daily-journal/
│ └── pipe.md
├── toggl-sync/
│ ├── pipe.md
│ └── .env # secrets (api keys)
└── obsidian-sync/
└── pipe.md
```
## creating a pipe
create a folder in `~/.screenpipe/pipes/` with a `pipe.md` file:
```bash theme={null}
mkdir -p ~/.screenpipe/pipes/my-pipe
cat > ~/.screenpipe/pipes/my-pipe/pipe.md << 'EOF'
---
schedule: every 30m
enabled: true
---
Summarize my screen activity for the last 30 minutes.
Query screenpipe at http://localhost:3030/search using the time range from the context header.
Write the summary to ./output/<date>.md
EOF
# then use the desktop app (settings → pipes) to install, enable, and test
# or use the REST API:
# curl -X POST http://localhost:3030/pipes/install -H "Content-Type: application/json" -d '{"source": "~/.screenpipe/pipes/my-pipe"}'
# curl -X POST http://localhost:3030/pipes/my-pipe/enable -H "Content-Type: application/json" -d '{"enabled": true}'
# curl -X POST http://localhost:3030/pipes/my-pipe/run
```
## pipe.md format
every pipe.md starts with YAML frontmatter between `---` markers, followed by the prompt:
```markdown theme={null}
---
schedule: every 2h
enabled: true
---
Your prompt goes here. This is what the AI agent will execute.
You can reference screenpipe's API, write files, call external APIs, etc.
```
### frontmatter fields
| field | required | default | description |
| ---------- | -------- | -------- | ------------------------------------------------------------------- |
| `schedule` | no | `manual` | `every 30m`, `every 2h`, `daily`, cron (`0 */2 * * *`), or `manual` |
| `enabled` | no | `true` | whether the scheduler runs this pipe |
### context header
before execution, screenpipe prepends a context header to the prompt:
```
Time range: 2026-02-12T13:00:00Z to 2026-02-12T14:00:00Z
Date: 2026-02-12
Timezone: PST (UTC-08:00)
Output directory: ./output/
Screenpipe API: http://localhost:3030
```
the AI agent uses these values to query the right time range and format output correctly. no template variables needed — just write plain instructions.
### schedule formats
| format | example | description |
| -------- | ----------------------- | --------------------------------- |
| interval | `every 30m`, `every 2h` | runs at fixed intervals |
| daily | `daily` | runs once per day |
| cron | `0 */2 * * *` | standard 5-field cron expression |
| manual | `manual` | only runs when triggered manually |
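for intuition, the interval format maps to a fixed number of seconds between runs. an illustrative sketch (screenpipe's actual parser is internal to the pipe manager):

```bash theme={null}
# interval_seconds: convert "every Nm" / "every Nh" to seconds (illustrative only)
interval_seconds() {
  case "$1" in
    every\ *m) n=${1#every }; echo $(( ${n%m} * 60 )) ;;
    every\ *h) n=${1#every }; echo $(( ${n%h} * 3600 )) ;;
    *) echo "unsupported: $1" >&2; return 1 ;;
  esac
}
```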
## manage pipes
use the desktop app (**settings → pipes**) or the REST API described below.
## http api
when screenpipe is running, pipes are also manageable via the local API:
```bash theme={null}
# list all pipes
curl http://localhost:3030/pipes
# run a pipe
curl -X POST http://localhost:3030/pipes/my-pipe/run
# enable/disable
curl -X POST http://localhost:3030/pipes/my-pipe/enable \
-H "Content-Type: application/json" \
-d '{"enabled": true}'
# update pipe content
curl -X POST http://localhost:3030/pipes/my-pipe/config \
-H "Content-Type: application/json" \
-d '{"raw_content": "---\nschedule: every 1h\nenabled: true\n---\n\nYour prompt here..."}'
# view logs
curl http://localhost:3030/pipes/my-pipe/logs
# install from URL
curl -X POST http://localhost:3030/pipes/install \
-H "Content-Type: application/json" \
-d '{"source": "https://example.com/pipe.md"}'
```
## app ui
go to **settings → pipes** to see all installed pipes, toggle them on/off, run them manually, edit the pipe.md directly, select an AI preset, and view logs.
## examples
### time tracking (toggl)
```markdown theme={null}
---
schedule: every 1m
enabled: true
---
Automatically update my Toggl time tracking based on screen activity.
1. Query screenpipe search API for the time range in the context header
2. Read API key: source .env
3. Check current Toggl timer
4. If activity changed, stop old timer and start new one
Activity rules:
- VSCode/Cursor/Terminal → "coding"
- Chrome with GitHub → "code review"
- Slack/Discord → "communication"
- Zoom/Meet → "meeting"
```
### daily journal (obsidian)
```markdown theme={null}
---
schedule: every 2h
enabled: true
---
Summarize my screen activity into a daily journal entry.
Query screenpipe search API for the time range in the context header.
Write to ~/obsidian-vault/screenpipe/<date>.md
Use [[wiki-links]] for people and projects.
Include timeline deep links: [time](screenpipe://timeline?timestamp=<timestamp>)
```
### standup report
```markdown theme={null}
---
schedule: daily
enabled: true
---
Generate a standup report from yesterday's screen activity.
Format: what I did, what I'm doing, blockers.
Write to ./output/<date>.md
```
## AI presets
in the screenpipe app, go to **settings → AI settings** to configure presets (model + provider combinations). in **settings → pipes**, you can assign a preset to each pipe — this overrides the model/provider in the frontmatter.
screenpipe auto-creates a default preset using screenpipe cloud.
## AI providers
by default, pipes use **screenpipe cloud** — no setup needed if you have a screenpipe account.
to use your own AI subscription (Claude Pro, ChatGPT Plus, Gemini, or API keys), pipes reuse [pi's native auth system](https://github.com/badlogic/pi-mono):
### option 1: subscription (free with existing plan)
```bash theme={null}
# run pi interactively and use /login
pi
# then type: /login
# select Claude Pro, ChatGPT Plus, GitHub Copilot, or Google Gemini
```
### option 2: API key
add to `~/.pi/agent/auth.json`:
```json theme={null}
{
"anthropic": { "type": "api_key", "key": "sk-ant-..." },
"openai": { "type": "api_key", "key": "sk-..." },
"google": { "type": "api_key", "key": "..." }
}
```
or set environment variables: `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GEMINI_API_KEY`.
### using in a pipe
add `provider` to your pipe.md frontmatter:
```yaml theme={null}
---
schedule: every 30m
provider: anthropic
model: claude-haiku-4-5@20251001
---
```
**provider resolution:** preset (if set) → frontmatter provider/model → screenpipe cloud.
## secrets
store API keys in `.env` files inside the pipe folder:
```bash theme={null}
echo "TOGGL_API_KEY=your_key_here" > ~/.screenpipe/pipes/toggl-sync/.env
```
the pipe prompt can reference them: `source .env && curl -u $TOGGL_API_KEY:api_token ...`
**never put secrets in pipe.md** — the prompt may be visible in logs.
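the `source .env` pattern can be made explicit with a small helper that exports every key so child processes see them (a sketch; `load_env` is hypothetical, and the Toggl URL in the usage note is shown only as an example):

```bash theme={null}
# load_env: export every KEY=value pair from a .env file so child processes see them
load_env() {
  set -a      # auto-export all variables assigned while sourcing
  . "$1"
  set +a
}

# usage inside a pipe prompt:
#   load_env ./.env && curl -s -u "$TOGGL_API_KEY:api_token" https://api.track.toggl.com/api/v9/me
```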
## architecture
```
pipe.md (prompt + config)
→ pipe manager (parses frontmatter, schedules runs)
→ agent executor (pi, claude-code, etc.)
→ agent queries screenpipe API + executes actions
→ output saved to pipe folder
```
* **agent ≠ model**: the agent is the CLI tool (pi, claude-code). the model is the LLM (haiku, opus, llama).
* **one pipe runs at a time** (global semaphore prevents overlap)
* **lookback = schedule interval** (capped at 8h to prevent context overflow)
* **logs saved** to `~/.screenpipe/pipes/{name}/logs/` as JSON
## built-in pipes
screenpipe ships with three template pipes (disabled by default):
* **obsidian-sync** — sync screen activity to obsidian vault as daily logs
* **reminders** — scan activity for todos and create apple reminders (macOS)
* **idea-tracker** — surface startup ideas from your browsing + market trends
need help building pipes? [join our discord](https://discord.gg/screenpipe) — share your pipes, get feedback, and see what others are building.
## get screenpipe
screenpipe includes pipes, cloud AI, and more out of the box. lifetime plans available — no subscriptions needed.
[download screenpipe →](https://screenpi.pe/onboarding)
# search your screen history
Source: https://docs.screenpi.pe/search-screen-history
Find any text, conversation, or activity from your screen history with screenpipe. Search screen history on Mac, Windows, and Linux. Local AI-powered screen recorder.
screenpipe records your screen 24/7 and lets you search through everything. find that code snippet, conversation, or document you saw last week.
## how it works
1. **continuous capture** — screenpipe records your screen at configurable intervals (default: 1 frame/second)
2. **OCR extraction** — text is extracted from every frame using native OCR engines
3. **local storage** — everything stored in a local SQLite database
4. **search API** — query via `localhost:3030/search` with filters
## search examples
### find by text
```bash theme={null}
curl "http://localhost:3030/search?q=project+apollo+budget&content_type=vision&limit=20"
```
### find by app
```bash theme={null}
curl "http://localhost:3030/search?app_name=Code&content_type=vision&limit=20"
```
### find by time range
```bash theme={null}
curl "http://localhost:3030/search?q=standup&start_time=2026-02-10T14:00:00Z&end_time=2026-02-10T18:00:00Z"
```
### find by browser URL
```bash theme={null}
curl "http://localhost:3030/search?browser_url=github.com&limit=10"
```
### combine filters
```bash theme={null}
curl "http://localhost:3030/search?q=deployment&app_name=Slack&content_type=vision&limit=10"
```
## search parameters
| param | type | description |
| -------------------- | -------- | ------------------------------------------------------------------ |
| `q` | string | search query |
| `limit` | int | max results (default 20) |
| `offset` | int | pagination offset |
| `content_type` | string | `vision`, `audio`, `accessibility`, `input`, `all` |
| `start_time` | ISO 8601 | filter by start time |
| `end_time` | ISO 8601 | filter by end time |
| `app_name` | string | filter by app name |
| `window_name` | string | filter by window title |
| `browser_url` | string | filter by browser URL |
| `min_length` | int | minimum text length |
| `max_length` | int | maximum text length |
| `max_content_length` | int | truncate each result's text to this many chars (middle-truncation) |
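the parameters combine as ordinary query-string fields. a small URL-builder sketch (`search_url` is a hypothetical helper; remember to URL-encode free-text queries):

```bash theme={null}
# search_url: build a /search URL from query, limit, and offset
search_url() {
  printf 'http://localhost:3030/search?q=%s&limit=%s&offset=%s' "$1" "$2" "$3"
}

# usage (page through results 20 at a time):
#   curl -s "$(search_url deployment 20 0)"
#   curl -s "$(search_url deployment 20 20)"
```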
## using the desktop app
the easiest way to search is the screenpipe desktop app's built-in search:
1. open screenpipe
2. use the search bar or timeline view
3. scroll through your day visually
4. select content to chat with AI about it
## search tips
* **be specific**: "slack message from john about deployment" > "deployment"
* **use time context**: combine `start_time` and `end_time` for precision
* **combine filters**: app name + time range + keywords
## privacy
* all search happens locally on your device
* no data leaves your machine
* control what's recorded with `--ignored-windows` and `--included-windows`
questions? [join our discord](https://discord.gg/screenpipe).
## get screenpipe
screenpipe includes 24/7 screen recording, AI search, cloud sync, and more. lifetime plans available — no subscriptions needed.
[download screenpipe →](https://screenpi.pe/onboarding)
# teams — share configs with end-to-end encryption
Source: https://docs.screenpi.pe/teams
Push pipe configurations and content filters to your team. Everything is encrypted client-side — the server only sees encrypted blobs.
teams let admins push pipe configurations and recording filters to all members. everything is end-to-end encrypted using AES-256-GCM — the screenpipe server never sees your configs in plaintext.
## what you can share
* **pipes**: scheduled AI agents (pipe.md configs), so everyone runs the same automations
* **window filters**: ignored/included window lists, so the whole team has consistent privacy rules
* **URL filters**: ignored URL patterns (e.g. banking sites), to enforce org-wide recording policies
## how security works
teams use **AES-256-GCM** encryption. the encryption key is generated on the admin's device and never sent to our server. members receive the key through the invite link (shared out-of-band via a secure channel like slack DM or signal).
1. **key generation**: a 256-bit AES-GCM key is generated locally using the Web Crypto API. this key is stored in the Tauri secure store (`~/.screenpipe/store.bin`) — not in localStorage or anywhere web-accessible.
2. **invite link**: the link contains the team ID and the base64-encoded encryption key: `screenpipe://join-team?team_id=...&key=...`. this is the only time the key is transmitted — via the link itself, not through our server.
3. **joining**: when a member opens the invite link, the key is imported and stored in their local Tauri secure store. our server only records the membership — it never sees the key.
4. **pushing configs**: when an admin pushes a pipe or filter config, it's encrypted locally with AES-256-GCM using a random 12-byte nonce. only the encrypted blob and nonce are sent to the server.
5. **syncing**: team members download the encrypted configs and decrypt them on-device using the shared key. decrypted configs are applied to local settings automatically.
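the key and nonce sizes involved can be illustrated with openssl. note this is only a sketch: the app itself generates the key with the Web Crypto API, not openssl, `TEAM_ID` is a placeholder, and a real invite link would URL-encode the key:

```bash theme={null}
# generate a 256-bit key (base64) and a 12-byte GCM nonce, matching the sizes above
key=$(openssl rand -base64 32)   # 32 bytes = 256 bits, base64-encoded for the invite link
nonce=$(openssl rand -hex 12)    # fresh 12-byte nonce per encryption operation
echo "screenpipe://join-team?team_id=TEAM_ID&key=${key}"
echo "nonce: ${nonce}"
```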
## what the server stores vs what it can see
| data | stored on server | readable by server |
| ----------------------- | :--------------: | :---------------------------: |
| team name & member list | yes | yes |
| encrypted config blobs | yes | **no** |
| encryption nonces | yes | yes (but useless without key) |
| encryption key | **no** | **no** |
| decrypted pipe configs | **no** | **no** |
| decrypted filter lists | **no** | **no** |
## getting started
### create a team (admin)
1. go to **settings > team**
2. enter a team name and click **create team**
3. copy the invite link and share it with your team via a secure channel
the invite link contains your encryption key. share it only through a secure channel (e.g. signal, slack DM, in-person). anyone with this link can join and decrypt your team's configs.
### join a team (member)
1. open the invite link — screenpipe will handle it automatically via deep link
2. alternatively, go to **settings > team** and paste the invite link manually
### push filters to team (admin)
1. go to **settings > recording** and scroll to **filtering**
2. set up your ignored windows, included windows, or ignored URLs
3. click the **push to team** button on any filter card
pushed filters appear under the **team** tab and are automatically synced to all members.
### share a pipe to team (admin)
1. go to **settings > pipes**
2. click the **share** button next to any pipe
3. the pipe config (including its prompt and schedule) is encrypted and pushed to the team
if you update a pipe locally and share it again, the team copy is overwritten with your latest version. the model is last-push-wins — there's no merge.
### how filters sync for members
when a member visits the team tab, shared filters are automatically merged into their local recording settings:
* **team filters are additive** — they're combined with the member's own filters, not replaced
* **team-sourced entries show a badge** in the recording settings so members know which filters come from the team
* **members can't remove team filters** from their local settings while they're in the team — leaving the team clears them
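the additive merge can be pictured as a simple set union. these window lists are hypothetical and not the real config schema; the sketch just shows that team entries are combined with personal ones and duplicates collapse:

```bash theme={null}
# member's own ignored windows plus team-pushed ones, duplicates collapsed
personal="1Password
Banking"
team="1Password
Signal"
merged=$(printf '%s\n%s\n' "$personal" "$team" | sort -u)
echo "$merged"
# 1Password appears once; Banking and Signal are both kept
```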
## roles
| action | admin | member |
| ----------------------------- | :---: | :----: |
| create/delete team | yes | no |
| invite members | yes | no |
| remove members | yes | no |
| push configs (pipes, filters) | yes | no |
| delete shared configs | yes | no |
| receive shared configs | yes | yes |
| leave team | yes | yes |
## requirements
* screenpipe account (sign in at **settings > team**)
* screenpipe desktop app (teams use the Tauri secure store for key storage)
## technical details
* **encryption**: AES-256-GCM via the [Web Crypto API](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto)
* **key storage**: Tauri plugin-store (`~/.screenpipe/store.bin`), not localStorage
* **nonce**: 12-byte random per encryption operation (crypto.getRandomValues)
* **config types**: `pipe`, `window_filter`, `url_filter` (extensible)
* **sync**: automatic when the team tab loads; filter configs merge into local settings via the `useTeamSync` hook
* **auditability**: the implementation is open source, so you can audit the AES-256-GCM encryption code yourself
## faq
**what happens if i lose the invite link?**
the admin can always copy it again from **settings > team**. the encryption key is stored locally on the admin's device.
**can the screenpipe team read my configs?**
no. the server only stores encrypted blobs. the encryption key is never transmitted to our server.
**what if two admins push the same pipe name?**
last push wins. the newer version overwrites the older one. there's no merge or conflict resolution — the pipe is treated as a single document.
**what happens when i leave a team?**
all team-sourced filters are removed from your local settings. your personal filters remain unchanged.
**is the encryption key rotated?**
not currently. if you suspect the key is compromised, delete the team and create a new one with a fresh key.
questions? [join our discord](https://discord.gg/screenpipe).
# screenpipe vs Limitless - self-hosted alternative
Source: https://docs.screenpi.pe/vs-limitless
Compare screenpipe to Limitless.ai. screenpipe is the best limitless pendant alternative — self-hosted, open source screen recording vs wearable. Complete data ownership and privacy.
Limitless.ai offers an AI-powered pendant and app for recording meetings and conversations. screenpipe provides similar AI memory capabilities but entirely self-hosted and open source.
## comparison
| feature | screenpipe | limitless |
| ------------------ | -------------------------- | ---------------- |
| **hardware** | uses your existing devices | requires pendant |
| **open source** | yes (MIT) | no |
| **data location** | 100% local | cloud-based |
| **platforms** | macOS, Windows, Linux | iOS, web |
| **screen capture** | yes | no |
| **audio capture** | yes | yes |
| **self-hosted** | yes | no |
## why choose screenpipe?
### no hardware purchase
use your existing computer, microphone, and webcam. no additional hardware required.
### screen + audio
limitless only captures audio. screenpipe captures:
* screen content with OCR
* audio transcription
* UI elements (experimental)
### complete privacy
* all data stays on your device
* use local AI models (Ollama)
* audit the source code yourself
### no cloud dependency
screenpipe runs entirely on your device. no cloud account required, and it works offline.
### extensible
build custom integrations with the REST API or [pipes](/pipes):
```bash theme={null}
# query your screen data
curl "http://localhost:3030/search?content_type=all&limit=50"
```
## for meeting transcription
screenpipe handles meetings just as well — configure audio devices in the desktop app settings to capture system audio (meeting apps) and your microphone simultaneously.
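a sketch of the corresponding API call: pull today's audio transcriptions (`content_type=audio`, per the search docs). the `curl` line is commented so the snippet works without a running instance:

```bash theme={null}
# build a query for today's audio transcriptions
day=$(date -u +%Y-%m-%d)
url="http://localhost:3030/search?content_type=audio&start_time=${day}T00:00:00Z&limit=10"
echo "$url"
# curl "$url"   # run against a live screenpipe instance
```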
## self-hosting benefits
| limitless cloud | screenpipe local |
| --------------------- | ------------------- |
| data on their servers | data on your device |
| internet required | works offline |
| vendor lock-in | export anytime |
## get started
[download the desktop app](https://screenpi.pe)
***
## more comparisons
looking for detailed feature comparisons? check out our main comparison pages:
* [screenpipe vs Limitless (full comparison)](https://screenpi.pe/compare/limitless) - in-depth analysis with feature tables
* [screenpipe vs Microsoft Recall](https://screenpi.pe/compare/microsoft-recall) - privacy-focused alternative
* [screenpipe vs Rewind](https://screenpi.pe/compare/rewind) - open source alternative
* [all comparisons](https://screenpi.pe/compare) - see how screenpipe stacks up
## resources
* [AI meeting notes guide](https://screenpi.pe/resources/use-cases/ai-meeting-notes) - automatic transcription setup
* [local AI assistant setup](https://screenpi.pe/resources/use-cases/local-ai-assistant) - run AI 100% locally
* [screenpipe blog](https://screenpi.pe/blog) - latest updates and tutorials
## frequently asked questions
### how does screenpipe compare to Limitless?
screenpipe captures both screen and audio on your existing devices, while Limitless requires purchasing a pendant and only captures audio. screenpipe is open source, runs entirely locally, and stores all data on your device. Limitless is cloud-based and closed source. screenpipe also offers extensibility through pipes (AI agents) and a full REST API.
### do I need to buy hardware to use screenpipe?
no. screenpipe uses your existing computer, microphone, and webcam. no pendant, wearable, or special hardware required. just [download the desktop app](https://screenpi.pe/onboarding) and start recording.
### can screenpipe replace Limitless for meeting transcription?
yes. screenpipe captures system audio (Zoom, Meet, Teams) and your microphone simultaneously, with automatic transcription and speaker identification. configure audio devices in the desktop app settings. unlike Limitless, all transcriptions stay on your device.
# screenpipe vs Microsoft Recall - privacy-focused alternative
Source: https://docs.screenpi.pe/vs-recall
Compare screenpipe to Microsoft Recall. screenpipe is the best Windows Recall alternative — privacy-focused, open source, private screen recording that works on all platforms with no cloud dependency.
Microsoft Recall is a Windows 11 feature that captures screenshots and makes them searchable. screenpipe offers similar functionality but with complete privacy, open source code, and cross-platform support.
## comparison
| feature | screenpipe | microsoft recall |
| ----------------- | ---------------------------- | --------------------------- |
| **open source** | yes (MIT) | no |
| **platforms** | macOS, Windows, Linux | Windows 11 only |
| **data location** | 100% local | local (with cloud concerns) |
| **NPU required** | no | yes (Copilot+ PCs) |
| **privacy** | auditable source code | closed source |
| **AI provider** | your choice (local or cloud) | Microsoft AI |
| **extensibility** | plugins (pipes) | none |
## why choose screenpipe?
### true privacy
* open source: you can audit every line of code
* no Microsoft account required
* no telemetry or data collection
* use completely local AI (Ollama)
### works everywhere
* macOS, Windows, Linux
* no special hardware required
* runs on any modern computer
### you control the AI
* use Ollama for 100% local AI
* or connect to OpenAI, Claude, etc.
* your choice, not Microsoft's
### extensible
build custom integrations with the REST API or [pipes](/pipes):
```bash theme={null}
curl "http://localhost:3030/search?q=meeting+notes&content_type=all"
```
## recall privacy concerns
Microsoft Recall has faced criticism for:
* storing sensitive data (passwords, financial info)
* potential security vulnerabilities
* data being accessible to other apps
* unclear data handling policies
screenpipe addresses these:
* **PII removal**: optional automatic redaction of sensitive data
* **window filtering**: exclude apps like password managers
* **local-only**: data never leaves your device
* **open source**: security researchers can audit the code
## filtering sensitive content
in the desktop app settings, you can exclude specific windows (like password managers and banking apps) and enable PII removal to automatically redact sensitive data.
## get started
[download the desktop app](https://screenpi.pe)
***
## more comparisons
looking for detailed feature comparisons? check out our main comparison pages:
* [screenpipe vs Microsoft Recall (full comparison)](https://screenpi.pe/compare/microsoft-recall) - in-depth privacy analysis
* [screenpipe vs Limitless](https://screenpi.pe/compare/limitless) - hardware vs software approach
* [screenpipe vs Rewind](https://screenpi.pe/compare/rewind) - open source alternative
* [all comparisons](https://screenpi.pe/compare) - see how screenpipe stacks up
## resources
* [AI recall tools guide](https://screenpi.pe/resources/use-cases/ai-recall-tools) - how screen memory works
* [local AI assistant setup](https://screenpi.pe/resources/use-cases/local-ai-assistant) - complete privacy with Ollama
* [screenpipe blog](https://screenpi.pe/blog) - latest updates and tutorials
## frequently asked questions
### is there a privacy-focused alternative to Windows Recall?
yes. screenpipe is the privacy-focused, open source alternative to Microsoft Recall. unlike Recall, screenpipe is fully open source (MIT licensed), works on macOS, Windows, and Linux, requires no special hardware (no NPU needed), and stores all data 100% locally with no cloud dependency. you can use local AI models (Ollama) for complete privacy and audit every line of code.
### does screenpipe work on Windows without a Copilot+ PC?
yes. screenpipe runs on any modern Windows computer — no NPU or Copilot+ hardware required. it uses CPU-based OCR and works on Windows 10 and 11.
### how is screenpipe different from Windows Recall?
screenpipe is open source, cross-platform, and has no cloud dependency. Recall is closed source, Windows 11 only, requires Copilot+ hardware, and has faced privacy concerns. screenpipe also offers extensibility through pipes (scheduled AI agents) and a full REST API that Recall doesn't provide.
# screenpipe vs Rewind.ai - open source alternative
Source: https://docs.screenpi.pe/vs-rewind
Compare screenpipe to Rewind.ai. screenpipe is the best open source rewind alternative 2026. Rewind.ai replacement that's local-first, cross-platform, and extensible.
screenpipe is the open source alternative to Rewind.ai. after Rewind pivoted to Limitless and discontinued its desktop app, screenpipe continues to provide 24/7 screen memory with complete privacy.
## comparison
| feature | screenpipe | rewind.ai |
| ----------------- | --------------------- | ----------------------------------- |
| **open source** | yes (MIT) | no |
| **data location** | 100% local | local |
| **platforms** | macOS, Windows, Linux | macOS only |
| **status** | active | discontinued (pivoted to Limitless) |
| **extensibility** | plugins (pipes) | limited |
| **self-hosted** | yes | no |
| **API access** | full REST API | limited |
## why screenpipe?
### open source
screenpipe is MIT licensed. audit the code, contribute, or fork it. your data, your control.
### cross-platform
works on macOS, Windows, and Linux. rewind was mac-only.
### extensible
build custom plugins (pipes) with TypeScript and Next.js. integrate with any service.
### active development
16k+ GitHub stars, 80+ contributors. actively maintained and improved.
### privacy-first
* all processing happens locally
* use local LLMs (Ollama) for complete privacy
* no telemetry or data collection
* you can audit the source code
## migrating from rewind
if you were a Rewind user:
1. [install screenpipe](/getting-started)
2. use the built-in timeline view for a familiar interface
3. your future screen activity will be captured locally
## features rewind users love
* **timeline view**: scroll through your day visually
* **natural language search**: "find the email I was reading yesterday"
* **meeting transcription**: automatic transcription with speaker identification
* **AI integration**: connect to ChatGPT, Claude, or local LLMs
## get started
[download the desktop app](https://screenpi.pe)
***
## more comparisons
looking for detailed feature comparisons? check out our main comparison pages:
* [screenpipe vs Rewind (full comparison)](https://screenpi.pe/compare/rewind) - in-depth analysis
* [screenpipe vs Microsoft Recall](https://screenpi.pe/compare/microsoft-recall) - privacy-focused alternative
* [screenpipe vs Limitless](https://screenpi.pe/compare/limitless) - hardware vs software
* [all comparisons](https://screenpi.pe/compare) - see how screenpipe stacks up
## resources
* [open source rewind alternative guide](https://screenpi.pe/resources/use-cases/open-source-rewind-alternative) - complete migration guide
* [screenpipe blog](https://screenpi.pe/blog) - latest updates and tutorials
## frequently asked questions
### what is the best open source alternative to Rewind.ai?
screenpipe is the most popular open source alternative to Rewind.ai with 16k+ GitHub stars. it captures your screen 24/7, extracts text with OCR, transcribes audio, and makes everything searchable — all running locally on your device. unlike Rewind (which has been discontinued), screenpipe is actively maintained, works on macOS, Windows, and Linux, and is fully extensible with plugins (pipes) and a REST API.
### is Rewind.ai still available?
no. Rewind.ai pivoted to Limitless and discontinued the desktop screen recording app. screenpipe is the actively maintained replacement that provides the same 24/7 screen capture and search functionality, plus cross-platform support and open source code.
### can I migrate from Rewind to screenpipe?
yes. [install screenpipe](/getting-started), and it will start capturing your screen activity going forward. screenpipe provides a familiar timeline view and natural language search. while historical Rewind data can't be imported, all future activity will be captured and searchable locally.