## why AI needs memory

LLMs are stateless: they don't know what you were working on 5 minutes ago. screenpipe bridges this gap by:

- capturing everything on your screen 24/7
- extracting text via OCR
- making it searchable via a REST API on `localhost:3030`
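Any language with an HTTP client can hit that local API. Here is a minimal Python sketch using only the standard library; the `/search` endpoint and its `q`, `content_type`, and `limit` parameters are assumptions about the API schema, so check your running instance for the exact routes:

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:3030"


def build_search_url(query: str, content_type: str = "ocr", limit: int = 5) -> str:
    """Build a search URL; endpoint and parameter names are assumptions."""
    params = urllib.parse.urlencode(
        {"q": query, "content_type": content_type, "limit": limit}
    )
    return f"{BASE_URL}/search?{params}"


def search(query: str, content_type: str = "ocr", limit: int = 5) -> dict:
    """Query the local screenpipe API (requires screenpipe to be running)."""
    with urllib.request.urlopen(build_search_url(query, content_type, limit)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # the network call only works while screenpipe is running locally:
    # results = search("invoice", content_type="ocr")
    print(build_search_url("invoice"))
```

Since everything is served on localhost, no credentials or API keys are involved.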
## connect to AI

### MCP (recommended)
screenpipe has a built-in MCP server that works with Claude Desktop, Cursor, and other MCP-compatible tools.

### pipes (scheduled agents)
pipes are AI agents that run on a schedule and act on your screen data automatically, like syncing to Obsidian, tracking time in Toggl, or sending daily summaries.

### direct API
any tool that can make HTTP requests can query screenpipe.

## use cases
| use case | how |
|---|---|
| "what was I working on?" | search by time range |
| "summarize today's meetings" | query audio transcriptions |
| "find that code snippet" | search OCR text |
| "auto-track my time" | toggl-sync pipe |
| "sync activity to notes" | obsidian-sync pipe |
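To make the last two rows concrete, here is a sketch of what an obsidian-sync-style pipe does under the hood: query the direct API on a schedule and append the results to a note. The `/search` endpoint, its parameters, and the `data`/`text` response shape are assumptions, and the vault path is a placeholder:

```python
import datetime
import json
import pathlib
import urllib.parse
import urllib.request

API = "http://localhost:3030/search"  # assumed endpoint
VAULT = pathlib.Path("~/obsidian/screenpipe.md").expanduser()  # placeholder path


def fetch_today(limit: int = 50) -> list:
    """Pull today's OCR results from the local API (screenpipe must be running)."""
    start = datetime.date.today().isoformat() + "T00:00:00Z"
    query = urllib.parse.urlencode(
        {"content_type": "ocr", "start_time": start, "limit": limit}
    )
    with urllib.request.urlopen(f"{API}?{query}") as resp:
        return json.load(resp).get("data", [])  # response shape is an assumption


def format_note(items: list) -> str:
    """Render results as a dated markdown section for the note."""
    lines = [f"## {datetime.date.today().isoformat()}"]
    lines += [f"- {item.get('text', '')[:120]}" for item in items]
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    # run this on a schedule (cron, launchd) to mimic a pipe:
    # VAULT.write_text(VAULT.read_text() + format_note(fetch_today()))
    print(format_note([{"text": "demo entry"}]))
```

Real pipes handle scheduling and configuration for you; this just shows that the whole loop is ordinary HTTP plus a file write.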
## privacy-first
- all data stays on your device
- use local LLMs (Ollama, LMStudio) for complete privacy
- filter what gets captured with `--ignored-windows` and `--included-windows`
- no data sent to the cloud unless you explicitly choose cloud providers
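A hedged example of the capture filtering above; the window titles are placeholders, and the exact flag syntax (repeated flags vs. comma-separated values) may differ, so check `screenpipe --help`:

```bash
# skip sensitive windows, only capture the ones you allow (titles are examples)
screenpipe \
  --ignored-windows "1Password" \
  --ignored-windows "Private Browsing" \
  --included-windows "Visual Studio Code"
```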
## next steps
- set up MCP server — connect to Claude, Cursor
- set up pipes — scheduled AI agents
- CLI reference — search parameters and API endpoints
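For the MCP setup step, Claude Desktop reads its MCP servers from `claude_desktop_config.json`. A minimal sketch of that file might look like the following; the `command` value is a placeholder, not the verified screenpipe invocation, so substitute the command from screenpipe's MCP docs:

```json
{
  "mcpServers": {
    "screenpipe": {
      "command": "path-to-screenpipe-mcp-binary",
      "args": []
    }
  }
}
```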