screenpipe acts as a memory layer for AI — it gives LLMs context about what you’ve been doing on your computer.

why AI needs memory

LLMs are stateless — they don’t know what you were working on 5 minutes ago. screenpipe bridges this by:
  • capturing everything on your screen 24/7
  • extracting text via OCR
  • making it searchable via a REST API on localhost:3030

connect to AI

screenpipe has a built-in MCP server that works with Claude Desktop, Cursor, and other MCP-compatible tools:
{
  "mcpServers": {
    "screenpipe": {
      "command": "screenpipe-mcp",
      "args": []
    }
  }
}
see MCP server setup for details.

pipes (scheduled agents)

pipes are AI agents that run on a schedule and act on your screen data automatically — like syncing to Obsidian, tracking time in Toggl, or sending daily summaries.
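Real pipes are installed and scheduled by screenpipe itself, but the core loop is easy to picture: fetch recent screen data, condense it, write it somewhere. A minimal sketch in plain Python, where the entry shape (`app_name` keys) and the summary format are illustrative assumptions, not the pipe SDK:

```python
import datetime

def daily_summary(entries):
    """Collapse raw screen-capture entries into one summary line.

    `entries` is assumed to be a list of dicts with an "app_name" key,
    loosely mirroring what the /search endpoint returns; the real
    response shape may differ.
    """
    apps = sorted({e["app_name"] for e in entries})
    day = datetime.date.today().isoformat()
    return f"{day}: active in {', '.join(apps)}"

# a scheduled pipe would periodically fetch recent entries from
# localhost:3030 and append daily_summary(entries) to a notes file
```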

direct API

any tool that can make HTTP requests can query screenpipe:
# get recent screen activity
curl "http://localhost:3030/search?content_type=ocr&limit=20"

# search for specific content
curl "http://localhost:3030/search?q=meeting+notes&app_name=Slack&limit=10"
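The same queries work from any language. A minimal Python sketch using only the standard library, with parameter names taken from the curl examples above:

```python
import json
import urllib.parse
import urllib.request

BASE = "http://localhost:3030/search"

def build_search_url(**params):
    """Encode the non-empty query parameters into a /search URL."""
    clean = {k: v for k, v in params.items() if v is not None}
    return BASE + "?" + urllib.parse.urlencode(clean)

def search(**params):
    """Query the local screenpipe API and return the parsed JSON body."""
    with urllib.request.urlopen(build_search_url(**params)) as resp:
        return json.load(resp)

# usage, with screenpipe running locally:
#   search(q="meeting notes", app_name="Slack", limit=10)
```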

use cases

use case | how
"what was I working on?" | search by time range
"summarize today's meetings" | query audio transcriptions
"find that code snippet" | search OCR text
"auto-track my time" | toggl-sync pipe
"sync activity to notes" | obsidian-sync pipe
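For the "what was I working on?" case, a time-range query might look like this. Note that the `start_time`/`end_time` parameter names are assumptions in this sketch, so check the API reference for the exact spelling:

```python
import datetime
import urllib.parse

def recent_activity_url(hours_back=2, limit=50):
    """Build a /search URL scoped to the last `hours_back` hours.

    start_time/end_time are assumed parameter names; verify them
    against the screenpipe API docs before relying on them.
    """
    now = datetime.datetime.now(datetime.timezone.utc)
    start = now - datetime.timedelta(hours=hours_back)
    params = {
        "content_type": "ocr",
        "start_time": start.isoformat(),
        "end_time": now.isoformat(),
        "limit": limit,
    }
    return "http://localhost:3030/search?" + urllib.parse.urlencode(params)
```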

privacy-first

  • all data stays on your device
  • use local LLMs (Ollama, LM Studio) for complete privacy
  • filter what gets captured with --ignored-windows and --included-windows
  • no data sent to cloud unless you explicitly choose cloud providers

next steps