screenpipe provides an MCP (Model Context Protocol) server that lets AI assistants like Claude and Cursor search your screen recordings and audio transcriptions, and control your computer.

claude desktop

open the screenpipe app → settings → connections → click “install extension”. Claude will open and prompt you to confirm. click install and you’re done. try asking Claude: “what did I do in the last 5 minutes?”
make sure screenpipe is running when you use Claude with screenpipe features.

claude code

one command:
claude mcp add screenpipe --transport stdio -- npx -y screenpipe-mcp
to make it available across all your projects:
claude mcp add screenpipe --transport stdio --scope user -- npx -y screenpipe-mcp
verify with claude mcp list or /mcp inside Claude Code.

cursor

use the one-click install link to add screenpipe to cursor, or set it up manually: open cursor settings → mcp → add new global mcp server → set the command to npx with args ["-y", "screenpipe-mcp"].

other editors (cline, continue, windsurf, etc.)

any editor that supports MCP works with screenpipe. the server command is:
npx -y screenpipe-mcp
add this as a stdio MCP server in your editor’s MCP settings.
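as a rough sketch, most MCP-capable editors accept a JSON entry along these lines (the exact key names and file location vary by editor, so check its docs):

```json
{
  "mcpServers": {
    "screenpipe": {
      "command": "npx",
      "args": ["-y", "screenpipe-mcp"]
    }
  }
}
```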

available tools

search-content (all platforms)

search through recorded screen content, audio transcriptions, and user input events:
| parameter | type | description |
| --- | --- | --- |
| q | string | search query (optional - omit to get recent content) |
| content_type | string | all, ocr, audio, input, or accessibility (default: all) |
| limit | integer | max results (default: 10) |
| offset | integer | pagination offset (default: 0) |
| start_time | string | ISO 8601 UTC start time (e.g., 2024-01-15T10:00:00Z) |
| end_time | string | ISO 8601 UTC end time |
| app_name | string | filter by app (e.g., Google Chrome, Slack) |
| window_name | string | filter by window title |
| min_length | integer | minimum content length |
| max_length | integer | maximum content length |
| include_frames | boolean | include base64 screenshots (OCR only) |
| speaker_ids | string | comma-separated speaker IDs for audio filtering (e.g., 1,2,3) |
| speaker_name | string | filter audio by speaker name (case-insensitive partial match) |
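for instance, a search for what a particular colleague said in a morning meeting might send tool arguments like these (the values are purely illustrative):

```json
{
  "q": "standup",
  "content_type": "audio",
  "limit": 10,
  "start_time": "2024-01-15T10:00:00Z",
  "end_time": "2024-01-15T12:00:00Z",
  "speaker_name": "john"
}
```

omitting q returns the most recent content matching the other filters, which is handy for “what just happened?” style questions.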

export-video

create video exports from screen recordings for a specific time range:
| parameter | type | description |
| --- | --- | --- |
| start_time | string | ISO 8601 UTC start time (required) |
| end_time | string | ISO 8601 UTC end time (required) |
| fps | number | frames per second (default: 1.0) |
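a one-hour export at the default frame rate would send arguments like these (times are illustrative):

```json
{
  "start_time": "2024-01-15T10:00:00Z",
  "end_time": "2024-01-15T11:00:00Z",
  "fps": 1.0
}
```

a low fps like 1.0 keeps the output small; raise it if you need smoother playback.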

example queries

try these in Claude or Cursor:
  • “search for any mentions of ‘project’ in my screen recordings”
  • “find audio transcriptions from the last hour”
  • “show me what was on my screen in VS Code yesterday”
  • “export a video of my screen from 10am to 11am today”
  • “find what John said in our meeting about the API”
  • “what did I type in Slack today?” (uses content_type=input)
  • “what did I copy to my clipboard recently?” (uses content_type=input)

testing

test your setup with MCP Inspector:
npx @modelcontextprotocol/inspector npx screenpipe-mcp

requirements

  • screenpipe must be running on localhost:3030
  • Node.js >= 18.0.0

manual config (advanced)
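if the one-click installs don’t work, you can wire the server up yourself. for Claude Desktop, that means adding an entry to claude_desktop_config.json (on macOS typically at ~/Library/Application Support/Claude/claude_desktop_config.json); Cursor uses the same mcpServers shape in its global MCP config (typically ~/.cursor/mcp.json):

```json
{
  "mcpServers": {
    "screenpipe": {
      "command": "npx",
      "args": ["-y", "screenpipe-mcp"]
    }
  }
}
```

restart the app after editing the file so it picks up the new server.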

troubleshooting

mcp not connecting?
  • ensure screenpipe is running (run screenpipe in a terminal)
  • check it’s accessible at http://localhost:3030/health
  • restart Claude Desktop / Cursor after config changes
macos automation not working?
  • grant accessibility permissions in System Settings > Privacy & Security > Accessibility
still stuck? ask in our discord — the community can help debug MCP issues.