screenpipe provides an MCP (Model Context Protocol) server that lets AI assistants like Claude and Cursor search your screen recordings and audio transcriptions, and control your computer.

claude desktop

option 1: app extension (recommended)

the easiest way to connect screenpipe to Claude Desktop:
  1. open the screenpipe app
  2. go to settings → connections
  3. click “install extension” next to Claude Desktop
  4. Claude will open and prompt you to confirm the installation
  5. click install in Claude’s dialog
that’s it! try asking Claude: “what did I do in the last 5 minutes?”
make sure screenpipe is running when you use Claude with screenpipe features.

option 2: npx (manual)

if you prefer manual setup, edit your Claude Desktop config:
# open config file
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
add this configuration:
{
  "mcpServers": {
    "screenpipe": {
      "command": "npx",
      "args": ["-y", "screenpipe-mcp"]
    }
  }
}
restart Claude Desktop after saving.

option 3: from source

git clone https://github.com/mediar-ai/screenpipe
cd screenpipe/screenpipe-integrations/screenpipe-mcp
npm install
npm run build
then configure Claude Desktop:
{
  "mcpServers": {
    "screenpipe": {
      "command": "node",
      "args": ["/absolute/path/to/screenpipe-mcp/dist/index.js"]
    }
  }
}

claude code cli

to use with Claude Code:
# add screenpipe as an MCP server
claude mcp add screenpipe --transport stdio -- npx -y screenpipe-mcp

# make it available in all projects
claude mcp add screenpipe --transport stdio --scope user -- npx -y screenpipe-mcp

option 2: JSON config

claude mcp add-json screenpipe '{"type":"stdio","command":"npx","args":["-y","screenpipe-mcp"]}'

verify connection

# list configured servers
claude mcp list

# inside Claude Code
/mcp
if you have Claude Desktop configured, you can import those settings:
claude mcp add-from-claude-desktop

cursor ide

to use with Cursor:
  1. open cursor settings (gear icon)
  2. select mcp in the sidebar
  3. click “add new global mcp server”
  4. paste the npx configuration:
{
  "mcpServers": {
    "screenpipe": {
      "command": "npx",
      "args": ["-y", "screenpipe-mcp"]
    }
  }
}
  5. click the refresh icon next to mcp servers
  6. open a new composer in agent mode

available tools

search-content (all platforms)

search through recorded screen content (OCR) and audio transcriptions:
  • q (string): search query (optional; omit to get recent content)
  • content_type (string): all, ocr, audio, or ui (default: all)
  • limit (integer): max results (default: 10)
  • offset (integer): pagination offset (default: 0)
  • start_time (string): ISO 8601 UTC start time (e.g., 2024-01-15T10:00:00Z)
  • end_time (string): ISO 8601 UTC end time
  • app_name (string): filter by app (e.g., Google Chrome, Slack)
  • window_name (string): filter by window title
  • min_length (integer): minimum content length
  • max_length (integer): maximum content length
  • include_frames (boolean): include base64 screenshots (OCR only)
  • speaker_ids (string): comma-separated speaker IDs for audio filtering (e.g., 1,2,3)
  • speaker_name (string): filter audio by speaker name (case-insensitive partial match)

search-ui-events (macOS)

search UI input events captured via accessibility APIs. this is the third data modality alongside vision and audio.
  • q (string): search text content, app name, window title
  • event_type (string): click, text, scroll, key, app_switch, window_focus, clipboard
  • app_name (string): filter by application
  • window_name (string): filter by window title
  • start_time (string): ISO 8601 UTC start time
  • end_time (string): ISO 8601 UTC end time
  • limit (integer): max results (default: 50)
  • offset (integer): pagination offset
event types:
  • text - aggregated keyboard input (what was typed)
  • click - mouse clicks with accessibility element context
  • app_switch / window_focus - app and window switching
  • clipboard - copy/paste operations with content
  • scroll - scroll events with delta values
UI events require macOS with Accessibility permissions and “UI Events” enabled in screenpipe settings.

get-ui-event-stats (macOS)

get aggregated statistics of UI events by app and event type for productivity analysis:
  • start_time (string): ISO 8601 UTC start time
  • end_time (string): ISO 8601 UTC end time

export-video

create video exports from screen recordings for a specific time range:
  • start_time (string): ISO 8601 UTC start time (required)
  • end_time (string): ISO 8601 UTC end time (required)
  • fps (number): frames per second (default: 1.0)

example queries

try these in Claude or Cursor:
  • “search for any mentions of ‘project’ in my screen recordings”
  • “find audio transcriptions from the last hour”
  • “show me what was on my screen in VS Code yesterday”
  • “export a video of my screen from 10am to 11am today”
  • “find what John said in our meeting about the API”
  • “what did I type in Slack today?” (uses search-ui-events)
  • “show me my app usage statistics for the past 3 hours”
  • “what did I copy to my clipboard recently?”
  • “which apps did I switch between most during my work session?”

testing

test your setup with MCP Inspector:
npx @modelcontextprotocol/inspector npx screenpipe-mcp

requirements

  • screenpipe must be running on localhost:3030
  • Node.js >= 18.0.0

mintlify mcp (use our docs in cursor, claude)

add our docs as context in your AI assistant:
npx mint-mcp add mediar-ai

troubleshooting

mcp not connecting?
  • ensure screenpipe is running (screenpipe in terminal)
  • check it’s accessible at http://localhost:3030/health
  • restart Claude Desktop / Cursor after config changes
macos automation not working?
  • grant accessibility permissions in System Settings > Privacy & Security > Accessibility