screenpipe provides an MCP (Model Context Protocol) server that lets AI assistants like Claude and Cursor search your screen recordings and audio transcriptions, and control your computer.
## claude desktop

### option 1: one-click install (recommended)

the easiest way to connect screenpipe to Claude Desktop:
- open the screenpipe app
- go to settings → connections
- click “install extension” next to Claude Desktop
- Claude will open and prompt you to confirm the installation
- click install in Claude’s dialog
that’s it! try asking Claude: “what did I do in the last 5 minutes?”
make sure screenpipe is running when you use Claude with screenpipe features.
### option 2: npx (manual)

if you prefer manual setup, edit your Claude Desktop config:

```bash
# macOS
code ~/Library/Application\ Support/Claude/claude_desktop_config.json

# windows (powershell)
notepad $env:AppData\Claude\claude_desktop_config.json
```
add this configuration:
```json
{
  "mcpServers": {
    "screenpipe": {
      "command": "npx",
      "args": ["-y", "screenpipe-mcp"]
    }
  }
}
```
restart Claude Desktop after saving.
### option 3: from source

```bash
git clone https://github.com/mediar-ai/screenpipe
cd screenpipe/screenpipe-integrations/screenpipe-mcp
npm install
npm run build
```
then configure Claude Desktop:
```json
{
  "mcpServers": {
    "screenpipe": {
      "command": "node",
      "args": ["/absolute/path/to/screenpipe-mcp/dist/index.js"]
    }
  }
}
```
## claude code cli

to use with Claude Code:

### option 1: CLI command (recommended)

```bash
# add screenpipe as an MCP server (current project)
claude mcp add screenpipe --transport stdio -- npx -y screenpipe-mcp

# or make it available in all projects
claude mcp add screenpipe --transport stdio --scope user -- npx -y screenpipe-mcp
```
### option 2: JSON config

```bash
claude mcp add-json screenpipe '{"type":"stdio","command":"npx","args":["-y","screenpipe-mcp"]}'
```
### verify connection

```bash
# list configured servers
claude mcp list
```

inside Claude Code, run `/mcp` to check server status.

if you have Claude Desktop configured, you can import those settings:

```bash
claude mcp add-from-claude-desktop
```
## cursor ide
to use with Cursor:
- open cursor settings (gear icon)
- select mcp in the sidebar
- click “add new global mcp server”
- paste the npx configuration:
```json
{
  "mcpServers": {
    "screenpipe": {
      "command": "npx",
      "args": ["-y", "screenpipe-mcp"]
    }
  }
}
```
- click the refresh icon next to mcp servers
- open a new composer in agent mode
## search-content (all platforms)
search through recorded screen content (OCR) and audio transcriptions:
| parameter | type | description |
|---|---|---|
| q | string | search query (optional - omit to get recent content) |
| content_type | string | all, ocr, audio, or ui (default: all) |
| limit | integer | max results (default: 10) |
| offset | integer | pagination offset (default: 0) |
| start_time | string | ISO 8601 UTC start time (e.g., 2024-01-15T10:00:00Z) |
| end_time | string | ISO 8601 UTC end time |
| app_name | string | filter by app (e.g., Google Chrome, Slack) |
| window_name | string | filter by window title |
| min_length | integer | minimum content length |
| max_length | integer | maximum content length |
| include_frames | boolean | include base64 screenshots (OCR only) |
| speaker_ids | string | comma-separated speaker IDs for audio filtering (e.g., 1,2,3) |
| speaker_name | string | filter audio by speaker name (case-insensitive partial match) |
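as a concrete illustration, here is the kind of argument object an assistant might send to search-content (hypothetical values; the parameter names come from the table above):

```json
{
  "q": "standup notes",
  "content_type": "ocr",
  "app_name": "Google Chrome",
  "start_time": "2024-01-15T10:00:00Z",
  "end_time": "2024-01-15T18:00:00Z",
  "limit": 5
}
```

omitting q returns the most recent content matching the remaining filters.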
## search-ui-events (macOS)
search UI input events captured via accessibility APIs. this is the third data modality alongside vision and audio.
| parameter | type | description |
|---|---|---|
| q | string | search text content, app name, window title |
| event_type | string | click, text, scroll, key, app_switch, window_focus, clipboard |
| app_name | string | filter by application |
| window_name | string | filter by window title |
| start_time | string | ISO 8601 UTC start time |
| end_time | string | ISO 8601 UTC end time |
| limit | integer | max results (default: 50) |
| offset | integer | pagination offset |
event types:

- `text` - aggregated keyboard input (what was typed)
- `click` - mouse clicks with accessibility element context
- `app_switch` / `window_focus` - app and window switching
- `clipboard` - copy/paste operations with content
- `scroll` - scroll events with delta values
UI events require macOS with Accessibility permissions and “UI Events” enabled in screenpipe settings.
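for example, a question like “what did I type in Slack today?” might translate into a search-ui-events call with arguments such as (hypothetical values; parameters from the table above):

```json
{
  "event_type": "text",
  "app_name": "Slack",
  "start_time": "2024-01-15T00:00:00Z",
  "limit": 20
}
```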
## get-ui-event-stats (macOS)
get aggregated statistics of UI events by app and event type for productivity analysis:
| parameter | type | description |
|---|---|---|
| start_time | string | ISO 8601 UTC start time |
| end_time | string | ISO 8601 UTC end time |
## export-video
create video exports from screen recordings for a specific time range:
| parameter | type | description |
|---|---|---|
| start_time | string | ISO 8601 UTC start time (required) |
| end_time | string | ISO 8601 UTC end time (required) |
| fps | number | frames per second (default: 1.0) |
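a request like “export a video of my screen from 10am to 11am” would translate into arguments along these lines (hypothetical values; times in ISO 8601 UTC as required above):

```json
{
  "start_time": "2024-01-15T10:00:00Z",
  "end_time": "2024-01-15T11:00:00Z",
  "fps": 1.0
}
```

note that at the default 1.0 fps, a one-hour range yields roughly 3600 frames of output.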
## example queries
try these in Claude or Cursor:
- “search for any mentions of ‘project’ in my screen recordings”
- “find audio transcriptions from the last hour”
- “show me what was on my screen in VS Code yesterday”
- “export a video of my screen from 10am to 11am today”
- “find what John said in our meeting about the API”
- “what did I type in Slack today?” (uses search-ui-events)
- “show me my app usage statistics for the past 3 hours”
- “what did I copy to my clipboard recently?”
- “which apps did I switch between most during my work session?”
## testing

test your setup with MCP Inspector:

```bash
npx @modelcontextprotocol/inspector npx screenpipe-mcp
```
## requirements
- screenpipe must be running on localhost:3030
- Node.js >= 18.0.0
## mintlify mcp (use our docs in cursor, claude)

add our docs as context in your AI assistant:

```bash
npx mint-mcp add mediar-ai
```
## troubleshooting

### mcp not connecting?

- ensure screenpipe is running (run `screenpipe` in a terminal)
- check it’s accessible at http://localhost:3030/health
- restart Claude Desktop / Cursor after config changes
### macos automation not working?
- grant accessibility permissions in System Settings > Privacy & Security > Accessibility