
quick start — paste this into claude code

copy this prompt into claude code, cursor, or any AI coding assistant:
create a screenpipe pipe that [DESCRIBE WHAT YOU WANT].

## what is screenpipe?

screenpipe is a desktop app that continuously records your screen (OCR) and audio (transcription).
it runs a local API at http://localhost:3030 that lets you query everything you've seen, said, or heard.

## what is a pipe?

a pipe is a scheduled AI agent defined as a single markdown file: ~/.screenpipe/pipes/{name}/pipe.md
every N minutes, screenpipe runs a coding agent (like pi or claude-code) with the pipe's prompt.
the agent can query your screen data, write files, call external APIs, send notifications, etc.

## pipe.md format

the file starts with YAML frontmatter, then the prompt body:

---
name: my-pipe
schedule: every 30m
lookback: 30m
enabled: true
---

Your prompt instructions here...

## screenpipe search API

the agent queries screen data via the local REST API:

curl "http://localhost:3030/search?limit=20&content_type=all&start_time={{start_time}}&end_time={{end_time}}"

### query parameters
- q: text search query (optional)
- content_type: "ocr" | "audio" | "ui" | "all" | "ocr+audio" | "ocr+ui" | "audio+ui"
- limit: max results (default 20)
- offset: pagination offset
- start_time / end_time: ISO 8601 timestamps
- app_name: filter by app (e.g. "chrome", "cursor")
- window_name: filter by window title
- browser_url: filter by URL (e.g. "github.com")
- min_length / max_length: filter by text length
- speaker_ids: filter audio by speaker IDs
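
for example, a single query combining several of these parameters (the search term "standup" is illustrative):

curl -s "http://localhost:3030/search?q=standup&content_type=ocr&app_name=chrome&limit=10&start_time={{start_time}}&end_time={{end_time}}"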

### OCR results (what was on screen)
each result contains:
- text: the OCR'd text visible on screen
- app_name: which app was active (e.g. "Arc", "Cursor", "Slack")
- window_name: the window title
- browser_url: the URL if it was a browser
- timestamp: when it was captured
- file_path: path to the video frame
- focused: whether the window was focused
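
a hedged sketch of pulling just those fields out with jq (the .data[].content nesting is an assumption about the response envelope, so check a raw curl | jq . first):

# keep only text, app, and timestamp for the 5 most recent OCR captures
# NOTE: .data[].content is an assumed response shape, not stated in this doc
curl -s "http://localhost:3030/search?content_type=ocr&limit=5" \
  | jq '.data[].content | {text, app_name, timestamp}'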

### audio results (what was said/heard)
each result contains:
- transcription: the spoken text
- speaker_id: numeric speaker identifier
- timestamp: when it was captured
- device_name: which audio device (mic or system audio)
- device_type: "input" (microphone) or "output" (system audio)

### UI events (accessibility data, macOS only)
query via: curl "http://localhost:3030/ui-events?app_name=Slack&limit=50&start_time={{start_time}}&end_time={{end_time}}"
event types: text (keyboard input), click, app_switch, window_focus, clipboard, scroll

## template variables

these are replaced in the prompt before execution:
- {{start_time}}: ISO 8601 start (based on lookback)
- {{end_time}}: ISO 8601 end (current time)
- {{date}}: current date (YYYY-MM-DD)
- {{timezone}}: timezone abbreviation (e.g. PST)
- {{timezone_offset}}: UTC offset (e.g. -08:00)

## secrets

store API keys in a .env file next to pipe.md (never in the prompt itself):
echo "API_KEY=your_key" > ~/.screenpipe/pipes/my-pipe/.env
reference in prompt: source .env && curl -H "Authorization: Bearer $API_KEY" ...

## after creating the file

install: bunx screenpipe pipe install ~/.screenpipe/pipes/my-pipe
enable:  bunx screenpipe pipe enable my-pipe
test:    bunx screenpipe pipe run my-pipe
logs:    bunx screenpipe pipe logs my-pipe

replace [DESCRIBE WHAT YOU WANT] with your use case, e.g. "tracks my time in toggl based on what apps I'm using", "writes daily summaries to obsidian", or "sends me a slack message if I've been on twitter for more than 30 minutes".

what are pipes?

pipes are automated workflows that run on your screenpipe data at regular intervals. each pipe is a markdown file with a prompt and a schedule. under the hood, screenpipe runs a coding agent (like pi) that can query your screen data, call APIs, write files, and take actions. a pipe is just one file: pipe.md

~/.screenpipe/pipes/
├── daily-journal/
│   └── pipe.md
├── toggl-sync/
│   ├── pipe.md
│   └── .env          # secrets (api keys)
└── obsidian-sync/
    └── pipe.md

creating a pipe

create a folder in ~/.screenpipe/pipes/ with a pipe.md file:
mkdir -p ~/.screenpipe/pipes/my-pipe
cat > ~/.screenpipe/pipes/my-pipe/pipe.md << 'EOF'
---
name: my-pipe
schedule: every 30m
lookback: 30m
enabled: true
---

Summarize my screen activity for the last 30 minutes.
Query screenpipe at http://localhost:3030/search.
Write the summary to ./output/{{date}}.md
EOF

# install and enable
bunx screenpipe pipe install ~/.screenpipe/pipes/my-pipe
bunx screenpipe pipe enable my-pipe

# test it
bunx screenpipe pipe run my-pipe

pipe.md format

every pipe.md starts with YAML frontmatter between --- markers, followed by the prompt:
---
name: my-pipe
schedule: every 2h
lookback: 2h
enabled: true
agent: pi
model: claude-haiku-4-5@20251001
---

Your prompt goes here. This is what the AI agent will execute.
You can reference screenpipe's API, write files, call external APIs, etc.

frontmatter fields

| field    | required | default                   | description                                                        |
| -------- | -------- | ------------------------- | ------------------------------------------------------------------ |
| name     | no       | folder name               | pipe identifier                                                    |
| schedule | yes      | manual                    | every 1m, every 30m, every 2h, daily, manual                       |
| lookback | no       | matches schedule          | how far back to query screen data: 30m, 2h, 8h                     |
| enabled  | no       | true                      | whether the scheduler runs this pipe                               |
| agent    | no       | pi                        | which coding agent CLI to use                                      |
| model    | no       | claude-haiku-4-5@20251001 | which LLM the agent uses                                           |
| provider | no       | screenpipe                | LLM provider: screenpipe (cloud), anthropic, openai, google, etc.  |

template variables

these get replaced in the prompt body before execution:
| variable            | example value        |
| ------------------- | -------------------- |
| {{start_time}}      | 2026-02-11T20:00:00Z |
| {{end_time}}        | 2026-02-11T20:30:00Z |
| {{date}}            | 2026-02-11           |
| {{timezone}}        | PST                  |
| {{timezone_offset}} | -08:00               |
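
so a query written with placeholders in pipe.md is expanded before the agent sees it (the timestamps below are illustrative):

# as written in pipe.md
curl "http://localhost:3030/search?start_time={{start_time}}&end_time={{end_time}}"

# what the agent actually receives after substitution
curl "http://localhost:3030/search?start_time=2026-02-11T20:00:00Z&end_time=2026-02-11T20:30:00Z"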

cli commands

# list all pipes
bunx screenpipe pipe list

# install a pipe from file, folder, or URL
bunx screenpipe pipe install ./my-pipe.md
bunx screenpipe pipe install https://raw.githubusercontent.com/user/repo/main/pipe.md

# enable/disable
bunx screenpipe pipe enable my-pipe
bunx screenpipe pipe disable my-pipe

# run once manually
bunx screenpipe pipe run my-pipe

# view run history
bunx screenpipe pipe logs my-pipe
bunx screenpipe pipe logs my-pipe -f  # follow

# delete
bunx screenpipe pipe delete my-pipe

http api

when screenpipe is running, pipes are also manageable via the local API:
# list all pipes
curl http://localhost:3030/pipes

# run a pipe
curl -X POST http://localhost:3030/pipes/my-pipe/run

# enable/disable
curl -X POST http://localhost:3030/pipes/my-pipe/enable \
  -H "Content-Type: application/json" \
  -d '{"enabled": true}'

# view logs
curl http://localhost:3030/pipes/my-pipe/logs

# install from URL
curl -X POST http://localhost:3030/pipes/install \
  -H "Content-Type: application/json" \
  -d '{"source": "https://example.com/pipe.md"}'

app ui

go to settings → pipes to see all installed pipes, toggle them on/off, run them manually, and view logs.

examples

time tracking (toggl)

---
name: toggl-sync
schedule: every 1m
lookback: 2m
enabled: true
---

You automatically update my Toggl time tracking based on screen activity.

1. Query screenpipe: curl "http://localhost:3030/search?content_type=ocr&limit=20&start_time={{start_time}}&end_time={{end_time}}"
2. Read API key: source .env
3. Check current Toggl timer
4. If activity changed, stop old timer and start new one
5. If activity matches, do nothing

Activity rules:
- VSCode/Cursor/Terminal → "coding"
- Chrome with GitHub → "code review"
- Slack/Discord → "communication"
- Zoom/Meet → "meeting"

daily journal (obsidian)

---
name: daily-journal
schedule: every 2h
lookback: 2h
enabled: true
---

Summarize my screen activity into a daily journal entry.
Query screenpipe at http://localhost:3030/search.
Write to ~/obsidian-vault/screenpipe/{{date}}.md
Use [[wiki-links]] for people and projects.
Include timeline deep links: [time](screenpipe://timeline?timestamp=ISO8601)

standup report

---
name: standup
schedule: daily
lookback: 8h
enabled: true
---

Generate a standup report from yesterday's screen activity.
Format: what I did, what I'm doing, blockers.
Write to ./output/{{date}}.md

AI providers

by default, pipes use screenpipe cloud — no setup needed if you have a screenpipe account. to use your own AI subscription (Claude Pro, ChatGPT Plus, Gemini, or API keys), pipes reuse pi’s native auth system:

option 1: subscription (free with existing plan)

# run pi interactively and use /login
pi
# then type: /login
# select Claude Pro, ChatGPT Plus, GitHub Copilot, or Google Gemini

option 2: API key

add to ~/.pi/agent/auth.json:
{
  "anthropic": { "type": "api_key", "key": "sk-ant-..." },
  "openai": { "type": "api_key", "key": "sk-..." },
  "google": { "type": "api_key", "key": "..." }
}
or set environment variables: ANTHROPIC_API_KEY, OPENAI_API_KEY, GEMINI_API_KEY.
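
for example, setting a key for the current shell and testing a pipe against it (the key value is a placeholder):

export ANTHROPIC_API_KEY=sk-ant-...   # placeholder, use your real key
bunx screenpipe pipe run my-pipe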

using in a pipe

add provider to your pipe.md frontmatter:
---
name: my-pipe
schedule: every 30m
provider: anthropic
model: claude-haiku-4-5@20251001
---

provider resolution: if provider is set in pipe.md → use it. otherwise → screenpipe cloud.

secrets

store API keys in .env files inside the pipe folder:
echo "TOGGL_API_KEY=your_key_here" > ~/.screenpipe/pipes/toggl-sync/.env
the pipe prompt can reference them:

source .env && curl -u $TOGGL_API_KEY:api_token ...

never put secrets in pipe.md — the prompt may be visible in logs.
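
a fuller sketch of that pattern as it might appear in a pipe prompt (api.example.com is a placeholder, not a real endpoint):

# load the key from the pipe's .env, then authenticate the external call
# https://api.example.com/v1/timer is a placeholder URL for illustration only
source .env
curl -s -u "$TOGGL_API_KEY:api_token" "https://api.example.com/v1/timer"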

architecture

pipe.md (prompt + config)
  → pipe manager (parses frontmatter, schedules runs)
    → agent executor (pi, claude-code, etc.)
      → agent queries screenpipe API + executes actions
        → output saved to pipe folder
  • agent ≠ model: the agent is the CLI tool (pi, claude-code). the model is the LLM (haiku, opus, llama).
  • one pipe runs at a time (global semaphore prevents overlap)
  • lookback capped at 8h to prevent context overflow
  • logs saved to ~/.screenpipe/pipes/{name}/logs/ as JSON
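
run history can also be read straight off disk; a quick sketch, assuming the log files carry a .json extension and jq is installed:

# list recorded runs for a pipe and pretty-print them
ls ~/.screenpipe/pipes/my-pipe/logs/
cat ~/.screenpipe/pipes/my-pipe/logs/*.json | jq .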

built-in pipes

screenpipe ships with two template pipes (disabled by default):
  • obsidian-sync — sync screen activity to obsidian vault
  • reminders — scan activity for todos and create apple reminders