# pipes
screenpipe is built for extensibility through pipes — plugins that interact with captured screen and audio data. whether you need to tag activities, generate summaries, or send data to third-party services, pipes let you build powerful workflows.

pipes run within screenpipe's sandboxed environment and are written in typescript/javascript with nextjs.
feel free to use our docs as context in cursor agent through MCP
## why build pipes? 🚀
regardless of progress in AI architecture, a model is only as good as the context it's given. screenpipe is the bridge between dull, hallucinating AI and superintelligent agents.
### for developers
- zero infrastructure: 100% local by default, no servers or complex setups, access to your auth tokens (unlike cloud agents)
- typescript + rust + bun: blazing fast environment, highly optimized pipeline, 4 AI models, running on $200 laptops, best local STT in the market
- full context: rich OCR, desktop scraping, keyboard/mouse, and audio transcription APIs
- open source: no BS, no tricks, MIT license
- monetization ready: Stripe integration to monetize your pipes
- no lock-in: use our store for distribution, then drive traffic into your exported standalone app
### killer features
- ai flexibility: OpenAI, local LLMs (ollama), Anthropic, Gemini, etc.
- rich APIs:
  - `pipe.queryScreenpipe` for context
  - `pipe.settings` for app settings
  - experimental `pipe.streamTranscriptions` for audio transcription streaming (make sure to enable it in settings)
  - experimental `pipe.streamVision` for OCR/Accessibility/Frames/browser URL streaming
  - experimental `pipe.input` for keyboard/mouse control
- sandboxed & cross-platform: safe execution on all OS
- real-time: process screen & audio as it happens
- cron jobs: schedule your pipes to run at specific times, same API as Vercel
- nextjs: build desktop native apps with NextJS - no native hell
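the SDK calls above wrap screenpipe's local HTTP server. as a rough sketch of how you might hit that API directly — assuming the default port 3030 and a `/search` endpoint taking `q`, `content_type`, and `limit` params (verify the authoritative list against `server.rs`, linked at the bottom of this page):

```typescript
// minimal sketch: query screenpipe's local search API directly.
// port 3030, the /search endpoint, and its params are assumptions here —
// check screenpipe-server/src/server.rs for the real signatures.

const SCREENPIPE_URL = "http://localhost:3030";

// pure helper: build the search URL for a given query
function buildSearchUrl(
  query: string,
  contentType: "ocr" | "audio" | "all" = "all",
  limit = 10,
): string {
  const params = new URLSearchParams({
    q: query,
    content_type: contentType,
    limit: String(limit),
  });
  return `${SCREENPIPE_URL}/search?${params.toString()}`;
}

// thin wrapper around fetch; the shape of the returned JSON is
// not specified here — inspect a real response before relying on it
async function searchCaptures(query: string): Promise<unknown> {
  const res = await fetch(buildSearchUrl(query, "ocr", 5));
  if (!res.ok) throw new Error(`screenpipe returned ${res.status}`);
  return res.json();
}
```

inside a pipe you'd typically reach for `pipe.queryScreenpipe` instead, which handles this plumbing for you.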
## quick start
we recommend using bun; install it or make sure it's up to date.

the fastest way to create a new pipe is using our CLI:
follow installation instructions & test your pipe locally
## developer CLI
for developers wanting to publish pipes to the store, we provide a dedicated CLI tool:
prerequisite: connect your Stripe account in settings/account to obtain your developer API key.
available commands:
you can deploy your pipe to your screenpipe app through the UI, or with `screenpipe pipe install <path>` followed by `screenpipe pipe enable <id/folder of your pipe>`.
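for example, from a terminal (the path and pipe id below are placeholders):

```shell
# register a pipe from a local folder with your screenpipe app
screenpipe pipe install /path/to/my-pipe

# then enable it by its id / folder name
screenpipe pipe enable my-pipe
```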
when you’re ready to deploy, send a PR to the screenpipe repo to add your pipe to the store.
PS: monetization requires a stripe account and a US bank account; we recommend wise for international users.
(louis) i'm french and don't have a personal US bank account, so i use wise.
## available pipes
- automate your second brain by logging activities to obsidian
- google-photo-like gallery of your screen recordings with AI insights
- explore your data in a powerful table view with filtering and sorting
- search through your screen recordings and audio transcripts with AI
- visualize your day with an AI-powered timeline of activities
- automatically identify and label different speakers using AI
- organize and summarize meetings with AI, with transcripts and insights
- automate business development on linkedin
to install a pipe from the store, just add the url of the folder in the UI and click install.
## LLM links
paste these links into your Cursor chat for context:
- https://github.com/mediar-ai/screenpipe/blob/main/screenpipe-js/browser-sdk/src/index.ts
- https://github.com/mediar-ai/screenpipe/blob/main/pipes/obsidian/src/app/api/log/route.ts
- https://github.com/mediar-ai/screenpipe/blob/main/pipes/search/src/components/search-chat.tsx
- https://github.com/mediar-ai/screenpipe/blob/main/pipes/rewind/src/lib/hooks/use-app-name-suggestion.tsx
- https://github.com/mediar-ai/screenpipe/blob/main/screenpipe-server/src/server.rs
- https://github.com/mediar-ai/screenpipe/blob/main/screenpipe-core/src/pipes.rs