plugins (pipes)

screenpipe is built for extensibility through plugins that interact with captured screen and audio data. whether you need to tag activities, generate summaries, or send data to third-party services, plugins let you build powerful workflows.

plugins run within screenpipe's sandboxed environment and are written in typescript/javascript with nextjs.

why build pipes? 🚀

agents are quite dumb without context. screenpipe is the bridge between human context and AI understanding.

for developers

  • zero infrastructure: run locally, no servers or complex setups, access to your auth tokens (unlike cloud agents)
  • typescript + rust + bun: blazing fast environment
  • full context: rich OCR, desktop scraping, keyboard/mouse, and audio transcription APIs
  • open source: contribute to augmenting collective human intelligence
  • monetization ready: Stripe integration to monetize your pipes
  • no lock-in: ship a startup in 1h in screenpipe's store and export it later as a desktop native app using screenpipe as a library (we will even help you with that)

killer features

  • ai flexibility: OpenAI, local LLMs (ollama), Anthropic, Gemini, etc.
  • rich APIs:
    • pipe.input for keyboard/mouse control
    • pipe.queryScreenpipe for context
    • pipe.settings for app settings
    • experimental pipe.streamTranscriptions for audio transcription streaming (atm uses a lot of RAM, make sure to enable it in settings/CLI)
    • experimental pipe.streamVision for OCR/Accessibility streaming (atm uses a lot of RAM)
  • sandboxed & cross-platform: safe execution on all OS
  • real-time: process screen & audio as it happens
  • cron jobs: schedule your pipes to run at specific times, same API as Vercel
  • nextjs: build desktop native apps with NextJS - no native hell
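as a sketch of what querying context looks like, here's a small helper that builds a request against the local search endpoint that `pipe.queryScreenpipe` wraps. the base url, path, and parameter names below are assumptions based on a default local setup — in a real pipe you'd call `pipe.queryScreenpipe` from the SDK directly:

```typescript
// sketch: building a query against screenpipe's local search API.
// NOTE: the base url, path, and parameter names are assumptions;
// in a real pipe, prefer pipe.queryScreenpipe from the SDK.
type ScreenpipeQuery = {
  q?: string;                            // full-text search term
  contentType?: "ocr" | "audio" | "all"; // which capture stream to query
  startTime?: string;                    // ISO 8601 timestamp
  limit?: number;                        // max results to return
};

function buildSearchUrl(base: string, query: ScreenpipeQuery): string {
  const params = new URLSearchParams();
  if (query.q) params.set("q", query.q);
  if (query.contentType) params.set("content_type", query.contentType);
  if (query.startTime) params.set("start_time", query.startTime);
  if (query.limit !== undefined) params.set("limit", String(query.limit));
  return `${base}/search?${params.toString()}`;
}

// e.g. fetch(buildSearchUrl("http://localhost:3030", { contentType: "ocr", limit: 10 }))
```

the same query shape works for audio transcripts by swapping `contentType` to `"audio"`.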

quick start

The fastest way to create a new pipe is using our CLI:

```bash
bunx @screenpipe/create-pipe@latest
```

follow installation instructions & test your pipe locally

```bash
bun dev
```

you can deploy your pipe to your screenpipe app through the UI, or from the command line with `screenpipe install <path>` and `screenpipe enable <id/folder of your pipe>`.
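if your pipe needs scheduled runs, the vercel-style cron jobs mentioned above are declared in the pipe's config. a hypothetical `pipe.json` sketch — the field names follow the vercel cron schema, so verify against an existing pipe in the store:

```json
{
  "crons": [
    {
      "path": "/api/log",
      "schedule": "0 */5 * * *"
    }
  ]
}
```

here `path` is the route in your nextjs pipe to invoke, and `schedule` is a standard cron expression (every 5 hours in this sketch).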

when you're ready to deploy, send a PR to the screenpipe repo to add your pipe to the store.

available pipes

| pipe | description | link |
| --- | --- | --- |
| memories gallery | google-photo like gallery of your screen recordings with AI insights | link |
| data table | explore your data in a powerful table view with filtering and sorting | link |
| search | search through your screen recordings and audio transcripts with AI | link |
| timeline | visualize your day with AI-powered timeline of activities | link |
| speaker identification | automatically identify and label different speakers using AI | link |
| obsidian logs | automate your second brain by logging activities to obsidian | link |
| meeting assistant | organize and summarize meetings with AI - get transcripts and insights | link |
| linkedin ai agent | automate business development on linkedin | link |
| loom | generate looms from your screenpipe data | link |

to install a pipe from the store, just add the url of the folder in the UI and click install.