plugins (pipes)

screenpipe is built for extensibility through plugins that interact with captured screen and audio data. whether you need to tag activities, generate summaries, or send data to third-party services, plugins let you build powerful workflows.

pipes are native plugins that run within screenpipe's sandboxed environment, written in typescript/javascript. they come in two flavors:

  • UI-based: native desktop apps built with NextJS for user interaction (think of screenpipe as a local Vercel powered by your 24/7 context)
  • headless (deprecated): pipes that run in the background with no visual interface, driven by cron and similar triggers

why build pipes? 🚀

think of pipes as a local Zapier that costs 10x less and has 10x less friction - no auth needed, and full context of your screen and audio data.

for developers

  • zero infrastructure: run locally, no servers or complex setups, access to your auth tokens (unlike Zapier)
  • typescript + bun: blazing fast development
  • full context: rich OCR, desktop scraping, keyboard/mouse, and audio transcription APIs
  • bounty program: earn $100+ for building pipes or promoting screenpipe
  • open source: contribute to augmenting collective human intelligence
  • monetization ready: Stripe integration to monetize your pipes
  • no lock-in: ship a startup in 1h via screenpipe's store, then export it later as a native desktop app using screenpipe as a library (we will even help you with that)

killer features

  • ai flexibility: OpenAI, local LLMs (ollama), or any provider
  • rich APIs:
    • pipe.inbox for AI/human-in-the-loop messaging
    • pipe.scheduler for cron jobs
    • pipe.input for keyboard/mouse control (see the sketch after this list)
    • pipe.queryScreenpipe for context
  • sandboxed & cross-platform: safe execution on all OS
  • real-time: process screen & audio as it happens
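
for example, pipe.input lets a pipe drive the keyboard and mouse programmatically. a minimal sketch, assuming pipe.input exposes methods like moveMouse, click, type, and press (the exact names are assumptions - check the SDK reference):

import { pipe } from '@screenpipe/sdk';

// hedged sketch: moveMouse, click, type, and press are assumptions
// about the pipe.input surface - verify them in the SDK reference
async function fillSearchBox(query: string) {
  await pipe.input.moveMouse(400, 300); // move the cursor to the target field
  await pipe.input.click('left');       // focus it
  await pipe.input.type(query);         // type the query text
  await pipe.input.press('enter');      // submit
}

fillSearchBox('quarterly report');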

quick start

the fastest way to create a new pipe is with our CLI:

bunx @screenpipe/create-pipe@latest

follow the installation instructions & test your pipe locally

then enable your pipe in screenpipe

screenpipe pipe enable my-pipe

the CLI will guide you through setting up your pipe

available pipes

| pipe | description |
| --- | --- |
| linkedin ai assistant | automate business development by getting customers or connections on auto-pilot on linkedin |
| loom | generate looms from your screenpipe data |
| notion table logs | automate team reporting by generating team work logs and adding them to a notion table |
| reddit questions | automate reddit marketing by posting questions based on screen activity |
| linear comments | automate product management with AI comments on linear tasks while you work |
| obsidian time logs | automate your second brain by logging activities to obsidian |

to install a pipe from the store, paste the url of the pipe's folder into the UI and click install.

in CLI:

screenpipe pipe download https://github.com/mediar-ai/screenpipe/tree/main/pipes/pipe-obsidian-time-logs
screenpipe pipe enable pipe-obsidian-time-logs

pipe configuration

we use pipe.json to configure your pipe through UI/CLI:

{
  "fields": [
    {
      "default": 60,
      "description": "Interval in seconds to process screen data",
      "name": "interval",
      "type": "number"
    },
    {
      "default": "daily",
      "description": "Summary frequency: 'daily' or 'hourly:X'",
      "name": "summaryFrequency",
      "type": "string"
    }
  ]
}

this will render as a settings form in the screenpipe UI.
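
at runtime your pipe reads these values back through the settings API. a minimal sketch, assuming configured fields surface under a per-pipe customSettings namespace in the object returned by pipe.settings.getAll() (the exact lookup path is an assumption - check the SDK reference for your version):

import { pipe } from '@screenpipe/sdk';

async function loadConfig() {
  const settings = await pipe.settings.getAll();

  // assumption: per-pipe values live under customSettings keyed by
  // the pipe's name - adjust the path to match your SDK version
  const config = settings.customSettings?.['my-pipe'] ?? {};

  return {
    interval: config.interval ?? 60,                      // pipe.json default
    summaryFrequency: config.summaryFrequency ?? 'daily', // pipe.json default
  };
}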

screenpipe-js SDK

key features:

  • pipe.inbox.send: AI messages with user confirmation
  • pipe.sendDesktopNotification: system notifications
  • pipe.queryScreenpipe: query screen/audio data
  • pipe.input: programmatic UI control (use your keyboard/mouse)
  • pipe.settings: get/set app settings (e.g. AI model, port, etc.)
  • (experimental) vercel-like crons - just replace vercel.json with pipe.json (only works for nextjs pipes)
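
as a quick taste of the SDK, here is a minimal sketch pairing pipe.settings with pipe.sendDesktopNotification (the { title, body } payload shape is an assumption - check the SDK reference):

import { pipe } from '@screenpipe/sdk';

// hedged sketch: the { title, body } payload is an assumption about
// pipe.sendDesktopNotification - verify the shape in the SDK reference
async function notifyModelInUse() {
  const settings = await pipe.settings.getAll();
  await pipe.sendDesktopNotification({
    title: 'screenpipe',
    body: `pipe running with model: ${settings.aiModel}`,
  });
}

notifyModelInUse();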

see the JS implementation or the Rust implementation of the SDK for details.

bounties & monetization

  • send PRs with your pipes, we'll review and merge!
  • earn $100+ for new pipes or social media promotion
  • early access to pipe store with Stripe integration
  • [email protected]

examples

simple hourly summary with ollama

this example gets the last hour of screen/audio data and generates a summary using your local ollama model:

import fs from 'node:fs';
import { generateText } from 'ai';
import { ollama } from 'ollama-ai-provider';
import { pipe } from '@screenpipe/sdk';
 
async function generateHourlySummary() {
  // get settings & last hour data
  const settings = await pipe.settings.getAll();
  const lastHour = await pipe.queryScreenpipe({
    contentType: 'all',
    startTime: new Date(Date.now() - 60 * 60 * 1000).toISOString(),
    limit: 10000
  });
 
  // format data for context (fall back to an empty list if the query returned nothing)
  const context = (lastHour?.data ?? []).map(item => {
    if (item.type === 'OCR') {
      return `[${item.content.appName}] ${item.content.text}`;
    }
    return `[audio] ${item.content.transcription}`;
  }).join('\n');
 
  if (settings.aiProviderType === 'native-ollama') {
    const { text } = await generateText({
      model: ollama(settings.aiModel),
      system: 'you are a helpful assistant that summarizes activity data into markdown',
      prompt: `summarize this activity data:\n${context}`,
    });
 
    // write to obsidian / markdown file 
    fs.writeFileSync(`/tmp/hourly-summary-${new Date().toISOString()}.md`, text);
  }
}
 
generateHourlySummary();
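
to run this every hour instead of once, you can register it with pipe.scheduler. a minimal sketch, assuming the scheduler exposes a task/every/do chain plus start (these names are assumptions - check the SDK reference):

// hedged sketch: task().every().do() and start() are assumptions
// about pipe.scheduler - verify them in the SDK reference
pipe.scheduler
  .task('hourly-summary')
  .every('1 hour')
  .do(generateHourlySummary);

pipe.scheduler.start();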

smart meeting assistant with ollama

this example pulls the last hour of context during a meeting and sends AI insights to your inbox:

import { generateText } from 'ai';
import { ollama } from 'ollama-ai-provider';
import { pipe } from '@screenpipe/sdk';
 
async function meetingAssistant() {
  const settings = await pipe.settings.getAll();
 
  if (settings.aiProviderType === 'native-ollama') {
    // get last 60 min context
    const context = await pipe.queryScreenpipe({
      contentType: 'all',
      startTime: new Date(Date.now() - 60 * 60 * 1000).toISOString(),
      limit: 10000
    });
 
    const { text } = await generateText({
      model: ollama(settings.aiModel),
      system: 'you are a meeting assistant that provides real-time insights',
      prompt: `analyze this meeting context and suggest helpful insights:\n${JSON.stringify(context)}`
    });
 
    // send insights to app AI inbox with actions
    await pipe.inbox.send({
      title: 'meeting insights',
      body: text,
      actions: [{
        label: 'save to notes',
        callback: async () => {
          // save to your note system
          console.log('saving insights...');
        }
      }]
    });
  }
}
 
meetingAssistant();
 

these examples show how to:

  • query screen/audio data with pipe.queryScreenpipe
  • use local AI with ollama (you can also use OpenAI, Anthropic, or any other provider)
  • send interactive notifications with pipe.inbox
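
pipe.queryScreenpipe also takes narrower filters than the examples above. a minimal sketch, assuming parameters like q, appName, and contentType: 'ocr' are supported (treat the parameter names as assumptions and check the SDK reference):

import { pipe } from '@screenpipe/sdk';

// hedged sketch: q, appName, and contentType: 'ocr' are assumptions
// about the query filters - verify the names in the SDK reference
async function findRecentInvoices() {
  const results = await pipe.queryScreenpipe({
    contentType: 'ocr',       // screen text only, no audio
    q: 'invoice',             // full-text filter over captured text
    appName: 'Chrome',        // restrict to a single app
    startTime: new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString(),
    limit: 50,
  });
 
  for (const item of results?.data ?? []) {
    console.log(item.content.text);
  }
}
 
findRecentInvoices();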