api reference

api reference for screenpipe

below is a detailed explanation of the core apis exposed by screenpipe, along with sample requests and responses.

1. search api

  • endpoint: /search
  • method: get
  • description: searches captured data (ocr, audio transcriptions, etc.) stored in screenpipe's local database based on filters such as content type, timestamps, app name, and window name.

query parameters:

  • q (string, optional): search term.
  • content_type (enum): type of content to search (ocr, audio, etc.).
  • limit (int): max number of results per page (default: 20).
  • offset (int): pagination offset.
  • start_time (timestamp, optional): filter by start timestamp.
  • end_time (timestamp, optional): filter by end timestamp.
  • app_name (string, optional): filter results based on application name.
  • window_name (string, optional): filter results based on window name.
  • include_frames (bool, optional): if true, fetch frame data for ocr content.

sample request:

curl "http://localhost:3030/search?q=meeting&content_type=ocr&limit=5&offset=0"

sample response:

{
  "data": [
    {
      "type": "ocr",
      "content": {
        "frame_id": 123,
        "text": "meeting notes",
        "timestamp": "2024-09-10t12:00:00z",
        "file_path": "/frames/frame123.png",
        "app_name": "google chrome",
        "window_name": "meeting room",
        "tags": ["meeting", "important"]
      }
    }
  ],
  "pagination": {
    "limit": 5,
    "offset": 0,
    "total": 100
  }
}
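
the same query can be issued from code. the sketch below uses plain fetch against the /search endpoint on the default port 3030; the searchScreenpipe helper name and its parameter defaults are illustrative assumptions, not part of the api.

// minimal sketch: query screenpipe's /search endpoint with fetch
// (the helper name is an assumption, not part of the api)
async function searchScreenpipe(params: {
  q?: string;
  content_type?: "ocr" | "audio";
  limit?: number;
  offset?: number;
  app_name?: string;
}) {
  // drop undefined values and build the query string
  const query = new URLSearchParams(
    Object.entries(params)
      .filter(([, v]) => v !== undefined)
      .map(([k, v]) => [k, String(v)])
  );
  const res = await fetch(`http://localhost:3030/search?${query}`);
  if (!res.ok) throw new Error(`search failed: ${res.status}`);
  return res.json();
}

// usage: same query as the curl sample above
const results = await searchScreenpipe({ q: "meeting", content_type: "ocr", limit: 5 });
console.log(results.data.length, "items, total:", results.pagination.total);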

2. list audio devices api

  • endpoint: /audio/list
  • method: get
  • description: lists all audio input and output devices available on the machine, including default devices.

sample request:

curl "http://localhost:3030/audio/list"

sample response:

[
  {
    "name": "built-in microphone",
    "is_default": true
  },
  {
    "name": "external speaker",
    "is_default": false
  }
]
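
a minimal sketch for listing devices from code, assuming screenpipe is running on the default port 3030:

// minimal sketch: list audio devices and mark the default one
const devices: { name: string; is_default: boolean }[] = await (
  await fetch("http://localhost:3030/audio/list")
).json();

for (const d of devices) {
  console.log(`${d.is_default ? "*" : " "} ${d.name}`);
}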

3. add tags to content api

  • endpoint: /tags/:content_type/:id
  • method: post
  • description: adds custom tags to content items based on the content type (audio or vision).

request body:

{
  "tags": ["important", "review"]
}

sample request:

curl -X POST "http://localhost:3030/tags/vision/123" -H "Content-Type: application/json" -d '{"tags": ["meeting", "important"]}'

sample response:

{
  "success": true
}
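
adding tags from code is a plain post with a json body. the sketch below mirrors the curl sample; the frame id 123 and the tag values are example inputs only.

// minimal sketch: tag a vision frame (id 123) with two labels
// (the id and tags are example values)
const tagRes = await fetch("http://localhost:3030/tags/vision/123", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ tags: ["meeting", "important"] }),
});
console.log(await tagRes.json()); // { success: true }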

4. download pipe api

  • endpoint: /pipes/download
  • method: post
  • description: downloads a pipe (plugin) from a specified url and stores it locally.

request body:

{
  "url": "https://github.com/mediar-ai/screenpipe/tree/main/examples/typescript/pipe-stream-ocr-text"
}

sample request:

curl -X POST "http://localhost:3030/pipes/download" -H "Content-Type: application/json" -d '{"url": "https://github.com/mediar-ai/screenpipe/tree/main/examples/typescript/pipe-stream-ocr-text"}'

sample response:

{
  "message": "pipe pipe-stream-ocr-text downloaded successfully",
  "pipe_id": "pipe-stream-ocr-text"
}
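
the same download can be triggered from code; this sketch posts the pipe url and logs the returned pipe_id, again assuming the default port 3030.

// minimal sketch: download a pipe from a github url
const downloadRes = await fetch("http://localhost:3030/pipes/download", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    url: "https://github.com/mediar-ai/screenpipe/tree/main/examples/typescript/pipe-stream-ocr-text",
  }),
});
const { pipe_id } = await downloadRes.json();
console.log("downloaded:", pipe_id);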

5. run pipe api

  • endpoint: /pipes/enable
  • method: post
  • description: enables a pipe (plugin) to start processing data.

request body:

{
  "pipe_id": "pipe-stream-ocr-text"
}

sample request:

curl -X POST "http://localhost:3030/pipes/enable" -H "Content-Type: application/json" -d '{"pipe_id": "pipe-stream-ocr-text"}'

sample response:

{
  "message": "pipe pipe-stream-ocr-text started",
  "pipe_id": "pipe-stream-ocr-text"
}
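
enabling a previously downloaded pipe from code is a short post with the pipe_id; the sketch below mirrors the curl sample.

// minimal sketch: enable a previously downloaded pipe
const enableRes = await fetch("http://localhost:3030/pipes/enable", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ pipe_id: "pipe-stream-ocr-text" }),
});
console.log(await enableRes.json()); // { message: "...", pipe_id: "pipe-stream-ocr-text" }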

6. stop pipe api

  • endpoint: /pipes/disable
  • method: post
  • description: disables a pipe to stop processing data.

request body:

{
  "pipe_id": "pipe-stream-ocr-text"
}

sample request:

curl -X POST "http://localhost:3030/pipes/disable" -H "Content-Type: application/json" -d '{"pipe_id": "pipe-stream-ocr-text"}'

sample response:

{
  "message": "pipe pipe-stream-ocr-text stopped",
  "pipe_id": "pipe-stream-ocr-text"
}
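
since /pipes/enable and /pipes/disable share the same request and response shape, a single helper can cover both. the setPipeEnabled name below is an assumption, not part of the api.

// minimal sketch: one helper for both /pipes/enable and /pipes/disable
// (the helper name is an assumption, not part of the api)
async function setPipeEnabled(pipeId: string, enabled: boolean) {
  const endpoint = enabled ? "enable" : "disable";
  const res = await fetch(`http://localhost:3030/pipes/${endpoint}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ pipe_id: pipeId }),
  });
  return res.json();
}

await setPipeEnabled("pipe-stream-ocr-text", false); // stop the pipe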

7. health check api

  • endpoint: /health
  • method: get
  • description: returns the health status of the system, including the timestamps of the last frame and audio captures, and the overall system status.

sample request:

curl "http://localhost:3030/health"

sample response:

{
  "status": "healthy",
  "last_frame_timestamp": "2024-09-10t10:00:00z",
  "last_audio_timestamp": "2024-09-10t10:00:00z",
  "frame_status": "ok",
  "audio_status": "ok",
  "message": "all systems are functioning normally."
}
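
a small polling sketch can watch /health and warn when capture stalls. the field names follow the sample response above; the 30-second interval is an arbitrary choice for this example.

// minimal sketch: poll /health every 30 seconds and log any non-healthy status
async function checkHealth() {
  const health = await (await fetch("http://localhost:3030/health")).json();
  if (health.status !== "healthy") {
    console.warn("screenpipe unhealthy:", health.message);
  }
  return health;
}

setInterval(checkHealth, 30_000);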