## why AI needs memory
LLMs are powerful but stateless: they don't know what you were working on five minutes ago. screenpipe bridges this gap by:

- capturing everything on your screen
- extracting text via OCR
- transcribing audio
- making it all searchable and retrievable
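as a sketch of what "searchable and retrievable" can look like, screenpipe serves a local HTTP search API. the endpoint shown here (`http://localhost:3030/search`) and the query parameter names are assumptions for illustration; check the API reference for the exact shape:

```python
from urllib.parse import urlencode

# hypothetical helper: build a query URL against screenpipe's local search API
# (endpoint path and parameter names are assumptions, not a guaranteed contract)
def build_search_url(query: str, content_type: str = "all", limit: int = 10) -> str:
    base = "http://localhost:3030/search"
    params = urlencode({"q": query, "content_type": content_type, "limit": limit})
    return f"{base}?{params}"

# e.g. search OCR'd screen text for an invoice
print(build_search_url("invoice", content_type="ocr", limit=5))
```

the same URL-building pattern works for audio transcriptions by switching `content_type`.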
## connect to AI providers
### ollama (local)
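ollama serves models locally at `http://localhost:11434` by default. a minimal sketch of sending a screenpipe-derived prompt to it, using only the standard library (the model name `llama3.2` and the helper names are illustrative):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default local endpoint

def build_ollama_request(prompt: str, model: str = "llama3.2") -> dict:
    # non-streaming request body for ollama's generate endpoint
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, model: str = "llama3.2") -> str:
    # POST the prompt to the local ollama server and return its response text
    body = json.dumps(build_ollama_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(build_ollama_request("summarize my last hour of screen activity"))
```

because the model runs on your machine, screen context never leaves your device.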
### openai
set your API key in screenpipe settings under AI configuration.

### claude via MCP
screenpipe can act as an MCP server, giving Claude direct access to your screen history.

## build AI-powered apps
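registering the MCP server follows the standard Claude Desktop config shape (`mcpServers` in `claude_desktop_config.json`); the `command` value below is a placeholder, so check screenpipe's MCP docs for the exact invocation:

```json
{
  "mcpServers": {
    "screenpipe": {
      "command": "screenpipe-mcp",
      "args": []
    }
  }
}
```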
- context-aware chatbot
- automatic note-taking
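the core move behind apps like these is prepending recent screen context to the user's prompt. a minimal sketch (the snippets would come from screenpipe's search API; here they are sample data, and `build_prompt` is an illustrative helper):

```python
# sketch of a context-aware prompt: prepend recent screen text to the question
def build_prompt(question: str, screen_snippets: list[str]) -> str:
    context = "\n".join(f"- {s}" for s in screen_snippets)
    return (
        "You are an assistant with access to the user's recent screen activity:\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

snippets = ["editing report.md", "terminal: pytest, 3 passed"]
print(build_prompt("what was I just doing?", snippets))
```

the assembled prompt can then be sent to any provider configured above.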
## use cases
| use case | how screenpipe helps |
|---|---|
| "what was I working on?" | search screen history by time |
| "summarize today's meetings" | query audio transcriptions |
| "find that code snippet" | search OCR text by content |
| "what did john say about X?" | search transcriptions by keyword |
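the time-based cases boil down to a search bounded by a time window. a sketch, assuming the local search endpoint accepts `start_time`/`end_time` as ISO timestamps (the parameter names are assumptions):

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

# "what was I working on?" as a time-bounded query against the local search API
def last_hours_query(query: str, hours: int = 2) -> str:
    now = datetime.now(timezone.utc)
    params = urlencode({
        "q": query,
        "start_time": (now - timedelta(hours=hours)).isoformat(),
        "end_time": now.isoformat(),
    })
    return f"http://localhost:3030/search?{params}"

print(last_hours_query("meeting notes"))
```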
## privacy-first AI
- all data stays on your device
- use local LLMs (Ollama, LM Studio) for complete privacy
- no data sent to cloud unless you explicitly choose cloud providers
- filter what gets captured with window inclusion/exclusion
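window filtering is set at startup. the flag name below is an assumption for illustration; run `screenpipe --help` for the exact options:

```shell
# hedged example: exclude sensitive windows from capture entirely
screenpipe --ignored-windows "1Password" --ignored-windows "Signal"
```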