Cinestar
Open source · MIT

Search images & videos with local AI

Search your images and videos with AI that runs entirely on your machine. Private, fast, offline-first — powered by bge-large embeddings, moondream vision, and Whisper transcription.

Quick Setup Required

Cinestar requires Docker Compose to run AI services locally. See setup instructions →

Screenshots: live preview and settings.

Grab the latest Cinestar build

Find installers and release notes on the Cinestar SourceForge project page.

Local-first & private

All inference runs locally. Your media never leaves your machine.

Multimodal understanding

Embeddings (bge-large), vision (moondream), and Whisper transcripts working together.

Fast, accurate search

Embed once, query instantly with SQLite-Vec. Timecodes and thumbnails included.
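The embed-once, query-instantly idea can be sketched in a few lines. This is illustrative only: a brute-force cosine-similarity lookup stands in for SQLite-Vec's indexed search, and the vectors, filenames, and timecodes are toy data.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Embed once: each media item stores its vector alongside metadata.
index = [
    {"file": "beach.mp4", "t": 42.0, "vec": [0.9, 0.1, 0.0]},
    {"file": "party.mp4", "t": 7.5,  "vec": [0.1, 0.8, 0.2]},
]

def search(query_vec, k=1):
    # Query instantly: rank stored vectors by similarity to the query.
    ranked = sorted(index, key=lambda e: cosine(query_vec, e["vec"]), reverse=True)
    return [(e["file"], e["t"]) for e in ranked[:k]]

print(search([1.0, 0.0, 0.0]))  # [('beach.mp4', 42.0)]
```

In the real app, SQLite-Vec would perform this ranking inside the database, returning the timecode and thumbnail for each hit.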

Easy setup with Docker

One command starts all AI services locally. No complex configuration needed.
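As a sketch, the one-command startup could be wrapped like this. The compose file name is an assumption for illustration; the actual service definitions live in the project's setup instructions.

```python
def compose_up_cmd(compose_file="docker-compose.yml"):
    """Build the single command that starts all local AI services.

    The compose file name here is an assumption for illustration.
    """
    return ["docker", "compose", "-f", compose_file, "up", "-d"]

# Run it (requires Docker Compose):
#   import subprocess
#   subprocess.run(compose_up_cmd(), check=True)

print(" ".join(compose_up_cmd()))  # docker compose -f docker-compose.yml up -d
```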

Privacy First, Always

100% Local Processing

All AI models run on your hardware. No cloud services, no data uploads, no tracking.

Locally Hosted LLMs

Powered by open-source models like bge-large, moondream, and Whisper running entirely on your machine.

Zero Data Collection

No analytics, no telemetry, no usage tracking. Your media and search queries remain completely private.

Use Cases for Everyone

Personal Media Management

Transform your photo and video collection into a searchable archive. Find "that beach sunset from 2019" or "mom's birthday party" instantly.

Google Photos Alternative · Family Memories · Travel Photos

Content Creators & YouTubers

Quickly find specific moments in hours of raw footage. Search for "when I dropped the camera" or "the reaction shot" across all your projects.

Video Editing · Content Archiving · B-roll Discovery

Media Organizations

Manage large video libraries for news, documentaries, or film production. Enable teams to find specific content without expensive cloud services.

News Archives · Film Production · Documentary Research

Academic & Market Research

Analyze interview footage, focus groups, or observational studies. Search for specific topics, emotions, or behaviors across research datasets.

Interview Analysis · Focus Groups · Behavioral Studies

Extensible Plugin System

Adapt Cinestar to your specific needs with a powerful plugin architecture that grows with your requirements.

Consumer Plugins

Google Photos-style face recognition, automatic album creation, smart sharing features, and social media integration.

Professional Plugins

Advanced metadata extraction, custom transcoding workflows, collaborative editing tools, and enterprise-grade security features.

Enterprise Plugins

LDAP integration, audit trails, automated backup systems, compliance reporting, and multi-user access controls.

Built with an open API: create custom plugins for your industry's specific needs. From healthcare to education, adapt Cinestar to any use case.
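The plugin API itself isn't documented here, so this is a hypothetical sketch of what a minimal plugin interface could look like; every name (CinestarPlugin, on_media_indexed, AuditTrailPlugin) is invented for illustration.

```python
from abc import ABC, abstractmethod

class CinestarPlugin(ABC):
    """Hypothetical base class; the real plugin API may differ."""

    name: str

    @abstractmethod
    def on_media_indexed(self, path: str, metadata: dict) -> dict:
        """Inspect or enrich metadata after a file is indexed."""

class AuditTrailPlugin(CinestarPlugin):
    # Example enterprise-style plugin: record every indexed file.
    name = "audit-trail"

    def __init__(self):
        self.log = []

    def on_media_indexed(self, path, metadata):
        self.log.append(path)
        return metadata

plugin = AuditTrailPlugin()
plugin.on_media_indexed("/videos/interview.mp4", {"duration": 120})
print(plugin.log)  # ['/videos/interview.mp4']
```

A hook-based design like this lets consumer, professional, and enterprise plugins all attach to the same indexing pipeline without touching the core.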

Powered by Open Source AI Models

Cinestar leverages state-of-the-art open source models to deliver powerful local AI capabilities without compromising your privacy.

BGE (BAAI General Embedding)

High-quality text and image embeddings for semantic search. Converts your content into searchable vector representations.

Embedding Model

Moondream v2

Advanced vision-language model that generates detailed captions and descriptions of image and video content.

Vision Model

Llama 3.2 (3B)

Efficient language model handling general-purpose tasks including search query processing, content reconstruction, and natural language understanding.

Language Model
All models run locally on your hardware: no cloud dependencies.

How It Works

Three simple steps to transform your media into a searchable archive

Step 1

Ingest

Drop folders or videos. We transcribe (Whisper), embed frames (BGE), and generate captions (Moondream).

Step 2

Index

Store vectors in SQLite-Vec. Llama 3.2 processes and organizes content. Everything stays local.

Step 3

Search

Natural language queries return exact moments, complete with timecodes and thumbnails.
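The three steps above can be sketched end to end. Everything here is illustrative: the transcribe, caption, and embed stubs stand in for Whisper, Moondream, and BGE, and an in-memory list stands in for the SQLite-Vec index.

```python
# Stub model calls standing in for the real local models.
def transcribe(path):   # Whisper: audio -> transcript text
    return f"transcript of {path}"

def caption(path):      # Moondream: frames -> description text
    return f"caption of {path}"

def embed(text):        # BGE: text -> vector (toy length-based stand-in)
    return [len(text) % 7, len(text) % 5]

index = []

def ingest(path):
    # Steps 1-2: extract text from the media, embed it, store vectors locally.
    for text in (transcribe(path), caption(path)):
        index.append({"path": path, "text": text, "vec": embed(text)})

def search(query):
    # Step 3: compare the query vector against stored vectors, return best match.
    q = embed(query)
    best = min(index, key=lambda e: sum((a - b) ** 2 for a, b in zip(q, e["vec"])))
    return best["path"]

ingest("beach.mp4")
print(search("sunset on the beach"))  # beach.mp4
```

The real pipeline additionally keeps frame timecodes with each vector, so a match points to a moment inside a video rather than just the file.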

Ready when you are

Start with the open-source core. Add capabilities as you grow.