Cinestar

Install Cinestar

Follow these quick steps to run Cinestar on your Mac.

macOS

  1. Download the latest Cinestar.app from the SourceForge Files section.
  2. Move Cinestar.app to /Applications.
  3. If macOS blocks the app (Gatekeeper), clear the quarantine attributes:
    sudo xattr -c -r /Applications/Cinestar.app
    If the app is still in Downloads:
    sudo xattr -c -r ~/Downloads/Cinestar.app
  4. The first time you launch, right-click (Control-click) the app and choose Open to allow it through Gatekeeper; a quick verification command follows this list.
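
If you want to confirm that step 3 worked, list the extended attributes that remain on the bundle; if com.apple.quarantine is absent, Gatekeeper should no longer flag the app. This is a quick check, not an official step:

# Show remaining extended attributes on the app bundle
xattr -l /Applications/Cinestar.app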

Notes: The first run may take longer while models initialize. All processing runs locally.

Build from source (optional)

  1. Install Node.js 18+ and npm.
  2. Clone the source repository.
  3. Inside the project root, run (a combined end-to-end sketch follows this list):
    npm install
    npm run dev    # start Electron in dev mode
    # or
    npm run build  # produce a packaged app
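
Put together, and assuming a standard Electron project layout, the whole flow looks like the sketch below; the repository URL is a placeholder, since none is published here:

# Hypothetical end-to-end build; substitute the real repository URL
git clone <repository-url> cinestar
cd cinestar
npm install        # install dependencies
npm run build      # produce a packaged app (or npm run dev for development)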

AI Services Setup

Cinestar requires a local Ollama install for its AI features. Ollama serves three models: BGE for embeddings (semantic search), Moondream for vision (image and video captioning), and Llama 3.2 for general language tasks. All processing runs locally on your machine.
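
All three models sit behind Ollama's HTTP API on port 11434, so once the setup below is done you can exercise them directly. As a minimal illustration (not part of Cinestar itself), this requests an embedding from bge-large via Ollama's /api/embeddings endpoint:

# Ask the local Ollama server for an embedding vector
curl http://localhost:11434/api/embeddings \
  -d '{"model": "bge-large", "prompt": "a sunset over the ocean"}'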

Ollama Setup (Required)
  1. Install Ollama:
    📥 Download the Ollama installer from ollama.com/download, or install from the command line:
    # macOS/Linux - Quick install
    curl -fsSL https://ollama.com/install.sh | sh
    
    # Windows - Download .exe from ollama.com/download
  2. Pull the required AI models:

    Each pull downloads the model weights on first use; expect roughly 2-4 GB in total for all three.

    # Embedding model for semantic search
    ollama pull bge-large
    
    # Vision model for image/video captioning  
    ollama pull moondream:v2
    
    # Language model for general tasks
    ollama pull llama3.2:3b
  3. Verify models are ready:
    # List installed models
    ollama list
    
    # Test a model (optional)
    ollama run llama3.2:3b "Hello, are you working?"
  4. Launch the Cinestar app; Ollama starts automatically as a system service when needed (a manual fallback is shown below).
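
If the service does not come up on its own (for example, on Linux without the systemd unit), you can start the server manually; this is standard Ollama usage rather than anything Cinestar-specific:

# Run the Ollama server in the foreground (listens on localhost:11434)
ollama serve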

Benefits of a native Ollama install: better performance, easier model management, direct access to the Ollama CLI, faster startup times, and automatic model updates.

Health checks
# Check Ollama
ollama list

# Test Ollama API
curl http://localhost:11434/api/tags
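
Beyond listing models, you can round-trip a real request through the API; a minimal sketch, assuming llama3.2:3b has been pulled:

# One-shot, non-streaming generation via the HTTP API
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2:3b", "prompt": "Say hello", "stream": false}'
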
Manage Ollama
# Ollama runs as a system service and typically doesn't need manual stopping
# To restart Ollama (if needed):
# macOS: Quit Ollama from menu bar
# Linux: systemctl restart ollama
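
If you are unsure whether the server is running at all, its version endpoint is a cheap probe; any JSON reply means it is listening:

# Confirm the server responds on the default port
curl -s http://localhost:11434/api/version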