
# Install

ohara ships as two static binaries — ohara (the CLI) and ohara-mcp (the MCP stdio server) — built per-platform by cargo-dist and attached to every GitHub release.

## Supported platforms

| OS | Architectures |
|---|---|
| macOS | Apple silicon (`aarch64-apple-darwin`), Intel (`x86_64-apple-darwin`) |
| Linux | `aarch64-unknown-linux-gnu`, `x86_64-unknown-linux-gnu` |
| Windows | not supported — use WSL |

## One-shot installer

This is the recommended path: the script downloads the right binary for your platform, drops it on your PATH, and writes an install receipt that `ohara update` later uses for self-update:

```sh
curl --proto '=https' --tlsv1.2 -LsSf \
  https://github.com/vss96/ohara/releases/latest/download/ohara-cli-installer.sh | sh

curl --proto '=https' --tlsv1.2 -LsSf \
  https://github.com/vss96/ohara/releases/latest/download/ohara-mcp-installer.sh | sh
```

Two installers because the CLI and the MCP server are independent artifacts — most users want both, but you can install just one.
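Once an installer finishes, it is worth confirming the binaries actually resolve on PATH. A minimal sketch (`check_installed` is an illustrative helper, not part of ohara):

```sh
# Illustrative helper: report where each named binary resolves on PATH.
check_installed() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "$bin: $(command -v "$bin")"
    else
      echo "$bin: not found on PATH" >&2
    fi
  done
}

check_installed ohara ohara-mcp
```

If either line lands on stderr, your shell's PATH does not include the installer's target directory yet; restarting the shell usually fixes it.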

## Tarball download

If you’d rather not pipe a script:

  1. Open the releases page.
  2. Grab the `ohara-cli-*` and `ohara-mcp-*` tarballs matching your platform.
  3. Unpack and move the binaries somewhere on PATH (e.g. /usr/local/bin or ~/.local/bin).
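The tarball names embed the target triples from the table above. A sketch (illustrative, not an official script) of mapping `uname` output to the right triple:

```sh
# Map an OS/arch pair to the matching release target triple.
# Arguments default to the current machine's uname output.
target_triple() {
  os=${1:-$(uname -s)}
  arch=${2:-$(uname -m)}
  case "$os-$arch" in
    Darwin-arm64)  echo aarch64-apple-darwin ;;
    Darwin-x86_64) echo x86_64-apple-darwin ;;
    Linux-aarch64) echo aarch64-unknown-linux-gnu ;;
    Linux-x86_64)  echo x86_64-unknown-linux-gnu ;;
    *) echo "unsupported platform: $os/$arch" >&2; return 1 ;;
  esac
}

target_triple || true   # prints the triple for the current machine
```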

## Build from source

You need Rust 1.85 or newer (see rust-toolchain.toml). From a clone of the repo:

```sh
cargo build --release --workspace
```

Both binaries land under target/release/.
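To put a from-source build on PATH, something like the following works (a sketch; the destination directory is an example, adjust to your setup):

```sh
# Copy release-build binaries into a directory on PATH.
install_release_binaries() {
  dest=${1:-"$HOME/.local/bin"}   # example destination
  mkdir -p "$dest"
  for bin in ohara ohara-mcp; do
    if [ -f "target/release/$bin" ]; then
      install -m 0755 "target/release/$bin" "$dest/"
    fi
  done
}

install_release_binaries
```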

## Build with hardware acceleration

The cargo-dist installer for aarch64-apple-darwin (Apple Silicon) bundles the CoreML execution provider from v0.6.2 onwards — you no longer need to rebuild from source to get hardware acceleration on Apple Silicon. CUDA on Linux still requires a source rebuild:

```sh
# Linux x86_64 + NVIDIA — CUDA
cargo build --release --features cuda

# Apple silicon — CoreML (only needed if you want CoreML on a
# from-source build; the released binary already has it)
cargo build --release --features coreml
```

The features flow through ohara-embed to both ohara and ohara-mcp. Pair the resulting binary with ohara index --embed-provider coreml (or cuda) — or leave it on the default auto, which picks CoreML on Apple silicon, CUDA when CUDA_VISIBLE_DEVICES is set, and CPU otherwise. Default features stay CPU-only.

**CoreML on long index runs (auto-downgrade still applies).** On Apple Silicon, the CoreML execution path leaks ~4 MB per embed_batch call (MALLOC_LARGE heap, see docs/perf/v0.6.1-leak-diagnosis.md) — a 5,000+ commit first-time index would OOM the host before completing. The released v0.6.2 binary's `--embed-provider auto` therefore resolves to CPU on Apple Silicon when the upcoming index pass would walk 1,000 commits or more; short passes (query, index --incremental, small repos) still pick CoreML. Pass `--embed-provider coreml` explicitly to bypass the downgrade and accept the OOM risk; CPU and CUDA paths are unaffected. Re-opened upstream investigation (fastembed / ort) is tracked for a future release.
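Putting these rules together, the documented behavior of `--embed-provider auto` can be sketched as follows (illustrative only — the real logic lives in ohara-embed; the 1,000-commit threshold is the one stated above):

```sh
# Sketch of the documented auto resolution: CoreML on Apple Silicon
# (downgraded to CPU when the index pass would walk >= 1000 commits),
# CUDA when CUDA_VISIBLE_DEVICES is set, CPU otherwise.
resolve_auto_provider() {
  os=$1; arch=$2; commits=$3
  if [ "$os" = Darwin ] && [ "$arch" = arm64 ]; then
    if [ "$commits" -ge 1000 ]; then
      echo cpu     # avoid the CoreML per-batch leak on long runs
    else
      echo coreml
    fi
  elif [ -n "${CUDA_VISIBLE_DEVICES:-}" ]; then
    echo cuda
  else
    echo cpu
  fi
}
```

Short passes such as query or an incremental index correspond to a small commit count here, which is why they still land on CoreML.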

## Updating

The CLI can self-update in place:

```sh
ohara update              # install the latest release
ohara update --check      # report whether a newer version exists
ohara update --prerelease # opt into pre-release tags
```

ohara update only works when the binary was installed via the curl-pipe-sh installer above — it reads the install receipt that the installer dropped beside the binary. If you built from source or unpacked a tarball by hand, update by re-running the installer (or re-building). The cargo-dist installer also drops a standalone ohara-cli-update script alongside the binary; either entry point works. See ohara update for the full flag set.

## Next

Now that the binaries are on PATH, head to the Quickstart to index your first repo, or jump straight to Wiring into MCP clients if you already know the drill.