Last updated: March 2026
This guide explains how to run Leela Chess Zero
(lc0) on a Linux GPU server and reach it privately
from your own machine over Tailscale.
Optional feature note: This entire guide is optional. You do not need Vast.ai, Tailscale, or a remote Leela server to install or use AIChessGM itself.
Source package note: The downloadable setup package
now includes the Go source for uci-bridge and the startup
scripts used by this workflow. You can build the bridge for
Windows, macOS, or Linux. The included prebuilt bridge
binary remains Linux x86_64 only.
It covers:
Fresh-container note: up.sh now installs
tailscale automatically if the base Jupyter image does not
already include it. With up.sh, lc0, weights,
and .ts_authkey on a persistent /workspace
volume, recreating the Vast container should be enough to recover on
the next start.

Important scope note:
systemd examples in this guide remain Linux-first because
the recommended Vast.ai and home-server paths are Linux
deployments.

You are building a private path like this:
Mac or Windows client
-> Tailscale tailnet
-> Vast.ai Linux GPU server or home Linux GPU server
-> uci-bridge
-> lc0
The important design decisions are:
- a stable private hostname (vast-lc0) on your tailnet
- a bridge that can keep lc0 warm between sessions when configured to do so

Use this table first.
| Situation | Best path |
|---|---|
| You want the easiest first cloud success | Vast.ai + Jupyter/SSH container + up.sh |
| You want an always-on server at home | Home Linux + systemd |
| You already own a Linux GPU machine | Home Linux + systemd |
| You only want a cloud GPU when analyzing | Vast.ai + stop the instance when finished |
| You are brand new to this project | Start with one GPU, one engine, one weights file, one hostname |
For most users, the best order is:
- get an lc0 engine responding to uci and isready
- keep a local working copy of the setup files (for example /Users/james/LeelaOnVastAI)
- have an lc0 weights file (.pb or .pb.gz)
- have an lc0 binary for the OS that will host your engine
- have ncat or WSL available for manual UCI testing
- have systemd available if you want automatic startup

This project currently assumes:
- lc0 on Linux x86_64
- the cuda-fp16 backend
- engine port 5555
- management port 53350
- Tailscale hostname vast-lc0

If cuda-fp16 fails, try:
- cudnn
- cuda

Inside the extracted setup package, the bridge source lives under:

src/

Build the bridge that matches the machine running the Leela host:
cd LeelaSetupPackage/src
go build -o uci-bridge-linux ./cmd/uci-bridge
GOOS=windows GOARCH=amd64 go build -o uci-bridge-windows-amd64.exe ./cmd/uci-bridge
GOOS=darwin GOARCH=arm64 go build -o uci-bridge-darwin-arm64 ./cmd/uci-bridge

Use the Linux build for Vast.ai and systemd examples in
this guide. If your Leela host is a Windows machine, build the
.exe version and point it at your local
lc0.exe.
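Conceptually, the bridge's core job is simple: accept a TCP client and relay bytes to and from the engine process's stdin and stdout. The sketch below illustrates that idea only; it is not the actual uci-bridge source. The `relay` and `demo` names are mine, and `cat` stands in for lc0 (it echoes each line) so the example is self-contained.

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"net"
	"os/exec"
)

// relay pipes a single TCP client to an engine subprocess:
// client bytes go to the engine's stdin, engine stdout goes back out.
func relay(conn net.Conn, enginePath string) error {
	defer conn.Close()
	cmd := exec.Command(enginePath)
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return err
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return err
	}
	if err := cmd.Start(); err != nil {
		return err
	}
	go func() {
		io.Copy(stdin, conn) // client commands -> engine
		stdin.Close()
	}()
	io.Copy(conn, stdout) // engine replies -> client
	return cmd.Wait()
}

// demo starts a one-shot listener with "cat" standing in for lc0,
// connects, sends "uci", and returns whatever comes back.
func demo() string {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return err.Error()
	}
	go func() {
		conn, err := ln.Accept()
		if err == nil {
			relay(conn, "cat")
		}
	}()
	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		return err.Error()
	}
	fmt.Fprintln(conn, "uci")
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	return reply
}

func main() {
	fmt.Print(demo())
}
```

The real bridge adds engine warm-keeping, weights configuration, and optional auth on top of this loop, but the data path is the same shape.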
This is the most important buying decision.
You do not need the most expensive GPU on Vast.ai to
get a useful lc0 server. For a single private engine, a
stable and sensibly priced NVIDIA machine is usually better than chasing
the highest-end option.
Start with:
Vast exposes fields such as:
- verified
- reliability
- gpu_ram
- cpu_ram
- cpu_cores_effective
- disk_space
- inet_up
- inet_down
- dlperf
- dph_total

For a first-pass filter, I recommend:
- verified = true
- reliability >= 0.98
- gpu_ram >= 16
- disk_space >= 30
- cpu_ram >= 16
- cpu_cores_effective >= 8
- inet_up >= 100
- inet_down >= 100
- num_gpus = 1

These are recommended starting thresholds, not official requirements.
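If you pull offers from the Vast CLI as JSON, the first-pass filter above is easy to apply in code. The following is a hedged Go sketch: the `Offer` struct and `passesFirstPass` function are illustrative names of mine, not an official Vast.ai API shape.

```go
package main

import "fmt"

// Offer mirrors the Vast.ai search fields named above.
// Field names and units here are illustrative assumptions.
type Offer struct {
	Verified          bool
	Reliability       float64
	GPURAM            float64 // GB
	CPURAM            float64 // GB
	CPUCoresEffective float64
	DiskSpace         float64 // GB
	InetUp            float64 // Mbps
	InetDown          float64 // Mbps
	NumGPUs           int
}

// passesFirstPass applies the recommended starting thresholds from the text.
func passesFirstPass(o Offer) bool {
	return o.Verified &&
		o.Reliability >= 0.98 &&
		o.GPURAM >= 16 &&
		o.DiskSpace >= 30 &&
		o.CPURAM >= 16 &&
		o.CPUCoresEffective >= 8 &&
		o.InetUp >= 100 &&
		o.InetDown >= 100 &&
		o.NumGPUs == 1
}

func main() {
	good := Offer{true, 0.99, 24, 32, 16, 100, 500, 500, 1}
	flaky := Offer{true, 0.90, 24, 32, 16, 100, 500, 500, 1}
	fmt.Println(passesFirstPass(good), passesFirstPass(flaky))
}
```

Treat the thresholds as starting values you can tighten or loosen, exactly as the text above suggests.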
Prioritize:
Do not optimize for raw GPU alone if the host is unreliable.
Inference from this project’s defaults and typical lc0
use:
More VRAM gives you more headroom, but for a single lc0
engine you usually do not need the largest card on the marketplace.
This workload is not bandwidth-heavy, but you still want a responsive machine. Avoid obviously weak network offers.
You want enough persistent disk for:
- the lc0 binary and weights

If two offers are close in price and quality, prefer the one geographically closer to you. That reduces interactive latency.
For a first success:
Do this before touching lc0.
Pick a stable hostname such as vast-lc0. New tailnets generally have MagicDNS enabled by default, but check before relying on hostnames.
For automated server joins, create a Tailscale auth key.
Recommended starting choice:
Important:
The current Tailscale client requires macOS 12.0 or later. Tailscale recommends the standalone macOS variant as the default install path.
tailscale status
tailscale ip -4

Expected result:
Note:
host or nslookup may bypass MagicDNS behavior, so
ping vast-lc0 can work even when those tools do not.

The current Tailscale Windows client requires Windows 10 or later, or Windows Server 2016 or later.
Install the Tailscale client using the .exe or .msi. In PowerShell:
tailscale status
tailscale ip -4
Test-NetConnection vast-lc0 -Port 5555

If tailscale is not on your PATH, use the
system tray UI for the login and status confirmation.
For mainstream Linux distributions:
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

tailscale status
tailscale ip -4

If this machine will be your long-lived server, think about whether you want:
Only relax key expiry for trusted server nodes you actually control.
lc0 binary and weights

For Vast.ai and home Linux, use:
- a Linux x86_64 lc0 binary
- weights in .pb or .pb.gz form

This repo's baseline recommendation is:
- lc0 stable line: v0.32.1
- backend: cuda-fp16

If cuda-fp16 fails:
- cudnn
- cuda

Verify GPU visibility:

nvidia-smi

If nvidia-smi fails, stop there and fix the GPU runtime
first.
This is the best path for most new users.
Use:
- a persistent volume at /workspace
- up.sh as the startup script
- port 5555

This repo's current Vast-specific guide uses:
vastai/base-image_cuda-13.0.2-auto/jupyter

That is a practical starting point because it gives you:
When creating the instance:
- mount the persistent volume at /workspace

On the Vast instance, prepare:
- /workspace/uci-bridge-linux
- /workspace/lc0/build/lc0
- /workspace/BT4-332.pb or another weights path
- /workspace/up.sh
- /workspace/.ts_authkey

From this repo, a practical starting point is:
cd /workspace
git clone <YOUR_REPO_URL_OR_COPY_FILES> LeelaOnVastAI
cd LeelaOnVastAI
cp out/uci-bridge-persistent-linux /workspace/uci-bridge-linux
chmod +x /workspace/uci-bridge-linux
mkdir -p /workspace/lc0/build
# Put your Linux x86_64 lc0 binary here:
# /workspace/lc0/build/lc0
cp up.sh /workspace/up.sh
chmod +x /workspace/up.sh

Then:
- place your weights at /workspace/BT4-332.pb
- or set WEIGHTS_FILE later if you prefer another path

Store your Tailscale auth key carefully:
printf '%s\n' '<YOUR_TS_AUTHKEY>' > /workspace/.ts_authkey
chmod 600 /workspace/.ts_authkey

What up.sh does

up.sh is designed for container-style Linux environments
that do not use systemd.
It:
- installs tailscale automatically if the base container does not already have it
- runs tailscaled in userspace mode
- starts uci-bridge
- points the bridge at your lc0 binary and weights

Use:

/bin/bash -lc /workspace/up.sh

With a persistent volume mounted at /workspace, this
lets a freshly recreated Vast Jupyter container come back with the same
up.sh, bridge binary, lc0, weights, and auth
key without redoing the package install by hand.
If you want to override defaults, use environment variables such as:
TS_HOSTNAME=vast-lc0
LISTEN_ADDR=0.0.0.0:5555
MGMT_ADDR=0.0.0.0:53350
TS_ACCEPT_ROUTES=0
UCI_AUTH_TOKEN=<YOUR_OPTIONAL_APP_TOKEN>

If you want localhost-only listeners, use:
LISTEN_ADDR=127.0.0.1:5555
MGMT_ADDR=127.0.0.1:53350

For most users on Vast.ai, the better security model is:
After the instance starts:
tail -n 50 /workspace/logs/tailscaled.log
tail -n 50 /workspace/logs/bridge.log
tailscale --socket=/workspace/tailscaled.sock ip -4
ss -ltnp | grep -E '5555|53350'

Healthy signs:
- no lc0 startup errors

Vast.ai costs can accumulate silently if you leave the instance running.
Recommended habits:
If you prefer to run the container image directly instead of using
the Jupyter + up.sh path, this repo also supports a Docker
route.
From the repo:
docker build -t vast-lc0-uci-bridge .

Then run it with your weights volume and Tailscale variables.
Use this path if:
- you would rather manage the Dockerfile and entrypoint.sh directly

For most brand-new users, the up.sh path is simpler to
understand and debug.
Use this if you already own a Linux GPU machine and want a server at home.
Use:
- lc0 running directly on the host
- uci-bridge started by systemd

This is the cleanest long-lived home setup.
Confirm the GPU is visible:

nvidia-smi

Install and join Tailscale:

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --hostname=home-lc0

If you want a server-style identity, you can use an auth key and tagging instead of interactive login.
A practical layout is:
- /opt/lc0/lc0
- /opt/lc0/weights.pb.gz
- /usr/local/bin/uci-bridge-linux

From the setup package you can either:
- build the bridge from LeelaSetupPackage/src/, or
- use the included prebuilt Linux binary

Example using the included binary:
sudo cp /Users/james/LeelaOnVastAI/out/uci-bridge-persistent-linux /usr/local/bin/uci-bridge-linux
sudo chmod +x /usr/local/bin/uci-bridge-linux

Create:
/etc/systemd/system/lc0-cloud.service

Example:
[Unit]
Description=LC0 Cloud UCI Bridge
Wants=network-online.target
Requires=tailscaled.service
After=network-online.target tailscaled.service
[Service]
Type=simple
WorkingDirectory=/opt/lc0
User=root
Group=root
ExecStartPre=/usr/bin/test -x /usr/local/bin/uci-bridge-linux
ExecStartPre=/usr/bin/test -x /opt/lc0/lc0
ExecStartPre=/usr/bin/test -f /opt/lc0/weights.pb.gz
ExecStart=/usr/local/bin/uci-bridge-linux --listen 127.0.0.1:5555 --mgmt-listen 127.0.0.1:53350 --lc0 /opt/lc0/lc0 --weights /opt/lc0/weights.pb.gz --backend cuda-fp16 --engine-name lc0-cloud
Restart=always
RestartSec=3
StartLimitIntervalSec=0
[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload
sudo systemctl enable lc0-cloud
sudo systemctl start lc0-cloud
sudo systemctl status lc0-cloud --no-pager

journalctl -u lc0-cloud -n 100 --no-pager
ss -ltnp | grep -E '5555|53350'
tailscale status

Recommended security model on home Linux:
- keep the bridge listeners bound to 127.0.0.1

Once the server is healthy, you need only three things:
- the hostname: vast-lc0 or home-lc0
- the port: 5555
- your app-layer auth token, if you enabled one

From Terminal:

nc vast-lc0 5555

Then type:
uci
isready
Expected:
uciok
readyok

If you enabled app-layer auth:
AUTH <YOUR_AUTH_TOKEN>
uci
isready
First check reachability:
Test-NetConnection vast-lc0 -Port 5555

If you want a full interactive UCI test on Windows, use one of:
- ncat
- nc inside WSL

nc vast-lc0 5555

Then:
uci
isready
For an AIChessGM build that supports remote-engine connections, the values you need are:
- host: vast-lc0 or home-lc0
- port: 5555

If you are using a legacy management/discovery flow, the management port is:
53350

If the server is unreachable over the tailnet, check:
- tailscaled actually started

If the logs show weights not found, check:
- the weights path the bridge was given

If the logs show failed to start lc0, check:
- the lc0 path is correct

If the GPU backend will not initialize, check:
- nvidia-smi

Fallback order:
- cuda-fp16
- cudnn
- cuda

If the bridge port does not answer, check:
- that the bridge is listening on 5555 and nothing else has claimed it

Do not assume process exit alone stops billing.
If you want the highest chance of success:
- keep everything on a persistent volume at /workspace.
- use the prebuilt uci-bridge-persistent-linux bridge and a Linux lc0.
- drive startup with up.sh.
- store the auth key at /workspace/.ts_authkey.
- launch with /bin/bash -lc /workspace/up.sh.
- test with nc vast-lc0 5555.

This is the cleanest new-user path.
Use these to verify product details that may change over time:
Do not overbuild the first version.
For your first success, keep it to:
Once that works, you can add:
- lc0 tuning

AIChessGM already supports local and remote-style Ollama workflows internally, so in the future it may make sense to support running an Ollama model on a Vast.ai box that is also being used for Leela.
For now, this is a low-priority future task, not a recommended setup path.
Why it is lower priority:
- GPU contention between lc0 and the language model

Current recommendation: