AIChessGM Guide: Set Up a Leela Engine on Vast.ai or Home Linux

Last updated: March 2026


This guide explains how to run Leela Chess Zero (lc0) on a Linux GPU server and reach it privately from your own machine over Tailscale.

Optional feature note: This entire guide is optional. You do not need Vast.ai, Tailscale, or a remote Leela server to install or use AIChessGM itself.

Source package note: The downloadable setup package now includes the Go source for uci-bridge and the startup scripts used by this workflow. You can build the bridge for Windows, macOS, or Linux. The included prebuilt bridge binary remains Linux x86_64 only.

It covers:

Fresh-container note:

Important scope note:


1. What you are building

You are building a private path like this:

Mac or Windows client
  -> Tailscale tailnet
  -> Vast.ai Linux GPU server or home Linux GPU server
  -> uci-bridge
  -> lc0

The important design decisions are:


2. Choose your deployment path

Use this table first.

Situation                                   Best path
------------------------------------------  ------------------------------------------
You want the easiest first cloud success    Vast.ai + Jupyter/SSH container + up.sh
You want an always-on server at home        Home Linux + systemd
You already own a Linux GPU machine         Home Linux + systemd
You only want a cloud GPU when analyzing    Vast.ai + stop the instance when finished
You are brand new to this project           Start with one GPU, one engine, one weights file, one hostname

For most users, the best order is:

  1. Set up Tailscale on your local machine.
  2. Launch one Linux server.
  3. Get one lc0 engine responding to uci and isready.
  4. Only after that, add more tuning, more ports, or more engines.
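The handshake check in step 3 can be scripted. This is a sketch, not part of the setup package: it reads a UCI transcript on stdin and succeeds only if the engine acknowledged both commands. The `vast-lc0` hostname and port 5555 in the usage comment are the defaults used later in this guide.

```shell
# verify_handshake: read a UCI transcript on stdin and confirm both acks.
# Succeeds only if the engine answered "uciok" (to uci) and "readyok" (to isready).
verify_handshake() {
  transcript="$(cat)"
  printf '%s\n' "$transcript" | grep -q '^uciok'   || { echo "no uciok" >&2; return 1; }
  printf '%s\n' "$transcript" | grep -q '^readyok' || { echo "no readyok" >&2; return 1; }
  echo "engine ok"
}

# Against a live server (once one exists on your tailnet):
# printf 'uci\nisready\nquit\n' | nc -w 10 vast-lc0 5555 | verify_handshake
```

If this prints `engine ok`, the basic lc0 path is working and it is safe to move on to tuning.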

3. What you need before you start

Required accounts and software

Local machine requirements

macOS user

Windows user

Home Linux user

Server requirements

Project defaults used in this repo

This project currently assumes:

If cuda-fp16 fails, try:

  1. cudnn
  2. cuda

Build the bridge from the included source package

Inside the extracted setup package, the bridge source lives under:

Build the bridge that matches the machine running the Leela host:

cd LeelaSetupPackage/src
go build -o uci-bridge-linux ./cmd/uci-bridge
GOOS=windows GOARCH=amd64 go build -o uci-bridge-windows-amd64.exe ./cmd/uci-bridge
GOOS=darwin GOARCH=arm64 go build -o uci-bridge-darwin-arm64 ./cmd/uci-bridge

Use the Linux build for Vast.ai and systemd examples in this guide. If your Leela host is a Windows machine, build the .exe version and point it at your local lc0.exe.


4. How to choose a Vast.ai server for Leela Chess Zero

This is the most important buying decision.

You do not need the most expensive GPU on Vast.ai to get a useful lc0 server. For a single private engine, a stable and sensibly priced NVIDIA machine is usually better than chasing the highest-end option.

The short recommendation

Start with:

Vast.ai search fields that matter most

Vast exposes fields such as:

For a first-pass filter, I recommend:

These are recommended starting thresholds, not official requirements.

What to prioritize

1. Reliability and host quality

Prioritize:

Do not optimize for raw GPU alone if the host is unreliable.

2. GPU class

Inferred from this project's defaults and typical lc0 use:

3. VRAM

More VRAM gives you more headroom, but for a single lc0 engine you usually do not need the largest card on the marketplace.

4. Network speed

This workload is not bandwidth-heavy, but you still want a responsive machine. Avoid obviously weak network offers.

5. Disk and persistence

You want enough persistent disk for:

6. Region

If two offers are close in price and quality, prefer the one geographically closer to you. That reduces interactive latency.

A practical Vast.ai buying checklist

Recommendation for new users

For a first success:


5. Set up Tailscale first

Do this before touching lc0.

Why Tailscale is the right default here

Create or confirm your tailnet

  1. Create a Tailscale account.
  2. Sign in on your local machine.
  3. Confirm you can see your machine in the Tailscale admin console.
  4. Enable MagicDNS if it is not already enabled.

New tailnets generally have MagicDNS enabled by default, but check before relying on hostnames.

Create an auth key for servers

For automated server joins, create a Tailscale auth key.

Recommended starting choice:

Important:


6. Local Tailscale setup by operating system

macOS

Install

The current Tailscale client requires macOS 12.0 or later. Tailscale recommends the standalone macOS variant as the default install path.

Sign in

  1. Install Tailscale.
  2. Follow the onboarding flow.
  3. Approve the VPN/network extension prompts.
  4. Log in with your chosen identity provider.

Verify

tailscale status
tailscale ip -4

Expected result:

Note:

Windows

Install

The current Tailscale Windows client requires Windows 10 or later, or Windows Server 2016 or later.

Sign in

  1. Install the .exe or .msi.
  2. Open the Tailscale system tray icon.
  3. Choose Log in.
  4. Finish the browser-based login.

Verify

In PowerShell:

tailscale status
tailscale ip -4
Test-NetConnection vast-lc0 -Port 5555   # only meaningful once the server from later sections is online

If tailscale is not on your PATH, use the system tray UI for the login and status confirmation.

Home Linux

Install

For mainstream Linux distributions:

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

Verify

tailscale status
tailscale ip -4

If this machine will be your long-lived server, think about whether you want:

Only relax key expiry for trusted server nodes you actually control.


7. Get the right lc0 binary and weights

For Vast.ai and home Linux, use:

This repo’s baseline recommendation is:

If cuda-fp16 fails:

  1. try cudnn
  2. then try cuda
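That fallback order can be automated by probing each backend and keeping the first one that works. The sketch below injects the probe as a command so the logic stays generic; on the server the probe would run lc0 itself, for example `printf 'uci\nquit\n' | lc0 --backend=<name> ... | grep -q uciok` (exact probe left to you).

```shell
# pick_backend PROBE BACKEND...: print the first backend for which PROBE succeeds.
# PROBE is any command or function that takes a backend name and returns 0 on success.
pick_backend() {
  probe="$1"; shift
  for be in "$@"; do
    if "$probe" "$be"; then
      echo "$be"
      return 0
    fi
  done
  echo "no working backend" >&2
  return 1
}

# With a real lc0 probe you would call:
# pick_backend probe_lc0 cuda-fp16 cudnn cuda
```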

Before doing anything else on the server

Verify GPU visibility:

nvidia-smi

If nvidia-smi fails, stop there and fix the GPU runtime first.
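A fail-fast guard makes this rule hard to skip in scripts. This is a sketch; `nvidia-smi` is the only tool it assumes, and the helper itself is not part of the setup package.

```shell
# require_cmd: run a command quietly and abort the precheck if it fails.
require_cmd() {
  if ! "$@" >/dev/null 2>&1; then
    echo "precheck failed: $*" >&2
    return 1
  fi
  echo "precheck ok: $1"
}

# At the top of any server-side setup script:
# require_cmd nvidia-smi || exit 1
```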


8. Vast.ai main path: Jupyter/SSH container + up.sh

This is the best path for most new users.

Use:

This repo’s current Vast-specific guide uses:

That is a practical starting point because it gives you:

Step 1: Launch the instance

When creating the instance:

Step 2: Put the required files in /workspace

On the Vast instance, prepare:

From this repo, a practical starting point is:

cd /workspace
git clone <YOUR_REPO_URL_OR_COPY_FILES> LeelaOnVastAI
cd LeelaOnVastAI

cp out/uci-bridge-persistent-linux /workspace/uci-bridge-linux
chmod +x /workspace/uci-bridge-linux

mkdir -p /workspace/lc0/build
# Put your Linux x86_64 lc0 binary here:
# /workspace/lc0/build/lc0

cp up.sh /workspace/up.sh
chmod +x /workspace/up.sh

Then:

Store your Tailscale auth key carefully:

printf '%s\n' '<YOUR_TS_AUTHKEY>' > /workspace/.ts_authkey
chmod 600 /workspace/.ts_authkey

Step 3: Understand what up.sh does

up.sh is designed for container-style Linux environments that do not use systemd.

It:

Step 4: Set the Vast On-start command

Use:

/bin/bash -lc /workspace/up.sh

With a persistent volume mounted at /workspace, this lets a freshly recreated Vast Jupyter container come back with the same up.sh, bridge binary, lc0, weights, and auth key without redoing the package install by hand.

Step 5: Optional environment variables

If you want to override defaults, use environment variables such as:

TS_HOSTNAME=vast-lc0
LISTEN_ADDR=0.0.0.0:5555
MGMT_ADDR=0.0.0.0:53350
TS_ACCEPT_ROUTES=0
UCI_AUTH_TOKEN=<YOUR_OPTIONAL_APP_TOKEN>

If you want localhost-only listeners, use:

LISTEN_ADDR=127.0.0.1:5555
MGMT_ADDR=127.0.0.1:53350
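One way to see how these variables interact with the defaults is to assemble the bridge command line they produce. This is a sketch of up.sh-style defaulting, not the real script: the flag names mirror the systemd example later in this guide, the lc0 path is the /workspace layout from Step 2, and the weights path is an assumption.

```shell
# bridge_cmd: print the bridge invocation, applying defaults for unset variables.
# LISTEN_ADDR and MGMT_ADDR are the environment variables from Step 5.
bridge_cmd() {
  listen="${LISTEN_ADDR:-0.0.0.0:5555}"
  mgmt="${MGMT_ADDR:-0.0.0.0:53350}"
  echo "/workspace/uci-bridge-linux" \
       "--listen $listen --mgmt-listen $mgmt" \
       "--lc0 /workspace/lc0/build/lc0" \
       "--weights /workspace/lc0/weights.pb.gz" \
       "--backend cuda-fp16"
}
```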

For most users on Vast.ai, the better security model is:

Step 6: Start and verify

After the instance starts:

tail -n 50 /workspace/logs/tailscaled.log
tail -n 50 /workspace/logs/bridge.log
tailscale --socket=/workspace/tailscaled.sock ip -4
ss -ltnp | grep -E '5555|53350'
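The port check above can be wrapped so a startup or health script can act on it. This sketch only parses `ss -ltnp` output; ports 5555 and 53350 are this guide's defaults.

```shell
# ports_ok: read `ss -ltnp` output on stdin; succeed only if both listeners exist.
ports_ok() {
  listing="$(cat)"
  printf '%s\n' "$listing" | grep -q ':5555 '  || { echo "bridge port 5555 not listening" >&2; return 1; }
  printf '%s\n' "$listing" | grep -q ':53350 ' || { echo "mgmt port 53350 not listening" >&2; return 1; }
  echo "listeners ok"
}

# On the server: ss -ltnp | ports_ok
```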

Healthy signs:

Step 7: Cost control

Vast.ai costs can accumulate silently if you leave the instance running.

Recommended habits:


9. Vast.ai alternative: Dockerized path

If you prefer to run the container image directly instead of using the Jupyter + up.sh path, this repo also supports a Docker route.

From the repo:

docker build -t vast-lc0-uci-bridge .

Then run it with your weights volume and Tailscale variables.
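The exact run command depends on how the image was built, so treat this as a sketch only: `--gpus all` assumes the NVIDIA container runtime is available, and the `TS_AUTHKEY` variable name and mount points are assumptions, not documented values for this image. Tailscale inside a container may also need extra capabilities or device access.

```shell
# Hypothetical invocation; adjust names, mounts, and variables to your image.
docker run -d --name lc0-bridge \
  --gpus all \
  -v /workspace:/workspace \
  -e TS_AUTHKEY="$(cat /workspace/.ts_authkey)" \
  -e TS_HOSTNAME=vast-lc0 \
  vast-lc0-uci-bridge
```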

Use this path if:

For most brand-new users, the up.sh path is simpler to understand and debug.


10. Home Linux server setup

Use this if you already own a Linux GPU machine and want a server at home.

Use:

This is the cleanest long-lived home setup.

Step 1: Verify the GPU runtime

nvidia-smi

Step 2: Install Tailscale

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --hostname=home-lc0

If you want a server-style identity, you can use an auth key and tagging instead of interactive login.

Step 3: Prepare files

A practical layout is:

From the setup package you can either:

Example using the included binary:

sudo cp ~/LeelaOnVastAI/out/uci-bridge-persistent-linux /usr/local/bin/uci-bridge-linux
sudo chmod +x /usr/local/bin/uci-bridge-linux

Step 4: Create a systemd unit

Create:

Example:

[Unit]
Description=LC0 Cloud UCI Bridge
Wants=network-online.target
Requires=tailscaled.service
After=network-online.target tailscaled.service

[Service]
Type=simple
WorkingDirectory=/opt/lc0
User=root
Group=root

ExecStartPre=/usr/bin/test -x /usr/local/bin/uci-bridge-linux
ExecStartPre=/usr/bin/test -x /opt/lc0/lc0
ExecStartPre=/usr/bin/test -f /opt/lc0/weights.pb.gz

ExecStart=/usr/local/bin/uci-bridge-linux --listen 127.0.0.1:5555 --mgmt-listen 127.0.0.1:53350 --lc0 /opt/lc0/lc0 --weights /opt/lc0/weights.pb.gz --backend cuda-fp16 --engine-name lc0-cloud

Restart=always
RestartSec=3
StartLimitIntervalSec=0

[Install]
WantedBy=multi-user.target

Step 5: Enable and start

sudo systemctl daemon-reload
sudo systemctl enable lc0-cloud
sudo systemctl start lc0-cloud
sudo systemctl status lc0-cloud --no-pager

Step 6: Verify

journalctl -u lc0-cloud -n 100 --no-pager
ss -ltnp | grep -E '5555|53350'
tailscale status

Recommended security model on home Linux:


11. How to connect from your client machine

Once the server is healthy, you need only three things:

Mac client test

From Terminal:

nc vast-lc0 5555

Then type:

uci
isready

Expected:

If you enabled app-layer auth:

AUTH <YOUR_AUTH_TOKEN>
uci
isready

Windows client test

First check reachability:

Test-NetConnection vast-lc0 -Port 5555

If you want a full interactive UCI test on Windows, use one of:

Linux client test

nc vast-lc0 5555

Then:

uci
isready

AIChessGM-specific note

For an AIChessGM build that supports remote-engine connections, the values you need are:

If you are using a legacy management/discovery flow, the management port is:


12. Troubleshooting

Problem: Tailscale node does not show up

Check:

Problem: weights not found

Check:

Problem: failed to start lc0

Check:

Problem: backend failure on startup

Check:

Fallback order:

  1. cuda-fp16
  2. cudnn
  3. cuda

Problem: hostname does not resolve

Check:

Fallback:

Problem: connection refused on 5555

Check:

Problem: Vast bill keeps running

Check:

Do not assume process exit alone stops billing.


13. Security recommendations


14. Quick start summary

If you want the highest chance of success:

  1. Install Tailscale on your Mac.
  2. Launch one Vast.ai NVIDIA instance with persistent /workspace.
  3. Copy in:
  4. Store the Tailscale auth key in /workspace/.ts_authkey.
  5. Set the Vast On-start command to /bin/bash -lc /workspace/up.sh.
  6. Wait 45 to 60 seconds.
  7. Test with nc vast-lc0 5555.
  8. Only after that, connect your GUI or remote-engine client.

This is the cleanest new-user path.


15. Official references

Use these to verify product details that may change over time:


16. Final advice

Do not overbuild the first version.

For your first success, keep it to:

Once that works, you can add:

17. Future possibility: Ollama on Vast.ai

AIChessGM already supports local and remote-style Ollama workflows internally, so in the future it may make sense to support running an Ollama model on a Vast.ai box that is also being used for Leela.

For now, this is a low-priority future task, not a recommended setup path.

Why it is lower priority:

Current recommendation: