
Walkthrough: Build and Deploy Price Prediction Worker Node

Overview

This guide walks through deploying a worker node that predicts cryptocurrency prices (ETH, BTC, SOL, etc.) using machine learning models. You'll configure data sources, select ML models, and deploy via Docker.

Prerequisites

  1. Review the worker deployment with Docker documentation.
  2. Clone the basic-coin-prediction-node repository:
git clone https://github.com/allora-network/basic-coin-prediction-node
cd basic-coin-prediction-node

Configuration

Environment Variables (.env)

Configure your .env file with the following parameters:

TOKEN

Cryptocurrency to predict. Options: ETH, SOL, BTC, BNB, ARB

Note: For Binance, any token works. For Coingecko, add the token's coin_id to the token map. See the Coingecko docs and coin list.
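
For reference, the token map pairs each ticker symbol with its Coingecko coin_id. A minimal sketch of the idea in Python (the variable name and its location in the repository are assumptions for illustration):

# Ticker -> Coingecko coin_id, per Coingecko's coin list.
# Extend this mapping to support additional tokens with Coingecko.
TOKEN_MAP = {
    "ETH": "ethereum",
    "BTC": "bitcoin",
    "SOL": "solana",
    "BNB": "binancecoin",
    "ARB": "arbitrum",
}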

TRAINING_DAYS

Days of historical data for training. Must be ≥ 1.

  • 1-7 days: Captures recent volatility
  • 7-30 days: Balanced historical context
  • 30+ days: Long-term pattern recognition

TIMEFRAME

Data granularity (e.g., 10min, 1h, 1d).

For Coingecko, avoid downsampling by following these minimums (a quick validation sketch follows the list):

  • TIMEFRAME >= 30min if TRAINING_DAYS <= 2
  • TIMEFRAME >= 4h if TRAINING_DAYS <= 30
  • TIMEFRAME >= 4d if TRAINING_DAYS >= 31
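
These thresholds are straightforward to check before training. The following Python sketch encodes them; it is illustrative only, not part of the repository, and the helper names are assumptions:

import re

# Minimum TIMEFRAME (in minutes) for each TRAINING_DAYS band, per the
# Coingecko guidance above. Illustrative sketch; not repository code.
def min_timeframe_minutes(training_days: int) -> int:
    if training_days <= 2:
        return 30             # TIMEFRAME >= 30min
    if training_days <= 30:
        return 4 * 60         # TIMEFRAME >= 4h
    return 4 * 24 * 60        # TIMEFRAME >= 4d

def timeframe_to_minutes(timeframe: str) -> int:
    # Parse values like "30min", "4h", or "1d" into minutes.
    value, unit = re.fullmatch(r"(\d+)(min|h|d)", timeframe).groups()
    return int(value) * {"min": 1, "h": 60, "d": 24 * 60}[unit]

def validate(training_days: int, timeframe: str) -> None:
    if timeframe_to_minutes(timeframe) < min_timeframe_minutes(training_days):
        raise ValueError(
            f"TIMEFRAME {timeframe} is too fine for TRAINING_DAYS={training_days}"
        )

validate(30, "4h")  # OK: matches the example .env below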

MODEL

ML model for prediction. Options:

  • LinearRegression: Fast, linear relationships
  • SVR: Non-linear patterns, handles outliers
  • KernelRidge: Balanced complexity
  • BayesianRidge: Provides uncertainty estimates

Add custom models in model.py.
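
As an illustration of the kind of change involved, the sketch below maps the MODEL value to a scikit-learn regressor. The repository's model.py differs in detail, and the get_model helper here is hypothetical:

from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import BayesianRidge, LinearRegression
from sklearn.svm import SVR

# Illustrative mapping from the MODEL env value to a scikit-learn
# regressor. "get_model" is a hypothetical helper, not the repo's API.
MODELS = {
    "LinearRegression": LinearRegression,
    "SVR": SVR,
    "KernelRidge": KernelRidge,
    "BayesianRidge": BayesianRidge,
}

def get_model(name: str):
    # A custom model is one more entry in the mapping above.
    try:
        return MODELS[name]()
    except KeyError:
        raise ValueError(f"Unsupported MODEL: {name}")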

DATA_PROVIDER

Data source. Options: binance or coingecko

REGION

For Binance only. Options: EU or US

CG_API_KEY

Your Coingecko API key (required if DATA_PROVIDER=coingecko)

Example .env

TOKEN=ETH
TRAINING_DAYS=30
TIMEFRAME=4h
MODEL=SVR
REGION=US
DATA_PROVIDER=binance
CG_API_KEY=

Network Configuration (config.json)

  1. Copy config.example.json to config.json
  2. Update the following fields:

wallet

  • nodeRpc: RPC URL for your network
  • addressKeyName: Wallet key name from wallet setup
  • addressRestoreMnemonic: Wallet mnemonic phrase

worker

Array of topic configurations. Each topic requires:

  • topicId: Topic ID for this worker
  • InferenceEndpoint: Endpoint exposing inferences (e.g., http://localhost:8000/inference/{Token})
  • Token: Token identifier matching your endpoint implementation
⚠️ The worker array supports multiple topics. Duplicate the block and adjust topicId, Token, and InferenceEndpoint for each additional topic:

"worker": [
      {
        "topicId": 1,
        "inferenceEntrypointName": "api-worker-reputer",
        "loopSeconds": 5,
        "parameters": {
          "InferenceEndpoint": "http://localhost:8000/inference/{Token}",
          "Token": "ETH"
        }
      },
      {
        "topicId": 2,
        "inferenceEntrypointName": "api-worker-reputer",
        "loopSeconds": 5,
        "parameters": {
          "InferenceEndpoint": "http://localhost:8000/inference/{Token}",
          "Token": "ETH"
        }
      }
    ],

Model Customization

The basic-coin-prediction-node includes a regression model for ETH price prediction on topic 1. Learn to customize it in the model.py walkthrough.

Deployment

Step 1: Export Variables

From the root directory:

chmod +x init.config
./init.config

This exports environment variables from your config.json for the offchain node.

💡 If you modify config.json after running init.config, rerun it before proceeding:

chmod +x init.config
./init.config

Step 2: Get Testnet Tokens

Copy your Allora address and request tokens from the Allora Testnet Faucet for worker registration.

Step 3: Deploy

docker compose up --build

This starts the offchain node and inference services. They communicate through internal Docker DNS.

Verification

If deployment succeeds, you'll see the worker checking for active nonces:

offchain_node    | {"level":"debug","topicId":1,"time":1723043600,"message":"Checking for latest open worker nonce on topic"}

Successful inference submission shows:

{"level":"debug","msg":"Send Worker Data to chain","txHash":<tx-hash>,"time":<timestamp>,"message":"Success"}

Test Locally

Test your inference server:

curl http://localhost:8000/inference/<token>

Verify the response format and prediction values.
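
For a scripted version of the same check, here is a small Python sketch. It assumes the endpoint returns a bare numeric prediction in the response body; confirm this against your implementation and adjust the parsing as needed:

import requests

# Query the local inference server and sanity-check that the body
# parses as a number. The bare-numeric body is an assumption; adjust
# to match your endpoint's actual response format.
TOKEN = "ETH"
resp = requests.get(f"http://localhost:8000/inference/{TOKEN}", timeout=10)
resp.raise_for_status()
prediction = float(resp.text)
print(f"{TOKEN} prediction: {prediction}")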

Next Steps