Walkthrough: Build and Deploy Price Prediction Worker Node
Overview
This guide walks through deploying a worker node that predicts cryptocurrency prices (ETH, BTC, SOL, etc.) using machine learning models. You'll configure data sources, select ML models, and deploy via Docker.
Prerequisites
- Review the worker deployment with Docker documentation
- Clone the basic-coin-prediction-node repository:
git clone https://github.com/allora-network/basic-coin-prediction-node
cd basic-coin-prediction-node
Configuration
Environment Variables (.env)
Configure your .env file with the following parameters:
TOKEN
Cryptocurrency to predict. Options: ETH, SOL, BTC, BNB, ARB
Note: For Binance, any token works. For Coingecko, add the token's
coin_id to the token map. See the Coingecko docs and coin list.
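For illustration, such a map is just a dictionary from ticker to Coingecko coin_id; the map's actual name and location in the repository may differ:
# Illustrative token map for Coingecko lookups; the repository's
# actual map may use a different name or location.
TOKEN_TO_COIN_ID = {
    "ETH": "ethereum",
    "BTC": "bitcoin",
    "SOL": "solana",
    "BNB": "binancecoin",
    "ARB": "arbitrum",
}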
TRAINING_DAYS
Days of historical data for training. Must be ≥ 1.
- 1-7 days: Captures recent volatility
- 7-30 days: Balanced historical context
- 30+ days: Long-term pattern recognition
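For example, TRAINING_DAYS=30 with a 4h TIMEFRAME (see below) yields roughly 30 × 6 = 180 training samples.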
TIMEFRAME
Data granularity (e.g., 10min, 1h, 1d).
For Coingecko, avoid downsampling by following these minimums:
- TIMEFRAME >= 30min if TRAINING_DAYS <= 2
- TIMEFRAME >= 4h if TRAINING_DAYS <= 30
- TIMEFRAME >= 4d if TRAINING_DAYS >= 31
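These bounds are easy to check programmatically. The helper below is a sketch for sanity-checking a (TRAINING_DAYS, TIMEFRAME) pair against the minimums above; it is not part of the repository:
# Sketch: validate a (TRAINING_DAYS, TIMEFRAME) pair against the
# Coingecko minimums listed above. Not part of the repository.
TIMEFRAME_MINUTES = {"10min": 10, "30min": 30, "1h": 60, "4h": 240, "1d": 1440, "4d": 5760}

def valid_for_coingecko(training_days: int, timeframe: str) -> bool:
    minutes = TIMEFRAME_MINUTES[timeframe]
    if training_days <= 2:
        return minutes >= 30      # >= 30min
    if training_days <= 30:
        return minutes >= 240     # >= 4h
    return minutes >= 5760        # >= 4d

print(valid_for_coingecko(30, "4h"))  # True, matching the example .env below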
MODEL
ML model for prediction. Options:
- LinearRegression: Fast, linear relationships
- SVR: Non-linear patterns, handles outliers
- KernelRidge: Balanced complexity
- BayesianRidge: Provides uncertainty estimates
Add custom models in model.py.
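These option names match scikit-learn estimators. As a rough sketch (model.py may structure this differently), selecting one from the MODEL variable can be as simple as:
# Sketch: map the MODEL environment variable to a scikit-learn
# estimator; model.py may organize this differently.
import os
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import BayesianRidge, LinearRegression
from sklearn.svm import SVR

MODELS = {
    "LinearRegression": LinearRegression,
    "SVR": SVR,
    "KernelRidge": KernelRidge,
    "BayesianRidge": BayesianRidge,
}

model = MODELS[os.environ.get("MODEL", "LinearRegression")]()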
DATA_PROVIDER
Data source. Options: binance or coingecko
REGION
For Binance only. Options: EU or US
CG_API_KEY
Your Coingecko API key (required if DATA_PROVIDER=coingecko)
Example .env
TOKEN=ETH
TRAINING_DAYS=30
TIMEFRAME=4h
MODEL=SVR
REGION=US
DATA_PROVIDER=binance
CG_API_KEY=
Network Configuration (config.json)
- Copy config.example.json to config.json
- Update the following fields:
wallet
- nodeRpc: RPC URL for your network
- addressKeyName: Wallet key name from wallet setup
- addressRestoreMnemonic: Wallet mnemonic phrase
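For orientation, a minimal wallet block might look like the following; values are placeholders, only the fields above are shown, and config.example.json may include additional fields:
"wallet": {
    "nodeRpc": "<your-network-rpc-url>",
    "addressKeyName": "<your-wallet-key-name>",
    "addressRestoreMnemonic": "<your-mnemonic-phrase>"
},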
worker
Array of topic configurations. Each topic requires:
- topicId: Topic ID for this worker
- InferenceEndpoint: Endpoint exposing inferences (e.g., http://localhost:8000/inference/{Token})
- Token: Token identifier matching your endpoint implementation
The worker array supports multiple topics. Duplicate and modify the configuration for each additional topic:
"worker": [
{
"topicId": 1,
"inferenceEntrypointName": "api-worker-reputer",
"loopSeconds": 5,
"parameters": {
"InferenceEndpoint": "http://localhost:8000/inference/{Token}",
"Token": "ETH"
}
},
{
"topicId": 2,
"inferenceEntrypointName": "api-worker-reputer",
"loopSeconds": 5,
"parameters": {
"InferenceEndpoint": "http://localhost:8000/inference/{Token}",
"Token": "ETH"
}
}
],
Model Customization
The basic-coin-prediction-node includes a regression model for ETH price prediction on topic 1. Learn to customize it in the model.py walkthrough.
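As a flavor of what customization can look like (illustrative only, not the walkthrough's actual code), swapping in a different scikit-learn regressor might be as simple as:
# Sketch: fit an alternative regressor on a series of close prices.
# Illustrative only; see the model.py walkthrough for the real steps.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_custom_model(close_prices: np.ndarray) -> GradientBoostingRegressor:
    X = np.arange(len(close_prices)).reshape(-1, 1)  # time index as the feature
    y = close_prices
    model = GradientBoostingRegressor(n_estimators=200)
    model.fit(X, y)
    return model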
Deployment
Step 1: Export Variables
From the root directory:
chmod +x init.config
./init.config
This exports environment variables from your config.json for the offchain node.
If you modify config.json after running init.config, rerun it before proceeding:
chmod +x init.config
./init.config
Step 2: Get Testnet Tokens
Copy your Allora address and request tokens from the Allora Testnet Faucet for worker registration.
Step 3: Deploy
docker compose up --build
This starts the offchain node and inference services. They communicate through internal Docker DNS.
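While the stack starts, you can follow the combined service logs from another terminal with docker compose logs -f.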
Verification
If deployment succeeds, you'll see the worker checking for active nonces:
offchain_node | {"level":"debug","topicId":1,"time":1723043600,"message":"Checking for latest open worker nonce on topic"}
Successful inference submission shows:
{"level":"debug","msg":"Send Worker Data to chain","txHash":<tx-hash>,"time":<timestamp>,"message":"Success"}Test Locally
Test your inference server:
curl http://localhost:8000/inference/<token>
Verify the response format and prediction values.
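The same check in Python, assuming the default port and an ETH topic:
# Sketch: query the local inference server and print the prediction.
import requests

resp = requests.get("http://localhost:8000/inference/ETH")
resp.raise_for_status()
print(resp.text)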
Next Steps
- Understand model implementation for customization
- Query worker data to monitor performance
- Review worker requirements for optimization
- Explore available topics for additional markets