
Optimization

Once you have a working algorithm, optimization helps you find the best parameter values. Reversion provides Bayesian optimization for parameter search and rolling cross-validation for robustness checking.

Bayesian optimization uses Optuna’s TPE (Tree-structured Parzen Estimator) sampler to intelligently search the parameter space. Unlike grid search, it learns from previous trials to focus on promising regions.

  1. You define a parameter space — which parameters to vary and their bounds
  2. You choose a scoring function — what to maximize (or minimize)
  3. The optimizer runs backtests with different parameter combinations
  4. Each trial’s score informs the next trial’s parameter selection
  5. After N trials, you get the best parameters found
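The loop above can be sketched in miniature. This is not Optuna's actual TPE sampler — just a greedy explore-then-refine stand-in — and `run_backtest` is a toy scoring function, but it illustrates how each trial's score feeds the next trial's parameter selection:

```python
import random

# Toy stand-in for a real backtest: score peaks near rsi_period=14, sl_pct=0.05.
def run_backtest(params):
    return -abs(params["rsi_period"] - 14) - 100 * abs(params["sl_pct"] - 0.05)

def optimize(n_trials=20, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for trial in range(n_trials):
        if trial < 5:
            # Early trials explore the bounds uniformly.
            params = {"rsi_period": rng.randint(5, 30),
                      "sl_pct": rng.uniform(0.02, 0.10)}
        else:
            # Later trials sample near the best point so far -- a crude
            # version of "each trial's score informs the next trial".
            params = {
                "rsi_period": max(5, min(30, best_params["rsi_period"] + rng.randint(-3, 3))),
                "sl_pct": max(0.02, min(0.10, best_params["sl_pct"] + rng.uniform(-0.01, 0.01))),
            }
        score = run_backtest(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best_params, best_score = optimize()
```

A real TPE sampler builds probability models over good and bad regions rather than perturbing the single best point, but the ask-score-update rhythm is the same.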
Parameter | Description
----------|------------
algo_id | Algorithm ID
version | Algorithm version
symbol | Trading pair
exchange_id | Exchange
start_time / end_time | Date range for backtests
n_trials | Number of trials (default: 20)
direction | maximize or minimize
param_space | Parameter bounds and types
param_mapping | How params map to algorithm config fields
scoring_code | Optional Python scoring function

Define which parameters to search over and their bounds:

{
"rsi_period": { "type": "int", "low": 5, "high": 30 },
"rsi_threshold": { "type": "float", "low": 20.0, "high": 40.0 },
"sl_pct": { "type": "float", "low": 0.02, "high": 0.10 }
}

Supported types:

Type | Description | Example
-----|-------------|--------
int | Integer range | Period: 5–30
float | Float range | Threshold: 20.0–40.0
categorical | Discrete choices | Timeframe: ["1h", "4h", "1d"]
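A small sketch of how these three types would be sampled from a param_space definition. The `choices` key for categorical parameters is an assumption (the doc only shows `low`/`high` bounds):

```python
import random

def sample_params(param_space, rng=random):
    """Draw one parameter combination from a param_space definition."""
    params = {}
    for name, spec in param_space.items():
        if spec["type"] == "int":
            params[name] = rng.randint(spec["low"], spec["high"])
        elif spec["type"] == "float":
            params[name] = rng.uniform(spec["low"], spec["high"])
        elif spec["type"] == "categorical":
            # "choices" is an assumed key name for the discrete options.
            params[name] = rng.choice(spec["choices"])
        else:
            raise ValueError(f"unknown type: {spec['type']}")
    return params

space = {
    "rsi_period": {"type": "int", "low": 5, "high": 30},
    "rsi_threshold": {"type": "float", "low": 20.0, "high": 40.0},
    "timeframe": {"type": "categorical", "choices": ["1h", "4h", "1d"]},
}
p = sample_params(space)
```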

Maps optimization parameters to algorithm config fields. This tells the optimizer where each parameter goes in the AlgoParams structure.
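The mapping format shown here is hypothetical (dotted paths into a nested config dict), but it sketches the idea of routing each optimized value into its place in the AlgoParams structure:

```python
def apply_param_mapping(params, mapping, config):
    """Write each optimized parameter into its place in a nested config dict."""
    for name, path in mapping.items():
        target = config
        *parents, leaf = path.split(".")
        for key in parents:
            target = target.setdefault(key, {})
        target[leaf] = params[name]
    return config

# Hypothetical mapping: dotted paths into an AlgoParams-like dict.
mapping = {"rsi_period": "indicators.rsi.period", "sl_pct": "risk.stop_loss_pct"}
config = apply_param_mapping({"rsi_period": 14, "sl_pct": 0.05}, mapping, {})
# config == {"indicators": {"rsi": {"period": 14}}, "risk": {"stop_loss_pct": 0.05}}
```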

By default, optimization maximizes the Sharpe ratio. You can provide a custom Python scoring function that runs in a sandbox:

# Receives the backtest results dict, returns a float score
score = (
    results['swapMetrics']['sharpeRatio'] * 0.5
    + results['swapMetrics']['profitFactor'] * 0.3
    - results['swapMetrics']['maxDrawdownPct'] * 0.2
)

Use compute_backtest_score to test your scoring function on existing backtest results before running a full optimization.
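As a quick local sanity check of the same formula, you can evaluate it against a hypothetical results payload (the field names follow the snippet above; the numbers are made up):

```python
def score_results(results):
    """Weighted score combining Sharpe ratio, profit factor, and drawdown."""
    m = results["swapMetrics"]
    return (m["sharpeRatio"] * 0.5
            + m["profitFactor"] * 0.3
            - m["maxDrawdownPct"] * 0.2)

# Hypothetical backtest results payload for a local sanity check.
sample = {"swapMetrics": {"sharpeRatio": 1.8, "profitFactor": 1.4, "maxDrawdownPct": 0.12}}
score = score_results(sample)  # 0.9 + 0.42 - 0.024 ≈ 1.296
```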

While an optimization runs, companion tools let you monitor and control it:

  • get_optimization_status — poll progress, see the best score/params so far, view trial history
  • stop_optimization — stop early if results are converging; all results so far are preserved
  • plot_optimization — generate matplotlib charts of the optimization trajectory

If you have prior knowledge (e.g., from a previous optimization run), pass warm_start_params and warm_start_scores to seed the optimizer with known good regions.
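The exact shape of these arguments is an assumption, but a plausible warm start pairs previously tried parameter sets with their scores:

```python
# Assumed shape: parallel lists of previously tried params and their scores.
warm_start_params = [
    {"rsi_period": 14, "rsi_threshold": 30.0, "sl_pct": 0.05},
    {"rsi_period": 21, "rsi_threshold": 25.0, "sl_pct": 0.03},
]
warm_start_scores = [1.8, 1.2]  # e.g., Sharpe-based scores from an earlier run

# Each params dict must line up with exactly one score.
assert len(warm_start_params) == len(warm_start_scores)
```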

Rolling cross-validation (walk-forward analysis) tests whether your algorithm generalizes to unseen data by splitting the date range into train/test windows.

|── Train ──|── Test ──|
      |── Train ──|── Test ──|
            |── Train ──|── Test ──|
                  |── Train ──|── Test ──|

The date range is divided into overlapping folds. Each fold:

  1. Train window — the optimizer (or your chosen params) runs on this period
  2. Test window — the algorithm is evaluated on this unseen period

The windows step forward by step_days, creating multiple out-of-sample tests.

Parameter | Description
----------|------------
algo_id | Algorithm ID
version | Algorithm version
symbol | Trading pair
exchange_id | Exchange
start_time / end_time | Full date range
train_window_days | Length of each training window
test_window_days | Length of each test window
step_days | How far to step forward between folds
capital_scaler | Optional capital multiplier

For a 12-month backtest with 90-day train, 30-day test, and 30-day step:

  • Fold 1: Train Jan–Mar, Test Apr
  • Fold 2: Train Feb–Apr, Test May
  • Fold 3: Train Mar–May, Test Jun
  • … and so on

This produces ~9 out-of-sample test periods. If the algorithm performs consistently across all test windows, it’s likely robust.
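The fold arithmetic can be sketched directly (the helper name and dates are illustrative). With a full year, a 90-day train window, a 30-day test window, and a 30-day step, this yields the 9 folds described above:

```python
from datetime import date, timedelta

def make_folds(start, end, train_days, test_days, step_days):
    """Generate (train_start, train_end, test_start, test_end) fold windows."""
    folds = []
    fold_start = start
    # A fold fits only if its full train + test span ends within the range.
    while fold_start + timedelta(days=train_days + test_days) <= end:
        train_end = fold_start + timedelta(days=train_days)
        test_end = train_end + timedelta(days=test_days)
        folds.append((fold_start, train_end, train_end, test_end))
        fold_start += timedelta(days=step_days)
    return folds

# 12 months, 90-day train, 30-day test, 30-day step -> 9 folds
folds = make_folds(date(2024, 1, 1), date(2025, 1, 1), 90, 30, 30)
```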

When reviewing the fold results, look for:

  • Consistent test performance — similar Sharpe/win rate across folds means the strategy generalizes
  • Train vs test gap — if train metrics are much better than test metrics, the strategy is likely overfit
  • Deteriorating folds — if later folds perform worse, the strategy may be losing edge over time (regime change)
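The train/test-gap check can be made mechanical. This helper (the function name and 0.5 threshold are illustrative choices, not part of the tool) flags a large gap as likely overfitting:

```python
def cv_report(train_sharpes, test_sharpes, gap_threshold=0.5):
    """Summarize fold metrics and flag a large train/test gap as likely overfitting."""
    mean_train = sum(train_sharpes) / len(train_sharpes)
    mean_test = sum(test_sharpes) / len(test_sharpes)
    return {
        "mean_train_sharpe": mean_train,
        "mean_test_sharpe": mean_test,
        "likely_overfit": (mean_train - mean_test) > gap_threshold,
    }

# Strong train Sharpe but weak test Sharpe across three folds -> overfit.
report = cv_report([2.1, 1.9, 2.0], [0.4, 0.2, 0.3])
```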

The recommended workflow combines both techniques:

  1. Backtest your base algorithm to establish a baseline
  2. Optimize parameters with Bayesian search on the full date range
  3. Validate the optimized parameters with rolling cross-validation
  4. Compare CV test-window metrics against the full-range optimization metrics
  5. If consistent — deploy the optimized parameters
  6. If overfit — reduce the parameter space (narrower bounds, fewer parameters) or use a simpler strategy

For more control, use the bo_suggest tool for step-by-step optimization:

  1. Run a backtest with initial parameters
  2. Score the results (manually or with compute_backtest_score)
  3. Pass observed params + scores to bo_suggest to get the next suggested parameters
  4. Run another backtest with the suggested params
  5. Repeat until satisfied

This lets you inspect each trial, adjust constraints, or change direction mid-optimization.
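The manual loop looks roughly like the following. Note that `bo_suggest_stub` is a placeholder, not the real bo_suggest tool: its actual interface is not shown here, and a real implementation would use the observed params and scores to pick promising points rather than sampling uniformly.

```python
import random

def bo_suggest_stub(observed_params, observed_scores, param_space, rng=random):
    # Placeholder for the bo_suggest tool. The real tool uses the observed
    # params/scores to choose the next point; this stub samples uniformly.
    params = {}
    for name, spec in param_space.items():
        if spec["type"] == "int":
            params[name] = rng.randint(spec["low"], spec["high"])
        else:
            params[name] = rng.uniform(spec["low"], spec["high"])
    return params

space = {"rsi_period": {"type": "int", "low": 5, "high": 30}}
observed_params, observed_scores = [], []
for _ in range(5):
    params = bo_suggest_stub(observed_params, observed_scores, space)
    score = -abs(params["rsi_period"] - 14)  # stand-in for backtest + scoring
    observed_params.append(params)          # feed the observation back in
    observed_scores.append(score)

best = observed_params[observed_scores.index(max(observed_scores))]
```

Because each iteration is explicit, you can pause between trials to inspect results, tighten bounds in the param space, or flip the optimization direction.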