# Optimization

Once you have a working algorithm, optimization helps you find the best parameter values. Reversion provides Bayesian optimization for parameter search and rolling cross-validation for robustness checking.
## Bayesian Optimization

Bayesian optimization uses Optuna’s TPE (Tree-structured Parzen Estimator) sampler to intelligently search the parameter space. Unlike grid search, it learns from previous trials to focus on promising regions.
### How It Works

- You define a parameter space — which parameters to vary and their bounds
- You choose a scoring function — what to maximize (or minimize)
- The optimizer runs backtests with different parameter combinations
- Each trial’s score informs the next trial’s parameter selection
- After N trials, you get the best parameters found
### Running an Optimization

| Parameter | Description |
| --- | --- |
| `algo_id` | Algorithm ID |
| `version` | Algorithm version |
| `symbol` | Trading pair |
| `exchange_id` | Exchange |
| `start_time` / `end_time` | Date range for backtests |
| `n_trials` | Number of trials (default: 20) |
| `direction` | `maximize` or `minimize` |
| `param_space` | Parameter bounds and types |
| `param_mapping` | How params map to algorithm config fields |
| `scoring_code` | Optional Python scoring function |
### Parameter Space

Define which parameters to search over and their bounds:

```json
{
  "rsi_period": { "type": "int", "low": 5, "high": 30 },
  "rsi_threshold": { "type": "float", "low": 20.0, "high": 40.0 },
  "sl_pct": { "type": "float", "low": 0.02, "high": 0.10 }
}
```

Supported types:
| Type | Description | Example |
| --- | --- | --- |
| `int` | Integer range | Period: 5–30 |
| `float` | Float range | Threshold: 20.0–40.0 |
| `categorical` | Discrete choices | Timeframe: `["1h", "4h", "1d"]` |
### Parameter Mapping

Maps optimization parameters to algorithm config fields. This tells the optimizer where each parameter goes in the `AlgoParams` structure.
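As a hypothetical illustration — the dotted-path syntax below is an assumption, not the documented format — a mapping routes each search parameter to a field inside `AlgoParams`:

```json
{
  "rsi_period": "indicators.rsi.period",
  "rsi_threshold": "entry.rsi_threshold",
  "sl_pct": "risk.stop_loss_pct"
}
```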
### Custom Scoring

By default, optimization maximizes the Sharpe ratio. You can provide a custom Python scoring function that runs in a sandbox:

```python
# Receives the backtest results dict, returns a float score
score = results['swapMetrics']['sharpeRatio'] * 0.5 + \
        results['swapMetrics']['profitFactor'] * 0.3 - \
        results['swapMetrics']['maxDrawdownPct'] * 0.2
```

Use `compute_backtest_score` to test your scoring function on existing backtest results before running a full optimization.
### Monitoring

- `get_optimization_status` — poll progress, see the best score/params so far, view trial history
- `stop_optimization` — stop early if results are converging; all results so far are preserved
- `plot_optimization` — generate matplotlib charts of the optimization trajectory
### Warm Starting

If you have prior knowledge (e.g., from a previous optimization run), pass `warm_start_params` and `warm_start_scores` to seed the optimizer with known good regions.
## Rolling Cross-Validation

Rolling cross-validation (walk-forward analysis) tests whether your algorithm generalizes to unseen data by splitting the date range into train/test windows.
### How It Works

```
|── Train ──|── Test ──|
      |── Train ──|── Test ──|
            |── Train ──|── Test ──|
                  |── Train ──|── Test ──|
```

The date range is divided into overlapping folds. Each fold has:
- Train window — the optimizer (or your chosen params) runs on this period
- Test window — the algorithm is evaluated on this unseen period
The windows step forward by step_days, creating multiple out-of-sample tests.
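The windowing itself can be sketched in a few lines of Python — a standalone illustration of the splitting logic, not the tool’s implementation:

```python
from datetime import date, timedelta

def rolling_folds(start, end, train_days, test_days, step_days):
    """Return (train_start, train_end, test_start, test_end) walk-forward windows."""
    folds = []
    cursor = start
    # A fold fits only if a full train + test span remains before `end`
    while cursor + timedelta(days=train_days + test_days) <= end:
        train_end = cursor + timedelta(days=train_days)
        test_end = train_end + timedelta(days=test_days)
        folds.append((cursor, train_end, train_end, test_end))
        cursor += timedelta(days=step_days)  # step forward to the next fold
    return folds

# 12 months with a 90-day train window, 30-day test window, 30-day step
folds = rolling_folds(date(2024, 1, 1), date(2024, 12, 31), 90, 30, 30)
print(len(folds))  # 9 out-of-sample test windows
```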
### Parameters

| Parameter | Description |
| --- | --- |
| `algo_id` | Algorithm ID |
| `version` | Algorithm version |
| `symbol` | Trading pair |
| `exchange_id` | Exchange |
| `start_time` / `end_time` | Full date range |
| `train_window_days` | Length of each training window |
| `test_window_days` | Length of each test window |
| `step_days` | How far to step forward between folds |
| `capital_scaler` | Optional capital multiplier |
### Example

For a 12-month backtest with 90-day train, 30-day test, and 30-day step:
- Fold 1: Train Jan–Mar, Test Apr
- Fold 2: Train Feb–Apr, Test May
- Fold 3: Train Mar–May, Test Jun
- … and so on
This produces ~9 out-of-sample test periods. If the algorithm performs consistently across all test windows, it’s likely robust.
### What to Look For

- Consistent test performance — similar Sharpe/win rate across folds means the strategy generalizes
- Train vs test gap — if train metrics are much better than test metrics, the strategy is likely overfit
- Deteriorating folds — if later folds perform worse, the strategy may be losing edge over time (regime change)
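These checks are easy to quantify. A minimal sketch using hypothetical per-fold Sharpe ratios (the numbers are made up for illustration):

```python
from statistics import mean, stdev

# Hypothetical per-fold Sharpe ratios from a rolling CV run
train_sharpe = [2.1, 2.3, 1.9, 2.2, 2.0]
test_sharpe = [1.4, 1.2, 1.5, 1.1, 1.3]

spread = stdev(test_sharpe)                   # low spread -> consistent test performance
gap = mean(train_sharpe) - mean(test_sharpe)  # large gap -> likely overfit
trend = test_sharpe[-1] - test_sharpe[0]      # strongly negative -> deteriorating folds

print(f"spread={spread:.2f} gap={gap:.2f} trend={trend:.2f}")
```

What counts as "large" or "low" depends on the strategy; comparing these numbers across candidate parameter sets is more informative than any absolute threshold.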
## Workflow: Optimize → Validate

The recommended workflow combines both techniques:
- Backtest your base algorithm to establish a baseline
- Optimize parameters with Bayesian search on the full date range
- Validate the optimized parameters with rolling cross-validation
- Compare CV test-window metrics against the full-range optimization metrics
  - If consistent — deploy the optimized parameters
  - If overfit — reduce the parameter space (tighter bounds, fewer parameters) or switch to a simpler strategy
## Agent-Driven Optimization

For more control, use the `bo_suggest` tool for step-by-step optimization:

- Run a backtest with initial parameters
- Score the results (manually or with `compute_backtest_score`)
- Pass observed params + scores to `bo_suggest` to get the next suggested parameters
- Run another backtest with the suggested params
- Repeat until satisfied
This lets you inspect each trial, adjust constraints, or change direction mid-optimization.