Training method; determines how parameters, factors, rules, or machine learning models are generated in the training process.
The following flags can be set:
ASCENT | Ascent parameter optimization and
parameter chart export. Evaluates the effect
of every parameter on the strategy separately, starting with the first parameter
and applying the results of already optimized parameters and the defaults of
the rest. Seeks 'plateaus' in the parameter space, while
ignoring single peaks. This is normally the best algorithm for a robust strategy,
except in special cases with highly irregular parameter spaces or with interdependent parameters.
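The plateau-seeking idea can be sketched in plain C (a simplified illustration, not Zorro's actual implementation): each optimize step's result is averaged with its direct neighbors, so a broad hill outscores an isolated spike.

```c
/* Plateau-seeking sketch: score each step by the mean of itself and its
   neighbors, then pick the best smoothed step. A single sharp peak loses
   against a broad plateau of similar results. */

/* smoothed score of step i: mean of the step and its direct neighbors */
static double smooth(const double* r, int n, int i)
{
    double sum = r[i];
    int cnt = 1;
    if (i > 0)   { sum += r[i-1]; cnt++; }
    if (i < n-1) { sum += r[i+1]; cnt++; }
    return sum / cnt;
}

/* index of the step with the best smoothed result */
int best_plateau_step(const double* results, int n)
{
    int best = 0;
    double bestScore = smooth(results, n, 0);
    for (int i = 1; i < n; i++) {
        double s = smooth(results, n, i);
        if (s > bestScore) { bestScore = s; best = i; }
    }
    return best;
}
```

With the results {0, 10, 0, 5, 6, 5, 0}, the isolated spike at step 1 loses against the plateau around step 4.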
BRUTEFORCE | Brute force parameter optimization and parameter
spreadsheet export (Zorro S required). Evaluates all
parameter combinations, exports them to a *par.csv
file in the Log folder, and selects the most profitable combination
that is not a single peak. Can take a long time when many parameters are
optimized or when parameter ranges have many steps. Useful when parameters affect each other in complex ways,
or when the results of all parameter combinations must be evaluated. Brute force optimization
tends to overfit the strategy, so out-of-sample
testing or walk-forward optimization is mandatory.
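The combinatorial cost is the reason brute force gets slow: the number of runs is the product of the step counts. A minimal C sketch with an assumed toy objective (Zorro's real version runs a backtest per combination and writes the *par.csv file):

```c
/* Brute-force sketch over two parameter ranges: every combination is
   evaluated, so n1 steps x n2 steps = n1*n2 strategy runs. */
#include <math.h>

typedef double (*objective_fn)(double, double);

/* example objective: a smooth hill peaking at a = 4, b = 6 */
double hill(double a, double b)
{
    return -((a-4)*(a-4) + (b-6)*(b-6));
}

/* evaluate all n1*n2 combinations; store the best pair, return the count */
int brute_force(objective_fn f,
                double a0, double aStep, int n1,
                double b0, double bStep, int n2,
                double* bestA, double* bestB)
{
    double best = -INFINITY;
    int evals = 0;
    for (int i = 0; i < n1; i++)
        for (int j = 0; j < n2; j++) {
            double a = a0 + i*aStep, b = b0 + j*bStep;
            double r = f(a, b);
            evals++;
            if (r > best) { best = r; *bestA = a; *bestB = b; }
        }
    return evals;
}
```

Two ranges of 11 steps each already cost 121 evaluations; a third such parameter would multiply that by 11 again.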
GENETIC | Genetic parameter optimization (Zorro S required). A population of parameter combinations is evolved toward the best
solution in an iterative process. In each iteration, the best combinations
are stochastically selected, and their parameters are then pair-wise
recombined and randomly mutated to form a new generation. This algorithm is
useful when a large number of parameters - 5 or more per component - must be optimized, or
when parameters affect each other in complex ways. It will likely overfit the strategy, so out-of-sample or walk-forward testing is mandatory.
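The selection/recombination/mutation loop can be illustrated with a self-contained C sketch (an assumed simplification with a toy fitness function, not Zorro's implementation). The best individual is carried over unchanged, so the best fitness can only improve over the generations:

```c
/* Minimal genetic-optimization sketch: sort by fitness, rebuild the worse
   half from genes of the better half, mutate ~5% of genes. Index 0 is the
   elite individual and is never touched. */
#define POP   20   /* population size */
#define GENES  2   /* parameters per individual */

static unsigned lcg;                    /* deterministic pseudo-random source */
static double frand(void)
{
    lcg = lcg * 1664525u + 1013904223u;
    return (lcg >> 8) / 16777216.0;     /* uniform in [0,1) */
}

/* toy fitness, higher is better; peak at (3,-2) */
static double fitness(const double* g)
{
    return -((g[0]-3)*(g[0]-3) + (g[1]+2)*(g[1]+2));
}

/* evolve for nGen generations, return the best fitness found */
double genetic_optimize(int nGen)
{
    lcg = 12345u;                       /* fixed seed for reproducibility */
    double pop[POP][GENES];
    for (int i = 0; i < POP; i++)
        for (int k = 0; k < GENES; k++)
            pop[i][k] = -10.0 + 20.0*frand();

    for (int gen = 0; gen < nGen; gen++) {
        /* selection: sort descending by fitness (insertion sort) */
        for (int i = 1; i < POP; i++)
            for (int j = i; j > 0 && fitness(pop[j]) > fitness(pop[j-1]); j--)
                for (int k = 0; k < GENES; k++) {
                    double t = pop[j][k];
                    pop[j][k] = pop[j-1][k];
                    pop[j-1][k] = t;
                }
        /* recombination: rebuild the worse half from genes of the best half */
        for (int i = POP/2; i < POP; i++)
            for (int k = 0; k < GENES; k++)
                pop[i][k] = pop[(int)(frand()*POP/2)][k];
        /* mutation: ~5% of genes get a random offset; index 0 stays elite */
        for (int i = 1; i < POP; i++)
            for (int k = 0; k < GENES; k++)
                if (frand() < 0.05)
                    pop[i][k] += 2.0*frand() - 1.0;
    }
    return fitness(pop[0]);             /* elite = best found so far */
}
```

Because of the elitism, running more generations can never return a worse result than running fewer.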
TRADES | Use the trade sizes given by Lots, Margin,
etc. Large trades then get more weight in the optimization.
Set this flag in special cases when the trade volume matters, e.g. for
optimizing the money management or for portfolio systems that
calculate their capital distribution by script. Otherwise trade sizes are ignored in the training process,
so that all trades get equal weight.
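The effect of the flag can be pictured with a toy scoring function (the function name and the weighting scheme here are illustrative assumptions, not Zorro's internal objective): with volume weighting, a large winning trade dominates a small losing one; with equal weights, they cancel.

```c
/* Toy training score: average trade return, either equally weighted or
   weighted by trade volume (lots). */
double training_score(const double* ret, const double* lots, int n, int useVolume)
{
    double sum = 0, wsum = 0;
    for (int i = 0; i < n; i++) {
        double w = useVolume ? lots[i] : 1.0;  /* TRADES flag on/off */
        sum  += w * ret[i];
        wsum += w;
    }
    return wsum > 0 ? sum / wsum : 0;
}
```

A 3-lot winner of +1 and a 1-lot loser of -1 score 0 with equal weights, but +0.5 with volume weighting.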
PHANTOM | Exclude phantom trades from the training process. Otherwise phantom trades are
treated like normal trades.
PEAK | Optimize toward the highest single peak in the
parameter space, rather than toward hills or plateaus. This can generate
unstable strategies and is for special purposes only, for instance when
optimizing not a parameter range, but a set of different algorithms.
FACTORS | Generate individual OptimalF
factor files for all WFO cycles, instead of a single file for the whole
simulation. This produces lower-quality factors due to fewer trades per cycle, but
prevents backtest bias.
NumParameters: Number of parameters to optimize for the current
asset/algo component selected in a
loop (int, read-only,
valid after the first INITRUN).
ParCycle: Current parameter or current generation; runs from 1 to NumParameters
in Ascent mode, or from 1 to Generations in
Genetic mode (int, read-only, valid after the first INITRUN).
StepCycle: Number of the optimize step, starting with 1 (int,
read-only, valid after the first INITRUN).
The number of step cycles depends on the number of steps in a parameter
range, and on the population in a genetic optimization. Counts up after any
step until the required number is reached or StepNext is set to 0.
StepNext: Set this to 0 for aborting the optimization early (int).
NumTrainCycles: Number of training cycles (int, default = 1)
for special purposes, such as training a combination of interdependent rules and parameters
in a given order (see Training). In any cycle, set either RULES, or
PARAMETERS, or both, depending on the training method.
Not to be confused with WFO cycles.
TrainCycle: The number of the current training cycle, from 1 to NumTrainCycles,
or 0 in [Test] or [Trade] mode (int, read-only).
The training mode of the current cycle can be determined with the
RULCYCLE and FACCYCLE flags.
LogTrainRun: Set this to an identifier number for logging a particular training run. The
identifier is a 5-digit number in the format WFSPO, where
W = WFO cycle, F = first loop iteration,
S = second loop iteration, P = parameter
number, and O = optimize step. At 11111 the
very first training run is logged.
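The identifier is a plain decimal composition of the five digits; a small helper (hypothetical, just matching the digit layout described above) makes that explicit:

```c
/* Builds the 5-digit WFSPO identifier: one digit each for WFO cycle,
   first loop iteration, second loop iteration, parameter number, and
   optimize step. The very first run of everything gives 11111. */
int train_run_id(int wfoCycle, int loop1, int loop2, int par, int step)
{
    return wfoCycle * 10000
         + loop1    * 1000
         + loop2    * 100
         + par      * 10
         + step;
}
```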
Population: Maximum population size for the genetic algorithm (int, default = 50). Any parameter
combination is an individual of the population. The population
size is automatically reduced when the algorithm converges and only the fittest
individuals and the mutants remain.
Generations: Maximum number of generations for the genetic algorithm (int, default = 50).
Evolution terminates when this number is reached, or when the overall fitness does not increase for 10 generations.
MutationRate: Average percentage of mutants in any generation (int, default =
5%). More mutants can find more and better parameter combinations, but let the
algorithm converge more slowly.
CrossoverRate: Average percentage of parameter recombinations in any generation
(int, default = 80%).
BestResult: Highest objective return value so far (var,
starting with 0).
Parameters: Pointer to a list of PARAMETER structs for the current asset/algo
component. The Min, Max, and Step elements of the list are set up
after the first INITRUN in [Train]
mode. The PARAMETER struct is defined in trading.h.
- TrainMode must be set up before the first INITRUN.
- Training methods for machine learning or rule generation are set up with the advise function.
- Alternative optimization algorithms from external libraries, or
individual optimization targets, can be set up with the
parameters and objective functions.
- Parameter charts are only produced by Ascent optimization
when LOGFILE is set.
It is recommended to first run an Ascent training for
determining the parameter dependence of a strategy. Afterwards the final
optimization can be done with a different algorithm if required.
- Parameter spreadsheets are exported by
Brute Force optimization. They can be used for generating 2-d
parameter heatmaps or 3-d parameter surfaces with Excel or other programs.
- Percent steps (4th parameter of the optimize function)
are replaced by 10 equal steps for brute force and genetic optimization.
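The discretization can be sketched as a small helper (the exact step placement, here 10 equally spaced values including both endpoints, is an assumption for illustration):

```c
/* Replaces a percent-step parameter range by 10 equally spaced values
   from min to max, as used instead of percent steps by brute force and
   genetic optimization. */
void equal_steps(double min, double max, double* out /* [10] */)
{
    for (int i = 0; i < 10; i++)
        out[i] = min + i * (max - min) / 9.0;
}
```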
- In genetic optimization, parameter combinations that were already
evaluated in the previous generation are not evaluated again, and are skipped in the log.
This lets the algorithm run faster in later generations.
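The skipping can be pictured as a result cache keyed by the parameter combination (an assumed mechanism for illustration; the function names are hypothetical): repeated individuals cost no extra strategy run.

```c
/* Result cache for parameter combinations: an already-evaluated
   combination returns its stored result instead of triggering another
   (expensive) evaluation. */
#include <string.h>

#define CACHE_MAX 256
#define GENES 2

static double cacheKey[CACHE_MAX][GENES];
static double cacheVal[CACHE_MAX];
static int cacheLen = 0;
int cacheMisses = 0;                  /* counts real evaluations */

static double expensive_eval(const double* g)  /* stand-in for a backtest */
{
    cacheMisses++;
    return -(g[0]*g[0] + g[1]*g[1]);
}

double eval_cached(const double* g)
{
    for (int i = 0; i < cacheLen; i++)
        if (!memcmp(cacheKey[i], g, sizeof cacheKey[i]))
            return cacheVal[i];       /* seen before: skip re-evaluation */
    double r = expensive_eval(g);
    if (cacheLen < CACHE_MAX) {
        memcpy(cacheKey[cacheLen], g, sizeof cacheKey[cacheLen]);
        cacheVal[cacheLen++] = r;
    }
    return r;
}
```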
- Genetic optimization is also possible with the free Zorro version using
the Z Optimizer tool from the Download page.
- When parameters are trained several times by using
NumTrainCycles, each time the start
values are taken from the last optimization cycle
in Ascent mode. This
sometimes improves the result, but takes a longer training time
and increases the likelihood of overfitting. To prevent overfitting, use no more than 2
subsequent parameter training cycles.
optimize, advise, OptimalF,
objective, setf, resf