COPT Tuner

Introduction

The COPT Tuner is a tool that automatically tunes solver performance for all supported problem types.

  • For MIP problems, it supports tuning for solving time, relative gap, best objective value, and objective bound;

  • For non-MIP problems, only solving time is supported.

The workflow of the COPT tuning tool is as follows:

  1. First, perform a baseline (benchmark) solve; users may customize the parameters used for the baseline;

  2. Next, generate trial parameter sets one by one and, through tuning solves, find parameter combinations that improve solution performance.

Provided capabilities

The COPT tuning tool provides the following capabilities:

Tuning method

Controlled by the parameter TuneMethod. Two strategies are available: greedy search and broader (aggressive) search. The default setting is automatic selection.

  • Greedy search strategy: tries to find better parameter settings within a limited number of trials;

  • Broader search strategy: tries more parameter combinations and has a much larger search space, so it can potentially find even better parameter settings, at the expense of more tuning time.

The possible values of TuneMethod are as follows:

  • -1: Automatic selection

  • 0: Greedy search strategy

  • 1: Broader search strategy

Tuning mode

Controlled by the parameter TuneMode, options are: solving time, relative gap, objective value, and objective bound. The default setting is automatic selection.

  • 0: Solving time

  • 1: Relative gap

  • 2: Objective value

  • 3: Objective bound

Note: For MIP problems, by default, if the baseline run is not solved to optimality within the specified time limit, the tuner automatically switches the tuning mode to relative gap.

Tuning permutations

Controlled by the parameter TunePermutes. The tuner allows users to run multiple permutations of each trial parameter set to evaluate performance variability. The default setting is automatic selection.

Tuning measure

Controlled by the parameter TuneMeasure, options are: average or maximum. When multiple permutations are run for each trial, the tuner aggregates the tuning value using this measure. The default setting is automatic selection.

  • 0: Calculate the average

  • 1: Calculate the maximum value

Tuning targets

Controlled by the parameters TuneTargetTime and TuneTargetRelGap. The tuner lets users specify a target solving time or relative gap for tuning; once the tuner finds parameters that satisfy the specified target, it stops tuning. The default target solving time is 0.01 seconds, and the default target relative gap is 1e-4.

Tuning output

Controlled by the parameter TuneOutputLevel, options are: no output, a summary of improved trials only, a summary of each trial, or a detailed log of each trial. The default setting is to show a summary of each trial.

  • 0: Do not output tuning log

  • 1: Output only a summary of the improved parameters

  • 2: Output a summary of each tuning attempt

  • 3: Output a detailed log of each tuning attempt

Tuning time limit

Controlled by the parameter TuneTimeLimit, which sets the overall time limit for the improvement runs of tuning. The default setting is automatic selection.
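As a hedged illustration of the tuner controls described above, the sketch below sets a few of them through the coptpy Python interface before invoking the tuner. It assumes coptpy is installed and licensed, that Model.setParam accepts parameter names as strings, and that a model file "foo.mps" exists; it is a sketch under those assumptions, not a definitive recipe.

```python
# Sketch: configure tuner controls, then tune (assumes coptpy is
# installed and licensed, and that setParam accepts string names).
from coptpy import Envr

env = Envr()
m = env.createModel()
m.read("foo.mps")                  # model file assumed to exist

m.setParam("TuneMethod", 1)        # 1: broader search strategy
m.setParam("TuneMode", 0)          # 0: tune for solving time
m.setParam("TuneTimeLimit", 600)   # overall tuning time limit (seconds)
m.setParam("TuneOutputLevel", 2)   # summary of each tuning attempt

m.tune()                           # run baseline solve + tuning trials
```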

User defined parts

  • User defined parameters

    The tool enables users to set parameters for the baseline run; these are also used as fixed parameters for each trial run, and the tuner does not tune any parameter in the fixed set.

  • User defined MIP start

    The COPT tuner enables users to set a MIP start for the baseline run, which is also used for each trial run.

  • User defined tuning file

    The COPT tuner enables users to read candidate parameter sets from a tuning file. If one is provided, the tuner tunes over the given parameter sets; otherwise, it generates tuning parameter sets automatically.

    Note: The COPT tuning file has a similar format to the COPT parameter file, except that the tuning file allows multiple values to be specified for a single parameter.
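    As a purely hypothetical illustration of such a file, the fragment below lists several candidate values after each parameter name. The parameter names shown and the multi-value syntax are assumptions for illustration only and should be checked against the COPT documentation:

    ```
    Presolve   0 1 2
    CutLevel   1 2 3
    HeurLevel  0 3
    ```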

Loading or writing tuning parameters

After tuning completes, the number of tuning results can be obtained through the attribute TuneResults; the result with a specified index can then be loaded into the model or written to a parameter file.

COPT can write the tuning result with a specified index to a parameter file (".par"). The parameters to specify are:

  • idx: index of the tuning result

  • filename: file name

The corresponding functions in different programming interfaces are as follows:

Table 26 Functions for writing parameter tuning results in different interfaces

  API      Function
  C        COPT_WriteTuneParam
  C++      Model::WriteTuneParam()
  C#       Model.WriteTuneParam()
  Java     Model.writeTuneParam()
  Python   Model.writeTuneParam()
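As a hedged sketch of this workflow in Python, assume coptpy is installed and licensed and that a model m has already been tuned. The loader name loadTuneParam is an assumption (only WriteTuneParam is named above); the argument order idx, filename follows the parameter list given earlier.

```python
# Sketch: inspect tuning results after m.tune() has finished
# (assumes coptpy; `loadTuneParam` is an assumed function name).
ntune = m.getAttr("TuneResults")     # number of tuning results found
if ntune > 0:
    m.loadTuneParam(0)               # load result with index 0 into the model
    m.writeTuneParam(0, "best.par")  # write the same result to a .par file
```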

Example

For example, to tune the model "foo.mps" for solving time with the COPT command-line tool, the command is:

copt_cmd -c "read foo.mps; tune; exit"

To use the tuner from an API such as Python, the code is:

from coptpy import Envr

env = Envr()
m = env.createModel()
m.read("foo.mps")
m.tune()