Benchmarks

Two types of benchmarks have been performed on a large set of scheduling problems:

  • Extensive benchmarks against IBM CP Optimizer.
  • Benchmarks against the state-of-the-art.

Benchmarks against IBM CP Optimizer

Full source code in three languages

Every benchmark below is published in the optalcp-benchmarks repository — implemented in Python, C#, and TypeScript. The repository doubles as a collection of real-world OptalCP examples; browse it to see what idiomatic OptalCP code looks like on non-trivial models. All benchmarks run on the Preview edition.

The table below contains links to the detailed results. The setup is the following:

  • Both solvers use four workers (threads).
  • CP Optimizer uses its default settings.
  • OptalCP uses half of the workers for LNS (with default propagation settings) and the other half for Failure-Directed Search (with the maximum propagation levels).
  • The Openshop benchmark is an exception: it is too easy, so both solvers are limited to two workers.
| Benchmark | Results | Modeling features |
| --- | --- | --- |
| Jobshop | Classical instances: Results; ANC: Easy, 100×100, 100×1000, 1000×100, 1000×1000 | `endBeforeStart`, `noOverlap` |
| Jobshop with Operators | Number of operators: 4, 5, 6, 7, 8, 9, 10 | `endBeforeStart`, `noOverlap`, `pulse`, `le` |
| Flexible Jobshop | Results | `alternative`, `endBeforeStart`, `noOverlap` |
| Jobshop TT (Jobshop with sequence-dependent Transition Times) | Results | `endBeforeStart`, `noOverlap` with transition times |
| Openshop | Results | `noOverlap` |
| RCPSP (Resource-Constrained Project Scheduling Problem) | Results | `endBeforeStart`, `pulse`, `le` |
| MMRCPSP (Multi-Mode Resource-Constrained Project Scheduling Problem) | PSPLIB, MMLIB50, MMLIB100 | `alternative`, `pulse`, `le`; non-renewable resources (`presence`, `times`, `sum`, `le`) |
| RCPSP-CPR (RCPSP with Consumption and Production of Resources) | Results | `endBeforeStart`, `pulse`, `le`; reservoir (`stepAtStart`, `stepAtEnd`, `neg`, `sum`, `ge`, `le`) |
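The jobshop features in the table boil down to simple temporal conditions. As a library-free illustration (the helper names below are our own, not OptalCP API), here is what `endBeforeStart` and `noOverlap` require of a candidate jobshop schedule:

```python
# Sketch of the semantics behind two core jobshop constraints.
# An operation is a dict {"start": int, "dur": int}; names are illustrative.

def end_before_start(pred, succ):
    """Precedence: pred must end no later than succ starts."""
    return pred["start"] + pred["dur"] <= succ["start"]

def no_overlap(ops):
    """Disjunctive machine: no two operations on it may overlap in time."""
    ops = sorted(ops, key=lambda o: o["start"])
    return all(a["start"] + a["dur"] <= b["start"]
               for a, b in zip(ops, ops[1:]))

def is_feasible(jobs, machines):
    """jobs: ordered operation lists; machines: machine -> its operations."""
    precedence_ok = all(end_before_start(a, b)
                        for ops in jobs for a, b in zip(ops, ops[1:]))
    disjunctive_ok = all(no_overlap(ops) for ops in machines.values())
    return precedence_ok and disjunctive_ok

# Tiny 2-job, 2-machine instance with a feasible schedule of makespan 5:
op = lambda start, dur: {"start": start, "dur": dur}
job0 = [op(0, 3), op(3, 2)]  # runs on m0, then m1
job1 = [op(0, 3), op(3, 2)]  # runs on m1, then m0
machines = {"m0": [job0[0], job1[1]], "m1": [job1[0], job0[1]]}
print(is_feasible([job0, job1], machines))  # True
```

A CP solver searches over the `start` values subject to exactly these conditions; the checker above only verifies a given assignment.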

Benchmarks against the state-of-the-art

Benchmarks against the state of the art are available on a separate GitHub site.

In the benchmarks, you will find:

  • a mathematical description of the problem and its main variants
  • the instance files (usually > 1000), classified as easy, medium, hard, or open
  • a description of the most common file formats
  • the best-known solutions from published results or engines (Cplex, CP Optimizer, OR-Tools, OptalCP)
  • a list of relevant publications
  • subsets of problems of particular interest (large, or small but still open)

The most advanced state-of-the-art benchmarks, as of today, are:

The very first version of the benchmark was built on top of the outstanding work of Naderi, Ruiz, and Roshanaei in their paper Mixed-Integer Programming versus Constraint Programming for shop scheduling problems: New Results and Outlook, which compares CPO, Cplex, and Gurobi on 6623 instances across 17 benchmark sets with a timeout of 1 hour.

Because running multiple solvers on thousands of instances takes time, this benchmark suite is still under construction. We are working to add other types of problems (RCPSP and workforce scheduling).