Beyond the Tutorial
Congratulations! You've built a complete scheduling model for the Furniture Workshop, using interval variables, precedence constraints, no-overlap and cumulative resources, sequence variables with transition times, alternative constraints, and reservoir constraints. The tutorial covered the core modeling concepts you'll use in most scheduling applications.
Real-world problems often need additional capabilities: controlling how long the solver runs, handling problems that can't be solved to proven optimality, integrating with existing systems, and debugging complex models. This chapter provides a quick tour of what OptalCP offers beyond the modeling techniques we've covered.
Controlling the Solver
The tutorial examples solved instantly, but real problems are harder. They may need more time, more computing power, or specific stopping criteria.
The Parameters object controls solver behavior:
- Python
- TypeScript
import optalcp as cp
model = cp.Model()
# ... build your model ...
# Solve with custom parameters
result = model.solve({
    'timeLimit': 300,      # Stop after 5 minutes
    'nbWorkers': 8,        # Use 8 parallel workers
    'solutionLimit': 10,   # Stop after finding 10 solutions
    'logLevel': 2          # Standard logging verbosity
})
import * as CP from '@scheduleopt/optalcp';
const model = new CP.Model();
// ... build your model ...
// Solve with custom parameters
const result = await model.solve({
  timeLimit: 300,      // Stop after 5 minutes
  nbWorkers: 8,        // Use 8 parallel workers
  solutionLimit: 10,   // Stop after finding 10 solutions
  logLevel: 2          // Standard logging verbosity
});
Key parameters include:
| Parameter | What it does |
|---|---|
| timeLimit | Maximum seconds to run |
| nbWorkers | Number of parallel search workers |
| solutionLimit | Stop after finding N solutions |
| logLevel | Output verbosity (0 = silent, higher = more detail) |
Solver parameters use camelCase in both Python and TypeScript. This is different from Python function parameters, which use snake_case.
Building Command-Line Tools
When you need to experiment with different settings or integrate scheduling into a larger workflow, parsing parameters from the command line is helpful. OptalCP provides functions that handle this automatically:
- Python
- TypeScript
import optalcp as cp
# Parse solver parameters from command line
params = cp.parse_parameters()
model = cp.Model()
# ... build your model ...
result = model.solve(params)
import * as CP from '@scheduleopt/optalcp';
// Parse solver parameters from command line
const params = CP.parseParameters();
const model = new CP.Model();
// ... build your model ...
const result = await model.solve(params);
Now you can run your program with options like:
python my_scheduler.py --timeLimit 60 --nbWorkers 4
If your program has its own command-line arguments, use parse_known_parameters / parseKnownParameters instead. These functions return the solver parameters plus any unrecognized arguments, which you can then process yourself:
- Python
- TypeScript
import optalcp as cp
# Parse known solver args, collect the rest
params, other_args = cp.parse_known_parameters()
# other_args might contain ['input.txt', '--verbose']
import * as CP from '@scheduleopt/optalcp';
// Parse known solver args, collect the rest
const [params, otherArgs] = CP.parseKnownParameters();
// otherArgs might contain ['input.txt', '--verbose']
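The leftover arguments can then go through your own argument parser. The sketch below uses Python's standard argparse module; the list is a hypothetical stand-in for what parse_known_parameters might return:

```python
import argparse

# Suppose cp.parse_known_parameters() already consumed the solver flags
# and returned these leftover arguments (hypothetical values):
other_args = ['input.txt', '--verbose']

# Handle the remaining arguments with the standard library as usual
parser = argparse.ArgumentParser()
parser.add_argument('input_file')
parser.add_argument('--verbose', action='store_true')
args = parser.parse_args(other_args)

print(args.input_file)  # input.txt
print(args.verbose)     # True
```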
Warm Starts
If you have a known schedule—from yesterday's run, a heuristic, or a manual assignment—you can provide it as a starting point. This helps the solver skip over poor solutions and focus on finding improvements:
- Python
- TypeScript
import optalcp as cp
model = cp.Model()
task1 = model.interval_var(length=30, name="Task1")
task2 = model.interval_var(length=20, name="Task2")
model.minimize(task2.end())
# ... more model setup ...
# Create a warm start solution (must be complete)
warm_start = cp.Solution()
warm_start.set_value(task1, start=0, end=30)
warm_start.set_value(task2, start=30, end=50)
warm_start.set_objective(50) # Must match the objective
# Solve with the warm start
result = model.solve(warm_start=warm_start)
import * as CP from '@scheduleopt/optalcp';
const model = new CP.Model();
const task1 = model.intervalVar({ length: 30, name: "Task1" });
const task2 = model.intervalVar({ length: 20, name: "Task2" });
model.minimize(task2.end());
// ... more model setup ...
// Create a warm start solution (must be complete)
const warmStart = new CP.Solution();
warmStart.setValue(task1, 0, 30);
warmStart.setValue(task2, 30, 50);
warmStart.setObjective(50); // Must match the objective
// Solve with the warm start
const result = await model.solve({}, warmStart);
The warm start must be a complete, valid solution—all variables and the objective value must be set. It doesn't need to be optimal, but it must be feasible.
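A warm start often comes from a trivial heuristic. For example, scheduling all tasks back to back yields a feasible (if poor) schedule whose start/end pairs you can pass to set_value as shown above. The helper below is plain Python, independent of OptalCP:

```python
def sequential_schedule(lengths):
    """Schedule tasks back to back: each starts when the previous ends."""
    schedule = []
    t = 0
    for length in lengths:
        schedule.append((t, t + length))
        t += length
    return schedule

# Task lengths as in the warm-start example above
print(sequential_schedule([30, 20]))  # [(0, 30), (30, 50)]
```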
You can also send solutions to the solver while it's running using send_solution() / sendSolution() on the Solver object. This is useful when you have another process (a heuristic, another solver) finding solutions in parallel.
See External Solutions for more details.
Solution Quality
Not every problem can be solved to proven optimality within a time limit. When the solver stops, check what you got:
- Python
- TypeScript
result = model.solve({'timeLimit': 60})
if result.solution:
    print(f"Best solution found: {result.objective_value}")
    print(f"Bound: {result.objective_bound}")
    if result.proof:
        print("This solution is optimal!")
    else:
        gap = result.objective_value - result.objective_bound
        print(f"Gap to proven optimum: {gap}")
const result = await model.solve({ timeLimit: 60 });
if (result.solution) {
  console.log(`Best solution found: ${result.objective}`);
  console.log(`Bound: ${result.objectiveBound}`);
  if (result.proof) {
    console.log("This solution is optimal!");
  } else {
    const gap = result.objective - result.objectiveBound;
    console.log(`Gap to proven optimum: ${gap}`);
  }
}
Understanding the gap between the best solution and the bound tells you how close to optimal your solution might be.
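The gap arithmetic itself is straightforward. With hypothetical values for a minimization objective and its proven lower bound:

```python
# Hypothetical values as the solver might report them after a time limit
# (minimization: the bound is a proven lower bound on the objective)
objective_value = 480
objective_bound = 456

absolute_gap = objective_value - objective_bound
relative_gap = absolute_gap / abs(objective_value)

print(f"Absolute gap: {absolute_gap}")      # Absolute gap: 24
print(f"Relative gap: {relative_gap:.1%}")  # Relative gap: 5.0%
```

A relative gap of 5% means the true optimum is at most 5% better than the solution you already have.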
You can control when the solver considers a solution "close enough" to optimal:
- Python
- TypeScript
result = model.solve({
    'absoluteGapTolerance': 10,   # Stop when within 10 of bound
    'relativeGapTolerance': 0.01  # Or within 1% of bound
})
const result = await model.solve({
  absoluteGapTolerance: 10,   // Stop when within 10 of bound
  relativeGapTolerance: 0.01  // Or within 1% of bound
});
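The tolerances translate into a stopping test along these lines. This is a sketch of the intended semantics with made-up numbers, not the solver's internal code:

```python
# Sketch of the stopping rule implied by the gap tolerances
# (hypothetical numbers; not the solver's internal code)
absolute_tol = 10
relative_tol = 0.01

objective = 458  # best solution so far (minimization)
bound = 450      # proven lower bound

gap = abs(objective - bound)
stop = gap <= absolute_tol or gap <= relative_tol * abs(objective)
print(stop)  # True: the gap of 8 is within the absolute tolerance of 10
```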
Async Solving
For long-running solves or applications that need to show progress, you can monitor the solver with callbacks. The solution callback receives the full solution, so you can display the current best schedule while the solver continues searching:
- Python
- TypeScript
import optalcp as cp
import asyncio
async def solve_with_progress():
    solver = cp.Solver()

    # Called whenever a new solution is found
    def on_solution(event):
        print(f"Found solution with objective {event.solution.get_objective()}")
    solver.on_solution = on_solution

    model = cp.Model()
    # ... build your model ...
    result = await solver.solve(model, {'timeLimit': 300})
    return result

result = asyncio.run(solve_with_progress())
import * as CP from '@scheduleopt/optalcp';
const solver = new CP.Solver();
// Called whenever a new solution is found
solver.onSolution = (event) => {
  console.log(`Found solution with objective ${event.solution.getObjective()}`);
};
const model = new CP.Model();
// ... build your model ...
const result = await solver.solve(model, { timeLimit: 300 });
The async API lets you receive updates on solutions, bounds, and log messages as they happen, and even inject new solutions during the search.
See Async Solving for the complete API.
Model Export & Debugging
When something isn't working as expected, export the model to inspect it:
- Python
- TypeScript
import optalcp as cp
model = cp.Model()
# ... build your model ...
# Export to human-readable text
print(model.toText())
# Export to JSON (for sharing or debugging)
json_str = model.toJSON()
import * as CP from '@scheduleopt/optalcp';
const model = new CP.Model();
// ... build your model ...
// Export to human-readable text
console.log(await model.toText());
// Export to JSON (for sharing or debugging)
const jsonStr = model.toJSON();
The text format shows all variables, constraints, and objectives in a readable form. The JSON format can be saved to a file and loaded again later, which is useful for reproducing issues or sharing problems.
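Persisting the JSON export is ordinary file I/O. The string below is a placeholder for the real output of the export call:

```python
import tempfile
from pathlib import Path

# Placeholder standing in for the string returned by the JSON export
json_str = '{"model": "..."}'

# Write the export to disk so the issue can be reproduced later
path = Path(tempfile.mkdtemp()) / "model.json"
path.write_text(json_str)

# Anyone with the file can read the model back for debugging
restored = path.read_text()
print(restored == json_str)  # True
```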
See Model Export for more export options.
What We Learned
In this chapter, we explored capabilities beyond the core modeling techniques:
- Parameters control time limits, parallelism, and stopping criteria
- Command-line parsing makes it easy to build scheduling tools
- Warm starts provide initial solutions to speed up the search
- Solution quality metrics help you understand how close to optimal you are
- Async solving enables progress monitoring and interactive applications
- Model export helps with debugging and reproducibility
See Also
- Solving Basics — The solve() function in detail
- External Solutions — Warm starts
- Async Solving — Callbacks and progress monitoring
- Model Export — Export formats