Specialized Problems
The `specialized_problems.py` file provides domain-specific problem classes that extend `BaseProblem` with specialized functionality for different types of optimization problems. These classes offer enhanced interfaces and domain-specific methods.
File Purpose
The specialized problems module provides:
- Domain-Specific Interfaces: Tailored interfaces for different problem types
- Enhanced Functionality: Additional methods specific to problem domains
- Type Safety: Strong typing for domain-specific operations
- Convenience Methods: Simplified APIs for common operations
- Validation: Domain-specific validation and constraints
Problem Specializations
ContinuousProblem
For optimization problems with continuous variables:
```python
class ContinuousProblem(BaseProblem):
    """
    Base class for continuous optimization problems.
    Provides specialized methods for continuous variable handling.
    """

    def __init__(self, metadata: ProblemMetadata,
                 variable_bounds: Dict[str, Tuple[float, float]]):
        """
        Initialize continuous problem.

        Args:
            metadata: Problem metadata
            variable_bounds: Bounds for each variable {name: (min, max)}
        """
        super().__init__(metadata)
        self.variable_bounds = variable_bounds
        self.dimension = len(variable_bounds)
```
Key Methods:
get_bounds
```python
def get_bounds(self) -> List[Tuple[float, float]]:
    """
    Get variable bounds as list of (min, max) tuples.

    Returns:
        List of bounds for each variable
    """
```
project_to_bounds
```python
def project_to_bounds(self, solution: List[float]) -> List[float]:
    """
    Project solution to feasible bounds.

    Args:
        solution: Solution vector to project

    Returns:
        Projected solution within bounds
    """
```
compute_gradient
```python
def compute_gradient(self, solution: List[float],
                     h: float = 1e-8) -> List[float]:
    """
    Compute numerical gradient at solution point.

    Args:
        solution: Point to compute gradient at
        h: Step size for finite differences

    Returns:
        Gradient vector
    """
```
compute_hessian
```python
def compute_hessian(self, solution: List[float],
                    h: float = 1e-6) -> List[List[float]]:
    """
    Compute numerical Hessian matrix.

    Args:
        solution: Point to compute Hessian at
        h: Step size for finite differences

    Returns:
        Hessian matrix as 2D list
    """
```
DiscreteProblem
For optimization problems with discrete variables:
```python
class DiscreteProblem(BaseProblem):
    """
    Base class for discrete optimization problems.
    Handles integer and categorical variables.
    """

    def __init__(self, metadata: ProblemMetadata,
                 variable_domains: Dict[str, Union[range, List[Any]]]):
        """
        Initialize discrete problem.

        Args:
            metadata: Problem metadata
            variable_domains: Domain for each variable
        """
        super().__init__(metadata)
        self.variable_domains = variable_domains
        self.dimension = len(variable_domains)
```
Key Methods:
get_domains
```python
def get_domains(self) -> Dict[str, Union[range, List[Any]]]:
    """Get variable domains."""
    return self.variable_domains.copy()
```
sample_from_domain
```python
def sample_from_domain(self, variable_name: str) -> Any:
    """
    Sample a value from variable domain.

    Args:
        variable_name: Name of variable to sample

    Returns:
        Random value from variable domain
    """
```
get_neighbors
```python
def get_neighbors(self, solution: List[Any],
                  neighborhood_size: int = 1) -> List[List[Any]]:
    """
    Get neighboring solutions in discrete space.

    Args:
        solution: Current solution
        neighborhood_size: Size of neighborhood

    Returns:
        List of neighboring solutions
    """
```
CombinatorialProblem
For combinatorial optimization problems:
```python
class CombinatorialProblem(BaseProblem):
    """
    Base class for combinatorial optimization problems.
    Handles permutations, selections, and combinatorial structures.
    """

    def __init__(self, metadata: ProblemMetadata, n_elements: int):
        """
        Initialize combinatorial problem.

        Args:
            metadata: Problem metadata
            n_elements: Number of elements in the problem
        """
        super().__init__(metadata)
        self.n_elements = n_elements
```
Key Methods:
generate_random_permutation
```python
def generate_random_permutation(self) -> List[int]:
    """
    Generate random permutation of elements.

    Returns:
        Random permutation as list of indices
    """
```
generate_random_selection
```python
def generate_random_selection(self, k: int) -> List[int]:
    """
    Generate random selection of k elements.

    Args:
        k: Number of elements to select

    Returns:
        Random selection as list of indices
    """
```
swap_elements
```python
def swap_elements(self, solution: List[int], i: int, j: int) -> List[int]:
    """
    Swap two elements in solution.

    Args:
        solution: Current solution
        i, j: Indices to swap

    Returns:
        Solution with elements swapped
    """
```
two_opt_move
```python
def two_opt_move(self, solution: List[int], i: int, j: int) -> List[int]:
    """
    Perform 2-opt move on solution.

    Args:
        solution: Current solution (typically a tour)
        i, j: Indices for 2-opt move

    Returns:
        Solution after 2-opt move
    """
```
ConstrainedProblem
For problems with explicit constraints:
```python
class ConstrainedProblem(BaseProblem):
    """
    Base class for constrained optimization problems.
    Handles equality and inequality constraints.
    """

    def __init__(self, metadata: ProblemMetadata):
        """Initialize constrained problem."""
        super().__init__(metadata)
        self.equality_constraints: List[Callable] = []
        self.inequality_constraints: List[Callable] = []
        self.constraint_names: List[str] = []
```
Key Methods:
add_equality_constraint
```python
def add_equality_constraint(self, constraint: Callable[[Any], float],
                            name: str = "") -> None:
    """
    Add equality constraint h(x) = 0.

    Args:
        constraint: Function that returns 0 when constraint is satisfied
        name: Optional name for the constraint
    """
```
add_inequality_constraint
```python
def add_inequality_constraint(self, constraint: Callable[[Any], float],
                              name: str = "") -> None:
    """
    Add inequality constraint g(x) <= 0.

    Args:
        constraint: Function that returns <= 0 when constraint is satisfied
        name: Optional name for the constraint
    """
```
check_feasibility
```python
def check_feasibility(self, solution: Any,
                      tolerance: float = 1e-6) -> Tuple[bool, List[str]]:
    """
    Check if solution satisfies all constraints.

    Args:
        solution: Solution to check
        tolerance: Tolerance for constraint violations

    Returns:
        (is_feasible, list_of_violations)
    """
```
compute_penalty
```python
def compute_penalty(self, solution: Any,
                    penalty_factor: float = 1000.0) -> float:
    """
    Compute penalty for constraint violations.

    Args:
        solution: Solution to evaluate
        penalty_factor: Multiplier for penalty terms

    Returns:
        Total penalty value
    """
```
MultiObjectiveProblem
For multi-objective optimization:
```python
class MultiObjectiveProblem(BaseProblem):
    """
    Base class for multi-objective optimization problems.
    Handles multiple conflicting objectives.
    """

    def __init__(self, metadata: ProblemMetadata, n_objectives: int):
        """
        Initialize multi-objective problem.

        Args:
            metadata: Problem metadata
            n_objectives: Number of objectives
        """
        super().__init__(metadata)
        self.n_objectives = n_objectives
        self.objective_names: List[str] = [f"obj_{i}" for i in range(n_objectives)]
        self.objective_types: List[ObjectiveType] = [ObjectiveType.MINIMIZE] * n_objectives
```
Key Methods:
evaluate_objectives
```python
@abstractmethod
def evaluate_objectives(self, solution: Any) -> List[float]:
    """
    Evaluate all objectives for a solution.

    Args:
        solution: Solution to evaluate

    Returns:
        List of objective values
    """
```
dominates
```python
def dominates(self, solution1: Any, solution2: Any) -> bool:
    """
    Check if solution1 dominates solution2.

    Args:
        solution1, solution2: Solutions to compare

    Returns:
        True if solution1 dominates solution2
    """
```
compute_pareto_front
```python
def compute_pareto_front(self, solutions: List[Any]) -> List[Any]:
    """
    Compute Pareto front from set of solutions.

    Args:
        solutions: List of solutions

    Returns:
        List of non-dominated solutions
    """
```
compute_hypervolume
```python
def compute_hypervolume(self, solutions: List[Any],
                        reference_point: List[float]) -> float:
    """
    Compute hypervolume indicator.

    Args:
        solutions: List of solutions
        reference_point: Reference point for hypervolume

    Returns:
        Hypervolume value
    """
```
Usage Examples
Continuous Optimization
```python
import random
from typing import List

from qubots import ContinuousProblem, EvaluationResult, ProblemMetadata, ProblemType

class RosenbrockProblem(ContinuousProblem):
    """Classic Rosenbrock function optimization."""

    def __init__(self, dimension: int = 2):
        metadata = ProblemMetadata(
            name=f"Rosenbrock {dimension}D",
            description="Classic Rosenbrock function",
            problem_type=ProblemType.CONTINUOUS
        )
        # All variables bounded in [-5, 5]
        bounds = {f"x{i}": (-5.0, 5.0) for i in range(dimension)}
        super().__init__(metadata, bounds)

    def _get_default_metadata(self):
        return self.metadata

    def evaluate_solution(self, solution: List[float]) -> EvaluationResult:
        # Rosenbrock function: sum(100*(x[i+1] - x[i]^2)^2 + (1 - x[i])^2)
        value = 0.0
        for i in range(len(solution) - 1):
            value += 100 * (solution[i + 1] - solution[i] ** 2) ** 2 + (1 - solution[i]) ** 2
        return EvaluationResult(
            objective_value=value,
            is_feasible=True
        )

    def generate_random_solution(self) -> List[float]:
        bounds = self.get_bounds()
        return [random.uniform(low, high) for low, high in bounds]
```
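The class can then be exercised directly; a short hypothetical usage:
```python
# Hypothetical usage: sample a random point and evaluate it.
problem = RosenbrockProblem(dimension=5)
x = problem.generate_random_solution()
print(problem.evaluate_solution(x).objective_value)
```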
Combinatorial Optimization
```python
from typing import List, Tuple

from qubots import CombinatorialProblem, EvaluationResult, ProblemMetadata, ProblemType

class TravelingSalesmanProblem(CombinatorialProblem):
    """Traveling Salesman Problem implementation."""

    def __init__(self, cities: List[Tuple[float, float]]):
        metadata = ProblemMetadata(
            name=f"TSP {len(cities)} cities",
            description="Traveling Salesman Problem",
            problem_type=ProblemType.COMBINATORIAL
        )
        super().__init__(metadata, len(cities))
        self.cities = cities
        self.distance_matrix = self._compute_distance_matrix()

    def _compute_distance_matrix(self) -> List[List[float]]:
        n = len(self.cities)
        matrix = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j:
                    x1, y1 = self.cities[i]
                    x2, y2 = self.cities[j]
                    matrix[i][j] = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        return matrix

    def evaluate_solution(self, tour: List[int]) -> EvaluationResult:
        total_distance = 0.0
        for i in range(len(tour)):
            current_city = tour[i]
            next_city = tour[(i + 1) % len(tour)]  # wrap around to close the tour
            total_distance += self.distance_matrix[current_city][next_city]
        return EvaluationResult(
            objective_value=total_distance,
            is_feasible=len(set(tour)) == len(tour)  # All cities visited once
        )

    def generate_random_solution(self) -> List[int]:
        return self.generate_random_permutation()
```
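A short hypothetical usage with four cities on a unit square:
```python
# Hypothetical usage: the optimal tour length for a unit square is 4.0.
cities = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
problem = TravelingSalesmanProblem(cities)
tour = problem.generate_random_solution()
print(problem.evaluate_solution(tour).objective_value)
```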
Multi-Objective Optimization
```python
from typing import List

from qubots import EvaluationResult, MultiObjectiveProblem, ProblemMetadata, ProblemType

class ZDT1Problem(MultiObjectiveProblem):
    """ZDT1 multi-objective test problem."""

    def __init__(self, dimension: int = 30):
        metadata = ProblemMetadata(
            name=f"ZDT1 {dimension}D",
            description="ZDT1 multi-objective test function",
            problem_type=ProblemType.MULTI_OBJECTIVE
        )
        super().__init__(metadata, 2)  # 2 objectives
        self.dimension = dimension
        self.variable_bounds = [(0.0, 1.0)] * dimension

    def evaluate_objectives(self, solution: List[float]) -> List[float]:
        x = solution
        f1 = x[0]
        g = 1 + 9 * sum(x[1:]) / (len(x) - 1)
        f2 = g * (1 - (f1 / g) ** 0.5)
        return [f1, f2]

    def evaluate_solution(self, solution: List[float]) -> EvaluationResult:
        objectives = self.evaluate_objectives(solution)
        # For multi-objective problems, use a weighted sum or another scalarization
        weighted_sum = sum(objectives)  # Simple equal-weight approach
        return EvaluationResult(
            objective_value=weighted_sum,
            is_feasible=True,
            objectives=objectives
        )
```
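With `compute_pareto_front` inherited from `MultiObjectiveProblem`, a hypothetical usage looks like:
```python
import random

# Hypothetical usage: sample random points and keep the non-dominated ones.
problem = ZDT1Problem(dimension=5)
samples = [[random.random() for _ in range(5)] for _ in range(200)]
front = problem.compute_pareto_front(samples)
print(f"{len(front)} non-dominated solutions")
```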
Integration Points
With AutoProblem
Specialized problems work seamlessly with AutoProblem:
```python
from qubots import AutoProblem, CombinatorialProblem

# Repository can contain a specialized problem
class MyTSPProblem(CombinatorialProblem):
    # Implementation here
    pass

# Loaded via AutoProblem, retains specialized functionality
problem = AutoProblem.from_repo("username/tsp-problem")
assert isinstance(problem, CombinatorialProblem)

# Specialized methods available
tour = problem.generate_random_permutation()
improved_tour = problem.two_opt_move(tour, 1, 3)
```
With Specialized Optimizers
Specialized problems pair well with specialized optimizers:
```python
from qubots import AutoOptimizer

# Continuous problem with a gradient-based optimizer
continuous_problem = RosenbrockProblem(dimension=10)
gradient_optimizer = AutoOptimizer.from_repo("examples/gradient_descent")

# Combinatorial problem with an evolutionary optimizer
tsp_problem = TravelingSalesmanProblem(cities)
genetic_optimizer = AutoOptimizer.from_repo("examples/genetic_tsp")
```
With Benchmarking
Specialized problems provide enhanced benchmarking capabilities:
```python
from qubots import BenchmarkSuite

suite = BenchmarkSuite("Continuous Optimization Benchmark")

# Add multiple continuous problems
for dim in [2, 5, 10, 20]:
    problem = RosenbrockProblem(dimension=dim)
    suite.add_problem(problem, f"Rosenbrock-{dim}D")

# Benchmark provides domain-specific analysis
results = suite.run_full_benchmark()
```
Developer Notes
Design Principles
- Inheritance: All specialized classes inherit from BaseProblem
- Composition: Domain-specific functionality added through composition
- Type Safety: Strong typing for domain-specific operations
- Extensibility: Easy to add new specialized problem types
- Lazy Computation: Expensive computations (like distance matrices) computed once
- Caching: Results cached when appropriate
- Memory Efficiency: Large data structures handled efficiently
Extension Guidelines
To create new specialized problem types (a minimal sketch follows the list):
- Inherit from appropriate base class (BaseProblem or existing specialization)
- Add domain-specific methods and properties
- Override abstract methods with domain-specific implementations
- Provide comprehensive documentation and examples
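As a minimal sketch of these guidelines (the `ScheduleProblem` class and its fields are hypothetical):
```python
from typing import List

from qubots import CombinatorialProblem, ProblemMetadata

# Hypothetical sketch: a new specialization built on CombinatorialProblem.
class ScheduleProblem(CombinatorialProblem):
    """Jobs on a single machine, represented as permutations of job indices."""

    def __init__(self, metadata: ProblemMetadata, job_durations: List[float]):
        super().__init__(metadata, len(job_durations))
        self.job_durations = job_durations

    def total_duration(self, order: List[int]) -> float:
        # Domain-specific convenience method.
        return sum(self.job_durations[j] for j in order)
```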
Next Steps