Specialized Optimizers (specialized_optimizers.py)
Domain-specific optimizer classes for different algorithm families
The `specialized_optimizers.py` file provides domain-specific optimizer classes that extend `BaseOptimizer` with specialized functionality for different families of optimization algorithms. These classes offer enhanced interfaces and algorithm-specific methods.
File Purpose
The specialized optimizers module provides:
- Algorithm-Specific Interfaces: Tailored interfaces for different optimizer families
- Enhanced Functionality: Additional methods specific to algorithm types
- Type Safety: Strong typing for algorithm-specific operations
- Convenience Methods: Simplified APIs for common optimization patterns
- Parameter Management: Specialized parameter handling for different algorithms
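As a quick orientation, the snippet below shows how these classes can be imported; it assumes they are re-exported from the top-level `qubots` package, as the usage examples later on this page do.

```python
# Orientation sketch; assumes the specialized classes are re-exported
# from the top-level qubots package, as in the examples below.
from qubots import (
    PopulationBasedOptimizer,  # genetic algorithms, evolution strategies
    LocalSearchOptimizer,      # hill climbing, simulated annealing, tabu search
    GradientBasedOptimizer,    # gradient descent, (quasi-)Newton methods
    SwarmOptimizer,            # particle swarm, ant colony
    HybridOptimizer,           # multi-stage combinations
)
```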
Optimizer Specializations
PopulationBasedOptimizer
For evolutionary algorithms and population-based methods:
```python
class PopulationBasedOptimizer(BaseOptimizer):
    """
    Base class for population-based optimization algorithms.

    Includes genetic algorithms, evolution strategies, particle swarm optimization.
    """

    def __init__(self, metadata: OptimizerMetadata,
                 population_size: int = 50, **kwargs):
        """
        Initialize population-based optimizer.

        Args:
            metadata: Optimizer metadata
            population_size: Size of the population
            **kwargs: Additional algorithm parameters
        """
        super().__init__(metadata, population_size=population_size, **kwargs)
        self.population_size = population_size
        self.current_population: List[Any] = []
        self.population_fitness: List[float] = []
        self.generation = 0
```
Key Methods:
initialize_population
```python
def initialize_population(self, problem: BaseProblem) -> List[Any]:
    """
    Initialize the population for optimization.

    Args:
        problem: Problem to optimize

    Returns:
        List of initial solutions
    """
```
evaluate_population
```python
def evaluate_population(self, problem: BaseProblem,
                        population: List[Any]) -> List[float]:
    """
    Evaluate fitness of entire population.

    Args:
        problem: Problem to evaluate on
        population: Population to evaluate

    Returns:
        List of fitness values
    """
```
selection
```python
def selection(self, population: List[Any],
              fitness: List[float],
              num_parents: int) -> List[Any]:
    """
    Select parents for reproduction.

    Args:
        population: Current population
        fitness: Fitness values
        num_parents: Number of parents to select

    Returns:
        Selected parent solutions
    """
```
crossover
```python
@abstractmethod
def crossover(self, parent1: Any, parent2: Any) -> Tuple[Any, Any]:
    """
    Perform crossover between two parents.

    Args:
        parent1, parent2: Parent solutions

    Returns:
        Two offspring solutions
    """
```
mutation
```python
@abstractmethod
def mutation(self, solution: Any, mutation_rate: float) -> Any:
    """
    Perform mutation on a solution.

    Args:
        solution: Solution to mutate
        mutation_rate: Probability of mutation

    Returns:
        Mutated solution
    """
```
LocalSearchOptimizer
For local search and neighborhood-based methods:
```python
class LocalSearchOptimizer(BaseOptimizer):
    """
    Base class for local search optimization algorithms.

    Includes hill climbing, simulated annealing, tabu search.
    """

    def __init__(self, metadata: OptimizerMetadata, **kwargs):
        """Initialize local search optimizer."""
        super().__init__(metadata, **kwargs)
        self.current_solution: Optional[Any] = None
        self.current_value: Optional[float] = None
        self.best_solution: Optional[Any] = None
        self.best_value: Optional[float] = None
```
Key Methods:
get_neighbors
```python
@abstractmethod
def get_neighbors(self, solution: Any, problem: BaseProblem) -> List[Any]:
    """
    Get neighboring solutions for local search.

    Args:
        solution: Current solution
        problem: Problem being optimized

    Returns:
        List of neighboring solutions
    """
```
acceptance_criterion
```python
def acceptance_criterion(self, current_value: float,
                         neighbor_value: float,
                         temperature: float = 1.0) -> bool:
    """
    Determine if a neighbor should be accepted.

    Args:
        current_value: Current solution value
        neighbor_value: Neighbor solution value
        temperature: Temperature parameter (for simulated annealing)

    Returns:
        True if neighbor should be accepted
    """
```
local_search_step
```python
def local_search_step(self, problem: BaseProblem) -> bool:
    """
    Perform one step of local search.

    Args:
        problem: Problem being optimized

    Returns:
        True if improvement was found
    """
```
GradientBasedOptimizer
For gradient-based optimization methods:
```python
class GradientBasedOptimizer(BaseOptimizer):
    """
    Base class for gradient-based optimization algorithms.

    Includes gradient descent, Newton methods, quasi-Newton methods.
    """

    def __init__(self, metadata: OptimizerMetadata,
                 learning_rate: float = 0.01, **kwargs):
        """
        Initialize gradient-based optimizer.

        Args:
            metadata: Optimizer metadata
            learning_rate: Learning rate for gradient updates
            **kwargs: Additional parameters
        """
        super().__init__(metadata, learning_rate=learning_rate, **kwargs)
        self.learning_rate = learning_rate
```
Key Methods:
compute_gradient
```python
def compute_gradient(self, problem: BaseProblem,
                     solution: List[float],
                     h: float = 1e-8) -> List[float]:
    """
    Compute gradient using finite differences.

    Args:
        problem: Problem to compute gradient for
        solution: Point to compute gradient at
        h: Step size for finite differences

    Returns:
        Gradient vector
    """
```
compute_hessian
```python
def compute_hessian(self, problem: BaseProblem,
                    solution: List[float],
                    h: float = 1e-6) -> List[List[float]]:
    """
    Compute Hessian matrix using finite differences.

    Args:
        problem: Problem to compute Hessian for
        solution: Point to compute Hessian at
        h: Step size for finite differences

    Returns:
        Hessian matrix
    """
```
line_search
```python
def line_search(self, problem: BaseProblem,
                solution: List[float],
                direction: List[float]) -> float:
    """
    Perform line search to find optimal step size.

    Args:
        problem: Problem being optimized
        solution: Current solution
        direction: Search direction

    Returns:
        Optimal step size
    """
```
gradient_step
```python
def gradient_step(self, problem: BaseProblem,
                  solution: List[float],
                  gradient: List[float]) -> List[float]:
    """
    Perform one gradient descent step.

    Args:
        problem: Problem being optimized
        solution: Current solution
        gradient: Gradient at current solution

    Returns:
        Updated solution
    """
```
SwarmOptimizer
For swarm intelligence algorithms:
```python
class SwarmOptimizer(BaseOptimizer):
    """
    Base class for swarm intelligence optimization algorithms.

    Includes particle swarm optimization, ant colony optimization.
    """

    def __init__(self, metadata: OptimizerMetadata,
                 swarm_size: int = 30, **kwargs):
        """
        Initialize swarm optimizer.

        Args:
            metadata: Optimizer metadata
            swarm_size: Number of particles/agents in swarm
            **kwargs: Additional parameters
        """
        super().__init__(metadata, swarm_size=swarm_size, **kwargs)
        self.swarm_size = swarm_size
        self.swarm: List[Dict[str, Any]] = []
```
Key Methods:
initialize_swarm
```python
def initialize_swarm(self, problem: BaseProblem) -> List[Dict[str, Any]]:
    """
    Initialize the swarm.

    Args:
        problem: Problem to optimize

    Returns:
        List of particle/agent dictionaries
    """
```
update_particle
```python
@abstractmethod
def update_particle(self, particle: Dict[str, Any],
                    problem: BaseProblem,
                    global_best: Any) -> Dict[str, Any]:
    """
    Update a single particle/agent.

    Args:
        particle: Particle to update
        problem: Problem being optimized
        global_best: Global best solution

    Returns:
        Updated particle
    """
```
update_swarm
```python
def update_swarm(self, problem: BaseProblem) -> None:
    """
    Update entire swarm for one iteration.

    Args:
        problem: Problem being optimized
    """
```
HybridOptimizer
For multi-stage and hybrid optimization approaches:
```python
class HybridOptimizer(BaseOptimizer):
    """
    Base class for hybrid optimization algorithms.

    Combines multiple optimization strategies.
    """

    def __init__(self, metadata: OptimizerMetadata,
                 stages: List[Dict[str, Any]], **kwargs):
        """
        Initialize hybrid optimizer.

        Args:
            metadata: Optimizer metadata
            stages: List of optimization stages
            **kwargs: Additional parameters
        """
        super().__init__(metadata, stages=stages, **kwargs)
        self.stages = stages
        self.current_stage = 0
```
Key Methods:
execute_stage
```python
def execute_stage(self, stage_config: Dict[str, Any],
                  problem: BaseProblem,
                  initial_solution: Any) -> OptimizationResult:
    """
    Execute a single optimization stage.

    Args:
        stage_config: Configuration for the stage
        problem: Problem being optimized
        initial_solution: Starting solution for stage

    Returns:
        Result from the stage
    """
```
transition_between_stages
```python
def transition_between_stages(self, from_result: OptimizationResult,
                              to_stage: Dict[str, Any]) -> Any:
    """
    Handle transition between optimization stages.

    Args:
        from_result: Result from previous stage
        to_stage: Configuration for next stage

    Returns:
        Initial solution for next stage
    """
```
Usage Examples
Genetic Algorithm Implementation
```python
from typing import Any, List, Optional, Tuple
import random

from qubots import (
    BaseProblem, OptimizationResult, OptimizerFamily,
    OptimizerMetadata, OptimizerType, PopulationBasedOptimizer,
)

class GeneticAlgorithm(PopulationBasedOptimizer):
    """Simple genetic algorithm implementation."""

    def __init__(self, population_size: int = 50,
                 crossover_rate: float = 0.8,
                 mutation_rate: float = 0.1, **kwargs):
        metadata = OptimizerMetadata(
            name="Genetic Algorithm",
            description="Simple genetic algorithm",
            optimizer_type=OptimizerType.METAHEURISTIC,
            optimizer_family=OptimizerFamily.EVOLUTIONARY,
            deterministic=False
        )
        super().__init__(metadata, population_size=population_size,
                         crossover_rate=crossover_rate,
                         mutation_rate=mutation_rate, **kwargs)

    def _get_default_metadata(self):
        return self.metadata

    def _optimize_implementation(self, problem: BaseProblem,
                                 initial_solution: Optional[Any] = None) -> OptimizationResult:
        # Initialize population
        population = self.initialize_population(problem)
        fitness = self.evaluate_population(problem, population)

        best_idx = min(range(len(fitness)), key=lambda i: fitness[i])
        best_solution = population[best_idx]
        best_value = fitness[best_idx]
        convergence_history = [best_value]

        # Evolution loop
        for generation in range(self._parameters.get('max_generations', 100)):
            # Selection
            parents = self.selection(population, fitness, self.population_size)

            # Create new population
            new_population = []
            for i in range(0, len(parents), 2):
                parent1 = parents[i]
                parent2 = parents[(i + 1) % len(parents)]

                # Crossover
                if random.random() < self._parameters.get('crossover_rate', 0.8):
                    child1, child2 = self.crossover(parent1, parent2)
                else:
                    child1, child2 = parent1, parent2

                # Mutation
                child1 = self.mutation(child1, self._parameters.get('mutation_rate', 0.1))
                child2 = self.mutation(child2, self._parameters.get('mutation_rate', 0.1))
                new_population.extend([child1, child2])

            # Evaluate new population
            population = new_population[:self.population_size]
            fitness = self.evaluate_population(problem, population)

            # Update best
            current_best_idx = min(range(len(fitness)), key=lambda i: fitness[i])
            if fitness[current_best_idx] < best_value:
                best_solution = population[current_best_idx]
                best_value = fitness[current_best_idx]
            convergence_history.append(best_value)

        return OptimizationResult(
            best_solution=best_solution,
            best_value=best_value,
            is_feasible=True,
            success=True,
            runtime_seconds=0.0,
            iterations=len(convergence_history) - 1,
            evaluations=len(convergence_history) * self.population_size,
            algorithm_name=self.metadata.name,
            convergence_history=convergence_history
        )

    def crossover(self, parent1: List[int], parent2: List[int]) -> Tuple[List[int], List[int]]:
        """Order crossover for permutation problems."""
        size = len(parent1)
        start, end = sorted(random.sample(range(size), 2))

        # Copy a slice from the first parent, then fill the remaining
        # positions in the order they appear in the second parent.
        child1 = [-1] * size
        child1[start:end] = parent1[start:end]
        pointer = end
        for city in parent2[end:] + parent2[:end]:
            if city not in child1:
                child1[pointer % size] = city
                pointer += 1

        # Same procedure with the parents' roles swapped
        child2 = [-1] * size
        child2[start:end] = parent2[start:end]
        pointer = end
        for city in parent1[end:] + parent1[:end]:
            if city not in child2:
                child2[pointer % size] = city
                pointer += 1

        return child1, child2

    def mutation(self, solution: List[int], mutation_rate: float) -> List[int]:
        """Swap mutation for permutation problems."""
        if random.random() < mutation_rate:
            solution = solution.copy()
            i, j = random.sample(range(len(solution)), 2)
            solution[i], solution[j] = solution[j], solution[i]
        return solution
```
Simulated Annealing Implementation
```python
import math
import random
from typing import Any, List, Optional

from qubots import (
    BaseProblem, LocalSearchOptimizer, OptimizationResult,
    OptimizerFamily, OptimizerMetadata, OptimizerType,
)

class SimulatedAnnealing(LocalSearchOptimizer):
    """Simulated annealing implementation."""

    def __init__(self, initial_temperature: float = 1000.0,
                 cooling_rate: float = 0.95, **kwargs):
        metadata = OptimizerMetadata(
            name="Simulated Annealing",
            description="Simulated annealing local search",
            optimizer_type=OptimizerType.METAHEURISTIC,
            optimizer_family=OptimizerFamily.LOCAL_SEARCH,
            deterministic=False
        )
        super().__init__(metadata, initial_temperature=initial_temperature,
                         cooling_rate=cooling_rate, **kwargs)

    def _optimize_implementation(self, problem: BaseProblem,
                                 initial_solution: Optional[Any] = None) -> OptimizationResult:
        # Initialize
        if initial_solution is not None:
            current_solution = initial_solution
        else:
            current_solution = problem.generate_random_solution()

        current_result = problem.evaluate_solution(current_solution)
        current_value = current_result.objective_value
        best_solution = current_solution
        best_value = current_value

        temperature = self._parameters.get('initial_temperature', 1000.0)
        cooling_rate = self._parameters.get('cooling_rate', 0.95)
        convergence_history = [best_value]
        evaluations = 1

        # Annealing loop
        while temperature > 1e-6:
            neighbors = self.get_neighbors(current_solution, problem)
            for neighbor in neighbors:
                neighbor_result = problem.evaluate_solution(neighbor)
                neighbor_value = neighbor_result.objective_value
                evaluations += 1

                # Acceptance criterion
                if self.acceptance_criterion(current_value, neighbor_value, temperature):
                    current_solution = neighbor
                    current_value = neighbor_value
                    if current_value < best_value:
                        best_solution = current_solution
                        best_value = current_value
                    break

            convergence_history.append(best_value)
            temperature *= cooling_rate

        return OptimizationResult(
            best_solution=best_solution,
            best_value=best_value,
            is_feasible=True,
            success=True,
            runtime_seconds=0.0,
            iterations=len(convergence_history) - 1,
            evaluations=evaluations,
            algorithm_name=self.metadata.name,
            convergence_history=convergence_history
        )

    def get_neighbors(self, solution: List[int], problem: BaseProblem) -> List[List[int]]:
        """Generate neighbors by swapping adjacent elements."""
        neighbors = []
        for i in range(len(solution) - 1):
            neighbor = solution.copy()
            neighbor[i], neighbor[i + 1] = neighbor[i + 1], neighbor[i]
            neighbors.append(neighbor)
        return neighbors

    def acceptance_criterion(self, current_value: float,
                             neighbor_value: float,
                             temperature: float) -> bool:
        """Simulated annealing acceptance criterion (Metropolis rule)."""
        if neighbor_value < current_value:
            return True
        probability = math.exp(-(neighbor_value - current_value) / temperature)
        return random.random() < probability
```
Integration Points
With AutoOptimizer
Specialized optimizers work seamlessly with AutoOptimizer:
```python
# A repository can contain a specialized optimizer
class MyGeneticAlgorithm(PopulationBasedOptimizer):
    # Implementation here
    pass

# Loaded via AutoOptimizer, it retains its specialized functionality
optimizer = AutoOptimizer.from_repo("username/genetic-algorithm")
assert isinstance(optimizer, PopulationBasedOptimizer)

# Specialized methods are available
population = optimizer.initialize_population(problem)
```
With Specialized Problems
Specialized optimizers pair well with specialized problems:
```python
from qubots import CombinatorialProblem, PopulationBasedOptimizer

# TSP problem with genetic algorithm
tsp_problem = AutoProblem.from_repo("examples/tsp")
genetic_optimizer = AutoOptimizer.from_repo("examples/genetic_tsp")

# Both retain their specialized interfaces
assert isinstance(tsp_problem, CombinatorialProblem)
assert isinstance(genetic_optimizer, PopulationBasedOptimizer)
```
With Benchmarking
Specialized optimizers provide enhanced benchmarking:
```python
from qubots import BenchmarkSuite

suite = BenchmarkSuite("Evolutionary Algorithms Comparison")

# Add multiple population-based optimizers
optimizers = [
    AutoOptimizer.from_repo("examples/genetic_algorithm"),
    AutoOptimizer.from_repo("examples/evolution_strategy"),
    AutoOptimizer.from_repo("examples/particle_swarm")
]

for opt in optimizers:
    assert isinstance(opt, PopulationBasedOptimizer)
    suite.add_optimizer(opt)

# Benchmarking can analyze population-specific metrics
results = suite.run_full_benchmark()
```
Developer Notes
Design Principles
- Inheritance Hierarchy: Clear inheritance from BaseOptimizer
- Algorithm Families: Group related algorithms together
- Common Interfaces: Shared methods for similar operations
- Extensibility: Easy to add new algorithm families
Performance Considerations
- Vectorization: Use vectorized operations where possible
- Memory Management: Efficient handling of populations and swarms
- Parallel Evaluation: Support for parallel fitness evaluation (see the sketch after this list)
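A minimal sketch of parallel fitness evaluation using the standard library; `max_workers` is an illustrative default, and the helper name is hypothetical. Threads help when evaluation releases the GIL (for example, native solver calls or I/O); CPU-bound pure-Python evaluations would need a process pool instead.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_population_parallel(problem, population, max_workers=4):
    # Fan fitness evaluations out across a thread pool and collect the
    # objective values in population order.
    def fitness(solution):
        return problem.evaluate_solution(solution).objective_value
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fitness, population))
```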
Extension Guidelines
To create a new specialized optimizer type (a minimal skeleton follows this list):
- Inherit from BaseOptimizer or existing specialization
- Add family-specific methods and properties
- Implement abstract methods with algorithm-specific logic
- Provide comprehensive documentation and examples
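As a sketch of what following these guidelines might look like, here is a hypothetical new family; the class and method names are illustrative, not part of the library.

```python
from abc import abstractmethod
from typing import Any, List

from qubots import BaseOptimizer, BaseProblem

class RepairBasedOptimizer(BaseOptimizer):
    """Hypothetical family: optimizers that repair infeasible solutions."""

    @abstractmethod
    def repair(self, solution: Any, problem: BaseProblem) -> Any:
        """Map an infeasible solution to a nearby feasible one."""

    def repair_population(self, population: List[Any],
                          problem: BaseProblem) -> List[Any]:
        # Convenience method shared by the whole family
        return [self.repair(s, problem) for s in population]
```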