Leaderboard (Beta)

The Qubots Leaderboard system provides a competitive platform for benchmarking optimization algorithms against standardized problems. Researchers and developers can submit their solutions, compare performance, and track progress in the optimization community.

The Leaderboard system is currently in beta. Features and APIs may change as we gather feedback from the community.

What is the Leaderboard?

The Leaderboard is a competitive benchmarking platform that enables:

  • Algorithm Comparison: Compare your optimization algorithms against others
  • Standardized Benchmarks: Test on well-defined, standardized problems
  • Performance Tracking: Monitor your algorithm’s performance over time
  • Community Recognition: Gain recognition for high-performing solutions
  • Research Collaboration: Connect with other researchers working on similar problems

Key Features

Standardized Problems

Access a curated collection of benchmark problems with well-defined metrics and evaluation criteria.

Automated Evaluation

Submit your algorithms for automated evaluation on standardized hardware and environments.

Real-time Rankings

View live rankings and performance comparisons across different algorithms and problem categories.

Historical Tracking

Track performance improvements and algorithm evolution over time.

Fair Competition

Standardized evaluation environments ensure fair and reproducible comparisons.

Getting Started

Prerequisites

To participate in the Leaderboard, you need:

  1. The Qubots framework: pip install qubots
  2. A Rastion account: register at rastion.com
  3. API authentication: generate an API token from your account settings

Authentication

Set up authentication for Leaderboard access:

import qubots.rastion as rastion

# Authenticate with your API token
rastion.authenticate("your_api_token_here")
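
To avoid hardcoding credentials in scripts, you can read the token from an environment variable instead. The snippet below is a minimal sketch; the variable name RASTION_API_TOKEN is only an example, not a name the framework requires.

import os
import qubots.rastion as rastion

# Read the token from an environment variable (the variable name is an example)
token = os.environ.get("RASTION_API_TOKEN")
if token is None:
    raise RuntimeError("Set RASTION_API_TOKEN before running this script")

rastion.authenticate(token)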

View Available Problems

Explore standardized problems available for competition:

from qubots import get_standardized_problems

# Get list of available benchmark problems
problems = get_standardized_problems()

for problem in problems:
    print(f"Problem ID: {problem.id}")
    print(f"Name: {problem.name}")
    print(f"Category: {problem.category}")
    print(f"Difficulty: {problem.difficulty}")
    print(f"Current best: {problem.current_best_value}")
    print("---")

Submit to Leaderboard

Submit your optimization results for evaluation:

from qubots import submit_to_leaderboard, AutoOptimizer, AutoProblem

# Load your optimizer and problem
optimizer = AutoOptimizer.from_repo("my_username/my_optimizer")
problem = AutoProblem.from_repo("examples/tsp_benchmark")

# Run optimization
result = optimizer.optimize(problem)

# Submit to leaderboard
submission = submit_to_leaderboard(
    result=result,
    problem_id=1,  # TSP benchmark problem
    solver_name="MyGeneticAlgorithm",
    solver_repository="my_username/my_optimizer",
    solver_config={
        "version": "1.2.0",
        "population_size": 100,
        "generations": 1000
    }
)

print(f"Submission ID: {submission.id}")
print(f"Status: {submission.status}")

Leaderboard API

LeaderboardClient

Main interface for Leaderboard operations:

from qubots import LeaderboardClient

client = LeaderboardClient()

# Get problem leaderboard
leaderboard = client.get_problem_leaderboard(problem_id=1)

# Get submission details
submission = client.get_submission(submission_id="sub_123")

# Get user's submissions
my_submissions = client.get_user_submissions()
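
Assuming get_user_submissions() returns a list of LeaderboardSubmission objects (the data structure documented next), a quick summary of your own entries might look like this:

# Summarize your own submissions using the fields documented below
for sub in my_submissions:
    print(f"{sub.solver_name} on problem {sub.problem_id}: "
          f"value={sub.result_value}, rank={sub.rank}, status={sub.status}")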

LeaderboardSubmission

Submission data structure:

from datetime import datetime
from typing import Dict

class LeaderboardSubmission:
    id: str                    # Unique submission identifier
    problem_id: int           # Problem identifier
    solver_name: str          # Name of the optimization algorithm
    solver_repository: str    # Repository containing the solver
    solver_config: Dict       # Configuration used for optimization
    result_value: float       # Objective value achieved
    execution_time: float     # Time taken for optimization
    submission_time: datetime # When the submission was made
    status: str              # Evaluation status
    rank: int                # Current rank on leaderboard

StandardizedProblem

Information about benchmark problems:

class StandardizedProblem:
    id: int                   # Problem identifier
    name: str                # Problem name
    description: str         # Problem description
    category: str            # Problem category (TSP, MaxCut, etc.)
    difficulty: str          # Difficulty level
    objective_type: str      # Minimize or maximize
    evaluation_metric: str   # Primary evaluation metric
    time_limit: int          # Maximum execution time
    current_best_value: float # Current best known value
    total_submissions: int   # Number of submissions
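
These fields are enough to pick a target problem and check its limits before running anything expensive. The sketch below reuses get_standardized_problems() from earlier; the specific category and difficulty strings are assumptions based on the examples on this page.

from qubots import get_standardized_problems

# Pick the first medium-difficulty TSP benchmark, if one exists
problems = get_standardized_problems()
target = next(
    (p for p in problems if p.category == "TSP" and p.difficulty == "Medium"),
    None,
)

if target is not None:
    print(f"Target: {target.name} (id={target.id})")
    print(f"Objective: {target.objective_type}, time limit: {target.time_limit}")
    print(f"Best known value: {target.current_best_value}")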

Leaderboard Categories

Problem Categories

The Leaderboard organizes problems into several categories:

Combinatorial Optimization

  • Traveling Salesman Problem (TSP): Find shortest route visiting all cities
  • Maximum Cut (MaxCut): Partition graph to maximize cut weight
  • Vehicle Routing Problem (VRP): Optimize delivery routes
  • Knapsack Problem: Maximize value within weight constraints

Continuous Optimization

  • Function Optimization: Optimize mathematical functions
  • Parameter Tuning: Optimize algorithm parameters
  • Neural Network Training: Optimize network weights

Multi-Objective Optimization

  • Pareto Front Discovery: Find trade-off solutions
  • Constraint Satisfaction: Find solutions that satisfy multiple competing constraints

Difficulty Levels

Problems are categorized by difficulty:

  • Easy: Small instances, well-understood problems
  • Medium: Moderate complexity, realistic problem sizes
  • Hard: Large-scale, challenging instances
  • Expert: Research-level, cutting-edge problems

Evaluation Metrics

Primary Metrics

Different problems use different evaluation criteria:

  • Objective Value: The primary optimization target
  • Solution Quality: How close the result is to the optimal or best-known value (see the gap sketch after this list)
  • Execution Time: Time efficiency of the algorithm
  • Convergence Speed: How quickly the algorithm improves
  • Robustness: Consistency of results across multiple runs
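
Solution quality is usually reported as the relative gap to the best-known value. The helper below is illustrative; each problem defines its own exact metric.

# Relative gap to the best-known value (illustrative formula)
def relative_gap(result_value, best_known_value):
    return abs(result_value - best_known_value) / abs(best_known_value)

# Example: a tour of length 7630 against a best-known tour of 7542
print(f"Gap: {relative_gap(7630, 7542):.2%}")  # about 1.17%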

Ranking System

Leaderboard rankings consider multiple factors:

# Example ranking calculation (illustrative; assumes a minimization objective)
def calculate_rank(submissions, time_limit):
    for submission in submissions:
        # Primary criterion: objective value
        primary_score = submission.result_value

        # Secondary criterion: execution time, used as a tie-breaker
        time_penalty = submission.execution_time / time_limit

        # Combined score: the small weight keeps time from outweighing solution quality
        submission.score = primary_score + (time_penalty * 0.1)

    # Sort by score (lower is better) and assign ranks
    submissions.sort(key=lambda s: s.score)
    for i, submission in enumerate(submissions):
        submission.rank = i + 1
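
A quick check of the scheme above with two mock submissions shows the tie-breaker in action; types.SimpleNamespace simply stands in for real submission objects.

from types import SimpleNamespace

# Two mock submissions with equal objective values; the faster one ranks first
subs = [
    SimpleNamespace(result_value=7542.0, execution_time=120.0),
    SimpleNamespace(result_value=7542.0, execution_time=30.0),
]
calculate_rank(subs, time_limit=300)
for s in subs:
    print(f"value={s.result_value}, time={s.execution_time}s, rank={s.rank}")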

Advanced Features

Batch Submissions

Submit multiple algorithm variants:

from qubots import LeaderboardClient

client = LeaderboardClient()

# Submit multiple configurations
configs = [
    {"population_size": 50, "generations": 500},
    {"population_size": 100, "generations": 1000},
    {"population_size": 200, "generations": 2000}
]

submissions = []
for config in configs:
    # Run optimization with config
    result = optimizer.optimize(problem, config=config)
    
    # Submit to leaderboard
    submission = submit_to_leaderboard(
        result=result,
        problem_id=1,
        solver_name=f"MyAlgorithm_v{config['population_size']}",
        solver_repository="my_username/my_optimizer",
        solver_config=config
    )
    submissions.append(submission)
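
Each call returns a submission object, so a short loop is enough to confirm that all variants were accepted; id and status are fields documented in the API section above.

# Quick summary of the batch
for sub in submissions:
    print(f"{sub.id}: {sub.status}")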

Performance Analysis

Analyze your submissions and compare with others:

# Get detailed performance analysis
analysis = client.get_performance_analysis(
    problem_id=1,
    user_submissions_only=True
)

print(f"Best rank achieved: {analysis.best_rank}")
print(f"Average rank: {analysis.average_rank}")
print(f"Improvement over time: {analysis.improvement_trend}")

# Compare with top performers
top_performers = client.get_top_performers(problem_id=1, limit=10)
for performer in top_performers:
    print(f"{performer.solver_name}: {performer.result_value}")

Collaboration Features

Connect with other researchers:

# Follow interesting submissions
client.follow_submission(submission_id="sub_123")

# Get notifications for new submissions
notifications = client.get_leaderboard_notifications()

# Share insights and discussions
client.add_submission_comment(
    submission_id="sub_123",
    comment="Interesting approach! How did you handle local optima?"
)

Best Practices

Algorithm Development

  • Start with simple baselines before complex algorithms
  • Test locally first to ensure correctness
  • Document your approach for community benefit
  • Iterate based on leaderboard feedback

Submission Strategy

  • Submit incrementally as you improve your algorithm
  • Use meaningful names for easy identification
  • Include detailed configurations for reproducibility
  • Monitor performance trends over time

Fair Competition

  • Follow evaluation guidelines strictly
  • Respect time and resource limits
  • Report any issues with evaluation process
  • Contribute to problem discussions

Troubleshooting

Submission Issues

from qubots import LeaderboardClient

client = LeaderboardClient()

# Check the status of an earlier submission by its ID
submission = client.get_submission(submission_id="sub_123")
if submission.status == "failed":
    print(f"Error: {submission.error_message}")
    print(f"Logs: {submission.evaluation_logs}")

Performance Problems

  • Verify algorithm correctness on smaller instances
  • Check for timeout issues with large problems
  • Ensure reproducible results across runs (see the sketch after this list)
  • Contact support for evaluation environment issues
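
For the reproducibility check mentioned above, running the optimizer a few times and comparing the spread of objective values is usually enough. The sketch below reuses the optimizer and problem from the earlier examples and assumes the result object exposes a best_value field; substitute whatever attribute your result actually provides.

import statistics

# Run several independent optimizations and compare the objective values
# (best_value is an assumed attribute name on the result object)
values = [optimizer.optimize(problem).best_value for _ in range(5)]
print(f"mean={statistics.mean(values):.2f}, stdev={statistics.stdev(values):.2f}")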

Future Features

The Leaderboard system is actively being developed. Upcoming features include:

  • Team competitions for collaborative optimization
  • Dynamic problems that change over time
  • Multi-stage competitions with elimination rounds
  • Real-world problem integration from industry partners
  • Advanced analytics and performance insights

Support

For Leaderboard-related questions: