How to Join a Public Leaderboard

Leaderboards on Rastion provide competitive environments where you can test your optimization algorithms against community benchmarks and compete with other researchers and developers.

What are Leaderboards?

Leaderboards are competitive challenges that feature:

  • Standardized Problems: Well-defined optimization challenges
  • Fair Evaluation: Consistent testing environment and metrics
  • Public Rankings: Transparent performance comparisons
  • Community Recognition: Showcase your algorithmic achievements

Prerequisites

Before joining a leaderboard:

  1. GitHub Login: Sign in with GitHub
  2. Uploaded Algorithm: Upload your optimizer to Rastion
  3. Playground Testing: Test your algorithm first
  4. API Token: Available in your Rastion settings

Finding Leaderboards

1. Browse Available Problems

Visit rastion.com/leaderboard to see the leaderboard interface:

Problem Cards Grid

The leaderboard displays problems as colorful, artistic cards:

  • Problem Name: Clear problem identification
  • Problem Type: TSP, VRP, MaxCut, Scheduling, etc.
  • Difficulty Level: Visual indicators for complexity
  • Submission Count: Number of current submissions
  • Best Performance: Current leading score

Problem Categories

  • Traveling Salesman Problem (TSP): Route optimization challenges
  • Vehicle Routing Problem (VRP): Multi-vehicle logistics
  • Maximum Cut (MaxCut): Graph partitioning problems
  • Scheduling: Resource allocation and timing
  • Custom Problems: Community-contributed challenges

2. Problem Selection

Each problem card provides:

  • View Rankings: See current leaderboard standings
  • View Repository: Access the problem definition repository
  • Open in Playground: Load problem directly in playground with fixed parameters

Joining Process

1. Understanding the Three Steps

The leaderboard clearly outlines the participation process:

Step 1: Open Problem in Playground

  • Click “Open in Playground” on any problem card
  • Problem loads with fixed parameters (non-editable for fairness)
  • Playground environment is pre-configured for the specific challenge

Step 2: Solve with Your Qubot Optimizer

  • Select your uploaded optimizer from the model selector
  • Configure your optimizer parameters (these remain editable)
  • Your algorithm must work with the fixed problem parameters

Step 3: Ensure Metrics are Returned

  • Your optimizer must return proper optimization results
  • Required metrics: best_value, runtime_seconds
  • Optional metrics: iterations, evaluations, convergence_data
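You can verify this locally before ever opening the playground. The snippet below is a minimal sketch: the repository names are placeholders, and it reuses the loading pattern from the Local Testing First example later in this guide to check that a run reports the two required fields.

# Quick local check that the required metrics come back (repo names are placeholders)
from qubots import AutoProblem, AutoOptimizer

problem = AutoProblem.from_repo("tsp_challenge_problem", username="benchmarks")
optimizer = AutoOptimizer.from_repo("my_tsp_optimizer", username="your_username")

result = optimizer.optimize(problem)

# The leaderboard requires both of these on every submission
assert result.best_value is not None, "optimizer did not report best_value"
assert result.runtime_seconds is not None, "optimizer did not report runtime_seconds"
print(result.best_value, result.runtime_seconds)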

2. Problem Repository Access

View Repository Button

  • Repository Link: Direct access to problem definition at rastion.com/username/repo_name
  • Problem Structure: Review the problem’s qubot implementation
  • Parameter Schema: Understand fixed vs. configurable parameters
  • Test Instances: See example problem instances

Use Locally Button

Between “View Repository” and “Open in Playground”, find the “Use Locally” button:

  • AutoProblem Code: Shows AutoProblem.from_repo() code snippet
  • Copy to Clipboard: Easy copying for local development
  • Local Testing: Test your optimizer locally before playground submission
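The copied snippet is typically only a couple of lines. A representative example, with placeholder repository and user names, looks like this:

# Load the leaderboard problem for local development (names are placeholders)
from qubots import AutoProblem

problem = AutoProblem.from_repo("tsp_challenge_problem", username="benchmarks")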

3. Playground Integration

Fixed Problem Parameters

When opened from the leaderboard, the playground automatically:

  • Locks Problem Parameters: Ensures fair comparison across submissions
  • Loads Problem Schema: Pre-configures the problem setup
  • Validates Compatibility: Checks if your optimizer can handle the problem type

Optimizer Configuration

You retain full control over:

  • Algorithm Parameters: Population size, iterations, learning rates, etc.
  • Stopping Criteria: Convergence thresholds, time limits
  • Algorithm Variants: Different versions of your optimization approach

Submission Process

1. Playground Execution and Submission

Execute Your Optimizer

  1. Configure Parameters: Set your optimizer parameters in the playground
  2. Run Optimization: Click the central “Run Optimization” button
  3. Monitor Execution: Watch real-time logs in the terminal viewer
  4. Review Results: Check the optimization results display

Automatic Submission Pipeline

The platform features a streamlined submission process:

  • Automatic Detection: Platform detects when you’re running a leaderboard problem
  • Result Validation: Ensures your optimizer returns required metrics
  • Real-time Updates: Rankings update immediately after successful runs
  • Best Value Tracking: Only your best performance is recorded

2. Ranking System

Ranking Table Features

The leaderboard displays clean, sortable rankings:

Rank | Algorithm | Author | Best Value | Runtime | Actions
-----|-----------|--------|------------|---------|--------
1    | GA-Elite  | alice  | 7542.0     | 45.2s   | [View Repo] [Use Locally] [Open in Playground]
2    | SA-Pro    | bob    | 7598.3     | 38.7s   | [View Repo] [Use Locally] [Open in Playground]
3    | MyTSP-Opt | you    | 7634.1     | 52.1s   | [View Repo] [Use Locally] [Open in Playground]

Ranking Features

  • 1-Based Ranking: Clear numerical ranks (1, 2, 3, …)
  • Sortable Columns: Sort by best value, runtime, or other metrics
  • Action Buttons: Direct access to repositories and playground
  • Manage Your Submissions: Remove your own submissions if needed

Performance Metrics

  • Best Value: Your algorithm’s best objective value
  • Runtime in Playground: Execution time in the cloud environment
  • Submission Count: Number of attempts (for reference)
  • Percentile Score: Performance relative to all submissions

3. Submission Validation

Required Metrics

Your optimizer must return:

# Shape of the result object the leaderboard expects (illustrative sketch)
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class OptimizationResult:
    best_value: float                  # Required: objective function value
    runtime_seconds: float             # Required: execution time
    iterations: Optional[int] = None   # Optional: algorithm iterations
    evaluations: Optional[int] = None  # Optional: function evaluations
    solution: Any = None               # Optional: actual solution found

Validation Process

  1. Problem Matching: Verifies you’re solving the correct problem
  2. Repository Ownership: Confirms you own the optimizer repository
  3. Result Format: Checks that results match expected format
  4. Performance Bounds: Validates results are within reasonable ranges
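If you want to mimic the last two checks locally before submitting, a rough pre-check might look like the sketch below. This is only an illustration; the platform performs its own authoritative validation.

# Rough local pre-check of result format and bounds (illustrative only)
import math

def precheck_result(result):
    # Result format: required fields must be present and numeric
    assert isinstance(result.best_value, (int, float)), "best_value must be numeric"
    assert isinstance(result.runtime_seconds, (int, float)), "runtime_seconds must be numeric"
    # Performance bounds: values should at least be finite and non-negative where expected
    assert math.isfinite(result.best_value), "best_value must be finite"
    assert result.runtime_seconds >= 0, "runtime_seconds must be non-negative"
    return True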

Leaderboard Interface Features

1. Problem Cards Design

The leaderboard features artistic, uniquely generated card designs:

  • Colorful Gradients: Each problem has unique color schemes
  • Visual Appeal: Modern, engaging card layouts
  • Quick Information: Essential details at a glance
  • Hover Effects: Interactive feedback on card hover

2. Sorting and Filtering

Sorting Options

The dropdown provides sorting by:

  • Best Value: Sort by objective function performance
  • Runtime: Sort by execution speed
  • Recent: Most recently submitted algorithms

No Search/Submit Buttons

The leaderboard interface is focused on viewing results and taking direct action:

  • Clean Interface: No clutter from search bars or submission counts
  • Direct Actions: Immediate access to repositories and playground
  • Streamlined Experience: Focus on algorithm performance

3. Repository Integration

  • Direct Access: Links to rastion.com/username/repo_name
  • Repository Browsing: Full GitHub-style repository interface
  • Code Review: Examine algorithm implementations
  • Documentation: Read algorithm descriptions and usage

Use Locally Feature

  • Code Snippets: Ready-to-copy AutoProblem.from_repo() code
  • Local Development: Fits easily into your local development workflow
  • Testing: Validate algorithms locally before submission

Improving Your Ranking

1. Parameter Optimization Strategy

Systematic Approach

  1. Start with Defaults: Begin with your algorithm’s default parameters
  2. Single Parameter Changes: Modify one parameter at a time
  3. Performance Tracking: Record results for each configuration
  4. Iterative Refinement: Build on successful parameter combinations

Local Testing First

# Test parameter combinations locally before playground submission
from qubots import AutoProblem, AutoOptimizer

# Load the leaderboard problem locally
problem = AutoProblem.from_repo("tsp_challenge_problem", username="benchmarks")
optimizer = AutoOptimizer.from_repo("my_tsp_optimizer", username="your_username")

# Test different parameter sets
parameter_sets = [
    {"population_size": 100, "generations": 500},
    {"population_size": 200, "generations": 1000},
    {"population_size": 150, "generations": 750}
]

for params in parameter_sets:
    result = optimizer.optimize(problem, **params)
    print(f"Params: {params}, Result: {result.best_value}")

2. Algorithm Enhancement

Performance Improvements

  • Convergence Speed: Optimize for faster convergence
  • Solution Quality: Improve final solution quality
  • Robustness: Ensure consistent performance across runs
  • Memory Efficiency: Optimize memory usage for larger problems

Advanced Techniques

  • Hybrid Methods: Combine multiple optimization approaches
  • Problem-specific Heuristics: Leverage domain knowledge
  • Adaptive Parameters: Implement self-tuning parameters (see the sketch after this list)
  • Parallel Processing: Utilize multiple cores effectively
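As one concrete illustration of the adaptive-parameters idea, the sketch below adjusts a mutation rate between generations depending on whether the best value improved. It is generic Python, not tied to any particular qubots interface.

# Self-tuning mutation rate: exploit while improving, diversify when stagnating
def adapt_mutation_rate(rate, improved, factor=1.5, low=0.01, high=0.5):
    if improved:
        rate /= factor   # progress was made: search more locally
    else:
        rate *= factor   # stagnation: inject more diversity
    return min(max(rate, low), high)

# Inside the generation loop, something like:
# mutation_rate = adapt_mutation_rate(mutation_rate, new_best < previous_best)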

3. Submission Strategy

Multiple Attempts

  • Parameter Variants: Submit with different parameter configurations
  • Algorithm Versions: Try different versions of your algorithm
  • Best Performance: Only your best result counts in rankings
  • Learning from Failures: Analyze unsuccessful attempts

Competitive Analysis

  • Study Top Performers: Examine leading algorithms’ repositories
  • Identify Patterns: Look for common successful strategies
  • Benchmark Comparison: Compare your approach with others
  • Innovation Opportunities: Find gaps in current approaches

Leaderboard Etiquette

1. Fair Play

  • Original Work: Submit your own algorithms
  • Proper Attribution: Credit any borrowed techniques
  • No Cheating: Don’t exploit system vulnerabilities
  • Respectful Competition: Maintain professional conduct

2. Community Contribution

  • Share Insights: Discuss your approaches in forums
  • Help Others: Answer questions from newcomers
  • Provide Feedback: Report issues or suggest improvements
  • Open Source: Consider sharing successful algorithms

3. Documentation

# Example algorithm description
algorithm_description = {
    "name": "Hybrid Genetic-SA TSP Solver",
    "description": "Genetic algorithm with simulated annealing local search",
    "key_features": [
        "Tournament selection with elitism",
        "Order crossover (OX) operator", 
        "2-opt local search",
        "Adaptive cooling schedule"
    ],
    "parameters": {
        "population_size": 200,
        "generations": 1000,
        "mutation_rate": 0.1,
        "local_search_prob": 0.3
    },
    "references": [
        "Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning"
    ]
}

Tracking Your Progress

1. Performance History

Monitor your submissions:

  • Score Progression: Track improvements over time
  • Ranking Changes: See how you compare to others
  • Algorithm Evolution: Document what changes helped
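The platform records your best value, but a simple local log of every configuration you try makes it easier to see which changes actually helped. A minimal sketch in plain Python (no platform API involved):

# Append one row per run so you can trace your score progression
import csv, time

def log_run(path, params, best_value, runtime_seconds):
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"), params, best_value, runtime_seconds])

# Example: log_run("tsp_runs.csv", {"population_size": 200}, result.best_value, result.runtime_seconds)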

2. Analytics Dashboard

Access detailed analytics:

  • Instance-level Performance: Which problems are challenging
  • Parameter Sensitivity: How settings affect performance
  • Comparison Analysis: Performance relative to baselines

Troubleshooting

Common Issues

“Submission Failed”

  • Check that your algorithm completed successfully
  • Verify all required metrics were computed
  • Ensure you have submission permissions

“Algorithm Timeout”

  • Optimize your algorithm for speed
  • Reduce problem complexity if possible
  • Check for infinite loops or inefficient code

“Invalid Results”

  • Verify your solution format matches requirements
  • Check that solutions are feasible
  • Ensure objective values are calculated correctly
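For permutation-based problems such as TSP, a common cause of invalid results is an infeasible tour. A quick generic feasibility check, independent of any qubots API and assuming cities are labeled 0..n-1, is:

# Verify a TSP tour visits every city exactly once (cities assumed to be 0-indexed)
def is_valid_tour(tour, n_cities):
    return len(tour) == n_cities and sorted(tour) == list(range(n_cities))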

Next Steps

Competing on leaderboards is a great way to benchmark your algorithms, learn from the community, and contribute to optimization research. Good luck with your submissions!