How to Create a Public Optimization Experiment

Optimization experiments (also called “workflows”) on Rastion allow you to share your research findings, algorithm comparisons, and optimization insights with the community. This guide shows you how to create compelling experiments that others can learn from and build upon.

What is an Optimization Experiment?

An optimization experiment is a structured study that:

  • Documents Optimization Runs: Records specific problem-optimizer combinations
  • Shares Configurations: Makes successful parameter settings available to others
  • Enables Reproduction: Provides all details needed to reproduce results
  • Facilitates Learning: Helps the community understand what works and why

Prerequisites

Before creating an experiment:

  1. Uploaded Repositories: Upload your algorithms to Rastion
  2. Playground Testing: Test your algorithms in the playground
  3. GitHub Login: Sign in with GitHub
  4. Successful Optimization Run: Complete at least one successful playground execution

Creating Experiments from Playground

1. Save Playground Configuration

After a successful optimization run in the playground:

Save Workflow Option

  1. Complete Optimization: Finish a playground run with good results
  2. Click “Save”: Use the save button in the playground interface
  3. Name Your Workflow: Give it a descriptive name
  4. Add Description: Explain what makes this configuration special
  5. Set Visibility: Choose public (shareable) or private (personal)

Workflow Information

Your saved workflow captures the following (an illustrative record is sketched after this list):

  • Problem Selection: Exact problem repository and version
  • Optimizer Selection: Your algorithm repository and version
  • Parameter Configuration: All problem and optimizer parameters
  • Performance Results: Best value, runtime, and other metrics
  • Execution Environment: Playground environment specifications
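
As a rough illustration, a saved workflow record bundles these pieces roughly as follows. The field names and repository names are hypothetical, not Rastion's actual schema; they simply mirror the items listed above.

# Hypothetical structure of a saved workflow record (illustrative only;
# the actual schema is defined by Rastion).
saved_workflow = {
    "name": "ga_tsp_berlin52_pop200",
    "description": "GA with tournament selection, population 200",
    "visibility": "public",                                  # or "private"
    "problem": {"repo": "username/tsp_berlin52", "version": "v1.0"},
    "optimizer": {"repo": "username/genetic_tsp", "version": "v2.1"},
    "parameters": {
        "population_size": 200,
        "generations": 1000,
        "mutation_rate": 0.1,
        "crossover_rate": 0.8,
    },
    "results": {"best_value": 7542.0, "runtime_seconds": 12.4},
    "environment": {"platform": "rastion-playground"},
}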

2. Access Experiments Page

Navigate to rastion.com/experiments to:

View Community Experiments

  • Browse Public Workflows: See what others have shared
  • Search by Problem Type: Filter by TSP, VRP, MaxCut, etc.
  • Sort by Performance: Find the best-performing configurations
  • Learn from Success: Study successful parameter combinations

Manage Your Experiments

  • My Workflows: View your saved configurations
  • Edit Descriptions: Update experiment documentation
  • Share Settings: Change visibility between public and private
  • Performance Tracking: Monitor how your experiments perform

Experiment Features

1. Experiment Interface

The experiments page provides:

Search and Sorting

  • Search Bar: A compact, streamlined interface for finding experiments
  • Sorting Options: Sort by performance, date, or popularity
  • Filter by Type: Filter by problem category (TSP, VRP, etc.)
  • Consistent Layout: Matches the width of the benchmarks page

Experiment Cards

Each experiment displays:

  • Configuration Summary: Problem and optimizer combination
  • Performance Metrics: Best value, runtime, success rate
  • Author Information: Creator and creation date
  • Repository Links: Direct links to used repositories
  • Action Buttons: Open in playground, copy URL, view details

2. Sharing and Collaboration

Open in Playground

  • “Open in Playground” Button: Replaces the generic “Run” button
  • Pre-configured Setup: Loads the exact configuration in the playground
  • Parameter Inheritance: All parameters automatically set
  • Immediate Testing: Start experimenting with proven configurations

Repository Integration

  • Clickable Repository Names: Link to rastion.com/username/repo_name
  • Repository Browsing: Full access to algorithm and problem code
  • Version Tracking: Links to specific repository versions used
  • Code Review: Examine implementation details

URL Sharing

  • “Copy URL” Button: Share experiment URLs with other Rastion users
  • Direct Access: Recipients can immediately view and use experiments
  • Community Sharing: Facilitate knowledge sharing across the platform
  • Collaboration: Enable team members to access shared experiments

Creating Multiple Experiments

1. Systematic Experimentation

Parameter Studies

Create multiple experiments to study parameter effects (a configuration-generation sketch follows this list):

  1. Baseline Experiment: Start with default parameters
  2. Single Parameter Variation: Change one parameter at a time
  3. Combination Studies: Test promising parameter combinations
  4. Performance Analysis: Compare results across experiments
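
A simple way to set this up is to derive each run's configuration from a shared baseline, changing one parameter at a time. The sketch below is plain Python and assumes nothing about Rastion's API; the parameter names follow the GA example later in this guide.

from copy import deepcopy

# Baseline configuration (parameter names follow the GA example below)
baseline = {
    "population_size": 100,
    "generations": 1000,
    "mutation_rate": 0.1,
    "crossover_rate": 0.8,
}

# Vary one parameter at a time while keeping the others fixed
variations = {
    "population_size": [50, 100, 200, 400],
    "mutation_rate": [0.05, 0.1, 0.2],
}

configs = []
for param, values in variations.items():
    for value in values:
        config = deepcopy(baseline)
        config[param] = value
        config["experiment_name"] = f"ga_tsp_{param}_{value}"
        configs.append(config)

# Each entry in `configs` corresponds to one playground run / saved experiment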

Algorithm Comparisons

Compare different approaches (see the summary sketch after this list):

  1. Same Problem, Different Algorithms: Test multiple optimizers on one problem
  2. Algorithm Variants: Compare different versions of your algorithm
  3. Hybrid Approaches: Test combinations of optimization techniques
  4. Benchmark Studies: Compare against established algorithms
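
If you export the results of several such runs (CSV/JSON export is covered under Sharing Options below), a quick side-by-side summary takes only a few lines. The column names here ('algorithm', 'best_value') are an assumption about your own export, not a fixed Rastion format.

import pandas as pd

# Assumed export: one row per optimization run
df = pd.read_csv("experiment_results.csv")

# Summarize each algorithm's best objective values across runs
summary = (
    df.groupby("algorithm")["best_value"]
      .agg(["count", "mean", "std", "min"])
      .sort_values("mean")
)
print(summary)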

2. Experiment Documentation

Effective Descriptions

Write clear experiment descriptions:

# Genetic Algorithm Parameter Study for TSP

## Objective
Investigate the effect of population size on GA performance for TSP instances.

## Configuration
- Problem: TSP Berlin52 (52 cities)
- Algorithm: Genetic Algorithm with tournament selection
- Variable: Population size (50, 100, 200, 400)
- Fixed: 1000 generations, 0.1 mutation rate, 0.8 crossover rate

## Results
- Best performance: Population size 200 (7542.0)
- Runtime trade-off: Larger populations slower but better quality
- Diminishing returns: 400 population only marginally better than 200

## Insights
- Sweet spot around 200 for this problem size
- Consider adaptive population sizing for different problem scales

Tags and Categorization

Use relevant tags for discoverability:

  • Problem Type: “tsp”, “vrp”, “maxcut”, “scheduling”
  • Algorithm Type: “genetic”, “simulated_annealing”, “local_search”
  • Study Type: “parameter_study”, “comparison”, “benchmark”
  • Domain: “logistics”, “graph_theory”, “combinatorial”
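
For example, the population-size study described earlier might carry tags like these (how tags are attached depends on the save dialog):

tags = ["tsp", "genetic", "parameter_study", "combinatorial"]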

Community Engagement

1. Learning from Others

Browse Community Experiments

  • Study Successful Configurations: Learn from high-performing experiments
  • Understand Parameter Choices: See why certain parameters work well
  • Discover New Approaches: Find innovative optimization strategies
  • Benchmark Your Work: Compare your results with community standards

Experiment Analysis

When viewing others’ experiments:

  • Performance Metrics: Compare objective values and runtimes
  • Parameter Settings: Study successful parameter combinations
  • Problem-Algorithm Fit: Understand which algorithms work for which problems
  • Implementation Details: Access repository code for deeper understanding

2. Contributing to Knowledge

Share Your Insights

  • Document Findings: Explain why certain configurations work
  • Share Failures: Negative results are also valuable for the community
  • Provide Context: Explain the reasoning behind parameter choices
  • Offer Improvements: Suggest enhancements to existing approaches

Best Practices for Public Experiments

  • Clear Naming: Use descriptive experiment names
  • Comprehensive Descriptions: Explain methodology and findings
  • Reproducible Setup: Ensure others can replicate your results
  • Performance Context: Explain what constitutes good performance

Experiment Analysis and Visualization

Statistical Analysis

import numpy as np
import pandas as pd
import scipy.stats as stats

# Assumed: results exported from your experiments (see Export Data below),
# one row per run with 'algorithm' and 'best_value' columns
df = pd.read_csv("experiment_results.csv")

# Compare algorithm performance on the best objective value found per run
genetic_results = df[df['algorithm'] == 'genetic_tsp']['best_value']
sa_results = df[df['algorithm'] == 'sa_tsp']['best_value']

# Statistical significance test (two-sample t-test on the mean best value)
t_stat, p_value = stats.ttest_ind(genetic_results, sa_results)

analysis = {
    "genetic_mean": np.mean(genetic_results),
    "genetic_std": np.std(genetic_results),
    "sa_mean": np.mean(sa_results),
    "sa_std": np.std(sa_results),
    "p_value": p_value,
    "significant": p_value < 0.05
}

Visualization Options

The platform provides built-in visualizations:

  • Performance Comparison: Box plots, violin plots
  • Convergence Analysis: Line plots showing optimization progress
  • Parameter Sensitivity: Heatmaps and scatter plots
  • Statistical Summary: Tables with means, medians, confidence intervals
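
If you also want to reproduce such plots outside the platform, a minimal matplotlib sketch over the same assumed export ('algorithm' and 'best_value' columns) looks like this:

import pandas as pd
import matplotlib.pyplot as plt

# Assumed export: one row per run with 'algorithm' and 'best_value' columns
df = pd.read_csv("experiment_results.csv")

# Box plot of best objective values, grouped by algorithm
df.boxplot(column="best_value", by="algorithm")
plt.suptitle("")                      # drop pandas' automatic super-title
plt.title("Performance comparison across algorithms")
plt.ylabel("Best objective value")
plt.show()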

Sharing and Collaboration

Publication Settings

  • Public Experiments: Visible to all users, searchable
  • Unlisted: Accessible via direct link only
  • Private: Visible only to you and collaborators

Collaboration Features

# Add collaborators to experiment
experiment.add_collaborator("colleague_username", role="editor")
experiment.add_collaborator("advisor_username", role="viewer")

# Enable community contributions
experiment.enable_community_submissions(
    allow_new_algorithms=True,
    allow_parameter_suggestions=True,
    moderation_required=True
)

Sharing Options

  • Direct Links: Share experiment URLs
  • Embed Code: Embed results in websites/papers
  • Export Data: Download results in CSV, JSON formats
  • Citation: Generate academic citations

Best Practices

Experimental Design

  1. Clear Hypothesis: State what you’re testing
  2. Controlled Variables: Keep non-tested parameters constant
  3. Multiple Runs: Use multiple random seeds for statistical validity (see the sketch after this list)
  4. Appropriate Baselines: Include well-known algorithms for comparison
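
For point 3 above, a simple pattern is to repeat each configuration over a fixed set of seeds and report aggregate statistics. In the sketch below, run_optimizer is a hypothetical placeholder for however you launch a run; here it just returns a dummy value so the snippet executes.

import numpy as np

SEEDS = [0, 1, 2, 3, 4]   # fixed seeds keep the study reproducible

def run_optimizer(config, seed):
    """Hypothetical placeholder for launching one optimization run."""
    rng = np.random.default_rng(seed)
    return 7542.0 + abs(rng.normal(0.0, 25.0))   # dummy best value for illustration

results = [run_optimizer({"population_size": 200}, seed) for seed in SEEDS]
print(f"mean={np.mean(results):.1f}  std={np.std(results):.1f}  n={len(results)}")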

Documentation

# Experiment Documentation Template

## Objective
What research question are you answering?

## Methodology
- Algorithms tested
- Problem instances used
- Parameter configurations
- Evaluation metrics
- Statistical analysis methods

## Results
- Key findings
- Statistical significance
- Performance comparisons
- Unexpected observations

## Conclusions
- Practical implications
- Limitations
- Future work suggestions

Reproducibility

  • Version Control: Specify exact algorithm versions
  • Parameter Documentation: Record all parameter settings
  • Environment Details: Note any special requirements
  • Data Availability: Ensure all data is accessible
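
One lightweight way to cover these points is to keep a small metadata file next to each experiment. The fields below are suggestions (the repository names are hypothetical), not a required Rastion format:

import json

metadata = {
    "problem_repo": "username/tsp_berlin52",     # hypothetical repository names
    "problem_version": "v1.0",
    "optimizer_repo": "username/genetic_tsp",
    "optimizer_version": "v2.1",
    "parameters": {
        "population_size": 200,
        "generations": 1000,
        "mutation_rate": 0.1,
        "crossover_rate": 0.8,
    },
    "seeds": [0, 1, 2, 3, 4],
    "environment": "Rastion playground defaults",
}

with open("experiment_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)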

Next Steps

Your experiments contribute to the collective knowledge of the optimization community. Share your insights and help advance the field!