Playground Integration

The Qubots Playground is a cloud-based environment for executing optimization algorithms without relying on local computational resources. It lets researchers and developers test, benchmark, and share their optimization solutions collaboratively and at scale.

What is the Playground?

The Playground is a cloud-based execution environment that allows you to:

  • Run optimizations without local resource constraints
  • Test algorithms on standardized hardware
  • Share results with the community
  • Access powerful computing resources for large-scale problems
  • Collaborate with other researchers and developers

Key Features

Cloud Execution

Execute optimization algorithms on powerful cloud infrastructure without worrying about local hardware limitations.

Standardized Environment

All executions run in a consistent, reproducible environment, ensuring fair comparisons and reliable results.

Resource Scaling

Automatically scale computational resources based on problem complexity and requirements.

Result Sharing

Share optimization results, visualizations, and insights with the community.

Integration with Qubots

Seamless integration with the Qubots framework for easy deployment of local algorithms to the cloud.

Getting Started

Prerequisites

Before using the Playground, ensure you have:

  1. Qubots installed: pip install qubots
  2. Rastion account: Sign up at rastion.com
  3. API token: Generate from your Rastion account settings

Authentication

Set up authentication for Playground access:

import qubots.rastion as rastion

# Authenticate with your API token
rastion.authenticate("your_api_token_here")

# Or set environment variable
# export RASTION_API_TOKEN="your_api_token_here"

Basic Usage

Execute an optimization in the Playground:

from qubots import execute_playground_optimization

# Execute optimization in the cloud
result = execute_playground_optimization(
    problem_name="tsp_problem",
    optimizer_name="genetic_tsp",
    problem_username="community",
    optimizer_username="research_group",
    config={
        "population_size": 100,
        "generations": 500,
        "mutation_rate": 0.1
    }
)

print(f"Best solution: {result.best_solution}")
print(f"Best value: {result.best_value}")
print(f"Execution time: {result.execution_time}")

Playground API

PlaygroundExecutor

The main interface for Playground operations:

from qubots import PlaygroundExecutor

executor = PlaygroundExecutor()

# Submit optimization job
job_id = executor.submit_optimization(
    problem_repo="examples/maxcut_problem",
    optimizer_repo="examples/ortools_maxcut",
    config={"time_limit": 300}
)

# Monitor job status
status = executor.get_job_status(job_id)
print(f"Job status: {status.state}")

# Retrieve results when complete
if status.is_complete():
    result = executor.get_job_result(job_id)
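Until a job finishes you typically poll its status. The helper below is a minimal sketch built against the status API shown above (`is_complete()` comes from the example; the timeout handling and parameter names are assumptions, not part of the Qubots API):

```python
import time

def wait_for_job(get_status, job_id, poll_interval=5.0, timeout=3600.0):
    """Poll a job until it completes or the timeout expires.

    get_status: a callable such as executor.get_job_status that returns
    a status object exposing is_complete().
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status.is_complete():
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout}s")
```

With a real executor this would be called as `wait_for_job(executor.get_job_status, job_id)` before fetching the result.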

PlaygroundResult

Comprehensive result object from Playground executions:

class PlaygroundResult:
    best_solution: Any          # Best solution found
    best_value: float           # Best objective value
    execution_time: float       # Total execution time
    iterations: int             # Number of iterations
    convergence_data: List      # Convergence history
    resource_usage: Dict        # CPU, memory usage stats
    metadata: Dict              # Additional execution metadata
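As a sketch of how these fields might be consumed, the stub below mirrors the attributes listed above (the stub class is hypothetical and exists only so the example is self-contained; with a real PlaygroundResult you would pass the result object directly):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class PlaygroundResultStub:
    """Local stand-in mirroring the PlaygroundResult fields (illustration only)."""
    best_solution: Any = None
    best_value: float = 0.0
    execution_time: float = 0.0
    iterations: int = 0
    convergence_data: List[float] = field(default_factory=list)
    resource_usage: Dict[str, Any] = field(default_factory=dict)
    metadata: Dict[str, Any] = field(default_factory=dict)

def summarize(result) -> str:
    """One-line summary of a (real or stubbed) Playground result."""
    improvement = ""
    if result.convergence_data:
        first, last = result.convergence_data[0], result.convergence_data[-1]
        improvement = f", improved {first:g} -> {last:g}"
    return (f"best={result.best_value:g} after {result.iterations} iterations "
            f"in {result.execution_time:g}s{improvement}")
```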

ModelInfo

Information about models available in the Playground:

from qubots import ModelInfo

# Get information about available models
problems = ModelInfo.list_problems()
optimizers = ModelInfo.list_optimizers()

# Get detailed model information
problem_info = ModelInfo.get_problem_details("tsp_problem", "community")
optimizer_info = ModelInfo.get_optimizer_details("genetic_tsp", "research_group")

Advanced Features

Batch Execution

Execute multiple optimizations simultaneously:

from qubots import PlaygroundExecutor

executor = PlaygroundExecutor()

# Submit batch of jobs
jobs = executor.submit_batch([
    {
        "problem_repo": "examples/tsp",
        "optimizer_repo": "examples/genetic_tsp",
        "config": {"population_size": 50}
    },
    {
        "problem_repo": "examples/tsp", 
        "optimizer_repo": "examples/ortools_tsp",
        "config": {"time_limit": 300}
    }
])

# Wait for all jobs to complete
results = executor.wait_for_batch(jobs)

Custom Resource Requirements

Specify computational resource requirements:

result = execute_playground_optimization(
    problem_name="large_tsp",
    optimizer_name="genetic_algorithm",
    problem_username="community",
    optimizer_username="research",
    resources={
        "cpu_cores": 8,
        "memory_gb": 16,
        "time_limit_minutes": 60,
        "gpu_enabled": False
    }
)

Result Visualization

Access built-in visualization tools:

from qubots import execute_playground_optimization

result = execute_playground_optimization(...)

# Generate convergence plot
result.plot_convergence()

# Generate resource usage chart
result.plot_resource_usage()

# Export results for external analysis
result.export_to_csv("optimization_results.csv")
result.export_to_json("optimization_results.json")

Integration Patterns

Local Development to Cloud Deployment

Develop locally and deploy to Playground:

# 1. Develop and test locally
from qubots import AutoProblem, AutoOptimizer, execute_playground_optimization

problem = AutoProblem.from_repo("my_username/my_problem")
optimizer = AutoOptimizer.from_repo("my_username/my_optimizer")

# Test locally
local_result = optimizer.optimize(problem)

# 2. Deploy to Playground for larger scale testing
cloud_result = execute_playground_optimization(
    problem_name="my_problem",
    optimizer_name="my_optimizer", 
    problem_username="my_username",
    optimizer_username="my_username",
    config=optimizer.get_config()
)

Benchmarking in the Cloud

Run comprehensive benchmarks using Playground resources:

from qubots import BenchmarkSuite, PlaygroundExecutor

# Create benchmark suite
suite = BenchmarkSuite()
suite.add_problem_from_repo("examples/tsp")
suite.add_optimizer_from_repo("examples/genetic_tsp")
suite.add_optimizer_from_repo("examples/ortools_tsp")

# Execute benchmark in Playground
executor = PlaygroundExecutor()
benchmark_results = executor.run_benchmark_suite(suite)

Best Practices

Resource Management

  • Estimate resource needs based on problem size and algorithm complexity
  • Use appropriate time limits to avoid unnecessary costs
  • Monitor resource usage to optimize future executions
  • Clean up completed jobs to manage storage
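To monitor usage after a run, peak figures can be pulled out of a result's resource_usage dict for review. The key names below ("cpu_percent", "memory_mb") are assumptions about the dict's layout, not a documented schema; adjust them to whatever your results actually report:

```python
def peak_usage(resource_usage):
    """Extract peak CPU and memory figures from a resource_usage dict.

    Assumes the dict maps metric names to lists of samples; missing
    metrics default to 0.0.
    """
    cpu = resource_usage.get("cpu_percent", [])
    mem = resource_usage.get("memory_mb", [])
    return {
        "peak_cpu_percent": max(cpu, default=0.0),
        "peak_memory_mb": max(mem, default=0.0),
    }
```

Logging these peaks per job makes it easier to right-size the resource requests on future submissions.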

Configuration Management

  • Use version control for optimization configurations
  • Document parameter choices for reproducibility
  • Test configurations locally before cloud deployment
  • Use parameter sweeps for hyperparameter optimization
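A parameter sweep can be expressed as a config grid and handed to submit_batch, one job per combination. A minimal sketch (config_grid is a local helper defined here, not part of Qubots):

```python
from itertools import product

def config_grid(**param_values):
    """Expand lists of parameter values into one config dict per combination,
    ready to pair with problem/optimizer repos for a batch submission."""
    keys = list(param_values)
    return [dict(zip(keys, combo)) for combo in product(*param_values.values())]

# Example sweep over two genetic-algorithm parameters
configs = config_grid(population_size=[50, 100], mutation_rate=[0.05, 0.1])
# -> 4 configs, starting with {"population_size": 50, "mutation_rate": 0.05}
```

Each entry in `configs` could then become the "config" field of a submit_batch job specification.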

Result Management

  • Download important results for local analysis
  • Use meaningful job names for easy identification
  • Archive completed experiments for future reference
  • Share interesting results with the community
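The naming and archiving advice above can be combined in a small helper (archive_result is a local sketch, not a Qubots API; it expects the result already converted to a plain dict, e.g. via export_to_json or manually):

```python
import json
import time
from pathlib import Path

def archive_result(result_dict, job_name, directory="experiments"):
    """Write a result dict to a timestamped JSON file named after the job."""
    out_dir = Path(directory)
    out_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    path = out_dir / f"{job_name}_{stamp}.json"
    path.write_text(json.dumps(result_dict, indent=2))
    return path
```

The job name in the filename keeps archived experiments identifiable without opening each file.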

Troubleshooting

Common Issues

Authentication Errors

# Verify token is set correctly
import os
print(os.getenv('RASTION_API_TOKEN'))

# Re-authenticate if needed
import qubots.rastion as rastion
rastion.authenticate("your_token_here")

Job Failures

# Check job status and error messages
status = executor.get_job_status(job_id)
if status.has_error():
    print(f"Error: {status.error_message}")
    print(f"Logs: {status.execution_logs}")

Resource Limitations

  • Check available resource quotas in your account
  • Reduce resource requirements or upgrade account
  • Contact support for quota increases

Next Steps

Support

For Playground-related questions and support: