CLI Usage

The ReAPI CLI provides command-line access for running tests, managing deployments, and integrating ReAPI into automation workflows.

Installation

NPM Installation

npm install -g @reapi/cli
 
# Verify installation
reapi --version

Docker Usage

# Run without installation
docker run --rm reapi/cli:latest --version
 
# Create alias for convenience
alias reapi='docker run --rm -v "$(pwd)":/workspace reapi/cli:latest'
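
If the CLI runs inside Docker, environment variables set on the host are not visible in the container, so forward the API key explicitly. A minimal sketch using Docker's standard -e flag; REAPI_API_KEY is described under Authentication below, and it assumes the image's entrypoint is the reapi binary, as in the examples above:

# Forward the API key from the host environment into the container
docker run --rm -e REAPI_API_KEY -v "$(pwd)":/workspace reapi/cli:latest auth status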

Authentication

API Key Setup

# Set API key as environment variable
export REAPI_API_KEY="your-api-key-here"
 
# Or use config file
reapi config set api-key "your-api-key-here"
 
# Verify authentication
reapi auth status

Configuration File

# Initialize configuration
reapi config init
 
# Set workspace
reapi config set workspace "your-workspace-id"
 
# Set default environment
reapi config set default-environment "staging"
 
# View current configuration
reapi config show

Running Tests with Secrets

The ReAPI CLI can inject secrets at runtime, so you can keep them in your own vault (AWS Secrets Manager, HashiCorp Vault, etc.) and provide them only during test execution.

Basic Secrets Injection

# Inject secrets from a JSON file
reapi test run --deployment my-deployment --secrets ./secrets.json
 
# Inject secrets from a YAML file
reapi test run --deployment my-deployment --secrets ./secrets.yaml

Secrets File Format

JSON format (secrets.json):

{
  "API_KEY": "your-api-key-here",
  "DB_PASSWORD": "your-database-password",
  "AUTH_TOKEN": "Bearer eyJhbGc..."
}

YAML format (secrets.yaml):

API_KEY: your-api-key-here
DB_PASSWORD: your-database-password
AUTH_TOKEN: Bearer eyJhbGc...

CI/CD Integration Example

#!/bin/bash
# Retrieve secrets from AWS Secrets Manager
aws secretsmanager get-secret-value \
  --secret-id reapi/production \
  --query SecretString \
  --output text > /tmp/secrets.json
 
# Run tests with injected secrets
reapi test run \
  --deployment production-tests \
  --secrets /tmp/secrets.json \
  --output junit > test-results.xml
 
# Clean up secrets file
rm /tmp/secrets.json
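
The same pattern works with other secret stores. A minimal sketch for HashiCorp Vault, assuming an authenticated vault CLI, jq installed, and a KV v2 secret at the hypothetical path secret/reapi/staging (the deployment name is also illustrative):

#!/bin/bash
# KV v2 responses nest the key/value pairs under .data.data
vault kv get -format=json secret/reapi/staging | jq '.data.data' > /tmp/secrets.json
 
# Run tests with the injected secrets, then remove the temporary file
reapi test run \
  --deployment staging-tests \
  --secrets /tmp/secrets.json
rm /tmp/secrets.json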

Security Benefits

  • Zero-knowledge: Secrets never stored in ReAPI’s cloud
  • Centralized management: Use your existing secret management solution
  • Instant rotation: Update secrets without re-deploying tests
  • Compliance: Meet requirements that secrets remain inside your own infrastructure

Detailed guide: See Secrets Management for complete setup instructions, including browser-based password and biometric unlock options.

Running Tests

Basic Test Execution

# Run a specific test runner
reapi run --runner smoke-tests
 
# Run with specific environment
reapi run --runner regression --environment staging
 
# Run a single test case
reapi run --test-case "user-login-flow"

Advanced Execution Options

# Run with timeout (in seconds)
reapi run --runner integration --timeout 1800
 
# Run with specific tags
reapi run --tags "critical,auth" --environment prod
 
# Run and wait for completion
reapi run --runner nightly --wait-for-completion
 
# Run with custom variables
reapi run --runner api-tests --var "userId=123" --var "testMode=debug"

Output Formats

# JSON output
reapi run --runner smoke --output json > results.json
 
# JUnit XML output (for CI/CD)
reapi run --runner regression --output junit > test-results.xml
 
# Human-readable output (default)
reapi run --runner integration --output human
 
# Quiet mode (minimal output)
reapi run --runner smoke --quiet
 
# Verbose mode (detailed output)
reapi run --runner integration --verbose

Test Management

Listing Resources

# List all test runners
reapi list runners
 
# List test cases in a specific folder
reapi list tests --folder "authentication"
 
# List test cases with specific tags
reapi list tests --tags "smoke,critical"
 
# List environments
reapi list environments
 
# List deployments
reapi list deployments

Test Information

# Get details about a test runner
reapi describe runner smoke-tests
 
# Get test case details
reapi describe test "user-login-flow"
 
# Get environment configuration
reapi describe environment staging
 
# Get deployment status
reapi describe deployment "nightly-regression"

Deployment Management

Creating Deployments

# Create a new deployment
reapi deployment create \
  --name "staging-regression" \
  --runner "regression-suite" \
  --environment "staging" \
  --schedule "0 2 * * *"  # Daily at 2 AM
 
# Create deployment with webhook trigger
reapi deployment create \
  --name "pr-validation" \
  --runner "api-contracts" \
  --environment "staging" \
  --webhook-enabled

Managing Deployments

# List all deployments
reapi deployment list
 
# Get deployment details
reapi deployment show "staging-regression"
 
# Update deployment schedule
reapi deployment update "staging-regression" --schedule "0 */6 * * *"
 
# Enable/disable deployment
reapi deployment enable "staging-regression"
reapi deployment disable "staging-regression"
 
# Delete deployment
reapi deployment delete "staging-regression"

Deployment Execution

# Trigger deployment manually
reapi deployment trigger "staging-regression"
 
# Get deployment run history
reapi deployment history "staging-regression" --limit 10
 
# Get specific run details
reapi deployment run-details "run-id-12345"

Results and Reporting

Viewing Results

# Get latest test results
reapi results latest --runner smoke-tests
 
# Get results by run ID
reapi results show "run-id-12345"
 
# Get results with specific filters
reapi results list \
  --environment staging \
  --from "2024-01-01" \
  --to "2024-01-31" \
  --status failed
 
# Export results
reapi results export --format csv --output results.csv

Snapshots

# Create a snapshot from latest run
reapi snapshot create --runner regression --name "release-v2.1"
 
# List snapshots
reapi snapshot list
 
# Share snapshot (get shareable URL)
reapi snapshot share "snapshot-id-789"
 
# Delete snapshot
reapi snapshot delete "snapshot-id-789"

Advanced Features

Bulk Operations

# Run multiple runners in parallel
reapi run --runner smoke,integration,performance --parallel
 
# Run all runners with specific tag
reapi run --tags "nightly" --environment staging
 
# Run tests from file list
reapi run --from-file test-list.txt

Environment Variables and Substitution

# Use environment variables in commands
export TEST_ENV="staging"
export TEST_RUNNER="regression"
 
reapi run --runner "$TEST_RUNNER" --environment "$TEST_ENV"
 
# Pass variables to tests
reapi run --runner api-tests \
  --var "baseUrl=https://api-staging.example.com" \
  --var "timeout=30000" \
  --var "retries=3"

Conditional Execution

# Run only if previous command succeeded
reapi run --runner smoke && reapi run --runner regression
 
# Run with fallback
reapi run --runner integration || reapi run --runner smoke
 
# Complex conditional logic
if reapi run --runner health-check --quiet; then
  echo "Health check passed, running full suite"
  reapi run --runner full-regression
else
  echo "Health check failed, running diagnostics"
  reapi run --runner diagnostics
fi

Scripting and Automation

Bash Integration

#!/bin/bash
# comprehensive-test.sh
 
set -e  # Exit on any error
 
echo "🚀 Starting comprehensive test suite..."
 
# Run smoke tests first
echo "Running smoke tests..."
reapi run --runner smoke-tests --environment staging --quiet
echo "✅ Smoke tests passed"
 
# Run integration tests
echo "Running integration tests..."
reapi run --runner integration --environment staging --timeout 1800 --quiet
echo "✅ Integration tests passed"
 
# Run performance tests (only on main branch)
if [[ "$BRANCH_NAME" == "main" ]]; then
  echo "Running performance tests..."
  reapi run --runner performance --environment staging --timeout 3600 --quiet
  echo "✅ Performance tests passed"
fi
 
echo "🎉 All tests completed successfully!"

Python Integration

#!/usr/bin/env python3
# run_tests.py
 
import subprocess
import json
import sys
from datetime import datetime
 
def run_reapi_command(args):
    """Run ReAPI CLI command and return parsed JSON result"""
    cmd = ['reapi'] + args
    result = subprocess.run(cmd, capture_output=True, text=True)
 
    if result.returncode != 0:
        print(f"❌ Command failed: {' '.join(cmd)}")
        print(f"Error: {result.stderr}")
        return None
 
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return {"output": result.stdout}
 
def main():
    print("🚀 Running automated test suite...")
 
    # Define test configuration
    test_config = [
        {"runner": "smoke-tests", "timeout": 300},
        {"runner": "api-contracts", "timeout": 600},
        {"runner": "integration", "timeout": 1800}
    ]
 
    results = []
 
    for config in test_config:
        print(f"Running {config['runner']}...")
 
        result = run_reapi_command([
            'run',
            '--runner', config['runner'],
            '--environment', 'staging',
            '--timeout', str(config['timeout']),
            '--output', 'json'
        ])
 
        if result is None:
            print(f"❌ {config['runner']} failed")
            sys.exit(1)
 
        results.append({
            "runner": config['runner'],
            "timestamp": datetime.now().isoformat(),
            "result": result
        })
 
        print(f"✅ {config['runner']} completed")
 
    # Generate summary report
    with open('test-summary.json', 'w') as f:
        json.dump(results, f, indent=2)
 
    print("🎉 All tests completed successfully!")
    print("📊 Summary report saved to test-summary.json")
 
if __name__ == "__main__":
    main()

PowerShell Integration

# run-tests.ps1
 
param(
    [string]$Environment = "staging",
    [string[]]$Runners = @("smoke-tests", "regression"),
    [int]$Timeout = 1800
)
 
Write-Host "🚀 Starting ReAPI test execution..." -ForegroundColor Green
 
foreach ($runner in $Runners) {
    Write-Host "Running $runner tests..." -ForegroundColor Yellow
 
    $result = & reapi run --runner $runner --environment $Environment --timeout $Timeout --output json 2>&1
 
    if ($LASTEXITCODE -ne 0) {
        Write-Host "❌ $runner tests failed" -ForegroundColor Red
        Write-Host $result -ForegroundColor Red
        exit 1
    }
 
    Write-Host "✅ $runner tests passed" -ForegroundColor Green
}
 
Write-Host "🎉 All tests completed successfully!" -ForegroundColor Green

Troubleshooting

Common Issues

# Check CLI version and update
reapi --version
npm update -g @reapi/cli
 
# Verify authentication
reapi auth status
 
# Check workspace access
reapi list runners
 
# Debug network issues
reapi run --runner smoke --verbose --debug
 
# Check configuration
reapi config show

Error Handling

# Run with error details; pipefail preserves reapi's exit code through the pipe
set -o pipefail
reapi run --runner integration --verbose 2>&1 | tee test-output.log
status=$?
 
# Check specific error codes
case $status in
  0) echo "Tests passed" ;;
  1) echo "Tests failed" ;;
  2) echo "Configuration error" ;;
  3) echo "Authentication error" ;;
  *) echo "Unknown error" ;;
esac

Performance Optimization

# Use parallel execution when possible
reapi run --runner smoke,integration --parallel
 
# Cache authentication
reapi config set cache-auth true
 
# Use local configuration file
reapi config save .reapi-config.json

Best Practices

Command Organization

  • Use configuration files for repeated settings
  • Create wrapper scripts for complex workflows
  • Use meaningful names for deployments and snapshots
  • Implement proper error handling in scripts

Security

  • Store API keys in environment variables, not scripts (see the sketch after this list)
  • Use least-privilege API keys for different environments
  • Regularly rotate API keys
  • Avoid logging sensitive information
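
Rather than hard-coding the key, load it from your secret store when the job starts. A minimal sketch using AWS Secrets Manager, following the same retrieval pattern as the CI/CD example above (the secret id reapi/cli-api-key is hypothetical):

# Load the ReAPI API key into the environment at job start
export REAPI_API_KEY="$(aws secretsmanager get-secret-value \
  --secret-id reapi/cli-api-key \
  --query SecretString \
  --output text)"
 
# The CLI reads the key from the environment (see Authentication above)
reapi auth status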

Performance

  • Use appropriate timeouts for different test types
  • Leverage parallel execution when tests are independent
  • Cache authentication and configuration when possible
  • Monitor CLI performance and optimize bottlenecks

Integration

  • Use structured output formats (JSON, JUnit) for CI/CD (see the sketch after this list)
  • Implement proper exit codes for automation
  • Generate comprehensive reports for stakeholders
  • Integrate with monitoring and alerting systems
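
A minimal CI step sketch combining the first two points: it writes JUnit results for the CI system to archive and propagates the CLI's exit code so a failed run fails the pipeline (the wrapper name and runner/environment values are illustrative):

#!/bin/bash
# ci-test-step.sh
set -u
 
# Emit machine-readable results for the CI system to archive
reapi run --runner regression --environment staging --output junit > test-results.xml
status=$?
 
echo "JUnit results written to test-results.xml"
 
# Propagate the CLI's exit code so the pipeline reflects the test outcome
exit "$status"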

More CLI usage patterns and advanced examples coming soon…