Build a Powerful Testing Framework
Learn how to combine ReAPI’s scripting features to build a maintainable, scalable testing framework that teams love. This guide uses a real-world scenario to demonstrate patterns that transform scattered test scripts into reusable, powerful components.
The Challenge: Testing a Multi-Tenant Analytics API
Imagine you’re testing MetricsHub, a SaaS analytics platform serving hundreds of clients. The API provides usage metrics, growth calculations, and trend analysis across different subscription tiers.
Testing Challenges
Multiple data patterns: Free, Pro, and Enterprise tenants have different metric structures and validation rules.
Unreadable timestamps: API returns Unix timestamps in milliseconds (1704067200000), making test reports and debugging painful.
Environment-specific behavior: Development needs ?debug=true&pretty=true for detailed responses, but production shouldn’t have these parameters.
Complex validation: Metrics must pass multiple business rules: growth calculations must be accurate, SLAs must be met, data structures must match tenant types.
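To make the growth rule concrete, here is a minimal standalone sketch (plain JavaScript, names and tolerance are illustrative, not ReAPI APIs) of the kind of check these tests need: recompute the growth percentage and compare it against the API's reported change within a small tolerance.

```javascript
// Minimal sketch: verify a reported growth percentage against a recomputation.
// calculateGrowth mirrors the business rule described above; 0.01 is an assumed tolerance.
function calculateGrowth(current, previous) {
  if (!previous) return 0; // avoid division by zero when there is no prior value
  return ((current - previous) / previous) * 100;
}

function growthIsAccurate(metric, tolerance = 0.01) {
  const expected = calculateGrowth(metric.value, metric.previousValue);
  return Math.abs(metric.change - expected) < tolerance;
}

const metric = { value: 1200, previousValue: 1000, change: 20 };
console.log(growthIsAccurate(metric)); // → true
```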
Why Visual Nodes Aren’t Enough
Visual nodes handle 95% of your testing needs, but this scenario requires:
- Reusable calculation logic across dozens of tests
- Business rule validation that QA teams can apply without coding
- Environment-aware behavior without duplicating tests
- Data transformation for human-readable reports
This is where ReAPI’s scripting patterns shine.
The Solution: Layered Scripting Architecture
Build your testing framework in four layers, each building on the previous:
┌─────────────────────────────────────────┐
│ Runtime Layer: Hooks │ ← Dynamic behavior
├─────────────────────────────────────────┤
│ Data Layer: Value Generators │ ← Test data creation
├─────────────────────────────────────────┤
│ Validation Layer: Custom Assertions │ ← Business rules
├─────────────────────────────────────────┤
│ Foundation Layer: Global Utilities │ ← Shared logic
└─────────────────────────────────────────┘

Each layer leverages the ones below it, creating a powerful, maintainable architecture.
Step 1: Build Your Foundation with Global Utilities
Pattern: Create a static class with the $$ prefix containing all shared logic and Zod schemas. Global scripts can use ReAPI’s built-in libraries (Zod v4.x, ky, lodash, faker) and load predefined libraries (DayJS, CryptoJS, etc.).
Loading External Libraries
Global scripts are managed in the ReAPI web application and can use both built-in and predefined external libraries. For our analytics testing, we’ll use:
Built-in Runtime Libraries (always available):
- Zod (z): Schema validation and type safety (v4.x)
- ky: Modern HTTP client for fetching remote data
- lodash (_): Data manipulation utilities
- faker: Realistic test data generation
Predefined Libraries (load on demand):
- DayJS: Date parsing and formatting
- CryptoJS: Hashing and cryptographic operations
The Code
class $$AnalyticsUtils {
/**
* Zod schemas for data validation
* Reusable across all scripts
*/
static schemas = {
// Metric structure validation
metric: z.object({
value: z.number(),
previousValue: z.number().optional(),
change: z.number(),
timestamp: z.number().positive(),
}),
// Tenant type validation
tenantType: z.enum(["free", "pro", "enterprise"]),
// Period validation
period: z.enum(["daily", "weekly", "monthly", "yearly"]),
// Generator options
generatorOptions: z
.object({
tenantType: z.enum(["free", "pro", "enterprise"]).default("free"),
period: z
.enum(["daily", "weekly", "monthly", "yearly"])
.default("monthly"),
includeHistory: z.boolean().default(false),
})
.optional(),
};
/**
* Convert Unix timestamp to human-readable format
* Uses DayJS for better date handling
*/
static formatTimestamp(ms) {
if (!ms) return "N/A";
return dayjs(ms).format("YYYY-MM-DD HH:mm:ss");
}
/**
* Format timestamp as relative time (e.g., "2 hours ago")
*/
static formatRelativeTime(ms) {
if (!ms) return "N/A";
return dayjs(ms).fromNow();
}
/**
* Calculate growth percentage between two values
*/
static calculateGrowth(current, previous) {
if (!previous || previous === 0) return 0;
return ((current - previous) / previous) * 100;
}
/**
* Check if response time meets SLA (< 500ms)
*/
static isWithinSLA(responseTimeMs) {
return responseTimeMs < 500;
}
/**
* Get expected metric ranges by tenant type
*/
static getMetricRanges(tenantType) {
const ranges = {
free: { users: [0, 100], apiCalls: [0, 1000] },
pro: { users: [100, 10000], apiCalls: [1000, 100000] },
enterprise: { users: [10000, 1000000], apiCalls: [100000, 10000000] },
};
return ranges[tenantType] || ranges.free;
}
/**
* Generate secure hash for data integrity validation
* Uses CryptoJS for cryptographic operations
*/
static generateDataHash(data) {
const stringified = typeof data === "string" ? data : JSON.stringify(data);
return CryptoJS.SHA256(stringified).toString();
}
/**
* Validate data integrity by comparing hashes
*/
static validateDataIntegrity(data, expectedHash) {
const actualHash = this.generateDataHash(data);
return actualHash === expectedHash;
}
/**
* Fetch test data from remote API
* Uses ky (built-in HTTP client) for secure, API-based data access
*/
static async fetchTestData(dataType, filters = {}) {
try {
const data = await ky
.get(`https://test-data-api.example.com/data/${dataType}`, {
searchParams: filters,
})
.json();
return data;
} catch (error) {
console.error(`Failed to fetch test data: ${error.message}`);
return null;
}
}
}

Why This Is Powerful
Single Source of Truth: Change formatTimestamp() once, and all scripts, hooks, generators, and assertions use the new format. No hunting through hundreds of test cases.
Zod Schema Library: Define validation schemas once in $$AnalyticsUtils.schemas, reuse across all generators and assertions. Type-safe data validation with clear error messages. No more scattered typeof checks or manual validation.
Available Everywhere: Use $$AnalyticsUtils in before-request hooks, after-response hooks, custom assertions, value generators, and script nodes without importing or loading anything.
Built-in Power Libraries: Zod (v4.x) for validation, ky for HTTP requests, DayJS for dates, CryptoJS for security, lodash for data manipulation—all built-in and ready to use. No setup required.
Remote Data Access: Use ky (built-in) to fetch test data from APIs. ReAPI recommends the API-first approach over direct database access for security and maintainability. Test data management feature coming soon.
Team Consistency: Everyone calculates growth the same way, formats dates identically, validates with the same Zod schemas, and uses the same hashing algorithms. No more “why do these two tests calculate differently?”
Discoverability: The $$ prefix makes utilities easy to find with autocomplete, and clearly distinguishes them from other functions.
Centralized Management: Managed through ReAPI’s web interface with version control. Update schemas, library usage, or business logic in one place—all scripts benefit immediately.
Other Tools Comparison
Postman: Must copy-paste utility functions across collections or maintain external libraries. No centralized management. Changes require updating multiple files.
ReAPI’s Advantage: Write once in the global script editor, instantly available everywhere.
Step 2: Create Smart Custom Assertions
Pattern: Build reusable validation logic that QA teams can use through the UI dropdown.
The Code
async function isValidMetric(metric) {
try {
// Structure validation using Zod
const result = $$AnalyticsUtils.schemas.metric.safeParse(metric);
if (!result.success) {
$addAssertionResult({
passed: false,
message: `Metric structure invalid: ${result.error.issues
.map((e) => `${e.path.join(".")}: ${e.message}`)
.join(", ")}`,
operator: "isValidMetric",
leftValue: metric,
rightValue: "valid metric structure",
});
return;
}
const validMetric = result.data;
// Business rule: timestamp must be recent (within 7 days)
const daysSince =
(Date.now() - validMetric.timestamp) / (1000 * 60 * 60 * 24);
const isRecent = daysSince <= 7;
// Business rule: growth calculation must be accurate
const expectedGrowth = $$AnalyticsUtils.calculateGrowth(
validMetric.value,
validMetric.previousValue
);
const growthMatches = Math.abs(validMetric.change - expectedGrowth) < 0.01;
const allValid = isRecent && growthMatches;
$addAssertionResult({
passed: allValid,
message: allValid
? `Metric is valid (growth: ${validMetric.change.toFixed(
2
)}%, age: ${daysSince.toFixed(1)} days)`
: `Metric failed validation - Recent: ${isRecent}, Growth accurate: ${growthMatches}`,
operator: "isValidMetric",
leftValue: validMetric,
rightValue: "valid business rules",
});
} catch (error) {
$addAssertionResult({
passed: false,
message: `Validation error: ${error.message}`,
operator: "isValidMetric",
leftValue: metric,
rightValue: "valid metric",
});
}
}

Why This Is Powerful
QA Self-Service: Once you write isValidMetric, QA engineers select it from the assertion dropdown in the UI. They validate complex business rules without writing a single line of code.
Zod-Powered Validation: Uses safeParse() for non-throwing validation with detailed error messages. Instead of generic “undefined is not a number”, you get “timestamp: Expected number, received undefined”. Perfect for debugging failed tests.
Business Logic Encapsulation: Complex validation logic (growth calculations, timestamp checks, Zod structure validation) is hidden from the test flow. Tests remain clean and readable.
Reusability at Scale: Instead of 50 test cases with duplicated validation logic, you have 50 test cases using one isValidMetric assertion.
Leverages Global Utils: The assertion uses $$AnalyticsUtils.schemas.metric for structure validation and $$AnalyticsUtils.calculateGrowth() for business rules, ensuring consistency with data generation.
Detailed Reporting: When tests fail, you get actionable messages like “timestamp: Expected number, received string” or “Growth accurate: false” instead of generic “assertion failed.”
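Those readable messages come from joining each Zod issue's path and message. A dependency-free sketch of that formatting step (hand-built issue objects stand in for a real Zod error here, so the shapes are illustrative) looks like this:

```javascript
// Sketch of the "path: message" formatting used in the isValidMetric assertion.
// The issue objects mimic the shape of Zod's error.issues; no real Zod call is made.
function formatIssues(issues) {
  return issues.map((e) => `${e.path.join(".")}: ${e.message}`).join(", ");
}

const issues = [
  { path: ["timestamp"], message: "Expected number, received string" },
  { path: ["metrics", "value"], message: "Required" },
];
console.log(formatIssues(issues));
// → "timestamp: Expected number, received string, metrics.value: Required"
```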
Real Impact
Before: 50 test cases each with 15 lines of inline validation code = 750 lines to maintain. API changes = update 50 places.
After: 50 test cases using one dropdown selection + one isValidMetric function = 20 lines to maintain. API changes = update one function.
Other Tools Comparison
Postman: Write pm.test() blocks in every request. Validation logic scattered across hundreds of requests. No UI integration for reusable assertions.
REST Assured: Java matchers are reusable but require compilation and deployment. No visual interface for non-coders.
ReAPI’s Advantage: Write once, QA uses from dropdown. Code and UI work together seamlessly.
Step 3: Build Smart Data Generators
Pattern: Create parameterized generators that produce realistic test data using your global utilities.
The Code
async function generateTenantMetrics(options) {
// Validate and parse options with Zod
// Provides defaults and type safety
const validOptions = $$AnalyticsUtils.schemas.generatorOptions.parse(options);
const { tenantType, period, includeHistory } = validOptions;
// Get tenant-appropriate value ranges
const ranges = $$AnalyticsUtils.getMetricRanges(tenantType);
// Generate base metrics
const currentValue = _.random(ranges.users[0], ranges.users[1]);
const previousValue = _.random(
Math.max(0, currentValue * 0.7),
currentValue * 1.3
);
// Calculate growth using the same logic as assertions
const growth = $$AnalyticsUtils.calculateGrowth(currentValue, previousValue);
const metrics = {
tenantId: faker.string.uuid(),
tenantType: tenantType,
period: period,
metrics: {
activeUsers: {
value: currentValue,
previousValue: previousValue,
change: growth,
timestamp: Date.now(),
},
apiCalls: {
value: _.random(ranges.apiCalls[0], ranges.apiCalls[1]),
change: _.random(-20, 50),
timestamp: Date.now(),
},
},
};
// Optionally include historical data
if (includeHistory) {
metrics.history = Array.from({ length: 6 }, (_, i) => ({
period: `month-${i + 1}`,
value: _.random(ranges.users[0], ranges.users[1]),
timestamp: Date.now() - (i + 1) * 30 * 24 * 60 * 60 * 1000,
}));
}
return metrics;
}
// Example: Generator that fetches real test data from API
async function fetchTestUser(userId) {
// Use global utility to fetch from test data API
const userData = await $$AnalyticsUtils.fetchTestData("users", {
id: userId,
});
if (!userData) {
// Fallback to generated data if API fails
return {
id: userId || faker.string.uuid(),
email: faker.internet.email(),
name: faker.person.fullName(),
};
}
return userData;
}Why This Is Powerful
Zod-Validated Parameters: Automatically validates and applies defaults to options using $$AnalyticsUtils.schemas.generatorOptions. Invalid parameters throw clear errors immediately. No more “undefined is not a function” runtime surprises.
Type-Aware Generation: Same generator creates realistic data for free, pro, and enterprise tenants with appropriate value ranges. No manual JSON construction.
Calculation Consistency: Uses $$AnalyticsUtils.calculateGrowth(), ensuring test data matches the same business logic your assertions validate.
Composable in Expressions: Call from anywhere using js:$gen.generateTenantMetrics({...}) including API request bodies, context variables, and even other scripts.
Remote Data Integration: Generators can fetch real test data from APIs using ky (built-in HTTP client via global utilities). This follows the API-first pattern ReAPI recommends - secure, maintainable, and production-like.
UI Configuration Available: Value generators can also be configured through ReAPI’s no-code UI - QA teams can select from dropdown and set parameters visually. However, the JS expression approach is more powerful for inline composition and complex scenarios.
Parameterized Flexibility: One generator, infinite variations: $gen.generateTenantMetrics({ tenantType: 'enterprise', includeHistory: true }). Zod ensures invalid combinations fail fast with helpful error messages.
Realistic Patterns: Uses faker for IDs, lodash for randomization, and can fetch real data from test data APIs, creating data that looks like production.
Real Impact
Before: Each test manually constructs 50+ lines of JSON with hardcoded values. Schema changes = update every test.
After: One line: js:$gen.generateTenantMetrics({ tenantType: 'pro' }). Schema changes = update one generator function.
Other Tools Comparison
Postman: Can create functions in pre-request scripts but they can’t be called from request body variables or environment variables directly. No UI for non-coders to use these functions.
ReAPI’s Advantage: Dual approach - developers call via js:$gen.functionName() anywhere (request bodies, headers, query params, context variables), while QA teams can configure the same generators through the UI dropdown. Best of both worlds.
Step 4: Add Dynamic Hooks with Configuration
Pattern: Create hooks that read $context.config to adapt behavior per environment without modifying tests.
Hook Flexibility: The Onion Model
Hooks in ReAPI follow a symmetric onion/middleware pattern across three levels:
- Test Flow Level: Global hooks for the entire test case
- Folder Level: Module/feature-specific hooks (hierarchical: root → parent → current)
- Node Level: Individual API request hooks
Each level supports two modes:
- Inline: Write hook code directly at that level
- Reference: Reference a reusable hook function by ID (defined in global scripts)
Execution Order: The Onion Pattern
Before Request (Outer → Inner):
┌─────────────────────────────────────────┐
│ Test Flow: Authentication, Debug │
│ ┌───────────────────────────────────┐ │
│ │ Folder: Encryption, Module Headers│ │
│ │ ┌─────────────────────────────┐ │ │
│ │ │ Node: Specific Parameters │ │ │
│ │ │ → API Call → │ │ │
│ │ └─────────────────────────────┘ │ │
│ └───────────────────────────────────┘ │
└─────────────────────────────────────────┘
After Response (Inner → Outer - REVERSED):
┌─────────────────────────────┐
│ Node: Extract Test Data │
└─────────────────────────────┘
┌───────────────────────────────────┐
│ Folder: Decrypt, Aggregate Data │
└───────────────────────────────────┘
┌─────────────────────────────────────────┐
│ Test Flow: Cleanup, Global Stats │
└─────────────────────────────────────────┘

Why this pattern?
- Before request: Build up the request from general to specific
- After response: Process response from specific to general (like unwrapping layers)
- Each layer can access and enhance what inner layers produced
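The execution order above can be sketched as ordinary function composition (plain JavaScript, independent of ReAPI's actual runtime): before-hooks run over the levels outer to inner, after-hooks run over the same levels in reverse.

```javascript
// Minimal onion/middleware sketch: before hooks outer→inner, after hooks inner→outer.
// The three "levels" are illustrative stand-ins for test flow / folder / node hooks.
async function runWithHooks(levels, callApi) {
  const ctx = { log: [] };
  for (const level of levels) await level.before?.(ctx); // outer → inner
  ctx.response = await callApi(ctx);
  for (const level of [...levels].reverse()) await level.after?.(ctx); // inner → outer
  return ctx;
}

const levels = ["flow", "folder", "node"].map((name) => ({
  before: (ctx) => ctx.log.push(`before:${name}`),
  after: (ctx) => ctx.log.push(`after:${name}`),
}));

runWithHooks(levels, async () => ({ status: 200 })).then((ctx) =>
  console.log(ctx.log.join(" → "))
);
// → "before:flow → before:folder → before:node → after:node → after:folder → after:flow"
```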
Real-World Example: Payment Processing API
Scenario: Testing a payment gateway with encryption, authentication, and compliance requirements.
Test Flow Level Hooks (Global - All APIs)
Purpose: Authentication, debug flags, request tracing that apply to the ENTIRE test case.
// beforeRequest: Add authentication for all APIs
async function beforeRequest() {
// Add auth token for the entire test flow
$request.headers["Authorization"] = `Bearer ${$context.config.authToken}`;
// Add debug params in dev environment (from $context.config)
if ($context.config.debugMode) {
$request.query.debug = "true";
$request.query.verbose = "true";
}
// Request tracing for the entire test case
if (!$context.traceId) {
$context.traceId = `test-${Date.now()}-${_.random(1000, 9999)}`;
}
$request.headers["X-Trace-ID"] = $context.traceId;
}
// afterResponse: Global statistics and cleanup
async function afterResponse() {
// Track global test statistics
$context.totalApiCalls = ($context.totalApiCalls || 0) + 1;
$context.totalResponseTime =
($context.totalResponseTime || 0) + $response.time;
// Store for final report
$context.apiCallLog = $context.apiCallLog || [];
$context.apiCallLog.push({
url: $request.path,
status: $response.status,
time: $response.time,
});
}

Folder Level Hooks (Module - All Payment APIs)
Purpose: Encryption/decryption, module-specific headers, feature compliance that apply to ALL tests in the “Payment” folder.
// beforeRequest: Encrypt sensitive payment data
async function beforeRequest() {
// Add payment-module-specific headers
$request.headers["X-Payment-API-Version"] = "v2";
$request.headers["X-Idempotency-Key"] = faker.string.uuid();
// Encrypt sensitive fields (credit card, SSN, etc.)
if ($request.body && $request.body.creditCard) {
$context.originalCreditCard = $request.body.creditCard;
$request.body.creditCard = $$PaymentUtils.encryptSensitiveData(
$request.body.creditCard
);
}
if ($request.body && $request.body.ssn) {
$context.originalSSN = $request.body.ssn;
$request.body.ssn = $$PaymentUtils.encryptSensitiveData($request.body.ssn);
}
}
// afterResponse: Decrypt response and aggregate payment data
async function afterResponse() {
// Decrypt sensitive fields in response
if ($response.data.encryptedCard) {
$response.data.decryptedCard = $$PaymentUtils.decryptSensitiveData(
$response.data.encryptedCard
);
}
// Aggregate payment statistics for this module
$context.paymentStats = $context.paymentStats || { total: 0, successful: 0 };
$context.paymentStats.total += 1;
if ($response.data.status === "approved") {
$context.paymentStats.successful += 1;
}
// PCI compliance: Remove decrypted data after validation
delete $response.data.encryptedCard;
}

Node Level Hooks (Specific API)
Purpose: One-off parameters, test-specific data, override defaults for THIS specific API call.
// beforeRequest (Inline): Add specific test data for "Process Refund" endpoint
async function beforeRequest() {
// This specific refund endpoint needs a reason code
$request.body.refundReason = "customer_request";
$request.body.refundInitiatedBy = "test_automation";
// Override timeout for this slow endpoint
$request.timeout = 60000;
// Add specific test metadata
$request.headers["X-Test-Scenario"] = "large-refund-test";
}
// afterResponse (Inline): Extract specific data for assertions
async function afterResponse() {
// Extract refund ID for next test step
$context.refundId = $response.data.refundId;
$context.originalTransactionId = $response.data.originalTransactionId;
// This specific endpoint returns encrypted audit trail - decrypt it
if ($response.data.encryptedAuditTrail) {
$response.data.auditTrail = $$PaymentUtils.decryptAuditTrail(
$response.data.encryptedAuditTrail
);
}
}

The Complete Flow in Action
Test Case: "Process Payment and Refund Flow"
1. Before Request Execution (Outer → Inner):
├─ Test Flow Hook: Add Bearer token, traceId, debug flags
├─ Folder Hook: Add payment headers, encrypt credit card
└─ Node Hook: Add refund reason, override timeout
→ API Call: POST /payments/refund
2. After Response Execution (Inner → Outer - REVERSED):
├─ Node Hook: Extract refundId, decrypt audit trail
├─ Folder Hook: Decrypt response, aggregate payment stats
└─ Test Flow Hook: Update global stats, log API call
Result:
- Authentication applied globally
- Encryption applied to all payment APIs
- Specific refund parameters added to one endpoint
- Response processed in reverse order (specific → general)

When to Use Each Level
| Level | Best For | Examples |
|---|---|---|
| Test Flow | Cross-cutting concerns for entire test | Authentication, debug flags, request tracing, global stats |
| Folder | Module/feature-specific logic | Encryption/decryption, module headers, feature authentication, data aggregation |
| Node | One-off customizations | Specific parameters, timeout overrides, test metadata, endpoint-specific data extraction |
Environment-Driven Configuration
Hooks become truly powerful when combined with environment variable groups. Instead of hardcoding values, define them in ReAPI’s environment management:
Setup (in ReAPI UI):
- Create environment variable groups: “Development”, “Staging”, “Production”
- Add config flags to each environment:
- debugMode: true/false
- tenantType: "free"/"pro"/"enterprise"
- enableSLATracking: true/false
- Any custom flags your hooks need
Usage (QA workflow):
- Select environment from dropdown: “Development”
- Run tests
- $context.config automatically contains all flags from that environment
- Hooks read $context.config.debugMode to adapt behavior
- Switch to “Production” environment—same tests, different behavior, zero code changes
Developer advantage:
- Write hooks once that work across all environments
- QA teams control environment selection without touching code
- Add new environments without modifying tests
- Version control environment configs alongside tests
Before Request Hook
async function beforeRequest() {
// Read configuration for environment-specific behavior
const debugMode = $context.config?.debugMode || false;
const tenantType = $context.config?.tenantType || "free";
// Add debug parameters in development
if (debugMode) {
$request.query = $request.query || {};
$request.query.pretty = "true";
$request.query.debug = "true";
}
// Add tenant-specific headers
$request.headers["X-Tenant-Type"] = tenantType;
// Track request timing for SLA validation
$context.requestStartTime = Date.now();
// Add request ID for debugging
$request.headers["X-Request-ID"] = `req_${Date.now()}_${_.random(
1000,
9999
)}`;
}

After Response Hook
async function afterResponse() {
// Calculate response time
const responseTime = Date.now() - $context.requestStartTime;
$context.lastResponseTime = responseTime;
// Check SLA using global utility
const metSLA = $$AnalyticsUtils.isWithinSLA(responseTime);
$context.lastResponseMetSLA = metSLA;
// Convert all timestamp fields to human-readable format
// Uses DayJS through $$AnalyticsUtils for consistent, readable dates
if ($response.data.metrics) {
Object.keys($response.data.metrics).forEach((key) => {
const metric = $response.data.metrics[key];
if (metric.timestamp) {
// Formatted: "2024-01-15 14:30:00"
metric.timestampFormatted = $$AnalyticsUtils.formatTimestamp(
metric.timestamp
);
// Relative: "2 hours ago"
metric.timestampRelative = $$AnalyticsUtils.formatRelativeTime(
metric.timestamp
);
}
});
}
// Generate data integrity hash for validation
if ($response.data.metrics) {
$context.responseDataHash = $$AnalyticsUtils.generateDataHash(
$response.data.metrics
);
}
// Store formatted response time for reports
$context.responseTimeFormatted = `${responseTime}ms (SLA: ${
metSLA ? "✓" : "✗"
})`;
}

Why This Is Powerful
Multi-Level Flexibility: Apply hooks at folder, test, or API node level. Reference reusable functions from global script or write inline for special cases. One hook function can be reused across hundreds of API calls.
Environment Adaptation: Select an environment from the UI dropdown (e.g., “Development” vs “Production”). The $context.config automatically contains values from that environment’s variable group. Hooks read these values to adapt behavior—same tests, zero code changes across environments.
Automatic Enhancement: Every API call gets timestamp formatting (using DayJS), data integrity hashing (using CryptoJS), SLA tracking, and request IDs without any per-test configuration.
Configuration-Driven: Define behavior flags in environment variable groups (in ReAPI UI). QA selects environment from dropdown—tests automatically adapt. Change behavior by updating environment config, not by editing code.
Layered Behavior: Folder hooks apply to all tests, test hooks override for specific scenarios, node hooks fine-tune individual requests. No code duplication, maximum flexibility.
Human-Readable Reports: Timestamps automatically converted from 1704067200000 to both absolute (2024-01-01 14:30:00) and relative (2 hours ago) formats. Response times show 342ms (SLA: ✓) instead of raw numbers.
Library-Powered: Hooks leverage global utilities that use professional libraries (DayJS, CryptoJS), giving you enterprise-grade date handling and security features automatically.
Reusability at Scale: Write beforeRequest once in global script → reference it at folder level → applies to 100+ API calls. Update once, all tests benefit.
Real Impact
Before: Each test manually adds debug params in dev, removes them for CI. 100 tests = 100 places to modify when switching environments.
After: QA selects “Development” or “Production” environment from UI dropdown. All tests automatically read $context.config and adapt behavior. Zero code changes, zero test modifications.
Other Tools Comparison
Postman: Collection-level scripts only. Can’t apply hooks at folder or request level independently. Environment variables are strings only—must parse booleans manually. Must duplicate code or use workarounds.
REST Assured: Java hooks require compilation. Can’t reference pre-written functions from UI - must write inline code every time. Environment config requires code changes and redeployment.
Newman: Can pass command-line flags but requires script modifications to read them. No built-in config object pattern. Environment switching requires CLI parameters or file swaps.
ReAPI’s Advantage:
- Multi-level hooks (test flow/folder/node) with symmetric onion pattern execution
- Environment variable groups managed in UI with native type support (boolean, number, string, object)
- $context.config automatically populated from selected environment—no parsing required
- Write once in global script → reference anywhere via UI dropdown
- QA teams select environments and apply hooks without coding
- Switch environments: one dropdown click, zero code changes
The Complete Picture: How It All Works Together
Test Execution Flow
1. Select Environment from UI Dropdown
↓
Example: "Development" environment selected
$context.config automatically populated from environment variable group:
{ debugMode: true, tenantType: 'enterprise', enableSLATracking: true }
2. Test Case: "Validate Enterprise Metrics Growth"
↓
Generate Data: js:$gen.generateTenantMetrics({ tenantType: 'enterprise' })
- Zod validates parameters: $$AnalyticsUtils.schemas.generatorOptions.parse()
- Uses: $$AnalyticsUtils.getMetricRanges() and .calculateGrowth()
↓
Before Request Hook Runs
- Reads $context.config.debugMode → adds ?debug=true&pretty=true
- Reads $context.config.tenantType → adds X-Tenant-Type header
- Records timestamp for SLA tracking
↓
API Call Executes
POST /api/metrics with generated data
↓
After Response Hook Runs
- Uses $$AnalyticsUtils.formatTimestamp() → converts timestamps
- Uses $$AnalyticsUtils.isWithinSLA() → validates response time
- Stores formatted data in context
↓
Custom Assertion Runs
isValidMetric(body.metrics.userGrowth)
- Zod validates structure: $$AnalyticsUtils.schemas.metric.safeParse()
- Uses $$AnalyticsUtils.calculateGrowth() → validates business rule
- Reports detailed pass/fail message with Zod error details
↓
Test Complete
Report shows human-readable timestamps and SLA status

Example Test Configuration
Environment Variable Group Setup (in ReAPI UI):
// "Development" Environment
{
debugMode: true,
tenantType: "enterprise",
enableSLATracking: true
}
// "Production" Environment
{
debugMode: false,
tenantType: "pro",
enableSLATracking: true
}

Test Execution (QA selects environment from dropdown):
// Individual Test (simplified by patterns)
{
name: "Validate Enterprise User Growth",
request: {
method: "POST",
url: "/api/metrics",
body: "js:$gen.generateTenantMetrics({ tenantType: $context.config.tenantType })"
},
assertions: [
{ operator: "isValidMetric", field: "body.metrics.activeUsers" }
]
}
// Hooks and global utils do all the heavy lifting
// - Debug params added automatically
// - Timestamps formatted automatically
// - SLA tracked automatically
// - Business rules validated through dropdown

The Power of Layered Architecture
Maintainability: Update $$AnalyticsUtils.calculateGrowth() or Zod schemas once → affects generators, assertions, and hooks across all 100+ tests.
Type Safety: Zod schemas in $$AnalyticsUtils.schemas provide runtime validation. Invalid data fails fast with clear error messages, not cryptic runtime errors hours into test execution.
Reusability: Hooks apply to every API call, assertions to every validation, generators to every test setup. Zod schemas reused across all components.
Flexibility: Toggle debugMode in config → all tests adapt. Add new tenant type → update one schema enum, not 50 tests.
Collaboration: Developers create utilities, Zod schemas, generators, and assertions. QA engineers use them through the UI without writing code.
Scalability: Adding test #101 is as easy as test #1. New team members leverage existing patterns and schemas immediately.
Real-World Benefits
Before These Patterns
Code Volume
- Each test: 30-40 lines of setup + validation + manual JSON
- 100 tests = 3,000+ lines of duplicated logic
- Validation scattered across all tests
Maintenance Burden
- API schema changes: Update 100+ test cases
- Business rule changes: Hunt through all validation code
- Environment switching: Manually edit test parameters
Developer Experience
- Timestamp debugging: Copy-paste Unix timestamps to online converter tools
- Date formatting: Inconsistent formats across tests (ISO? locale string? custom?)
- Data validation: Manual typeof checks scattered everywhere, cryptic error messages
- Calculation verification: Manual spreadsheet work
- Security operations: Reinvent hashing logic or skip it
- Onboarding: “Here are 100 test examples, figure out the patterns”
After These Patterns
Code Volume
- Each test: 5-10 lines using generators and assertions
- 100 tests = 500 lines (80% reduction)
- Validation centralized in custom assertions
Maintenance Burden
- API schema changes: Update one generator function
- Business rule changes: Update one assertion function
- Environment switching: Select different environment from UI dropdown (QA can do it, no code changes)
Developer Experience
- Timestamps: Automatically formatted with DayJS (2024-01-15 14:30:00 and 2 hours ago)
- Date formatting: Consistent DayJS format across all tests
- Data validation: Zod schemas with clear error messages (timestamp: Expected number, received string)
- Calculations: Validated by $$AnalyticsUtils
- Security operations: CryptoJS hashing built into utilities
- Onboarding: “Use these generators and assertions from the dropdown”
Team Impact
For Developers
- Build powerful utilities and abstractions once
- Focus on complex scenarios, not repetitive test code
- Create reusable components that multiply impact
For QA Engineers
- Use developer-created utilities through UI dropdowns
- Create comprehensive tests without coding
- Validate complex business rules with confidence
For the Organization
- 80% less code duplication
- 90% faster onboarding for new team members
- Consistent testing patterns across all projects
Best Practices
Do
✓ Start with global utils for any logic used 2+ times. Even simple formatters pay dividends.
✓ Use the $$ prefix for global classes. Makes them instantly recognizable and autocomplete-friendly.
✓ Define Zod schemas in global utilities. Create a central $$Utils.schemas object with all your validation schemas. Reuse them across generators, assertions, and hooks for consistency.
✓ Leverage built-in libraries. Zod (v4.x) for validation, ky for HTTP requests, lodash for data manipulation, faker for test data—all built-in, no setup required.
✓ Use Zod for parameter validation. In value generators, use schema.parse() or schema.safeParse() to validate inputs. Clear error messages help catch issues early.
✓ Use API-first for test data. Fetch test data via ky (built-in) from APIs, not direct database connections. This is more secure, maintainable, and follows microservices patterns. ReAPI’s test data management feature (coming soon) will make this even easier.
✓ Make assertions detailed. Use Zod’s error messages in assertion reports. Report what passed and what failed, with actual values.
✓ Define hooks as reusable functions in global scripts. Reference them at folder/test/node level instead of duplicating inline code. One function → hundreds of uses.
✓ Use folder-level hooks for suite-wide behavior (authentication, logging, SLA tracking). Test-level or node-level hooks for exceptions.
✓ Define config values in environment variable groups. Add flags like debugMode, slowMode, mockExternal to your environments. Hooks read $context.config to adapt behavior automatically when QA selects an environment.
✓ Document your utilities. Add JSDoc comments explaining parameters and return values. Include Zod schema references.
✓ Version control global scripts. Treat them as production code with proper reviews.
Don’t
✗ Don’t put business logic in test cases. Extract to global utils or custom assertions.
✗ Don’t hardcode environment values. Define them in environment variable groups—they’ll be available in $context.config when that environment is selected.
✗ Don’t duplicate calculations. If you’re copying code, it belongs in $$Utils.
✗ Don’t write the same inline hook repeatedly. If you need the same behavior in 5+ places, define it once in global script and reference it.
✗ Don’t access databases directly. Use APIs to fetch test data. Direct DB access creates security risks, tight coupling, and maintenance nightmares.
✗ Don’t skip error handling in hooks. A broken hook affects all tests.
✗ Don’t make generators too complex. Keep them focused on data generation, delegate logic to utils.
Next Steps
Now that you understand the patterns, dive deeper into each component:
Foundation
- Global Scripts - Writing and managing global utilities
Components
- Custom Assertions - Creating reusable validation logic
- Value Generators - Building data generation functions
- Before Request Hooks - Pre-request dynamic behavior
- After Response Hooks - Post-response processing
Integration
- Template Engine & Expressions - Using the js:$gen.functionName() syntax
Start with global utilities, add custom assertions as you find repeated validations, and let the patterns grow naturally with your test suite.