Simulate the execution of a specific node within a flow and retrieve the updated node states. This endpoint is useful for testing individual nodes during flow development or debugging flow execution without running the entire flow.

Overview

The Simulate Node endpoint allows you to execute a specific node in your flow and returns all nodes that were updated as a result. This includes the target node itself and any downstream nodes that were affected by the execution. This endpoint is essential for testing node configurations, debugging pipeline issues, and validating processing logic without running complete flows.
  • Method: POST
  • URL: https://{flow_name}.flows.graphorlm.com/nodes/simulate
  • Authentication: Required (API Token)

Authentication

All requests must include a valid API token in the Authorization header:
Authorization: Bearer YOUR_API_TOKEN
Learn how to generate API tokens in the API Tokens guide.
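
For example, a shared requests session can attach the token to every call. This is a minimal sketch; YOUR_API_TOKEN is a placeholder for a token generated in GraphorLM:

import requests

# Attach the API token to every request made through this session.
# YOUR_API_TOKEN is a placeholder; use a token generated in GraphorLM.
session = requests.Session()
session.headers.update({"Authorization": "Bearer YOUR_API_TOKEN"})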

Request Format

Headers

Header        | Value                 | Required
Authorization | Bearer YOUR_API_TOKEN | Yes
Content-Type  | application/json      | Yes

Request Body

The request body should be a JSON object with the following required field:
Field   | Type   | Required | Description
node_id | string | Yes      | The unique identifier of the node to simulate within the flow

Example Request

{
  "node_id": "chunking-1234567890"
}

Response Format

Success Response (200 OK)

{
  "success": true,
  "message": "Node 'chunking-1234567890' simulated successfully",
  "updated_nodes": [
    {
      "id": "chunking-1234567890",
      "type": "chunking",
      "data": {
        "config": {
          "embeddingModel": "text-embedding-3-small",
          "chunkingSplitter": "recursive",
          "chunkSize": 1000,
          "chunkOverlap": 200
        },
        "result": {
          "updated": true,
          "chunks_created": 45,
          "processing_time": 2.34
        }
      }
    },
    {
      "id": "retrieval-0987654321",
      "type": "retrieval",
      "data": {
        "config": {
          "searchType": "similarity",
          "topK": 10,
          "scoreThreshold": 0.7
        },
        "result": {
          "updated": false,
          "reason": "Downstream node marked for re-execution"
        }
      }
    }
  ]
}

Response Fields

Field         | Type    | Description
success       | boolean | Whether the node simulation was successful
message       | string  | Human-readable message about the simulation result
updated_nodes | array   | Array of node objects that were updated during the simulation

Node Object Structure

Each node object in the updated_nodes array contains:
Field | Type   | Description
id    | string | The unique identifier of the node
type  | string | The type of the node (e.g., "dataset", "chunking", "retrieval")
data  | object | The node's configuration and result data
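
For typed access to this structure in Python, a lightweight sketch is shown below. Field names follow the example response above; the exact contents of config and result vary by node type, so they are left open:

from typing import Any, Dict, List, TypedDict

class NodeData(TypedDict, total=False):
    config: Dict[str, Any]  # node configuration; shape depends on node type
    result: Dict[str, Any]  # execution result, e.g. {"updated": True, "chunks_created": 45}

class UpdatedNode(TypedDict):
    id: str    # unique node identifier, e.g. "chunking-1234567890"
    type: str  # node type, e.g. "dataset", "chunking", "retrieval"
    data: NodeData

class SimulateNodeResponse(TypedDict):
    success: bool
    message: str
    updated_nodes: List[UpdatedNode]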

Code Examples

JavaScript/Node.js

async function simulateNode(flowName, nodeId, apiToken) {
  const url = `https://${flowName}.flows.graphorlm.com/nodes/simulate`;
  
  const response = await fetch(url, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      node_id: nodeId
    })
  });

  if (!response.ok) {
    // Fall back to the HTTP status text if the error body is not valid JSON
    const errorData = await response.json().catch(() => ({ detail: response.statusText }));
    throw new Error(`Simulation failed: ${errorData.detail}`);
  }

  return await response.json();
}

// Usage
simulateNode('my-rag-flow', 'chunking-1234567890', 'YOUR_API_TOKEN')
  .then(result => {
    console.log('✅ Simulation successful!');
    console.log('Message:', result.message);
    console.log('Updated nodes:', result.updated_nodes.length);
    
    result.updated_nodes.forEach(node => {
      console.log(`📊 Node ${node.id} (${node.type}): ${node.data.result?.updated ? 'Updated' : 'Not updated'}`);
    });
  })
  .catch(error => {
    console.error('❌ Simulation failed:', error.message);
  });

Python

import requests

def simulate_node(flow_name, node_id, api_token):
    url = f"https://{flow_name}.flows.graphorlm.com/nodes/simulate"
    
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json"
    }
    
    payload = {
        "node_id": node_id
    }
    
    response = requests.post(url, headers=headers, json=payload)
    response.raise_for_status()
    
    return response.json()

# Usage
try:
    result = simulate_node("my-rag-flow", "chunking-1234567890", "YOUR_API_TOKEN")
    
    print("✅ Simulation successful!")
    print(f"Message: {result['message']}")
    print(f"Updated nodes: {len(result['updated_nodes'])}")
    
    for node in result['updated_nodes']:
        status = "Updated" if node['data'].get('result', {}).get('updated') else "Not updated"
        print(f"📊 Node {node['id']} ({node['type']}): {status}")
        
except requests.exceptions.RequestException as e:
    print(f"❌ Simulation failed: {e}")

cURL

curl -X POST "https://my-rag-flow.flows.graphorlm.com/nodes/simulate" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "node_id": "chunking-1234567890"
  }'

Error Responses

Common Error Codes

Status Code | Description           | Common Causes
400         | Bad Request           | Invalid node ID or malformed request
401         | Unauthorized          | Invalid or missing API token
404         | Not Found             | Flow not found or node not found in the specified flow
500         | Internal Server Error | Node simulation failed, processing error

Error Response Format

{
  "detail": "Error message describing what went wrong"
}

Example Error Responses

Flow Not Found

{
  "detail": "Flow with name 'my-rag-flow' not found"
}

Node Not Found

{
  "detail": "Node with id 'invalid-node-id' not found in flow 'my-rag-flow'"
}

Simulation Failed

{
  "detail": "Failed to simulate node: Processing error in node execution"
}

Invalid Request

{
  "detail": "node_id is required in request body"
}

Use Cases

The Simulate Node endpoint serves multiple critical purposes in flow development and maintenance:

Testing Node Configuration

Use this endpoint to test how configuration changes affect a specific node’s execution without running the entire flow.
// Test different chunking configurations
const testConfigurations = [
  { chunkSize: 500, chunkOverlap: 100 },
  { chunkSize: 1000, chunkOverlap: 200 },
  { chunkSize: 1500, chunkOverlap: 300 }
];

for (const config of testConfigurations) {
  // Update node configuration first
  await updateChunkingNode(flowName, nodeId, config);
  
  // Then simulate to see results
  const result = await simulateNode(flowName, nodeId, apiToken);
  // Look up the target node rather than assuming it is first in the list
  const target = result.updated_nodes.find(n => n.id === nodeId);
  console.log(`Config ${JSON.stringify(config)}: ${target?.data?.result?.chunks_created} chunks`);
}

Debugging Flow Issues

When a flow isn’t producing expected results, simulate individual nodes to identify where issues occur.
def debug_flow_nodes(flow_name, node_ids, api_token):
    """Debug multiple nodes in sequence to identify issues"""
    for node_id in node_ids:
        try:
            result = simulate_node(flow_name, node_id, api_token)
            
            if result['success']:
                print(f"✅ Node {node_id}: Working correctly")
                
                # Check if node produced expected output
                target_node = next((n for n in result['updated_nodes'] if n['id'] == node_id), None)
                if target_node and target_node['data'].get('result', {}).get('updated'):
                    print(f"   📊 Node executed and updated successfully")
                else:
                    print(f"   ⚠️  Node didn't update as expected")
            else:
                print(f"❌ Node {node_id}: Failed simulation")
                
        except Exception as e:
            print(f"❌ Node {node_id}: Error - {e}")

# Debug pipeline step by step
debug_flow_nodes(
    "my-rag-flow", 
    ["dataset-123", "chunking-456", "retrieval-789", "llm-012"],
    "YOUR_API_TOKEN"
)

Development and Iteration

During flow development, test nodes individually as you build and refine your data processing pipeline.
// Development workflow: simulate after each node configuration
async function developmentWorkflow(flowName, apiToken) {
  const nodes = [
    { id: 'dataset-123', type: 'dataset' },
    { id: 'chunking-456', type: 'chunking' },
    { id: 'retrieval-789', type: 'retrieval' },
    { id: 'llm-012', type: 'llm' }
  ];
  
  for (const node of nodes) {
    console.log(`\n🔧 Testing ${node.type} node: ${node.id}`);
    
    try {
      const result = await simulateNode(flowName, node.id, apiToken);
      
      if (result.success) {
        console.log(`✅ ${node.type} node working correctly`);
        
        // Log performance metrics if available
        const targetNode = result.updated_nodes.find(n => n.id === node.id);
        if (targetNode?.data?.result?.processing_time) {
          console.log(`   ⏱️  Processing time: ${targetNode.data.result.processing_time}s`);
        }
      }
    } catch (error) {
      console.log(`❌ ${node.type} node failed: ${error.message}`);
      console.log("   🛑 Fix this node before continuing");
      break;
    }
  }
}

developmentWorkflow('my-rag-flow', 'YOUR_API_TOKEN');

Integration Examples

Node Performance Monitor

class NodeSimulationMonitor {
  constructor(flowName, apiToken) {
    this.flowName = flowName;
    this.apiToken = apiToken;
  }

  async simulateAndAnalyze(nodeId) {
    try {
      const startTime = Date.now();
      const result = await this.simulateNode(nodeId);
      const endTime = Date.now();
      
      const analysis = {
        nodeId,
        success: result.success,
        simulationTime: endTime - startTime,
        updatedNodesCount: result.updated_nodes.length,
        performanceMetrics: this.extractPerformanceMetrics(result.updated_nodes)
      };
      
      console.log('🔍 Node Simulation Analysis:');
      console.log(`  Node ID: ${analysis.nodeId}`);
      console.log(`  Success: ${analysis.success ? '✅' : '❌'}`);
      console.log(`  Simulation Time: ${analysis.simulationTime}ms`);
      console.log(`  Updated Nodes: ${analysis.updatedNodesCount}`);
      
      if (analysis.performanceMetrics.length > 0) {
        console.log('  📊 Performance Metrics:');
        analysis.performanceMetrics.forEach(metric => {
          console.log(`    ${metric.nodeId}: ${metric.processingTime}s`);
        });
      }
      
      return analysis;
    } catch (error) {
      console.error(`❌ Simulation failed for node ${nodeId}:`, error.message);
      throw error;
    }
  }

  async simulateNode(nodeId) {
    const response = await fetch(`https://${this.flowName}.flows.graphorlm.com/nodes/simulate`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiToken}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ node_id: nodeId })
    });

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }

    return await response.json();
  }

  extractPerformanceMetrics(nodes) {
    return nodes
      .filter(node => node.data?.result?.processing_time)
      .map(node => ({
        nodeId: node.id,
        nodeType: node.type,
        processingTime: node.data.result.processing_time
      }));
  }

  async batchSimulate(nodeIds) {
    const results = [];
    
    for (const nodeId of nodeIds) {
      try {
        const analysis = await this.simulateAndAnalyze(nodeId);
        results.push(analysis);
        
        // Add delay between simulations to avoid overwhelming the system
        await new Promise(resolve => setTimeout(resolve, 1000));
      } catch (error) {
        results.push({
          nodeId,
          success: false,
          error: error.message
        });
      }
    }
    
    this.printBatchReport(results);
    return results;
  }

  printBatchReport(results) {
    console.log('\n📋 Batch Simulation Report:');
    console.log('================================');
    
    const successful = results.filter(r => r.success);
    const failed = results.filter(r => !r.success);
    
    console.log(`Total simulations: ${results.length}`);
    console.log(`Successful: ${successful.length}`);
    console.log(`Failed: ${failed.length}`);
    
    if (successful.length > 0) {
      const avgSimulationTime = successful.reduce((sum, r) => sum + r.simulationTime, 0) / successful.length;
      console.log(`Average simulation time: ${Math.round(avgSimulationTime)}ms`);
    }
    
    if (failed.length > 0) {
      console.log('\n❌ Failed simulations:');
      failed.forEach(f => {
        console.log(`  ${f.nodeId}: ${f.error}`);
      });
    }
  }
}

// Usage
const monitor = new NodeSimulationMonitor('my-rag-pipeline', 'YOUR_API_TOKEN');

// Simulate single node
monitor.simulateAndAnalyze('chunking-123').catch(console.error);

// Simulate multiple nodes
monitor.batchSimulate([
  'dataset-456',
  'chunking-789', 
  'retrieval-012',
  'llm-345'
]).catch(console.error);

Best Practices

Performance Considerations

  • Resource Usage: Simulating nodes will consume computational resources similar to running the actual flow
  • Rate Limiting: Avoid rapid successive simulations to prevent overwhelming the system (a throttling sketch follows this list)
  • Performance Monitoring: Track simulation performance to optimize node configurations
  • Processing Time: Monitor node execution times for performance optimization
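
A minimal throttling and backoff sketch in Python. The exact rate limits are not documented here, so the interval and the retry on 429/5xx responses are assumptions; simulate_node is the helper defined earlier:

import time
import requests

def simulate_with_backoff(flow_name, node_id, api_token, max_retries=3, min_interval=1.0):
    """Call simulate_node with a minimum spacing and exponential backoff on transient errors."""
    for attempt in range(max_retries):
        try:
            result = simulate_node(flow_name, node_id, api_token)
            time.sleep(min_interval)  # spacing between successive simulations
            return result
        except requests.exceptions.HTTPError as e:
            status = e.response.status_code if e.response is not None else None
            # Retry only on rate limiting or transient server errors (assumed behavior).
            if status in (429, 500, 502, 503) and attempt < max_retries - 1:
                time.sleep(min_interval * (2 ** attempt))
                continue
            raise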

Development Guidelines

  • Test Early: Simulate nodes as soon as they’re configured
  • Incremental Testing: Test nodes in pipeline order to catch dependencies
  • Error Handling: Always implement proper error handling for simulation failures
  • Performance Tracking: Monitor execution times and resource usage during simulation
  • Configuration Validation: Verify node configurations before simulation to avoid errors

Security and Monitoring

  • API Token Management: Secure API token storage and rotation
  • Access Control: Limit simulation access to development environments when possible
  • Activity Logging: Log simulation activities for debugging and performance monitoring (a logging wrapper sketch follows this list)
  • Resource Management: Monitor simulation frequency and system resource usage
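
For activity logging, a small Python wrapper sketch is shown below. The logger name and the fields recorded are illustrative choices; simulate_node is the earlier helper:

import logging
import time

logger = logging.getLogger("graphorlm.simulation")

def simulate_node_logged(flow_name, node_id, api_token):
    """Wrap simulate_node with logging of outcome and duration (sketch)."""
    start = time.monotonic()
    try:
        result = simulate_node(flow_name, node_id, api_token)
        logger.info(
            "Simulated node %s in flow %s: success=%s updated_nodes=%d duration=%.2fs",
            node_id, flow_name, result.get("success"),
            len(result.get("updated_nodes", [])), time.monotonic() - start,
        )
        return result
    except Exception:
        logger.exception("Simulation of node %s in flow %s failed after %.2fs",
                         node_id, flow_name, time.monotonic() - start)
        raise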

Next Steps

After simulating nodes in your flow, you can refine node configurations based on the results and re-run simulations until the pipeline behaves as expected. The Node Simulation API provides essential testing and debugging capabilities for your RAG pipeline development. By enabling individual node testing, performance monitoring, and configuration validation, this endpoint helps ensure your flows operate efficiently and correctly before deployment to production environments.