Overview
The List LLM Nodes endpoint allows you to retrieve information about LLM nodes within a flow. LLM nodes serve as the final response generation layer in RAG pipelines, converting retrieved documents and context into coherent, contextually aware answers while providing detailed quality metrics and evaluation capabilities.

- Method: GET
- URL: https://{flow_name}.flows.graphorlm.com/llm
- Authentication: Required (API Token)
Authentication
All requests must include a valid API token in the Authorization header:
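```
Authorization: Bearer YOUR_API_TOKEN
```

Learn how to generate API tokens in the API Tokens guide.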
Request Format
Headers
| Header | Value | Required |
|--------|-------|----------|
| Authorization | `Bearer YOUR_API_TOKEN` | Yes |
Parameters
No query parameters are required for this endpoint.
Example Request
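A minimal request sketch in Python using the requests library; the flow name `my-flow` and the token are placeholders:

```python
import requests

# "my-flow" is a placeholder; substitute your flow's name
url = "https://my-flow.flows.graphorlm.com/llm"
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

response = requests.get(url, headers=headers)
response.raise_for_status()
print(response.json())
```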
Response Format
Success Response (200 OK)
The response contains an array of LLM node objects:
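An illustrative response body; the node id and all values below are hypothetical, but the shape follows the field tables in this section:

```json
[
  {
    "id": "llm-1748287628685",
    "type": "llm",
    "position": { "x": 500, "y": 200 },
    "style": { "height": 150, "width": 250 },
    "data": {
      "name": "LLM",
      "config": {
        "model": "gpt-4o",
        "promptId": "prompt-123",
        "temperature": 0.7
      },
      "result": {
        "updated": true,
        "processing": false,
        "waiting": false,
        "has_error": false,
        "updatedMetrics": true,
        "total_responses": 42,
        "avg_response_length": 512.4,
        "avg_processing_time": 2.3,
        "streaming_enabled": true,
        "multimodal_support": false
      }
    }
  }
]
```

Response Structure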
Each LLM node in the array contains:

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique identifier for the LLM node |
| `type` | string | Node type (always `llm` for LLM nodes) |
| `position` | object | Position coordinates in the flow canvas |
| `style` | object | Visual styling properties (height, width) |
| `data` | object | LLM node configuration and results |
Position Object
| Field | Type | Description |
|-------|------|-------------|
| `x` | number | X coordinate position in the flow canvas |
| `y` | number | Y coordinate position in the flow canvas |
Style Object
| Field | Type | Description |
|-------|------|-------------|
| `height` | integer | Height of the node in pixels |
| `width` | integer | Width of the node in pixels |
Data Object
| Field | Type | Description |
|-------|------|-------------|
| `name` | string | Display name of the LLM node |
| `config` | object | Node configuration including model settings |
| `result` | object | Processing results and performance metrics (optional) |
Config Object
| Field | Type | Description |
|-------|------|-------------|
| `model` | string | LLM model identifier (e.g., `gpt-4o`, `claude-3-sonnet`, `llama-3-70b`) |
| `promptId` | string | ID of the prompt template used for response generation |
| `temperature` | number | Creativity control parameter (0.0-2.0) for response randomness |
Result Object (Optional)
| Field | Type | Description |
|-------|------|-------------|
| `updated` | boolean | Whether the node has been processed with the current configuration |
| `processing` | boolean | Whether the node is currently generating responses |
| `waiting` | boolean | Whether the node is waiting for input dependencies |
| `has_error` | boolean | Whether the node encountered errors during processing |
| `updatedMetrics` | boolean | Whether quality evaluation metrics have been calculated |
| `total_responses` | integer | Total number of responses generated (if available) |
| `avg_response_length` | number | Average character length of generated responses |
| `avg_processing_time` | number | Average response generation time in seconds |
| `streaming_enabled` | boolean | Whether real-time streaming responses are supported |
| `multimodal_support` | boolean | Whether the node supports image and multimedia processing |
Code Examples
Python
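A minimal sketch using the requests library; the helper name, error handling, and the placeholder flow name and token are assumptions:

```python
import requests

def list_llm_nodes(flow_name: str, api_token: str) -> list:
    """Retrieve all LLM nodes for a flow via the List LLM Nodes endpoint."""
    url = f"https://{flow_name}.flows.graphorlm.com/llm"
    headers = {"Authorization": f"Bearer {api_token}"}
    response = requests.get(url, headers=headers)
    response.raise_for_status()  # surfaces 401/404/500 as exceptions
    return response.json()

# Placeholder flow name and token
for node in list_llm_nodes("my-flow", "YOUR_API_TOKEN"):
    config = node["data"].get("config", {})
    print(f"{node['data']['name']}: model={config.get('model')}, "
          f"temperature={config.get('temperature')}")
```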
Error Responses
Common Error Codes
| Status Code | Description | Example Response |
|-------------|-------------|------------------|
| 401 | Unauthorized - Invalid or missing API token | `{"detail": "Invalid authentication credentials"}` |
| 404 | Not Found - Flow not found | `{"detail": "Flow not found"}` |
| 500 | Internal Server Error - Server error | `{"detail": "Failed to retrieve LLM nodes"}` |
Error Response Format
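All errors are returned as a JSON object with a single detail field, as shown in the table above:

```json
{
  "detail": "Error message describing the issue"
}
```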
Example Error Responses
Invalid API Token
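Returned with status 401 when the API token is invalid or missing:

```json
{
  "detail": "Invalid authentication credentials"
}
```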
Flow Not Found
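Returned with status 404 when the flow name in the URL does not match an existing flow:

```json
{
  "detail": "Flow not found"
}
```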
Server Error
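Returned with status 500 when the server fails to retrieve the nodes:

```json
{
  "detail": "Failed to retrieve LLM nodes"
}
```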
Use Cases
LLM Node Management
Use this endpoint to:
- Response Quality Monitoring: Track LLM performance metrics and response generation quality
- Model Configuration Analysis: Review configured models, prompt templates, and temperature settings
- Performance Optimization: Analyze response times, lengths, and processing efficiency
- Capability Assessment: Identify nodes with streaming, multimodal, or advanced evaluation features
Integration Examples
LLM Performance Monitor
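A sketch that fetches the nodes and summarizes the optional result metrics; field handling follows the Result Object table, and the flow name and token are placeholders:

```python
import requests

def monitor_llm_performance(flow_name: str, api_token: str) -> None:
    """Print a one-line performance summary for each LLM node."""
    url = f"https://{flow_name}.flows.graphorlm.com/llm"
    headers = {"Authorization": f"Bearer {api_token}"}
    nodes = requests.get(url, headers=headers).json()

    for node in nodes:
        name = node["data"]["name"]
        result = node["data"].get("result") or {}
        if result.get("has_error"):
            print(f"{name}: error during processing")
        elif result.get("processing"):
            print(f"{name}: still generating responses")
        else:
            # Metrics are optional; missing values print as None
            print(f"{name}: responses={result.get('total_responses')}, "
                  f"avg_time_s={result.get('avg_processing_time')}, "
                  f"avg_length_chars={result.get('avg_response_length')}")

monitor_llm_performance("my-flow", "YOUR_API_TOKEN")
```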
Quality Metrics Validator
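A sketch that flags stale results, missing quality metrics, and out-of-range temperatures; the checks mirror the Config and Result Object tables, and the flow name and token are placeholders:

```python
import requests

def validate_llm_nodes(flow_name: str, api_token: str) -> list:
    """Return a list of configuration and metric issues across LLM nodes."""
    url = f"https://{flow_name}.flows.graphorlm.com/llm"
    headers = {"Authorization": f"Bearer {api_token}"}
    nodes = requests.get(url, headers=headers).json()

    issues = []
    for node in nodes:
        name = node["data"]["name"]
        config = node["data"].get("config", {})
        result = node["data"].get("result") or {}

        temperature = config.get("temperature")
        if temperature is not None and not 0.0 <= temperature <= 2.0:
            issues.append(f"{name}: temperature {temperature} outside 0.0-2.0")
        if not result.get("updated"):
            issues.append(f"{name}: results not current; re-run the flow")
        if not result.get("updatedMetrics"):
            issues.append(f"{name}: quality metrics not yet calculated")
    return issues

for issue in validate_llm_nodes("my-flow", "YOUR_API_TOKEN"):
    print(issue)
```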
Best Practices
Response Quality Management
- Model Selection: Choose appropriate models based on use case complexity and response quality requirements
- Temperature Tuning: Optimize temperature settings to balance creativity and consistency in responses
- Prompt Engineering: Use well-designed prompt templates for consistent, high-quality response generation
- Quality Metrics: Enable and regularly review evaluation metrics to ensure response quality standards
Performance Optimization
- Response Time Monitoring: Track processing times and optimize model configurations for efficiency
- Streaming Implementation: Use streaming responses for real-time user experiences with supported models
- Batch Processing: Implement efficient batching strategies for high-volume response generation
- Resource Management: Monitor memory usage and processing capacity for sustainable operation
Advanced Capabilities
- Multimodal Integration: Leverage multimodal support for rich content processing including images and multimedia
- Quality Evaluation: Implement comprehensive evaluation using contextual precision, recall, relevancy, and faithfulness metrics
- Error Handling: Implement robust retry mechanisms and fallback strategies for reliable operation
- Context Management: Optimize context window usage for maximum information utilization
Monitoring and Maintenance
- Performance Analytics: Regularly analyze response generation patterns and optimization opportunities
- Quality Assessment: Monitor evaluation metrics to identify areas for improvement
- Configuration Auditing: Regularly review and validate LLM node configurations
- Capability Tracking: Monitor the utilization of advanced features like streaming and multimodal processing
Troubleshooting
Flow Not Found Error
Solution: Verify that:
- The flow name in the URL is correct and matches exactly
- The flow exists in your project
- Your API token has access to the correct project
- The flow has been created and saved properly
Empty LLM Nodes Array
Solution: If no LLM nodes are returned:
- Verify the flow contains LLM response generation components
- Check that LLM nodes have been added to the flow
- Ensure the flow has been saved after adding LLM nodes
- Confirm you’re checking the correct flow
Model Configuration Issues
Solution: If LLM nodes have invalid model configurations:
- Verify the specified model is available and supported
- Check that prompt template IDs reference existing templates
- Ensure temperature values are within valid range (0.0-2.0)
- Validate model compatibility with streaming and multimodal features
Slow Response Generation
Solution: If LLM nodes show slow processing times:
- Check model configuration - some models are inherently slower
- Optimize prompt templates to reduce unnecessary complexity
- Consider lowering temperature for faster, more deterministic responses
- Monitor context window usage and optimize for efficiency
- Implement streaming for better perceived performance
Missing Quality Metrics
Solution: If evaluation metrics are not available:
- Ensure nodes have been processed with sufficient test queries
- Check the updatedMetrics flag in the result object to confirm metrics have been calculated
- Check that input nodes provide necessary context for evaluation
- Run evaluation explicitly if metrics haven’t been calculated
- Ensure proper input from retrieval/RAG nodes for context evaluation
Connection Issues
Solution: For connectivity problems:
- Check your internet connection
- Verify the flow URL is accessible
- Ensure your firewall allows HTTPS traffic to *.flows.graphorlm.com
- Try accessing the endpoint from a different network
Next Steps
After retrieving LLM node information, you might want to:

- Update LLM Configuration: Modify LLM node settings like model selection, prompt templates, and temperature parameters
- List Retrieval Nodes: View retrieval nodes that provide context input to LLM nodes
- Run Flow: Execute your flow with the configured LLM nodes for response generation
- Flow Overview: Learn about all available flow management endpoints