Guide to LLM node management and response generation in GraphorLM flows
Endpoints:

- `/{flow_name}/llm`: Retrieve all LLM nodes with configurations and metrics
- `/{flow_name}/llm/{node_id}`: Modify LLM node settings including model and temperature
- `/{flow_name}/prompts`: Access available prompt templates for LLM customization

Parameter | Type | Description |
---|---|---|
model | string | Language model for response generation |
promptId | string | Prompt template for instruction guidance |
temperature | float (0.0-2.0) | Sampling randomness; lower values produce more deterministic output |
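Combined into an update payload, these parameters might look like the sketch below. The helper function and the example prompt ID are illustrative assumptions, not part of the documented API; only the field names and the 0.0-2.0 temperature range come from the table above.

```python
def build_llm_node_update(model: str, prompt_id: str, temperature: float) -> dict:
    """Build the JSON payload for an LLM node update.

    Validates temperature against the documented 0.0-2.0 range
    before constructing the payload.
    """
    if not 0.0 <= temperature <= 2.0:
        raise ValueError(f"temperature must be in [0.0, 2.0], got {temperature}")
    return {
        "model": model,
        "promptId": prompt_id,   # ID of a template from /{flow_name}/prompts
        "temperature": temperature,
    }

# Hypothetical usage; "prompt-abc" stands in for a real prompt template ID.
payload = build_llm_node_update("gpt-4o-mini", "prompt-abc", 0.2)
```

Validating client-side keeps an out-of-range temperature from ever reaching the update endpoint.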
Model | Context Window | Best For | Expected Latency |
---|---|---|---|
gpt-4o | 128K tokens | High accuracy, complex reasoning | 2-4 seconds |
gpt-4o-mini | 128K tokens | Balanced quality and speed | 1-2 seconds |
gpt-4.1 | 128K tokens | Latest capabilities | 2-5 seconds |
gpt-4.1-mini | 128K tokens | Modern features, efficient | 1-2 seconds |
gpt-4.1-nano | 128K tokens | Resource optimization | 0.8-1.5 seconds |
gpt-3.5-turbo-0125 | 16K tokens | High-volume processing | 0.5-1 second |
mixtral-8x7b-32768 | 32K tokens | Real-time processing | 0.5-1 second |
llama-3.1-8b-instant | 8K tokens | Ultra-fast responses | 0.3-0.8 seconds |