AutoGluon Assistant uses YAML configuration files to control its behavior. This tutorial explains the configuration system and how to customize it for your specific needs.
The configuration system is based on hierarchical YAML files that control:
- General execution settings
- LLM provider settings
- Agent behaviors and parameters
- Resource utilization
- Data handling preferences
A configuration file has this general structure:
```yaml
# General settings
per_execution_timeout: 86400
create_venv: false
# ... other general settings

# Default LLM Configuration
llm: &default_llm  # The anchor defines reusable settings
  provider: bedrock
  model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
  # ... other LLM settings

# Agent-specific configurations
coder:
  <<: *default_llm  # This merges all settings from default_llm
  multi_turn: True  # Override specific settings
```

The general settings at the top level control overall execution behavior:

| Parameter | Description | Default |
|---|---|---|
| `per_execution_timeout` | Maximum execution time (seconds) for code execution | 86400 |
| `create_venv` | Whether to install additional packages in created conda environment | false |
| `condense_tutorials` | Whether to use condensed tutorials | true |
| `use_tutorial_summary` | Whether to use tutorial summary as retrieval key | true |
| `continuous_improvement` | Continue iterations after finding a valid solution | false |
| `optimize_system_resources` | Optimize resource usage during execution | false |
| `cleanup_unused_env` | Remove unused environments after execution | true |
These settings control how input files, user input, and error messages are displayed and truncated:

| Parameter | Description | Default |
|---|---|---|
| `max_file_group_size_to_show` | Minimum number of similar files to show as a group | 5 |
| `num_example_files_to_show` | Number of example files to display for each type | 1 |
| `max_chars_per_file` | Maximum characters to display per file | 768 |
| `max_user_input_length` | Maximum length of user input to process | 2048 |
| `max_error_message_length` | Maximum length of error messages to include | 2048 |
These settings control tutorial retrieval:

| Parameter | Description | Default |
|---|---|---|
| `num_tutorial_retrievals` | Number of tutorial segments to retrieve | 30 |
| `max_num_tutorials` | Maximum number of tutorials to include | 5 |
| `max_tutorial_length` | Maximum length of all tutorial contents | 32768 |
These settings apply to the default `llm` section and to every agent that inherits from it:

| Parameter | Description | Default |
|---|---|---|
| `provider` | LLM provider to use (bedrock, openai, anthropic, sagemaker) | bedrock |
| `model` | Specific model name for the selected provider | |
| `max_tokens` | Maximum token limit for model responses | 65535 |
| `proxy_url` | Optional proxy URL for API requests | null |
| `temperature` | Controls randomness (0.0-1.0, lower = more deterministic) | 0.1 |
| `top_p` | Nucleus sampling parameter for token selection | 0.9 |
| `verbose` | Whether to log detailed information about LLM interactions | true |
| `multi_turn` | Whether to use multi-turn conversation with the LLM across iterations | false |
| `template` | Optional custom prompt template | null |
| `add_coding_format_instruction` | Add explicit coding format instructions | false |
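For example, a sketch of pointing the default LLM at a different provider; the model string is a placeholder you would replace with a model your account can actually access:

```yaml
llm: &default_llm
  provider: openai            # one of: bedrock, openai, anthropic, sagemaker
  model: "<your-model-name>"  # placeholder, not a verified model identifier
  temperature: 0.0            # more deterministic responses
  max_tokens: 65535
  verbose: true
```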
AutoGluon Assistant uses specialized agents for different tasks. Each inherits the default LLM settings but can have custom overrides:
The `coder` agent generates code based on requirements and context:

```yaml
coder:
  <<: *default_llm
  multi_turn: True  # Enable multi-turn conversation across iterations for iterative coding
```

The `executer` agent runs code and evaluates execution results:

```yaml
executer:
  <<: *default_llm
  max_stdout_length: 8192  # Maximum length of stdout to capture
  max_stderr_length: 2048  # Maximum length of stderr to capture
```

The `reader` agent analyzes and understands input data files:

```yaml
reader:
  <<: *default_llm
  details: False  # Whether to include detailed file information
```

The `task_descriptor` agent describes tasks based on input data:

```yaml
task_descriptor:
  <<: *default_llm
  max_description_files_length_to_show: 1024  # Max length to show
  max_description_files_length_for_summarization: 16384  # Max length for summarization
```

Additional agents:

- `error_analyzer`: Analyzes execution errors and suggests fixes
- `retriever`: Retrieves relevant tutorials
- `reranker`: Re-ranks and selects top retrieved tutorials
- `description_file_retriever`: Retrieves information from description files
- `tool_selector`: Selects the appropriate ML library based on requirements
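Any agent section can combine the merge key with its own overrides. The snippet below is a sketch: it assumes `temperature` is honored per-agent, which follows from the inheritance scheme above but is not shown explicitly in this tutorial:

```yaml
coder:
  <<: *default_llm   # inherit provider, model, and the rest of the defaults
  temperature: 0.0   # override just this agent for more deterministic code
  multi_turn: True
```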
You can create and use a custom configuration file as follows:

- Create a new YAML file, e.g., `my_custom_config.yaml`
- Run with your custom config:

```bash
mlzero -i <input_folder> -c my_custom_config.yaml
```
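A minimal custom config might look like the sketch below; it assumes that settings omitted here fall back to the package defaults, which depends on how AutoGluon Assistant merges configuration files:

```yaml
# my_custom_config.yaml (illustrative)
per_execution_timeout: 7200   # two-hour limit per code execution

llm: &default_llm
  provider: bedrock
  model: "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
  temperature: 0.1

coder:
  <<: *default_llm
  multi_turn: True
```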
- Start Simple: Begin with minimal customizations and add more as needed
- Test Incrementally: Test changes one at a time to understand their impact
- Inheritance Issues: If you modify settings in the `llm` section, you must also update the agent sections that reference it. The YAML anchor/alias merge (`<<: *default_llm`) is a one-time static reference, not a dynamic link: when you change the main `llm` config, agents won't automatically inherit those changes unless you explicitly update their references.
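Given this caveat, one defensive pattern is to restate the settings you care about directly in each agent section, so they hold regardless of how the anchor is resolved (a sketch with a placeholder model id):

```yaml
llm: &default_llm
  provider: bedrock
  model: "<new-model-id>"   # placeholder

coder:
  <<: *default_llm
  model: "<new-model-id>"   # restated explicitly so the coder agent picks up the change
```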
If you run into any issues:
- Check the API Reference for detailed documentation
- Browse the examples for common use cases (coming soon)
- Visit our GitHub repository for support