fix: JSON-serialize ToolCallBlock.tool_kwargs in to_openai_message_dict#21389

Open
NIK-TIGER-BILL wants to merge 1 commit into run-llama:main from NIK-TIGER-BILL:fix-tool-kwargs-serialization-21378
Conversation

@NIK-TIGER-BILL

Fixes #21378

Problem

When using AgentWorkflow with mixed LLM providers (e.g., Anthropic orchestrator handing off to an OpenAI sub-agent), the OpenAI Chat Completions API returns a 400 BadRequestError because function.arguments was sent as a JSON object instead of a JSON string.

BadRequestError: Error code: 400 - {'error': {'message': "Invalid type for 'messages[3].tool_calls[0].function.arguments': expected a string, but got an object instead.", ...}}

Root Cause

In llama_index/llms/openai/utils.py, the to_openai_message_dict function placed block.tool_kwargs directly into "arguments" without JSON serialization. The OpenAI Chat Completions API expects function.arguments to be a JSON string.
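Concretely, the difference in payload shape can be sketched like this (the tool name and arguments are made up for illustration; only the type of `function.arguments` matters):

```python
import json

# What the buggy code sent: "arguments" as a dict, which the
# Chat Completions API rejects with a 400 BadRequestError.
bad = {"function": {"name": "get_weather",
                    "arguments": {"city": "Paris"}}}

# What the API expects: "arguments" as a JSON *string*.
good = {"function": {"name": "get_weather",
                     "arguments": json.dumps({"city": "Paris"})}}
```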

Fix

  • Serialize dict tool_kwargs to JSON string before assigning to function.arguments
  • String tool_kwargs are passed through unchanged (backward compatible)
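A minimal sketch of the serialization logic (the standalone helper name is hypothetical; in the actual PR the change lives inline in `to_openai_message_dict`):

```python
import json

def serialize_tool_kwargs(tool_kwargs):
    """Illustrative sketch: coerce tool_kwargs into the JSON string
    that the OpenAI Chat Completions API expects for function.arguments."""
    if isinstance(tool_kwargs, str):
        # Already a JSON string (e.g. produced by an OpenAI LLM):
        # pass through unchanged for backward compatibility.
        return tool_kwargs
    # A dict (e.g. produced by an Anthropic LLM): serialize it.
    return json.dumps(tool_kwargs)
```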

Changes

  • Modified to_openai_message_dict() in llama_index/llms/openai/utils.py
  • Added 2 tests in tests/test_openai_utils.py:
    • test_chat_completions_tool_kwargs_serialized_to_json_string() - verifies dict serialization
    • test_chat_completions_tool_kwargs_string_passthrough() - verifies string passthrough
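The contract both tests verify can be sketched as follows (names are illustrative, not the real suite; the real tests build a `ChatMessage` containing a `ToolCallBlock` and call `to_openai_message_dict`):

```python
import json

def check_arguments_contract(arguments):
    # Whatever the input was, the serialized "arguments" field must
    # arrive at the API as a string containing valid JSON.
    assert isinstance(arguments, str)
    json.loads(arguments)  # raises if not valid JSON

check_arguments_contract(json.dumps({"city": "Paris"}))  # dict case
check_arguments_contract('{"city": "Paris"}')            # string passthrough
```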

Testing

cd llama-index-integrations/llms/llama-index-llms-openai
pytest tests/test_openai_utils.py::test_chat_completions_tool_kwargs_serialized_to_json_string -v
pytest tests/test_openai_utils.py::test_chat_completions_tool_kwargs_string_passthrough -v

The OpenAI Chat Completions API expects 'function.arguments' to be a JSON
string, but ToolCallBlock.tool_kwargs can be a dict. This caused 400
BadRequestError when using mixed LLM providers (e.g., Anthropic
orchestrator handing off to an OpenAI sub-agent).

Changes:
- Serialize dict tool_kwargs to JSON string in to_openai_message_dict
- Add tests for both dict and string tool_kwargs scenarios

Fixes run-llama#21378

Signed-off-by: NIK-TIGER-BILL <nik.tiger.bill@github.com>
@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Apr 14, 2026


Development

Successfully merging this pull request may close these issues.

[Bug]: to_openai_message_dict doesn't JSON-serialize ToolCallBlock.tool_kwargs and breaks cross-provider agent workflows
