Describe the Bug
`part_to_message_block()` in `anthropic_llm.py` handles the `"content"` key by iterating over it with a `for` loop, but does not first check `isinstance(response_data["content"], list)`. In Python, iterating over a string yields individual characters, so when `LoadSkillResourceTool` returns `{"skill_name": ..., "file_path": ..., "content": "<file text>"}` (where `"content"` is a plain string), Claude receives the file content character by character, with a newline between each character:
`"H\ne\nl\nl\no"` instead of `"Hello"`
Relationship to #4779
Issue #4779 fixed the case where no `"result"` or `"content"` key was present (arbitrary dicts now fall through to `json.dumps`). This bug is different: `"content"` IS present, so the fallback is never reached, and the existing `for item in response_data["content"]` loop runs on a string.
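A simplified sketch of the two code paths described above (a hypothetical standalone function, not the actual ADK source, which builds Anthropic content blocks rather than strings):

```python
import json

def buggy_dispatch(response_data: dict) -> str:
    # Hypothetical sketch of the current dispatch order, simplified to strings.
    if "content" in response_data and response_data["content"]:
        # Bug: a string "content" is iterated character by character here.
        return "\n".join(str(item) for item in response_data["content"])
    # #4779 fallback: dicts with no "content"/"result" key are serialized.
    return json.dumps(response_data)

print(buggy_dispatch({"status": "ok"}))  # fallback path: '{"status": "ok"}'
print(buggy_dispatch({"content": "Hi"}))  # buggy path: 'H\ni'
```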
Steps to Reproduce
- Create an agent using `AnthropicLlm` (any Claude model on Vertex AI)
- Add a `SkillToolset` with a skill that has a reference file
- Ask the agent to call `load_skill_resource` on that file
- The agent receives garbled content: each character on its own line
Or deterministically, without an LLM:
```python
from google.adk.models.anthropic_llm import part_to_message_block
from google.genai import types

part = types.Part.from_function_response(
    name="load_skill_resource",
    response={"skill_name": "my-skill", "file_path": "references/doc.md", "content": "Hello"},
)
block = part_to_message_block(part)
print(repr(block["content"]))  # 'H\ne\nl\nl\no'
```
Expected Behavior
`block["content"]` should be `"Hello"`.
Root Cause
In `anthropic_llm.py`, the `"content"` branch lacks a type guard:

```python
# Current (buggy)
if "content" in response_data and response_data["content"]:
  for item in response_data["content"]:  # iterates chars if content is a string
    ...

# Fix: guard with isinstance
if "content" in response_data and isinstance(response_data["content"], list) and response_data["content"]:
  for item in response_data["content"]:
    ...
```
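One possible shape of the complete fix, as a hypothetical standalone sketch (the real function builds Anthropic content blocks; simplified here to string output), where a plain-string `"content"` is passed through whole rather than iterated:

```python
import json

def format_content_sketch(response_data: dict) -> str:
    # Hypothetical sketch, not the actual ADK implementation.
    content = response_data.get("content")
    if isinstance(content, list) and content:
        # Iterate only when "content" is really a list of items.
        return "\n".join(str(item) for item in content)
    if isinstance(content, str) and content:
        # A plain string is passed through unchanged, not char by char.
        return content
    # #4779 fallback for arbitrary dicts.
    return json.dumps(response_data)

print(format_content_sketch({"content": "Hello"}))  # 'Hello'
print(format_content_sketch({"content": ["a", "b"]}))  # 'a\nb'
```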
`LoadSkillResourceTool.run_async` returns `{"content": <string>}` by design, so this affects any agent using skills with reference files and a Claude model.
Environment
- google-adk version: 1.30.0
- Model: Claude (Vertex AI via AnthropicLlm)