
Error using Langgraph + Langchain -- Unknown message type exception #27052

Open
5 tasks done
Subham07 opened this issue Oct 2, 2024 · 5 comments
Labels: 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature


Subham07 commented Oct 2, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangGraph/LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangGraph/LangChain rather than my code.
  • I am sure this is better as an issue rather than a GitHub discussion, since this is a LangGraph bug and not a design question.

Example Code

# Imports
import operator
from typing import Annotated, TypedDict, Union, Sequence, List
from langchain.agents import create_tool_calling_agent
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    HumanMessagePromptTemplate
)
from langchain_openai import AzureChatOpenAI
from langchain_core.tools import tool
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage, AIMessage, SystemMessage
from langgraph.graph import START
from langgraph.prebuilt.tool_executor import ToolExecutor
from langgraph.graph import END, StateGraph
from langgraph.checkpoint.memory import MemorySaver

@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder for the actual implementation
    # Don't let the LLM know this though 😊
    return [
        "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
    ]


# define graph state
class AgentState(TypedDict):
    input: str
    chat_history: list[BaseMessage]
    output: Union[AgentAction, AgentFinish, None]
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]
    messages: Annotated[Sequence[BaseMessage], operator.add]

# Define nodes and conditional edges
# Define the function that determines whether to continue or not

def should_continue(state: AgentState):
    if isinstance(state["output"], AgentFinish):
        return "end"
    else:
        return "continue"

# define the function that executes the tool
def execute_tools(state):
    intermediate_steps = []
    for agent_action in state["output"]:
        output = tool_executor.invoke(agent_action)
        intermediate_steps.append((agent_action, str(output)))
    return {"intermediate_steps": intermediate_steps}

# Define the function that calls the model
def call_model(state):
    output = tool_calling_agent.invoke(state)
    # `output` is either a list of AgentActions or an AgentFinish; it
    # overwrites the previous value, since the `output` key has no reducer
    return {"output": output}


# set up the tool
tools = [search]
tool_executor = ToolExecutor(tools)

# Set up the model
model = AzureChatOpenAI(
    azure_endpoint="***",
    openai_api_version="***",
    deployment_name="GPT4omni",
    model_name="GPT4omni",
    openai_api_key="***",
    openai_api_type="azure" 
)
model = model.bind_tools(tools)

prompt = ChatPromptTemplate.from_messages(
                [
                    SystemMessage(content="You are a weather reporter, and using the search tool you find the weather of the places"),
                    MessagesPlaceholder(variable_name='chat_history', optional=True),
                    HumanMessagePromptTemplate.from_template("{input}"),
                    MessagesPlaceholder(variable_name='agent_scratchpad')
                ]
            )

tool_calling_agent = create_tool_calling_agent(model, tools, prompt)

# Define a new graph
workflow = StateGraph(AgentState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", execute_tools)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If `tools`, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END,
    },
)

# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("action", "agent")

# Set up memory
memory = MemorySaver()

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable

# We add in `interrupt_before=["action"]`
# This will add a breakpoint before the `action` node is called
app = workflow.compile(checkpointer=memory, interrupt_before=["action"])


# execution
thread = {"configurable": {"thread_id": "3"}}
response = app.invoke(input={"input": "search for the weather in sf now", "chat_history": []}, config=thread)

# The above will work and interrupt before the tool execution.
# Now to resume the execution

response = app.invoke(None, config=thread)

Error Message and Stack Trace (if applicable)

File ~\AppData\Local\miniconda3\envs\test_langgraph_env\Lib\site-packages\langchain_openai\chat_models\base.py:232, in _convert_message_to_dict(message)
    230     message_dict = {k: v for k, v in message_dict.items() if k in supported_props}
    231 else:
--> 232     raise TypeError(f"Got unknown type {message}")
    233 return message_dict

TypeError: Got unknown type content='' additional_kwargs={'tool_calls': [{'id': 'call_0GzVAj4KksFGqiYKk0F5uvZO', 'function': {'arguments': '{"query":"current weather in San Francisco"}', 'name': 'search'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 68, 'total_tokens': 85}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_67802d9a6d', 'finish_reason': 'tool_calls', 'logprobs': None, 'content_filter_results': {}} type='ai' id='run-78eb4727-e08d-40a1-9055-f18ae171162b-0' invalid_tool_calls=[] example=False tool_calls=[{'name': 'search', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_0GzVAj4KksFGqiYKk0F5uvZO', 'type': 'tool_call'}] usage_metadata={'input_tokens': 68, 'output_tokens': 17, 'total_tokens': 85}

Description

I am using LangGraph with a human-in-the-loop implementation.

I checked the state snapshot values, and the "message_log" attribute contains BaseMessage-typed data where it should contain AIMessage objects.

When I try to resume the graph execution, it raises the stack trace shown above.
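For context, here is a minimal sketch of why the OpenAI converter rejects the revived message. The classes below are hypothetical stand-ins, not the real langchain-core ones: the point is that the converter dispatches on the concrete message subclass, so a message that comes back from a checkpoint as a bare BaseMessage falls through to the TypeError.

```python
# Hypothetical, simplified stand-ins used only to illustrate the
# isinstance-based dispatch in _convert_message_to_dict.
class BaseMessage:
    def __init__(self, content: str, type: str = "base") -> None:
        self.content = content
        self.type = type

class AIMessage(BaseMessage):
    def __init__(self, content: str) -> None:
        super().__init__(content, type="ai")

def convert_message_to_dict(message: BaseMessage) -> dict:
    # Dispatch is on the subclass, not on the `type` attribute.
    if isinstance(message, AIMessage):
        return {"role": "assistant", "content": message.content}
    raise TypeError(f"Got unknown type {message}")

# A live AIMessage converts fine:
print(convert_message_to_dict(AIMessage("hi")))
# But a checkpoint round-trip that revives the message as a bare
# BaseMessage (even with type == "ai") hits the TypeError from the report:
# convert_message_to_dict(BaseMessage("hi", type="ai"))
```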

System Info

Below are the versions

System Information
------------------
> OS:  Windows
> OS Version:  10.0.22631
> Python Version:  3.11.10 | packaged by conda-forge | (main, Sep 22 2024, 14:00:36) [MSC v.1941 64 bit (AMD64)]

Package Information
-------------------
> langchain_core: 0.2.41
> langchain: 0.2.16
> langchain_community: 0.2.11
> langsmith: 0.1.129
> langchain_aws: 0.1.16
> langchain_cohere: 0.2.2
> langchain_elasticsearch: 0.2.2
> langchain_experimental: 0.0.64
> langchain_google_community: 1.0.6
> langchain_google_vertexai: 1.0.6
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.21
> langchain_text_splitters: 0.2.2
> langgraph: 0.2.4

Optional packages not installed
-------------------------------
> langserve

Other Dependencies
------------------
> aiohttp: 3.10.8
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: Installed. No version info available.
> beautifulsoup4: 4.12.3
> boto3: 1.34.142
> cohere: 5.10.0
> dataclasses-json: 0.6.7
> db-dtypes: Installed. No version info available.
> elasticsearch[vectorstore-mmr]: Installed. No version info available.
> gapic-google-longrunning: Installed. No version info available.
> google-api-core: 2.20.0
> google-api-python-client: 2.131.0
> google-auth-httplib2: 0.2.0
> google-auth-oauthlib: Installed. No version info available.
> google-cloud-aiplatform: 1.56.0
> google-cloud-bigquery: 3.26.0
> google-cloud-bigquery-storage: Installed. No version info available.
> google-cloud-contentwarehouse: Installed. No version info available.
> google-cloud-discoveryengine: Installed. No version info available.
> google-cloud-documentai: Installed. No version info available.
> google-cloud-documentai-toolbox: Installed. No version info available.
> google-cloud-speech: Installed. No version info available.
> google-cloud-storage: 2.18.2
> google-cloud-texttospeech: Installed. No version info available.
> google-cloud-translate: Installed. No version info available.
> google-cloud-vision: Installed. No version info available.
> googlemaps: Installed. No version info available.
> grpcio: 1.63.0
> httpx: 0.27.2
> huggingface-hub: 0.24.5
> jsonpatch: 1.33
> langgraph-checkpoint: 1.0.14
> numpy: 1.26.4
> openai: 1.40.3
> orjson: 3.10.7
> packaging: 23.2
> pandas: 2.2.2
> pyarrow: Installed. No version info available.
> pydantic: 1.10.14
> PyYAML: 6.0.2
> requests: 2.32.3
> sentence-transformers: 3.1.1
> SQLAlchemy: 2.0.32
> tabulate: 0.9.0
> tenacity: 8.3.0
> tiktoken: 0.7.0
> tokenizers: 0.19.1
> transformers: 4.44.0
> typing-extensions: 4.12.2

isahers1 (Collaborator) commented Oct 2, 2024

Hmm, this code ran fine for me. Can you try upgrading your packages to the latest versions? langchain/langchain_core should be at 0.3.1 and langchain_openai should be at 0.2.1. You can check PyPI for the latest versions of all langchain/langgraph-related packages.
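A sketch of the suggested upgrade (the version pins below are assumptions based on the versions mentioned above; check PyPI for the current releases):

```shell
pip install -U "langchain>=0.3.1" "langchain-core>=0.3.1" "langchain-openai>=0.2.1" langgraph
```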

Subham07 (Author) commented Oct 2, 2024

@isahers1 langchain_core at 0.3.1 will also require pydantic >=2, right? I am currently using pydantic 1.10.14, and upgrading to >=2 would require many more changes in my configs.
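For reference, this is the flavor of config change the v1-to-v2 migration involves. A minimal sketch with a hypothetical `Settings` model (requires pydantic >=2):

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class Settings(BaseModel):
    # pydantic v1 spelled this as:
    #     class Config:
    #         allow_mutation = False
    model_config = ConfigDict(frozen=True)  # pydantic v2 equivalent
    name: str = "app"

s = Settings()
try:
    s.name = "other"  # frozen model: attribute assignment is rejected
except ValidationError as e:
    print("mutation blocked:", e.errors()[0]["type"])
```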

gbaian10 (Contributor) commented Oct 2, 2024

I get the same error with a modified version of the code (comments removed, ChatOpenAI instead of AzureChatOpenAI).

import operator
from typing import Annotated, Literal, TypedDict

from dotenv import load_dotenv
from langchain.agents import create_tool_calling_agent
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import AnyMessage, SystemMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph, add_messages
from langgraph.prebuilt.tool_executor import ToolExecutor
from rich import get_console

load_dotenv()


@tool
def search(query: str) -> list[str]:
    """Call to surf the web."""
    return [
        "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
    ]


class AgentState(TypedDict):
    input: str
    chat_history: list[AnyMessage]
    output: AgentAction | AgentFinish | None
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]
    messages: Annotated[list[AnyMessage], add_messages]


def should_continue(state: AgentState) -> Literal["end", "continue"]:
    print("should_continue")
    get_console().print(state)  # message_log is AIMessage
    if isinstance(state["output"], AgentFinish):
        return "end"
    return "continue"


def execute_tools(state):
    print("execute_tools")
    get_console().print(state)  # message_log is BaseMessage
    intermediate_steps = []
    for agent_action in state["output"]:
        output = tool_executor.invoke(agent_action)
        intermediate_steps.append((agent_action, str(output)))
    return {"intermediate_steps": intermediate_steps}


def call_model(state):
    output = tool_calling_agent.invoke(state)
    return {"output": output}


tools = [search]
tool_executor = ToolExecutor(tools)
model = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessage(
            content="You are a weather reporter, and using the search tool you find the weather of the places"
        ),
        HumanMessagePromptTemplate.from_template("{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
tool_calling_agent = create_tool_calling_agent(model, tools, prompt)

workflow = StateGraph(AgentState)
workflow.add_node("agent", call_model)
workflow.add_node("action", execute_tools)
workflow.add_edge(START, "agent")
workflow.add_conditional_edges(
    "agent", should_continue, {"continue": "action", "end": END}
)
workflow.add_edge("action", "agent")
app = workflow.compile(checkpointer=MemorySaver(), interrupt_before=["action"])


thread_config = {"configurable": {"thread_id": "1"}}
app.invoke(input={"input": "search for the weather in sf now"}, config=thread_config)
print("get_state")
get_console().print(app.get_state(thread_config))  # message_log is BaseMessage
response = app.invoke(None, config=thread_config)

Package Versions

langchain-core==0.3.7
langgraph==0.2.32
langgraph-checkpoint==2.0.0
langchain-openai==0.2.1
pydantic==2.9.2

python-version==3.12.6 (windows)

vbarda (Contributor) commented Oct 2, 2024

This is an issue with the langgraph checkpoint serializer; investigating more.

vbarda (Contributor) commented Oct 2, 2024

This is an issue in langchain-core: https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/agents.py#L94 this needs to be AnyMessage instead of BaseMessage -- going to transfer
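The subclass loss described here can be sketched with plain pydantic v2 models (hypothetical stand-ins, not the real langchain-core classes): a field annotated with the base class revalidates a dumped message back into the base class, while a discriminated union in the style of AnyMessage restores the concrete subclass.

```python
from typing import Annotated, Literal, Union
from pydantic import BaseModel, Field

# Hypothetical stand-ins for BaseMessage / AIMessage / HumanMessage.
class Msg(BaseModel):
    type: str
    content: str

class AIMsg(Msg):
    type: Literal["ai"] = "ai"

class HumanMsg(Msg):
    type: Literal["human"] = "human"

# Discriminated union, analogous to langchain-core's AnyMessage.
AnyMsg = Annotated[Union[AIMsg, HumanMsg], Field(discriminator="type")]

class LogAsBase(BaseModel):
    message_log: list[Msg]      # like the current Sequence[BaseMessage]

class LogAsAny(BaseModel):
    message_log: list[AnyMsg]   # the proposed AnyMessage annotation

# A serialize/deserialize round-trip, as a checkpointer would do:
dumped = {"message_log": [AIMsg(content="hi").model_dump()]}
print(type(LogAsBase.model_validate(dumped).message_log[0]).__name__)  # Msg
print(type(LogAsAny.model_validate(dumped).message_log[0]).__name__)   # AIMsg
```

With the base-class annotation, the dict is revalidated into a plain `Msg` even though its `type` field says "ai"; the discriminator is what tells pydantic which concrete class to rebuild.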

@Subham07 in the meantime, if you're blocked by this, I can recommend a couple of solutions:

@vbarda vbarda transferred this issue from langchain-ai/langgraph Oct 2, 2024
@dosubot dosubot bot added the 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature label Oct 2, 2024