LangGraph is a graph-based agent framework from the LangChain team. Agents built with it can use any LangChain chat model, including ChatOpenAI, which supports custom OpenAI-compatible endpoints like dottxt. For dottxt, the key integration point is LangChain’s structured-output support: bind a schema to ChatOpenAI, and LangChain sends the corresponding structured output request to dottxt and parses the result back into a typed object.

Install

pip install langgraph langchain-openai pydantic

Configure

Create a ChatOpenAI instance pointed at dottxt:
import os
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="openai/gpt-oss-20b",
    base_url="https://api.dottxt.ai/v1",
    api_key=os.environ["DOTTXT_API_KEY"],
)

Structured output

Use with_structured_output() to bind a Pydantic model to the LLM. The result is a typed object:
from pydantic import BaseModel, ConfigDict, Field

class Sentiment(BaseModel):
    model_config = ConfigDict(extra="forbid")

    label: str = Field(description="positive, negative, or neutral")
    confidence: float = Field(ge=0.0, le=1.0)
    reasoning: str

structured_llm = llm.with_structured_output(
    Sentiment,
    method="json_schema",
)

result = structured_llm.invoke("Analyze the sentiment: 'This product is excellent!'")
print(result.label)       # "positive"
print(result.confidence)  # 0.95
LangChain builds the structured output request and parses the JSON response back into your Pydantic model. Under the hood, this still uses the same dottxt structured generation flow described in API Overview.
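You can inspect the JSON Schema that gets sent without making any network call: it is derived from the Pydantic model via `model_json_schema()`. Note how `ConfigDict(extra="forbid")` produces `additionalProperties: false` and the `ge`/`le` bounds on `confidence` become `minimum`/`maximum` (a local sketch, no dottxt call involved):

```python
from pydantic import BaseModel, ConfigDict, Field

class Sentiment(BaseModel):
    model_config = ConfigDict(extra="forbid")

    label: str = Field(description="positive, negative, or neutral")
    confidence: float = Field(ge=0.0, le=1.0)
    reasoning: str

# The schema LangChain derives and sends as the structured output request.
schema = Sentiment.model_json_schema()
print(schema["additionalProperties"])      # False
print(schema["properties"]["confidence"])  # contains 'minimum': 0.0 and 'maximum': 1.0
print(schema["required"])                  # ['label', 'confidence', 'reasoning']
```

Inspecting the schema this way is a quick sanity check before wiring the model into a graph.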

Using in a graph

Combine structured output with LangGraph’s StateGraph for multi-step workflows:
from typing_extensions import TypedDict
from typing import Literal
from pydantic import BaseModel, ConfigDict, Field
from langgraph.graph import StateGraph, START, END

class Classification(BaseModel):
    model_config = ConfigDict(extra="forbid")

    category: Literal["billing", "account", "bug", "feature"]
    priority: Literal["low", "medium", "high"]
    summary: str = Field(min_length=10, max_length=120)

class State(TypedDict):
    text: str
    result: Classification | None

structured_llm = llm.with_structured_output(
    Classification,
    method="json_schema",
)

def classify(state: State) -> dict:
    return {"result": structured_llm.invoke(
        f"Classify this support ticket: {state['text']}"
    )}

graph = StateGraph(State)
graph.add_node("classify", classify)
graph.add_edge(START, "classify")
graph.add_edge("classify", END)

app = graph.compile()
output = app.invoke({"text": "I can't log in to my account", "result": None})
print(output["result"].category)
print(output["result"].priority)
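Because the node’s return value is a validated Pydantic object, constraint violations surface as a `ValidationError` instead of silently bad data. A model-free sketch of the validation the `Classification` schema enforces:

```python
from typing import Literal
from pydantic import BaseModel, ConfigDict, Field, ValidationError

class Classification(BaseModel):
    model_config = ConfigDict(extra="forbid")

    category: Literal["billing", "account", "bug", "feature"]
    priority: Literal["low", "medium", "high"]
    summary: str = Field(min_length=10, max_length=120)

# A well-formed response parses into a typed object.
ok = Classification.model_validate_json(
    '{"category": "account", "priority": "high",'
    ' "summary": "User cannot log in to their account"}'
)
print(ok.category)  # account

# An out-of-vocabulary category is rejected.
try:
    Classification.model_validate_json(
        '{"category": "other", "priority": "high",'
        ' "summary": "Unrecognized ticket category"}'
    )
except ValidationError as exc:
    print(exc.error_count())  # 1
```

The same checks run automatically inside `with_structured_output`, so downstream graph nodes can rely on the constraints holding.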

Notes

  • Prefer method="json_schema" with dottxt so LangChain uses the structured output path explicitly.
  • Graph nodes are plain functions that receive the full state and return a partial dict of updates.
  • ConfigDict(extra="forbid") is useful when you want additionalProperties: false in the generated schema.
  • See the Pydantic authoring guide for how to write effective schemas.