AI Agents

Chat Session Modeling

Chat sessions in AI agents can be modeled at different layers of your architecture. The choice affects state ownership and how you handle interruptions and reconnections.

While there are many ways to model chat sessions, the two most common categories are single-turn and multi-turn.

Single-Turn Workflows

Each user message triggers a new workflow run. The client or API route owns the conversation history and sends the full message array with each request.

workflows/chat/index.ts
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import { flightBookingTools, FLIGHT_ASSISTANT_PROMPT } from "./steps/tools";
import { convertToModelMessages, type UIMessage, type UIMessageChunk } from "ai";

export async function chat(messages: UIMessage[]) {
  "use workflow";

  const writable = getWritable<UIMessageChunk>();

  const agent = new DurableAgent({
    model: "bedrock/claude-haiku-4-5-20251001-v1",
    system: FLIGHT_ASSISTANT_PROMPT,
    tools: flightBookingTools,
  });

  await agent.stream({
    messages: convertToModelMessages(messages), // Full history from client
    writable,
  });
}
app/api/chat/route.ts
import { createUIMessageStreamResponse, type UIMessage } from "ai";
import { start } from "workflow/api";
import { chat } from "@/workflows/chat";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const run = await start(chat, [messages]); 

  return createUIMessageStreamResponse({
    stream: run.readable,
    headers: {
      "x-workflow-run-id": run.runId, // For stream reconnection
    },
  });
}

Chat messages need to be stored somewhere—typically a database. In this example, we assume a route like /chats/:id passes the session ID, allowing us to fetch existing messages and persist new ones.
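For reference, a minimal sketch of such a messages route might look like the following. The loadMessages and saveMessages helpers are placeholders for your own persistence layer; neither they nor this file are part of the examples above.

app/api/chats/[id]/messages/route.ts
import type { UIMessage } from "ai";
// Hypothetical persistence helpers backed by your database
import { loadMessages, saveMessages } from "@/lib/chat-store";

export async function GET(
  _req: Request,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id } = await params;
  // Return the stored conversation history for this chat session
  const messages: UIMessage[] = await loadMessages(id);
  return Response.json(messages);
}

export async function PUT(
  req: Request,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id } = await params;
  const { messages }: { messages: UIMessage[] } = await req.json();
  // Overwrite the stored history with the latest messages from the client
  await saveMessages(id, messages);
  return Response.json({ success: true });
}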

app/chats/[id]/page.tsx
"use client";

import type { UIMessage } from "ai";
import { useChat } from "@ai-sdk/react";
import { WorkflowChatTransport } from "@workflow/ai";
import { useParams } from "next/navigation";
import { useEffect, useMemo, useRef } from "react";

// Fetch existing messages from your backend
async function getMessages(sessionId: string) { 
  const res = await fetch(`/api/chats/${sessionId}/messages`); 
  return res.json(); 
} 

export function Chat({ initialMessages }: { initialMessages: UIMessage[] }) {
  const { id: sessionId } = useParams<{ id: string }>();

  // Keep the latest messages in a ref so onChatEnd can read them without
  // recreating the transport on every render
  const messagesRef = useRef<UIMessage[]>(initialMessages);

  const transport = useMemo(
    () =>
      new WorkflowChatTransport({
        api: "/api/chat",
        onChatEnd: async () => {
          // Persist the updated messages to the chat session
          await fetch(`/api/chats/${sessionId}/messages`, {
            method: "PUT",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ messages: messagesRef.current }),
          });
        },
      }),
    [sessionId]
  );

  const { messages, input, handleInputChange, handleSubmit } = useChat({
    initialMessages, // Loaded via getMessages(sessionId)
    transport,
  });

  useEffect(() => {
    messagesRef.current = messages;
  }, [messages]);

  return (
    <form onSubmit={handleSubmit}>
      {/* ... render messages ... */}
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}

In this pattern, the client owns conversation state: the latest turn is managed by the AI SDK's useChat, and past turns are persisted to the backend. The current turn is either recovered through the workflow's resumable stream (see Resumable Streams), or persisted incrementally by hooking into useChat as new messages arrive.

This is the pattern used in the Building Durable AI Agents guide.

Multi-Turn Workflows

A single workflow handles the entire conversation session across multiple turns and owns the conversation state. Clients and API routes inject new messages via hooks, and the workflow run ID serves as the session identifier.

A key challenge in multi-turn workflows is ensuring user messages appear in the correct order when replaying the stream (e.g., after a page refresh). Since the stream primarily contains AI responses, user messages must be explicitly marked in the stream so the client can reconstruct the full conversation.

workflows/chat/index.ts
import {
  convertToModelMessages,
  type UIMessageChunk,
  type UIMessage,
  type ModelMessage,
} from "ai";
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable, getWorkflowMetadata } from "workflow";
import { chatMessageHook } from "./hooks/chat-message";
import { flightBookingTools, FLIGHT_ASSISTANT_PROMPT } from "./steps/tools";
import { writeUserMessageMarker, writeStreamClose } from "./steps/writer"; 

export async function chat(initialMessages: UIMessage[]) {
  "use workflow";

  const { workflowRunId: runId } = getWorkflowMetadata();
  const writable = getWritable<UIMessageChunk>();
  const messages: ModelMessage[] = convertToModelMessages(initialMessages);

  // Write markers for initial user messages (for replay)
  for (const msg of initialMessages) { 
    if (msg.role === "user") { 
      const text = msg.parts.filter((p) => p.type === "text").map((p) => p.text).join(""); 
      if (text) await writeUserMessageMarker(writable, text, msg.id); 
    } 
  } 

  const agent = new DurableAgent({
    model: "bedrock/claude-haiku-4-5-20251001-v1",
    system: FLIGHT_ASSISTANT_PROMPT,
    tools: flightBookingTools,
  });

  // Use run ID as the hook token for easy resumption
  const hook = chatMessageHook.create({ token: runId });
  let turnNumber = 0;

  while (true) {
    turnNumber++;
    const result = await agent.stream({
      messages,
      writable,
      preventClose: true, // Keep stream open for follow-ups
      sendStart: turnNumber === 1,
      sendFinish: false,
    });
    messages.push(...result.messages.slice(messages.length));

    // Wait for next user message via hook
    const { message: followUp } = await hook;
    if (followUp === "/done") break;

    // Write marker and add to messages
    const followUpId = `user-${runId}-${turnNumber}`; 
    await writeUserMessageMarker(writable, followUp, followUpId); 
    messages.push({ role: "user", content: followUp });
  }

  await writeStreamClose(writable); 
  return { messages };
}

The writeUserMessageMarker helper writes a data-workflow chunk to mark user turns:

workflows/chat/steps/writer.ts
import type { UIMessageChunk } from "ai";

export async function writeUserMessageMarker( 
  writable: WritableStream<UIMessageChunk>,
  content: string,
  messageId: string
) {
  "use step"; 
  const writer = writable.getWriter();
  try {
    await writer.write({
      type: "data-workflow", 
      data: { type: "user-message", id: messageId, content, timestamp: Date.now() }, 
    } as UIMessageChunk);
  } finally {
    writer.releaseLock();
  }
}

export async function writeStreamClose(writable: WritableStream<UIMessageChunk>) {
  "use step";
  const writer = writable.getWriter();
  await writer.write({ type: "finish" });
  await writer.close();
}

The multi-turn pattern needs three endpoints: one to start a session, one to send follow-up messages, and one to reconnect to the stream.

app/api/chat/route.ts
import { createUIMessageStreamResponse, type UIMessage } from "ai";
import { start } from "workflow/api";
import { chat } from "@/workflows/chat";

export async function POST(req: Request) {
  const { initialMessage }: { initialMessage: UIMessage } = await req.json();

  const run = await start(chat, [[initialMessage]]); 

  return createUIMessageStreamResponse({
    stream: run.readable,
    headers: {
      "x-workflow-run-id": run.runId, // For follow-ups and reconnection
    },
  });
}
app/api/chat/[id]/route.ts
import { chatMessageHook } from "@/workflows/chat/hooks/chat-message";

export async function POST(
  req: Request,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id: runId } = await params;
  const { message } = await req.json();

  // Resume the hook using the workflow run ID
  await chatMessageHook.resume(runId, { message }); 

  return Response.json({ success: true });
}
app/api/chat/[id]/stream/route.ts
import { createUIMessageStreamResponse } from "ai";
import { getRun } from "workflow/api";

export async function GET(
  request: Request,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id } = await params;
  const { searchParams } = new URL(request.url);
  const startIndex = searchParams.get("startIndex");

  const run = getRun(id); 
  const stream = run.getReadable({ 
    startIndex: startIndex ? parseInt(startIndex, 10) : undefined, 
  }); 

  return createUIMessageStreamResponse({ stream });
}
workflows/chat/hooks/chat-message.ts
import { defineHook } from "workflow";
import { z } from "zod";

export const chatMessageHook = defineHook({
  schema: z.object({
    message: z.string(),
  }),
});

A custom hook wraps useChat to manage the multi-turn session. It handles:

  • Routing between the initial message endpoint and follow-up endpoint
  • Reconstructing user messages from stream markers for correct ordering on replay
hooks/use-multi-turn-chat.ts
"use client";

import type { UIMessage, UIDataTypes, ChatStatus } from "ai";
import { useChat } from "@ai-sdk/react";
import { WorkflowChatTransport } from "@workflow/ai";
import { useState, useCallback, useMemo, useEffect, useRef } from "react";

const STORAGE_KEY = "workflow-run-id";

interface UserMessageData {
  type: "user-message";
  id: string;
  content: string;
  timestamp: number;
}

export function useMultiTurnChat() {
  const [runId, setRunId] = useState<string | null>(null);
  const [shouldResume, setShouldResume] = useState(false);
  const userMessagesRef = useRef<Map<string, UIMessage>>(new Map());

  // Check for existing session on mount
  useEffect(() => {
    const storedRunId = localStorage.getItem(STORAGE_KEY);
    if (storedRunId) {
      setRunId(storedRunId);
      setShouldResume(true);
    }
  }, []);

  const transport = useMemo(
    () =>
      new WorkflowChatTransport({
        api: "/api/chat",
        onChatSendMessage: (response) => {
          const workflowRunId = response.headers.get("x-workflow-run-id");
          if (workflowRunId) {
            setRunId(workflowRunId);
            localStorage.setItem(STORAGE_KEY, workflowRunId);
          }
        },
        onChatEnd: () => {
          setRunId(null);
          localStorage.removeItem(STORAGE_KEY);
          userMessagesRef.current.clear();
        },
        prepareReconnectToStreamRequest: ({ api, ...rest }) => {
          const storedRunId = localStorage.getItem(STORAGE_KEY);
          if (!storedRunId) throw new Error("No active session");
          return { ...rest, api: `/api/chat/${storedRunId}/stream` };
        },
      }),
    []
  );

  const { messages: rawMessages, sendMessage: baseSendMessage, status, stop, setMessages } =
    useChat({ resume: shouldResume, transport });

  // Reconstruct conversation order from stream markers
  const messages = useMemo(() => { 
    const result: UIMessage[] = []; 
    const seenContent = new Set<string>(); 
    // Collect content from optimistic user messages
    for (const msg of rawMessages) { 
      if (msg.role === "user") { 
        const text = msg.parts.filter((p) => p.type === "text").map((p) => p.text).join(""); 
        if (text) seenContent.add(text); 
      } 
    } 
    for (const msg of rawMessages) { 
      if (msg.role === "user") { 
        result.push(msg); 
        continue; 
      } 
      if (msg.role === "assistant") { 
        // Process parts in order, splitting on user-message markers
        let currentParts: typeof msg.parts = []; 
        let partIndex = 0; 
        for (const part of msg.parts) { 
          if (part.type === "data-workflow" && "data" in part) { 
            const data = part.data as UserMessageData; 
            if (data?.type === "user-message") { 
              // Flush accumulated assistant parts
              if (currentParts.length > 0) { 
                result.push({ ...msg, id: `${msg.id}-${partIndex++}`, parts: currentParts }); 
                currentParts = []; 
              } 
              // Add user message if not duplicate
              if (!seenContent.has(data.content)) { 
                seenContent.add(data.content); 
                result.push({ id: data.id, role: "user", parts: [{ type: "text", text: data.content }] }); 
              } 
              continue; 
            } 
          } 
          currentParts.push(part); 
        } 
        if (currentParts.length > 0) { 
          result.push({ ...msg, id: partIndex > 0 ? `${msg.id}-${partIndex}` : msg.id, parts: currentParts }); 
        } 
      } 
    } 
    return result; 
  }, [rawMessages]); 

  // Route messages to appropriate endpoint
  const sendMessage = useCallback(
    async (text: string) => {
      if (runId) {
        // Follow-up: send via hook resumption
        await fetch(`/api/chat/${runId}`, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ message: text }),
        });
      } else {
        // First message: start new workflow
        await baseSendMessage({ text, metadata: { createdAt: Date.now() } });
      }
    },
    [runId, baseSendMessage]
  );

  const endSession = useCallback(async () => {
    if (runId) {
      await fetch(`/api/chat/${runId}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ message: "/done" }),
      });
    }
    setRunId(null);
    setShouldResume(false);
    localStorage.removeItem(STORAGE_KEY);
    userMessagesRef.current.clear();
    setMessages([]);
  }, [runId, setMessages]);

  return { messages, status, runId, sendMessage, endSession, stop };
}

In this pattern, the workflow owns the entire conversation session. All messages are persisted in the workflow, and follow-up messages are injected via hooks. The workflow writes user message markers to the stream using data-workflow chunks, which allows the client to reconstruct the full conversation in the correct order when replaying the stream (e.g., after a page refresh).

The client hook processes these markers by:

  1. Iterating through message parts in order
  2. When a user-message marker is found, flushing any accumulated assistant content and inserting the user message
  3. Deduplicating against optimistic sends from the initial message

This ensures the conversation displays as User → AI → User → AI, whether the client is streaming live or replaying the stream.
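To make the reconstruction concrete, here is a hypothetical example of the parts a replayed assistant message might contain and the order the hook derives from them (the IDs, timestamps, and text are made up):

// Hypothetical raw parts of a single replayed assistant message.
// The data-workflow marker was written by writeUserMessageMarker in the workflow.
const replayedAssistantParts = [
  { type: "text", text: "Your flight to SFO is booked." },
  {
    type: "data-workflow",
    data: {
      type: "user-message",
      id: "user-run123-1",
      content: "Can you add a hotel?",
      timestamp: 1730000000000,
    },
  },
  { type: "text", text: "Sure, which dates do you need the hotel for?" },
];

// After reconstruction, the client renders three messages in order:
//   1. assistant: "Your flight to SFO is booked."
//   2. user:      "Can you add a hotel?"   (inserted from the marker)
//   3. assistant: "Sure, which dates do you need the hotel for?"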

Choosing a Pattern

Consideration                    Single-Turn                        Multi-Turn
State ownership                  Client or API route                Workflow
Message injection from backend   Requires stitching together runs   Native via hooks
Workflow complexity              Lower                              Higher
Workflow time horizon            Minutes                            Hours to indefinitely
Observability scope              Per-turn traces                    Full session traces

Multi-turn is recommended for most production use cases. If you're starting fresh, go with multi-turn: it's more flexible and grows with your requirements. You don't need to maintain the chat history yourself; you can offload it entirely to the workflow's built-in persistence. Multi-turn also enables native message injection and full-session observability, which become increasingly valuable as your agent matures.

Single-turn works well when adapting existing architectures. If you already have a system for managing message state and want to adopt durable agents incrementally, single-turn workflows slot in with minimal changes. Each turn maps cleanly to an independent workflow run.

Multiplayer Chat Sessions

The multi-turn pattern also lends itself to multiplayer chat sessions. New messages can come from system events, external services, or other users. Since a hook can inject messages into the workflow at any point, and the entire history is a single stream that clients can reconnect to, it doesn't matter where the injected messages come from. Here are a few use cases for multiplayer chat sessions:

Internal system events like scheduled tasks, background jobs, or database triggers can inject updates into an active conversation.

app/api/internal/flight-update/route.ts
import { chatMessageHook } from "@/workflows/chat/hooks/chat-message";

// Called by your flight status monitoring system
export async function POST(req: Request) {
  const { runId, flightNumber, newStatus } = await req.json();

  await chatMessageHook.resume(runId, { 
    message: `[System] Flight ${flightNumber} status updated: ${newStatus}`, 
  }); 

  return Response.json({ success: true });
}

External webhooks from third-party services (Stripe, Twilio, etc.) can notify the conversation of events.

app/api/webhooks/payment/route.ts
import { chatMessageHook } from "@/workflows/chat/hooks/chat-message";

export async function POST(req: Request) {
  const { runId, paymentStatus, amount } = await req.json();

  if (paymentStatus === "succeeded") {
    await chatMessageHook.resume(runId, { 
      message: `[Payment] Payment of $${amount.toFixed(2)} received. Your booking is confirmed!`, 
    }); 
  }

  return Response.json({ received: true });
}

Multiple human users can participate in the same conversation. Each user's client connects to the same workflow stream.

app/api/chat/[id]/route.ts
import { chatMessageHook } from "@/workflows/chat/hooks/chat-message";
import { getUser } from "@/lib/auth";

export async function POST(
  req: Request,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id: runId } = await params;
  const { message } = await req.json();
  const user = await getUser(req); 

  // Inject message with user attribution
  await chatMessageHook.resume(runId, { 
    message: `[${user.name}] ${message}`, 
  }); 

  return Response.json({ success: true });
}
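
For example, a second participant could join an existing session by reusing the localStorage key and the follow-up endpoint from the hook above. The joinSession and sendToSession helpers below are hypothetical, as is the file path; they only wrap pieces already shown in this guide.

lib/join-session.ts
// Hypothetical client-side helpers for joining an existing multi-turn session.
// They reuse the localStorage key and routes defined earlier in this guide.

const STORAGE_KEY = "workflow-run-id";

// Store the shared run ID before mounting the chat UI so useMultiTurnChat
// picks it up on mount and resumes the existing workflow stream.
export function joinSession(sharedRunId: string) {
  localStorage.setItem(STORAGE_KEY, sharedRunId);
}

// Send a message into the shared session; the follow-up route resumes the
// workflow hook and attributes the message to the authenticated user server-side.
export async function sendToSession(runId: string, message: string) {
  await fetch(`/api/chat/${runId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
}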