
Conversation

@keerthanenr

Updated the backend to use the v5 data stream protocol
Updated openai to use the latest AsyncOpenAI client
Updated the frontend to use the latest ai-sdk useChat
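
For context, the client side of this change with the v5 useChat hook looks roughly like the sketch below. This is a minimal illustration rather than the code in this PR: the /api/chat path, the Chat component name, and the rendered markup are assumptions.

"use client";

import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";

export function Chat() {
  // v5 returns parts-based UIMessages plus a sendMessage helper;
  // the old append({ content }) call and the v4 data stream handling are gone.
  const { messages, sendMessage, status, stop } = useChat({
    transport: new DefaultChatTransport({ api: "/api/chat" }),
  });

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.parts.map((part, i) =>
            part.type === "text" ? <p key={i}>{part.text}</p> : null
          )}
        </div>
      ))}
      {status === "streaming" && <button onClick={() => stop()}>Stop</button>}
      <button onClick={() => sendMessage({ text: "Hello" })}>Send</button>
    </div>
  );
}

On the server side, the v5 data stream protocol is what DefaultChatTransport expects the /api/chat endpoint to emit.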


vercel bot commented Oct 13, 2025

@keerthanenr is attempting to deploy a commit to the Vercel Labs Team on Vercel.

A member of the Team first needs to authorize it.


@vercel vercel bot left a comment


Additional Comments:

lib/utils.ts (lines 1-41):
The sanitizeUIMessages function still uses the old AI SDK v4 format (the Message type with toolInvocations), but it is called on v5 UIMessage objects that carry a parts array, causing runtime errors when the stop button is clicked.

📝 Patch Details
diff --git a/components/message.tsx b/components/message.tsx
index 0d173d9..a0119d6 100644
--- a/components/message.tsx
+++ b/components/message.tsx
@@ -82,9 +82,10 @@ export const PreviewMessage = ({
                   <PreviewAttachment
                     key={index}
                     attachment={{
+                      type: "file",
                       url: part.url,
-                      name: "",
-                      contentType: part.mediaType,
+                      filename: "",
+                      mediaType: part.mediaType,
                     }}
                   />
                 );
diff --git a/components/multimodal-input.tsx b/components/multimodal-input.tsx
index 1532e1d..1ed5a64 100644
--- a/components/multimodal-input.tsx
+++ b/components/multimodal-input.tsx
@@ -58,7 +58,7 @@ export function MultimodalInput({
   messages: Array<UIMessage>;
   setMessages: Dispatch<SetStateAction<Array<UIMessage>>>;
   append: (
-    message: UIMessage | CreateUIMessage,
+    message: UIMessage | CreateUIMessage<UIMessage>,
     chatRequestOptions?: ChatRequestOptions
   ) => Promise<string | null | undefined>;
   handleSubmit: (
@@ -140,7 +140,7 @@ export function MultimodalInput({
                 onClick={async () => {
                   append({
                     role: "user",
-                    content: suggestedAction.action,
+                    parts: [{ type: "text", text: suggestedAction.action }],
                   });
                 }}
                 className="text-left border rounded-xl px-4 py-3.5 text-sm flex-1 gap-1 sm:flex-col w-full h-auto justify-start items-start"
diff --git a/components/preview-attachment.tsx b/components/preview-attachment.tsx
index 26bca19..4a81cdf 100644
--- a/components/preview-attachment.tsx
+++ b/components/preview-attachment.tsx
@@ -1,4 +1,4 @@
-import type { Attachment } from "ai";
+import type { FileUIPart } from "ai";
 
 import { LoaderIcon } from "./icons";
 
@@ -6,22 +6,22 @@ export const PreviewAttachment = ({
   attachment,
   isUploading = false,
 }: {
-  attachment: Attachment;
+  attachment: FileUIPart;
   isUploading?: boolean;
 }) => {
-  const { name, url, contentType } = attachment;
+  const { filename, url, mediaType } = attachment;
 
   return (
     <div className="flex flex-col gap-2">
       <div className="w-20 aspect-video bg-muted rounded-md relative flex flex-col items-center justify-center">
-        {contentType ? (
-          contentType.startsWith("image") ? (
+        {mediaType ? (
+          mediaType.startsWith("image") ? (
             // NOTE: it is recommended to use next/image for images
             // eslint-disable-next-line @next/next/no-img-element
             <img
               key={url}
               src={url}
-              alt={name ?? "An image attachment"}
+              alt={filename ?? "An image attachment"}
               className="rounded-md size-full object-cover"
             />
           ) : (
@@ -37,7 +37,7 @@ export const PreviewAttachment = ({
           </div>
         )}
       </div>
-      <div className="text-xs text-zinc-500 max-w-16 truncate">{name}</div>
+      <div className="text-xs text-zinc-500 max-w-16 truncate">{filename}</div>
     </div>
   );
 };
diff --git a/lib/utils.ts b/lib/utils.ts
index 9c35457..2d7c436 100644
--- a/lib/utils.ts
+++ b/lib/utils.ts
@@ -1,40 +1,45 @@
-import { Message } from "ai";
 import { clsx, type ClassValue } from "clsx";
 import { twMerge } from "tailwind-merge";
+import type { UIMessage } from "@ai-sdk/react";
 
 export function cn(...inputs: ClassValue[]) {
   return twMerge(clsx(inputs));
 }
 
-export function sanitizeUIMessages(messages: Array<Message>): Array<Message> {
-  const messagesBySanitizedToolInvocations = messages.map((message) => {
+export function sanitizeUIMessages(messages: Array<UIMessage>): Array<UIMessage> {
+  const messagesBySanitizedToolParts = messages.map((message) => {
     if (message.role !== "assistant") return message;
 
-    if (!message.toolInvocations) return message;
-
-    const toolResultIds: Array<string> = [];
-
-    for (const toolInvocation of message.toolInvocations) {
-      if (toolInvocation.state === "result") {
-        toolResultIds.push(toolInvocation.toolCallId);
+    // Filter tool parts to only include completed tool calls (output-available state)
+    const sanitizedParts = message.parts.filter((part) => {
+      if (part.type === "text") {
+        return true; // Always keep text parts
       }
-    }
-
-    const sanitizedToolInvocations = message.toolInvocations.filter(
-      (toolInvocation) =>
-        toolInvocation.state === "result" ||
-        toolResultIds.includes(toolInvocation.toolCallId),
-    );
+      
+      // For tool parts, only keep those with output-available state
+      if (part.type.startsWith("tool-")) {
+        return 'state' in part && part.state === "output-available";
+      }
+      
+      return true; // Keep other part types
+    });
 
     return {
       ...message,
-      toolInvocations: sanitizedToolInvocations,
+      parts: sanitizedParts,
     };
   });
 
-  return messagesBySanitizedToolInvocations.filter(
-    (message) =>
-      message.content.length > 0 ||
-      (message.toolInvocations && message.toolInvocations.length > 0),
-  );
+  return messagesBySanitizedToolParts.filter((message) => {
+    // Keep messages that have text content or completed tool calls
+    return message.parts.some((part) => {
+      if (part.type === "text" && 'text' in part && part.text && part.text.length > 0) {
+        return true;
+      }
+      if (part.type.startsWith("tool-") && 'state' in part && part.state === "output-available") {
+        return true;
+      }
+      return false;
+    });
+  });
 }

Analysis

Incompatible AI SDK v4/v5 message format in sanitizeUIMessages() causes message loss

What fails: The sanitizeUIMessages() function in lib/utils.ts uses the AI SDK v4 Message format (toolInvocations and content properties) but receives v5 UIMessage objects that use a parts array, so every message gets filtered out when the stop button is clicked.

How to reproduce:

  1. Start a chat conversation with tool usage
  2. Click the stop button during AI response (triggers sanitizeUIMessages in multimodal-input.tsx line 188)
  3. All messages disappear from the conversation

Result: The function returns an empty array instead of preserving messages with text content and completed tool calls. v5 messages carry a parts array with text and tool parts, but the function checks message.content and message.toolInvocations, which are undefined.

Expected: Messages with text content should be preserved, per the AI SDK v5 migration guide: v5 represents completed tools as parts with an output-available state instead of v4's toolInvocations with a result state.
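
To make the format mismatch concrete, the two assistant message shapes look roughly like this. Illustrative only: the getWeather tool name, IDs, and values are hypothetical; the field names follow the v4 and v5 shapes described above and used in the patch.

// AI SDK v4 shape that sanitizeUIMessages() expected — a content string plus toolInvocations:
const v4AssistantMessage = {
  id: "msg_1",
  role: "assistant",
  content: "It is 21°C in San Francisco.",
  toolInvocations: [
    {
      toolCallId: "call_1",
      toolName: "getWeather",
      state: "result",           // completed tool call in v4
      args: { city: "San Francisco" },
      result: { temperatureC: 21 },
    },
  ],
};

// AI SDK v5 UIMessage shape that useChat now provides — everything lives in parts:
const v5AssistantMessage = {
  id: "msg_1",
  role: "assistant",
  parts: [
    { type: "text", text: "It is 21°C in San Francisco." },
    {
      type: "tool-getWeather",   // tool parts are typed per tool name
      toolCallId: "call_1",
      state: "output-available", // v5 counterpart of v4's "result" state
      input: { city: "San Francisco" },
      output: { temperatureC: 21 },
    },
  ],
};

With the v5 shape, the patched filter keeps text parts and tool parts whose state is output-available, and drops assistant messages left with neither.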

@keerthanenr
Author


I've updated it in the latest commit
