© 2026 WriterDock.

Building Generative UI: How to Stream Components with Vercel AI SDK & Next.js

Suraj - Writer Dock

January 2, 2026

For years, the standard AI interface has been a chat bubble. You type text, and the AI types text back. While effective, this "text-in, text-out" model is limiting. If you ask for a stock price, you don't just want a sentence saying "AAPL is $150." You want a chart, a percentage change indicator, and a "Buy" button.

Enter Generative UI.

Generative UI allows Large Language Models (LLMs) to go beyond text and render fully interactive React components dynamically. It transforms a chatbot from a text generator into a dynamic interface builder.

In this guide, we will explore how to build Generative UI using Next.js (App Router) and the Vercel AI SDK. We will focus on the powerful streamUI function, which lets you stream React Server Components (RSC) directly from the server to the client.

What is Generative UI?

Generative UI is a pattern where the AI decides which UI component to render based on the user's intent.

Instead of hard-coding a dashboard, you give the AI a "kit" of components (like graphs, weather cards, or flight widgets). When a user asks "Show me flights to Tokyo," the AI doesn't just describe the flights; it effectively calls a function that renders the <FlightWidget /> component directly in the chat stream.

The Old Way (Client-Side Fetching)

  1. User asks a question.
  2. AI returns JSON data.
  3. Client-side code parses JSON.
  4. Client conditionally renders a component based on that data.
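Sketched in code, the old way forces the client to know about every possible payload and component up front. The payload types and component names below are hypothetical, and JSX is shown as plain strings to keep the sketch self-contained:

```typescript
// The client must map every payload type the AI can return to a component.
type AIPayload =
  | { type: 'weather'; temp: number; condition: string }
  | { type: 'stock'; symbol: string; price: number };

function pickComponent(json: string): string {
  const data = JSON.parse(json) as AIPayload;
  switch (data.type) {
    case 'weather':
      return `<WeatherCard temp={${data.temp}} condition="${data.condition}" />`;
    case 'stock':
      return `<StockChart symbol="${data.symbol}" price={${data.price}} />`;
    default:
      // Every new widget the AI can suggest means shipping more client code.
      throw new Error('Unknown payload type');
  }
}
```

Every branch here ships to the browser whether or not it is ever used, which is exactly the overhead the server-streaming approach removes.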

The Generative UI Way (Server Streaming)

  1. User asks a question.
  2. AI on the server decides to call a "tool."
  3. The server executes the logic and streams the rendered React component (as a serialized RSC payload) back to the client.
  4. The client simply renders the React Node it received.

This approach reduces bundle size (the client doesn't need to ship the logic for every possible component) and improves performance (the heavy lifting runs on the server).

The Tech Stack

To build this, you need a modern stack that supports React Server Components:

  • Next.js (App Router): For server actions and RSC support.
  • Vercel AI SDK (@ai-sdk/rsc): The specific library for managing AI state and UI streaming.
  • Zod: For defining schemas so the AI understands your data structure.
  • LLM Provider: OpenAI (GPT-4o) or similar models that support function calling.

Step 1: Setting Up the Project

First, ensure you have a Next.js project set up with the AI SDK installed.

bash
npx create-next-app@latest my-gen-ui-app
cd my-gen-ui-app
npm install ai @ai-sdk/openai @ai-sdk/rsc zod

You will also need to configure your API keys (e.g., OPENAI_API_KEY) in your .env.local file.
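For reference, your .env.local would contain something like the following (the value shown is a placeholder, not a real key):

```shell
OPENAI_API_KEY=your-openai-api-key-here
```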

Step 2: Defining the Server Action

The core of Generative UI lives in a Server Action. This is a function that runs on the server, communicates with the LLM, and streams the response back.

We use the streamUI function. This function allows you to define "tools" that the AI can use. However, unlike standard tools that return text strings, these tools return ReactNode (JSX).

Create a file named actions.tsx:

actions.tsx
typescript
'use server';

import { streamUI } from '@ai-sdk/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Import your actual React components
import { WeatherCard } from '@/components/weather-card';
import { Spinner } from '@/components/spinner';

export async function submitUserMessage(content: string) {
  const result = await streamUI({
    model: openai('gpt-4-turbo'),
    prompt: content,
    text: ({ content }) => <div>{content}</div>, // Default text handler
    tools: {
      getWeather: {
        description: 'Get the weather for a specific location',
        parameters: z.object({
          location: z.string().describe('The city and state, e.g. San Francisco, CA'),
        }),
        generate: async function* ({ location }) {
          // 1. Yield a loading state immediately
          yield <Spinner message={`Checking weather in ${location}...`} />;

          // 2. Perform the actual data fetch (simulate API call)
          const weatherData = await fetchWeatherApi(location);

          // 3. Return the final component
          return <WeatherCard data={weatherData} />;
        },
      },
    },
  });

  return result.value;
}

// Mock function for demo
async function fetchWeatherApi(location: string) {
  await new Promise(resolve => setTimeout(resolve, 2000));
  return { temp: 72, condition: 'Sunny', location };
}

Unpacking the Code

  • streamUI: This is the magic function. It takes the model, prompt, and a list of tools.
  • async function* (Generator): Notice the asterisk. The generate function is a generator. This allows us to yield a loading component first (like a spinner) and then return the final component once the data is ready.
  • zod Schema: This tells the LLM exactly what arguments getWeather expects (a string for location).

Step 3: The Client-Side Implementation

Now that the server knows how to send components, the client needs to know how to receive and display them.

In your page.tsx (or a dedicated client component), you will manage the list of messages. Because streamUI returns actual React Nodes, our "message" state isn't just an array of strings; it's an array of UI elements.

page.tsx
typescript
'use client';

import { useState } from 'react';
import { submitUserMessage } from './actions';

export default function ChatPage() {
  // Simple local state for this example
  const [messages, setMessages] = useState<React.ReactNode[]>([]);
  const [input, setInput] = useState('');

  return (
    <div className="max-w-2xl mx-auto p-4">
      <div className="space-y-4 mb-4">
        {messages.map((msg, index) => (
          <div key={index} className="border p-2 rounded">
            {msg}
          </div>
        ))}
      </div>

      <form
        onSubmit={async (e) => {
          e.preventDefault();

          const currentInput = input;
          setInput('');

          // 1. Add the user's message to the UI immediately
          setMessages((msgs) => [...msgs, <div>You: {currentInput}</div>]);

          // 2. Call the server action
          const response = await submitUserMessage(currentInput);

          // 3. Add the returned AI UI (which could be text OR a component)
          setMessages((msgs) => [...msgs, response]);
        }}
        className="flex gap-2"
      >
        <input
          className="border p-2 flex-1 rounded"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Ask for the weather..."
        />
        <button className="bg-blue-600 text-white p-2 rounded">Send</button>
      </form>
    </div>
  );
}

Key Benefits of this Architecture

1. Zero-Bundle Size for Tools

When you use streamUI, the logic for fetchWeatherApi and the heavy lifting of determining which component to show happens on the server. If the AI decides not to show a Stock Chart, the code for the Stock Chart component is never even sent to the client (depending on your bundling setup).

2. Instant Loading States

By using the yield keyword in the generator function, you solve the biggest UX problem with AI: latency. LLMs can be slow. API calls can be slow. Instead of making the user stare at a blinking cursor for 5 seconds, you immediately stream a <Skeleton /> or <Spinner />. The interface feels responsive and alive, even while the backend is crunching numbers.
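As a sketch (component markup shown as strings, with slowAnswer and renderStream as hypothetical stand-ins for the tool and the client), the progression from spinner to skeleton to final card looks like this:

```typescript
// A tool that upgrades its placeholder as more information arrives.
async function* slowAnswer() {
  yield '<Spinner />'; // shown within milliseconds
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulated latency
  yield '<Skeleton rows={3} />'; // richer placeholder once the shape is known
  return '<WeatherCard temp={72} />'; // the final UI
}

// Each yielded frame replaces the previous one, mimicking how the
// streamed slot re-renders on the client.
async function renderStream(gen: AsyncGenerator<string, string>) {
  const frames: string[] = [];
  let step = await gen.next();
  while (!step.done) {
    frames.push(step.value);
    step = await gen.next();
  }
  frames.push(step.value); // the returned value is the last frame
  return frames;
}
```

The user sees something useful at every stage, even though the real answer only exists at the end.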

3. Type Safety

Because you are using TypeScript and Zod, the contract between the AI and your UI is strictly enforced. If the AI tries to call getWeather without a location, validation fails before it breaks your UI.
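Conceptually, zod's safeParse amounts to a check like the following hand-rolled sketch (parseWeatherArgs is illustrative only, not part of the SDK):

```typescript
// What schema validation buys you: tool arguments are checked before
// any component renders, so bad input fails loudly, not in the UI.
type ParseResult =
  | { success: true; data: { location: string } }
  | { success: false; error: string };

function parseWeatherArgs(args: unknown): ParseResult {
  if (
    typeof args === 'object' &&
    args !== null &&
    typeof (args as { location?: unknown }).location === 'string'
  ) {
    return { success: true, data: { location: (args as { location: string }).location } };
  }
  return { success: false, error: 'location must be a string' };
}
```

In practice you let zod generate this check from your schema; the point is that validation happens server-side, before anything renders.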

Real-World Use Cases

Generative UI isn't just for weather apps. Here is how it applies to complex industries:

  • E-Commerce:
    • User: "I need running shoes under $100."
    • Generative UI: Renders a carousel of product cards with "Add to Cart" buttons, rather than a text list of links.
  • Financial Dashboards:
    • User: "Compare Apple and Microsoft revenue."
    • Generative UI: Streams a comparative bar chart component generated on the fly.
  • Travel Booking:
    • User: "Find flights to London next Friday."
    • Generative UI: Renders a flight selection widget where users can actually click to book, rather than just reading flight numbers.

Troubleshooting Common Issues

The AI returns text instead of the Component

This usually happens if the system prompt or tool description isn't clear enough. The AI needs to know why it should use the tool.

  • Fix: Improve the description field in your tool definition. Make it explicit: "Use this tool whenever the user asks about current weather conditions."

Serialization Errors

React Server Components (RSC) cannot pass non-serializable data (like functions) to Client Components.

  • Fix: Ensure the props you pass to your components (e.g., <WeatherCard data={...} />) are simple JSON-serializable objects (strings, numbers, arrays).
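A quick way to sanity-check a prop object is a structured-clone round trip: Node's built-in structuredClone throws on functions and other non-cloneable values. (RSC's serialization rules are similar in spirit, though not identical, so treat this as a heuristic.)

```typescript
// Returns false for values that cannot survive structured cloning,
// such as objects containing functions.
function isCloneable(value: unknown): boolean {
  try {
    structuredClone(value);
    return true;
  } catch {
    return false;
  }
}
```

isCloneable({ temp: 72 }) passes, while isCloneable({ onClick: () => {} }) does not; a function prop like that is exactly the kind of value that triggers RSC serialization errors.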

Styling Issues

Since the component is streamed from the server, ensure your CSS (Tailwind or CSS Modules) is globally available or properly imported in the component file so styles apply correctly when the component "pops" into existence.

FAQ: Generative UI

Q: Is streamUI different from render?

Yes. In earlier versions of the AI SDK, render was used. streamUI is the modern, more flexible successor designed specifically for RSCs. It supports generator functions (yielding loading states), which render did not handle as elegantly.

Q: Does this work with any LLM?

It works best with models trained for "Function Calling" (or Tool Use). OpenAI's GPT-4o, GPT-3.5-turbo, and Mistral are excellent choices. Older models may struggle to select the correct tool reliably.

Q: Can I stream multiple components at once?

Yes! You can define multiple tools. If a user asks a complex question, the AI might decide to call getWeather AND getStockPrice sequentially or in parallel (depending on model capability), rendering multiple interactive widgets in the chat stream.
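A minimal sketch of a multi-tool setup (tool bodies return strings here instead of JSX, and runToolCalls is a hypothetical dispatcher, not SDK code):

```typescript
// Two tools registered side by side; the model chooses one by name.
type ToolFn = (args: Record<string, string>) => Promise<string>;

const tools: Record<string, ToolFn> = {
  getWeather: async ({ location }) => `<WeatherCard location="${location}" />`,
  getStockPrice: async ({ symbol }) => `<StockChart symbol="${symbol}" />`,
};

// Runs the model's tool calls in order, collecting one widget each.
// (A capable model/runtime could also run these in parallel.)
async function runToolCalls(
  calls: { name: string; args: Record<string, string> }[],
) {
  const widgets: string[] = [];
  for (const call of calls) {
    widgets.push(await tools[call.name](call.args));
  }
  return widgets;
}
```

One complex question can thus yield several widgets in the same chat stream, each produced by a different tool.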

Conclusion

Generative UI is the bridge between "Chatbots" and "AI Apps." It moves us away from the command line era of AI and into the graphical user interface era.

By using Vercel AI SDK's streamUI with Next.js, you are not just building a smarter chatbot; you are building a system that can dynamically construct its own interface to best serve the user's needs. The result is an experience that feels magical, fast, and incredibly intuitive.

Start small: pick one "tool" in your application—whether it's a pricing calculator or a data fetcher—and convert it into a Generative UI component today.

About the Author

Suraj - Writer Dock

Passionate writer and developer sharing insights on the latest tech trends. Loves building clean, accessible web applications.