
Building a Mobile-Friendly AI Chatbot: A Comprehensive Guide

Overview

Embedding an AI chatbot on a website is one of the most impactful ways to create a dynamic, interactive experience for visitors. But a chatbot that works beautifully on desktop can break in surprising ways on mobile — keyboards cover inputs, panels overflow the viewport, and Safari zooms in unexpectedly. This guide covers the complete lifecycle of building a production-quality AI chatbot that works reliably on both desktop and mobile, using the Anthropic Claude API as the AI backend and Next.js with React as the frontend.

  • What you'll learn: End-to-end chatbot architecture, server-side API integration with streaming, React component structure, system prompt design, and the specific CSS patterns and mobile fixes required to deliver a professional chat experience on any device.
  • Who this is for: Developers, solutions architects, and technical consultants building AI-powered chat interfaces for websites or web applications.

Architecture

Every web-based AI chatbot follows the same fundamental pattern:

User Input --> Build Context --> Call LLM API --> Stream Response --> Render Output
    ^                                                                      |
    +----------------------------------------------------------------------+

The critical architectural decision is where the API call happens. The API key must never reach the browser. Use a server-side proxy — in Next.js, this means an API route.

+------------------+     +-------------------+     +------------------+
| React Client     | --> | Next.js API Route | --> | Anthropic API    |
| (Browser)        |     | (/api/chat)       |     | (Claude)         |
|                  | <-- |                   | <-- |                  |
+------------------+     +-------------------+     +------------------+
         |                         |
         | Conversation            | API key loaded from
         | state in React          | environment variable
         | (client memory)         | (never sent to client)
         v                         v

Key Design Decisions

| Decision | Approach | Why |
|---|---|---|
| API key storage | Server-side environment variable | Keys in client code are visible in the network tab and page source |
| API proxy | Next.js API route | Keeps the key server-side; enables validation and rate limiting |
| Response delivery | Server-Sent Events (SSE) streaming | Users see tokens appear in real time instead of waiting 5-15 seconds |
| Conversation state | React client state | Simple, no database needed; each page load starts fresh |
| Model selection | Claude Sonnet | Fast response time, high quality, cost-effective for chat |

Server-Side API Route

The API route receives the conversation history from the client, calls the Anthropic API with streaming enabled, and pipes each text token back to the browser as an SSE event.

Install the SDK

npm install @anthropic-ai/sdk

The Route Handler

// app/api/chat/route.ts
import Anthropic from "@anthropic-ai/sdk";
import { NextRequest } from "next/server";

const SYSTEM_PROMPT = `You are a helpful assistant. Be concise and professional.`;

export async function POST(req: NextRequest) {
  try {
    const { messages } = await req.json();

    // Validate input
    if (!messages || !Array.isArray(messages)) {
      return new Response(
        JSON.stringify({ error: "Messages array is required" }),
        { status: 400, headers: { "Content-Type": "application/json" } }
      );
    }

    // Load API key from environment (never hardcode)
    const apiKey = process.env.ANTHROPIC_API_KEY;
    if (!apiKey) {
      return new Response(
        JSON.stringify({ error: "Chat service is not configured" }),
        { status: 503, headers: { "Content-Type": "application/json" } }
      );
    }

    const client = new Anthropic({ apiKey });

    // Create a streaming message request
    const stream = await client.messages.stream({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      system: SYSTEM_PROMPT,
      messages: messages.map((m: { role: string; content: string }) => ({
        role: m.role as "user" | "assistant",
        content: m.content,
      })),
    });

    // Convert the Anthropic stream to SSE format
    const encoder = new TextEncoder();
    const readable = new ReadableStream({
      async start(controller) {
        try {
          for await (const event of stream) {
            if (
              event.type === "content_block_delta" &&
              event.delta.type === "text_delta"
            ) {
              controller.enqueue(
                encoder.encode(
                  `data: ${JSON.stringify({ text: event.delta.text })}\n\n`
                )
              );
            }
          }
          controller.enqueue(encoder.encode("data: [DONE]\n\n"));
          controller.close();
        } catch {
          controller.enqueue(
            encoder.encode(
              `data: ${JSON.stringify({ error: "Stream interrupted" })}\n\n`
            )
          );
          controller.close();
        }
      },
    });

    return new Response(readable, {
      headers: {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        Connection: "keep-alive",
      },
    });
  } catch {
    return new Response(
      JSON.stringify({ error: "An unexpected error occurred" }),
      { status: 500, headers: { "Content-Type": "application/json" } }
    );
  }
}

How SSE Streaming Works

The Anthropic SDK's .stream() method returns an async iterable of events. The key event type is content_block_delta with a text_delta — each one contains a small chunk of the response text. We wrap each chunk in the SSE data: format and send it to the client. When the stream ends, we send a [DONE] sentinel so the client knows to stop listening.

The SSE format is simple — each message is a line starting with data: followed by JSON, terminated by two newlines:

data: {"text":"Hello"}\n\n
data: {"text":" there"}\n\n
data: {"text":"!"}\n\n
data: [DONE]\n\n
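The design-decision table noted that the proxy route "enables validation and rate limiting," but only validation is shown above. A minimal sliding-window rate limiter could be checked at the top of the route handler before calling the API. This is a sketch only — it is in-memory, so it works for a single server instance; names and thresholds here are illustrative, and a multi-instance deployment would need shared storage such as Redis:

```typescript
// Minimal in-memory sliding-window rate limiter (single-instance only).
// Illustrative thresholds: 20 requests per IP per minute.
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 20;

const hits = new Map<string, number[]>();

export function isRateLimited(ip: string, now: number = Date.now()): boolean {
  // Keep only timestamps still inside the window.
  const timestamps = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  if (timestamps.length >= MAX_REQUESTS) {
    hits.set(ip, timestamps);
    return true; // over the limit; the route should return 429
  }
  timestamps.push(now);
  hits.set(ip, timestamps);
  return false;
}
```

In the route handler, a `true` result would translate into a `429` response before the Anthropic client is ever constructed.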

System Prompt Design

The system prompt defines your chatbot's personality, knowledge, and behavioral boundaries. It gets sent on every API call because LLMs are stateless — they have no built-in memory between requests.
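Statelessness has a concrete consequence for the client: every request must resend the entire transcript, with the new user message appended, while the system prompt travels as a separate top-level field. A tiny sketch of the payload construction (function name is illustrative, not from the component later in this guide):

```typescript
interface Message {
  role: "user" | "assistant";
  content: string;
}

// The whole history is resent on every turn; the API has no memory of
// previous requests. The system prompt is a separate field, not a message.
function buildRequestBody(history: Message[], userText: string) {
  const messages = [...history, { role: "user" as const, content: userText }];
  return { messages };
}
```

The payload therefore grows with each turn, which is one reason long conversations cost more tokens than short ones.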

Multi-Layer Architecture

The most maintainable approach uses layers:

+--------------------------------------------------+
| Layer 1: Identity & Role                         |
| "You are [Name], a [role] for [company]..."      |
+--------------------------------------------------+
| Layer 2: Behavioral Rules                        |
| Communication style, tone, topic limits          |
+--------------------------------------------------+
| Layer 3: Domain Knowledge                        |
| Resume content, product docs, FAQ, etc.          |
+--------------------------------------------------+
| Layer 4: Response Formatting                     |
| "Use short paragraphs. Use **bold** for terms."  |
+--------------------------------------------------+
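One way to keep the layers maintainable is to store each one as its own constant and join them at module load, so a layer can be edited without touching the others. A sketch — the layer contents below are placeholders, not a recommended prompt:

```typescript
// Each layer lives in its own constant so it can be edited independently.
// Contents are placeholders for illustration only.
const IDENTITY = `You are Pierre, an AI assistant for example.com.`;
const BEHAVIOR = `Be concise and professional. Stay on topic.`;
const KNOWLEDGE = `Key facts:\n- Product X launched in 2023.`;
const FORMATTING = `Use short paragraphs. Use **bold** for key terms.`;

export function buildSystemPrompt(
  layers: string[] = [IDENTITY, BEHAVIOR, KNOWLEDGE, FORMATTING]
): string {
  // Blank lines between layers keep the sections visually distinct.
  return layers.map((l) => l.trim()).join("\n\n");
}
```

The resulting string is what gets passed as the `system` field on every API call.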

Formatting Rules Matter

LLM responses appear in a small chat bubble. A wall of text is unreadable. Include explicit formatting instructions in the system prompt:

Response formatting rules:
- Use SHORT paragraphs (1-2 sentences max). Separate with a blank line.
- Use **bold** for key terms, names, and metrics.
- Use bullet points (- ) when listing multiple items.
- Start with a 1-sentence direct answer, then expand.
- Never write a wall of text. Use bullets or line breaks.

Guardrails

Define what the chatbot should refuse to discuss, and how it should handle questions outside its knowledge:

If asked about something not in your knowledge, say:
"That's not something I have details about. Would you like
to know about [suggest related topic]?"

Never fabricate information. Be honest about limitations.

Client-Side Component

The chat component manages conversation state, renders messages, handles user input, and consumes the SSE stream. Here is the complete structure.

State and Refs

"use client";

import { useState, useRef, useEffect, useCallback } from "react";

interface Message {
  role: "user" | "assistant";
  content: string;
}

export default function ChatPanel() {
  const [isOpen, setIsOpen] = useState(false);
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState("");
  const [isStreaming, setIsStreaming] = useState(false);
  const messagesRef = useRef<HTMLDivElement>(null);
  const inputRef = useRef<HTMLInputElement>(null);

Auto-Scroll

Every time a new message arrives or an existing message updates (streaming), scroll the messages container to the bottom:

const scrollToBottom = useCallback(() => {
  if (messagesRef.current) {
    messagesRef.current.scrollTop = messagesRef.current.scrollHeight;
  }
}, []);

useEffect(() => {
  scrollToBottom();
}, [messages, scrollToBottom]);

Use scrollTop = scrollHeight on the container rather than scrollIntoView on a sentinel element. The container approach is more reliable when content is updating rapidly during streaming.

Auto-Focus with Delay

When the chat opens, focus the input — but with a 100ms delay to let the DOM settle before the keyboard appears:

useEffect(() => {
  if (isOpen && inputRef.current) {
    setTimeout(() => inputRef.current?.focus(), 100);
  }
}, [isOpen]);

Consuming the SSE Stream (with Buffer)

This is one of the most important implementation details. SSE data can split across network chunks — a single data: {"text":"hello"} line might arrive in two separate reads. Without a buffer, you'll drop tokens and get JSON parse errors.

const sendMessage = async (text: string) => {
  if (!text.trim() || isStreaming) return;

  const userMessage: Message = { role: "user", content: text.trim() };
  const updatedMessages = [...messages, userMessage];
  setMessages(updatedMessages);
  setInput("");
  setIsStreaming(true);

  try {
    const res = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: updatedMessages }),
    });

    if (!res.ok) {
      const errorData = await res.json().catch(() => null);
      throw new Error(errorData?.error || "Failed to get response");
    }

    const reader = res.body?.getReader();
    if (!reader) throw new Error("No response stream");

    const decoder = new TextDecoder();
    let assistantContent = "";
    let buffer = ""; // <-- Critical: buffer for incomplete lines

    // Add empty assistant message for streaming into
    setMessages((prev) => [...prev, { role: "assistant", content: "" }]);

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      // Append new data to buffer
      buffer += decoder.decode(value, { stream: true });

      // Split on newlines, keeping the last (potentially incomplete) line
      const lines = buffer.split("\n");
      buffer = lines.pop() || ""; // Last element stays in buffer

      for (const line of lines) {
        const trimmed = line.trim();
        if (!trimmed.startsWith("data: ")) continue;
        const data = trimmed.slice(6);
        if (data === "[DONE]") continue;

        try {
          const parsed = JSON.parse(data);
          if (parsed.text) {
            assistantContent += parsed.text;
            setMessages((prev) => {
              const updated = [...prev];
              updated[updated.length - 1] = {
                role: "assistant",
                content: assistantContent,
              };
              return updated;
            });
          }
        } catch (e) {
          if (e instanceof SyntaxError) continue; // Incomplete JSON, skip
          throw e;
        }
      }
    }
  } catch (err) {
    const errorMessage =
      err instanceof Error ? err.message : "Something went wrong";
    setMessages((prev) => [
      ...prev.filter((m) => m.content), // Remove empty assistant message
      {
        role: "assistant",
        content: `I'm sorry, I encountered an issue: ${errorMessage}. Please try again.`,
      },
    ]);
  } finally {
    setIsStreaming(false);
    inputRef.current?.focus(); // Refocus for next message
  }
};

Key details in this implementation:

| Pattern | Why |
|---|---|
| Buffer for incomplete lines | SSE data splits across network chunks; without a buffer, you get parse errors and dropped tokens |
| lines.pop() stays in buffer | The last element after split("\n") may be an incomplete line |
| SyntaxError catch | Gracefully handles malformed JSON from split chunks |
| Refocus input after send | Keeps the keyboard open on mobile for continuous conversation |
| Empty assistant message | Added before streaming starts so the typing indicator appears immediately |
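The buffering pattern is also easy to factor into a pure helper, which makes the split-chunk behavior unit-testable in isolation. A sketch — the function name is illustrative, not part of the component above:

```typescript
// Feed one network chunk in; get back the complete text tokens found so far
// plus the leftover partial line to carry into the next call.
export function consumeSSEChunk(
  buffer: string,
  chunk: string
): { texts: string[]; buffer: string } {
  const combined = buffer + chunk;
  const lines = combined.split("\n");
  const rest = lines.pop() ?? ""; // possibly incomplete line stays buffered

  const texts: string[] = [];
  for (const line of lines) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data: ")) continue;
    const data = trimmed.slice(6);
    if (data === "[DONE]") continue;
    try {
      const parsed = JSON.parse(data);
      if (typeof parsed.text === "string") texts.push(parsed.text);
    } catch {
      // Malformed JSON from a split chunk; skip it.
    }
  }
  return { texts, buffer: rest };
}
```

The read loop would then call this helper with each decoded chunk, threading the returned buffer into the next iteration.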

Rendering Markdown in Chat Bubbles

AI models frequently respond with markdown formatting — bold text, bullet lists, links. Rendering raw markdown as plain text loses all this structure. You need a markdown renderer.

Approach: React JSX Renderer

Rather than using dangerouslySetInnerHTML, parse markdown into React elements for safety and composability:

function renderMarkdown(text: string): React.ReactNode[] {
  const paragraphs = text.split(/\n\n+/);
  const nodes: React.ReactNode[] = [];

  paragraphs.forEach((block, blockIdx) => {
    const lines = block.split("\n");
    const bulletLines: string[] = [];
    const textLines: string[] = [];

    lines.forEach((line) => {
      const trimmed = line.trim();
      if (trimmed.startsWith("- ") || trimmed.startsWith("* ")) {
        if (textLines.length > 0) {
          nodes.push(
            <p key={`p-${blockIdx}-${nodes.length}`}>
              {renderInline(textLines.join(" "))}
            </p>
          );
          textLines.length = 0;
        }
        bulletLines.push(trimmed.slice(2));
      } else {
        if (bulletLines.length > 0) {
          nodes.push(
            <ul key={`ul-${blockIdx}-${nodes.length}`}>
              {bulletLines.map((item, i) => (
                <li key={i}>{renderInline(item)}</li>
              ))}
            </ul>
          );
          bulletLines.length = 0;
        }
        if (trimmed) textLines.push(trimmed);
      }
    });

    // Flush remaining bullets or text
    if (bulletLines.length > 0) {
      nodes.push(
        <ul key={`ul-${blockIdx}-${nodes.length}`}>
          {bulletLines.map((item, i) => (
            <li key={i}>{renderInline(item)}</li>
          ))}
        </ul>
      );
    }
    if (textLines.length > 0) {
      nodes.push(
        <p key={`p-${blockIdx}-${nodes.length}`}>
          {renderInline(textLines.join(" "))}
        </p>
      );
    }
  });

  return nodes;
}

function renderInline(text: string): React.ReactNode[] {
  const parts: React.ReactNode[] = [];
  const regex = /(\*\*(.+?)\*\*|\*(.+?)\*)/g;
  let lastIndex = 0;
  let match: RegExpExecArray | null;

  while ((match = regex.exec(text)) !== null) {
    if (match.index > lastIndex) {
      parts.push(text.slice(lastIndex, match.index));
    }
    if (match[2]) {
      parts.push(<strong key={match.index}>{match[2]}</strong>);
    } else if (match[3]) {
      parts.push(<em key={match.index}>{match[3]}</em>);
    }
    lastIndex = match.index + match[0].length;
  }

  if (lastIndex < text.length) {
    parts.push(text.slice(lastIndex));
  }

  return parts.length > 0 ? parts : [text];
}

Then in the message rendering:

{msg.role === "assistant" ? renderMarkdown(msg.content) : msg.content}

CSS for Rendered Markdown

Style the rendered elements to fit the chat bubble context:

/* Chat message container */
.chat-message {
  display: flex;
  flex-direction: column;
  gap: 0.35rem;
}

.chat-message strong { color: #f1f5f9; font-weight: 600; }
.chat-message em { font-style: italic; color: #cbd5e1; }

.chat-message ul {
  margin: 0;
  padding-left: 1rem;
  list-style: none;
  display: flex;
  flex-direction: column;
  gap: 0.2rem;
}

.chat-message li {
  position: relative;
  padding-left: 0.5rem;
  line-height: 1.5;
}

.chat-message li::before {
  content: '\2022'; /* bullet character */
  position: absolute;
  left: -0.6rem;
  color: #64748b;
}

.chat-message p { margin: 0; line-height: 1.5; }
.chat-message p:first-child { margin-top: 0; }
.chat-message p:last-child { margin-bottom: 0; }

Mobile-First Chat UI

This is where most chatbot implementations fail. A floating panel that looks great on desktop breaks in multiple ways on a phone. This section covers every pattern needed to make a chat widget work reliably on mobile.

The Core Problem

| Issue | What Happens on Mobile |
|---|---|
| Panel overflow | A 360px-wide panel doesn't fit a 375px screen with padding |
| Keyboard occlusion | The virtual keyboard covers the input — the one thing users need |
| Background scrolling | Users scroll the page behind the chat instead of the messages |
| iOS zoom | Inputs with font-size < 16px trigger auto-zoom on all iOS browsers |
| Safe area clipping | Content gets hidden behind the home indicator on notched iPhones |

The Solution: Fullscreen on Mobile, Floating on Desktop

Instead of fighting the browser with JavaScript viewport calculations, go fullscreen on mobile using pure CSS and let the browser handle keyboard layout natively.

Desktop (>=640px)               Mobile (<640px)
+---------------------------+   +-------------------+
| [chat]                    |   | Pierre        [X] |
|                  +----+   |   |                   |
|                  | H  |   |   | Messages          |
|                  | M  |   |   | (flex: 1)         |
|                  | I  |   |   | [Input]  [Send]   |
|                  +----+   |   +-------------------+
|                           |
+---------------------------+

Panel Container

<div
  className={`
    fixed z-[200] flex flex-col overflow-hidden bg-navy-900
    max-sm:inset-0 max-sm:w-full max-sm:h-full
    max-sm:rounded-none max-sm:border-0
    sm:bottom-20 sm:right-6 sm:w-[360px]
    sm:max-h-[calc(100vh-100px)]
    sm:rounded-2xl sm:border sm:border-white/[0.08]
    sm:shadow-[0_16px_48px_rgba(0,0,0,0.5)]
  `}
  style={{
    overscrollBehavior: "none",
    boxSizing: "border-box",
    maxWidth: "100vw",
  }}
>

Every class explained:

| Class | Purpose |
|---|---|
| fixed z-[200] | Positioned above all page content including navigation |
| flex flex-col overflow-hidden | Vertical flex layout; nothing escapes the panel bounds |
| max-sm:inset-0 | Fullscreen on mobile — top: 0; right: 0; bottom: 0; left: 0 |
| max-sm:w-full max-sm:h-full | Explicit dimensions for mobile containment |
| sm:bottom-20 sm:right-6 | Floating position on desktop (above a toggle button) |
| sm:w-[360px] | Fixed width on desktop |
| sm:rounded-2xl sm:border | Visual chrome on desktop only |
| max-width: 100vw | Hard cap prevents any element from exceeding the viewport |
| overscrollBehavior: none | Prevents rubber-band bounce on iOS |

Flexbox Interior Layout

The panel interior uses three sections in a vertical flexbox:

{/* Header — fixed height, never shrinks */}
<div className="shrink-0 border-b px-4 py-3">
  <h3>Pierre</h3>
  <p>AI colleague</p>
  <button onClick={() => setIsOpen(false)} aria-label="Close chat">
    X
  </button>
</div>

{/* Messages — fills all remaining space, scrollable.
    min-h-0 lets this flex child shrink below its content height. */}
<div
  ref={messagesRef}
  className="chat-messages min-h-0 flex-1 overflow-y-auto px-4 py-4"
  style={{ overscrollBehavior: "contain" }}
>
  {/* Message bubbles rendered here */}
</div>

{/* Input — fixed height, pinned to bottom */}
<div
  className="shrink-0 overflow-hidden border-t px-3 py-3"
  style={{
    paddingBottom: "calc(0.75rem + env(safe-area-inset-bottom, 0px))",
  }}
>
  <form className="flex gap-2">
    <input className="min-w-0 flex-1 text-base ..." />
    <button className="shrink-0 ..." type="submit">Send</button>
  </form>
</div>

Critical flexbox details:

| Pattern | Why |
|---|---|
| shrink-0 on header and input | Prevents them from compressing when messages overflow |
| flex-1 overflow-y-auto on messages | Fills remaining space and scrolls independently |
| overscroll-contain on messages | Prevents scroll chaining to the page behind the panel |
| overflow-hidden on input wrapper | Hard boundary prevents content from exceeding the panel |
| min-w-0 on the input element | Allows the flex item to shrink below its content width (critical for mobile) |
| shrink-0 on the Send button | Prevents the button from being compressed by the input |

Body Scroll Locking

When the chat is open on mobile, the page behind must not scroll. overflow: hidden on <body> alone is insufficient — iOS Safari ignores it during rubber-band scrolling. The reliable fix:

useEffect(() => {
  if (!isOpen) return;
  const mobile = window.innerWidth < 640;
  if (!mobile) return;

  const scrollY = window.scrollY;
  document.body.style.overflow = "hidden";
  document.body.style.position = "fixed";
  document.body.style.width = "100%";
  document.body.style.top = `-${scrollY}px`;

  return () => {
    document.body.style.overflow = "";
    document.body.style.position = "";
    document.body.style.width = "";
    document.body.style.top = "";
    window.scrollTo(0, scrollY);
  };
}, [isOpen]);

Why position: fixed?

  1. overflow: hidden alone doesn't prevent iOS rubber-band scrolling
  2. position: fixed removes the body from the scroll flow entirely
  3. Save scrollY before locking and restore on cleanup — otherwise the page jumps to the top when the user closes the chat

iOS Input Zoom Prevention

This is one of the most common mobile chatbot bugs. All iOS browsers (Safari, Chrome, Firefox — they all use WebKit) automatically zoom in when the user focuses an input with a computed font-size less than 16px. The zoom shifts the viewport to the right and clips content, and it persists even after the keyboard closes.

The fix is simple — set the input font size to at least 16px:

<input
  className="text-base ..." /* text-base = 16px */
  placeholder="Ask a question..."
/>

In Tailwind CSS, text-base is font-size: 1rem (16px) — exactly the threshold iOS respects. This single class prevents the auto-zoom entirely.

Do not use maximum-scale=1 in the viewport meta tag as a workaround. It prevents all pinch-to-zoom, which is an accessibility violation.

Viewport Meta for Keyboard Handling

Add the interactiveWidget property to tell the browser to resize the layout viewport when the virtual keyboard appears:

// Next.js app/layout.tsx
import type { Viewport } from "next";

export const viewport: Viewport = {
width: "device-width",
initialScale: 1,
interactiveWidget: "resizes-content",
};

The three possible values:

| Value | Behavior | When to Use |
|---|---|---|
| resizes-visual | Only the visual viewport shrinks; layout unchanged | Default in modern Chrome |
| resizes-content | Both viewports shrink — CSS units reflect keyboard | Chat apps, forms — input stays visible |
| overlays-content | Nothing resizes; keyboard overlays content | Games, full-screen media |

Browser support: Chrome 108+, Firefox 132+. Safari does not yet support interactive-widget, but handles keyboard layout reasonably with position: fixed; inset: 0.

Safe Area Insets

Devices with notches or home indicators (iPhone X and later) have safe areas — regions where content can be clipped by hardware. The input area needs extra bottom padding:

<div style={{
  paddingBottom: "calc(0.75rem + env(safe-area-inset-bottom, 0px))"
}}>

The env() function reads the device's safe area inset. The 0px fallback applies on devices without safe areas.
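One caveat worth knowing: on iOS, the env(safe-area-inset-*) variables generally resolve to 0 unless the page opts into the full screen with viewport-fit=cover. In Next.js that is one more field on the Viewport export; a sketch, assuming the app router layout shown earlier in this guide:

```typescript
// Next.js app/layout.tsx — viewportFit: "cover" lets content extend into
// the safe areas, which is what makes env(safe-area-inset-*) nonzero on iOS.
import type { Viewport } from "next";

export const viewport: Viewport = {
  width: "device-width",
  initialScale: 1,
  interactiveWidget: "resizes-content",
  viewportFit: "cover",
};
```

With viewport-fit=cover set, the safe-area padding on the input wrapper actually takes effect on notched devices.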

Scrollbar Styling

For a polished look, add thin, subtle scrollbars to the messages area:

.chat-messages::-webkit-scrollbar {
  width: 4px;
}
.chat-messages::-webkit-scrollbar-thumb {
  background: rgba(255, 255, 255, 0.1);
  border-radius: 2px;
}

Touch Targets

All interactive elements should meet minimum touch target sizes:

  • Toggle button: 48x48px minimum (the Apple HIG floor is 44x44pt)
  • Send button: Generous padding (px-4 py-2.5)
  • Starter question buttons: Full-width with py-2.5 padding
  • Close button: 32x32px with hover area

Starter Questions

An empty chat window creates "blank page anxiety." Provide 3-4 suggested questions that demonstrate what the chatbot can do:

const STARTER_QUESTIONS = [
  "What is Bobby's Salesforce experience?",
  "What AI projects has Bobby worked on?",
  "Tell me about Bobby's current role",
  "Is Bobby a good fit for a Solutions Architect role?",
];

// In the messages area, when no messages exist:
{messages.length === 0 && (
  <div className="flex h-full flex-col justify-end gap-2 pb-2">
    {STARTER_QUESTIONS.map((q) => (
      <button
        key={q}
        onClick={() => sendMessage(q)}
        className="w-full rounded-xl border border-white/[0.06]
                   bg-white/[0.03] px-3.5 py-2.5 text-left text-[0.82rem]
                   text-slate-400 hover:bg-white/[0.06] hover:text-slate-200"
      >
        {q}
      </button>
    ))}
  </div>
)}

Position them at the bottom of the messages area (justify-end) so they appear just above the input — right where the user's attention is.


Toggle Button and Open/Close

On desktop, the toggle button stays visible and the chat opens as a floating panel. On mobile, the chat goes fullscreen and a close button appears in the header.

{/* Toggle — hidden when chat is open */}
{!isOpen && (
  <button
    onClick={() => setIsOpen(true)}
    className="fixed bottom-6 right-6 z-[200] h-12 w-12
               rounded-full bg-gradient-to-br from-navy-600 to-navy-900
               border border-white/10
               shadow-[0_4px_16px_rgba(0,0,0,0.4)]
               hover:scale-[1.08] max-sm:bottom-4 max-sm:right-4"
    aria-label="Chat with assistant"
  >
    {/* Chat bubble SVG icon */}
  </button>
)}

The close button should be visible on all screen sizes in the header — not just mobile. Users on desktop also benefit from an obvious close affordance within the panel.


Error Handling

Handle errors gracefully at every level:

API Route Errors

  • Missing API key (503) — "Chat service is not configured"
  • Invalid input (400) — "Messages array is required"
  • Stream interruption — Send an SSE error event so the client can display a message
  • Unexpected errors (500) — Generic error response

Client-Side Errors

  • Network failure — Display "Connection error. Please try again."
  • Empty stream — Replace the empty assistant message with an error message
  • JSON parse errors — Skip malformed chunks (already handled by the SyntaxError catch)

The User Always Sees Something

Never leave the user staring at a loading indicator forever. If the stream fails, replace the typing indicator with an error message:

} catch (err) {
  const errorMessage =
    err instanceof Error ? err.message : "Something went wrong";
  setMessages((prev) => {
    const last = prev[prev.length - 1];
    if (last?.role === "assistant" && !last.content) {
      const updated = [...prev];
      updated[updated.length - 1] = {
        role: "assistant",
        content: `I encountered an issue: ${errorMessage}. Please try again.`,
      };
      return updated;
    }
    return [...prev, { role: "assistant", content: `Error: ${errorMessage}` }];
  });
}

Production Checklist

Before shipping your chatbot:

Security

  • API key stored in environment variable, never in client code
  • API route validates input before calling the LLM
  • No sensitive data (API keys, internal URLs) in system prompts visible to users

Mobile UX

  • Panel goes fullscreen on mobile (inset: 0)
  • Explicit w-full h-full max-width: 100vw on mobile panel
  • Body scroll locked when chat is open on mobile
  • Input font size >= 16px (prevents iOS auto-zoom on all WebKit browsers)
  • Safe area insets respected (env(safe-area-inset-bottom))
  • overscroll-contain prevents scroll chaining
  • overflow-hidden on input wrapper as containment boundary
  • Input uses min-w-0 and Send button uses shrink-0 (flexbox overflow fix)
  • Touch targets >= 44x44px
  • interactiveWidget: resizes-content in viewport meta

Streaming

  • SSE stream uses a buffer for incomplete lines
  • [DONE] sentinel handled correctly
  • SyntaxError caught for malformed JSON chunks
  • Input refocuses after sending (keeps keyboard open)
  • Empty assistant message appears immediately (shows typing indicator)

UX

  • Starter questions provided (no blank page anxiety)
  • Auto-scroll on new messages
  • Markdown rendering for assistant responses
  • Typing indicator visible during streaming
  • Error messages displayed gracefully
  • Close button visible on all screen sizes
  • Messages area uses flex-1 + overflow-y-auto + min-h-0
