Introduction

What is BiModal Design?

BiModal Design (formerly AgentUX) is a validated design framework for creating dual-mode interfaces that serve both human users and AI agents with equal effectiveness.

The framework emerged from a critical discovery: most AI agents (~80%) make simple HTTP requests without JavaScript execution. This means beautifully designed, semantically perfect interfaces can be completely invisible to agents if they rely on client-side rendering.

The Critical Discovery

Through real-world implementation and research, we identified a critical blind spot in conventional UX optimization: it covers WHAT to put in the DOM (semantic structure, ARIA roles, structured data) in depth, but rarely addresses HOW to ensure that DOM exists for agents in the first place.

⚠️ Key Insight

When an AI agent accesses a web page, it makes a simple HTTP GET request (like curl). The agent parses HTML WITHOUT running JavaScript. It sees only what's in that initial HTML.

The curl Test

Every BiModal interface must pass this foundational test:

$ curl -s https://yoursite.com | grep "main content"

  • Should return: your actual content
  • Should NOT return: <div id="root"></div>

Try it on your site right now. Open your terminal and run this test on your production website. What do you see?
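The curl test can also be scripted. The sketch below (the function name `ssr_visible` is our own, not part of the framework) strips tags from a fetched HTML payload and checks whether any text content survives; an empty SPA shell like `<div id="root"></div>` leaves nothing behind. The line-based `sed` cleanup is a rough approximation, adequate for a quick smoke test rather than real HTML parsing.

```shell
# Sketch: classify raw HTML (as fetched with `curl -s URL`) by whether it
# contains agent-visible text outside its markup. Hypothetical helper name.
ssr_visible() {
  html="$1"
  # Drop inline scripts, then all tags, then whitespace; inspect what's left.
  text=$(printf '%s' "$html" \
    | sed -e 's/<script[^>]*>.*<\/script>//g' -e 's/<[^>]*>//g' \
    | tr -d '[:space:]')
  if [ -n "$text" ]; then
    echo "agent-visible"
  else
    echo "invisible to agents"
  fi
}

ssr_visible '<div id="root"></div>'          # → invisible to agents
ssr_visible '<main><h1>Hello</h1></main>'    # → agent-visible
```

In practice you would feed it live output, e.g. `ssr_visible "$(curl -s https://yoursite.com)"`.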

The BiModal Concept

The "BiModal" name reflects two fundamental modes of interaction:

  1. Human Mode: Rich, interactive experiences with JavaScript enhancement, animations, and real-time updates
  2. Agent Mode: Accessible, semantic content present in the initial HTTP payload that agents can parse and understand

The key insight: these aren't separate versions—they're the same interface, experienced through different interaction models.
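As an illustration of one interface serving both modes, the sketch below writes a hypothetical page (the filename, content, and `/app.js` path are examples, not framework artifacts): the real content sits in the initial payload for Agent Mode, while a deferred script enhances the same markup for Human Mode.

```shell
# Sketch: a single HTML file that serves both interaction modes.
cat > bimodal-example.html <<'HTML'
<!doctype html>
<html lang="en">
  <body>
    <!-- Agent Mode: real content present in the initial payload -->
    <main>
      <h1>Product Catalog</h1>
      <p>42 items available.</p>
    </main>
    <!-- Human Mode: JavaScript progressively enhances the same markup -->
    <script src="/app.js" defer></script>
  </body>
</html>
HTML

# The curl test passes: the content exists before any JavaScript runs.
grep -c "Product Catalog" bimodal-example.html   # → 1
```

Note there is no separate "agent version": removing JavaScript degrades the experience but never hides the content.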

Performance Evidence

Recent benchmarks reveal critical performance gaps between conventional and BiModal-optimized interfaces:

Interface Type        Human Success   Agent Success   Gap
Conventional Web UI   72-89%          12-25%          60% gap
BiModal Optimized     72-89%          42-70%          19-47% gap
API-Augmented         72-89%          65-85%          4-24% gap

Sources: WebArena, VisualWebArena, ST-WebAgentBench studies (2024-2025)

Why It Matters Now

The web has fundamentally transformed from a human-only medium to a collaborative space where AI agents perform critical business functions:

  • Autonomous Web Agents: Navigate and interact with websites independently
  • Agentic Systems: Multi-agent workflows coordinating complex tasks
  • Web Automation Agents: Execute repetitive tasks like form completion
  • Conversational Interface Agents: Bridge natural language to web actions

📊 Market Evidence

Microsoft Build 2025 introduced the "agentic web" with the NLWeb protocol. 230,000+ organizations use platforms like Copilot Studio for agent automation. The agent-web interaction revolution isn't coming—it's here.

Prerequisites

This documentation assumes basic familiarity with:

  • HTML5 semantic elements and document structure
  • CSS fundamentals and responsive design
  • JavaScript basics and async operations
  • HTTP request/response cycle
  • Server-side rendering concepts

Pick Your Learning Path

Different developers have different learning styles. Choose your path:

💡 Recommendation

Whatever path you choose, we recommend understanding FR-1 (Initial Payload Accessibility) first—it's the foundation everything else builds upon.