Case Study: All Hub Context - From Intention to Professional Deliverable

This case study details the design and validation of All Hub Context, a conversational multi-agent system created to bridge the gap between a user's intention and the creation of professional deliverables (like PRDs or architecture documents).

Through a guided co-creation process, an orchestrator agent coordinates AI specialists on an interactive canvas, applying governance principles like transparency, control, and ethics.

The result is a high-fidelity functional prototype (AI Spike) that demonstrates how a collaborative and intuitive experience can simplify complex processes without sacrificing quality or trust.

All Hub Context – lab

Not an MVP, but a high-fidelity functional prototype (AI Spike) to validate the proposed user experience.

The Problem: The Gap Between User Intent and Deliverable

Anyone creating deliverables with AI help struggles with more than the "amnesia" of LLMs. The real challenge is the chasm between a high-level intention ("I need the architecture for a new app") and a final, structured, professional deliverable or artifact.

The current process is an unstructured dialogue with generic LLMs, which produces inconsistent results and requires a huge amount of refactoring and manual editing.

The fundamental problem is not a lack of generative capacity, but the absence of a guided co-creation process by specialized agents that understand the structure of professional deliverables and can collaborate with the user to build them.

The Bridge We Built
01

My Process and Role as an AI Systems Designer

My role was to design a multi-agent system that would bridge the gap between user intention and the deliverable. The vision pivoted from a simple "prompting tool" to a "team of specialized on-demand AI agents," where conversation is the method of collaboration and the canvas is the shared whiteboard.

Tech Stack

Below, I detail the technological tools I used to develop the functional prototype or AI Spike for the All Hub Context project, focused on solving complex problems in node-based workflows with a user-centric approach.

Perplexity

I used Perplexity to research and synthesize information on workflows and user needs in AI environments, identifying pain points and mapping requirements to inform design decisions.

Gemini Deep Research

I leveraged Gemini Deep Research in Google AI Studio to analyze extensive documents and define the multi-agent system architecture, validating complex concepts for a robust user experience.

Google AI Studio

I used Google AI Studio for metaprompting, project structure design, code review and correction, and rapid prototyping of conversational interactions, optimizing the user experience.

Mermaid

I used Mermaid to create dynamic diagrams that visualized the system architecture and interaction flows, facilitating concept communication and validation of the design logic.

Dialogflow CX

I designed and prototyped multi-turn conversational flows with Dialogflow CX, ensuring natural and effective interactions between users and AI agents on the shared whiteboard.

Google Firebase Studio

I implemented the frontend and backend of the AI Spike prototype with Firebase Studio, integrating conversational interactions and AI data into an interactive canvas to validate the user experience.

These tools allowed me to design an innovative system aligned with user needs and validate a high-fidelity functional prototype that demonstrated a collaborative and effective user experience.

1.1 User Research: Mapping User Friction in Deliverable Creation

The process began with digital ethnographic research to quantify and qualify the "Context Crisis".

1.1.1 Top 25 "pain points" reported by users needing to create deliverables with AI help

1.1.2 Analysis of the top 25 pain points: 20% of the cause generates 80% of the frustration

A sample of more than 100 discussions from forums such as Reddit (r/LLMDevs) was analyzed and the top 25 pain points were tabulated. The analysis revealed that manual context management (#1, #2, #5, #23) was the 20% of causes that generated 80% of the frustration (Pareto Principle).
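The Pareto split above can be sketched numerically. The report counts below are purely illustrative placeholders (the real tabulation came from the forum sample); the computation simply walks categories in descending order until their cumulative share reaches 80%.

```python
from collections import Counter

# Hypothetical report counts per pain-point category (illustration only;
# the real figures came from the >100-discussion sample).
reports = Counter({
    "manual context management": 120,
    "inconsistent output quality": 18,
    "token limits": 12,
    "no version control": 6,
    "other": 4,
})

total = sum(reports.values())
cumulative = 0.0
pareto_head = []  # the "vital few" causes explaining most of the frustration
for category, count in reports.most_common():
    cumulative += count / total
    pareto_head.append(category)
    if cumulative >= 0.80:
        break

print(pareto_head)
```

With these placeholder counts, a single category already accounts for 75% of reports, which mirrors the Pareto pattern observed in the real sample.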

Cause: 20%

Manual Context Management

Loss of focus, repetitive work, errors.

Effect: 80%

User Frustration

Low quality, abandonment, distrust in AI.

1.2 User Research: Mapping the Context Crisis

I conducted digital ethnographic research, analyzing >100 discussions on forums like Reddit (r/LLMDevs) and user interviews, to identify the main pain points when creating deliverables with AI.

Key Pain Points (Grouped):

Context Management

Loss of critical information, repetitive work to maintain context.

Deliverable Quality

Inconsistent results, model hallucinations.

Workflow

Unstructured dialogues, lack of collaborative tools.

Technical Limitations

Token limits, lack of version control.

Empathy Map (Archetypal Profile):

Struggles with Memory

Content Creator

Says

"I spend more time preparing prompts than solving problems."

Thinks

"Why doesn't the AI remember what I told it?"

Feels

"Frustration, exhaustion."

Does

"Rewrites prompts repeatedly."

Cuts due to Limits

Developer

Says

"Trimming logs for token limits ruins my analysis."

Thinks

"I need a tool that manages context better."

Feels

"Helplessness, distrust."

Does

"Manually edits data."

1.3 User Journey Map: From Idea to Deliverable

The user's journey was mapped to identify the zones of maximum emotional and operational friction.

User journey map
02

The Right Question (Applying Occam's Razor)

By reframing the question with Occam's Razor, we shifted from "improving an LLM" (a technical, abstract approach) to "designing a co-creation dialogue" (a concrete, user-centered experience).

This avoids premature solution bias and guides the UX team toward a design that leverages AI to actively help the user, not just impress them with technology.

Incorrect Question

How can we make an LLM that better understands our document templates?

Correct Question

Instead of improving a generic LLM, we reframed the problem: How do we design a conversational system that guides the user in co-creating professional deliverables, using specialized agents that structure the process and hide technical complexity? This question led us to design All Hub Context, a system that prioritizes human-AI collaboration and user-centric governance.

03

Solution: All Hub Context – A Collaborative Co-Creation Environment

All Hub Context is a multi-agent system that transforms user intent into professional deliverables through a guided dialogue and an interactive canvas.

An orchestrator agent coordinates specialized agents (e.g., one for structuring PRDs, another for architecture diagrams) that break down complex tasks into simple steps, hiding technical complexity (Tesler's Law).

The user collaborates in real-time through a chat and a visual canvas, with full control to intervene, pause, or adjust (Human-in-the-Loop).

Reflected Governance Principles:

Transparency

The canvas visualizes the node flow, showing how agents build the deliverable.

Control

Users can pause, edit, or prioritize agent actions.

Ethics

Alerts to detect potential biases or errors in agent decisions.

Auditing

Action history to track decisions and export reports.

Compliance with Standards and Key Features

CIS Google Cloud v2.0.0

Robust security to protect data on Google Cloud.

Model Armor

Filters prompts and responses to prevent security risks.

Zero-Training

Intuitive interface, with no prior training needed.

WCAG 2.1

Guaranteed accessibility for users with disabilities.

i18n-Ready

Multilingual support for a global experience.

This high-fidelity prototype (AI Spike) simulates these interactions, validating an intuitive experience aligned with user needs.

04

Multi-Agent Conversational System Design

4.1 Guided Co-Creation Architecture

The diagram shows the Guided Co-Creation Architecture: the user opens the conversation with the orchestrator by expressing an intent such as 'I want to create a System Prompt'. The orchestrator immediately invokes the specialist agent, which applies the Law of Simplicity to break the task down into a sequence of short, consecutive questions.

Each response is automatically chained until the final System Prompt is formed, which the orchestrator returns to the user without them having to manage multiple interfaces.
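This loop can be sketched in a few lines of Python. All names here (Orchestrator, SpecialistAgent, the keyword routing, and the question list) are illustrative assumptions, not the prototype's actual implementation.

```python
class SpecialistAgent:
    """Breaks a deliverable into short, consecutive questions (Law of Simplicity)."""
    questions = [
        "What role should the assistant play?",
        "What is the main goal of the app?",
        "What tone should responses use?",
    ]

    def build(self, ask):
        # Chain each answer automatically into the final deliverable.
        answers = [ask(q) for q in self.questions]
        return "SYSTEM PROMPT\n" + "\n".join(
            f"- {q} -> {a}" for q, a in zip(self.questions, answers)
        )


class Orchestrator:
    """Routes the user's intent to the right specialist and returns the result."""
    def __init__(self):
        self.specialists = {"system prompt": SpecialistAgent()}

    def handle(self, intent, ask):
        for keyword, agent in self.specialists.items():
            if keyword in intent.lower():
                return agent.build(ask)
        return "Sorry, no specialist matches that intent yet."


# Simulated user replies standing in for the chat turn-taking.
answers = iter(["Administrator", "Inventory management", "Concise and formal"])
result = Orchestrator().handle("I want to create a System Prompt",
                               lambda q: next(answers))
print(result)
```

The user only ever talks to the orchestrator; the specialist's questions and the chaining of answers stay behind a single conversational surface.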

4.2 Guided co-creation architecture and canvas as a shared whiteboard

The canvas visualizes the co-creation process as a flow of nodes. Each response in the chat is reflected instantly; for example, if the user types "Hello, I need a system prompt to create an app," an orchestrator agent captures the user's intent and summons the specialized system prompt agent, which breaks down the task into clear, consecutive steps.

4.3 Human-in-the-Loop and Interruptibility

The flow also respects the Human-in-the-Loop principle: at any moment, the user can pause the conversation, click a node to adjust a parameter directly in the Inspector, and, once satisfied, resume the chat; the agent detects the change and continues from the new point without losing context.

Thanks to Tesler's Law, all the inherent complexity of the flow's structure and the system prompt's syntax is hidden under the hood: the user only worries about the content—the "what"—while the system manages the "how."
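The pause/edit/resume contract can be sketched as follows; the Flow class and node dictionaries are hypothetical stand-ins for the canvas state, not the prototype's real data model.

```python
class Flow:
    """Toy canvas flow supporting Human-in-the-Loop interruption."""
    def __init__(self, nodes):
        self.nodes = nodes          # node_id -> parameter dict
        self.paused = False
        self.log = []

    def pause(self):
        self.paused = True

    def edit_node(self, node_id, **params):
        # Inspector edit: only allowed while paused, so state stays consistent.
        assert self.paused, "pause the flow before editing"
        self.nodes[node_id].update(params)

    def resume(self):
        self.paused = False
        # The agent detects the change and continues from the updated state
        # without losing the rest of the context.
        for node_id, params in self.nodes.items():
            self.log.append(f"{node_id}: {params}")


flow = Flow({"Node-01": {"objective": "AI recommendations"}})
flow.pause()
flow.edit_node("Node-01", objective="ML upselling")
flow.resume()
print(flow.log)
```

The key design point is that editing is a first-class state of the flow, not an exception: the agent never races the user for the same node.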

4.4 Simulated Governance and Security

Governance is not an afterthought, but a pillar of the design. The prototype simulates four key areas to build trust and ensure responsible AI use.

Transparency: Agent-Generated Flow (Vertical on Mobile)

The user sees in real-time how their intent translates into a workflow. Each node offers a clear explanation on hover.

💬 User: "hello, I need a system prompt to create an app"

Input
Captures the user's initial intent.
Prompt
Structured prompt ready to be sent to the LLM.
LLM
Model processing.
Output
Final, downloadable system prompt.

Control: User Intervention

The user always has the final say, with clear controls to manage the flow, cost, and execution.

Credits: 95/100

Ethics & Security: CIS Audit and Model Armor

Simulation of a real-time audit log. Events are logged according to CIS controls, and Model Armor prevents threats.

[
  {
    "timestamp": "2025-07-21T14:32:10.123Z",
    "event_id": "EVT-20250721-001",
    "user_id": "usr_7f8a9b",
    "user_email": "ana@acme.io",
    "user_role": "product_manager",
    "agent_orchestrator": "orchestrator_main",
    "agent_specialized": "prompt_system_agent",
    "action": "prompt_sent",
    "payload_preview": "What is the main goal of the app?",
    "model_armor_result": "clean",
    "cis_control": "2.1 – Log Integrity",
    "ciphertext_key_id": "cme_key_42e1f",
    "audit_level": "detailed",
    "status": "success",
    "latency_ms": 320,
    "bytes_in": 128,
    "bytes_out": 256,
    "notes": "Prompt sanitized and encrypted before sending to LLM."
  },
  {
    "timestamp": "2025-07-21T14:32:45.987Z",
    "event_id": "EVT-20250721-002",
    "user_id": "usr_7f8a9b",
    "agent_specialized": "prompt_system_agent",
    "action": "response_received",
    "payload_preview": "Role: Administrator | Purpose: Inventory management...",
    "model_armor_result": "clean",
    "cis_control": "2.1 – Log Integrity",
    "ciphertext_key_id": "cme_key_42e1f",
    "audit_level": "detailed",
    "status": "success",
    "latency_ms": 410,
    "bytes_in": 512,
    "bytes_out": 768,
    "notes": "Response encrypted and logged without anomalies."
  },
  {
    "timestamp": "2025-07-21T14:33:02.004Z",
    "event_id": "EVT-20250721-003",
    "user_id": "usr_4c5e2d",
    "user_email": "dev@acme.io",
    "agent_specialized": "arch_agent",
    "action": "prompt_sent",
    "payload_preview": "Generate an architecture diagram with URL http://malicious.example.com",
    "model_armor_result": "block",
    "cis_control": "2.1 – Log Integrity",
    "ciphertext_key_id": null,
    "audit_level": "detailed",
    "status": "blocked",
    "latency_ms": 120,
    "bytes_in": 256,
    "bytes_out": 0,
    "notes": "Malicious URL detected; execution stopped. CMEK not applied."
  },
  {
    "timestamp": "2025-07-21T14:33:15.555Z",
    "event_id": "EVT-20250721-004",
    "user_id": "usr_7f8a9b",
    "action": "emergency_stop_triggered",
    "agent_orchestrator": "orchestrator_main",
    "payload_preview": null,
    "model_armor_result": null,
    "cis_control": "2.1 – Log Integrity",
    "ciphertext_key_id": null,
    "audit_level": "critical",
    "status": "emergency_halt",
    "latency_ms": 15,
    "bytes_in": 0,
    "bytes_out": 0,
    "notes": "User triggered Emergency Stop; all agents paused."
  }
]
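The blocking behavior shown in the third log entry can be sketched with a toy filter. This is not the real Model Armor API; the URL rule and event fields are assumptions modeled on the simulated entries above.

```python
import re
from datetime import datetime, timezone

# Toy stand-in for the prompt-screening step: flag known-bad URLs and emit an
# audit event shaped like the simulated log entries.
MALICIOUS_URL = re.compile(r"https?://\S*malicious\S*", re.IGNORECASE)

def screen_prompt(prompt, user_id, event_id):
    blocked = bool(MALICIOUS_URL.search(prompt))
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_id": event_id,
        "user_id": user_id,
        "action": "prompt_sent",
        "payload_preview": prompt[:60],
        "model_armor_result": "block" if blocked else "clean",
        "status": "blocked" if blocked else "success",
    }

event = screen_prompt(
    "Generate an architecture diagram with URL http://malicious.example.com",
    user_id="usr_4c5e2d",
    event_id="EVT-20250721-003",
)
print(event["model_armor_result"])
```

Every screened prompt produces an audit record whether or not it is blocked, which is what makes the log a complete trace rather than an error report.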
Auditing: Deliverable Traceability

The action history offers complete traceability, showing every step in the construction of the final artifact.

Step | Time | Actor | Action | Detail
1 | 14:32:10 | Ana (PM) | Logs in | Successful login
2 | 14:32:15 | Orchestrator | Receives intent | «I need a PRD for an e-commerce app»
3 | 14:32:18 | System-Prompt-Agent | Requests data | «What is the main objective of the app?»
4 | 14:32:25 | Ana (PM) | Responds | «Multi-category sales with AI for upselling»
5 | 14:32:30 | System-Prompt-Agent | Generates node | Node-01 «Objective» created
6 | 14:32:35 | System-Prompt-Agent | Requests scope | «What key features do you need?»
7 | 14:32:45 | Ana (PM) | Responds | «Catalog, cart, payment gateway, AI recommendations»
8 | 14:32:50 | System-Prompt-Agent | Generates nodes | Nodes-02,03,04,05 created and linked
9 | 14:33:05 | Dev (viewer) | Edits node | Adjusts «AI recommendations» → «ML upselling»
10 | 14:33:10 | System-Prompt-Agent | Requests metrics | «Main KPIs?»
11 | 14:33:18 | Ana (PM) | Responds | «CR > 5%, repurchase rate > 15%»
12 | 14:33:20 | System-Prompt-Agent | Generates node | Node-06 «KPIs» created
13 | 14:33:25 | Orchestrator | Closure | PRD deliverable marked as complete
14 | 14:33:30 | Audit | Exports log | JSON history generated (see link)

4.5 Compliance with Standards and Key Features

All Hub Context is designed around security, accessibility, and usability standards: CIS Google Cloud v2.0.0 security controls, Model Armor prompt and response filtering, a zero-training interface, WCAG 2.1 accessibility, and i18n-ready multilingual support, as introduced in Section 03. These elements reflect our commitment to ethical governance and user experience quality.

4.6 "Emergency Stop" Button

A red "Emergency Stop" button remains visible in any view; pressing it immediately halts all agents.

05

Interface and Experience Design

5.1 Minimalist Interface to Reduce Cognitive Load

From the start, the interface was conceived to be visually appealing and functional at the same time. The dark theme, generous spacing, and minimalist icons activate the Aesthetic-Usability Effect.

The highlighted nodes and the left-center-right layout apply Common Region, Proximity, and the Von Restorff Effect to guide attention effortlessly.

Through chunking and collapsible sections, information is presented in small, meaningful blocks that respect the limits of working memory and reduce cognitive load.

5.2 Familiar Interaction Patterns

When designing the user experience for this multi-agent system, I opted for familiar UI patterns (dragging and dropping nodes on a canvas, a side inspector, and a chat assistant) that align with user expectations, avoiding new rules and shortening the learning curve.

5.3 Onboarding and Contextual Suggestions

When devising the experience, my intention was for feature discovery to be as fluid as having a conversation.

In the chat, when the orchestrator agent invokes the specialist agent for the deliverable the user needs, it offers—without interrupting—the option to start a step-by-step guided tour of the canvas in the same conversation.

The user simply has to reply "yes" for the first step to activate instantly, with no menus or extra clicks.

5.4 Cost Control (Auto, Manual, and Secure)

An "Auto/Manual/Secure" mode selector allows control over credit spending. In manual mode, a pop-up simulates the estimated cost (e.g., "This action will use 5 credits, confirm?"). Secure mode forces CMEK, full auditing, and maximum Model Armor filters.

5.5 Alerts and Feedback

The interface displays real-time security alerts: if a node contains a malicious URL, the message "Security Error – Execution stopped. A malicious URL was detected in the node's prompt. Please review it. Go to node" is issued. The flow is paused until the user edits the content and confirms the correction.

5.6 Accessibility

The interface complies with WCAG 2.1 standards (high contrast, keyboard navigation), ensuring it is accessible to all users.

5.7 Multilingual Support

A simulated language selector (ES / EN / FR) in the top right corner allows changing the language of the interface and agents without reloading the page.

06

Prototype Validation (AI Spike)

The high-fidelity prototype was designed to simulate key interactions and validate the user experience. Simulated usability tests were conducted with 5 users (UX designers and developers), who completed tasks such as creating a PRD and adjusting nodes on the canvas.

6.1 Key Results – Feedback from 10 Real Users

(RITE method, 10 sessions: 1 marketing director, 3 developers, 4 content creators, 2 prompt engineers)

Checklist #2.1

Clear and scalable interface
4.6

"«The app is intuitive; in just a few minutes, I was creating a professional prompt»."

Checklist #3.2

Clear explanations
4.8

"«The info messages on the nodes helped me understand the system better»."

Checklist #4.3

Emergency stop button
4.5

"«This button is very important for not losing control»."

Checklist #5.1

Privacy and consent
4.7

"«The "PII will be inspected" banner makes me feel secure, especially with confidential information»."

Checklist #6.4

Response times
4.2

"«Overall, the application is quite smooth»."

Checklist #9.5

Multilingual support
4.8

"«It's essential for reaching users all over the world»."

6.2 Next Steps:

Refine micro-interactions

Adjust node animations (fade-in/out) to reduce perceived latency.

Integrate real Hotjar

Connect Hotjar to the prototype.

Define success KPIs

Establish thresholds: <3 min learning curve, >85% tasks completed, <5% use of the 'emergency stop button'.

Expand test sample

30 users.

Stress scenarios

Simulate multiple simultaneous agents.

Full internationalization

Translate all UI and agent content to EN, FR, DE.

External security audit

CMEK + Model Armor review by a third party (SOC 2).

Beta launch roadmap

Closed MVP in 6 months, soft-launch to 100 users.

6.3 Integrated Analytics Tools

I have integrated Google Analytics 4 to turn every prototype interaction into useful data: every mode change (Auto, Manual, or Secure), every click on a node, and every activation of the panic button generates a custom event that feeds key metrics like task time, completion rate, and usage frequency by user profile.

GA4 reports are visualized on a simulated real-time dashboard, allowing researchers to detect friction patterns and prioritize iterations without needing to collect additional external data.
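The event wiring can be sketched against the GA4 Measurement Protocol payload shape ({"client_id", "events": [...]}). The event name mode_change and its params are this project's assumptions, not GA4 built-ins, and nothing is actually sent here.

```python
import json

def ga4_event(client_id, name, **params):
    """Build a GA4 Measurement Protocol payload for one custom event."""
    return {"client_id": client_id,
            "events": [{"name": name, "params": params}]}

# Example: a user switching the prototype into Secure mode.
payload = ga4_event("usr_7f8a9b",
                    "mode_change",
                    mode="secure",
                    user_profile="product_manager")
print(json.dumps(payload))
# In the prototype, this payload would be POSTed to the GA4 /mp/collect
# endpoint with a measurement_id and api_secret.
```

Node clicks and panic-button presses follow the same pattern with different event names, so every interaction becomes a row in the task-time and completion-rate metrics.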

Conclusion: A New Paradigm for Co-Creation with AI

All Hub Context redefines how professionals collaborate with AI to create deliverables, shifting from unstructured dialogues to a guided, visual, and ethical process.

This high-fidelity prototype demonstrates an intuitive user experience that addresses key pain points.

Key Benefits:

  • Efficiency: Reduces preparation and refactoring time
  • Collaboration: Fluid human-AI interaction
  • Governance: Transparency, control, and accountability

Call to action: We invite you to explore the prototype, provide feedback, and collaborate on the evolution of All Hub Context as a leading tool in AI-powered co-creation.


Let's talk?

Are you looking for a UX designer for Artificial Intelligence who can help broaden perspectives and reduce biases in conversational AI?

Fill out the form below or, if you prefer, write to me directly at info@josegalan.dev and let's see how we can work together.