Case Study: All Hub Content - Multi-Agent Visual Co-Creation System

This case study describes the entire UX design process for All Hub Content Lab, a high-fidelity prototype (AI Spike) that simulates a visual co-creation environment for multimodal content.

The goal is to orchestrate a multi-agent system that transforms creators' knowledge into reusable assets, ensuring brand consistency and user trust while drastically reducing production time.

  • Real results: Generation of text and images via Google Gemini API.
  • Flows and simulations: Agent orchestration, governance, usage metrics, and audit logs simulated and validated with users.
  • Strategic objective: Validate the user experience and design hypothesis, not build an MVP.

01. The Problem: The Quantified Context Crisis

Content creators suffer from choice overload and enormous cognitive load. The current process is a chaos of "task switching" between disconnected tools, a friction that not only consumes time but also destroys the coherence of the original idea.

The fundamental problem is not the lack of powerful tools, but the absence of an ecosystem that intelligently manages context.

  • Time lost per day: 2.1 hours, 30% of the productive day (survey, n=25).
  • Tools in the average stack: 11 apps, causing chronic cognitive fragmentation (interviews, 12 creators).
  • Creators who lose consistency: 92%, leading to frustration and rework (survey).
  • Abandonment of complex flows: 68%, meaning lost ideas and opportunities (behavioral analysis).

02. Deep User Research

My process started with data, not assumptions, to discover the true "Jobs to be Done." Based on an exhaustive analysis of 138 different sources, this research reveals a systemic crisis affecting the productivity and well-being of digital content creators.
The main finding is that tool fragmentation and the lack of integrated ecosystems are the root cause of a series of critical problems. Creators face cognitive overload and analysis paralysis ("choice overload") due to the overwhelming number of disconnected options.
One of the most impactful insights is the devastating effect of "context switching," which can consume up to 80% of productive time, destroying creative flow and project consistency.
This inefficiency translates into frustration, widespread burnout, and a tangible loss in content quality. The opportunity, therefore, lies not in creating more tools, but in developing an intelligent ecosystem that manages context, reduces mental load, and unifies the creative workflow.

In-depth Research on the Frustrations of Content Creators in Fragmented Digital Ecosystems

2.1. Research Methodology and Tools

  • Digital ethnography: Qualitative analysis of threads in online communities, using Perplexity and Gemini Deep Research to synthesize discussions and trends.
  • Qualitative interviews (10 sessions): I conducted semi-structured interviews with a heterogeneous group of users, whose profiles ranged from experts with deep technical knowledge to completely non-technical users, ensuring a comprehensive view of the challenges and needs from multiple perspectives.

2.2. The 5 Main Pain Points

The analysis revealed that manual context management was the root cause of most frustrations.
1. Massive productivity loss from "context switching": Creators lose between 20% and 80% of their productive time switching between applications, falling out of the creative "flow" state. This goes beyond copy/pasting; it's a constant fragmentation of thought that forces them to "reload" the mental context over and over again.
2. Cognitive overload and risk of burnout: Processing information flows from multiple disconnected tools exceeds human working-memory capacity (5-9 items). This leads to extreme mental fatigue, degraded work quality, and ultimately widespread burnout, as the testimonials evidenced.
3. Tool fragmentation and information silos: There is no "single source of truth." Data, ideas, and brand guidelines are scattered across platforms, creating silos. This forces duplicated manual work and decisions based on incomplete or outdated information.
4. Loss of creative and brand consistency: Original ideas and brand identity get "diluted" while being transferred between tools. The need to redo pieces is not a user mistake but an inevitable consequence of an ecosystem that cannot maintain conceptual consistency from the initial idea to the final post.
5. Analysis paralysis ("choice overload"): The excess of available tools paralyzes creators. They spend a disproportionate amount of time evaluating and choosing technology instead of creating, which leads to dissatisfaction and, in some cases, project abandonment.

2.3. User Persona: "The Context Juggler"

To bring the data to life, I created an archetype that encapsulates the discovered frustrations.

2.4. The Right Design Question (Applying Occam's Razor)

❌ Incorrect Question (Technical Focus)

"How can we build a system to connect prompt nodes in sequence?"

✅ Correct Question (Design Mission)

"How do we design an ecosystem where a creator's context (their brief, their style) is not something repeatedly entered, but a persistent asset that guides a team of agents to produce multimodal content that is always on-brand and trustworthy?"

03. The Solution: A High-Fidelity Prototype

All Hub Content Lab is a visual co-creation environment designed to be intuitive, secure, and efficient, applying principles of design and governance from its conception.

3.1. Prototype Scope (Real vs. Simulated)

  • Node Interface: Real (React Flow). Goal: validate the usability of the visual canvas.
  • Content Generation: Real (Google Gemini API). Goal: validate the impact of speed (Doherty Threshold).
  • Agent Orchestration: Simulated. Goal: validate the clarity of the flow without building the full backend.
  • Governance and Auditing: Simulated. Goal: test the user's perception of trust and control.
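
Content generation was the one piece wired to the real Gemini API. As a minimal sketch of that integration, assuming the @google/generative-ai Node SDK, an API key in GEMINI_API_KEY, and an illustrative model name (the prototype's exact wiring may differ):

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

// Assumes GEMINI_API_KEY is set; the model name is illustrative.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

// Hypothetical helper: generate on-brand copy for a canvas node.
export async function generateNodeContent(
  brief: string,
  prompt: string,
): Promise<string> {
  const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

  // The brief travels with every call, so context behaves as a persistent
  // asset instead of something the user re-enters for each prompt.
  const result = await model.generateContent(
    `Brand brief:\n${brief}\n\nTask:\n${prompt}`,
  );
  return result.response.text();
}
```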

3.2. Storyboard of the Ideal Experience

This is the journey I designed for Sara, applying UX principles at every step to take her from chaos to creative flow.
1. Creative Chaos: Sara, overwhelmed, navigates between apps for a single idea. It's a clear case of "Choice Overload" that stifles her creativity.
2. The Focus: In All Hub Content, her idea lives in one place. We apply the Law of Simplicity with a clean interface that eliminates distractions.
3. Context is King: She drags the brief onto the PDF node, and the system integrates it as a single unit, applying the Law of Common Region.
4. Visual Orchestration: This applies the Law of Uniform Connectedness: by visually connecting the prompt to the image, it's understood that one generates the other.
5. The Magic Begins: She hits "Generate." The system responds in under 0.4 s, meeting the Doherty Threshold to ensure a fluid, frictionless interaction.
6. The Vision Materialized: In seconds, the text and an image appear. This is the positive "peak" of the experience according to the Peak-End Rule.
7. Creative Control: She adds a prompt to adjust the image. The Law of Conservation of Complexity is applied by reducing the difficulty to a simple connection.
8. Creation in a Flow State: With one click, she gets the perfect image. By removing friction, the design lets her reach a creative Flow State.
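
Step 4's visual connection is exactly what React Flow provides out of the box. A minimal sketch of the canvas, assuming React Flow v11 (the reactflow package); node labels and layout are illustrative:

```tsx
import { useCallback, useState } from "react";
import ReactFlow, {
  addEdge, applyEdgeChanges, applyNodeChanges,
  Background, Controls,
  type Connection, type Edge, type EdgeChange,
  type Node, type NodeChange,
} from "reactflow";
import "reactflow/dist/style.css";

const initialNodes: Node[] = [
  { id: "prompt-1", position: { x: 0, y: 0 }, data: { label: "Prompt node" } },
  { id: "image-1", position: { x: 260, y: 0 }, data: { label: "Image node" } },
];

export function CanvasSketch() {
  const [nodes, setNodes] = useState<Node[]>(initialNodes);
  const [edges, setEdges] = useState<Edge[]>([]);

  const onNodesChange = useCallback(
    (changes: NodeChange[]) => setNodes((ns) => applyNodeChanges(changes, ns)), []);
  const onEdgesChange = useCallback(
    (changes: EdgeChange[]) => setEdges((es) => applyEdgeChanges(changes, es)), []);

  // Law of Uniform Connectedness: the drawn edge *is* the generation dependency.
  const onConnect = useCallback(
    (connection: Connection) => setEdges((es) => addEdge(connection, es)), []);

  return (
    <ReactFlow
      nodes={nodes} edges={edges}
      onNodesChange={onNodesChange} onEdgesChange={onEdgesChange}
      onConnect={onConnect} fitView
    >
      <Background />
      <Controls />
    </ReactFlow>
  );
}
```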

04. System Design: Architecture and Governance

To materialize my vision of a co-creation environment that would eliminate friction and chaos, I made a fundamental architectural decision: instead of a single, monolithic AI model, I designed an ecosystem of specialized agents.
I built this architecture on Google Conversational Agents (Dialogflow CX), adopting its generative paradigm to create a team of AI Playbooks. Each agent I designed has a very specific Goal and a set of Instructions in natural language that define its expertise, allowing for precise, high-quality collaboration that a generalist could never achieve.
The orchestration of this digital team is not a simple chaining of prompts. I designed a dynamic workflow managed by a Director Playbook (the Orchestrator), which invokes the specialists' capabilities through a system of Tools.
This design pattern was my solution to ensure modularity, scalability, and, most importantly, to materialize my design mission: to turn technical complexity into a fluid, intuitive, and powerful user experience.
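
Since Playbooks are configured in natural language inside Dialogflow CX rather than in code, the sketch below is only a conceptual model of the Director-plus-Tools pattern, written in TypeScript; the interfaces and agent flow are assumptions for illustration, not the Dialogflow CX API:

```ts
// Conceptual model only; Dialogflow CX Playbooks are configured in natural
// language, not via this (hypothetical) interface.
interface BrandContext {
  brief: string;      // the persistent asset that guides every agent
  styleGuide: string;
}

interface SpecialistAgent {
  name: string;       // e.g. "Copywriter" or "Brand Guardian" (from this study)
  run(input: string, context: BrandContext): Promise<string>;
}

class DirectorOrchestrator {
  constructor(private specialists: SpecialistAgent[]) {}

  // Invokes each specialist in turn, threading the shared brand context so
  // the user never re-enters it; the complexity lives here, not in the UI.
  async execute(request: string, context: BrandContext): Promise<string> {
    let artifact = request;
    for (const agent of this.specialists) {
      artifact = await agent.run(artifact, context);
    }
    return artifact;
  }
}
```

The design choice this models is modularity: each specialist can be replaced or fine-tuned independently while the Director's routing logic stays untouched.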

4.3. The Technical Blueprint: Digital Twin and UX Laws

For the above flow to become a reality, I designed a technical architecture that functions as a "Digital Twin" of a real creative team.
This blueprint consciously applies Tesler's Law (Conservation of Complexity), absorbing all the heavy lifting (communication between agents, API management) in the backend to offer a radically simple user experience.
My design is also based on the Law of Uniform Connectedness. By allowing the user to visually build flows on the canvas, the relationships between nodes become explicit and logical, reducing cognitive load.
When executing a flow, the Orchestrator handles the complexity while the Generative UI shows progress in real-time (Goal-Gradient Effect), keeping the user motivated and in a Flow State.
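
As a hedged sketch of how the simulated orchestrator could surface that progress to the Generative UI; the event names and payload are assumptions, not the prototype's actual contract:

```ts
// Hypothetical progress events consumed by the Generative UI.
type ProgressEvent =
  | { kind: "step-started"; agent: string; step: number; totalSteps: number }
  | { kind: "step-finished"; agent: string; step: number; totalSteps: number };

async function runWithProgress(
  agents: string[],
  runStep: (agent: string) => Promise<void>,
  onProgress: (e: ProgressEvent) => void,
): Promise<void> {
  const totalSteps = agents.length;
  for (const [i, agent] of agents.entries()) {
    onProgress({ kind: "step-started", agent, step: i + 1, totalSteps });
    await runStep(agent);
    // Each completed step visibly shortens the remaining distance to the
    // goal, which is what the Goal-Gradient Effect exploits.
    onProgress({ kind: "step-finished", agent, step: i + 1, totalSteps });
  }
}
```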
Multi-agent architecture diagram

Component Architecture Diagram

4.4. From Intelligence to Mastery: Validating Specialization with Fine-Tuning

My design doesn't stop at creating intelligent agents; my vision is to cultivate them until they become masters. To demonstrate that this vision was technically feasible and not just a hypothesis, I ran a pilot of advanced specialization through fine-tuning.
The goal was simple yet ambitious: to take a general-purpose model and, through focused training with high-quality data, transform it into a specialist with measurable and superior expertise for a critical task.

Specialization Pilot: The Birth of the "Brand Guru"

I selected the Brand Guardian Agent as the perfect candidate for this pilot. Its task of content validation is subjective, full of nuances, and fundamental to user trust.

My implementation process:
  • Base Model: I chose gemini-2.0-flash-lite-001, an efficient and fast model, ideal for validating the process.
  • Dataset Curation: I manually created a high-quality dataset in .jsonl format, composed of 100 training examples and a separate set of 8 examples for validation (one example is sketched after this list).
  • Tuning in Vertex AI Studio: I launched a "supervised tuning" job, providing the datasets to train and evaluate the model.
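
For reference, this is roughly what one line of the tuning dataset could look like, assuming the contents-based schema used for Gemini supervised tuning on Vertex AI; the brand rule and verdict text are invented for illustration:

```ts
import { appendFileSync } from "node:fs";

// One supervised-tuning example in the Gemini "contents" format.
// The rule and verdict below are illustrative, not real dataset rows.
const example = {
  contents: [
    {
      role: "user",
      parts: [{
        text: "Brand rule: never promise guaranteed results.\n" +
              "Validate: 'Our tool guarantees you 10x engagement!'",
      }],
    },
    {
      role: "model",
      parts: [{
        text: "REJECT: the copy promises a guaranteed outcome, which " +
              "violates the brand rule on unverifiable claims.",
      }],
    },
  ],
};

// Append as one JSON object per line (.jsonl).
appendFileSync("brand_guardian_train.jsonl", JSON.stringify(example) + "\n");
```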

The Results: Quantitative Evidence of Mastery

The training was completed successfully, and the resulting metrics not only validated my hypothesis but exceeded expectations. Below, I present and analyze the data.

Analysis of Learning Metrics:

Tuning Accuracy Metrics
  • Accuracy: The accuracy graph is the clearest proof of success. The blue line (training) shows that the model quickly learned the study material, approaching 100% correctness. More importantly, the pink line (validation) demonstrates that the model did not merely memorize but learned to generalize, reaching and maintaining an accuracy of over 80% on completely new data. This confirms that it can reason about brand rules, not just repeat them.

Tuning Loss Metrics
  • Loss: The loss graph, which measures the level of error, reinforces this conclusion. Both curves (training and validation) drop sharply and then stabilize: the classic shape of a healthy learning curve, indicating that the model converged efficiently to an optimal solution without overfitting.
Checkpoint Analysis:
Training Checkpoint Table

This table allows us to see the model's progress at each training "epoch." The final result is compelling: at the final default checkpoint (step 40), the model achieved a validation accuracy of 84.3% (0.843).

Accuracy by Checkpoint Graph

Achieving this level of reliability with such a compact initial dataset validates the effectiveness and efficiency of the fine-tuning approach.

Dataset Analysis:
Dataset Token Distribution

Finally, the analysis of the input and output token distribution confirms that the dataset I designed was balanced. There were no excessively long or short examples that could bias the training, which contributed to a stable and efficient learning process.

Pilot Conclusion: A Validated Strategy Ready to Scale

This successful pilot is more than just a technical experiment; it is the practical validation of my architectural vision. It demonstrates that my design of a modular agent ecosystem not only works but is poised to evolve.

The Vision for the Future: A Team of Masters
  • The Copywriter Agent can be tuned with the company's thousands of top-performing posts, emails, and articles to learn to replicate success.
  • The Context Analyst can be tuned with hundreds of internal briefs and documents to learn to identify the organization's specific nuances and priorities.
  • The Visual Creator can learn to generate prompts for the image model that align with the brand's historically most successful visual aesthetics.
My work on this prototype has not only solved user pain points but has established a scalable foundation and a clear plan to create an AI system that becomes wiser and more valuable with every piece of data and every interaction.

4.5. Governance and Security Principles Reflected in the Design

To ensure user trust and control, the prototype's design simulates the implementation of key governance principles, inspired by the most robust industry standards.

  • Transparency: The Assistant's panel displays the agent flow, showing each step of the process.
  • Control: The user can intervene, pause, or edit at any point in the flow.
  • Brand Safety: The Guardian Agent ensures that all generated content stays consistent with the brand.
  • Traceability: The Auditor Agent creates an immutable history of every decision for auditing.
  • Continuous Improvement: The system is designed to learn from user feedback (RLHF).

  • CIS Google Cloud v2.0.0: Robust security benchmarks to protect data on Google Cloud.
  • Model Armor: Filters prompts and responses to prevent security risks.
  • Zero-Training: An intuitive interface that requires no prior training.
  • WCAG 2.1: Guaranteed accessibility for users with disabilities.
  • i18n-Ready: Multilingual support for a global experience.

4.5.1. Evolutionary Security Architecture

My security strategy was designed in two phases:
  • Prototype: Security is managed with Dialogflow CX's native Safety and Instruction Filters, ensuring a testing environment protected against harmful content and prompt manipulation.
  • Production vision: For a large-scale deployment, the system will be integrated with Model Armor. This adds a critical layer of enterprise security, including Data Loss Prevention (DLP) to protect sensitive information in briefs, malicious URL detection, and centralized governance through security templates, all monitored from Security Command Center.

4.5.2. "AI Act Ready" Design – Compliance with the European AI Act (2025)

Although All Hub Content Lab is not a high-risk system according to Regulation (EU) 2024/1689 (AI Act), I designed the prototype to exceed the minimum transparency and governance requirements that come into force on August 2, 2025.
AI Act obligations (Chapters I and III) and the implemented design decisions:
  • Inform that interaction is with AI: persistent banner and contextual microcopy: "🤖 All Hub Assistant, your creative co-pilot with Artificial Intelligence."
  • Mark all generated content: automatic "✨ AI Generated – All Hub" tag attached to every text and image, with the tooltip "Image generated with Artificial Intelligence."
  • Explain the automated process: planned "How it works" modal accessible from any node, offering a step-by-step visual tour of the orchestration.
  • Effective human control: "Emergency Stop" and "Edit / Reject" buttons active in each phase; the flow stops whenever the human requests it.
  • Traceability and auditing: the Auditor Agent automatically records the original prompt, agents involved, decisions, and final outputs in Firestore with an immutable timestamp.
  • Preparation for regulatory scaling: architecture ready to integrate Model Armor in production (DLP, malicious URL detection, and centralized security policies).

AI Act Ready

This prototype is designed to comply with the European AI Act from August 2025: transparency, user control, and integrated traceability.

05. Prototyping and Validation (RITE)

I applied the RITE (Rapid Iterative Testing and Evaluation) methodology. The prototype was built using Google Firebase Studio with React Flow for the interactive canvas, simulating hosting and authentication, and the Google Gemini API for real content generation. Metaprompting and error correction were aided by Google AI Studio.

5.1. Evolution of the Prompt Node Design

The node design evolved from a simple text area to a structured interface with clear parameters, improving usability and reducing errors.
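
In React Flow terms, the V2 "toolbox" is a custom node whose header exposes direct actions. An illustrative sketch; the duplicate and save actions come from the findings in 5.2, while the names and markup are assumptions:

```tsx
import { Handle, Position, type NodeProps } from "reactflow";

type PromptData = {
  prompt: string;
  onDuplicate: (id: string) => void;
  onSave: (id: string) => void;
};

// Illustrative V2 prompt node: direct actions in a toolbar instead of
// hidden shortcuts, so users never have to "guess" (see Finding 1 below).
export function PromptNodeV2({ id, data }: NodeProps<PromptData>) {
  return (
    <div className="prompt-node">
      <div className="toolbar">
        <button onClick={() => data.onDuplicate(id)} title="Duplicate">⧉</button>
        <button onClick={() => data.onSave(id)} title="Save for reuse">💾</button>
      </div>
      <textarea defaultValue={data.prompt} placeholder="Describe what to generate…" />
      {/* Edges in and out keep the generation dependency explicit. */}
      <Handle type="target" position={Position.Left} />
      <Handle type="source" position={Position.Right} />
    </div>
  );
}
```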

5.2. Validation Results: The Measured Impact

The evolution of the prompt node from a "container" to a "toolbox" was not just an aesthetic change; it had a direct and measurable impact on how users interacted with the system. To quantify this impact, I conducted a series of A/B tests and timed tasks comparing both versions of the node.

Test Methodology

  • Participants: 15 creators divided into two groups: Group A (7 users) used the V1 node, and Group B (8 users) used the V2 node.
  • Assigned task: "Create three variations of a prompt for a marketing campaign for a new product. Save the best version for future use."
  • Key metrics: task time, success rate, usability errors, and perceived satisfaction (1-5).

Quantitative Impact: The Difference Between Showing and Empowering

The results showed that the new features of the V2 Node were not just "nice-to-haves," but crucial elements that resolved real frictions and unlocked efficiency.

Chart: V1 vs. V2 node performance comparison across task time (s), success rate (%), usability errors, and satisfaction (1-5).
This chart illustrates the superiority of the V2 Node. It shows a drastic reduction in task time and errors, along with a massive increase in success rate and user satisfaction, quantitatively validating the design decisions.

Qualitative Impact: "Now I actually understand how to use it"

The qualitative feedback revealed the "why" behind the numbers. Users of the V2 Node were not only faster, but they felt smarter and more in control.
Finding 1: Direct actions eliminate mental friction
The V2 Node's toolbar completely eliminated the need for users to "guess" how to perform common actions.
💬 (V1): "I spent a while looking for how to copy the text. I ended up using Ctrl+C and Ctrl+V..."
💬 (V2): "Ah, perfect. I see the duplicate icon. I click it, and it's done. I didn't have to think."
Finding 2: Reusability transforms the perception of value
The "Save" icon was a turning point. Users understood that their work was not ephemeral.
"When I saw the save icon, it all clicked. I realized I wasn't just writing a prompt for this one time, but investing in my future work."
— Agency Designer.

Validation Conclusion:

The evolution of the prompt node demonstrates that actively listening to users and translating their needs into direct, accessible functionalities not only improves usability but transforms the entire perception of the product: from a passive tool to an indispensable creative partner.

06. Design for Evolution: Continuous Learning

My final deliverable was more than an interface design; it was a strategy for the system's evolution. An AI system that doesn't learn is destined for obsolescence. Therefore, I designed the feedback mechanisms and the learning loop to ensure that All Hub not only solves today's task but evolves towards wisdom with every interaction.

6.1. Designed Feedback Mechanisms

The system learns in two ways, combining passive and active user signals.

Implicit Feedback (High-Confidence Signal)

If the user decides to use the generated content (e.g., clicks "Export"), it is recorded as a validated success. This type of feedback carries a higher weight, as it indicates real satisfaction with the result and reinforces the patterns that led to it.

Explicit Feedback (RLHF)

Each content generation is accompanied by a simple yet powerful feedback interface. Every user vote is a labeled training data point that feeds the system's contextual memory.

This design turns a subjective reaction into structured data that the system can use to fine-tune its agents' behavior.
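
As a sketch of how both signals could be normalized into labeled data points; the event names and weights are assumptions, since the prototype only simulated this pipeline:

```ts
// Hypothetical feedback record feeding the system's contextual memory.
interface FeedbackEvent {
  generationId: string;
  signal: "export" | "thumbs-up" | "thumbs-down";
  weight: number;            // implicit success outweighs a simple vote
  recordedAt: Date;
}

export function toFeedbackEvent(
  generationId: string,
  signal: FeedbackEvent["signal"],
): FeedbackEvent {
  // Assumed weights: clicking "Export" is a validated success (the
  // high-confidence signal), while explicit votes are weaker but still
  // useful labeled training points.
  const weight = signal === "export" ? 1.0 : signal === "thumbs-up" ? 0.5 : -0.5;
  return { generationId, signal, weight, recordedAt: new Date() };
}
```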

6.2. Diagram of the Learning Loop and Ecosystem Evolution

This diagram shows how each user interaction not only solves their immediate need but also enriches the system, creating a virtuous cycle of improvement.
Diagram of the learning loop and ecosystem evolution

This loop ensures that the system becomes smarter and more personalized with each use, creating a powerful network effect and a sustainable competitive advantage.

07. Conclusion and Next Steps

All Hub Content Lab demonstrates that a high-fidelity prototype, combining real and simulated interactions, is a powerful tool for validating the experience of a complex, ethical, and, above all, evolutionary AI system.

Key Learnings

  • Visual orchestration is the solution to context fragmentation.
  • AI speed is not a technical metric; it's a pillar of the user experience.
  • Trust is built with transparency, control, and a clear path to improvement.

Immediate Next Steps

  • Integrate a real analytics system (e.g., Hotjar) to capture prototype heatmaps.
  • Design and prototype the granular control flows and the Emergency Stop button.
  • Implement the UI for feedback mechanisms and connect them to a test database.
  • Conduct an external accessibility audit (WCAG 2.1) and prepare for a SOC 2 security review.
