AI-Powered Media
Intelligence Platform

Transformed a technically advanced AI tagging platform into a transparent, efficient, and enterprise-ready product.


Role

Lead Product Designer (UX/UI)

Company

GRAIPH

Duration

2021 - 2025 (4 years)

Team

4-person product team (CEO, CTO, 2 engineers)

Devices

Web · Desktop App · Tablet

Tools & methodology

Figma · Adobe XD · FigJam · Slack · Miro

Impact at a Glance

  • 40% reduction in task completion time (8 → 4.8 min)

  • 60% improvement in usability score

  • 80+ UI components developed and documented


PROJECT IMPACT

40%

Faster Task Completion

60%

Usability Improvement (SUS 68 → 85)

80+

Reusable Components Delivered

+60%

Increase in AI Trust

CONTEXT & CHALLENGE

GRAIPH’s AI tagging engine was powerful but hard for non-technical users to understand.

Workflows were fragmented across 4 screens.
AI decisions felt like a black box.
Users lacked confidence reviewing automated results.


The challenge:
Make AI transparent, controllable, and efficient without reducing automation power.

MY ROLE

Led end-to-end UX/UI strategy focused on AI transparency and workflow efficiency.

   • Conducted user research with media analysts
   • Designed AI confidence visualization framework
   • Unified fragmented workflows into single interface
   • Built 80+ reusable component library
   • Partnered closely with AI/ML engineers
   • Validated solutions through usability and A/B testing

Legacy UI (before)

Dark UI, olive green palette, dense screens, confusing Tagger, inconsistent interactions.

New UI (after)

Clean, bright and modern UI; unified design system; clearer workflows; improved tagging experience; new brand identity.


THE PROBLEM

User challenges

❌ 85% didn’t understand AI confidence scores

❌ Tagging required switching between 4 screens

❌ High cognitive load reviewing large media libraries

❌ Frequent manual corrections

Business challenges

❌ Low AI trust

❌ Slower adoption

❌ Reduced enterprise confidence


RESEARCH & DISCOVERY

Methods

  • User interviews (media analysts & content managers)

  • Analysis of 200+ support tickets

  • Shadowed tagging sessions

 

Key Insights

AI Transparency Gap
Users couldn’t predict or verify AI decisions.
Confidence scores lacked meaning.


Workflow Fragmentation
Tagging process split across 4 screens.
Context switching increased errors.


Data Overload
Large libraries overwhelmed users.
No prioritization by confidence level.

Core Insight:

  • Users didn’t want less AI.

  • They wanted AI they could understand and control.

PROCESS & APPROACH

Phase 1 — AI Transparency Framework

Mapped user mental models of trust.
Tested multiple confidence visualization concepts.

Outcome:
3-tier confidence system simplifying review focus.


Phase 2 — Unified Tagging Interface

Collapsed 4-screen workflow into single interface:
 

  • Left: Media preview

  • Center: AI suggestions + manual tagging

  • Right: Applied tags + bulk actions

 

Outcome:
Reduced context switching and improved task flow.

Phase 3 — Data Visualization Optimization

Redesigned dashboards to:

• Prioritize high-risk content
• Surface low-confidence AI tags
• Enable batch review

 

Outcome:
Faster decision-making at scale.
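The prioritization logic above can be sketched in a few lines. This is a minimal illustration, not GRAIPH's actual implementation: the data shape, field names, and the 0.60 threshold (taken from the confidence tiers described later in this case study) are all assumptions for the sake of the example.

```python
# Hypothetical sketch of dashboard prioritization: surface low-confidence
# AI tags first so analysts review risky items before confident ones.
# Data shape and field names are illustrative, not GRAIPH's schema.

media_items = [
    {"id": "clip-01", "tag": "wildfire", "confidence": 0.38},
    {"id": "clip-02", "tag": "interview", "confidence": 0.91},
    {"id": "clip-03", "tag": "crowd", "confidence": 0.62},
]

# Lowest confidence first = highest review priority.
review_queue = sorted(media_items, key=lambda item: item["confidence"])

# Batch review: group everything below the manual-tagging threshold.
needs_attention = [m for m in review_queue if m["confidence"] < 0.60]
# needs_attention contains only clip-01 (confidence 0.38)
```

Ranking by confidence rather than upload order is what lets analysts clear large libraries in priority order instead of scanning everything.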

KEY DESIGN DECISIONS

Decision 1 — AI Confidence Visualization

Problem:
Users didn’t trust AI because decisions lacked transparency.

Decision:
3-tier confidence model:


• High (85–100%): auto-applied
• Medium (60–84%): requires review
• Low (<60%): manual tagging

Added a “Show reasoning” explanation so users could see why a tag was suggested.
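The 3-tier routing rule can be expressed as a simple threshold function. The thresholds (0.85 and 0.60) mirror the tiers above; the function name, tier labels, and example tags are illustrative assumptions, not GRAIPH's actual API.

```python
# Hypothetical sketch of the 3-tier confidence routing described above.
# Thresholds mirror the case study; names are illustrative.

def route_tag(confidence: float) -> str:
    """Map an AI confidence score (0.0-1.0) to a review tier."""
    if confidence >= 0.85:
        return "auto-applied"   # High: applied without review
    if confidence >= 0.60:
        return "needs-review"   # Medium: surfaced for analyst review
    return "manual"             # Low: falls back to manual tagging

tags = {"sunset": 0.92, "crowd": 0.71, "protest": 0.44}
routing = {tag: route_tag(score) for tag, score in tags.items()}
# routing == {"sunset": "auto-applied", "crowd": "needs-review",
#             "protest": "manual"}
```

Collapsing a continuous score into three named tiers is what made the model legible: users only had to learn what each tier meant, not interpret raw percentages.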

Impact:
• +60% increase in AI trust
• -40% review time
• -55% manual corrections

Decision 2 — Unified Tagging Interface

Problem:
Workflow split across 4 screens created friction.


Decision:

Single interface with contextual panels and batch actions.


Impact:
• Task time: 8 → 4.8 minutes
• -55% tagging errors
• +45% satisfaction
• Lower cognitive load (NASA-TLX)

DESIGN SYSTEM

To support scale and consistency, I built an 80+ component system tailored for AI and data-heavy interfaces.

Includes:
   • Data visualization components
   • Tagging controls & confidence indicators
   • Batch action modules
   • Structured design tokens
   • Documentation for design & engineering


Impact:

   • 50% faster design-to-development
   • 80% component reuse across modules
   • +40% engineering velocity

TESTING & VALIDATION

Concept Testing
   • 10 sessions validating confidence visualization
   • Added reasoning tooltip after feedback


Usability Testing
   • 10 beta sessions
   • Unified interface reduced errors by 55%


A/B Testing
   • 25 users (old vs new workflow)
   • 40% faster task completion
   • SUS improved from 68 → 85

Continuous validation through analytics and enterprise client feedback.

OUTCOMES & IMPACT


User Metrics


• 40% faster task completion
• 60% usability improvement (SUS 68 → 85)
• 55% fewer tagging errors
• 70% of users rated AI as trustworthy (vs 30% before)


Business Impact


• 10 enterprise clients onboarded in 6 months
• Contributed to $2.5M funding round
• Platform positioned for Series A
• Increased enterprise adoption confidence


Product Impact


• 80+ reusable components
• 50% faster feature delivery
• 40% engineering velocity increase

LEARNINGS & REFLECTION

AI products require trust frameworks, not just automation.


• Transparency increases efficiency.
• Reducing cognitive load drives adoption.
• Workflow unification improves both UX and business metrics.
• Data-heavy interfaces demand strong visual hierarchy.

This project deepened my expertise in AI transparency, complex workflow simplification, and enterprise SaaS design.
