Social Impact Category

TIME's Best Inventions of 2025

Redesigning an AI-driven content curation platform for TIME’s Best Inventions of 2025

0-1 Redesign • Shipped 2025

TL;DR

Redesigned an enterprise internal CMS to scale metadata and copyright tagging using AI-assisted workflows with human-in-the-loop guardrails.

Outcome

  • ~60% faster time-to-publish for new libraries

  • ~3× faster content ingestion during tagging workflows

  • Fewer metadata errors and reduced legal risk during review

A digital library used in over 15 countries by more than 300,000 users

SolarSPELL is a portable solar-powered digital library that works as an offline Wi-Fi hotspot, giving users access to curated educational content without an internet connection.

OVERVIEW

SPELL-CC is an internal enterprise Content Management System (CMS) used to create, tag, and organize educational resources for digital libraries.

It helps teams tag metadata, manage copyright, and assemble libraries efficiently in a single, scalable workspace.

I led end-to-end product UX decisions across AI interaction models, validation flows, and system guardrails.

THE PROBLEM

The 2015 UI surfaced over 40 ungrouped metadata fields across multiple screens, with no progressive disclosure, weak validation, and error feedback only after submission.

Curators often discovered mistakes late in the process, triggering full rework cycles and delaying library publication by weeks.

We didn't need a UI refresh.

We needed to transform the tool from a "Database Form" into an "Intelligent Curation Workspace."

KEY CONSTRAINTS

Constraint 1

Cost of correction

Metadata errors increased rework, QA cycles, and redeployment effort.

Constraint 2

Legacy backend

The UI had to work with the legacy backend without breaking historical data.

Constraint 3

Copyright risk

Incorrect tagging created legal exposure and required strict validation.

Given these constraints, we couldn’t rely on either full manual workflows or unchecked automation.

THE BIG BET

What if we could replace time-intensive manual metadata entry with AI-assisted extraction while keeping humans accountable for every decision?

OPERATIONAL RISKS

1. Trust and Quality Risk

Curators re-verified nearly every field due to low confidence in system outputs.

2. Time and Throughput Constraints

Manual entry scaled linearly with content volume, creating hard bottlenecks during peak demand.

3. Structural Limitation of the Tool

The form-based UI forced curators to think in database terms rather than library-building workflows.

To understand how to resolve this, I worked with the in-house UX research team to closely examine where the existing curation flow was breaking down.

RESEARCH OBJECTIVE

Define the right balance between AI assistance and human control in metadata and copyright tagging.

  • Heuristic evaluation

  • 10+ interviews with lead curators

  • Secondary research on Explainable AI (XAI) and Human-in-the-Loop (HITL) systems

CORE INSIGHT

Research showed that curators were open to AI assistance as long as it did not replace their judgment.

This showed up most clearly during live tagging, where curators paused longest on copyright attribution and ambiguous subject fields.

Because SPELL-CC already relied on structured metadata and copyright schemas, curators needed AI as a Co-pilot, not an Autopilot.

This insight led to two core principles that shaped the redesign:

Explainable AI (XAI)

AI surfaced suggestions with confidence signals and supporting evidence, mapped to existing databases.

Human-in-the-Loop (HITL)

Curators reviewed, edited, or approved each suggestion before it was saved.

This reframed the design challenge from “How do we automate tagging?” to “How do we reduce effort while preserving curator confidence and accountability?”

STRATEGIC DECISIONS & TRADE-OFFS

Instead of jumping straight to a solution, the redesign was grounded in explicit trade-offs.

Decision 1

AI as a Co-pilot, Not an Autopilot (Metadata Tagging)

Trade-off: Speed vs trust

Result: We designed AI to assist, not decide. Suggestions were surfaced with confidence signals and required explicit human approval, enabling ~3× faster ingestion without sacrificing accuracy or accountability.

Decision 2

Just-in-Time Legacy Data Cleanup

Trade-off: System purity vs delivery velocity

A full migration was proposed but rejected due to timeline risk and the likelihood of blocking active curation work.

Result: We adopted a just-in-time cleanup strategy that improved data quality within active workflows while avoiding migration paralysis and preserving operational continuity.

WHAT WE EXPLORED

Before moving into high-fidelity design, we explored multiple approaches to AI assistance and library management to understand trade-offs around control, efficiency, and scale.

Concept 1: Field-level AI metadata assistance

AI supported curators at the individual metadata field level, offering contextual suggestions on demand.

Why it was tempting

  • Maximum accuracy

  • Low risk

Why it failed

  • Still didn’t scale with content volume

  • Limited efficiency gains

Concept 2: Fully Autonomous AI Tagging

  • One-click ingestion

  • Auto-save metadata

Why it was tempting

  • Maximum speed

  • “Wow” factor

Why it failed

  • Curators didn’t trust it

  • High legal and educational risk

Concept 3: Batch-oriented library management

Library organization was handled through a dedicated modal, allowing selected content to be reassigned in a single step.

Why it was tempting

  • Felt safer and more predictable for copyright-sensitive data

  • Minimized accidental structural changes and provided a controlled way to manage content at scale

Why it failed

  • Could not handle edge cases or new content types

  • Removed content from its broader context, making it harder to reason about library structure holistically

Concept 4: Assisted Tagging (HITL)

  • AI suggests

  • Human approves

  • Explicit confidence + evidence

Why it wasn’t obvious

  • Added interaction overhead

  • Required careful UX to avoid slowing experts

Why it worked

  • Balanced speed with trust

  • Fit curator mental models

  • Scaled without new risk

HOW WE KNEW WE WERE RIGHT

We ran quick usability tests to validate our options:

  • Curators preferred reviewing drafts over typing from scratch

  • Confidence indicators reduced re-check time

  • Forced approvals reduced anxiety, not speed

THE FINAL SYSTEM

Rather than optimizing for visual novelty, each iteration focused on answering a single question:

Does this reduce effort without reducing confidence?

AI-Assisted Metadata Tagging (HITL)

AI extracts metadata and maps it to SPELL-CC’s existing taxonomy.

  • Suggestions appear as drafts

  • Low-confidence fields require human intervention

  • Explicit curator approval required before anything is saved

Result: Reduced repetitive work while preserving accountability.
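To make the draft-and-approve flow concrete, here is a minimal TypeScript sketch of how a suggestion record and its confidence-based triage could be modeled. The field names and the 0.8 threshold are assumptions for illustration, not SPELL-CC's production schema.

```ts
// Hypothetical shape of an AI metadata suggestion in the HITL review queue.
// Field names and the 0.8 threshold are illustrative, not the real schema.
type SuggestionStatus = "draft" | "approved" | "edited" | "rejected";

interface MetadataSuggestion {
  field: string;            // e.g. "subject" or "gradeLevel"
  value: string;            // AI-proposed value, mapped to the controlled taxonomy
  confidence: number;       // 0..1, shown to the curator as a confidence signal
  evidence: string;         // short excerpt explaining why the value was proposed (XAI)
  status: SuggestionStatus; // always starts as "draft"; nothing auto-commits
}

const REVIEW_THRESHOLD = 0.8; // assumed cutoff for flagging low-confidence fields

// Split suggestions into pre-filled drafts ready for one-click approval
// and low-confidence drafts that demand closer curator review.
function triage(suggestions: MetadataSuggestion[]) {
  return {
    readyForApproval: suggestions.filter((s) => s.confidence >= REVIEW_THRESHOLD),
    needsReview: suggestions.filter((s) => s.confidence < REVIEW_THRESHOLD),
  };
}
```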

Copyright Checks & Compliance

Structured copyright workflows replaced unstructured free-text entry.

  • Guided templates and attribution fields

  • Publisher cross-checks

  • Hard blocks for invalid or unsafe inferences

Result: Reduced legal risk and eliminated inconsistent copyright tagging.
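A minimal sketch of how such a validation gate could work, assuming a hypothetical known-publisher registry; all field names and registry entries here are illustrative:

```ts
// Illustrative copyright validation gate; field names, the registry, and its
// entries are placeholders invented for this sketch.
interface CopyrightRecord {
  license: string;     // chosen from a guided template, e.g. "CC-BY-4.0"
  attribution: string; // structured attribution field, required
  publisher: string;   // cross-checked against a known-publisher database
}

const KNOWN_PUBLISHERS = new Set(["UNESCO", "Khan Academy", "OpenStax"]); // placeholder data

function validateCopyright(record: CopyrightRecord): string[] {
  const errors: string[] = [];
  if (!record.license) errors.push("License is required: pick one from the template.");
  if (!record.attribution.trim()) errors.push("Attribution must not be empty.");
  if (!KNOWN_PUBLISHERS.has(record.publisher)) {
    // Hard block: an unverified publisher can never be inferred or auto-filled.
    errors.push(`Publisher "${record.publisher}" is not in the verified registry.`);
  }
  return errors; // any non-empty result blocks the save
}
```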

Clean & Improved UX / UI (Foundational)

The redesign established a clear, consistent interaction model across the platform.

  • Clear information hierarchy

  • Visible system state at all times

  • Consistent component patterns aligned to SolarSPELL’s brand

Old design:

Redesign:

Result: Lower cognitive load and higher confidence in system decisions.

Visual Drag-and-Drop Library Builder

Library creation shifted from nested forms to a visual assembly model.

  • Drag-and-drop interactions

  • Side-drawer repository and collection view

  • Completeness and duplicate indicators

Result: Library structuring time dropped from hours to minutes.
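As a rough illustration of the completeness and duplicate indicators, here is a small TypeScript sketch; the required-field list and item shape are assumptions, not the actual data model.

```ts
// Sketch of the completeness and duplicate indicators; the required-field
// list and item shape are assumptions for illustration.
const REQUIRED_FIELDS = ["title", "subject", "license", "attribution"];

interface LibraryItem {
  id: string;
  metadata: Record<string, string>;
}

// Fraction of required metadata fields filled in; 1.0 means fully tagged.
function completeness(item: LibraryItem): number {
  const filled = REQUIRED_FIELDS.filter((f) => (item.metadata[f] ?? "").trim() !== "").length;
  return filled / REQUIRED_FIELDS.length;
}

// True when the dragged item already exists in the target collection.
function isDuplicate(collection: LibraryItem[], candidate: LibraryItem): boolean {
  return collection.some((item) => item.id === candidate.id);
}
```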

AI GUARDRAILS & FAILURE PREVENTION

AI-assisted workflows were designed with explicit limits to prevent silent errors and protect high-stakes decisions.

  • AI suggestions never auto-commit

  • Publisher cross-checks against known databases

  • Restricted to controlled metadata taxonomies
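The first and third guardrails could be enforced at the persistence boundary. The following TypeScript sketch is illustrative only, with all names and taxonomy values assumed:

```ts
// Minimal commit guard combining two of the guardrails above: no auto-commit,
// and values restricted to a controlled taxonomy. All names are assumed.
const CONTROLLED_TAXONOMY: Record<string, Set<string>> = {
  subject: new Set(["Math", "Science", "Health", "Agriculture"]), // placeholder values
};

interface ReviewedSuggestion {
  field: string;
  value: string;
  approvedByCurator: boolean; // set only through an explicit approval action in the UI
}

function commitMetadata(s: ReviewedSuggestion): void {
  // Guardrail: AI output never auto-commits; a human must have approved it.
  if (!s.approvedByCurator) {
    throw new Error("Unapproved AI suggestion cannot be saved.");
  }
  // Guardrail: values must fall within the controlled vocabulary for the field.
  const allowed = CONTROLLED_TAXONOMY[s.field];
  if (!allowed || !allowed.has(s.value)) {
    throw new Error(`"${s.value}" is outside the controlled vocabulary for "${s.field}".`);
  }
  // Persistence would happen here.
}
```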

IMPACT & VALIDATION

Metrics were measured during a pilot with a core group of curators tagging live content over multiple production cycles, comparing baseline manual workflows to AI-assisted flows.

Early results:

  • ~60% reduction in time-to-publish for new library collections

  • Fewer metadata corrections required during review

  • ~3× faster content ingestion during AI-assisted tagging workflows

WHAT I’D DO NEXT

With a trusted foundation in place, the next phase would focus on scaling insight rather than automation.

  • Multilingual metadata extraction to support non-English content at scale

  • AI-assisted topic clustering to identify gaps and overlaps across libraries

  • Safe bulk approvals for high-certainty AI suggestions