
Problem

Missing or minimal string descriptions cause costly issues for partners across the string lifecycle.

Role

Project lead and content designer. Owner of the internal string management tool used by content designers. Led an AI hackathon project to improve string description quality.

Scope

Internal hackathon project exploring AI-generated contextual descriptions for visible strings and accessibility labels.

Impact

  • Solved a systemic localization context problem without exposing internal tools
  • Feature built during the hackathon and later integrated into the string management system and developer tooling
  • Reduced cognitive load and manual documentation work across the string lifecycle
  • Demonstrated systems-level thinking and cross-functional problem framing

Team

Localization experts, software engineer, content designers

 

Project Timeline

3-day hackathon, Spring 2024

 

Tools

Figma, internal AI agent, content standards, developer tools

Snapshot

Primary users
Content designers and software engineers

Beneficiaries
Localization specialists, content designers and other cross-functional partners involved in the string lifecycle

Environment
String management and localization pipeline supporting multiple products

Key constraint
Localization experts cannot access Meta internal tools or product interfaces. They work in external localization platforms and receive minimal context for each string.

Main problem

Localization partners often receive isolated strings with limited metadata. Without context, they must infer meaning, which leads to:

Localization errors

Clarification tickets back to a volunteer-led content expert group

Delays and inconsistent translations

Accessibility strings face the same challenge. Because they are not visible in the interface and are only read by assistive technologies, they are often harder to localize.

Secondary problem

Content can be widely reused across products, but similar strings do not always carry the same meaning.

Without context about the product or the function of the string, localization decisions can create breaking points in UX flows. With millions of people using Meta products, these mistakes can become costly.

Ideally, every string would include a description when it is created, following standardized templates. In practice:

  • there is often not enough time to write full descriptions
  • strings may be implemented in code without accurate descriptions
  • descriptions may contain only minimal information

This leaves localization partners working with incomplete context.

Within the localization community at Meta, this problem had been discussed for some time. Localization specialists and content designers regularly raised concerns about missing or insufficient string descriptions and the challenges they create during translation.

As the owner of the string management tool used by content designers, I saw an opportunity to address the problem at the system level rather than relying on better documentation habits alone.

Why a hackathon

Improving infrastructure inside a large organization typically requires engineering resources, experimentation and proof of value.

Since the string descriptions project was deprioritized for that half, the Meta-wide Spring hackathon provided the only realistic environment to test a new approach quickly with cross-functional contributors.

I proposed a project exploring whether our internal AI could generate usable string descriptions automatically. The model had already been trained on internal content standards and description templates.

The idea attracted more than 30 registered contributors during the three-day event, which is unusual for a content-centered project.

My role

I initiated and led the hackathon project (and later presented the outcome company-wide).

The resulting descriptions needed to be comprehensive enough to stand alone or require only minimal editing.

Responsibilities included:

  • Framing the localization context problem
  • Socializing the project to gain contributors and visibility
  • Defining the AI experiment constraints and leading the three-day effort
  • Refining prompts to improve output quality
  • Ensuring outputs aligned with internal content standards and description templates
  • Extending the approach to accessibility strings

1. Context analysis

I mapped the lifecycle of strings across two common workflows to reveal how missing context creates feedback loops.

This analysis highlighted how missing or insufficient contextual information disrupts the workflow as strings move across tools, cross-functional partners and systems.

Content touch points with sufficient metadata:

1) Content designer → Figma/string manager (CMS) → developer → code base → localization platform (CMS) → localization partner → code base

2) Developer → code base → localization platform (CMS) → localization partner → code base

1) insufficient metadata – content designer

Content designer → Figma/string manager (CMS) → developer → code base → localization platform (CMS) → localization partner → troubleshooting ticket (CMS) → content designer → string manager (CMS) → localization platform (CMS) → localization partner → code base

2) insufficient metadata – developer

Developer → code base → localization platform (CMS) → localization expert → troubleshooting ticket (CMS) → content designer → string manager (CMS) → localization platform (CMS) → localization expert → code base

2. AI experiment

Tested whether an AI model trained on internal content standards could produce meaningful string descriptions using available metadata and system signals.

The goal was to generate descriptions that could either:

  • Stand alone for localization
  • Serve as useful placeholders requiring minimal editing
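To make the experiment concrete, the generation step can be sketched as filling a description template from whatever metadata a string carries. This is a hypothetical illustration: the field names, template structure, and `build_prompt` helper are assumptions for this sketch, not the actual internal tooling.

```python
# Hypothetical sketch of assembling a description-generation prompt
# from string metadata. Field names and template are illustrative.

TEMPLATE = (
    "You are writing a string description for localization.\n"
    "Follow this structure: what it is / where it appears / how it is used.\n"
    "String: {text}\n"
    "Surface: {surface}\n"
    "Element type: {element}\n"
    "Character limit notes: {limits}\n"
)

def build_prompt(metadata: dict) -> str:
    """Fill the template with available metadata, falling back to defaults."""
    defaults = {"text": "", "surface": "unknown", "element": "unknown", "limits": "none"}
    return TEMPLATE.format(**{**defaults, **metadata})

prompt = build_prompt({
    "text": "Save changes",
    "surface": "settings page",
    "element": "button",
})
```

In practice, strings with richer metadata produce descriptions that can stand alone, while sparse metadata yields placeholders flagged for light editing.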

3. Prompt refinement

Iteratively refined prompts to ensure outputs:

  • Followed the string description template
  • Were clear and concise
  • Aligned with content standards
  • Were usable for localization workflows
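The refinement loop above implies a set of repeatable quality checks. A minimal sketch of such checks, with assumed rules and an assumed length limit (the real criteria lived in internal content standards):

```python
# Hypothetical quality checks for generated descriptions.
# The rules and MAX_LENGTH are illustrative assumptions.

MAX_LENGTH = 240  # assumed conciseness limit

def check_description(desc: str) -> list[str]:
    """Return a list of issues; an empty list means the description passes."""
    issues = []
    if not desc.strip():
        issues.append("empty description")
    if len(desc) > MAX_LENGTH:
        issues.append("too long")
    # The template expects the UI element type to be named explicitly.
    if not any(word in desc.lower() for word in ("button", "label", "title", "message", "link")):
        issues.append("element type missing")
    return issues

result = check_description(
    "Button label. Saves the user's changes on the settings page."
)
```

Running checks like these after each prompt iteration makes it easy to see whether a prompt change improved template adherence across a batch of strings.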

4. Accessibility extension

Applied the same approach to accessibility strings used by assistive technologies, which are often the most difficult to localize without context.

Outcome

  • AI-supported string descriptions became a widely used feature, reducing documentation workload for more than 4,400 employees and external partners.
  • The feature was later integrated into the updated string management system, including a custom Figma plugin and internal tooling.
  • The work was presented at a company-wide accessibility content event.
  • Generating string descriptions now helps restore context for localization partners and improves content quality across the system.

Reflection

This project revealed two bottlenecks in localization workflows.

First, meaning often erodes as strings move through complex systems without sufficient metadata. Second, when infrastructure does not support sustainable context creation, people must compensate manually through documentation, tickets and troubleshooting.

AI did not replace human judgment. Instead, it helped generate missing context at the point of content creation, allowing localization partners to work effectively without access to internal systems while saving hours of work across the organization.