I Translated 489 Strings into 3 Languages in Under 2 Minutes
Adding i18n to a Production App Without Losing Your Mind
How translate-kit turned a multi-week localization project into a single coding session.
Internationalization is one of those features you know you should have, but the overhead keeps it permanently on the backlog. You picture weeks of wrapping strings, maintaining parallel JSON files, and debugging layout issues when German words are three times longer than English.
I just shipped Spanish and French support for Mystic Cards — a Next.js 14 app with ~67 component files and roughly 489 user-facing strings — in a single session. The actual translation step took 84 seconds and cost less than a cent. Here's the full workflow.
The Stack
- next-intl — runtime i18n library for Next.js App Router
- translate-kit — CLI built on the Vercel AI SDK that scans your codebase, extracts strings, wraps them in t() calls, and translates via any AI provider
The key decision was using next-intl's "without i18n routing" mode. No [locale] folder restructure, no middleware, no URL prefixes. The user's language preference is stored in a NEXT_LOCALE cookie, and the server reads it on every request. URLs stay clean — /decks, not /es/decks.
This means zero disruption to existing routes, zero SEO implications (for now), and a migration that touches component files but not the routing structure.
Step 1: Scan — Find Every String
translate-kit's scan command walks your JSX AST and extracts hardcoded strings into a structured JSON file:
```bash
npx translate-kit scan
```
In about 10 seconds, it produced messages/en.json with 489 keys organized by namespace — one namespace per component or page. The keys are semantic: home.whereArtistsAndSeekers, navigation.signOut, read.drawCards.
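The resulting file nests one object per namespace. A sketch of the shape, using the keys mentioned above — the values and grouping in a real en.json will be larger, and the "Draw cards" value here is illustrative:

```json
{
  "home": {
    "whereArtistsAndSeekers": "Where Artists & Seekers"
  },
  "navigation": {
    "signOut": "Sign Out"
  },
  "read": {
    "drawCards": "Draw cards"
  }
}
```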
The scan is configurable. My config excluded API routes and test files:
```ts
scan: {
  include: ["src/app/**/*.tsx", "src/components/**/*.tsx"],
  exclude: ["src/app/api/**", "**/*.test.tsx", "**/*.test.ts"],
}
```
Step 2: Codegen — Wrap Every String
This is the step that would normally take days by hand. translate-kit's codegen command replaces every extracted string with a t() call and adds the necessary imports:
```bash
npx translate-kit codegen
```
It modified 48 files in under a minute. Before and after:
```tsx
// Before
<h2>Where Artists & Seekers</h2>

// After
<h2>{t("home.whereArtistsAndSeekers")}</h2>
```
For server components, it uses getTranslations from next-intl/server. For client components, useTranslations. It figures out which is which from the 'use client' directive.
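The directive check is easy to picture. Here's a hypothetical sketch of that decision — an illustration of the rule, not translate-kit's actual source (the function name is mine):

```typescript
// Sketch: decide which next-intl API a generated file should import,
// based on the 'use client' directive. Next.js requires the directive
// on the first statement of the file, so a first-line check is a
// reasonable approximation.
function pickTranslationApi(source: string): 'useTranslations' | 'getTranslations' {
  const firstLine = source
    .trimStart()
    .split('\n', 1)[0]
    .trim()
    .replace(/;$/, '') // tolerate a trailing semicolon
  const isClient = firstLine === "'use client'" || firstLine === '"use client"'
  return isClient ? 'useTranslations' : 'getTranslations'
}
```

Client files get the hook; everything else is treated as a server component and gets the async `getTranslations` import.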
What Codegen Can't Do
AST-based transformation has limits. Module-level constants with string values — things like sort option arrays or error message maps — don't get transformed because the t() function isn't available outside a component body.
I had two of these: SORT_OPTIONS in the decks page and OAUTH_ERRORS in the login form. The fix was straightforward — change the label property to a labelKey property and call t() at render time:
```ts
// Before
const SORT_OPTIONS = [{ value: 'popular', label: 'Most Popular' }]

// After
const SORT_OPTIONS = [{ value: 'popular', labelKey: 'decks.mostPopular' }]
// In JSX: {t(opt.labelKey)}
```
These edge cases took about 10 minutes to fix manually. On a 489-key project, that's a 98% automation rate.
Step 3: Translate — AI Does the Rest
```bash
npx translate-kit translate
```
This is where it gets interesting. translate-kit sends your English JSON to an AI model (I used GPT-4o-mini) with configurable context and a glossary:
```ts
translation: {
  context: "Mystical tarot reading web application...",
  glossary: {
    "Mystic Cards": "Mystic Cards",
    Pro: "Pro",
    seekers: "consultantes (ES) / consultants (FR)",
  },
  tone: "mystical and inviting",
}
```
All 489 keys were translated into Spanish in 3.4 seconds. Adding French later took another 80 seconds for 494 keys — I'm not sure why that run was so much slower. Total cost across both runs: $0.008.
The glossary is important. Without it, "seekers" — a tarot term for the person receiving a reading — got translated as "buscadores" (Spanish for generic searchers) and "chercheurs" (French for researchers). The correct tarot terms are "consultantes" and "consultants". Domain-specific vocabulary needs human oversight, but the glossary means you only correct it once.
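I haven't read translate-kit's source, but presumably the context, glossary, and tone fields end up in the prompt sent to the model. A hypothetical sketch of how such a prompt might be assembled — the function, interface, and wording below are guesses at the mechanism, not translate-kit's actual prompt:

```typescript
// Hypothetical prompt assembly from a translation config like the
// one above. Illustrates how context, glossary, and tone can steer
// the model; not translate-kit's actual implementation.
interface TranslationConfig {
  context: string
  glossary: Record<string, string>
  tone: string
}

function buildSystemPrompt(cfg: TranslationConfig, targetLanguage: string): string {
  const glossaryLines = Object.entries(cfg.glossary)
    .map(([term, rendering]) => `- "${term}" -> ${rendering}`)
    .join('\n')
  return [
    `Translate the following UI strings into ${targetLanguage}.`,
    `App context: ${cfg.context}`,
    `Tone: ${cfg.tone}`,
    `Glossary (always use these renderings):`,
    glossaryLines,
  ].join('\n')
}
```

The point is that the glossary rides along with every request, so a domain term is pinned once and applied to all 489 keys.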
The Infrastructure
The runtime setup is minimal — three files:
src/i18n/request.ts reads the cookie and loads the right JSON:
```ts
import { cookies } from 'next/headers'
import { getRequestConfig } from 'next-intl/server'

export default getRequestConfig(async () => {
  const store = await cookies()
  const locale = store.get('NEXT_LOCALE')?.value ?? 'en'
  return {
    locale,
    messages: (await import(`../../messages/${locale}.json`)).default,
  }
})
```
next.config.js wraps the existing config with next-intl's plugin (one line).
src/app/layout.tsx becomes async and wraps children in <NextIntlClientProvider>.
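For reference, the config wrap is roughly this — createNextIntlPlugin comes from next-intl/plugin and picks up src/i18n/request.ts by convention. Treat it as a sketch and merge it with whatever your existing config already contains:

```js
// next.config.js — sketch of the one-line wrap (assumes default paths)
const createNextIntlPlugin = require('next-intl/plugin')

const withNextIntl = createNextIntlPlugin() // finds src/i18n/request.ts by convention

/** @type {import('next').NextConfig} */
const nextConfig = {
  // ...your existing config, unchanged
}

module.exports = withNextIntl(nextConfig)
```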
For the language switcher itself, I built a dropdown with inline SVG flag icons (Union Jack, Spanish flag, French tricolour) clipped to a squircle shape. It sets the cookie and calls router.refresh() — the entire page re-renders with the new locale, no full reload.
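The cookie write itself is one line. Here's a hypothetical helper for the switcher's select handler — the helper name and the attribute choices (Path, Max-Age, SameSite) are my assumptions, not something next-intl mandates:

```typescript
// Hypothetical helper: build the NEXT_LOCALE cookie string the
// switcher writes before calling router.refresh(). Attribute values
// are assumptions, not translate-kit or next-intl requirements.
const ONE_YEAR_SECONDS = 60 * 60 * 24 * 365

function buildLocaleCookie(locale: string): string {
  // Path=/ scopes the preference to every route, so the server-side
  // getRequestConfig sees it on any page.
  return `NEXT_LOCALE=${locale}; Path=/; Max-Age=${ONE_YEAR_SECONDS}; SameSite=Lax`
}

// Usage in the dropdown's select handler:
//   document.cookie = buildLocaleCookie('fr')
//   router.refresh()
```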
Testing Without Pain
The biggest risk with i18n is breaking every existing test. If your tests assert against English strings like expect(screen.getByText('Sign Out')), and you swap that string for t("navigation.signOut"), every test fails.
The fix is a test mock that resolves keys against the actual English JSON:
```ts
import { vi } from 'vitest'
import en from '../../messages/en.json'

// Walk a dot-separated key ("signOut" or "read.drawCards") through
// the nested messages object; fall back to the key itself if missing.
function resolveKey(obj: any, key: string): string {
  return key.split('.').reduce((acc, part) => acc?.[part], obj) ?? key
}

function createTranslator(namespace?: string) {
  const ns = namespace ? (en as any)[namespace] : en
  return (key: string) => resolveKey(ns, key)
}

vi.mock('next-intl', () => ({
  useTranslations: (ns?: string) => createTranslator(ns),
}))
Now t("navigation.signOut") returns "Sign Out" in tests. Every existing assertion works unchanged. Zero test rewrites.
I also added a key parity test that loops over all locale files and verifies they have identical key sets with no empty values. Adding a new language automatically gets coverage:
```ts
// getAllKeys flattens nested messages into dot-joined key paths;
// enKeys is getAllKeys(en), and locales holds the non-English files.
function getAllKeys(obj: Record<string, unknown>, prefix = ''): string[] {
  return Object.entries(obj).flatMap(([k, v]) =>
    typeof v === 'object' && v !== null
      ? getAllKeys(v as Record<string, unknown>, `${prefix}${k}.`)
      : [`${prefix}${k}`]
  )
}

for (const { name, data } of locales) {
  it(`${name} has all keys that English has`, () => {
    const missing = enKeys.filter(k => !getAllKeys(data).includes(k))
    expect(missing).toEqual([])
  })
}
```
Adding a New Language
With the infrastructure in place, adding French was five changes:
- Add 'fr' to the valid locales array
- Add "fr" to translate-kit's target locales
- Run npx translate-kit translate
- Add a French flag SVG and locale entry to the switcher component
- Update the test to expect "Français" in the dropdown
The translate step — which is the part that would traditionally require hiring a translator or spending hours with a dictionary — took 80 seconds and cost half a cent.
What I'd Do Differently
Run scan before writing any new component. If you add a component after the initial scan, its strings won't be in en.json. You either need to re-run scan (which may re-extract strings you've already translated) or add keys manually. I hit this with the AuthCTA component — its "Start Free Reading" default prop was invisible to the scanner because it lived in a function parameter, not in JSX.
Set up the glossary early. Domain-specific terms need human review regardless of the AI model. Getting "seeker" wrong in two languages and fixing it after the fact was easy, but catching it in the glossary upfront is better.
Keep metadata in English. Page titles and meta descriptions affect SEO. Until you're ready for proper hreflang tags and localized URLs, keep metadata objects static in English.
The Numbers
| Metric | Value |
|---|---|
| Strings extracted | 489 |
| Files auto-modified by codegen | 48 |
| Manual fixups needed | 2 files |
| Translation time (ES + FR) | 84 seconds |
| Translation cost | $0.008 |
| Test rewrites needed | 0 |
| New test files | 2 |
| Total languages supported | 3 |
The entire implementation — from npm install next-intl to git push — happened in a single Claude Code session. The infrastructure is designed so that adding language number 4, 5, or 10 is a five-minute task.
i18n doesn't have to be a multi-sprint epic. With the right tooling, it's an afternoon.