How I Rebuilt My i18n Libraries to Scale Across Four Programming Languages
After maintaining two popular Ruby gems for internationalization for years, I recently did something that fundamentally changed how I think about building reusable software: I completely abstracted the translation layer. What started as updating some stale packages evolved into a complete architectural rethink that now spans Ruby, JavaScript, Go, and Rust. The breakthrough wasn't just about supporting more languages—it was about separating data from implementation in a way that makes these libraries far easier to maintain and expand. For anyone building libraries or tools meant to be used across different ecosystems, this approach solves a problem that's plagued open source for years: keeping translations and data synchronized across multiple implementations.
The Problem: Translation Chaos Across Packages
If you've ever worked with internationalization (i18n) libraries, you know the pain point immediately. Every language ecosystem has its own package for country names, time zones, and other locale-specific data, and each one embeds the translations directly into the package itself. This creates a maintenance nightmare because when translations need updating—and they do, constantly, as country names change, new locales are added, or errors are discovered—you have to update each package individually. I had been maintaining two Ruby gems around country translations and time zone translations for many years, and while they were relatively successful in the community, I had let them get stale over the past year. The thought of updating translations across multiple codebases, ensuring consistency, and managing different release cycles for essentially the same data was exhausting. This hodgepodge approach to translation support meant that users of one package might have updated data while users of another were working with outdated information, creating inconsistencies across projects.
The traditional approach treats translations as tightly coupled to the implementation code, which makes sense from a simple packaging perspective but creates massive overhead at scale. When you think about it, the translation data—the actual JSON or YAML files containing country names in different languages, for instance—has nothing to do with whether you're using Ruby, JavaScript, or Go. The data is the data. But because we've historically bundled everything together, we end up duplicating that data across every language-specific implementation, manually syncing updates, and introducing opportunities for drift and inconsistency. This isn't just inefficient; it's architecturally wrong. It violates the basic principle of separating concerns and creates unnecessary coupling between what should be independent components.
The Breakthrough: Separating Data from Implementation
The solution hit me while I was working with AI-assisted development tools that let me move incredibly quickly across different codebases and languages. I realized I could abstract the entire translation layer into its own GitHub repository, completely separate from any implementation code. This repository is essentially just JSON files—all the translations for countries, time zones, and other locale-specific data, organized by locale and kept in one central, authoritative location. From there, I set up this translation repository as a package in its own right, supporting multiple package management systems: it's an npm module, a Ruby gem, a Go module, and a Rust package, all generated from the same source of truth. The beauty of this approach is that the translation repository doesn't care about implementation details—it's just data, versioned and distributed through standard package managers.
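To make that concrete, here is a sketch of what a single locale file in the data repository might look like: a hypothetical `countries/fr.json` mapping ISO country codes to French country names (the real repository's schema may differ), parsed here with Ruby's standard JSON library.

```ruby
require "json"

# Hypothetical contents of a per-locale data file such as countries/fr.json:
# a flat map from ISO 3166 country code to the translated country name.
raw = <<~JSON
  {
    "DE": "Allemagne",
    "FR": "France",
    "JP": "Japon"
  }
JSON

countries_fr = JSON.parse(raw)
puts countries_fr["JP"]  # prints "Japon"
```

Because the file is plain data with no code attached, every implementation, whatever its language, can parse it with nothing more than a standard JSON library.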
This architectural shift fundamentally changes the economics of maintaining i18n libraries across multiple languages. Now, when I want to add support for a new programming language, I don't need to duplicate all the translation data and figure out how to keep it in sync. Instead, I create a thin implementation layer in that language that imports the translation package and provides language-specific APIs and conventions. I've already done this for four languages: I updated my original Ruby gems, added npm packages for both country and time zone translations, created Go packages for both, and even added Rust packages. Each of these implementation packages is relatively small because it's focused solely on providing idiomatic interfaces for that language, while the heavy lifting of storing and versioning translations happens in the central repository. This separation means I can add translation support to pretty much any other language quite easily by just creating a new implementation package that consumes the shared data.
The Technical Architecture: How It Actually Works
Let me walk through how this works in practice, because the details matter for anyone thinking about applying this pattern to their own libraries. The core translation repository contains all locale data organized in a consistent JSON structure. This structure is language-agnostic—it's just data describing countries, their translations in various locales, time zone information, and so on. This repository has its own versioning scheme using semantic versioning, so when translations are updated, corrected, or expanded, the version number increments appropriately. Each language-specific package manager can consume this repository as a dependency, which means when I publish an update to the translation data, every implementation package can simply bump its dependency version to pull in the latest translations.
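On the consuming side, an implementation package only needs to know where the data dependency installs its files and how to read them. The loader below is a sketch of that idea in Ruby, with a temporary directory standing in for the installed data package (all names and paths here are hypothetical, not the actual gems' internals):

```ruby
require "json"
require "fileutils"
require "tmpdir"

# Hypothetical loader from an implementation package: it is pointed at the
# data dependency's root directory and lazily reads one JSON file per locale.
class TranslationData
  def initialize(root)
    @root  = root  # in a real gem, the data package's install path
    @cache = {}    # locale => parsed Hash, so each file is read only once
  end

  def countries(locale)
    @cache[locale] ||= JSON.parse(
      File.read(File.join(@root, "countries", "#{locale}.json"))
    )
  end
end

# Simulate the installed data package with a temporary directory.
root = Dir.mktmpdir
FileUtils.mkdir_p(File.join(root, "countries"))
File.write(File.join(root, "countries", "fr.json"), '{"DE":"Allemagne"}')

data = TranslationData.new(root)
puts data.countries("fr")["DE"]  # prints "Allemagne"
```

The implementation layer stays thin because everything version-related lives in the data package; bumping the dependency swaps in new files under the same paths.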
The implementation layers are where language-specific considerations come into play. For the Ruby gems, I provide a familiar object-oriented interface with methods that Ruby developers expect. The JavaScript npm packages use patterns familiar to Node and browser developers. The Go packages follow Go conventions for package structure and error handling. The Rust packages leverage Rust's type system and ownership model. Each of these is a thin wrapper around the core translation data, providing idiomatic APIs while ensuring that everyone is working with the same underlying information. This means a developer using the Ruby gem and another using the npm package are guaranteed to have access to exactly the same translations if they're both using compatible versions, eliminating the inconsistency problems that plagued the old approach.
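A thin wrapper in the Ruby implementation might look roughly like this (a hypothetical API with inlined data standing in for the shared JSON; the real gems' interfaces may differ). The same underlying data consumed from the npm, Go, or Rust package would get an equally idiomatic surface in those languages:

```ruby
# Hypothetical idiomatic Ruby surface over the shared data. TRANSLATIONS
# stands in for JSON loaded from the central data package.
TRANSLATIONS = {
  "DE" => { "en" => "Germany", "fr" => "Allemagne" },
  "JP" => { "en" => "Japan",   "fr" => "Japon" }
}.freeze

class Country
  def initialize(code)
    @code = code.to_s.upcase  # accept "de", "DE", or a symbol
  end

  # Ruby-flavored API: a keyword argument with a default, returning nil
  # for an unknown locale or country rather than raising.
  def name(locale: "en")
    TRANSLATIONS.dig(@code, locale)
  end
end

puts Country.new("de").name(locale: "fr")  # prints "Allemagne"
```

The wrapper contains no translation content of its own, which is exactly what keeps every language's implementation small and interchangeable.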
The Maintenance Win: Updates Everywhere, Instantly
The real payoff of this architecture becomes clear when it's time to update translations. In the old world, if I discovered that a country name translation was incorrect in French, I'd need to update that translation in the Ruby gem, then remember to update it in any other implementations I was maintaining, test each one separately, and coordinate releases. In practice, this meant updates often only happened in one package, leading to drift over time. Now, I update the translation in the central repository, bump the version, and publish. Every implementation package that updates its dependency automatically gets the fix. This isn't just convenient—it's a fundamental improvement in how open source libraries can maintain data consistency across ecosystems.
This approach also makes contributions from the community significantly easier. If someone wants to add support for a new locale or correct a translation, they can submit a pull request to the central translation repository without needing to know Ruby, JavaScript, Go, or Rust. The translation repository is just JSON files, which anyone can edit and verify. Once that PR is merged and a new version is published, all the implementation packages benefit automatically. Similarly, if someone wants to add support for a new programming language—say, Python or Elixir—they don't need to worry about gathering all the translation data. They just create an implementation package that consumes the existing translation package and provides a Pythonic or Elixir-friendly API. This dramatically lowers the barrier to expanding the ecosystem and ensures that expansion doesn't fragment the data.
What AI-Assisted Development Enabled
I want to be transparent about something that made this possible: AI-assisted development tools. I've been writing code for 25 years, and I can confidently say that these tools changed what's feasible for a single developer to accomplish. Moving quickly across four different programming languages, each with its own conventions, package management systems, and testing requirements, would have been prohibitively time-consuming before. The AI assistance let me generate boilerplate, adapt patterns across languages, and handle the mechanical aspects of setting up package structures so I could focus on the architectural decisions and ensuring everything worked correctly. This isn't about replacing developer skill—it's about amplifying what a skilled developer can accomplish. The architecture I described required deep understanding of separation of concerns, package management, and versioning, but executing that architecture across multiple ecosystems became practical in a way it simply wasn't before.
Lessons for Building Cross-Language Libraries
If you're building libraries or tools meant to be used across different programming ecosystems, this pattern is worth considering. The key insight is identifying what's truly data versus what's implementation. Translations, configuration schemas, validation rules, taxonomy definitions—these are all candidates for extraction into language-agnostic repositories. Once you've made that separation, you can version the data independently, make it consumable through standard package managers, and create thin implementation layers that provide idiomatic interfaces for each language. This approach trades some initial architectural complexity for massive long-term maintainability wins and makes it practical to support ecosystems you might not have the resources to support otherwise.
The other lesson is about the compounding benefits of good architecture. Now that I have this foundation in place, adding support for additional data types or additional languages has become straightforward. If I wanted to add currency translations, I could create a new section in the translation repository and update each implementation package to expose it. If I wanted to support PHP or Swift, I could create new implementation packages without touching the data layer. Each addition builds on the same foundation rather than requiring parallel work across disconnected codebases. This is what scalable open source looks like—architecture that makes future work easier rather than harder.
Moving Forward
These libraries—spanning country translations and time zone translations across Ruby, JavaScript, Go, and Rust—are now in a position to evolve more rapidly and more consistently than ever before. The architecture supports growth in both directions: more data types and more programming languages, without the exponential increase in maintenance burden that the old approach would have required. For me, this represents the kind of software building I care about: systems that are well-architected, maintainable, and genuinely useful across different contexts. It's not about building something flashy or trendy; it's about building something solid that solves a real problem and can continue solving that problem as the ecosystem evolves.
The i18n space might seem like a solved problem—after all, there are already translation libraries for most languages. But this project reminded me that there's always room for better architecture, for rethinking assumptions, and for building things that are not just functional but elegant in how they handle complexity. That's the kind of building that keeps me engaged after 25 years of writing software.
The Repositories
Data Layer (shared source of truth)
- i18n-country-translations-data — Country translation data for all locales
- i18n-timezones-data — Timezone translation data for all locales
Ruby
- i18n-country-translations — Ruby gem for country translations
- i18n-timezones — Ruby gem for timezone translations
JavaScript
- i18n-country-translations-js — npm package for country translations
- i18n-timezones-js — npm package for timezone translations
Go
- i18n-country-translations-go — Go module for country translations
- i18n-timezones-go — Go module for timezone translations
Rust
- i18n-country-translations-rs — Rust crate for country translations
- i18n-timezones-rs — Rust crate for timezone translations