We had a small microservice to build, a team eager to experiment, and a client willing to invest in innovation. We chose Hanami 2.0. It worked. Then we rewrote it in Rails anyway.

This is not a story about Hanami being bad. It is a story about evaluating framework readiness for production use and making honest trade-off decisions. For a broader framework comparison, see our analysis of Ruby on Rails vs. Django.

What was the scope of the experiment?

The service was simple: a favoriting feature with four API endpoints, extracted from a monolith as part of a microservice architecture.

Why this service was the right candidate:

  • Small enough that a full rewrite to Rails would take one day
  • Not a critical business feature
  • Isolated from the rest of the system

The risk was minimal. The client gave us a green light.

How do you assess if a framework is actively maintained?

My first task was verifying Hanami’s development activity. I checked the main GitHub repository and found almost no updates in six months. That looked alarming.

The reality was more nuanced.

Hanami uses a multi-repo architecture. Unlike Rails (one big mono-repo), Hanami splits into independent components. The main repository just glues them together. Low activity there does not mean low activity overall.

Three gem families power Hanami 2.0:

  • dry-rb - independent libraries, each handling a single responsibility
  • rom-rb - a flexible persistence framework for any data store
  • Hanami components - the application framework layer

To see the real picture, you need to check activity across all three families. When I did, the development pace was intense.

Lesson: Never judge a multi-repo project by its umbrella repository alone. Check the component repos.
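The check itself is easy to script once you have each repository's last-commit date. A minimal Ruby sketch of the decision logic (the repository names and dates below are illustrative, not real activity data):

```ruby
require "date"

# Whole months since the most recent commit in a repository.
def months_stale(last_commit_date, today: Date.today)
  (today - last_commit_date).to_i / 30
end

# Judge a multi-repo project by the freshest activity across ALL
# component repos, not by the umbrella repo alone.
def project_active?(repo_dates, threshold_months: 6, today: Date.today)
  repo_dates.values.any? { |date| months_stale(date, today: today) < threshold_months }
end

# Illustrative data: the umbrella repo looks quiet, but components are active.
repos = {
  "hanami/hanami"     => Date.new(2021, 1, 10), # umbrella: quiet
  "rom-rb/rom"        => Date.new(2021, 6, 20), # component: active
  "dry-rb/dry-system" => Date.new(2021, 7, 1)   # component: active
}

today = Date.new(2021, 7, 15)
umbrella_only = repos.slice("hanami/hanami")
puts project_active?(umbrella_only, today: today) # umbrella alone: false
puts project_active?(repos, today: today)         # all repos: true
```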

What happens when naming conventions mislead you?

Coming from Rails, I saw “Hanami::API” and assumed it was the equivalent of Rails::API: a full framework without views.

Wrong. Hanami::API is a minimal router layer, closer to Sinatra than to a complete application framework. I started building with it and quickly hit walls: excessive configuration, missing features, almost no documentation.
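To illustrate how thin that layer is: a complete Hanami::API application is essentially routing plus a minimal request/response context, in the spirit of the hanami-api README. This is a sketch of a `config.ru` (it requires the hanami-api gem, and the endpoint paths are hypothetical, not the actual service's routes):

```ruby
# config.ru — boot with `rackup`
require "hanami/api"

# Hanami::API gives you routing and thin response helpers,
# comparable to Sinatra — not a full application framework.
# Persistence, validation, and app structure are all up to you.
class FavoritesAPI < Hanami::API
  get "/favorites" do
    json([])
  end

  post "/favorites" do
    json(created: true)
  end
end

run FavoritesAPI
```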

After reaching out to the Hanami core team, I understood the difference. In Hanami, you can enable or disable any component:

  • No database needed? Remove the persistence layer entirely.
  • No web interface? Disable the router.
  • API only? Use the full framework and skip the view layer.

There is no need for a separate “API mode” because the framework is modular by design.
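In Hanami 2, this modularity surfaces as providers: optional components are just files you add or delete. A hedged sketch of what wiring a persistence provider might look like (the provider name, key, and environment variable are illustrative; Hanami 2.0 shipped without a bundled persistence layer, so apps wired rom-rb themselves):

```ruby
# config/providers/persistence.rb — delete this file and the app simply
# has no persistence layer; nothing else in the framework assumes one.
Hanami.app.register_provider(:persistence) do
  prepare do
    require "rom"
  end

  start do
    # DATABASE_URL is an illustrative setting name.
    configuration = ROM::Configuration.new(:sql, ENV["DATABASE_URL"])
    register "persistence.rom", ROM.container(configuration)
  end
end
```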

Lesson: Do not map concepts from one framework onto another. Read the architecture docs before writing code.

Was writing the actual service difficult?

Once I understood the architecture, building the service was straightforward. I worked through the rom-rb guides and started coding.

Minor frustrations existed. As a senior Ruby developer, having to look up the syntax for basic operations like find, or how to set up tests for the router, felt slow. But these were documentation gaps, not framework problems.
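For a sense of what that setup looks like, here is a hedged sketch of wiring a relation with rom-rb's in-memory adapter, loosely following the rom-rb guides (requires the rom gem; the relation and attribute names are illustrative):

```ruby
require "rom"

# Configure a container with one relation backed by the memory adapter.
rom = ROM.container(:memory) do |config|
  config.relation(:favorites) do
    schema do
      attribute :id, ROM::Types::Integer
      attribute :user_id, ROM::Types::Integer
    end
  end
end

favorites = rom.relations[:favorites]
favorites.insert(id: 1, user_id: 42)

# No ActiveRecord-style `find` — restriction is explicit.
favorites.restrict(user_id: 42).to_a
```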

Our existing Rails habits actually helped. We already:

  • Kept business logic in lib/, not in models
  • Used additional abstractions beyond ActiveRecord
  • Kept models thin with only persistence logic

This made the transition to Hanami’s architecture natural. Everything worked out of the box. I had the complete application running before Hanami 2.0 even reached its first alpha release.
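That style of code translates almost one-to-one. A plain-Ruby sketch of the kind of object we keep in lib/ (class and method names are illustrative): because it depends only on an injected repository, it runs unchanged whether persistence is ActiveRecord, rom-rb, or a test fake.

```ruby
# lib/favorites/add_favorite.rb — framework-agnostic business logic.
module Favorites
  class AddFavorite
    Result = Struct.new(:ok, :favorite, :error, keyword_init: true)

    # The repository is injected, so the persistence layer
    # (ActiveRecord, rom-rb, an in-memory fake) is swappable.
    def initialize(repository:)
      @repository = repository
    end

    def call(user_id:, item_id:)
      if @repository.exists?(user_id: user_id, item_id: item_id)
        return Result.new(ok: false, error: :already_favorited)
      end

      favorite = @repository.create(user_id: user_id, item_id: item_id)
      Result.new(ok: true, favorite: favorite)
    end
  end
end

# In-memory fake repository standing in for a real persistence layer.
class FakeRepository
  def initialize
    @rows = []
  end

  def exists?(user_id:, item_id:)
    @rows.include?(user_id: user_id, item_id: item_id)
  end

  def create(user_id:, item_id:)
    row = { user_id: user_id, item_id: item_id }
    @rows << row
    row
  end
end

service = Favorites::AddFavorite.new(repository: FakeRepository.new)
puts service.call(user_id: 1, item_id: 2).ok    # => true
puts service.call(user_id: 1, item_id: 2).error # => already_favorited
```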

Why did we rewrite it in Rails anyway?

Three factors drove the decision:

1. Team onboarding cost. Hanami 2.0 was pre-release. The community was small. Learning resources were scarce. Teaching an entire team a new framework without adequate documentation would slow delivery significantly.

I started the Hanami Mastery project to address this gap with video guides and tutorials. But creating resources and waiting for a team to absorb them are different timelines.

2. Infrastructure duplication. Our Rails services had years of accumulated tooling: scaffold generators, deployment scripts, CI pipelines. None of it was framework-agnostic. Every Hanami service would need these rebuilt from scratch.

3. Framework maturity timeline. Hanami 2.0 was already a year past its targeted release date, with likely another year before a stable version. The core team does incredible work, but they are a small group working largely in their spare time.

This is not a criticism. Open source maintainers building frameworks on nights and weekends deserve immense respect. But for a client’s production system, framework stability is a business requirement, not a philosophical preference.

Each factor alone was manageable. Together, they tipped the balance toward Rails.

Practical Implementation: The USEO Approach

Evaluating new technology for production use requires a structured process. Here is how we approach it at USEO:

  • Start with a low-risk service. Pick something small, non-critical, and easy to rewrite. Never bet the core product on unproven technology.
  • Verify community health beyond star counts. Check commit frequency across all repos, issue response times, and release cadence. A framework with 10k stars but no commits in six months is a red flag.
  • Time-box the experiment. Set a deadline. If the team cannot be productive within that window, switch back. Sunk cost should not drive technical decisions.
  • Separate personal excitement from client risk. Being an early adopter is valuable for learning. Forcing early adoption on a client’s production system is reckless.
  • Document what you learn. Even failed experiments produce valuable knowledge. We wrote internal decision records explaining why we chose Rails over Hanami for this specific case and time.

Hanami remains a strong framework with a sound architecture. The decision was not “Hanami is bad” but “Hanami is not ready for our production needs right now.” That distinction matters.