Sep 27, 2025

Ultimate Guide to Scaling Web Applications

Dariusz Michalski

CEO

Learn essential strategies for scaling Ruby on Rails applications, including performance monitoring, database optimization, and caching techniques.

Scaling a Ruby on Rails application requires addressing performance bottlenecks, database optimisation, and localisation challenges. Here's what you need to know:

  • Scalability matters: Ensure your app handles growing user traffic without slowing down or crashing.

  • Common challenges: N+1 queries, memory bloat, background job delays, and session management can hinder performance.

  • Localisation for Switzerland: Handle CHF currency, Swiss date/time formats (DD.MM.YYYY, 24-hour), number formatting (1'000.00), and multilingual needs (German, French, Italian, Romansh).

  • Performance tools: Use tools like Rack MiniProfiler, Bullet, and Skylight for monitoring and fixing inefficiencies.

  • Database strategies: Optimise queries, add indexes, use read replicas, and consider sharding or partitioning for large datasets.

  • Caching: Use fragment, action, and HTTP caching with tools like Redis or Memcached to improve load times.

  • Background jobs: Offload tasks with Sidekiq, monitor queue depths, and prioritise critical jobs.

  • Deployment: Use Docker and Kubernetes for scalability, and implement load balancing with Nginx or HAProxy.

Key takeaway: Combine these strategies to ensure your Rails app remains fast, reliable, and scalable as it grows.


Finding and Fixing Performance Bottlenecks

Identifying performance bottlenecks in Ruby on Rails applications is all about using the right tools and techniques to address inefficiencies before they spiral out of control. The sooner you catch these issues during development and maintain continuous monitoring in production, the smoother your application will run. Let’s dive into some effective strategies for pinpointing and resolving these challenges.

Performance Monitoring Tools for Ruby on Rails


To tackle performance issues effectively, you need tools that provide deep insights into your application’s behaviour.

During development, Rack MiniProfiler is a must-have. It gives you real-time performance profiling directly in your browser, showing SQL query times, Ruby execution time, memory usage, and warnings about N+1 queries. Pair this with Bullet, which is excellent for detecting N+1 queries, unused eager loading, and missing counter caches. It even offers actionable recommendations to fix these ORM-related problems.
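As a sketch of how these two gems are typically wired up in development (the gem names are real; the exact option set shown is illustrative, not exhaustive):

```ruby
# Gemfile — development-only profiling gems
group :development do
  gem "rack-mini-profiler"   # in-browser timing badge on every page
  gem "bullet"               # N+1 and eager-loading detection
end

# config/environments/development.rb — a minimal Bullet setup
Rails.application.configure do
  config.after_initialize do
    Bullet.enable        = true
    Bullet.bullet_logger = true   # write findings to log/bullet.log
    Bullet.add_footer    = true   # show warnings in the browser footer
  end
end
```

Rack MiniProfiler needs no configuration to start: once the gem is loaded, its badge appears in the corner of every rendered page in development.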

For more targeted profiling, StackProf provides method-level insights using sampling-based profiling, while Memory Profiler helps you track memory allocations to uncover bloat or leaks.

In production, tools like Skylight and AppSignal are invaluable. They offer low-overhead profiling, endpoint-level performance breakdowns, and detailed database query analysis. They also include exception tracking to help you stay on top of errors.

If you’re working in an environment with strict data residency requirements, a self-hosted option like Rails Performance Dashboard is a great alternative. It lets you monitor request performance, background jobs, slow endpoints, and errors - all while keeping your data in-house.

Reading Logs and Metrics

Once you’ve set up monitoring tools, the next step is to dig into the logs and metrics they provide. Start by looking at response time patterns. Endpoints with consistently high response times often point to inefficiencies in database queries or Ruby code.

For database performance, focus on query execution times, frequency, and memory allocation trends. These metrics can reveal areas that need optimisation or highlight potential memory leaks.

Pay attention to error rates and patterns as well. A spike in error codes or timeouts is often a sign of system degradation. Similarly, background job metrics - like queue depth, processing times, and failure rates - can give you early warnings about scalability issues.

Setting Priorities for Issue Resolution

When it comes to fixing performance issues, prioritisation is key. Focus on user-facing features that impact peak traffic times. For example, enterprise applications might experience heavy usage during business hours, while consumer apps may see a spike in the evenings.

Resource consumption analysis can help you decide where to focus your efforts. Database query optimisations often deliver significant improvements with minimal changes, while issues like memory leaks or CPU-intensive operations may require immediate attention. In some cases, offloading tasks to background jobs or implementing caching strategies can make a big difference.

Always keep user experience at the forefront. Optimising endpoints that directly affect page load times and conversion rates should be a top priority.

Benchmarking tools can guide your decisions with hard data. For instance, Benchmark-ips measures iterations per second, allowing you to compare different approaches statistically. Flamegraphs, on the other hand, provide a visual breakdown of where your application spends its time, making it easier to identify problem areas and explain them to your team.

To catch regressions early, consider integrating Derailed Benchmarks into your CI pipeline. This ensures performance issues are flagged before they make it into production.

Database and Active Record Optimisation

Once you've identified bottlenecks through monitoring, the next logical step is to refine how your Rails application interacts with the database. As your Ruby on Rails app scales, database performance often becomes the main hurdle. The trick is to understand both how Active Record works and what your database can handle.

Active Record Query Optimisation

One of the most effective ways to improve performance is by addressing inefficient queries. For example, loading 100 users and then reading each user's company without eager loading triggers 101 queries - one for the users plus one per company. Adding includes(:company) reduces this to just 2 queries.
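A minimal sketch of the difference (the User/Company models are hypothetical; the query counts are what Active Record would issue):

```ruby
# Hypothetical models: User belongs_to :company.

# N+1: one query for the users, then one query per user's company.
User.limit(100).each do |user|
  user.company.name            # fires SELECT ... FROM companies 100 times
end

# Eager loading: one query for users, one for all their companies.
User.includes(:company).limit(100).each do |user|
  user.company.name            # already loaded in memory, no extra query
end
```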

Database indexing is another powerful tool for speeding up queries. PostgreSQL's EXPLAIN ANALYZE helps you examine query execution plans and spot missing indexes. If you're often querying users by email and registration date, creating a composite index on these columns can cut query times from hundreds of milliseconds to under 10 milliseconds.
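A migration along these lines (table and column names hypothetical) adds the composite index; you can then confirm the planner actually uses it with EXPLAIN ANALYZE in psql:

```ruby
# db/migrate/xxxx_add_email_created_at_index_to_users.rb
class AddEmailCreatedAtIndexToUsers < ActiveRecord::Migration[7.0]
  def change
    # Column order matters in a composite index: put the
    # equality-filtered column (email) before the range-filtered
    # one (created_at).
    add_index :users, [:email, :created_at]
  end
end

# In psql, verify the index is picked up:
#   EXPLAIN ANALYZE SELECT * FROM users
#   WHERE email = 'a@example.com' AND created_at > '2025-01-01';
```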

For processing large datasets, use find_each to handle records in batches (default: 1,000 records). This reduces memory usage significantly. Additionally, limit the data retrieved by using select to fetch only the columns you need, saving memory and reducing network overhead.
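Combined, the two techniques look roughly like this (NewsletterMailer is a hypothetical example job):

```ruby
# Stream users in batches of 1,000 (find_each's default), fetching
# only the columns this task actually needs:
User.select(:id, :email).find_each(batch_size: 1_000) do |user|
  NewsletterMailer.weekly(user).deliver_later   # hypothetical mailer
end
```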

Counter caches are another way to optimise. Instead of running COUNT queries repeatedly, you can add a posts_count column to the users table and let Rails maintain the count automatically. This reduces query complexity from O(n) to O(1), which is especially useful if you're frequently displaying counts like the number of posts per user.
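A sketch of the standard two-part setup, assuming a User has_many :posts association:

```ruby
# Migration: add the cache column, then backfill existing rows once.
class AddPostsCountToUsers < ActiveRecord::Migration[7.0]
  def change
    add_column :users, :posts_count, :integer, default: 0, null: false
    reversible do |dir|
      dir.up do
        User.find_each { |u| User.reset_counters(u.id, :posts) }
      end
    end
  end
end

# Model: with this option, Rails keeps users.posts_count in sync
# automatically on every create and destroy of a Post.
class Post < ApplicationRecord
  belongs_to :user, counter_cache: true
end
```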

These query-level improvements lay the groundwork for broader database scaling strategies.

Database Scaling Methods

Scaling your database can take several forms. Vertical scaling - adding more CPU, RAM, or faster storage - can provide immediate relief but comes with physical and cost limitations.

For read-heavy applications, read replicas are a more practical approach. PostgreSQL's streaming replication allows you to create read-only copies of your primary database. Starting with Rails 6, you can configure multiple databases, enabling SELECT queries to route automatically to read replicas.
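A sketch of the Rails 6+ multiple-database setup (the primary/primary_replica names and the Report model are illustrative):

```ruby
# config/database.yml defines a writer and a replica, e.g.:
#   production:
#     primary:
#       <<: *default
#     primary_replica:
#       <<: *default
#       replica: true

# app/models/application_record.rb
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true
  connects_to database: { writing: :primary, reading: :primary_replica }
end

# Reads can be routed explicitly...
ActiveRecord::Base.connected_to(role: :reading) do
  Report.heavy_aggregation   # hypothetical read-only query
end

# ...or automatically, by enabling the built-in selector middleware:
# config.active_record.database_selector = { delay: 2.seconds }
```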

When a single database server can't handle your dataset, database sharding becomes necessary. This involves splitting data horizontally across multiple instances based on a sharding key, such as geographic regions or user ID ranges. While sharding can handle massive datasets, it introduces complexity - cross-shard queries become tricky, and you may lose some referential integrity.

Tuning your connection pool is another critical step. Connection pooling ensures efficient use of database resources by reusing existing connections. Rails uses a connection pool by default, but adjusting the pool size to match your app's concurrency can make a big difference. For example, a web server handling 50 concurrent requests might perform well with a pool size of 10-15 connections.

Managing Large Datasets

Handling millions of records requires different techniques than smaller datasets. Traditional pagination becomes slow because OFFSET forces the database to scan and discard every skipped row before returning the page you asked for. Instead, use cursor-based pagination with indexed columns like timestamps or IDs to maintain consistent performance regardless of how deep the page is.
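In Active Record the pattern is Post.where("id > ?", cursor).order(:id).limit(page_size); here is the same logic as a self-contained pure-Ruby sketch, with the data stubbed in memory so the mechanics are visible:

```ruby
# Pure-Ruby stand-in for cursor-based (keyset) pagination over an
# indexed, ordered id column. Stub data replaces the posts table.
RECORDS = (1..25).map { |id| { id: id, title: "Post #{id}" } }

def next_page(records, after_id:, page_size: 10)
  # Equivalent of: WHERE id > :after_id ORDER BY id LIMIT :page_size
  records.select { |r| r[:id] > after_id }
         .sort_by { |r| r[:id] }
         .first(page_size)
end

page1  = next_page(RECORDS, after_id: 0)        # ids 1..10
cursor = page1.last[:id]                        # client keeps this value
page2  = next_page(RECORDS, after_id: cursor)   # ids 11..20
```

Unlike OFFSET, the cursor query only touches rows after the remembered id, so page 10,000 costs the same as page 1.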

For bulk operations or data migrations, batch processing is essential. Rails' find_in_batches method allows you to process records in manageable chunks. For instance, with a table containing 5 million records, processing in batches of 1,000 records can keep memory usage around 50 MB, compared to several gigabytes if you load everything at once.

To keep your primary tables lean, consider archival strategies. Moving older data to separate archive tables or databases can significantly improve performance. For example, moving orders older than two years to an archive database could reduce your main orders table from 10 million records to just 2 million, speeding up queries.

Partitioning is another way to optimise large datasets. In PostgreSQL, partitioning tables by date ranges can improve performance for time-series data. For example, partitioning a table by month allows queries for recent data to skip older partitions entirely, reducing I/O and query time.
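Rails migrations have no built-in helper for partitioned tables, so dropping to raw SQL via execute is a common approach. A sketch for monthly range partitions (the events table and its columns are hypothetical):

```ruby
class PartitionEventsByMonth < ActiveRecord::Migration[7.0]
  def up
    execute <<~SQL
      CREATE TABLE events (
        id          bigserial,
        occurred_at timestamptz NOT NULL,
        payload     jsonb
      ) PARTITION BY RANGE (occurred_at);

      -- One child table per month; queries filtered on occurred_at
      -- skip every partition outside the requested range.
      CREATE TABLE events_2025_09 PARTITION OF events
        FOR VALUES FROM ('2025-09-01') TO ('2025-10-01');
    SQL
  end

  def down
    execute "DROP TABLE events"
  end
end
```

New monthly partitions are typically created ahead of time by a scheduled job, so inserts never land without a matching partition.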

Finally, managing concurrent queries becomes critical under high traffic. PostgreSQL's default max_connections setting of 100 may not suffice for high-traffic apps. However, simply increasing this limit can overwhelm the database. Tools like PgBouncer can help by pooling hundreds of application connections into a smaller number of database connections, maintaining performance while supporting higher concurrency levels.

Caching and Background Job Systems

To keep your server running smoothly and speed up response times, consider implementing caching and background job systems. These tools work in tandem: caching serves up frequently requested data quickly, while background jobs handle tasks behind the scenes without interrupting user requests.

Setting Up Caching for Faster Response Times

Caching is all about storing commonly accessed data in memory, which drastically reduces the time it takes to deliver responses. Rails provides several caching options that you can layer together for even better performance.

  • Fragment caching: This targets specific parts of a webpage that don’t change often. For example, on a product listing page, while prices might update daily, descriptions often stay the same. Caching these static sections can make rendering faster, especially for pages with complex layouts.

  • Action caching: This takes it a step further by caching entire controller actions (note that since Rails 4 this lives in the separate actionpack-action_caching gem rather than in Rails core). If your homepage mostly displays static content, you can serve the whole page from memory, cutting down on database queries.

  • HTTP caching: This works with browsers and CDNs by setting cache headers like Cache-Control. For instance, you can tell browsers to temporarily store static assets, reducing the need for repeated server requests from returning visitors.
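As a sketch of two of these layers in Rails terms (controller name and duration are illustrative):

```ruby
# Fragment caching in a view (ERB) — keyed on the record, so it
# expires when the product changes:
#   <% cache product do %>
#     <%= render product %>
#   <% end %>

# HTTP caching from a controller action:
class PagesController < ApplicationController
  def show
    # Sends "Cache-Control: public, max-age=43200" so browsers and
    # CDNs can reuse the response for 12 hours.
    expires_in 12.hours, public: true
    # ... render the page ...
  end
end
```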

When it comes to tools, Redis is a popular choice due to its speed and flexible data handling. Setting it up as your cache store in production, along with connection pooling, ensures smooth handling of concurrent requests. Alternatively, Memcached is a solid option for straightforward key-value caching, especially if you don’t need advanced data structures.

To keep cached data fresh, use versioned cache keys - like adding updated_at timestamps - so outdated data gets automatically refreshed.
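A sketch of that pattern with Rails.cache.fetch; cache_key_with_version is the real Rails 5.2+ helper that embeds updated_at, while expensive_summary_for is a hypothetical slow computation:

```ruby
def cached_product_summary(product)
  # The key includes the record's updated_at, so saving the product
  # changes the key and the stale entry is simply never read again.
  Rails.cache.fetch(product.cache_key_with_version) do
    expensive_summary_for(product)   # hypothetical slow computation
  end
end
```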

Scaling Background Job Processing

Background jobs are perfect for tasks that don’t need to run immediately, like sending emails, processing images, or exporting data. These jobs free up your application to respond to users faster.

Sidekiq is a powerful tool for this. It uses threads within a single process, allowing it to handle many jobs at once without hogging memory. You can prioritise tasks by setting up multiple queues with different priority levels. For example, critical tasks like sending user notifications can be processed first, even during busy periods.
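A sketch of weighted queues plus a job pinned to the critical queue (queue names and weights are illustrative):

```ruby
# config/sidekiq.yml — :critical is checked roughly five times as
# often as :low, so notifications jump the queue under load:
#   :queues:
#     - [critical, 5]
#     - [default, 3]
#     - [low, 1]

# A job pinned to the critical queue (Sidekiq 6.3+ syntax):
class NotificationJob
  include Sidekiq::Job
  sidekiq_options queue: :critical, retry: 5

  def perform(user_id)
    # look up the user and deliver the notification ...
  end
end
```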

How many workers you need depends on your jobs and server capacity. CPU-heavy tasks, like image processing, might benefit from a worker count that matches your server’s CPU cores. On the other hand, I/O-heavy tasks can usually handle more workers. Regularly check queue depths and processing times to fine-tune your setup.

Sidekiq also has built-in features like exponential backoff for job retries, which is handy for handling temporary issues like API timeouts. For jobs that fail repeatedly, use dead job queues to review and fix problematic tasks without disrupting active queues.

Monitoring and Managing Job Queues

To keep your background job system running smoothly, monitoring is essential. Pay attention to job processing rates, failure rates, and queue depths across different priority levels.

Tools like New Relic and Skylight provide detailed insights into background job performance. They can help you track execution times, identify failures, and monitor resource usage, ensuring potential problems are caught early.

Keep an eye on queue depths and set up alerts for metrics like processing times and failure rates. This helps you address backlogs and inefficiencies before they affect users. Memory usage is another critical area - spotting leaks or inefficiencies early can save you from bigger issues later.

For error tracking, tools like Sentry are invaluable. They provide detailed context for diagnosing job failures, making troubleshooting much easier.

Balancing resources between background jobs and web requests is crucial, especially on shared servers. During peak traffic times, like lunch hours (12:00–13:00) or early evening (17:00–19:00) in Switzerland, you might need to adjust CPU usage for background workers to ensure web requests remain responsive. For high-demand scenarios, dedicating separate servers for background processing can help maintain performance.

Don’t forget regular maintenance - clearing out old completed jobs and keeping an eye on dead job queues will keep your system efficient. With these strategies in place, you’ll be ready to tackle load balancing and deployment with confidence.

Deployment and Load Balancing Methods

Once your caching and background jobs are running smoothly, the next challenge is managing traffic surges and server reliability. To build a scalable Ruby on Rails application, you need effective deployment strategies and dependable load balancing.

Containerisation and Orchestration

Docker simplifies Rails application deployment by packaging everything into a consistent environment. With a well-defined Dockerfile, you can specify the Ruby version, install dependencies, and set up the runtime environment. This consistency reduces the risk of environment-specific problems that could lead to downtime.

Kubernetes takes this concept to the next level by orchestrating how containers operate in production. It can automatically scale your application, perform health checks, and replace failed containers to keep things running smoothly. For example, during peak hours, Kubernetes can add extra container instances to handle the increased traffic.

Kubernetes also lets you set CPU and memory limits, ensuring that background jobs don’t interfere with web response times. This level of control is especially useful for businesses that experience heavy loads during standard working hours.

Once your containers are orchestrated, the next step is implementing load balancing to distribute traffic efficiently.

Load Balancing Techniques

After containerising your application, load balancing ensures that no single server is overwhelmed. Tools like Nginx and HAProxy offer excellent options, each with unique strengths.

Nginx is a versatile choice that handles static assets while also balancing dynamic requests. It can also manage SSL termination, which reduces the workload on backend servers. This dual functionality speeds up the delivery of assets like CSS, JavaScript, and images, enhancing the user experience.

If your application stores session data locally instead of using a shared store like Redis, maintaining session persistence becomes critical. Nginx can manage this using methods like IP hashing or cookie-based routing, ensuring users consistently connect to the same server to preserve their session state.

HAProxy, on the other hand, excels with its advanced load balancing algorithms and robust health-checking features. It can detect slow or unresponsive backend servers and redirect traffic to more reliable ones. Additionally, its detailed statistics provide real-time insights into server performance and traffic distribution.

For organisations managing sensitive data, HAProxy’s comprehensive logging capabilities can help meet compliance requirements by maintaining a detailed audit trail of requests and responses.

Regular health checks are essential to quickly identify server issues without creating unnecessary overhead. Configuring connection and response timeouts tailored to your application's needs can further enhance both reliability and responsiveness.

Deploying on Reliable Platforms

With containerisation and load balancing in place, selecting a dependable deployment platform is the final step to ensure scalability.

Heroku simplifies Ruby on Rails deployments by automating many tasks and dynamically scaling resources. It integrates seamlessly with standard Rails tools and gems, making it a convenient choice for development teams.

One standout feature of Heroku is its ability to create temporary review apps for testing changes before they go live. This is especially helpful for teams working across different time zones, as it supports asynchronous testing and collaboration.

Heroku also handles SSL certificate management through its Heroku SSL service, ensuring secure connections without requiring in-depth DevOps expertise.

For database needs, Heroku Postgres offers automated backups, point-in-time recovery, and scalable performance. Routine maintenance tasks, like applying security patches, are scheduled during low-traffic periods to minimise disruptions.

Heroku’s Scheduler integrates well with Rails' rake tasks, making it easy to run periodic jobs like data cleanup or report generation during off-peak hours.

For applications with strict compliance needs, Heroku’s Private Spaces provide enhanced security with network isolation and dedicated runtime environments.

Monitoring is straightforward with Heroku’s metrics dashboard, which tracks key performance indicators such as response times, throughput, and error rates. Automated alerts notify you of any issues, allowing for proactive problem-solving.

Finally, Heroku’s usage-based pricing model offers flexibility. You can scale resources up during busy periods and scale down when traffic slows, optimising both costs and performance.

Conclusion and Next Steps

Scaling Rails applications requires tackling bottlenecks, fine-tuning databases, leveraging caching, and streamlining deployment processes. The strategies outlined in this guide offer a reliable starting point for managing increased traffic while keeping performance steady as your application evolves. These methods build on the principles discussed earlier, providing practical steps for growth.

Key Takeaways from the Guide

A few key strategies stand out:

  • Monitoring: Early detection of bottlenecks through robust monitoring tools.

  • Database Optimisation: Improving Active Record efficiency by adding proper indices, addressing N+1 queries, and employing eager loading techniques.

  • Caching: Quick performance improvements with minimal implementation effort.

  • Background Jobs: Tools like Sidekiq prevent slow tasks from affecting user experience.

  • Deployment: Using Docker and Kubernetes for scalable and efficient traffic handling.

Together, these methods create a solid framework for scaling Rails applications without needing significant overhauls.

How USEO Can Help

USEO takes these strategies to the next level, transforming technical expertise into real business growth. Their services align with Swiss standards for currency, data formatting, and data protection, ensuring compliance and relevance.

With more than a decade of experience in Ruby on Rails development, USEO excels in crafting and scaling applications that grow alongside your business. Their expertise spans architecture design and ongoing performance optimisation, ensuring your app remains robust and adaptable.

Their technical skill set covers the entire stack:

  • Rails for back-end development

  • Modern front-end frameworks like React and Vue.js

  • Database management with PostgreSQL and MySQL

  • Cloud hosting for scalable infrastructure

USEO also offers flexible collaboration models to meet your needs. Whether you’re looking for end-to-end project delivery, support for your IT team, or dedicated Ruby specialists, they tailor their approach to fit your goals and budget.

What sets them apart is their commitment to high code quality. Through rigorous engineering practices, thorough code reviews, and automated testing, they ensure that scaling doesn’t compromise your app’s stability. This disciplined approach prevents technical debt, keeping your application future-proof.

For Swiss businesses aiming to scale their Ruby on Rails applications effectively, USEO provides the expertise and dependability to overcome technical hurdles while staying focused on achieving business success.

FAQs

What are the best ways to optimise database performance when scaling a Ruby on Rails application?

To keep your Ruby on Rails application running smoothly as it grows, focus on improving database performance. Start by adding indexes to columns that are queried often - this simple step can make data retrieval much quicker. Be mindful of N+1 query issues, and optimise your Active Record queries by using eager loading when necessary.

Another key approach is implementing caching strategies. You can use fragment caching or low-level caching tools like Redis or Memcached to ease the load on your database. Additionally, make it a habit to regularly review query performance. Identifying and resolving bottlenecks early on will keep your app responsive and efficient, even as user traffic increases.

How can I manage background jobs to keep my Rails app responsive during high traffic?

To keep your Rails application running smoothly under heavy traffic, it's smart to organise your background jobs into priority queues. Group tasks based on their urgency and resource needs - think high, medium, and low priority. This way, critical operations get handled first, minimising delays where it matters most.

Tools like Sidekiq paired with ActiveJob make managing these queues straightforward. They’re scalable and help prevent job backlogs, keeping your app responsive even during traffic surges. Don’t forget to regularly monitor and fine-tune job performance to ensure everything stays on track.

What should I consider when localising a Ruby on Rails application for users in Switzerland?

When localising a Ruby on Rails application for the Swiss market, you need to consider the linguistic diversity of the country. Switzerland has four official languages: German, French, Italian, and Romansh (with Swiss German dialects dominating everyday speech). Your application should support these languages and make it simple for users to switch between them.

Pay close attention to Swiss-specific formats. For currency, use the Swiss Franc (CHF). Dates should follow the DD.MM.YYYY format, and time should be displayed using the 24-hour clock. Numbers require a single quote as the thousand separator (e.g., 1'234.56). Measurements should use the metric system, and temperatures should be displayed in Celsius.

Lastly, ensure your application aligns with Swiss cultural norms and legal standards. This means respecting regional preferences and creating content that resonates with users from all linguistic and cultural regions of Switzerland. Compliance with local regulations is also essential to ensure your application feels trustworthy and relevant.


Have a project idea? Let's talk and bring it to life

Your highly qualified specialists are here. Get in touch to see what we can do together.


Dariusz Michalski, CEO

dariusz@useo.pl


Konrad Pochodaj, CGO

konrad@useo.pl


Start a Project
our Office

ul. Ofiar Oświęcimskich 17

50-069 Wrocław, Poland

©2009 - 2025 Useo sp. z o.o.
