Most SimpleCov tutorials stop at SimpleCov.start 'rails' and a screenshot of the HTML report. That is the easy part. The hard part is making coverage data trustworthy inside a CI pipeline where tests run in parallel, containers get recycled, and flaky specs silently corrupt your numbers.

This guide covers what the official docs skip: complete GitHub Actions and CircleCI configs, coverage gating that actually blocks merges, and strategies for dealing with flaky tests that make your reports lie.

Why does SimpleCov break in CI but work locally?

Locally, every spec runs in a single process. SimpleCov hooks into Ruby’s Coverage module at boot, watches every line, and writes one clean .resultset.json at exit.

CI is different. You often have:

  • Parallel workers splitting specs across multiple containers
  • Docker layers that cache gems but not coverage state
  • Spring preloaders that load application code before SimpleCov starts tracking

The most common failure: SimpleCov is required after Rails boots. In your spec_helper.rb, the require and the SimpleCov.start call must come before everything else:

require 'simplecov'
SimpleCov.start 'rails' do
  enable_coverage :branch
  minimum_coverage line: 85, branch: 70
  add_filter 'app/views'
  add_filter 'vendor'
  add_filter '/config/'
  add_group 'Services', 'app/services'
  add_group 'Serializers', 'app/serializers'
end

If even a single require of your app code comes before SimpleCov.start, coverage for that file reads as 0% and drags your total down. In CI, where boot order can differ from your local machine, this is the first thing to check when numbers look wrong.
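The failure mode looks like this (a hypothetical spec_helper.rb shown as an anti-pattern, not something to copy):

```ruby
# WRONG ORDER — for illustration only.
# Everything required here loads before coverage tracking begins:
require_relative '../config/environment'  # app code loads now...

require 'simplecov'
SimpleCov.start 'rails'  # ...too late: files loaded above report 0% coverage
```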

How to configure SimpleCov in GitHub Actions

A working workflow needs three things: run specs with coverage enabled, upload the coverage artifact, and fail the build if coverage drops below threshold.

Here is a complete .github/workflows/ci.yml:

name: CI

on:
  pull_request:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: postgres
        ports: ['5432:5432']
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    env:
      RAILS_ENV: test
      DATABASE_URL: postgres://postgres:postgres@localhost:5432/app_test

    steps:
      - uses: actions/checkout@v4

      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true

      - name: Setup database
        run: bin/rails db:create db:schema:load

      - name: Run tests with coverage
        run: bundle exec rspec

      - name: Upload coverage artifact
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: coverage/

      - name: Check coverage threshold
        run: |
          COVERAGE=$(ruby -e "
            require 'json'
            data = JSON.parse(File.read('coverage/.last_run.json'))
            puts data['result']['line'].to_f.round(2)
          ")
          echo "Line coverage: ${COVERAGE}%"
          if (( $(echo "$COVERAGE < 85" | bc -l) )); then
            echo "::error::Coverage ${COVERAGE}% is below the 85% threshold"
            exit 1
          fi

The key detail: SimpleCov writes .last_run.json after every run. The final step parses that file and fails the build if line coverage drops below 85%. This is more reliable than relying solely on minimum_coverage in the SimpleCov config, because it gives you a clear error message in the GitHub Actions log with the exact percentage.

What about branch protection rules?

Add a branch protection rule on main that requires the test job to pass. Now no PR can merge if coverage drops. This is the real enforcement — minimum_coverage in SimpleCov exits with a non-zero code too, but the explicit check step makes failures visible in the PR status without digging into test output.

How to set up SimpleCov in CircleCI

CircleCI has native support for storing artifacts and test results. Here is a .circleci/config.yml:

version: 2.1

orbs:
  ruby: circleci/ruby@2.1

jobs:
  test:
    docker:
      - image: cimg/ruby:3.3-node
      - image: cimg/postgres:16.0
        environment:
          POSTGRES_USER: circleci
          POSTGRES_DB: app_test
          POSTGRES_PASSWORD: ''

    environment:
      RAILS_ENV: test
      DATABASE_URL: postgres://circleci@localhost:5432/app_test
      COVERAGE: true

    steps:
      - checkout
      - ruby/install-deps
      - run:
          name: Setup database
          command: bin/rails db:create db:schema:load
      - run:
          name: Run tests
          command: bundle exec rspec
      - store_artifacts:
          path: coverage
          destination: coverage
      - run:
          name: Enforce coverage threshold
          command: |
            COVERAGE=$(ruby -e "
              require 'json'
              data = JSON.parse(File.read('coverage/.last_run.json'))
              puts data['result']['line'].to_f.round(2)
            ")
            echo "Line coverage: ${COVERAGE}%"
            if [ $(echo "$COVERAGE < 85" | bc -l) -eq 1 ]; then
              echo "Coverage ${COVERAGE}% is below 85% threshold"
              exit 1
            fi

workflows:
  build-and-test:
    jobs:
      - test

The store_artifacts step makes the full HTML report browsable directly in CircleCI’s UI. Your team can click through to see exactly which lines are uncovered without downloading anything.

Why SimpleCov reports lie about flaky tests

A flaky test is one that sometimes passes and sometimes fails without any code change. Here is how flaky tests corrupt coverage data:

  1. Test A covers lines 10-50 of the Order model. Test A is flaky and fails on this run.
  2. SimpleCov records those lines as not executed because the test errored out before reaching them.
  3. Your coverage report says Order model has 40% coverage instead of 90%.
  4. Next run, the flaky test passes, and coverage jumps back to 90%.

This makes coverage trends useless. You cannot tell whether a real coverage regression happened or whether a flaky spec just had a bad day.

How to detect coverage corruption from flaky tests

Compare coverage results across your last 5-10 CI runs. If coverage for a specific file fluctuates by more than 5% between runs with no code changes, you likely have a flaky test covering that file.

Add this to your CI pipeline to catch it:

# spec/support/coverage_stability.rb
require 'json'
require 'fileutils'

RSpec.configure do |config|
  config.after(:suite) do
    if ENV['CI'] && File.exist?('coverage/.last_run.json')
      current = JSON.parse(File.read('coverage/.last_run.json'))
      current_coverage = current['result']['line']

      # tmp/ must be cached across CI runs (actions/cache, CircleCI
      # save_cache/restore_cache) or the comparison has no baseline.
      previous_file = 'tmp/previous_coverage.json'
      if File.exist?(previous_file)
        previous = JSON.parse(File.read(previous_file))
        previous_coverage = previous['result']['line']
        delta = (current_coverage - previous_coverage).abs

        if delta > 3.0
          warn "[COVERAGE WARNING] Coverage changed by #{delta.round(2)}% " \
               "(#{previous_coverage}% -> #{current_coverage}%). " \
               "Possible flaky test corruption."
        end
      end

      # Save this run's result as the baseline for the next run.
      FileUtils.mkdir_p('tmp')
      File.write(previous_file, JSON.generate(current))
    end
  end
end

How to fix coverage merging with parallel workers

When you split tests across parallel CI workers (using parallel_tests, Knapsack, or CircleCI’s test splitting), each worker generates its own .resultset.json. You need to merge them.

SimpleCov has built-in collation support:

# spec/support/simplecov_setup.rb
require 'simplecov'

SimpleCov.start 'rails' do
  enable_coverage :branch
  minimum_coverage line: 85, branch: 70

  if ENV['CI']
    command_name "worker-#{ENV['CIRCLE_NODE_INDEX'] || ENV['CI_NODE_INDEX'] || 0}"
  end

  add_filter 'app/views'
  add_filter 'vendor'
end

After all workers finish, add a merge step:

# GitHub Actions example - merge step
- name: Download all coverage artifacts
  uses: actions/download-artifact@v4
  with:
    pattern: coverage-worker-*
    path: tmp/coverage/

- name: Merge coverage results
  run: |
    ruby -e "
      require 'simplecov'
      SimpleCov.collate Dir['tmp/coverage/*/.resultset.json'] do
        minimum_coverage line: 85, branch: 70
      end
    "

Without this merge step, each worker only knows about the specs it ran. Worker 1 might show 45% coverage and worker 2 shows 50%, but the merged result is actually 92%. Failing builds on unmerged per-worker coverage is a common source of spurious CI failures.
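For the download pattern above to match, each worker's job must upload its coverage directory under a distinct `coverage-worker-*` name. A sketch, assuming a matrix strategy with a hypothetical `ci_node` variable:

```yaml
# Hypothetical per-worker upload step in the test job,
# assuming strategy.matrix.ci_node values like [0, 1, 2, 3]
- name: Upload worker coverage
  if: always()
  uses: actions/upload-artifact@v4
  with:
    name: coverage-worker-${{ matrix.ci_node }}
    path: coverage/
```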


What coverage threshold should you actually enforce?

The common advice is “aim for 80%.” That number is arbitrary. The right threshold depends on the project.

For greenfield apps: Start at 90% line coverage and 75% branch coverage. New code has no excuse for being untested.

For legacy monoliths: Start wherever you are now and ratchet up. If current coverage is 52%, set the threshold to 52% and increase by 2-3% per sprint. A threshold you cannot meet today just gets ignored or disabled.

For branch coverage: Always set it lower than line coverage. Branch coverage requires testing both sides of every conditional, which is significantly harder. A 15-20% gap between line and branch thresholds is realistic.

SimpleCov.start 'rails' do
  # Ratchet pattern: read current minimum from a file
  if File.exist?('.coverage_threshold')
    threshold = File.read('.coverage_threshold').strip.to_f
    minimum_coverage line: threshold
  else
    minimum_coverage line: 85
  end
end

The ratchet pattern stores the current threshold in a committed file. Every time someone improves coverage, they update the file. Coverage can never go down.

Why we chose this approach at USEO

We learned most of these lessons the hard way on client projects. On the Yousty engagement, a Swiss HR portal where we have been the development partner for over 12 years, we enforced 85% line coverage gates in CI. The platform had grown to 120+ models across two interconnected portals (Yousty.ch and Professional.ch), and uncovered code paths in the apprenticeship matching logic caused production incidents that affected real students looking for positions.

The parallel test problem hit us hard there. The test suite took over 20 minutes in a single process, so we split it across 4 CircleCI workers. For weeks, coverage reports fluctuated between 78% and 91% on the same codebase with no code changes. The root cause was a combination of flaky integration tests (Capybara timeouts on slower CI machines) and unmerged parallel results.

Our fix was threefold:

  1. Quarantine flaky tests into a separate RSpec tag (:flaky) and run them in a dedicated, non-parallel job
  2. Merge coverage from all parallel workers before evaluating thresholds
  3. Track coverage trends across runs and alert on fluctuations greater than 3%
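The quarantine in step 1 uses standard RSpec tag filtering. A minimal sketch (the :flaky tag and RUN_FLAKY variable are our naming conventions, not RSpec defaults):

```ruby
# spec/spec_helper.rb — exclude quarantined examples from the main suite
RSpec.configure do |config|
  config.filter_run_excluding :flaky unless ENV['RUN_FLAKY']
end

# Tag a known-flaky example:
#   it 'matches applicants to positions', :flaky do ... end
#
# The dedicated non-parallel CI job runs only the quarantined specs:
#   RUN_FLAKY=1 bundle exec rspec --tag flaky
```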

After stabilizing the pipeline, coverage data became trustworthy. The team could see actual regressions instead of noise, and we caught two critical gaps in payment webhook handlers that had been masked by the fluctuations.

What should you filter out of coverage reports?

Not every file matters equally. Including everything inflates your denominator and hides real gaps.

SimpleCov.start 'rails' do
  add_filter 'app/views'          # Tested via integration specs
  add_filter 'app/channels'       # Action Cable, often unused
  add_filter 'app/mailer_previews'
  add_filter '/config/'
  add_filter '/db/'
  add_filter 'vendor'
  add_filter %r{\.rake$}

  # Group what remains for readability
  add_group 'Models',       'app/models'
  add_group 'Controllers',  'app/controllers'
  add_group 'Services',     'app/services'
  add_group 'Jobs',         'app/jobs'
  add_group 'Serializers',  'app/serializers'
end

A common mistake in larger Rails apps: including app/views in coverage. ERB and Haml templates generate Ruby code that SimpleCov tracks, but covering every <% if %> branch in a view template is low-value work. Your integration and system specs already exercise views indirectly. Filter them out and focus coverage enforcement on models, services, and business logic.

How to make coverage visible to the whole team

Coverage data locked inside CI artifacts does not change behavior. Make it visible:

PR comments with coverage diff: Use the simplecov-json formatter and a CI step that posts coverage changes as a PR comment:

# Gemfile
gem 'simplecov-json', require: false, group: :test

# spec/support/simplecov_setup.rb
require 'simplecov'
require 'simplecov-json'

SimpleCov.formatters = SimpleCov::Formatter::MultiFormatter.new([
  SimpleCov::Formatter::HTMLFormatter,
  SimpleCov::Formatter::JSONFormatter
])

SimpleCov.start 'rails' do
  minimum_coverage line: 85, branch: 70
  add_filter 'app/views'
  add_filter 'vendor'
end

Coverage badges in README: Pull the percentage from .last_run.json in CI and push it to a badge service like Shields.io. This makes current coverage visible without opening CI logs.

Slack/Teams notifications on drops: Add a CI step that compares coverage against the previous run and posts to your team channel if it drops by more than 1%.

FAQs

How do you prevent SimpleCov from slowing down the test suite?

SimpleCov adds 2-5% overhead to test execution. If your suite is already slow, that feels significant. The solution is not to disable coverage but to run it selectively. Use an environment variable (COVERAGE=true bundle exec rspec) and only enable it on CI and when you explicitly want local reports. Do not run SimpleCov during TDD red-green-refactor cycles where fast feedback matters more than coverage numbers.
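A minimal sketch of the opt-in gate (the COVERAGE variable name is a convention, not a SimpleCov feature):

```ruby
# spec/spec_helper.rb — coverage runs only when explicitly requested or on CI
if ENV['COVERAGE'] || ENV['CI']
  require 'simplecov'
  SimpleCov.start 'rails'
end
```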

Does branch coverage matter if you already have high line coverage?

Yes. You can have 100% line coverage and still miss entire code paths. Consider an if/else where your test only exercises the if branch. Line coverage counts the lines inside else as uncovered, but if the else block is a single line, missing it barely affects your line percentage. Branch coverage catches this because it tracks decision points, not just lines.
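A hypothetical illustration (the names are invented): a spec suite that only ever ships express orders executes every line of this method, yet never exercises the non-express branch.

```ruby
# Hypothetical example: 100% line coverage hiding an untested branch.
Order = Struct.new(:express, :weight) do
  def express?
    express
  end
end

def shipping_cost(order)
  # Both ternary outcomes live on one line. A spec that only passes
  # express orders reports 100% line coverage here, while the `5`
  # branch has never run. Branch coverage flags it; line coverage cannot.
  cost = order.express? ? 15 : 5
  cost + order.weight
end
```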

How do you handle SimpleCov with Spring in development?

Spring preloads your Rails application, which means code loads before SimpleCov starts. In development this rarely matters since you are looking at HTML reports after a full test run. But if you use Spring in CI (which you should not), SimpleCov will undercount. Disable Spring in CI by setting DISABLE_SPRING=1 in your environment. This also makes CI runs more deterministic.