GitHub Actions Cache Strategy: Speed Up CI/CD Pipeline 5x

npm install: 3 minutes and 15 seconds.

That was the CI build time for a project I inherited last year. Every time I pushed code, I’d stare at GitHub Actions logs, waiting for that green checkmark. Honestly, I spent a lot of that time doom-scrolling Twitter—what else could I do?

Then I added caching. Same build? 40 seconds. Nearly 5x faster.

This isn’t magic. It’s just getting GitHub Actions cache strategy right. Today, I’ll share the pitfalls I hit, the data I tested, and configuration templates you can copy-paste. If you’re waiting on CI builds, this might save you some coffee breaks.

1. Core Concepts of Cache Mechanism

Before configuring, let’s understand how caching works—otherwise you’ll trip over the same issues I did.

GitHub Actions caching is straightforward: find → restore → save. You define a key, GitHub searches all caches for a match. Found it? It restores directly to your workspace. Not found? After the job completes, it saves a new cache.
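To make the flow concrete, here's a minimal workflow sketch (job layout and step order are illustrative):

```yaml
name: CI
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # 1. Find: GitHub looks up a cache matching this key.
      # 2. Restore: on a hit, ~/.npm is restored before the next steps run.
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}

      - run: npm ci

      # 3. Save: if the key was not found, an automatic "Post" step
      #    saves ~/.npm under that key after the job completes.
```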

But there are hard limits you should know:

| Limit | Value |
| --- | --- |
| Cache limit per repo | 10 GB |
| Single cache file limit | 5 GB (realistically, issues start above 1 GB) |
| Cache retention | Deleted after 7 days without access |
| Concurrent upload limit | Max 5 caches uploading simultaneously |

I’ve seen someone hit the 10GB wall—too many dependencies, caches piling up, then new caches get rejected, old ones get evicted, and every build is a “cold start.”

Another thing people confuse: Cache vs Artifact. Cache is for CI, optimized for speed. Artifacts are for humans—build outputs, test reports, things you want to keep long-term. Cache has a 10GB limit; Artifacts have no hard limit (but they eat into your repo storage).

There’s also Docker Layer Cache, specifically for Docker builds. The logic differs from regular caching—I’ll cover that separately.

2. Cache Key Design Strategy

Cache hit or miss entirely depends on your key design. This is the core of your caching strategy.

What is hashFiles()?

GitHub provides a built-in function hashFiles() that computes file hashes. Commonly used on package-lock.json or yarn.lock—if dependencies don’t change, the hash doesn’t change, and the cache hits.

```yaml
key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
```

This generates a key like npm-Linux-a1b2c3d4e5f6.... As long as package-lock.json stays the same, the key stays the same.

restore-keys: The Fallback

But dependencies do update. That’s where restore-keys comes in—a “fallback matching” mechanism:

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      npm-${{ runner.os }}-
```

First, match the full key. No match? Fall back to old caches starting with npm-Linux-. Not a perfect hit, but at least most packages in node_modules are there—you only need to install the new dependencies incrementally.

Three Key Naming Patterns Compared

Based on my testing, here are three recommended patterns:

Simple Pattern (good for small projects):

```yaml
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
```

Version Pattern (for multiple Node versions):

```yaml
key: ${{ runner.os }}-node${{ matrix.node-version }}-${{ hashFiles('**/package-lock.json') }}
```

Multi-path Pattern (for monorepos):

```yaml
key: ${{ runner.os }}-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}
```

How to Check If Cache Hit

actions/cache outputs a cache-hit variable:

```yaml
- uses: actions/cache@v4
  id: cache-npm
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}

- name: Check cache hit
  run: echo "Cache hit - ${{ steps.cache-npm.outputs.cache-hit }}"
```

true means an exact hit; anything else means a partial hit or a complete miss. You can use this to skip installation entirely, but be careful: skipping is only safe when the cached path is node_modules itself. If you cached ~/.npm (the download cache), you still need npm ci on every run to populate node_modules. For a node_modules cache, the conditional looks like this:

```yaml
- name: Install dependencies
  if: steps.cache-npm.outputs.cache-hit != 'true'
  run: npm ci
```

3. Practical Configuration Examples

Theory done. Let’s look at code. These configurations are tested and ready to copy-paste.

npm Caching (Prefer setup-node)

setup-node actually has built-in caching—it’s cleaner than manually using actions/cache:

```yaml
- uses: actions/setup-node@v4
  with:
    node-version: '20'
    cache: 'npm'  # or 'yarn', 'pnpm'
```

One line. Done. But if you want to cache other directories (like node_modules), you’ll still need actions/cache:

```yaml
- uses: actions/cache@v4
  with:
    path: node_modules
    key: ${{ runner.os }}-nm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: ${{ runner.os }}-nm-
```

My recommendation: Use setup-node’s built-in cache unless you have special requirements.

yarn and pnpm

yarn’s cache directory differs from npm:

```yaml
- uses: actions/cache@v4
  with:
    path: |
      ~/.yarn/cache
      ~/.yarn/install-state.gz
    key: yarn-${{ runner.os }}-${{ hashFiles('**/yarn.lock') }}
```

pnpm is special—it uses a global store:

```yaml
- uses: pnpm/action-setup@v4
  with:
    version: 9

- uses: actions/cache@v4
  with:
    path: ~/.pnpm-store
    key: pnpm-${{ runner.os }}-${{ hashFiles('**/pnpm-lock.yaml') }}
```

Python/pip Caching

Python project cache paths:

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: pip-${{ runner.os }}-${{ hashFiles('**/requirements.txt') }}
    restore-keys: pip-${{ runner.os }}-
```

Docker Layer Cache

Docker builds are the biggest time sink. The good news: BuildKit supports GitHub Actions cache backend:

```yaml
- uses: docker/setup-buildx-action@v3

- uses: docker/build-push-action@v6
  with:
    context: .
    push: false
    cache-from: type=gha
    cache-to: type=gha,mode=max
```

type=gha tells it to use GitHub Actions cache service for Docker layers. In my testing, a 5-minute image build dropped to about 1 minute.

Go Module Cache

```yaml
- uses: actions/cache@v4
  with:
    path: |
      ~/go/pkg/mod
      ~/.cache/go-build
    key: go-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
```

Rust Cargo Cache

```yaml
- uses: actions/cache@v4
  with:
    path: |
      ~/.cargo/registry
      ~/.cargo/git
      target
    key: cargo-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
```

Rust compiles slowly—caching saves massive time. But watch out for target directory growing indefinitely—I recommend periodic cleanup.

4. Performance Optimization and Best Practices

Here’s what I’ve learned from testing and from hitting pitfalls—hopefully it saves you some headaches.

Performance Benchmarks

According to RunsOn’s test report (updated January 2026), with proper caching:

| Operation | Without Cache | With Cache | Speedup |
| --- | --- | --- | --- |
| npm install | 3 min | 40 sec | ~5x |
| yarn install | 2 min 30 sec | 35 sec | ~4x |
| Docker build | 5 min | 1 min | ~5x |
| pip install | 45 sec | 8 sec | ~5x |

Cache hit rates typically land between 70% and 90%, depending on how well you design your key strategy.

Common Pitfalls

Don’t Cache node_modules Directly

I made this mistake at first. Big mistake.

```yaml
# Don't do this
path: node_modules
```

node_modules is platform-specific—packages installed on Linux might have issues running on Windows. The right approach: cache the global cache directory (~/.npm), let npm ci assemble it.
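A sketch of that safer setup: cache npm's global download cache and let npm ci rebuild node_modules from it on every run:

```yaml
- uses: actions/cache@v4
  with:
    # Cache npm's download cache, not node_modules itself
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      npm-${{ runner.os }}-

# Always run npm ci: it assembles node_modules for the current OS,
# pulling packages from ~/.npm instead of the network on a cache hit.
- run: npm ci
```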

Cross-OS Caching Needs GNU tar + zstd

Default tar formats differ on macOS and Windows, causing cache restore failures. Add this config:

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    enableCrossOsArchive: true
```

Cache Pollution Issues

Sometimes your cache stores broken dependencies, causing builds to fail repeatedly. Solutions:

  1. Manual deletion: go to your GitHub repo → Actions → Caches page and delete the entry
  2. Force a key update: add a version prefix to the key so a fresh cache is generated:

```yaml
key: npm-v2-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
```

Best Practices Checklist

Before you configure, check these points:

  1. Prefer built-in caching in official actions (setup-node, setup-python)
  2. Include hashFiles in your key, otherwise dependencies update but you keep using old cache
  3. Add restore-keys—fallback matching can save you
  4. Don’t cache node_modules, cache global directories
  5. Periodically clean old caches to avoid hitting the 10GB limit
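Point 5 can be automated with a scheduled workflow. Here's a sketch using the gh CLI's cache subcommands; deleting all caches weekly is blunt but simple, since the next builds repopulate them (the schedule is illustrative):

```yaml
name: Clear caches weekly
on:
  schedule:
    - cron: '0 3 * * 0'  # Sunday 03:00 UTC

jobs:
  cleanup:
    runs-on: ubuntu-latest
    permissions:
      actions: write  # required to delete caches
    steps:
      - name: Delete all caches for this repo
        env:
          GH_TOKEN: ${{ github.token }}
          GH_REPO: ${{ github.repository }}
        run: gh cache delete --all
```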

5. Common Questions Answered

Q1: Why is my cache hit rate so low?

The most common reason: key changes too frequently. Maybe you added a timestamp or branch name—every push generates a new key. Solution: only use runner.os and hashFiles, remove unnecessary variables.

Another reason: hashFiles matched files it shouldn’t. If you wrote hashFiles('**/*.json'), changing any config file invalidates the cache. Fix it to only match package-lock.json or yarn.lock.
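Side by side, the difference looks like this (paths are illustrative):

```yaml
# Too broad: changing any JSON file invalidates the cache
key: npm-${{ runner.os }}-${{ hashFiles('**/*.json') }}

# Scoped to the lock file: only dependency changes invalidate it
key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
```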

Q2: What happens when cache exceeds 10GB?

10GB seems like a lot, but monorepos or Docker caches blow past it easily. Solutions:

  1. Periodic cleanup: GitHub Actions → Caches, manually delete old ones
  2. Split caches: use different keys for different dependencies, avoid one cache for everything
  3. Use self-hosted runners: keep dependencies on the runner's persistent disk and sidestep the hosted cache (and its 10GB limit) entirely
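Splitting caches (point 2) just means giving each dependency type its own step and key, for example:

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}

- uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: pip-${{ runner.os }}-${{ hashFiles('**/requirements.txt') }}
```

Smaller, independent caches get evicted and rebuilt individually instead of one giant entry churning on every change.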

Q3: Do self-hosted runners need special configuration?

No special config is needed if you keep using actions/cache, but note that it still stores caches in GitHub's hosted cache service, so network transfer and the 10 GB limit apply even on self-hosted runners. The real advantage is the runner's persistent disk: dependencies kept locally between jobs skip the cache service entirely, which makes restores effectively instant. The downside: nothing on that disk auto-cleans, so you'll need to write a script for that.
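A minimal sketch of such a cleanup script, assuming the runner keeps its local caches under some directory you choose (the 7-day threshold is illustrative):

```shell
#!/usr/bin/env sh
# prune_cache DIR: delete files under DIR not accessed in over 7 days,
# then remove any directories left empty by the prune.
prune_cache() {
  dir="$1"
  # -atime +7: last accessed more than 7 days ago
  find "$dir" -type f -atime +7 -delete
  find "$dir" -mindepth 1 -type d -empty -delete
}
```

Run it from cron on the runner host, e.g. `prune_cache "$HOME/.cache/ci"` (path illustrative).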

Q4: How to force cache update?

Change the key. Add a version prefix:

```yaml
key: npm-v3-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
```

Or just delete the old cache and let the system regenerate.

Conclusion

All that said, it boils down to one thing: use caching right, CI gets 5x faster.

Let me do the math: say you save 2 minutes per build at 10 builds a day. That's 600 minutes a month, a solid 10 hours. Enough time to write quite a few articles.

If you’re new to GitHub Actions, start with setup-node’s built-in cache—one line does the job. When you hit bottlenecks, come back and explore more complex key strategies and Docker Layer Cache.

By the way, this is the 3rd article in the GitHub Actions guide series. I’ve previously covered CI pipeline setup and deployment strategies—check out the archives if you’re interested.

Next time you push code, check your build time. See if you can go from 3 minutes to 40 seconds. Give it a try.

Configure GitHub Actions Cache to Accelerate CI/CD

Configure GitHub Actions caching to reduce npm install build time from 3 minutes to 40 seconds

⏱️ Estimated time: 10 min

  1. Choose a cache strategy

    Select a cache strategy based on your package manager:

    • npm projects: prefer setup-node's built-in cache
    • yarn/pnpm projects: configure the cache paths
    • Docker builds: use BuildKit's gha backend
  2. Design the cache key

    Use hashFiles() to generate stable keys based on lock files:

    • Basic pattern: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    • Add restore-keys for fallback matching
    • Avoid timestamps or branch names in keys
  3. Add the cache configuration

    Add cache steps to your workflow file:

    • npm: use actions/setup-node@v4 with cache: 'npm'
    • Custom paths: use actions/cache@v4
    • Docker: set cache-from and cache-to
  4. Verify cache effectiveness

    Check whether the cache hits:

    • View the cache-hit output variable (true means an exact hit)
    • Compare build times (should drop 4-5x)
    • Check the Actions → Caches page to confirm the cache is stored
  5. Maintain the cache regularly

    Avoid cache issues:

    • Monitor cache space usage (10 GB limit)
    • Periodically clean old caches
    • On pollution, update the key prefix to force a rebuild

FAQ

Why is my cache hit rate only 30%?
Usually a key design issue. Check if you included frequently-changing variables in the key (like timestamps or branch names)—switch to only using runner.os and hashFiles. Also verify hashFiles path matches the lock file exactly, avoid wildcards that match too many files.
What happens when cache exceeds 10GB?
GitHub automatically evicts oldest caches to make room. Recommendations:

• Cache different dependency types separately (npm, Docker, pip with separate keys)
• Regularly manually delete unused caches on Actions → Caches page
• For monorepos, consider splitting repos or using self-hosted runners
Can different branches share cache?
By default, cache is only shared between current branch and default branch (main/master). If you want cross-branch sharing, remove branch name from key, use only file-based hash. Also, restore-keys can help match caches from other branches.
What's different about self-hosted runner cache?
The mechanism itself is the same: actions/cache on a self-hosted runner still talks to GitHub's hosted cache service, so the 10 GB limit applies there too. The practical difference is the runner's own disk, which persists between jobs: dependencies kept locally skip cache upload and download entirely. The catch is that nothing on that disk auto-cleans, so you need scripts to periodically remove old files.
Will cache restore failure break the build?
No. Cache is optional optimization—if restore fails, build continues. GitHub Actions proceeds with subsequent steps, just redownloads dependencies this time. You'll see 'Cache not found for key: xxx' in logs, then it automatically saves new cache for next run.
How do I know when cache needs updating?
Three scenarios require cache updates:

• Dependency version changes: hashFiles handles automatically, no manual intervention
• Cache pollution: build suddenly fails, need to clear old cache
• Config changes: like Node version upgrade, need to add version number to key

In most cases, with correct configuration, no manual management needed.

8 min read · Published on: Apr 7, 2026 · Modified on: Apr 8, 2026
