Next.js E2E Testing: Playwright Automation Testing Guide

3 AM, staring at that bright red “URGENT” label on the bug ticket. Third time refreshing the same page in production. The payment flow broke again. It worked fine in the test environment—why is it failing in production?

Thinking back to last week’s release: manually clicking through 30+ pages, filling out a dozen forms, switching back and forth between three browsers—and I still missed a button that only appears when you scroll to the bottom on mobile. A WeChat message from the product manager pops up: “Users are saying the coupon code doesn’t work.” My stomach starts to hurt.

That night, I made a decision: we need E2E testing. Can’t keep doing manual testing like this. It’s going to burn me out and drag down the whole team.

So I got to work. When choosing tools, I looked at Cypress, Selenium, Puppeteer, Playwright… ended up picking Playwright, mainly because it supports multi-browser testing and the configuration is way simpler than Cypress. In the first week after installing Playwright, it caught five bugs I never found through manual testing—some were styling issues that only appeared in Firefox, others were race conditions from async API calls.

This article is about my six months of experience using Playwright in Next.js projects. To be honest, I stepped on quite a few landmines at the beginning. Rewrote the config file several times, redid test cases twice. But now the whole process runs smoothly—CI/CD is fully automated, tests run on every commit, and production bugs have been cut in half.

Why Choose Playwright (vs Cypress)

Anyone who’s used Cypress knows it’s simple to configure, has friendly documentation, and an active community. So why didn’t I choose it?

Three main issues:

Multi-browser support is too weak. Cypress’s support for Firefox and Safari has always been half-dead—tests mainly run on Chromium. Sounds like no big deal, until I stepped into a huge trap—the payment page ran perfectly on Chrome but white-screened on Safari because I used a CSS property Safari doesn’t support. Playwright natively supports Chromium, Firefox, and WebKit—one test suite covers all major browsers.

Test speed. Playwright’s parallel capabilities are much stronger. Cypress runs tests serially—50 test cases can take over ten minutes; Playwright with 8 workers running in parallel finishes the same cases in five minutes. In CI/CD, every minute costs money—this difference is pretty significant.

API design. Honestly, when I first switched from Cypress to Playwright, I was a bit uncomfortable—Cypress’s chaining syntax feels nice to write. But after using Playwright’s async/await for a while, I found this approach actually aligns better with modern JavaScript and is more consistent with Next.js Server Components style.
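
To make the difference concrete, here's the same click in both styles (minimal snippets, not from a real suite):

// Cypress: implicit command chaining, no await
cy.get('button[type="submit"]').click();

// Playwright: explicit async/await
await page.locator('button[type="submit"]').click();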

To be fair, Cypress isn’t bad. If your project only needs to test Chrome and your team isn’t very familiar with testing, Cypress is indeed easier to get started with. Its debugging tools are great—the Time Travel feature lets you see every step of the test, very beginner-friendly.

But for my needs, Playwright is a better fit:

  • Need cross-browser testing
  • Already have Next.js/React experience, familiar with async/await
  • CI environment requires fast feedback
  • Want to test API routes and SSR pages

Choosing tools isn’t about absolute good or bad—it depends on the scenario. My advice: if the project is still small and team members have little testing experience, start with Cypress to get up and running quickly; if you already have a certain scale and want to invest in automation testing long-term, Playwright will be the better choice.

Next.js + Playwright Configuration in Practice

Installing Playwright is super simple—one command and you're done:

npm init playwright@latest
# or use pnpm
pnpm create playwright

The installation process will ask you a few questions. Recommended choices:

  • TypeScript? Yes (highly recommended—type hints will help you avoid pitfalls)
  • Test directory? tests (default is fine)
  • GitHub Actions? Yes (will be used for CI/CD later)

After installation, your project will have these new files:

your-nextjs-project/
├── tests/               # Test cases directory
│   └── example.spec.ts
├── playwright.config.ts # Playwright configuration
└── .github/
    └── workflows/
        └── playwright.yml  # CI configuration

Configuration File Pitfalls

The initial playwright.config.ts is for regular web projects—Next.js projects need some adjustments. This is the most stable configuration I’ve found after six months:

import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  // Test directory
  testDir: './tests',

  // Global timeout: 30 seconds per test
  timeout: 30 * 1000,

  // Global expect timeout: 5 seconds for element lookup
  expect: {
    timeout: 5000,
  },

  // Retry count on failure (recommended to enable in CI)
  retries: process.env.CI ? 2 : 0,

  // Number of parallel workers (my machine has 8 cores, so I set 4)
  workers: process.env.CI ? 2 : 4,

  // Test reporter
  reporter: [
    ['html'],                               // Generate HTML report
    process.env.CI ? ['github'] : ['list'], // GitHub annotations in CI, list output locally
  ],

  // Start Next.js dev server
  webServer: {
    command: 'npm run dev',
    port: 3000,
    timeout: 120 * 1000,         // Next.js first start may need compilation time
    reuseExistingServer: !process.env.CI, // Reuse server locally to save time
  },

  // Test projects (multi-browser configuration)
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'firefox',
      use: { ...devices['Desktop Firefox'] },
    },
    {
      name: 'webkit',
      use: { ...devices['Desktop Safari'] },
    },
    // Mobile testing (optional)
    {
      name: 'Mobile Chrome',
      use: { ...devices['Pixel 5'] },
    },
  ],

  // Global configuration
  use: {
    baseURL: 'http://localhost:3000',
    trace: 'on-first-retry',      // Record trace on failure for debugging
    screenshot: 'only-on-failure', // Screenshot on failure
    video: 'retain-on-failure',   // Record video on failure
  },
});

Easy Pitfalls to Step Into

  1. webServer.timeout must be long enough. I initially set 30 seconds, but Next.js cold start needs compilation and often timed out. Now I set 120 seconds—stable.

  2. Remember to set reuseExistingServer: true for local development. Otherwise, every test run will restart Next.js—drives you crazy waiting.

  3. Don’t set workers count too high. I used to set it to CPU cores and the computer would freeze during tests. Now I set it to half the cores—fast and stable.

  4. Mobile testing is optional. If your Next.js project is responsive, adding Mobile Chrome testing can catch some mobile-specific bugs. But it doubles test time—decide based on your needs.
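
One related time-saver: Playwright's --project flag lets you run a single browser while iterating and the full matrix before you push:

# Run only the chromium project while developing
npx playwright test --project=chromium

# Run the full browser matrix
npx playwright test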

After the configuration is set, run the official example test:

npx playwright test

If you see green passes, the environment is good. Now you can write real test cases.

Page Interaction Testing Best Practices (Page Object Model)

When I first started writing tests, I crammed everything into one file. To test a single login page I wrote over 100 lines, with page.locator, page.fill, and page.click calls everywhere. Later, when I needed to change one button selector, I had to modify a dozen test files—total breakdown.

Later I learned about Page Object Model (POM) and the code became so much cleaner. Simply put, you encapsulate page operations into a class, and test cases just call methods without directly operating elements.

Tests Without POM (Bad Example)

// tests/login.spec.ts
import { test, expect } from '@playwright/test';

test('User login', async ({ page }) => {
  await page.goto('/login');

  // Directly operating elements, code duplication
  await page.locator('input[name="email"]').fill('[email protected]');
  await page.locator('input[name="password"]').fill('password123');
  await page.locator('button[type="submit"]').click();

  await expect(page.locator('h1')).toContainText('Dashboard');
});

test('Login failure message', async ({ page }) => {
  await page.goto('/login');

  // Same operations all over again...
  await page.locator('input[name="email"]').fill('[email protected]');
  await page.locator('input[name="password"]').fill('wrongpass');
  await page.locator('button[type="submit"]').click();

  await expect(page.locator('.error')).toBeVisible();
});

See the problem? If someone changes input[name="email"] to input[id="email"], every one of these tests has to be modified.

Now refactor with POM. First, create the Page Object:

// tests/pages/LoginPage.ts
import { Page, Locator, expect } from '@playwright/test';

export class LoginPage {
  readonly page: Page;
  readonly emailInput: Locator;
  readonly passwordInput: Locator;
  readonly submitButton: Locator;
  readonly errorMessage: Locator;
  readonly dashboardTitle: Locator;

  constructor(page: Page) {
    this.page = page;
    this.emailInput = page.locator('input[name="email"]');
    this.passwordInput = page.locator('input[name="password"]');
    this.submitButton = page.locator('button[type="submit"]');
    this.errorMessage = page.locator('.error');
    this.dashboardTitle = page.locator('h1');
  }

  // Encapsulate login action
  async login(email: string, password: string) {
    await this.emailInput.fill(email);
    await this.passwordInput.fill(password);
    await this.submitButton.click();
  }

  // Encapsulate navigation action
  async goto() {
    await this.page.goto('/login');
  }

  // Encapsulate validation logic
  async expectLoginSuccess() {
    await this.dashboardTitle.waitFor();
    await expect(this.dashboardTitle).toContainText('Dashboard');
  }

  async expectLoginError() {
    await expect(this.errorMessage).toBeVisible();
  }
}

Test cases become super concise:

// tests/login.spec.ts
import { test } from '@playwright/test';
import { LoginPage } from './pages/LoginPage';

test('User login', async ({ page }) => {
  const loginPage = new LoginPage(page);

  await loginPage.goto();
  await loginPage.login('[email protected]', 'password123');
  await loginPage.expectLoginSuccess();
});

test('Login failure message', async ({ page }) => {
  const loginPage = new LoginPage(page);

  await loginPage.goto();
  await loginPage.login('[email protected]', 'wrongpass');
  await loginPage.expectLoginError();
});

Nice, right? Now to change selectors, you only need to modify one file: LoginPage.ts. And the test code reads like natural language—newcomers can understand it immediately.

Directory Structure in Real Projects

tests/
├── pages/                  # Page Objects
│   ├── LoginPage.ts
│   ├── DashboardPage.ts
│   └── CheckoutPage.ts
├── fixtures/               # Test data and utilities
│   └── testData.ts
├── auth.spec.ts           # Authentication tests
├── checkout.spec.ts       # Payment flow tests
└── dashboard.spec.ts      # Dashboard tests

Pitfalls I’ve Stepped Into and Suggestions

  1. Don’t over-encapsulate. Not every page needs a Page Object. If a page is only tested once, just write it directly in the test—don’t use POM for the sake of POM.

  2. Method names should be semantic. async fillLoginForm() is clearer than async fillForm(). Six months later when you come back to the code, you’ll thank yourself.

  3. Encapsulate waiting logic in Page Objects. Playwright’s auto-wait is smart, but sometimes you still need a manual waitFor(). Hide this logic inside Page Objects to keep test cases cleaner (there’s a sketch after the test-data example below).

  4. Manage test data separately. Test data like usernames and passwords—I put them all in fixtures/testData.ts for easier management:

// tests/fixtures/testData.ts
export const testUsers = {
  validUser: {
    email: '[email protected]',
    password: 'password123'
  },
  invalidUser: {
    email: '[email protected]',
    password: 'wrongpass'
  }
};

Then reference in tests:

import { testUsers } from './fixtures/testData';

await loginPage.login(testUsers.validUser.email, testUsers.validUser.password);
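
And here's what point 3 above looks like in practice—a small sketch of hiding an explicit wait inside a Page Object method (the DashboardPage class and the data-testid selector are hypothetical):

// In DashboardPage.ts — callers never see the waitFor
async waitForStatsLoaded() {
  // The stats widget loads asynchronously, so wait for it explicitly
  await this.page.locator('[data-testid="stats-widget"]').waitFor({ state: 'visible' });
}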

Using this pattern, code maintenance cost drops dramatically. Now when I write tests, it’s basically: define Page Object → write a few test case lines → done.

API Route E2E Testing

Next.js API Routes are also part of the whole application, so they need testing too. I used to manually test APIs with Postman—exhausting. Now I test directly in Playwright without even opening a browser.

Playwright provides a request object that can send HTTP requests directly, perfect for testing Next.js API Routes.

Basic API Testing

Start with a simple example—testing an API to get user list:

// tests/api/users.spec.ts
import { test, expect } from '@playwright/test';

test.describe('User API tests', () => {
  test('GET /api/users - Get user list', async ({ request }) => {
    const response = await request.get('/api/users');

    // Verify status code
    expect(response.status()).toBe(200);

    // Verify response format
    const users = await response.json();
    expect(Array.isArray(users)).toBeTruthy();
    expect(users.length).toBeGreaterThan(0);

    // Verify data structure
    expect(users[0]).toHaveProperty('id');
    expect(users[0]).toHaveProperty('email');
    expect(users[0]).toHaveProperty('name');
  });

  test('POST /api/users - Create user', async ({ request }) => {
    const newUser = {
      email: '[email protected]',
      name: 'Test User',
      password: 'password123'
    };

    const response = await request.post('/api/users', {
      data: newUser
    });

    expect(response.status()).toBe(201);

    const createdUser = await response.json();
    expect(createdUser.email).toBe(newUser.email);
    expect(createdUser).not.toHaveProperty('password'); // Password shouldn't be returned
  });

  test('POST /api/users - Duplicate email should return error', async ({ request }) => {
    const duplicateUser = {
      email: '[email protected]',
      name: 'Duplicate User',
      password: 'password123'
    };

    const response = await request.post('/api/users', {
      data: duplicateUser
    });

    expect(response.status()).toBe(400);

    const error = await response.json();
    expect(error.message).toContain('Email already exists');
  });
});

Authenticated API Testing

In real projects, many APIs require login to access. This requires getting a token first, then including it in the request header:

// tests/api/auth.spec.ts
import { test, expect } from '@playwright/test';

let authToken: string;

test.describe('Authenticated APIs', () => {
  // Get token before all tests
  test.beforeAll(async ({ request }) => {
    const response = await request.post('/api/auth/login', {
      data: {
        email: '[email protected]',
        password: 'password123'
      }
    });

    const { token } = await response.json();
    authToken = token;
  });

  test('GET /api/profile - Get user profile', async ({ request }) => {
    const response = await request.get('/api/profile', {
      headers: {
        'Authorization': `Bearer ${authToken}`
      }
    });

    expect(response.status()).toBe(200);

    const profile = await response.json();
    expect(profile.email).toBe('[email protected]');
  });

  test('Unauthenticated access should return 401', async ({ request }) => {
    const response = await request.get('/api/profile');
    expect(response.status()).toBe(401);
  });
});

Hybrid Testing: Page + API

The most powerful approach combines page testing and API testing. For example, to test a post-publishing feature, I’d write it like this:

// tests/posts.spec.ts
import { test, expect } from '@playwright/test';

test('Complete post publishing flow', async ({ page, request }) => {
  // 1. Login via page
  await page.goto('/login');
  await page.fill('input[name="email"]', '[email protected]');
  await page.fill('input[name="password"]', 'password123');
  await page.click('button[type="submit"]');

  // 2. Go to post editor
  await page.goto('/posts/new');
  await page.fill('input[name="title"]', 'Test Post Title');
  await page.fill('textarea[name="content"]', 'This is test content');
  await page.click('button:has-text("Publish")');

  // 3. Wait for redirect to post detail page
  await page.waitForURL(/\/posts\/\d+/);

  // 4. Verify post was actually created via API
  const url = page.url();
  const postId = url.split('/').pop();

  const response = await request.get(`/api/posts/${postId}`);
  expect(response.status()).toBe(200);

  const post = await response.json();
  expect(post.title).toBe('Test Post Title');
  expect(post.content).toBe('This is test content');
  expect(post.status).toBe('published');
});

The advantage of this approach: it tests both frontend interaction and backend data. Once I caught a sneaky bug with this—page showed “Published successfully” but the post status in the database was draft. Turned out the status update logic was wrong.

My Practical Suggestions

  1. API tests should cover edge cases. Everyone tests the happy path, but you also need to test missing parameters, wrong types, insufficient permissions, and so on (there’s a small example at the end of this list).

  2. Test data cleanup. API tests write data to the database—remember to clean up in afterAll. I usually use a dedicated test database and periodically clear it:

test.afterAll(async ({ request }) => {
  await request.delete('/api/test/cleanup');
});
  3. Mock external services. If the API calls third-party services (payment, SMS), remember to mock them during testing—otherwise you’ll be charged for every test run.

  4. Watch response time. Playwright can measure request duration—I add an assertion to ensure the API is fast enough:

const start = Date.now();
await request.get('/api/users');
const duration = Date.now() - start;

expect(duration).toBeLessThan(1000); // API should respond within 1 second
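
As for point 1, here's the kind of edge-case test I mean—a sketch that slots into the user API spec from earlier (the exact status code and error shape depend on your implementation):

test('POST /api/users - missing email should return 400', async ({ request }) => {
  const response = await request.post('/api/users', {
    data: { name: 'No Email', password: 'password123' } // email omitted on purpose
  });

  expect(response.status()).toBe(400);
});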

When API testing and page testing work well together, you can cover 90% of scenarios. The remaining 10% can be supplemented with unit tests.

GitHub Actions CI/CD Integration

After writing tests, the next step is integrating with CI/CD. Automatically running tests on every commit can be a lifesaver—so many times I was confident before committing, only to find out from CI that I broke something else.

Good news: npm init playwright already generated a GitHub Actions config file for you. But that default config is pretty basic—I adjust it based on actual needs.

Basic CI Configuration

First, let’s look at the initially generated .github/workflows/playwright.yml:

name: Playwright Tests

on:
  push:
    branches: [ main, master ]
  pull_request:
    branches: [ main, master ]

jobs:
  test:
    timeout-minutes: 60
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3

    - uses: actions/setup-node@v3
      with:
        node-version: 18

    - name: Install dependencies
      run: npm ci

    - name: Install Playwright Browsers
      run: npx playwright install --with-deps

    - name: Run Playwright tests
      run: npx playwright test

    - uses: actions/upload-artifact@v3
      if: always()
      with:
        name: playwright-report
        path: playwright-report/
        retention-days: 30

This config works, but has several issues:

  1. Installs browsers every time—slow
  2. No test database—API tests will fail
  3. Reports can only be viewed by downloading—inconvenient

My Production-Grade Configuration

This is the config I actually use, with caching, database, and report deployment added:

name: E2E Tests

on:
  push:
    branches: [ main, dev ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    timeout-minutes: 60
    runs-on: ubuntu-latest

    services:
      # Test database (PostgreSQL)
      postgres:
        image: postgres:15
        env:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: testdb
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    env:
      DATABASE_URL: postgresql://test:test@localhost:5432/testdb
      NODE_ENV: test

    steps:
    - uses: actions/checkout@v4

    - name: Setup Node.js
      uses: actions/setup-node@v4
      with:
        node-version: '20'
        cache: 'npm'

    - name: Install dependencies
      run: npm ci

    - name: Cache Playwright browsers
      uses: actions/cache@v3
      with:
        path: ~/.cache/ms-playwright
        key: ${{ runner.os }}-playwright-${{ hashFiles('**/package-lock.json') }}

    - name: Install Playwright Browsers
      run: npx playwright install --with-deps chromium

    - name: Run database migrations
      run: npm run db:migrate

    - name: Run Playwright tests
      # Only chromium is installed above, so run just that project
      run: npx playwright test --project=chromium

    - name: Upload test results
      if: always()
      uses: actions/upload-artifact@v4
      with:
        name: playwright-report
        path: playwright-report/
        retention-days: 30

    # If main branch, deploy report to GitHub Pages
    - name: Deploy report to GitHub Pages
      if: always() && github.ref == 'refs/heads/main'
      uses: peaceiris/actions-gh-pages@v3
      with:
        github_token: ${{ secrets.GITHUB_TOKEN }}
        publish_dir: ./playwright-report

Configuration Key Points Explained

  1. Service containers (services): I use PostgreSQL as the test database so the API tests can run normally. If you use MySQL, swap in mysql:8 (see the snippet after this list).

  2. Cache browsers: actions/cache will cache Playwright browser files. First run is slow, subsequent runs are fast. I only install chromium because running three browsers in CI is too slow—chromium is enough.

  3. Database migration: npm run db:migrate creates tables before running tests. Remember to configure this script in package.json:

{
  "scripts": {
    "db:migrate": "prisma migrate deploy"
  }
}
  4. Report deployment: Test reports from the main branch are automatically deployed to GitHub Pages so team members can view reports online without downloading.
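
For reference, the MySQL variant of the service container from point 1 would look roughly like this (an untested sketch—adjust credentials and your DATABASE_URL to match):

    services:
      mysql:
        image: mysql:8
        env:
          MYSQL_ROOT_PASSWORD: test
          MYSQL_DATABASE: testdb
        options: >-
          --health-cmd "mysqladmin ping -h localhost"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 3306:3306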

Environment Variables Configuration

Test environments may need some API keys or secrets. I configure them in the repository’s Settings → Secrets:

env:
  DATABASE_URL: ${{ secrets.DATABASE_URL }}
  NEXTAUTH_SECRET: ${{ secrets.NEXTAUTH_SECRET }}
  STRIPE_SECRET_KEY: ${{ secrets.STRIPE_TEST_KEY }}

Debugging When Tests Fail

What if CI fails? Playwright provides several super useful features:

  1. View trace: In my config I set trace: 'on-first-retry'—when it fails, a trace file is generated. Download it and use npx playwright show-trace trace.zip to replay the entire test process.

  2. View screenshots and videos: Automatically captures screenshots and records video on failure—you can visually see page state.

  3. Reproduce CI environment locally: Use the act tool to run GitHub Actions locally for faster debugging:

# Install act
brew install act  # macOS
# or
choco install act  # Windows

# Run workflow
act -j test

Pitfalls I’ve Stepped Into

  1. Timeout settings should be reasonable. I initially set 30 minutes, but one test got stuck and wasted a lot of CI time. Now I set 60 minutes and keep an eye on which tests run particularly slowly (see the config snippet after this list).

  2. Don’t set too many parallel workers. CI machine performance is average—opening too many workers is actually slower. I set 2 workers—enough.

  3. Don’t set too many failure retries. retries: 2 is to handle occasional network hiccups, but if the test really has a problem, retrying multiple times is useless and just extends CI time.
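
On point 1: Playwright has a built-in option for spotting slow tests—it lists the slowest test files at the end of a run:

// playwright.config.ts
export default defineConfig({
  // Report up to 5 test files that take longer than 60 seconds
  reportSlowTests: { max: 5, threshold: 60_000 },
});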

After integrating CI/CD, code quality went straight up. Now every PR must have a green checkmark before merging—forces everyone to take testing seriously.

Test Coverage and Report Generation

After running tests, the most important thing is seeing the results. Playwright’s test reports are excellent—comprehensive and intuitive.

HTML Report (Most Commonly Used)

After tests finish, run:

npx playwright show-report

It will automatically open a local webpage showing all test results. This report includes:

  • Pass/fail status for each test
  • Runtime duration
  • Screenshots and videos of failed tests
  • Trace files (can replay entire test process)

My favorite feature is the Trace Viewer. Click on a failed test to see every step of test execution: network requests, DOM snapshots, console logs—everything. Like a time machine that can pinpoint exactly where the problem is.

Test Coverage

E2E test coverage is a bit special—it’s not code coverage but feature coverage. I keep a checklist of which features already have tests:

## Test Coverage Checklist

### User Authentication
- [x] Login (normal flow)
- [x] Login failure (wrong password)
- [x] Registration
- [x] Password recovery
- [ ] Third-party login (Google)

### Product Management
- [x] Add product
- [x] Edit product
- [x] Delete product
- [ ] Batch import

### Order Flow
- [x] Add to cart
- [x] Checkout
- [x] Payment (test environment)
- [ ] Refund process

This checklist goes in the project’s tests/README.md and gets updated whenever new features are added. At a glance you know which features don’t have test coverage yet.

Code Coverage (Optional)

If you really want code coverage numbers, Playwright can collect them in Chromium through the V8 coverage API (page.coverage). Converting the raw output into a readable report takes an extra tool such as v8-to-istanbul. A minimal sketch:

// tests/coverage.spec.ts
import { test } from '@playwright/test';

test('collect JS coverage (Chromium only)', async ({ page }) => {
  // Start V8 coverage collection via the Chrome DevTools Protocol
  await page.coverage.startJSCoverage();

  await page.goto('/');
  // ...exercise the page...

  const coverage = await page.coverage.stopJSCoverage();
  // Each entry contains the script URL and executed function ranges;
  // v8-to-istanbul can convert this into an Istanbul-style report
  console.log(`Collected coverage for ${coverage.length} scripts`);
});

But honestly, I rarely look at code coverage in E2E tests. Unit tests already cover core logic—E2E is more focused on whether feature flows work properly.

Custom Reports

Sometimes you need to send test results to Slack or DingTalk to notify the team. Playwright supports custom reporters:

// my-reporter.ts
import type {
  Reporter, TestCase, TestResult, FullResult
} from '@playwright/test/reporter';

class SlackReporter implements Reporter {
  private passed = 0;
  private failed = 0;

  // Tally results as each test finishes
  onTestEnd(test: TestCase, result: TestResult) {
    if (result.status === 'passed') this.passed++;
    else if (result.status === 'failed' || result.status === 'timedOut') this.failed++;
  }

  // Post the summary to Slack when the whole run ends
  async onEnd(result: FullResult) {
    await fetch('https://hooks.slack.com/services/YOUR_WEBHOOK', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        text: `Tests completed: ${this.passed} passed, ${this.failed} failed`
      })
    });
  }
}

export default SlackReporter;

Enable in config file:

// playwright.config.ts
export default defineConfig({
  reporter: [
    ['html'],
    ['./my-reporter.ts']
  ]
});

If you want to see test history trends (pass rate changes, runtime changes), run the suite with Playwright’s JSON reporter (npx playwright test --reporter=json) and feed the output into a dashboard of your choice for test quality analysis.

But I prefer a simpler, more straightforward method: after each CI run, write pass rate and runtime to a CSV file, then use Google Sheets to draw a chart. Simple and practical.
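
The script is nothing fancy—roughly this sketch (it assumes you saved the JSON reporter output with npx playwright test --reporter=json > results.json, and that the stats field names match that reporter's output):

// scripts/record-run.ts
import * as fs from 'fs';

const report = JSON.parse(fs.readFileSync('results.json', 'utf-8'));
const { expected, unexpected, duration } = report.stats; // aggregate run stats
const passRate = (expected / (expected + unexpected)) * 100;

// Append one row per CI run: timestamp, pass rate (%), duration (ms)
fs.appendFileSync(
  'test-history.csv',
  `${new Date().toISOString()},${passRate.toFixed(1)},${Math.round(duration)}\n`
);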

My Report Usage Habits

  1. Local development: Just look at terminal output. If it fails, rerun in --debug mode:

    npx playwright test --debug

  2. PR review: Look at CI’s HTML report, focus on failed tests and runtime. If a test always times out, the code might have issues.

  3. Regular review: Go over the test coverage checklist once a week and fill in missing test cases.

Test reports aren’t just numbers—more importantly, they help you discover problems and improve processes.

Conclusion

Thinking back to that night six months ago when I manually tested until 3 AM, I’m honestly so much more relaxed now.

With the Playwright + Next.js combination, the biggest win is peace of mind. Configure it once and you barely have to touch it again. Every commit, CI runs the tests automatically; every release, I’m confident. Production bugs really have dropped, and so have the product manager’s relentless messages (laugh).

If your Next.js project is still doing manual testing, my advice:

  1. Start with core flows. Don’t try to cover all features at once—first test key paths like login and payment.
  2. Use Page Object Model. May seem troublesome at first, but saves time in the long run.
  3. Integrate CI/CD. Automated tests that don’t run automatically might as well not be written.
  4. Don’t chase 100% coverage. Pick the important stuff—test core features well and you’re good.

One last tip: E2E testing isn’t just a technical tool—it’s a way of team collaboration. It makes everyone value code quality, lowers communication costs (tests are the best documentation), and makes releases predictable.

Now I can leave work at 5:30 every day, and with the time saved I can finally go to the gym. That annual membership was about to expire.

Start writing tests. Your future self will thank you.

FAQ

Playwright vs Cypress—which one should I choose?
Depends on your project needs:

• Playwright: Cross-browser testing (Chromium/Firefox/WebKit), fast parallel runs (8 workers cut my 10+ minute suite to 5), async/await syntax, suited for medium-to-large projects
• Cypress: Chrome-focused, strong time-travel debugging, rich community resources, beginner-friendly

If the project only tests Chrome and the team has little testing experience, choose Cypress; if you need cross-browser support, fast CI feedback, and the team has React/Next.js experience, choose Playwright.

Do I have to use Page Object Model?
Not mandatory, but highly recommended.

If the project is small (under 10 test cases) or a page is only tested once, just write directly in the test. But if:
• Multiple test cases operate the same page
• The team has multiple people maintaining test code
• The project will iterate long-term

Then POM will save you from pitfalls. You'll know its value the first time you need to change a selector—without POM you'd touch 10+ files, with POM only 1.

Tests in CI always time out—what should I do?
Common causes and solutions:

• webServer.timeout too short: Change to 120 seconds (Next.js cold start needs compilation)
• Too many workers: CI machines aren't powerful—2-4 workers is enough
• Problem in the test itself: use the trace file to troubleshoot—check whether a network request is slow or an element wait times out
• Slow browser installation: add actions/cache to cache the browser files

Another tip: only run chromium in CI and run the multi-browser suite locally—much faster.

How should I manage test data? Do I have to clean the database manually every time?
Three approaches:

• Separate test database: Dedicated for testing, periodically cleared, doesn't affect dev environment
• Clean after each test: Call cleanup API in test.afterAll(), but may miss some
• Use Docker containers: Start fresh database container for each test, destroy after completion (cleanest but slow)

I combine the first two: CI uses a Docker container database; local development uses a separate test database plus afterAll cleanup.

Do API tests need to mock third-party services?
Yes—always mock them, for three reasons:

• Cost issue: real calls to payment/SMS APIs charge fees—you lose money on every run
• Speed issue: slow third-party responses drag down the whole suite
• Stability issue: a third-party outage shouldn't fail your tests

Playwright supports network interception to mock API responses:
await page.route('**/api/payment', route => route.fulfill({ status: 200, body: '{"success": true}' }));

You can also check environment variables in Next.js API Routes and return mock data in the test environment.

What test coverage percentage is considered acceptable?
For E2E tests, don't measure code coverage—measure feature coverage.

Priority ranking:
• Core features (login, payment, orders): Must be 100% covered
• High-frequency features (browse products, add to cart): 80%+ coverage
• Low-frequency features (password recovery, refunds): 50%+ coverage
• Edge features (theme switching, language switching): Optional

Don't chase 100% coverage—pick the important stuff. My project has 100% core-flow coverage and about 60% overall feature coverage, and that already blocks 90% of bugs.

After writing Playwright tests, do I still need unit tests?
Yes, the two complement each other:

• E2E tests (Playwright): Verify feature flows, user interactions, frontend-backend integration—slow but comprehensive
• Unit tests (Jest/Vitest): Verify function logic, edge cases, error handling—fast but localized

Ideal ratio: 70% unit tests, 30% E2E tests. Use unit tests for core utility functions, hooks, component logic; use E2E tests for complete user flows.

Unit tests locate problems quickly; E2E tests make sure features actually work. You need both.
