Complete Guide to Next.js Production Monitoring: Sentry Integration, Log Management & Alert Configuration

Friday night, 9:17 PM. My phone buzzed.
The WeChat group was already on fire—“Payment page won’t load”, “Order just failed”, “White screen! White screen!”. I opened my laptop, connected to VPN, ran everything locally—perfect. Checked server logs—nothing but a single line saying “Internal Server Error”. Users reported the page froze after clicking the payment button, but I couldn’t reproduce it at all.
That weekend I barely slept. 2 AM digging through Vercel deployment logs, 6 AM redeploying test versions. Turns out it was a third-party payment SDK timing out sporadically in production. The whole debugging process felt like searching for a dropped needle in the dark.
To be honest, this wasn’t an isolated incident. Your Next.js app might run smooth as butter in development, but deploying to production opens Pandora’s box—server-side rendering occasionally returns 500s, edge functions mysteriously error out, API response times spike without any clue where the bottleneck is.
The root cause is simple: you lack a complete production monitoring system.
This article will walk you through building a Next.js monitoring system from scratch—from Sentry error tracking to structured log management, from performance monitoring to alert configuration. No theoretical fluff, just copy-paste configuration code. After reading this, you’ll know about issues before your users do.
Why Next.js Needs Specialized Monitoring
Next.js’s “Three-Headed Monster” Architecture
Traditional frontend apps run only in the browser, so errors show up directly in the DevTools console. Next.js is different: the same application runs simultaneously in three completely different environments:
- Client (Browser): React components in the user’s browser
- Server (Node.js): SSR rendering, API Routes, Server Actions
- Edge Network (Edge Runtime): Middleware, edge functions
A payment feature might involve: client-side form validation → middleware authentication → Server Action invocation → API Route database query → return to client for display. If any link breaks, traditional browser monitoring won’t see the full picture.
Last year I encountered a bizarre bug: users reported “page loads slowly, then shows 500”. Browser Network panel confirmed the request was slow, but couldn’t tell where exactly. After integrating Sentry’s distributed tracing, I discovered the server-side rendering was calling a third-party API that spiked from its usual 200ms to 8 seconds. Client-side monitoring would never catch this.
SSR’s “Black Box Effect”
When server-side rendering fails, users often see nothing but a bare 500 page. No stack trace, no context, nothing.
Even worse are Hydration errors. You might’ve seen this warning:
Warning: Expected server HTML to contain a matching <div> in <div>
These errors might seem minor in development but can cause complete page interaction failures in production. Without monitoring, you’ll only discover these through passive user feedback saying “the page won’t respond”.
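To make the mismatch concrete, here's a minimal illustrative sketch (not from any incident in this article) of a classic hydration-mismatch trigger: markup that differs between the server render and the client render.

```tsx
// Illustrative component: a classic hydration-mismatch trigger.
// The server renders one timestamp during SSR, the client renders another
// during hydration, so the markup no longer matches.
export function LastUpdated() {
  // ❌ new Date() evaluates at different moments on the server and the client
  return <span>Last updated: {new Date().toLocaleTimeString()}</span>;
}
```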
According to Vercel data, SSR-related errors account for roughly 35% of Next.js production issues. That’s just errors—doesn’t include performance problems like a component’s SSR time suddenly increasing, making users feel “the page is loading slower” without any idea where the bottleneck is.
Four Pillars of Complete Monitoring
A reliable Next.js monitoring solution needs to cover these bases:
Error Tracking
Not just capturing exceptions, but knowing: who triggered the error (user info), in what environment (device, browser, network), what operations they performed (breadcrumb trail), and what the related request parameters were.
Performance Monitoring
Is LCP (Largest Contentful Paint) exceeding 2.5 seconds? Are API response times slowing down? Which database query is dragging down the entire request?
Log Management
Structured logs searchable by time, user, and request ID. Pretty output for development debugging, integrated with log platforms for production analysis.
Alert Configuration
Immediate team notifications when error rates exceed thresholds, Slack pushes for new error types on first occurrence, automatic alerts for performance regressions.
With these four in place, you’ll no longer be groping in the dark when production issues arise. Let’s tackle them one by one.
Sentry Integration in Practice - From Installation to Deep Configuration
5-Minute Quick Setup
Sentry’s Next.js support is quite mature, with an official auto-configuration wizard. Truly just 5 minutes:
# Install SDK
npm install @sentry/nextjs
# Run configuration wizard
npx @sentry/wizard@latest -i nextjs
The wizard asks a few questions (Sentry project DSN, whether to upload Source Maps, etc.), then automatically creates three config files:
- sentry.client.config.ts - Browser environment
- sentry.server.config.ts - Node.js server-side
- sentry.edge.config.ts - Edge Runtime
It also modifies next.config.js to add Sentry’s webpack plugin. Run the wizard once and basic monitoring is live.
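For reference, the wizard typically wraps your config with withSentryConfig. A rough sketch of what it leaves behind; exact option names vary across @sentry/nextjs versions, so treat this as illustrative:

```javascript
// next.config.js: roughly what the wizard generates (options vary by SDK version)
const { withSentryConfig } = require('@sentry/nextjs');

/** @type {import('next').NextConfig} */
const nextConfig = {
  // ...your existing Next.js config
};

module.exports = withSentryConfig(nextConfig, {
  org: 'your-org',         // Sentry organization slug
  project: 'your-project', // Sentry project slug
  silent: true,            // keep source-map upload logs out of build output
});
```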
But this is just the beginning. Production environments need more refined configuration.
App Router Error Capture Essentials
If you’re using App Router, pay special attention to these areas:
Global Error Handling
Create app/global-error.tsx, the last line of defense for App Router:
'use client';
import * as Sentry from '@sentry/nextjs';
import { useEffect } from 'react';
export default function GlobalError({
error,
reset,
}: {
error: Error & { digest?: string };
reset: () => void;
}) {
useEffect(() => {
// Send to Sentry
Sentry.captureException(error);
}, [error]);
return (
<html>
<body>
<div style={{ padding: '2rem', textAlign: 'center' }}>
<h2>Something went wrong</h2>
<p>We've logged this error and will fix it soon</p>
<button onClick={() => reset()}>Try again</button>
</div>
</body>
</html>
);
}
Server Actions Error Capture
Server Actions are a killer feature of App Router, but error handling is often overlooked:
'use server';
import * as Sentry from '@sentry/nextjs';
export async function createOrder(formData: FormData) {
return await Sentry.withServerActionInstrumentation(
'createOrder', // action name, shows up in Sentry
{
recordResponse: true, // record response data
},
async () => {
// Your business logic
const productId = formData.get('productId');
const order = await db.order.create({
data: { productId, userId: getCurrentUserId() },
});
return order;
}
);
}
With this wrapper, any errors in Server Actions are automatically reported, and execution time is tracked.
Production Environment Optimization
After integrating Sentry, your first month’s bill might shock you—because default config reports all events. Adjust sampling rates:
// sentry.client.config.ts
import * as Sentry from '@sentry/nextjs';
Sentry.init({
dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
// Performance tracing sample rate
// Development 100%, production 10% (adjust based on traffic)
tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
// Session Replay sampling
replaysSessionSampleRate: 0.1, // 10% of normal sessions recorded
replaysOnErrorSampleRate: 1.0, // 100% of error sessions recorded
// Environment identifier
environment: process.env.NEXT_PUBLIC_VERCEL_ENV || 'development',
// Ignore specific errors
ignoreErrors: [
// Browser extension injected errors
'ResizeObserver loop limit exceeded',
// Third-party script errors
/chrome-extension/,
/^Non-Error promise rejection/,
],
});
tracesSampleRate setting guide:
- Daily UV < 10K: 0.2 - 0.5
- Daily UV 10K-100K: 0.1 - 0.2
- Daily UV > 100K: 0.05 - 0.1
Our project gets around 30K daily UV, set to 0.15, using about 60% of monthly Sentry quota—enough sample coverage without going over budget.
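If a flat rate feels too blunt, Sentry also accepts a tracesSampler function so you can sample per route. A sketch, assuming the sampling context exposes the transaction name (the exact field names differ slightly between SDK major versions):

```typescript
// sentry.client.config.ts: per-route sampling instead of a flat tracesSampleRate
// (sketch; the shape of samplingContext varies by SDK version)
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tracesSampler: (samplingContext) => {
    const name = samplingContext.name ?? '';
    if (name.includes('/checkout')) return 1.0;  // always trace the critical flow
    if (name.includes('/api/health')) return 0;  // never trace health checks
    return 0.1;                                  // default for everything else
  },
});
```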
Source Maps: Debuggable Without Exposing Code
Production JavaScript is typically minified and obfuscated, so error stacks look like this:
at r.render (app.js:1:23456)
Completely unreadable. Source Maps can map the minified code back to the original source, but exposing them publicly would leak your source code.
Sentry’s approach: upload Source Maps to Sentry servers, user browsers can’t access them, only Sentry uses them internally to restore stacks.
Configure in CI/CD (GitHub Actions example):
# .github/workflows/deploy.yml
- name: Upload Source Maps to Sentry
env:
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
SENTRY_ORG: your-org
SENTRY_PROJECT: your-project
run: npm run build
The Sentry plugin in next.config.js handles the upload automatically during the build. Remember to add SENTRY_AUTH_TOKEN to GitHub Secrets; don’t commit it to the repo.
Advanced Features: Session Replay and Distributed Tracing
Session Replay is my favorite feature—replays user actions like watching a recording to reconstruct the scene.
Once a user reported “can’t click payment button”. I watched their Session Replay and discovered they were using iPad in landscape mode, payment button was covered by virtual keyboard. This kind of issue would never be found with error logs alone.
It’s easy to enable; just add this to the client config:
import * as Sentry from '@sentry/nextjs';
import { Replay } from '@sentry/nextjs';
Sentry.init({
integrations: [
new Replay({
maskAllText: false, // whether to hide all text
blockAllMedia: true, // whether to block all media
maskAllInputs: true, // hide form inputs (avoid leaking sensitive info)
}),
],
replaysSessionSampleRate: 0.1,
replaysOnErrorSampleRate: 1.0,
});
Distributed tracing can track a request’s complete lifecycle. User clicks button → frontend sends request → API Route queries database → returns to frontend for rendering: every step’s duration is visible.
Configuration isn’t complex either: make sure both the client and server configs initialize Sentry, and the SDK automatically attaches a sentry-trace header to outgoing requests.
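It's worth checking which outgoing requests actually get that header. A minimal sketch of the client-side knob; the API origin below is a placeholder, not from the original setup:

```typescript
// sentry.client.config.ts: control which outgoing requests carry sentry-trace/baggage headers
// (the API origin below is a placeholder; list your own backends)
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tracesSampleRate: 0.1,
  tracePropagationTargets: ['localhost', /^\//, /^https:\/\/api\.example\.com/],
});
```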
Custom Context: Making Errors More Meaningful
By default Sentry only knows “an error occurred”. Through custom context, error reports become more valuable:
import * as Sentry from '@sentry/nextjs';
// Set user information
Sentry.setUser({
id: user.id,
email: user.email,
username: user.username,
// Don't put sensitive info like passwords!
});
// Add business context
Sentry.setContext('purchase', {
orderId: '12345',
amount: 99.99,
paymentMethod: 'credit_card',
});
// Add tags (for filtering)
Sentry.setTag('feature', 'checkout');
Sentry.setTag('ab_test', 'variant_b');
This way when errors occur, you immediately know which user triggered it and in what business scenario.
Real case: We once noticed a payment error occurring more frequently than expected, checked Sentry context and found all were ab_test: variant_b users. Pinpointed a bug in the A/B test’s new payment flow, immediately disabled that variant, avoiding bigger losses.
Log Management - Making Logs Work for You
console.log Isn’t Enough
Early on I loved sprinkling console.log everywhere too. Felt great for debugging, but useless after going live:
- Can’t filter: Finding one user’s request records in 100K logs? Good luck.
- Can’t aggregate: Want to know how many database queries were slower than 1 second in the past hour? Can’t count.
- Can’t alert: “Payment failed” appears in logs? Nobody knows.
Structured logging solves these problems. Instead of simply printing strings, output JSON objects with each log carrying metadata like timestamp, log level, request ID, user ID. Later you can search and aggregate by any field.
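As a quick illustration (the values are made up), here's the same failure logged both ways:

```typescript
// Illustrative only: the same event as an unstructured string vs. a structured log
import pino from 'pino';

const logger = pino();

console.log('Payment failed for user 456'); // hard to filter, aggregate, or alert on

logger.error(
  { correlationId: 'abc-123', userId: 'user_456', orderId: 'order_789' },
  'Payment failed'
);
// => {"level":50,"time":...,"correlationId":"abc-123","userId":"user_456","orderId":"order_789","msg":"Payment failed"}
```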
Pino vs Winston: How to Choose
Two major logging libraries in Node.js ecosystem, each with strengths:
| Feature | Pino | Winston |
|---|---|---|
| Performance | Super fast, async logging nearly zero overhead | Slightly slower, but still sufficient |
| Ease of use | Simple config, works out of box | Feature-rich, good plugin ecosystem |
| Extensibility | Extend via Transport | Multiple built-in Transports |
| Community | Next.js officially recommended | Veteran library, most complete docs |
My recommendation:
- High concurrency scenarios (QPS > 1000): Choose Pino, performance advantage clear
- Complex log processing needs (multiple formats, targets): Choose Winston
- Don’t know what to pick: Choose Pino, Next.js official docs all use it
Pino Practical Configuration
First install:
npm install pino
npm install pino-pretty --save-dev # development environment pretty output
Create a global logger:
// lib/logger.ts
import pino from 'pino';
const logger = pino({
level: process.env.LOG_LEVEL || 'info',
// Format log level
formatters: {
level: (label) => ({ level: label.toUpperCase() }),
},
// Development environment uses pino-pretty for pretty output
transport: process.env.NODE_ENV === 'development'
? {
target: 'pino-pretty',
options: {
colorize: true,
translateTime: 'HH:MM:ss',
ignore: 'pid,hostname',
},
}
: undefined,
});
export { logger };
Development output will be colorized and readable; production output is plain JSON that log platforms can parse easily.
Using in API Routes
The key is assigning each request a correlationId so you can string together all the logs for that request:
// app/api/products/route.ts
import { logger } from '@/lib/logger';
import { randomUUID } from 'crypto';
export async function GET(request: Request) {
// Generate unique request ID
const correlationId = request.headers.get('x-correlation-id') || randomUUID();
// Create child logger, automatically includes correlationId
const log = logger.child({ correlationId });
try {
log.info({ url: request.url }, 'Processing product request');
const products = await db.product.findMany();
log.info({ count: products.length }, 'Products fetched successfully');
return Response.json(products);
} catch (error) {
log.error({ error: error.message, stack: error.stack }, 'Failed to fetch products');
throw error;
}
}
This way, all logs with the same correlationId are linked together; when troubleshooting, just search for that ID and the entire request chain is laid out.
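Where does x-correlation-id come from in the first place? One option is Next.js middleware that generates or forwards the ID for every API request; a minimal sketch (the matcher is an assumption, adjust it to your routes):

```typescript
// middleware.ts: attach a correlation ID to every API request so downstream
// handlers can read it from the headers (sketch; adjust the matcher to your routes)
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  // Reuse an incoming ID (e.g. from a gateway) or generate a fresh one
  const correlationId =
    request.headers.get('x-correlation-id') ?? crypto.randomUUID();

  // Forward the ID to the route handler and echo it back to the client
  const requestHeaders = new Headers(request.headers);
  requestHeaders.set('x-correlation-id', correlationId);

  const response = NextResponse.next({ request: { headers: requestHeaders } });
  response.headers.set('x-correlation-id', correlationId);
  return response;
}

export const config = {
  matcher: '/api/:path*',
};
```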
Log Level Best Practices
Too few logs are useless; too many drown out the key information. My leveling standard:
ERROR - Issues requiring immediate attention
- Database connection failed
- Payment interface call failed
- Critical business logic threw exception
log.error({ error, userId, orderId }, 'Payment processing failed');
WARN - Abnormal but recoverable situations
- API call retry succeeded
- Fallback logic triggered
- Approaching quota limit
log.warn({ retryCount: 3 }, 'External API retry succeeded');
INFO - Critical business checkpoints
- User login/logout
- Order created/completed
- Key configuration changes
log.info({ userId, ip }, 'User logged in');
DEBUG - Detailed debug information
- Function parameters and return values
- Intermediate states
- Performance timing
log.debug({ params }, 'Calling external API');
Production defaults to INFO; temporarily raise it to DEBUG when troubleshooting.
Log Aggregation and Analysis
For local development, pino-pretty is enough; production needs integration with a log platform. Several mainstream choices:
Vercel Logs
If you deploy on Vercel, logging is built in with zero config. But logs are only retained for 7 days and search is limited.
Datadog
An enterprise solution: APM, logs, and monitoring in one suite. Configuration:
import { datadogLogs } from '@datadog/browser-logs';
datadogLogs.init({
clientToken: process.env.NEXT_PUBLIC_DATADOG_CLIENT_TOKEN,
site: 'datadoghq.com',
forwardErrorsToLogs: true,
sampleRate: 100,
});
Logtail/BetterStack
Good value, focused on log analysis. Supports real-time search, alert rules, custom dashboards.
For personal projects I use Logtail: simple config and 1 GB of logs free per month. For team projects we use Datadog: expensive but powerful.
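For Logtail/Better Stack specifically, here's a sketch of wiring it in as a Pino transport. It assumes the @logtail/pino transport package and an env var I've named LOGTAIL_SOURCE_TOKEN; check the current Better Stack docs for the exact option names:

```typescript
// lib/logger.ts (production variant): ship logs to Logtail via a Pino transport
// (sketch; assumes @logtail/pino and a LOGTAIL_SOURCE_TOKEN env var of my own naming)
import pino from 'pino';

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  transport:
    process.env.NODE_ENV === 'production' && process.env.LOGTAIL_SOURCE_TOKEN
      ? {
          target: '@logtail/pino',
          options: { sourceToken: process.env.LOGTAIL_SOURCE_TOKEN },
        }
      : {
          target: 'pino-pretty',
          options: { colorize: true },
        },
});

export { logger };
```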
Key Log Field Design
A good log should contain these fields:
{
"timestamp": "2025-12-20T15:00:06.123Z", // Timestamp
"level": "INFO", // Log level
"correlationId": "abc-123-def", // Request correlation ID
"userId": "user_456", // User ID
"action": "create_order", // Business action
"duration": 234, // Execution duration (ms)
"status": "success", // Status
"metadata": { // Additional metadata
"orderId": "order_789",
"amount": 99.99
}
}
Logs like this can answer “who did what when, with what result”.
Performance Monitoring - Data-Driven Optimization
Core Web Vitals: Metrics Google Cares About
Google uses Core Web Vitals as search ranking factors, you should care too:
- LCP (Largest Contentful Paint): Largest content paint time, ideal value < 2.5s
- FID (First Input Delay) / INP (Interaction to Next Paint): Interaction response time, < 100ms / < 200ms
- CLS (Cumulative Layout Shift): Cumulative layout shift, < 0.1
Next.js has built-in Web Vitals reporting, just add a few lines in app/layout.tsx:
'use client';
import { useReportWebVitals } from 'next/web-vitals';
export function WebVitalsReporter() {
useReportWebVitals((metric) => {
// Send to Sentry
if (window.Sentry) {
window.Sentry.captureMessage(`Web Vital: ${metric.name}`, {
level: 'info',
tags: {
web_vital: metric.name,
},
contexts: {
web_vitals: {
value: metric.value,
rating: metric.rating,
},
},
});
}
// Or send to your analytics platform
fetch('/api/analytics/web-vitals', {
method: 'POST',
body: JSON.stringify(metric),
});
});
return null;
}
Then import in root layout:
// app/layout.tsx
// Note: the import path below is an assumption; point it at wherever you defined WebVitalsReporter
import { WebVitalsReporter } from './web-vitals-reporter';
export default function RootLayout({ children }) {
return (
<html>
<body>
<WebVitalsReporter />
{children}
</body>
</html>
);
}
API Performance Tracking
Frontend performance is only half, slow backend APIs make users wait just the same. Sentry’s Performance Monitoring can track every API request:
// app/api/products/[id]/route.ts
import * as Sentry from '@sentry/nextjs';
export async function GET(
request: Request,
{ params }: { params: { id: string } }
) {
// Create a transaction
return await Sentry.startSpan(
{
op: 'api.request',
name: 'GET /api/products/[id]',
},
async () => {
// Track database query
const product = await Sentry.startSpan(
{
op: 'db.query',
name: 'Fetch product from database',
},
async () => {
return await db.product.findUnique({
where: { id: params.id },
include: { reviews: true },
});
}
);
if (!product) {
return Response.json({ error: 'Not found' }, { status: 404 });
}
// Track external API call
const pricing = await Sentry.startSpan(
{
op: 'http.client',
name: 'Fetch pricing from external API',
},
async () => {
const res = await fetch(`https://pricing-api.com/product/${params.id}`);
return res.json();
}
);
return Response.json({ ...product, pricing });
}
);
}
In Sentry dashboard you’ll see:
- Total request time 450ms
- Database query 120ms
- External API call 300ms
- Other logic 30ms
Crystal clear: the bottleneck is the external API.
Slow Query Alerts
Databases are a frequent performance bottleneck. You can add monitoring via Prisma middleware:
// lib/prisma.ts
import { PrismaClient } from '@prisma/client';
import * as Sentry from '@sentry/nextjs';
import { logger } from './logger';
const prisma = new PrismaClient();
// Monitor slow queries
prisma.$use(async (params, next) => {
const before = Date.now();
const result = await next(params);
const after = Date.now();
const duration = after - before;
// Queries over 1 second logged as WARN
if (duration > 1000) {
logger.warn({
model: params.model,
action: params.action,
duration,
args: params.args,
}, 'Slow database query detected');
// Also send to Sentry
Sentry.captureMessage('Slow database query', {
level: 'warning',
tags: { model: params.model, action: params.action },
extra: { duration, args: params.args },
});
}
return result;
});
export { prisma };
This way, no slow query escapes your notice.
Real User Monitoring vs Synthetic Monitoring
Real User Monitoring (RUM)
Uses Sentry, Datadog, etc. to collect performance data from real users. The advantage is that it reflects real conditions (different networks, devices); the disadvantage is that it’s reactive: you only find out after a problem occurs.
Synthetic Monitoring
Uses Checkly, Pingdom, etc. to periodically simulate visits to your site from around the world. The advantage is proactive problem discovery; the disadvantage is that it can’t cover every real user scenario.
Best to combine both:
- RUM focuses on user experience metrics
- Synthetic monitoring handles availability and critical flows (login, payment)
I use Checkly to hit the homepage and login endpoints every 5 minutes from 5 geographic locations; any timeout or failure triggers an immediate alert.
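If you'd rather not add a paid synthetic monitoring service yet, a rough DIY sketch: a script run on a schedule (cron, GitHub Actions) that hits critical endpoints and posts failures to a Slack webhook. The URLs and the SLACK_WEBHOOK_URL variable are placeholders, not part of the original setup.

```typescript
// scripts/health-check.ts: a bare-bones synthetic check (DIY alternative to Checkly)
// URLs and the Slack webhook env var are placeholders.
const ENDPOINTS = [
  'https://example.com/',           // homepage
  'https://example.com/api/health', // health endpoint
];

async function check(url: string): Promise<string | null> {
  const started = Date.now();
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
    const duration = Date.now() - started;
    if (!res.ok) return `${url} returned ${res.status} (${duration}ms)`;
    if (duration > 5_000) return `${url} was slow: ${duration}ms`;
    return null; // healthy
  } catch (err) {
    return `${url} failed: ${(err as Error).message}`;
  }
}

async function main() {
  const results = await Promise.all(ENDPOINTS.map(check));
  const failures = results.filter((r): r is string => r !== null);
  if (failures.length > 0 && process.env.SLACK_WEBHOOK_URL) {
    await fetch(process.env.SLACK_WEBHOOK_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: `🚨 Synthetic check failed:\n${failures.join('\n')}` }),
    });
  }
}

main();
```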
Alert Configuration - Discovering Issues First
Slack Integration: Team Knows Immediately
Sentry’s Slack integration is simple: Settings → Integrations → Slack, authorize, then select which channel to push to.
But the default config pushes every error, which quickly becomes noise. You need to configure alert rules:
In Sentry project settings:
- Alerts → Create Alert Rule
- Select trigger conditions:
- Error rate alert: “Error count > 50 in 10 minutes”
- New error alert: “Immediately notify on first occurrence”
- Performance regression: “API P95 response time > 1s”
- Select Action: Send a notification via Slack
Slack message format:
🚨 Production Error Spike
Project: my-nextjs-app
Environment: production
Error: TypeError: Cannot read property 'id' of undefined
Events: 127 events in 10 minutes
View in Sentry: https://sentry.io/...
Click the link to jump straight into Sentry and see the details and stack.
Alert Prioritization: Avoiding Fatigue
Not all issues are equally urgent. My prioritization strategy:
P0 - Critical (immediate response)
- Service completely unavailable
- Payment functionality failed
- Database connection down
Trigger: Phone (PagerDuty) + Slack @channel
P1 - Important (respond within 1 hour)
- Core functionality abnormal
- Error rate spike (> 100 in 10 minutes)
- API response time P95 > 3s
Trigger: Slack push to dev channel
P2 - Normal (handle during work hours)
- Small-scale errors (< 10/hour)
- Non-critical functionality abnormal
- Third-party script errors
Trigger: Email daily digest
Configuration example (Sentry alert rules, expressed as simplified pseudo-config):
// P0 alert: payment failure
{
conditions: [
{ type: 'event.tag', key: 'feature', value: 'payment' },
{ type: 'event.level', value: 'error' }
],
frequency: 'every event', // notify every time
actions: [
{ type: 'slack', channel: '#critical-alerts', mention: '@channel' },
{ type: 'pagerduty', service: 'payments' }
]
}
// P1 alert: error rate spike
{
conditions: [
{ type: 'event.count', value: 100, interval: '10m' }
],
frequency: 'once per issue', // notify once per issue
actions: [
{ type: 'slack', channel: '#alerts-dev' }
]
}Alert Noise Reduction Techniques
When first integrating monitoring, you might be overwhelmed by alerts. Several noise reduction techniques:
1. Ignore known issues
Development hot reload errors, third-party script exceptions, these noises can be filtered:
// sentry.client.config.ts
Sentry.init({
ignoreErrors: [
// Browser extension errors
/chrome-extension/,
/moz-extension/,
// Third-party scripts
/google-analytics/,
// Development hot reload
/HMR/,
],
denyUrls: [
// Ignore errors from specific domains
/extensions\//i,
/^chrome:\/\//i,
],
});
2. Merge duplicate alerts
Same error only notified once in 10 minutes, avoiding channel spam. Sentry’s “Issue Grouping” automatically merges similar errors.
3. Set quiet periods
During deployment there might be brief error spikes, can set “Mute for 10 minutes”.
4. Use fingerprints
Custom error grouping rules to merge errors with same root cause:
Sentry.captureException(error, {
fingerprint: ['database-connection-error', databaseName],
});
This way, connection errors from different databases are grouped separately, making them easier to pinpoint.
Real-World Case - Complete Monitoring Solution Implementation
E-commerce Site Monitoring Architecture
Last year I helped an e-commerce site overhaul its monitoring. Here’s the complete solution:
Background:
- Daily UV 80K
- Peak QPS 3000+
- Main issues: sporadic payment failures, slow homepage loading
Monitoring Architecture:
┌─────────────┐
│ Next.js │
│ Frontend/SSR│
└──────┬──────┘
│
├─ Sentry (error + performance)
├─ Pino (structured logs) → Datadog
├─ Web Vitals → Sentry
└─ Checkly (synthetic monitoring)
Key Configuration:
- User Behavior Tracking
// lib/tracking.ts
import * as Sentry from '@sentry/nextjs';
export function trackCheckoutStep(step: string, data: any) {
Sentry.addBreadcrumb({
category: 'checkout',
message: `Checkout step: ${step}`,
data,
level: 'info',
});
}
// Call in shopping flow
trackCheckoutStep('add_to_cart', { productId, price });
trackCheckoutStep('proceed_to_payment', { cartTotal });
trackCheckoutStep('payment_submitted', { method: 'credit_card' });
This way, when a payment fails, you can see the user’s complete shopping path.
- Payment Monitoring
// app/api/payment/route.ts
import * as Sentry from '@sentry/nextjs';
import { logger } from '@/lib/logger';
import { processPayment } from '@/lib/payments'; // hypothetical module wrapping your payment provider
export async function POST(request: Request) {
const data = await request.json();
const { orderId, userId, amount, method } = data;
const log = logger.child({ action: 'payment', orderId, userId });
try {
const result = await processPayment(data);
log.info({ orderId, amount, method }, 'Payment succeeded');
return Response.json({ success: true, orderId });
} catch (error) {
log.error({ error, orderId, userId }, 'Payment failed');
// P0 alert
Sentry.captureException(error, {
tags: { feature: 'payment', severity: 'critical' },
level: 'fatal',
});
return Response.json({ error: 'Payment failed' }, { status: 500 });
}
}
Any payment failure immediately notifies the team.
- Performance Baseline Setting
Use Sentry Performance Monitoring to establish baseline:
- Homepage LCP < 2s
- Product detail page LCP < 2.5s
- API /api/products P95 < 500ms
Auto-alert when exceeding baseline.
Results:
- Incident discovery time reduced from average 40 minutes to 3 minutes
- Payment failure rate reduced from 0.8% to 0.2%
- Homepage LCP optimized from 3.2s to 1.8s
Monitoring Checklist
Finally, a checklist to audit your project:
**Error Monitoring**
- [ ] Sentry configured and tested
- [ ] Source Maps uploaded successfully
- [ ] global-error.tsx created (App Router)
- [ ] Server Actions wrapped with error handling
- [ ] Ignore rules configured (filter noise)
**Log Management**
- [ ] Logging library integrated (Pino/Winston)
- [ ] Production outputs JSON format
- [ ] Logs include correlationId
- [ ] Log level correctly set (production INFO)
- [ ] Logs integrated with aggregation platform
**Performance Monitoring**
- [ ] Web Vitals reporting enabled
- [ ] Core Web Vitals meeting standards (LCP<2.5s, INP<200ms, CLS<0.1)
- [ ] Critical APIs have performance tracking
- [ ] Slow query monitoring configured
- [ ] Synthetic monitoring set up (optional)
**Alert Configuration**
- [ ] Slack/email alerts tested
- [ ] Alert rules prioritized by severity
- [ ] Alert noise reduction rules configured
- [ ] Team members know alert procedures
- [ ] P0 incidents have clear responders
**Continuous Improvement**
- [ ] Weekly monitoring data review
- [ ] Error trend analysis (which errors increasing)
- [ ] Performance regression detection (which pages slowing)
- [ ] Alert rules regularly optimized
Conclusion
From reactive firefighting to proactive discovery, monitoring gives you control over production.
Recap of the monitoring system we built:
- Sentry handles error tracking and performance monitoring, can even replay user actions
- Pino provides structured logs, links entire request chain by correlationId
- Web Vitals focuses on user experience metrics, directly impacts SEO rankings
- Slack alerts let team know about issues immediately, prioritization prevents fatigue
More importantly, a mindset shift: monitoring isn’t a “nice to have”, it’s your production environment’s airbag. You wouldn’t wait for a crash to install an airbag, and you shouldn’t wait for a production incident to think about monitoring.
Take action today. If your project has no monitoring yet:
- This weekend spend 2 hours integrating Sentry, configure basic error tracking
- Next week add structured logging and correlationId
- Week after set up Slack alerts and performance baselines
Don’t try to do everything at once: get basic error monitoring running first, then improve gradually. After each production incident, ask yourself: “Could monitoring have discovered this issue earlier?” Keep iterating, and monitoring becomes your most reliable teammate.
Finally, if this article helped you, share it with your team to improve together. After all, monitoring is a team effort, not a solo battle.
May your Next.js app be rock solid, never crash. (But reality often disappoints, so monitoring is really important 😄)
FAQ
Why does Next.js need specialized monitoring?
Because the same application runs in three completely different environments:
• Browser (client-side)
• Node.js (server-side)
• Edge Runtime (edge functions)
Traditional browser monitoring only sees client-side errors. You need monitoring for all three environments to get the full picture.
How do I integrate Sentry with Next.js?
1) Install @sentry/nextjs
2) Run npx @sentry/wizard@latest -i nextjs
3) Configure for client, server, and edge
4) Set up source maps for debugging
Sentry automatically captures errors from all three environments and provides unified error tracking.
How do I implement structured logging?
Use a structured logger (e.g. Pino) and attach a correlationId to every log. Example:
const correlationId = crypto.randomUUID()
logger.info({ correlationId, message: 'Request started' })
This allows you to trace a request across client, server, and edge environments.
Use JSON format for easy aggregation and analysis.
How do I monitor performance?
Track these metrics:
• Core Web Vitals (LCP, INP, CLS)
• API response times
• Database query times
• Edge function execution time
Tools:
• Vercel Analytics (if using Vercel)
• Sentry Performance Monitoring
• Custom metrics with logging
Set up alerts for performance thresholds.
How do I set up alerts?
Alert on:
• Error rate thresholds
• Performance degradation
• API response time spikes
• Database connection issues
Methods:
• Sentry alerts (email, Slack, etc.)
• Custom webhook integrations
• Monitoring service alerts (Datadog, New Relic)
Set reasonable thresholds to avoid alert fatigue.
What should I monitor in production?
• Error rates and types
• Response times
• User experience (Core Web Vitals)
• Resource usage (CPU, memory)
• Database performance
• API endpoint health
Start with error tracking, then add performance monitoring, then add custom metrics.
How do I debug production issues?
1) Check Sentry for error details
2) Use correlationId to trace request
3) Check logs for context
4) Review performance metrics
5) Reproduce in staging environment
Tools:
• Sentry error tracking
• Structured logs with correlationId
• Performance monitoring
• Source maps for stack traces
Always have source maps enabled for production debugging.