
Docker logs Command Guide: 9 Tips to Quickly Troubleshoot Container Issues

Docker log viewing techniques illustration

3 AM. My phone vibrates, jolting me awake. The alert says: production payment service is down. I scramble to my laptop and quickly type docker logs payment-service, and then my screen floods with tens of thousands of log lines, all INFO-level noise. I stare at the endless stream, hunting for that one critical ERROR message like searching for a single book in a library with no catalog.

To be honest, I still remember that sinking feeling. The docker logs command looks simple, but when it really matters, you realize “viewing logs” isn’t as easy as it sounds. What do you do when logs flood your screen? How do you monitor in real-time? How do you filter by time range? Where is that life-saving error message hiding?

If you’ve faced these issues, this article is for you. I’ll share 9 practical docker logs tips, from basics to advanced techniques, to help you quickly locate container problems. These are lessons I learned from making mistakes, researching, and real-world testing: not a parameter manual, but techniques you’ll actually use.

Basic Log Viewing

1. The Most Basic Log View

Let’s start with the basics. You’ve probably seen this command:

docker logs <container-name>
# Or using container ID
docker logs abc123def456

This command dumps all logs from when the container started until now to your terminal. Sounds great, but in practice—oh my, the logs cascade like a waterfall, completely unreadable.

The first time I hit this problem, the container had been running for three days with tens of thousands of log lines. The terminal scrolled furiously, and I stared at it for ages without catching a single ERROR. Later I learned you shouldn’t use this “bare command” in such situations.

So when should you use this basic command?

Honestly, only two scenarios:

  • The container just started and doesn’t have many logs yet
  • You need to export all logs to a file for backup

Otherwise? Don’t use it. There are better ways.

2. View the Last N Lines

This is the most commonly used technique for daily work:

docker logs --tail 50 my-container

The --tail parameter shows only the last N lines. I usually stick with 50 or 100 lines—enough to spot issues without drowning in information.

Real-world scenario:

Last week our API service suddenly slowed down. My first instinct:

docker logs --tail 100 api-server

In those latest 100 lines, I immediately spotted database connection timeout warnings. The problem scope narrowed instantly—not a code issue, something was up with the database.

The core idea of this technique: check recent logs first to determine the general direction of the problem. If the latest logs don’t reveal clues, dig deeper with other methods.

One more thing—if you want to pinpoint issues but aren’t sure how many lines to check, I recommend starting with 50. Not enough? Try 100. Still not enough? Go to 200. Gradual escalation beats viewing everything at once.

Real-time Monitoring

3. Real-time Log Streaming

This technique is especially useful when debugging—just like Linux’s tail -f command, monitoring logs in real-time:

docker logs -f my-container

Adding the -f parameter (short for follow) keeps logs streaming, showing new logs instantly on your screen.

I typically use it in these scenarios:

  1. Monitoring container startup
    When deploying a new service, start the container and immediately add -f to watch startup logs. If something’s wrong in the config file, you’ll see the error right away instead of waiting for the service to fail (see the sketch after this list).

  2. Catching issues while reproducing bugs
    Got a bug that only triggers under specific conditions? I’ll start docker logs -f first, then trigger that operation. Logs update in real-time, errors caught red-handed.

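For the startup scenario in point 1, here is a minimal sketch (my-image and my-container are placeholders for your own image and container name):

# Start a new container and immediately follow its startup output
docker run -d --name my-container my-image
docker logs -f --tail 50 my-container
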
Here’s an even more useful combination:

docker logs -f --tail 100 my-container

This shows the last 100 lines of history first, then keeps following new output: you see what happened recently, then watch what happens next in real time.

Speaking of which, I remember a colleague who loved using -f for debugging. He once stared at the screen for half an hour with no log updates—he forgot the container had already crashed and wasn’t producing new logs. Before using -f, make sure the container is running: docker ps.
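
If you only care about one specific container, you can check it directly instead of scanning the whole docker ps output (my-container is a placeholder name):

# Confirm the container is actually running before following its logs
docker ps -a --filter "name=my-container" --format "table {{.Names}}\t{{.Status}}"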

Precise Filtering

4. Filter by Time Range

This is one of my favorite features. Ever experienced this? Monitoring tells you “service failed at 3 AM last night,” but you see the alert in the morning and logs have accumulated thousands more lines. How do you quickly pinpoint that time period’s logs?

Use --since and --until parameters:

# View logs from a specific time onward
docker logs --since "2025-12-18T03:00:00" my-container

# View logs within a time range
docker logs --since "2025-12-18T03:00:00" --until "2025-12-18T04:00:00" my-container

The time format is ISO 8601 standard, but you don’t need to be that strict—Docker also supports relative time:

# View logs from the last hour
docker logs --since 1h my-container

# View logs from the last 30 minutes
docker logs --since 30m my-container

# View logs from the last 24 hours
docker logs --since 24h my-container

Real case:

Once at 4 AM our order service crashed. When I came to investigate at 9 AM, logs had accumulated for 5 hours. I went straight to:

docker logs --since "2025-12-18T03:30:00" --until "2025-12-18T04:30:00" order-service

Looking only at the one-hour window around the failure, I immediately found the out-of-memory stack trace. Reading through everything would have taken forever.

5. Show Timestamps

Sometimes you see an ERROR in the logs but don’t know when it occurred, making it impossible to correlate with monitoring data. That’s when you need timestamps:

docker logs -t my-container

The -t parameter adds a timestamp to each log line, like this:

2025-12-18T10:23:45.123456789Z [INFO] Server started
2025-12-18T10:23:47.234567890Z [ERROR] Database connection failed

Now you know exactly when each log was generated. I usually combine it with other parameters:

# Show logs from the last 30 minutes with timestamps
docker logs -t --since 30m my-container

# Real-time monitoring with timestamps
docker logs -f -t my-container

Timestamps are super useful when analyzing performance issues. You can precisely see how long a request took from entry to completion, which step was slow—crystal clear.
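
For example, if your application logs a request ID, you can pull out just that request's lines along with their exact timestamps (req-7f3a9 here is a made-up ID for illustration):

# Trace a single request through the timestamped logs
docker logs -t my-container | grep "req-7f3a9"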

6. Filter with grep

Logs full of INFO but you only want ERROR? Filter with grep:

docker logs my-container | grep "ERROR"

This only shows lines containing “ERROR”. But there’s a gotcha I must warn you about:

Sometimes grep filtering doesn’t work!

I was confused the first time I hit this—the container logs clearly had ERROR, but grep couldn’t find it. Later I learned Docker containers might output logs to stderr (standard error stream) instead of stdout (standard output stream), and the pipe operator | only processes stdout by default.

The solution is redirecting stderr to stdout:

docker logs my-container 2>&1 | grep "ERROR"

2>&1 means redirect stderr (file descriptor 2) to stdout (file descriptor 1), so grep can capture all logs.
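
If you're not sure which stream your container actually writes to, one quick check is to silence each stream in turn:

# Show only stdout (stderr discarded)
docker logs my-container 2>/dev/null

# Show only stderr (stdout discarded)
docker logs my-container 1>/dev/null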

More useful combination techniques:

# View 10 lines before and after ERROR
docker logs my-container 2>&1 | grep -C 10 "ERROR"

# Case-insensitive search for error
docker logs my-container 2>&1 | grep -i "error"

# Find the last 20 errors
docker logs -t my-container 2>&1 | grep -i "error" | tail -20

The -C 10 parameter is particularly useful—it shows 10 lines before and after the match. Sometimes just seeing the ERROR line isn’t enough; you need to know the context around the error to understand the complete flow.

Advanced Techniques

7. Find the Physical Log File Location

You might not know this, but container logs are actually stored as physical files on the host machine. Want to know where? Use this command:

docker inspect --format='{{.LogPath}}' my-container

The output is usually:

/var/lib/docker/containers/abc123.../abc123...-json.log

What’s this useful for?

  1. Direct file viewing
    Sometimes the docker logs command puts pressure on the Docker daemon (especially with large logs), and reading the file directly is actually faster:

    sudo tail -f /var/lib/docker/containers/abc123.../abc123...-json.log
  2. Backup logs
    Need to archive logs? Just copy this file:

    sudo cp /var/lib/docker/containers/abc123.../abc123...-json.log ./backup/
  3. Use more powerful analysis tools
    For example, opening the log file in vim lets you use the editor's search features, which is more flexible than a one-shot grep.

Note that this log file is in JSON format with each log line wrapped in a JSON object—looks a bit messy. If you just want plain text logs, using the docker logs command is more convenient.
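
If you do want to read the JSON file directly, one option is to extract just the message field with jq (a sketch assuming jq is installed and you're on the Docker host; the command substitution saves you from copying the long path):

# Print only the raw log text from the json-file driver's output
sudo jq -r '.log' "$(docker inspect --format='{{.LogPath}}' my-container)"

Each JSON line also carries a stream field (stdout or stderr) and a time field, which can be handy for filtering.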

Export logs to file:

For plain text log backups, use redirection:

docker logs my-container > container.log

The exported file is plain text, convenient for later analysis or sharing with others.
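
Remember the stdout/stderr gotcha from tip 6: if your application writes to stderr, add 2>&1 so the exported file captures everything:

# Capture both stdout and stderr in the exported file
docker logs my-container > container.log 2>&1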

Production Environment Best Practices

8. Configure Log Rotation (Keep Logs from Filling the Disk)

Honestly, this is the most overlooked yet most important configuration in production environments.

Real disaster case:

I witnessed a painful lesson once. A container ran for several months, its log files grew unchecked, and they finally filled up the host's disk. Every container on the server crashed, the database couldn't write, and the website went down. It took two hours of investigation to discover that log files had consumed all the space.

How to avoid this disaster?

Configure log rotation. Add to /etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Parameter meanings:

  • max-size: Single log file maximum 10MB
  • max-file: Keep at most 3 log files

With this configuration, each container occupies at most 10MB × 3 = 30MB of log space. When a log file reaches 10MB, Docker automatically creates a new file, and when exceeding 3 files, the oldest gets deleted.

Making configuration take effect:

After modifying daemon.json, restart Docker service:

sudo systemctl restart docker

Note: Restarting Docker restarts all containers—choose an appropriate time window in production.
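
One caveat worth knowing: the daemon.json defaults only apply to containers created after the change; existing containers keep the log settings they were created with. You can check what the daemon and a given container are actually using (my-container is a placeholder name):

# Daemon-wide default log driver
docker info --format '{{.LoggingDriver}}'

# Log driver and options recorded for a specific container
docker inspect --format '{{json .HostConfig.LogConfig}}' my-container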

Configure for individual containers:

If you only want log rotation for a specific container, specify it when starting:

docker run -d \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  my-image

This won’t affect other containers.

9. Production Environment Log Management Strategy

After using docker logs for a while, you'll notice a problem: when log volume is large, the docker logs command becomes slow and might even freeze your terminal. This is because docker logs puts considerable pressure on the Docker daemon.

Production environment trade-offs:

  • Small projects (1-10 containers):

    • docker logs + log rotation configuration = sufficient
    • Simple and direct, no additional infrastructure needed
  • Medium to large projects (10+ containers, or microservices architecture):

    • Must implement centralized logging
    • Common solution: ELK (Elasticsearch + Logstash + Kibana)
    • Other options: Loki, Fluentd, Splunk

Why can’t large projects rely solely on docker logs?

  1. Performance issues: querying logs from many containers simultaneously can overwhelm the Docker daemon
  2. Aggregation needs: In microservices architecture, one request might span 10 services with logs scattered across 10 containers—how do you piece together the analysis?
  3. Historical queries: docker logs only covers the current container instance; once a container is removed or recreated (which happens on every redeploy), its logs are gone
  4. Team collaboration: Having ops, dev, and QA all SSH into servers to run commands? Unrealistic

My recommendations:

  • Just starting with Docker? Focus on mastering docker logs commands—that’s enough
  • Personal project or small team? Configure log rotation, docker logs is fine
  • Production environment with 10+ containers? Seriously consider centralized logging
  • Microservices architecture? Centralized logging isn’t a nice-to-have, it’s essential

Conclusion

After all this, let’s revisit that 3 AM scenario from the beginning. If I encountered it now, here’s what I’d do:

  1. Start with docker logs --tail 100 payment-service to quickly check recent logs
  2. If no clues found, use time range filtering: docker logs --since "2025-12-18T03:00:00" --until "2025-12-18T03:30:00" payment-service
  3. Combine timestamps with grep: docker logs -t payment-service 2>&1 | grep -i "error" | tail -20

Three steps, two minutes max, problem located.

This is the value of mastering docker logs: not memorizing all parameters, but knowing which combinations to use in what scenarios.

One final emphasis: if your containers run in production, configure log rotation right now. Don’t wait until your disk is full to regret it. It’s just a few lines in the config file, but it could save your life.

Quick Reference Card

# Basic viewing
docker logs <container>                    # View all logs
docker logs --tail 50 <container>         # View last 50 lines

# Real-time monitoring
docker logs -f <container>                # Real-time tracking
docker logs -f --tail 100 <container>     # Show last 100 lines and track

# Time filtering
docker logs --since 1h <container>        # Last hour
docker logs --since "2025-12-18T03:00:00" <container>  # From specific time

# Precise search
docker logs -t <container>                           # Show timestamps
docker logs <container> 2>&1 | grep -i "error"      # Search errors
docker logs <container> 2>&1 | grep -C 10 "error"   # Search with context

# Advanced techniques
docker inspect --format='{{.LogPath}}' <container>   # View log file location
docker logs <container> > log.txt                   # Export logs

Save this reference card—next time a container has issues, just follow along.

