Docker Container Debugging Guide: The Right Way to Use exec Command

It’s 3 PM on a Friday. My test environment API suddenly returns 502. I quickly open the terminal and run docker ps - the container shows “Up 2 hours”, clearly running fine. Then I check the logs - just a single, unhelpful line: “Connection refused”.
This is when things get frustrating - you know the container is running, but you have no idea what’s happening inside. Are the config files correct? Is the process actually up? Is the port listening? These questions can only be answered by “entering” the container, like you need to peek inside a black box to see what’s really in there.
To be honest, when I first started with Docker, I had no clue how to “enter” a container. After searching online, some said use docker exec, others said docker attach, and some just suggested restarting the container. After stepping on quite a few landmines, I finally figured out there’s actually a proper way to debug containers.
This article will teach you how to properly use docker exec to enter containers for troubleshooting, including the essential differences between exec and attach, how to handle missing tools in containers, and some practical debugging tips. Master these, and next time your container acts up, you won’t just resort to the restart-and-pray approach.
docker exec Basics - The Right Way to Enter Containers
The Simplest Way In
To enter a running container, the most common command is:
```
docker exec -it my-nginx bash
```

There are three key parts here:
- -it: shorthand for two flags. -i keeps standard input open, -t allocates a pseudo-terminal. Simply put, it lets you “talk” to the container.
- my-nginx: the container name; you can also use the container ID (like docker exec -it abc123def456 bash).
- bash: the command you want to run in the container - here, starting a bash terminal.
After running this command, your command prompt will change to something like root@abc123def456:/#, indicating you’ve successfully “entered” the container. Now you can execute commands just like on a regular Linux server.
What If bash Doesn’t Exist?
Sometimes you’ll encounter this error:
```
$ docker exec -it my-alpine bash
OCI runtime exec failed: exec failed: container_linux.go:380:
starting container process caused: exec: "bash": executable file not found
```

Don’t panic. This usually happens because the container uses Alpine Linux, an ultra-minimal system that only includes sh by default, not bash. Just switch commands:

```
docker exec -it my-alpine sh
```

In my experience, try bash first, then fall back to sh if it doesn’t work. This covers 90% of containers.
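If you want a single command that handles both cases, here’s a small sketch (the container name my-alpine is just an example): it asks sh, which exists almost everywhere, to hand over to bash only when bash is installed.

```
# Start bash if the container has it, otherwise stay in sh
docker exec -it my-alpine sh -c 'command -v bash >/dev/null 2>&1 && exec bash || exec sh'
```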
Using Container IDs for Flexibility
Often you might not want to look up container names - just use the first few characters of the ID:
```
# First check container ID
$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED
abc123def456   nginx   "/docker-entrypoint.…"   2 hours ago

# Enter using ID prefix (3-4 characters is enough)
$ docker exec -it abc1 bash
```

Docker automatically matches ID prefixes as long as they’re unique. This trick is super handy when you have many containers.
The Right Way to Exit
To exit after entering a container, just type exit or press Ctrl+D:
```
root@abc123def456:/# exit
exit
$
```

The key point is: when you exit a container entered with exec, the container keeps running. This is important, and we’ll explain why next.
exec vs attach - Stop Getting Them Mixed Up
The Essential Difference
Many tutorials mention both docker exec and docker attach, but few explain their fundamental difference clearly. I once mistakenly used attach and mysteriously stopped a container.
Simply put:
- docker exec: Starts a new process in the container, like opening a new window
- docker attach: Connects to the container’s main process (PID 1), like screen-sharing to an existing display
Sounds abstract? Let me give you an example.
Say you use docker attach my-nginx to connect to an Nginx container, then press Ctrl+C or type exit to quit. What happens? The container’s main process (Nginx) receives the exit signal, and the entire container stops. Do this in production and you’ll probably get chewed out by your boss.
But if you use docker exec -it my-nginx bash, you’re starting an independent bash process. Exiting this bash doesn’t affect the Nginx main process - the container keeps running normally.
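A quick way to convince yourself of this (my-nginx is just an example container name):

```
# Run a one-off command via exec, then confirm the container is still up
docker exec my-nginx echo "hello from a separate process"
docker ps --filter "name=my-nginx"   # still shows "Up ..."
```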
When to Use exec vs attach?
My advice is simple: use exec 99% of the time.
attach is only useful in rare cases when you need to directly interact with the container’s main process, such as:
- The container runs an interactive program (like a Python REPL)
- You need to see the main process’s live output
- Multiple terminals need to “sync” and see the same content (like screen sharing)
But honestly, these scenarios are rare. Most of the time you just want to check files, modify configs, or run commands - exec is safe and convenient.
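If you do want to see attach in action safely, here’s a minimal sketch of that interactive-program case - the container name py-repl and the python:3 image are just examples:

```
# Start a container whose main process (PID 1) is an interactive Python REPL
docker run -dit --name py-repl python:3 python
# Attach to that REPL; detach with Ctrl+P then Ctrl+Q so the container keeps running
docker attach py-repl
```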
A Comparison Table to Remember Forever
| Feature | docker exec | docker attach |
|---|---|---|
| Starts new process | ✓ Yes | ✗ No |
| Container stops on exit | ✗ No | ✓ Yes (Dangerous!) |
| Can run any command | ✓ Yes | ✗ No |
| Independent terminals | ✓ Yes | ✗ No (shared tty) |
| Debugging recommendation | ⭐⭐⭐⭐⭐ | ⭐ |
Now you should understand why I keep emphasizing exec. attach was designed for viewing container output, not for daily debugging.
Handling Missing Tools in Containers
Why Are All Commands Missing?
You finally get into the container, ready to edit a config file with vim, and then:
```
root@abc123:/# vim /etc/nginx/nginx.conf
bash: vim: command not found
```

Want to test an API with curl:

```
root@abc123:/# curl localhost:8080
bash: curl: command not found
```

Even ping is missing. The first time this happened, I was devastated - how can I debug with nothing available?
This is deliberate Docker image design philosophy: include only what’s necessary and nothing more. A full Ubuntu image might be hundreds of MB, but Alpine is only about 5MB. How? By removing all the “unnecessary” tools, including vim, curl, and ping - things you assume should exist by default.
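You can see the size gap yourself (the tags below and the exact sizes will vary; treat this as a rough comparison):

```
docker pull alpine:3.20
docker pull ubuntu:24.04
# Compare the Size column for the two base images
docker images --format '{{.Repository}}:{{.Tag}}  {{.Size}}' | grep -E 'alpine|ubuntu'
```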
Temporarily Installing Tools for Emergency
When this happens, you can temporarily install needed tools. Use different package managers based on the container’s system:
Debian/Ubuntu systems (apt/apt-get):

```
# First update package lists
apt-get update
# Install common tools
apt-get install -y vim curl wget net-tools
# If you need network diagnostic tools
apt-get install -y iputils-ping dnsutils
```

CentOS/RedHat systems (yum):

```
yum install -y vim curl wget
```

Alpine systems (apk):

```
apk update
apk add vim curl bash
```

How to Tell What System the Container Uses?
If you’re unsure what system the container runs, here are two simple methods:
```
# Method 1: Check system release file
cat /etc/os-release

# Method 2: Test if package manager commands exist
which apt-get   # Exists means Debian/Ubuntu
which yum       # Exists means CentOS/RedHat
which apk       # Exists means Alpine
```

I usually just try apt-get update directly. If it errors, I switch to another. You won’t break the container anyway.
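If you’d rather script the guesswork, here’s a rough sketch that installs curl with whichever package manager happens to exist - the container name my-app and the tool are placeholders, and the -u root is there because installing packages needs root:

```
# Install curl using whatever package manager the container has
docker exec -u root my-app sh -c '
  if command -v apt-get >/dev/null 2>&1; then apt-get update && apt-get install -y curl
  elif command -v apk >/dev/null 2>&1; then apk add --no-cache curl
  elif command -v yum >/dev/null 2>&1; then yum install -y curl
  else echo "no known package manager found"
  fi'
```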
Is This Safe?
A few things to note about temporarily installing tools:
Development environment: Install freely, no worries. Test containers reset after restart anyway, and your installed tools disappear.
Production environment: Only do this when urgently troubleshooting. After fixing the issue, add needed tools to the Dockerfile and rebuild the image.
Why? Two reasons:
- Security: Temporarily installed packages might have vulnerabilities
- Reproducibility: Your installed tools vanish after container restart - you’ll need to install again next time
Long-term Solution: Pre-install in Dockerfile
If you frequently need certain debugging tools, install them when building the image:
```
FROM nginx:latest

# Install common debugging tools
RUN apt-get update && apt-get install -y \
    vim \
    curl \
    wget \
    net-tools \
    iputils-ping \
    && rm -rf /var/lib/apt/lists/*   # Clean cache to reduce image size

# Other configurations...
```

Note the final rm -rf /var/lib/apt/lists/* - this is good practice that can significantly reduce your image size.
A Practical Tip
If you just want to quickly view a config file, you don’t actually need vim. Use cat or less - these commands exist in virtually all containers:
```
# View entire file
cat /etc/nginx/nginx.conf

# Paginated view (for long files)
less /etc/nginx/nginx.conf   # Press q to quit

# View first few lines only
head -n 20 /etc/nginx/nginx.conf
```

If you need to edit a file but don’t have vim, you can use sed for direct replacement. While not as intuitive as vim, it works in emergencies:

```
# Change listen 80 to listen 8080
sed -i 's/listen 80/listen 8080/g' /etc/nginx/nginx.conf
```

Entering Containers with Specific User Identity
Why Specify User?
Sometimes after entering a container you’ll encounter this:
```
appuser@abc123:/app$ cat /var/log/app.log
cat: /var/log/app.log: Permission denied
```

Or when trying to change file permissions:

```
appuser@abc123:/app$ chmod 644 config.yaml
chmod: changing permissions of 'config.yaml': Operation not permitted
```

Why don’t you have permissions, even though you’re “inside” the container? This usually happens because the container is configured to run as a non-root user (a good practice, recommended for production), and docker exec inherits the container’s user settings by default - so you land in that restricted account, not root.
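You can confirm which user the image sets (and therefore which user exec will default to) with docker inspect - my-app is a placeholder name:

```
# Empty output means no USER was set and exec defaults to root
docker inspect --format '{{.Config.User}}' my-app
```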
Entering as root
In this case, you need to explicitly specify entering as root:
```
docker exec -it --user root my-app bash
```

Or shortened:

```
docker exec -it -u root my-app bash
```

Now you have true root permissions and can view any file or modify any config.
Entering with Specific UID
Sometimes you might need to enter as a specific user ID, like when the application user in the container is UID 1000:
```
# Enter as UID 1000
docker exec -it --user 1000 my-app bash

# Can also specify username (if the user exists in the container)
docker exec -it --user appuser my-app bash

# Specify user and group (format: user:group)
docker exec -it --user 1000:1000 my-app bash
```

This is particularly useful when debugging permission issues. For example, if you suspect the app can’t write to a directory, try entering as the app user:

```
# Enter as the application user
docker exec -it --user appuser my-app bash

# Try creating a file in the data directory
appuser@abc123:/app$ touch /data/test.txt
touch: cannot touch '/data/test.txt': Permission denied
# Got it - confirmed permission issue
```

When Do You Need root Permissions?
Based on my experience, these scenarios typically require root:
- Modifying system configs: Various config files under /etc/
- Installing packages: apt-get, yum, etc. all need root
- Viewing system logs: Many log files under /var/log/ aren’t readable by regular users
- Adjusting file permissions: Operations like chmod and chown
- Debugging network issues: Network tools like tcpdump and netstat usually need root
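Before reaching for root, it’s worth checking which user a plain exec drops you into. A quick check (my-app is a placeholder):

```
# Show the UID/GID of the default exec user
docker exec my-app id
# Compare with an explicit root session
docker exec -u root my-app id
```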
But remember: avoid using root unless necessary. Especially in production - if you use root to fix something, document what you changed and sync modifications to the Dockerfile ASAP.
A Security Warning
While you can enter containers as root, keep these points in mind:
Test environment: Use freely, not a big deal. Container resets on restart anyway.
Production environment:
- Looking around → OK
- Temporarily modifying config for emergency fix → Barely OK, but document what you changed
- Directly compiling code or installing software in container → Not OK! These changes should be in the Dockerfile
Why? Because containers follow the “immutable infrastructure” philosophy. Any changes you make in a running container disappear on restart. Real modifications should be reflected in the Dockerfile to ensure reproducibility.
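One command that helps with the “document what you changed” part: docker diff lists every file in the container that differs from its image, which makes it easier to port emergency fixes back into the Dockerfile (my-app is a placeholder):

```
# A = added, C = changed, D = deleted, relative to the original image
docker diff my-app
```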
There’s also a security risk: by default, root inside a container is the same UID 0 as root on the host. Namespaces and dropped capabilities provide isolation, but a misconfiguration (a privileged container, a sensitive host directory mounted in) could let the container’s root affect the host. This is why production environments should run containers with non-root users.
Practical Tips and Common Scenarios
Running Single Commands (Without Interactive Shell)
Often you don’t need to stay in the container - you just want to quickly run a command and see the result. In this case, skip the -it parameters:
```
# View directory contents
docker exec my-nginx ls -la /etc/nginx/

# View config file
docker exec my-nginx cat /etc/nginx/nginx.conf

# Check processes
docker exec my-nginx ps aux

# View port listening status
docker exec my-nginx netstat -tlnp

# Test network connectivity
docker exec my-nginx curl -I localhost:80
```

This approach is great for scripts or quick value checks. For example, I often use this to check if an application responds normally:
# Health check script
```
# Health check script
if docker exec my-app curl -f http://localhost:8080/health; then
  echo "App is healthy"
else
  echo "App is down!"
fi
```

Standard Container Debugging Workflow
When encountering container issues, I typically follow this troubleshooting flow:
1. Check container status first

```
docker ps -a   # See if container is running
```

2. Check logs for clues

```
docker logs my-app --tail 100   # View last 100 log lines
docker logs my-app -f           # Follow logs in real-time (like tail -f)
```

3. Enter container to check processes

```
docker exec my-app ps aux
```

See if expected processes are running. For example, an Nginx container should show nginx master and worker processes.

4. Check port listening

```
docker exec my-app netstat -tlnp
# Or (if netstat isn't available)
docker exec my-app ss -tlnp
```

Confirm the application is listening on the correct port.

5. Test service availability

```
# Test from inside the container
docker exec my-app curl localhost:8080
# If curl isn't available, use telnet to check if the port is open
docker exec my-app telnet localhost 8080
```

6. Review config files

```
docker exec my-app cat /etc/nginx/nginx.conf
docker exec my-app cat /app/config.yaml
```

See if configurations match expectations.

7. Check disk space

```
docker exec my-app df -h
```

Sometimes the container’s disk is full, preventing the app from writing files.
Useful Debugging Command Combinations
Check environment variables:

```
docker exec my-app env | grep DATABASE
```

Verify database connection configs.

Find specific files:

```
docker exec my-app find /app -name "*.log"
```

Check file permissions:

```
docker exec my-app ls -la /app/
```

Useful when troubleshooting permission issues.

Monitor container resources in real-time:

```
docker stats my-app
```

You don’t need exec for this - run directly on the host. Shows real-time CPU, memory, network, and IO usage.
Batch Operations on Multiple Containers
If you need to check multiple containers, combine with shell scripts:
```
# Check memory usage for all running containers
for container in $(docker ps -q); do
  echo "Container: $container"
  docker exec $container free -h
  echo "---"
done
```

Debugging Network Issues
Check connectivity between containers:
```
# Ping container B from container A
# (name resolution works when both containers are on the same user-defined network)
docker exec containerA ping containerB

# Test if container can access external network
docker exec my-app ping -c 3 google.com

# Check DNS resolution
docker exec my-app nslookup google.com
```

View container IP:

```
docker inspect my-app | grep IPAddress
# Or check from inside the container
docker exec my-app ip addr show
```

What If Container Keeps Restarting?
Sometimes a container exits right after starting - no time to exec in. Here’s what you can do:
Method 1: Override startup command
```
# Make container stay running without executing original CMD
docker run -it --name debug-app my-app-image sh
```

This way the container stays at a shell prompt without executing the original startup script, giving you time to investigate.
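One caveat worth knowing: passing sh this way only replaces the image’s CMD. If the image defines an ENTRYPOINT, that still runs first, so you may need to override it explicitly (the image and container names are placeholders):

```
# --entrypoint replaces the ENTRYPOINT instead of just the CMD
docker run -it --entrypoint sh --name debug-app my-app-image
```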
Method 2: Check exited container
```
# Start an already exited container
docker start my-app
# Immediately check logs
docker logs my-app
# If container hasn't exited yet, quickly exec in
docker exec -it my-app bash
```

I’ve accumulated these techniques bit by bit through practical experience. Docker container debugging might seem confusing at first, but once you’re familiar with these commands, troubleshooting becomes much faster.
Conclusion
After all that, Docker container debugging really boils down to these core points:
Using docker exec to enter containers is the safest approach - exiting doesn’t stop the container. Remember this universal command: docker exec -it container-name bash, and if bash doesn’t work, try sh.
Don’t confuse exec and attach - attach connects to the main process, and exit stops the container. Unless you know exactly what you’re doing, stick with exec.
Don’t panic when tools are missing - temporarily install them (apt-get/yum/apk). But remember, this is just for emergencies. The long-term solution is adding tools to the Dockerfile.
When you need root permissions, add the --user root parameter. But be cautious in production - document what you changed and sync modifications to the image ASAP.
Follow a debugging process: check logs first, then processes, then configs, finally network. Following this order solves most issues.
Next time a container acts up, don’t rush to the restart button. Take a look inside - many problems are simple things like wrong configs, incorrect permissions, or port conflicts. Master the exec command, and container debugging isn’t that hard.
Here’s a final debugging checklist to follow when troubleshooting:
Container Debugging Checklist:
- Is the container running? (docker ps)
- Any errors in logs? (docker logs)
- Are processes started? (docker exec ps aux)
- Is port listening normal? (docker exec netstat -tlnp)
- Are config files correct? (docker exec cat config)
- Is there enough disk space? (docker exec df -h)
- Is network working? (docker exec curl/ping)
Check through all these, and you’ll usually pinpoint the issue.