Create Docker Image
in Seconds
Free tool to create Docker image configurations instantly from GitHub repositories or uploaded files. Build Docker image from Dockerfile with automatic framework detection.
Try These Example Repositories
Click any example to instantly generate its Dockerfile
How to Create Docker Image
Build Docker image from Dockerfile in three simple steps
Provide Your Code
Enter a GitHub repository URL or upload your project files. We support all major programming languages and frameworks.
Auto-Detection
Our intelligent detection system analyzes your project and identifies the framework, package manager, and build configuration.
Get Your Dockerfile
Receive a production-ready Dockerfile with multi-stage builds, security best practices, and optimization.
Create Docker Image for Any Framework
Build Docker image from Dockerfile for any programming language or framework
And many more... Create Docker image for any language or framework instantly
What is a Dockerfile?
A Dockerfile is a text document that contains all the commands and instructions needed to build a Docker image. Think of it as a recipe or blueprint that Docker uses to automatically assemble a container image layer by layer. Each instruction in a Dockerfile creates a new layer in the image, making it efficient and cacheable.
Docker images are lightweight, standalone, executable packages that include everything needed to run an application: code, runtime, system tools, libraries, and settings. When you run a Docker image, it becomes a container — an isolated environment that runs consistently across any infrastructure, whether it is your laptop, a cloud server, or a Kubernetes cluster.
The Dockerfile format was introduced by Docker in 2013 and has since become the industry standard for containerizing applications. It uses a simple, declarative syntax that developers can learn quickly. Common instructions include FROM to specify the base image, COPY to add files, RUN to execute commands, and CMD to define the default command.
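Putting those four instructions together, a minimal Dockerfile might look like the sketch below (assuming a simple Node.js app whose entry point is `server.js`; adjust names to your project):

```dockerfile
# Start from an official, version-pinned base image
FROM node:20-alpine

# Set the working directory and copy the application in
WORKDIR /app
COPY . .

# Install production dependencies at build time
RUN npm ci --omit=dev

# Default command when the container starts
CMD ["node", "server.js"]
```

Each of these instructions produces a layer, which is what makes rebuilds cacheable.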
Using a Dockerfile brings numerous benefits: reproducible builds across all environments, version control for your infrastructure, easier collaboration between team members, and simplified CI/CD pipelines. Whether you are developing a microservice, deploying a web application, or packaging a CLI tool, a well-crafted Dockerfile is essential for modern software development.
How to Write a Dockerfile
A step-by-step guide to creating production-ready Dockerfiles
1. Choose a Base Image
Every Dockerfile starts with a FROM instruction that specifies the base image. Choose an image that matches your application's runtime requirements. For production, prefer official images and specific version tags over latest to ensure reproducibility.
FROM node:20-alpine
2. Set the Working Directory
Use WORKDIR to set the working directory for subsequent instructions. This creates the directory if it does not exist and sets it as the current directory for RUN, COPY, CMD, and other instructions.
WORKDIR /app
3. Copy Dependencies First
Copy dependency files separately before copying application code. This leverages Docker's layer caching — if dependencies have not changed, Docker reuses the cached layer, significantly speeding up rebuilds.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
4. Copy Application Code
After installing dependencies, copy the rest of your application code. Use a .dockerignore file to exclude unnecessary files like node_modules, .git, and local configuration files.
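A typical .dockerignore for a Node.js project might look like the following (the exact entries depend on your stack):

```
node_modules
.git
.env
*.log
Dockerfile
.dockerignore
```

Excluding node_modules is especially important: it keeps the build context small and ensures dependencies are installed fresh inside the image.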
COPY . .
5. Expose Ports and Set the Command
Use EXPOSE to document which ports your application listens on. Finally, use CMD to specify the default command that runs when the container starts.
EXPOSE 3000
CMD ["node", "server.js"]
Dockerfile Examples
Production-ready Dockerfile templates for popular frameworks
Dockerfile for Node.js
A production-ready Dockerfile for Node.js applications using multi-stage builds to minimize image size.
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
# Create non-root user for security
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nodeuser
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
USER nodeuser
EXPOSE 3000
CMD ["node", "dist/index.js"]
Dockerfile for Python
An optimized Dockerfile for Python applications with pip dependencies, a slim base image, and a non-root user.
FROM python:3.12-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Create non-root user
RUN useradd --create-home appuser
USER appuser
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
Dockerfile for Next.js
The official recommended Dockerfile for Next.js with standalone output mode for minimal image size.
FROM node:20-alpine AS base
# Install dependencies only when needed
FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# Build the application
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# Production image
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
CMD ["node", "server.js"]
Dockerfile for Go
A minimal Dockerfile for Go applications using scratch or distroless base images for maximum security and smallest size.
# Build stage
FROM golang:1.22-alpine AS builder
WORKDIR /app
# Download dependencies
COPY go.mod go.sum ./
RUN go mod download
# Build the application
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-w -s" -o /app/main .
# Final stage - using scratch for minimal image
FROM scratch
# Copy SSL certificates for HTTPS
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy the binary
COPY --from=builder /app/main /main
EXPOSE 8080
ENTRYPOINT ["/main"]
Common Dockerfile Mistakes
Avoid these pitfalls when writing your Dockerfiles
Using the latest tag for base images
The latest tag can change at any time, leading to unexpected build failures or behavior changes. Always pin your base images to specific versions (e.g., node:20.11-alpine instead of node:latest) for reproducible builds.
Running containers as root
Running your application as root inside a container is a security risk. If an attacker exploits your application, they gain root access to the container. Always create a non-root user with the USER instruction and run your application with minimal privileges.
Not leveraging build cache effectively
Placing frequently changing files (like application code) before dependency installation invalidates the cache for every build. Copy dependency files first, install dependencies, then copy the rest of your code to maximize cache utilization.
Installing unnecessary packages
Including development tools, documentation, and other unnecessary packages bloats your image size and increases the attack surface. Use --no-install-recommends with apt-get, remove package manager caches, and consider multi-stage builds to keep only production artifacts.
Hardcoding secrets in Dockerfiles
Never include API keys, passwords, or sensitive data directly in your Dockerfile. These values are visible in the image history. Use environment variables, Docker secrets, or external secret management tools instead.
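With BuildKit, a secret can be mounted for the duration of a single RUN instruction without ever being written to an image layer. A minimal sketch (the secret id `npm_token` and the token usage here are illustrative, not a required convention):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# The secret file exists only while this RUN instruction executes;
# it is never committed to a layer or visible in the image history
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN=$(cat /run/secrets/npm_token) npm ci
```

The secret is supplied at build time, for example with `docker build --secret id=npm_token,src=./token.txt .`.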
Frequently Asked Questions
Common questions about Dockerfiles and Docker images
What is the difference between CMD and ENTRYPOINT?
Both CMD and ENTRYPOINT define what command runs when a container starts, but they serve different purposes. ENTRYPOINT defines the main executable and is harder to override (requires --entrypoint flag). CMD provides default arguments that can be easily overridden by passing arguments to docker run. Best practice: use ENTRYPOINT for the main command and CMD for default arguments. For example: ENTRYPOINT ["python"] and CMD ["app.py"] allows users to easily run python other.py instead.
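The ENTRYPOINT-plus-CMD pattern from the answer above, as a minimal sketch (file names are illustrative):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY app.py other.py ./
# Fixed executable; CMD supplies the default, overridable argument
ENTRYPOINT ["python"]
CMD ["app.py"]
```

With this layout, `docker run myimage` executes `python app.py`, while `docker run myimage other.py` executes `python other.py` without needing the `--entrypoint` flag.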
How do I reduce Docker image size?
Several strategies help minimize image size: (1) Use multi-stage builds to separate build dependencies from runtime. (2) Choose minimal base images like Alpine or distroless. (3) Combine RUN commands to reduce layers. (4) Remove package manager caches (apt-get clean, rm -rf /var/lib/apt/lists/*). (5) Use .dockerignore to exclude unnecessary files. (6) For compiled languages like Go, use scratch as the final base image. A typical Node.js image can go from 1GB+ to under 100MB with these techniques.
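Points (3) and (4) work together: cleanup only reduces image size if it happens in the same layer as the install. A hedged sketch for a Debian-based image (the `curl` package is just an example):

```dockerfile
# One layer instead of three; the apt cache is deleted before the
# layer is committed, so it never contributes to the image size
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
```

If the `rm` ran in a separate RUN instruction, the cache would already be baked into the previous layer and the image would not shrink.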
Should I use Alpine or Debian base image?
Alpine Linux produces much smaller images (around 5MB, versus roughly 75MB for Debian slim) but uses musl libc instead of glibc, which can cause compatibility issues with some software. Use Alpine when: image size is critical, your dependencies are compatible, or you are using interpreted languages like Node.js or Python. Use Debian when: you need glibc compatibility, require specific system packages, or encounter strange runtime errors with Alpine. For production, Debian slim is often a good balance.
What is a multi-stage build?
Multi-stage builds use multiple FROM statements in a single Dockerfile. Each FROM starts a new build stage that can copy artifacts from previous stages. This allows you to use a full SDK image for building (with compilers, dev tools) and a minimal runtime image for production. The final image only contains what you explicitly copy into it, dramatically reducing size. Example: build a Go app in golang:1.22 stage, copy only the compiled binary to a scratch or alpine final stage.
How do I handle environment variables in Docker?
Docker supports environment variables through multiple mechanisms: (1) ENV instruction in Dockerfile sets permanent defaults baked into the image. (2) ARG instruction defines build-time variables that are not present in the final image. (3) docker run -e or --env-file passes runtime variables. (4) Docker Compose environment or env_file in docker-compose.yml. Best practice: use ENV for non-sensitive defaults, ARG for build-time config like version numbers, and runtime injection for secrets and environment-specific settings.
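The ARG-versus-ENV distinction in a minimal sketch (the variable names are illustrative):

```dockerfile
FROM node:20-alpine
# Build-time only: available during the build, absent from the
# running container; override with --build-arg APP_VERSION=2.0.0
ARG APP_VERSION=1.0.0
# Baked into the image as a default; can be overridden at runtime
ENV LOG_LEVEL=info
LABEL version=$APP_VERSION
```

At runtime the ENV default can be replaced without rebuilding, e.g. `docker run -e LOG_LEVEL=debug myimage`.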
How do I debug a failing Docker build?
When a Docker build fails: (1) Read the error message carefully — it usually indicates the exact line and command that failed. (2) Run the failing command manually in an interactive container: docker run -it <base-image> sh. (3) Use docker build --progress=plain for more verbose output. (4) Add temporary RUN commands to inspect the filesystem state. (5) For complex builds, build up to a specific stage with --target flag. (6) Check that all files referenced in COPY exist and are not in .dockerignore.
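The commands behind points (2), (3), and (5) look like this (image and stage names are placeholders for your own):

```shell
# (3) Verbose, unbuffered output for every build step
docker build --progress=plain -t myapp .

# (5) Build only up to a named stage so you can inspect it
docker build --target builder -t myapp-debug .

# (2) Open an interactive shell in the base image and run the
# failing command by hand
docker run -it --rm node:20-alpine sh
```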
Deploy Your Docker App with Server Compass
Now that you have your Dockerfile ready, deploy your containerized application in minutes with Server Compass. Self-hosted, secure, and developer-friendly server management. No DevOps expertise required — just push your code and watch it go live.
- Automatic Docker builds
- Zero-downtime deployments
- Built-in SSL certificates
- Self-hosted on your servers
Ready to Create Docker Image?
Build Docker image from Dockerfile and deploy instantly with Server Compass. No DevOps expertise required.