- Build and package a Spring Boot application as an executable JAR file
- Create a Dockerfile with appropriate base images and configuration for Spring Boot
- Build Docker images from Spring Boot applications using proper tagging conventions
- Push Spring Boot Docker images to container registries like Docker Hub
- Run containerized Spring Boot applications with proper port mapping and configuration
Spring Boot's opinionated approach to microservice development makes it a natural fit for containerized deployments. The framework was designed with cloud-native architectures in mind, where applications run as isolated, portable units that can be deployed anywhere. While you could theoretically package Spring Boot apps as WAR files and deploy them to traditional application servers, that defeats the purpose—Spring Boot includes an embedded Tomcat server precisely so you don't need external infrastructure.
Containerization takes this portability to its logical conclusion. Once your Spring Boot application lives inside a Docker image, you've created a self-contained unit that includes everything needed to run: the JVM, your compiled code, all dependencies, and the embedded web server. This image can run identically on your laptop, in your CI/CD pipeline, or across a Kubernetes cluster spanning multiple availability zones.
Docker dominates the containerization space, though alternatives like Podman and containerd exist. Understanding how to properly dockerize Spring Boot applications isn't just a nice-to-have skill—it's fundamental to modern Java development. Let's walk through the entire process, from building your application to pushing it to a container registry.
Spring Boot projects can be packaged as either JAR or WAR files. For containerization, JAR packaging is significantly simpler and aligns with Spring Boot's design philosophy. When Maven or Gradle builds your project as a JAR, it creates what's called a "fat JAR" or "uber JAR"—a single file containing your application code, all dependencies, and the embedded Tomcat server.
Check your pom.xml if you're using Maven. You should see something like this:
```xml
<packaging>jar</packaging>
```

For Gradle users, the packaging type is implicitly JAR unless you've explicitly configured WAR packaging. The Spring Boot Gradle plugin handles creating the executable JAR automatically.
This packaging choice matters because it determines your application's runtime requirements. A JAR-packaged Spring Boot app only needs a JVM to run—no application server, no complex deployment descriptors, no external dependencies beyond what's bundled inside. This simplicity translates directly to simpler Dockerfiles and more portable containers.
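If you want to see this bundling for yourself, you can list the JAR's contents (a quick sketch, assuming the helloworld.jar filename used later in this guide):

```shell
# List the fat JAR's contents: your classes plus every bundled dependency
jar tf target/helloworld.jar | head -20
# Spring Boot fat JARs place application classes under BOOT-INF/classes/
# and dependency JARs under BOOT-INF/lib/
```

Seeing dependency JARs nested inside your own JAR makes it concrete why only a JVM is needed at runtime.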
Before you can containerize anything, you need a compiled, tested, executable artifact. Maven and Gradle both provide build lifecycles that handle compilation, unit testing, and packaging.
With Maven, run this command from your project root:
```shell
mvn clean package
```

The clean phase removes any previous build artifacts, ensuring you're working with a fresh build. The package phase compiles your code, runs tests, and creates the JAR file in the target directory.
For Gradle users:
```shell
./gradlew clean build
```

After the build completes, navigate to your target directory (Maven) or build/libs (Gradle). You should find your JAR file—something like helloworld-1.0.jar or helloworld-0.0.1-SNAPSHOT.jar depending on your project configuration. Note the exact filename; you'll reference it in the Dockerfile.
If you want to verify the application works before containerization, you can run it directly:
```shell
java -jar target/helloworld.jar
```

Your application should start up, and you'll see Spring Boot's banner in the console followed by logging output. If you've defined any REST endpoints, you can hit them with curl or a browser to confirm everything works. This pre-containerization verification catches issues early—debugging a broken application is easier before you add Docker to the mix.
A Dockerfile is essentially a recipe that tells Docker how to construct your image. It specifies the base image (which provides the operating system and runtime), copies your application files into the image, and defines how to start your application.
Create a file named Dockerfile (no extension) in your project root—the same directory containing your pom.xml
or build.gradle. Here's a complete Dockerfile for our Spring Boot application:
```dockerfile
FROM openjdk:17-jdk-alpine
LABEL maintainer="your.email@example.com"
COPY target/helloworld.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Let's break down what each instruction does. The FROM directive specifies the base image. We're using openjdk:17-jdk-alpine, which provides OpenJDK 17 on Alpine Linux. Alpine is a minimal Linux distribution that produces smaller images—often under 150MB for a complete Java environment. If your application requires a different Java version, change the tag accordingly: openjdk:11-jdk-alpine for Java 11, openjdk:21-jdk-alpine for Java 21, and so on.
You're not locked into OpenJDK, either. Amazon Corretto, Eclipse Temurin, and other distributions offer their own Docker
images. For production deployments, many teams prefer Eclipse Temurin's eclipse-temurin:17-jdk-alpine images because
they receive long-term support and security updates.
The LABEL instruction adds metadata to your image. While optional, it's good practice—especially in larger
organizations where multiple teams build images. You can add labels for anything: maintainer contact info, build
timestamps, Git commit SHAs, or whatever helps with image management.
COPY target/helloworld.jar app.jar copies your compiled JAR from the build output directory into the Docker image's
filesystem, renaming it to app.jar. This standardized name makes the Dockerfile more maintainable—you can update your
application version in pom.xml without touching the Dockerfile.
Finally, ENTRYPOINT defines the command that runs when a container starts from this image. We're telling Docker to
execute java -jar /app.jar, which launches your Spring Boot application. The JSON array
syntax (["java", "-jar", "/app.jar"]) is preferred because it doesn't invoke a shell, making the process tree cleaner
and signal handling more predictable.
Some developers include JVM tuning flags in the ENTRYPOINT:
```dockerfile
ENTRYPOINT ["java", "-Xmx512m", "-Xms256m", "-jar", "/app.jar"]
```

This explicitly sets heap memory limits, which can be crucial in containerized environments where the JVM's automatic memory detection sometimes makes poor choices. However, modern JVMs (Java 10+) handle container resource limits better, so you might not need explicit flags.
With your Dockerfile ready, you can build the image. Docker reads the Dockerfile, executes each instruction in order, and produces a layered image. Run this command from the directory containing your Dockerfile:
```shell
docker build --tag=yourdockerhubusername/helloworld:latest .
```

The --tag flag (or -t for short) assigns a name and optional tag to your image. The format follows Docker's naming convention: registry/repository:tag. If you're pushing to Docker Hub, the registry is implied, so you only need username/repository:tag.
The tag portion (latest in this example) is version metadata. latest is conventional for the most recent build, but
you should also create specific version tags for production deployments: helloworld:1.0.0, helloworld:2.1.3, etc.
This versioning becomes critical when you need to roll back a deployment or debug which version is running in production.
The period at the end of the command is easy to miss but absolutely essential—it tells Docker where to find the
Dockerfile and any files referenced by COPY instructions. It's the build context, and Docker uploads this entire
directory tree to the Docker daemon before building.
Watch the build output carefully. Docker executes each Dockerfile instruction as a separate layer, and you'll see it pulling the base image (if you don't have it cached), copying files, and completing each step. A successful build ends with a message like:
```
Successfully built 8f9d3b2a1c4e
Successfully tagged yourdockerhubusername/helloworld:latest
```
That hexadecimal string is your image ID—Docker's internal reference. The tag you specified is the human-readable alias.
If the build fails, Docker provides error messages indicating which instruction failed and why. Common issues include
typos in the JAR filename, missing JAR files (forgot to run mvn package), or Docker daemon connectivity problems.
Container registries store and distribute Docker images. Docker Hub is the default registry and probably the most widely used, but alternatives include Amazon ECR, Google Artifact Registry, Azure Container Registry, and self-hosted options like Harbor or Artifactory.
Before pushing, authenticate with Docker Hub:
```shell
docker login
```

Enter your Docker Hub username and password when prompted. Docker caches these credentials locally, so you won't need to log in repeatedly.
Now push your image:
```shell
docker push yourdockerhubusername/helloworld:latest
```

Docker uploads each layer of your image. If some layers already exist in the registry (because you've pushed similar images before), Docker skips those—it only uploads changed layers. This layer caching dramatically speeds up subsequent pushes.
The upload progress shows each layer's status. When it completes, your image is live on Docker Hub. You can verify by
visiting https://hub.docker.com/r/yourdockerhubusername/helloworld in a browser. The repository page displays
available tags, creation dates, image sizes, and pull counts.
If you're working in an enterprise environment, you'll likely push to a private registry instead. The process is nearly identical; you just need to include the registry hostname in your tag:
```shell
docker build --tag=registry.company.com/helloworld:latest .
docker push registry.company.com/helloworld:latest
```

With your image pushed to a registry, anyone with access can pull and run it. On your local machine, you can run the container like this:
```shell
docker run -p 8080:8080 yourdockerhubusername/helloworld:latest
```

The -p flag maps ports between your host machine and the container. The format is host-port:container-port. Spring Boot applications default to port 8080 internally, so -p 8080:8080 makes the application accessible on localhost:8080 from your browser or curl.
You can map to different host ports if 8080 is already in use:
```shell
docker run -p 9000:8080 yourdockerhubusername/helloworld:latest
```

Now the application responds on port 9000 instead. The container still listens on 8080 internally—Docker handles the port translation.
For production deployments, you typically run containers in detached mode with the -d flag:
```shell
docker run -d -p 8080:8080 --name helloworld-app yourdockerhubusername/helloworld:latest
```

The --name flag assigns a human-readable name to the container, making it easier to manage with commands like docker stop helloworld-app or docker logs helloworld-app. Without --name, Docker generates a random name like determined_curie or eloquent_turing.
You can pass environment variables to configure your application:
```shell
docker run -p 8080:8080 -e SPRING_PROFILES_ACTIVE=production yourdockerhubusername/helloworld:latest
```

Spring Boot reads environment variables and uses them for configuration. This pattern lets you use the same image across environments—development, staging, production—with environment-specific configuration injected at runtime.
To see your running containers:
```shell
docker ps
```

To stop a container:
```shell
docker stop helloworld-app
```

To remove a stopped container:
```shell
docker rm helloworld-app
```

Dockerizing your Spring Boot application isn't just about following trends. Containers solve real problems that plague traditional deployment models.
Environment inconsistency disappears as a concern. "It works on my machine" becomes obsolete when your machine, your CI server, and your production cluster all run identical containers. The application's runtime environment travels with the code—no more debugging classpath issues or dependency version conflicts across environments.
Scaling becomes trivial. Need to handle more traffic? Spin up additional container instances behind a load balancer. Kubernetes and other orchestration platforms handle this automatically based on CPU usage, request rates, or custom metrics. Each instance is identical, starts quickly, and can be destroyed just as easily when load decreases.
Deployment rollbacks are nearly instant. Keep previous image versions in your registry, and rolling back is just pointing at a different tag. No need to redeploy old JARs or worry about incomplete deployments.
Resource utilization improves dramatically compared to virtual machines. Containers share the host OS kernel, so they're lightweight—startup times measure in seconds rather than minutes. You can run dozens of containers on hardware that might support only a handful of VMs.
Multi-tenancy and isolation become manageable. Different teams can run applications with conflicting dependencies on shared infrastructure without interference. Each container has its own filesystem, network stack, and process tree.
Dockerizing Spring Boot applications has become standard practice in modern Java development for good reasons. The framework's embedded server architecture and JAR-based packaging make containerization straightforward—you're essentially wrapping your already-portable application in an even more portable runtime.
The process follows a clear sequence: build your application as a JAR, create a Dockerfile that specifies the Java runtime and startup command, build the Docker image with proper tagging, and push it to a container registry. Each step builds on established conventions that the broader Docker ecosystem understands.
Once containerized, your Spring Boot application gains portability across any environment that runs Docker. Development, testing, staging, and production can all run identical containers, eliminating entire classes of environment-related bugs. Orchestration platforms like Kubernetes build on this foundation, enabling sophisticated deployment patterns—rolling updates, blue-green deployments, canary releases—that would be painful to implement with traditional deployment approaches.
The Dockerfile itself remains simple for most Spring Boot applications. A base image with the appropriate Java version, a COPY instruction for your JAR, and an ENTRYPOINT to launch the application cover the basics. You can enhance it with health checks, multi-stage builds to optimize image size, or custom JVM tuning, but the fundamental pattern stays consistent.
Container registries centralize image distribution and versioning. Whether you use Docker Hub, a cloud provider's registry, or a self-hosted solution, the workflow is the same: tag your images meaningfully, push them after builds, and pull them during deployments. This centralization enables CI/CD pipelines to build once and deploy anywhere, a massive improvement over building separate artifacts for each environment.
Running containers requires understanding port mapping and environment variable injection, but these concepts quickly become second nature. The Docker CLI provides straightforward commands for starting, stopping, and inspecting containers. In production, you'll rarely interact with Docker directly—orchestration platforms handle container lifecycle management—but understanding the fundamentals remains valuable for troubleshooting and local development.