Henrik Holst

Docker image size optimization

I was digging through the K3s documentation for a way to make the logging less verbose in how Pod names and containers are presented. I ran across a section describing K3s's experimental support for something called "eStargz" images. The new image format is supposed to be compatible with the old "Docker" format while also allowing lazy loading of image data from an image registry.

To me, this sounded very much like another run-of-the-mill "not invented here" solution; a solution crying out for a problem. While plenty of Docker images out in the wild are surely much larger than most of us would consider reasonable, the obvious response is to optimize those images first and make them smaller.

As a case study, consider a sidecar service called "azure-queue-sidecar". The sidecar is simply a program that consumes messages from an Azure Storage Queue and republishes them to a service endpoint for processing.

As simple as it sounds, the image still has an impressive size, weighing in at almost 300 MB.
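
For reference, the size is easy to check once the image has been built and tagged (the tag matches the build commands used later in this post):

docker images azure-queue-sidecar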

The relevant parts of the Dockerfile that we use to build the production image:

FROM python:3.8-alpine
# Build tooling needed to compile native wheels (cffi etc.)
RUN apk add --no-cache build-base libffi-dev
RUN pip install --no-cache-dir \
    azure-storage-queue azure-storage-blob \
    cloudevents requests
COPY src /app/
CMD ["python3","-m","app"]

While the result is not terrible, it has an obvious problem: the layers that install the build tooling (build-base, libffi-dev) are baked into the production image, even though they are only needed at build time!
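
docker history makes the problem concrete: it prints the size of every layer, and the layer created by the apk add of build-base and libffi-dev stands out immediately:

docker history azure-queue-sidecar:latest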

We should be able to do something more intelligent here, if only we could keep the build tooling out of the final image.

Optimization: multi-stage build

The first and most obvious optimization is to avoid including the build-related tooling in the final image.

To do this we need a way to store the build artifacts, so we have to look up how that is done with Python's pip package manager, because nobody can remember these things.

FROM python:3.8-alpine AS base

FROM base AS builder
# Build tooling only lives in this stage
RUN apk add --no-cache build-base libffi-dev
# Compile every dependency into a wheel under /whl
RUN python -m pip wheel --wheel-dir=/whl \
    azure-storage-queue azure-storage-blob cloudevents requests

FROM base AS production
COPY --from=builder /whl /whl
# Install offline from the pre-built wheels
RUN python -m pip install --no-index --find-links=/whl \
    azure-storage-queue azure-storage-blob cloudevents requests
COPY src /app/
CMD ["python3","-m","app"]

We build the image using the command

docker build --target=production -t azure-queue-sidecar:latest .

Note that the production stage is the last stage in the Dockerfile, so docker build would build it by default even without --target.
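
A useful side effect of --target is that any intermediate stage can be built and inspected on its own; the :builder tag here is just an illustrative name:

docker build --target=builder -t azure-queue-sidecar:builder .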

The resulting image is much better, which in this context means smaller, than the original image.
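
If you keep both variants around under separate tags, docker images shows the difference side by side. The file name Dockerfile.single and the tags below are just illustrative, assuming the original single-stage Dockerfile was saved under that name:

docker build -f Dockerfile.single -t azure-queue-sidecar:single .
docker build --target=production -t azure-queue-sidecar:multi .
docker images azure-queue-sidecar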

Optimization: BuildKit Dockerfile

We could have stopped at the multi-stage build, but there is still an obvious flaw in the current image: the wheel packages are stored in the final image, and there is really no need for that. By using a modern Dockerfile syntax we can get rid of those as well, since a RUN --mount bind mount makes the builder stage's files visible during the install without ever committing them to a layer.

# syntax=docker/dockerfile:1.4
FROM python:3.8-alpine AS base

FROM base AS builder
RUN apk add --no-cache build-base libffi-dev
RUN python -m pip wheel --wheel-dir=/whl \
    azure-storage-queue azure-storage-blob cloudevents requests

FROM base AS production
# Bind-mount the builder stage at /builder for this RUN only;
# the wheels are readable during the install but never stored in a layer
RUN --mount=target=/builder,from=builder \
    python -m pip install \
    --no-cache-dir --no-index --find-links=/builder/whl \
    azure-storage-queue azure-storage-blob cloudevents requests
COPY --link src /app/
CMD ["python3","-m","app"]

The image is built with BuildKit, which is integrated into Docker Engine:

DOCKER_BUILDKIT=1 docker build -t azure-queue-sidecar:latest .
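
On recent Docker Engine releases (23.0 and later) BuildKit is the default builder, so the environment variable can be dropped; the buildx plugin invokes it explicitly:

docker buildx build -t azure-queue-sidecar:latest .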

And the result: an even smaller image, since the wheel files no longer occupy any layer at all.

I could further reduce the image size by removing the blob storage and the cloudevents dependencies, as they were no longer used.
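
To verify which packages actually end up in the image, and to spot dependencies that can be dropped, you can list them from a throwaway container:

docker run --rm azure-queue-sidecar:latest python -m pip list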

Overall, an impressive size reduction for very little work! And no additional experimental technology was needed.

There is a limit to how much we can achieve without resorting to swapping out the Python base image (which is used and reused across the stack), so I will leave it here to avoid over-optimization.

