Multi-Stage Docker Builds with Python and uv: What I Was Doing Wrong


Containers and Python's uv package manager should be a perfect match. Fast builds, efficient dependency management, small images… what's not to love? Except when your production containers mysteriously can't import Django, rebuilds take forever despite unchanged dependencies, or when development and production behave completely differently.

I was doing multi-stage builds, separating concerns, following best practices. But I was still hitting these issues constantly. Turns out, I wasn't doing it quite right. Here's what I learned about making Python, uv, and Docker actually work together.

What I Was Doing Wrong

What I need in development isn’t necessarily needed in production. ruff - a Python linter and code formatter - for example, has no business in production: its purpose is to help a developer write cleaner, more maintainable code. Hence the need for multi-stage builds. I had the right idea, but the execution was the problem.

Here’s a simplified version of my original approach:

# Stage 1: Build dependencies
FROM python:3.12-slim-bookworm AS builder

RUN pip install uv==0.8.15

WORKDIR /build
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-cache --no-dev

# Stage 2: Production runtime
FROM python:3.12-slim-bookworm AS production

WORKDIR /app
COPY --from=builder /build/.venv .venv

ENV PATH="/app/.venv/bin:$PATH"
COPY . .

CMD ["python", "manage.py", "runserver", "8000"]

This looks reasonable at first glance. I’m using multi-stage builds, separating build from runtime, and only including production dependencies. So what’s the problem?

Issue #1: Path Mismatch

Notice that the virtual environment is created at /build/.venv during the builder stage, but my application code lives in /app in the production stage. The assumption was that the venv would live alongside the code in /build, but now they’re separated. This can cause import errors and broken paths within the venv, which was the most common problem I experienced.

Issue #2: Manual uv installation

I’m installing uv via pip in the builder. What this means is that I’m not getting uv’s Docker-specific optimizations. uv has official Docker images that come pre-configured to prevent these exact issues, but I wasn't using them.
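If you’d rather keep your own base image, uv’s Docker documentation also supports copying the binary straight out of the official image instead of pip-installing it (a sketch; pin whatever version you actually use via the image tag):

```dockerfile
# Pin the uv version via the image tag instead of a pip install
COPY --from=ghcr.io/astral-sh/uv:0.8.15 /uv /uvx /bin/
```

This keeps the install to a single cached layer and avoids pulling uv through pip at all.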

Issue #3: No Layer Caching

COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-cache --no-dev

The builder stage uses --no-cache and doesn't leverage BuildKit cache mounts, so package downloads aren't cached between builds. Even unchanged dependencies get re-downloaded whenever pyproject.toml or uv.lock changes, making rebuilds unnecessarily slow.

Issue #4: Symlinks Breaking Across Stages

By default, uv creates symlinks when installing packages, because it’s faster. The problem in this situation is that symlinks created in the builder stage can point to locations that don’t exist once the venv is copied to the production stage. What happens then is something like “Cannot import Django”: import errors that only show up at runtime.
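One way to spot this before it bites is to scan the venv for symlinks after the build. Here is a quick sketch; the demo/.venv layout and the /nonexistent path are made up purely to simulate a dangling link:

```shell
# Simulate a venv containing a symlink whose target won't exist in the
# next stage (paths here are invented for illustration).
mkdir -p demo/.venv/lib
ln -s /nonexistent/cache/django demo/.venv/lib/django
# -type l lists all symlinks; -xtype l would list only the dangling ones
links=$(find demo/.venv -type l)
echo "$links"
rm -rf demo
```

Running `find /app/.venv -type l` in a built image is a cheap sanity check that `UV_LINK_MODE=copy` actually took effect.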

Let me show you what happens when I try to add a development stage:

# ... builder stage ...

FROM python:3.12-slim-bookworm AS development

WORKDIR /app
COPY --from=builder /build/.venv /build/.venv
ENV PATH="/build/.venv/bin:$PATH"

# Now install dev dependencies
COPY pyproject.toml uv.lock ./
RUN pip install uv==0.8.15  # Installing uv AGAIN (wasteful)
RUN uv sync --frozen --no-cache --group dev

COPY . .
CMD ["python", "manage.py", "runserver", "8000"]

Now I have multiple problems compounding:

  • The base production venv is in /build/.venv

  • I'm trying to add dev dependencies to it, but my working directory is /app

  • I'm installing uv again (wasteful)

  • The paths are all over the place

This all results in inconsistent behavior. While I eventually managed to patch things up and get the containers running, I was missing out on far more efficient approaches.

The Right Approach

When I ran into the same issues again in a new project, I figured there had to be a better way; clearly I didn’t know enough. After digging through uv’s documentation and examples, I found the patterns that actually work reliably.

# Stage 1: Build with official uv image
FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim AS builder

# Critical uv settings for Docker
ENV UV_COMPILE_BYTECODE=1 \
    UV_LINK_MODE=copy \
    UV_PYTHON_DOWNLOADS=0

WORKDIR /app

# Install dependencies with cache mounts
RUN --mount=type=cache,target=/root/.cache/uv \
    --mount=type=bind,source=uv.lock,target=uv.lock \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv sync --locked --no-install-project --no-dev

# Copy source and install project
COPY . /app
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --locked --no-dev

# Stage 2: Minimal production runtime
FROM python:3.12-slim-bookworm AS production

# Create non-root user
RUN groupadd --system --gid 999 appuser \
 && useradd --system --gid 999 --uid 999 --create-home appuser

# Copy the entire app including venv - paths stay consistent
COPY --from=builder --chown=appuser:appuser /app /app

ENV PATH="/app/.venv/bin:$PATH"
USER appuser
WORKDIR /app

# Bind to 0.0.0.0 so the server is reachable from outside the container
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Let's break down what changed and why it works:

Fix #1: Using the Official uv Image

FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim AS builder

It turned out that the Astral team - makers of uv and ruff - specifically designed a base image for Docker workflows. It comes with uv pre-installed and properly configured for container environments.

Fix #2: The Magic Environment Variables

ENV UV_COMPILE_BYTECODE=1 \
    UV_LINK_MODE=copy \
    UV_PYTHON_DOWNLOADS=0

You see these three settings? They’re crucial:

  • UV_COMPILE_BYTECODE=1 - Compiles Python bytecode during the build, not at runtime. Containers start faster because Python doesn't need to compile .py files to .pyc on first import.

  • UV_LINK_MODE=copy - Makes uv copy files instead of creating symlinks. The venv becomes truly self-contained and portable across build stages.

  • UV_PYTHON_DOWNLOADS=0 - Tells uv to use the system Python interpreter instead of downloading its own. Builder and production stages use the exact same Python, no subtle compatibility issues.
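To make the bytecode setting concrete: UV_COMPILE_BYTECODE=1 does ahead of time roughly what the standard library’s py_compile does on first import. A minimal sketch (the app.py file here is made up):

```python
# Sketch of what UV_COMPILE_BYTECODE=1 amounts to: pre-compiling .py files
# to .pyc so the interpreter skips that work on first import at runtime.
import pathlib
import py_compile
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "app.py"
    src.write_text("print('hello')\n")
    # Compile ahead of time, as uv does for every installed package
    pyc_path = py_compile.compile(str(src))
    compiled = pathlib.Path(pyc_path).exists()
    in_pycache = "__pycache__" in pyc_path  # PEP 3147 cache location

print(compiled, in_pycache)
```

In a container this cost is paid once at build time instead of on every cold start.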

Fix #3: BuildKit Cache Mounts

RUN --mount=type=cache,target=/root/.cache/uv \
    --mount=type=bind,source=uv.lock,target=uv.lock \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv sync --locked --no-install-project --no-dev

This is where the speed improvements come in:

  • --mount=type=cache persists uv’s download cache between builds. This means that when I change one dependency, only that package gets re-downloaded, not the entire dependency tree.

  • --mount=type=bind temporarily mounts only the files needed for dependency resolution. Docker can detect when these specific files change and invalidate the cache appropriately.

  • --no-install-project installs just the dependencies first (which change rarely); the source code is then copied and the project itself installed in a separate step (which changes frequently). Better layer caching means faster rebuilds.
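One caveat worth stating: cache mounts only work when BuildKit is the builder. It’s the default in recent Docker versions, but on older setups you have to opt in explicitly or the mounts are silently ignored (a sketch; the image tag is made up):

```shell
# BuildKit is required for --mount in RUN instructions; opt in on older Docker
DOCKER_BUILDKIT=1 docker build --target production -t myapp:prod .
```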

Fix #4: Path Consistency

# Builder
WORKDIR /app
COPY . /app
RUN uv sync --locked --no-dev

# Production
COPY --from=builder --chown=appuser:appuser /app /app
WORKDIR /app

Everything happens in /app: the venv is created in /app/.venv, the source code lives right in /app, and the production stage runs from /app as well. No path confusion or broken references this time.
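Why is path consistency such a big deal? Because a venv records absolute paths at creation time. A quick sketch with the standard library’s venv module shows this (the throwaway directory is made up; uv-created venvs have the same kind of config file):

```python
# Sketch: a venv bakes absolute paths into pyvenv.cfg at creation time,
# which is why building it in /build and running it from /app breaks things.
import pathlib
import tempfile
import venv

with tempfile.TemporaryDirectory() as tmp:
    venv_dir = pathlib.Path(tmp) / ".venv"
    venv.create(venv_dir, with_pip=False)  # skip pip to keep this fast
    cfg = (venv_dir / "pyvenv.cfg").read_text()

print(cfg)  # the 'home = ...' line is an absolute path fixed at creation
```

Entry-point scripts in .venv/bin hardcode absolute shebang paths the same way, so the safest move is to never relocate the venv at all.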

Adding Development Stage (The Right Way)

# Separate builder stage that includes the dev dependency group
FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim AS builder-dev

ENV UV_COMPILE_BYTECODE=1 \
    UV_LINK_MODE=copy \
    UV_PYTHON_DOWNLOADS=0

WORKDIR /app

RUN --mount=type=cache,target=/root/.cache/uv \
    --mount=type=bind,source=uv.lock,target=uv.lock \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv sync --locked --no-install-project --group dev

COPY . /app
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --locked --group dev

FROM python:3.12-slim-bookworm AS development

RUN groupadd --system --gid 999 appuser \
 && useradd --system --gid 999 --uid 999 --create-home appuser

COPY --from=builder-dev --chown=appuser:appuser /app /app

ENV PATH="/app/.venv/bin:$PATH"
USER appuser
WORKDIR /app

# Bind to 0.0.0.0 so the server is reachable from outside the container
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

This makes more sense. Same consistent paths, same pattern, just with the dev dependency group included. No more reinstalling uv, no path kung-fu.
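With the stages named, wiring the development stage into a Compose file is straightforward. A minimal sketch, where the service name and port mapping are assumptions:

```yaml
services:
  web:
    build:
      context: .
      target: development   # pick the dev stage from the same Dockerfile
    ports:
      - "8000:8000"
    volumes:
      - .:/app              # live-reload source without rebuilding
      - /app/.venv          # keep the image's venv; don't shadow it with the host dir
```

The anonymous /app/.venv volume matters: without it, the bind mount of the project directory would hide the venv that was built into the image.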

The Results

Before vs. After:

Metric                  Before        After
First build             ~5 minutes    ~4 minutes
Rebuild (dep change)    3-5 minutes   10-20 seconds
Rebuild (code change)   2-3 minutes   5-10 seconds
Production image        ~300MB        ~150MB
Import errors           Frequent      None

What this means in practice:

  • Development velocity - Code changes rebuild in seconds, not minutes

  • CI/CD efficiency - Tests run faster with cached dependencies

  • Deployment confidence - Same paths everywhere = no surprises in production

  • Resource efficiency - Smaller images = faster pulls, lower bandwidth costs

This is exactly what we want. Predictability and efficiency.

Production Considerations

The simplified examples above demonstrate the core concepts, but production environments need more:

  • Security hardening - Non-root users, gosu for proper signal handling, GPG verification

  • Multiple build stages - Separate dev, test, staging, and production

  • Conditional dependencies - Build args to optionally include dev/test/docs groups

  • Full Django stack - Including Celery workers, beat scheduler, migrations

  • Observability - Health checks, resource limits, OpenTelemetry integration

I'll share in future posts how I actually do this in production for my Python (Django/FastAPI) projects.


Originally published at blog.theolujay.dev