Ubuntu 24.04 vs. Debian 13: Docker Performance Benchmark

Is Debian really leaner and faster than Ubuntu? I pitted both operating systems against each other as Docker hosts using a benchmark script.


Anyone who regularly works with Linux servers knows the fundamental question with every new installation: Ubuntu or Debian?

The choice is often difficult. On one side stands Ubuntu, the modern and widespread all-rounder with LTS support. On the other side stands Debian, known for its unwavering stability and minimalism.

Especially if you rely almost exclusively on Docker, you might think the decision is secondary: once the Docker daemon is running, you only interact with containers and images anyway. Docker's abstraction layer makes the underlying operating system all but invisible to the developer.

But is that really the case? Are there measurable differences in performance, resource consumption, or the network stack that make one system an objectively better Docker host?

I put this to the test with an automated Bash script, pitting Ubuntu 24.04 LTS against Debian 13 (Trixie/Testing) on identical hardware (Hetzner Cloud).

The Test Scenario

To create fair conditions, I ran the benchmark script on four servers:

  1. Shared vCPU (small): Hetzner CX22 (Intel) – Ubuntu 24.04
  2. Shared vCPU (small): Hetzner CX22 (Intel) – Debian 13
  3. Dedicated vCPU (large): Hetzner CCX13 (AMD) – Ubuntu 24.04
  4. Dedicated vCPU (large): Hetzner CCX13 (AMD) – Debian 13

All servers were freshly installed, updated to the latest packages, and ran Docker version 28.4.0.
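For reference, here is a minimal sketch of how such a host can be prepared, assuming Docker's official convenience script is acceptable (the exact provisioning steps aren't shown in the post; any method works, as long as both hosts end up on the same engine version):

# Sketch: provision a fresh host with Docker via the official convenience script
curl -fsSL https://get.docker.com | sh

# Verify that both hosts report the same engine version before benchmarking
docker --version    # e.g. "Docker version 28.4.0, build ..."
uname -r            # kernel versions will differ between Ubuntu and Debian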

The Benchmark Script

To make the tests reproducible, I used a script that runs through nine scenarios: an idle baseline plus eight benchmarks. It uses tools such as sysbench, iperf3, and fio directly inside Docker containers to measure the performance of the container environment.

Here is the script if you want to test your own servers:

#!/bin/bash

# Docker Host Benchmark Script
# Comparison: Ubuntu 24.04 vs Debian 13

set -e

# Colors for Output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Capture the invocation directory up front and build absolute paths,
# so the 'cd' calls in later tests can't break relative result paths
SCRIPT_DIR=$(pwd)
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RESULT_DIR_NAME="benchmark_results_${TIMESTAMP}"
RESULT_DIR="${SCRIPT_DIR}/${RESULT_DIR_NAME}"
RESULT_FILE="${RESULT_DIR}/results.json"
LOG_FILE="${RESULT_DIR}/benchmark.log"

# Functions
log() {
    echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1" | tee -a "$LOG_FILE"
}

error() {
    echo -e "${RED}[ERROR]${NC} $1" | tee -a "$LOG_FILE"
}

info() {
    echo -e "${BLUE}[INFO]${NC} $1" | tee -a "$LOG_FILE"
}

warn() {
    echo -e "${YELLOW}[WARN]${NC} $1" | tee -a "$LOG_FILE"
}

# Initialization
init_benchmark() {
    mkdir -p "$RESULT_DIR"
    log "Initializing Benchmark Suite..."

    if command -v lsb_release &> /dev/null; then
        OS_NAME=$(lsb_release -ds)
    else
        OS_NAME=$(grep PRETTY_NAME /etc/os-release | cut -d'"' -f2)
    fi

    KERNEL_VERSION=$(uname -r)
    DOCKER_VERSION=$(docker --version | cut -d' ' -f3 | tr -d ',')
    CPU_MODEL=$(lscpu | grep "Model name" | cut -d':' -f2 | xargs)
    CPU_CORES=$(nproc)
    TOTAL_RAM=$(free -h | grep Mem | awk '{print $2}')

    cat > "$RESULT_FILE" <<EOF
{
  "system_info": {
    "os": "$OS_NAME",
    "kernel": "$KERNEL_VERSION",
    "docker_version": "$DOCKER_VERSION",
    "cpu_model": "$CPU_MODEL",
    "cpu_cores": $CPU_CORES,
    "total_ram": "$TOTAL_RAM",
    "timestamp": "$TIMESTAMP"
  },
  "tests": {}
}
EOF

    log "System: $OS_NAME"
    log "Kernel: $KERNEL_VERSION"
    log "Docker: $DOCKER_VERSION"
    log "CPU: $CPU_MODEL ($CPU_CORES Cores)"
    log "RAM: $TOTAL_RAM"
}

# Pre-warm Images
prewarm_images() {
    log "=== Preparation: Downloading Docker Images ==="
    info "Loading required images to exclude download time from benchmarks..."

    local images=(
        "nginx:alpine"
        "severalnines/sysbench"
        "xridge/fio"
        "networkstatic/iperf3"
        "node:20-alpine"
        "postgres:16-alpine"
        "redis:7-alpine"
    )

    for img in "${images[@]}"; do
        info "Pulling $img..."
        docker pull "$img" > /dev/null
    done
    log "All images pre-warmed."
}

# Test 0: Idle Resources (Baseline)
test_idle_resources() {
    log "=== Test 0: Idle Resource Usage (Baseline) ==="
    sleep 5

    local ram_used_mb=$(free -m | grep Mem | awk '{print $3}')
    local disk_used_gb=$(df -h / | tail -1 | awk '{print $3}')

    log "RAM used (Idle): ${ram_used_mb} MB"
    log "Disk used (Idle): ${disk_used_gb}"

    local temp_json=$(jq ".tests.idle_resources = {
        \"ram_used_mb\": $ram_used_mb,
        \"disk_used_gb\": \"$disk_used_gb\"
    }" "$RESULT_FILE")
    echo "$temp_json" > "$RESULT_FILE"
}

# Test 1: Container Start Performance
test_container_start() {
    log "=== Test 1: Container Start Performance ==="

    local iterations=100
    local start_time=$(date +%s.%N)

    for i in $(seq 1 $iterations); do
        docker run --rm nginx:alpine echo "Container $i" > /dev/null 2>&1
    done

    local end_time=$(date +%s.%N)
    local total_time=$(echo "$end_time - $start_time" | bc)
    local avg_time=$(echo "scale=4; $total_time / $iterations" | bc)

    log "Total time for $iterations starts: ${total_time}s"
    log "Average per container: ${avg_time}s"

    local temp_json=$(jq ".tests.container_start = {
        \"total_time_seconds\": $total_time,
        \"average_time_seconds\": $avg_time,
        \"iterations\": $iterations
    }" "$RESULT_FILE")
    echo "$temp_json" > "$RESULT_FILE"
}

# Test 2: CPU Performance
test_cpu_performance() {
    log "=== Test 2: CPU Performance ==="

    info "Single-Thread Test..."
    local single_output=$(docker run --rm severalnines/sysbench \
        sysbench cpu --cpu-max-prime=20000 --threads=1 run 2>&1)

    local single_events=$(echo "$single_output" | grep "events per second" | awk '{print $4}')
    local single_time=$(echo "$single_output" | grep "total time:" | awk '{print $3}' | tr -d 's')

    info "Multi-Thread Test (2 Threads)..."
    local multi_output=$(docker run --rm severalnines/sysbench \
        sysbench cpu --cpu-max-prime=20000 --threads=2 run 2>&1)

    local multi_events=$(echo "$multi_output" | grep "events per second" | awk '{print $4}')
    local multi_time=$(echo "$multi_output" | grep "total time:" | awk '{print $3}' | tr -d 's')

    log "Single-Thread: ${single_events} events/s"
    log "Multi-Thread: ${multi_events} events/s"

    local temp_json=$(jq ".tests.cpu_performance = {
        \"single_thread\": {
            \"events_per_second\": $single_events,
            \"total_time_seconds\": $single_time
        },
        \"multi_thread\": {
            \"events_per_second\": $multi_events,
            \"total_time_seconds\": $multi_time
        }
    }" "$RESULT_FILE")
    echo "$temp_json" > "$RESULT_FILE"
}

# Test 3: I/O Performance
test_io_performance() {
    log "=== Test 3: I/O Performance ==="

    local fio_dir="$(pwd)/fio-test"
    mkdir -p "$fio_dir"

    info "Random Write Test (4k blocks, 1GB file)..."
    # --- Use xridge/fio image ---
    local fio_output=$(docker run --rm -v "$fio_dir":/data xridge/fio \
        --name=random-write --ioengine=libaio --iodepth=16 \
        --rw=randwrite --bs=4k --direct=1 --size=1G \
        --numjobs=1 --runtime=30 --time_based \
        --group_reporting --directory=/data \
        --output-format=json 2>&1)

    local iops=$(echo "$fio_output" | jq '.jobs[0].write.iops')
    local bw_kib=$(echo "$fio_output" | jq '.jobs[0].write.bw')
    local bw_mib=$(echo "scale=2; $bw_kib / 1024" | bc)

    log "IOPS: ${iops}"
    log "Bandwidth: ${bw_mib} MiB/s"

    rm -rf "$fio_dir"

    local temp_json=$(jq ".tests.io_performance = {
        \"iops\": $iops,
        \"bandwidth_mibs\": $bw_mib
    }" "$RESULT_FILE")
    echo "$temp_json" > "$RESULT_FILE"
}

# Test 4: Network Performance
test_network_performance() {
    log "=== Test 4: Network Performance (Container-to-Container on same Host) ==="
    info "This test measures the performance of the Docker Bridge and the Kernel."

    info "Starting iperf3 Server..."
    docker run -d --name iperf-server --rm networkstatic/iperf3 -s > /dev/null 2>&1
    sleep 3

    local server_ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' iperf-server)

    info "Starting iperf3 Client..."
    local iperf_output=$(docker run --rm networkstatic/iperf3 -c "$server_ip" -t 10 2>&1)

    # The TCP 'sender' summary line ends with "... <rate> <unit>  <Retr>  sender",
    # so the rate sits three fields and the unit two fields before the end
    local bandwidth=$(echo "$iperf_output" | grep "sender" | tail -1 | awk '{print $(NF-3)}')
    local unit=$(echo "$iperf_output" | grep "sender" | tail -1 | awk '{print $(NF-2)}')

    docker stop iperf-server > /dev/null 2>&1

    log "Bandwidth: ${bandwidth} ${unit}"

    local temp_json=$(jq ".tests.network_performance = {
        \"bandwidth\": \"$bandwidth $unit\"
    }" "$RESULT_FILE")
    echo "$temp_json" > "$RESULT_FILE"
}

# Test 5: Memory Performance
test_memory_performance() {
    log "=== Test 5: Memory Performance ==="

    local mem_output=$(docker run --rm severalnines/sysbench \
        sysbench memory --memory-block-size=1M \
        --memory-total-size=10G --threads=1 run 2>&1)

    local operations=$(echo "$mem_output" | grep "Total operations" | awk '{print $3}')
    local ops_per_sec=$(echo "$mem_output" | grep "Total operations" | sed -E 's/.*\(//;s/ per second\)//')
    # The rate is the value inside the parentheses, e.g. "(10240.00 MiB/sec)";
    # the first field on that line is the total MiB transferred, not the rate
    local throughput=$(echo "$mem_output" | grep "MiB/sec" | sed -E 's/.*\(([0-9.]+) MiB\/sec\).*/\1/')

    log "Throughput: ${throughput} MiB/sec"
    log "Ops/sec: ${ops_per_sec}"

    local temp_json=$(jq ".tests.memory_performance = {
        \"total_operations\": $operations,
        \"ops_per_second\": $ops_per_sec,
        \"throughput_mibs\": $throughput
    }" "$RESULT_FILE")
    echo "$temp_json" > "$RESULT_FILE"
}

# Test 6: Build Performance
test_build_performance() {
    log "=== Test 6: Build Performance ==="

    local build_dir="/tmp/docker-build-test"
    mkdir -p "$build_dir"
    cd "$build_dir"

    cat > Dockerfile <<'EOF'
FROM node:20-alpine
WORKDIR /app
RUN npm install -g npm@latest
RUN apk add --no-cache python3 make g++
RUN echo '{"name":"test","version":"1.0.0"}' > package.json
RUN npm install express
RUN echo 'console.log("test");' > index.js
EOF

    info "Build Test (3 runs)..."
    local total_time=0

    for i in {1..3}; do
        docker image rm test-build:$i 2>/dev/null || true
        local start=$(date +%s.%N)
        docker build --no-cache -t test-build:$i . > /dev/null 2>&1
        local end=$(date +%s.%N)
        local duration=$(echo "$end - $start" | bc)
        total_time=$(echo "$total_time + $duration" | bc)
        info "Run $i: ${duration}s"
    done

    local avg_time=$(echo "scale=4; $total_time / 3" | bc)

    log "Average Build Time: ${avg_time}s"

    docker image rm test-build:1 test-build:2 test-build:3 2>/dev/null || true
    cd - > /dev/null
    rm -rf "$build_dir"

    local temp_json=$(jq ".tests.build_performance = {
        \"average_build_time_seconds\": $avg_time,
        \"iterations\": 3
    }" "$RESULT_FILE")
    echo "$temp_json" > "$RESULT_FILE"
}

# Test 7: Container Density
test_container_density() {
    log "=== Test 7: Container Density Test ==="

    local container_count=50

    info "Starting $container_count containers..."
    for i in $(seq 1 $container_count); do
        docker run -d --name nginx-$i --memory=50m nginx:alpine > /dev/null 2>&1
        echo -ne "\rStarted: $i/$container_count"
    done
    echo ""

    sleep 10

    local mem_usage=$(free | grep Mem | awk '{printf "%.2f", ($3/$2) * 100}')
    local load_avg=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | tr -d ',')

    log "Memory Usage (Host): ${mem_usage}%"
    log "Load Average (1min): ${load_avg}"

    info "Stopping containers..."
    docker rm -f $(seq -f "nginx-%g" 1 $container_count) > /dev/null 2>&1

    local temp_json=$(jq ".tests.container_density = {
        \"container_count\": $container_count,
        \"memory_usage_percent\": $mem_usage,
        \"load_average\": $load_avg
    }" "$RESULT_FILE")
    echo "$temp_json" > "$RESULT_FILE"
}

# Test 8: Docker Compose Performance
test_compose_performance() {
    log "=== Test 8: Docker Compose Stack Performance ==="

    local compose_dir="/tmp/compose-test"
    mkdir -p "$compose_dir"
    cd "$compose_dir"

    cat > docker-compose.yml <<'EOF'
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: test
  redis:
    image: redis:7-alpine
  app:
    image: nginx:alpine
    depends_on:
      - db
      - redis
EOF

    info "Compose Test (10 cycles Up/Down)..."
    local start_time=$(date +%s.%N)

    for i in $(seq 1 10); do
        docker compose up -d > /dev/null 2>&1
        docker compose down > /dev/null 2>&1
        echo -ne "\rCycle: $i/10"
    done
    echo ""

    local end_time=$(date +%s.%N)
    local total_time=$(echo "$end_time - $start_time" | bc)
    local avg_time=$(echo "scale=4; $total_time / 10" | bc)

    log "Average per cycle: ${avg_time}s"

    cd - > /dev/null
    rm -rf "$compose_dir"

    local temp_json=$(jq ".tests.compose_performance = {
        \"average_cycle_time_seconds\": $avg_time,
        \"iterations\": 10
    }" "$RESULT_FILE")
    echo "$temp_json" > "$RESULT_FILE"
}

# Cleanup
cleanup() {
    log "Cleaning up..."
    docker system prune -af > /dev/null 2>&1 || true
}

# Create Summary
create_summary() {
    log "=== Creating Summary ==="

    local summary_file="${RESULT_DIR}/summary.txt"

    cat > "$summary_file" <<EOF
Docker Host Benchmark Summary
======================================
System: $(jq -r '.system_info.os' "$RESULT_FILE")
Kernel: $(jq -r '.system_info.kernel' "$RESULT_FILE")
Docker: $(jq -r '.system_info.docker_version' "$RESULT_FILE")
Timestamp: $TIMESTAMP

Test Results:
-----------------
0. Baseline (Idle):
   - RAM used: $(jq -r '.tests.idle_resources.ram_used_mb' "$RESULT_FILE") MB
   - Disk used: $(jq -r '.tests.idle_resources.disk_used_gb' "$RESULT_FILE")

1. Container Start Performance:
   - Average: $(jq -r '.tests.container_start.average_time_seconds' "$RESULT_FILE")s

2. CPU Performance:
   - Single-Thread: $(jq -r '.tests.cpu_performance.single_thread.events_per_second' "$RESULT_FILE") events/s
   - Multi-Thread: $(jq -r '.tests.cpu_performance.multi_thread.events_per_second' "$RESULT_FILE") events/s

3. I/O Performance (Random Write 4k):
   - IOPS: $(jq -r '.tests.io_performance.iops' "$RESULT_FILE")
   - Bandwidth: $(jq -r '.tests.io_performance.bandwidth_mibs' "$RESULT_FILE") MiB/s

4. Network Performance (Internal):
   - Bandwidth: $(jq -r '.tests.network_performance.bandwidth' "$RESULT_FILE")

5. Memory Performance:
   - Throughput: $(jq -r '.tests.memory_performance.throughput_mibs' "$RESULT_FILE") MiB/s
   - Ops/sec: $(jq -r '.tests.memory_performance.ops_per_second' "$RESULT_FILE")

6. Build Performance (No Cache):
   - Average: $(jq -r '.tests.build_performance.average_build_time_seconds' "$RESULT_FILE")s

7. Container Density (50 Containers):
   - Memory Usage: $(jq -r '.tests.container_density.memory_usage_percent' "$RESULT_FILE")%
   - Load Average: $(jq -r '.tests.container_density.load_average' "$RESULT_FILE")

8. Compose Performance:
   - Average: $(jq -r '.tests.compose_performance.average_cycle_time_seconds' "$RESULT_FILE")s

Detailed Results (JSON): $RESULT_FILE
EOF

    cat "$summary_file"
    log "Summary saved to: $summary_file"
}

# Main Program
main() {
    clear
    echo -e "${BLUE}"
    cat <<'EOF'
╔═══════════════════════════════════════╗
║  Docker Host Benchmark Suite          ║
║  Ubuntu 24.04 vs Debian 13            ║
╚═══════════════════════════════════════╝
EOF
    echo -e "${NC}"

    if ! command -v docker &> /dev/null; then error "Docker is not installed!"; exit 1; fi
    if ! command -v jq &> /dev/null; then warn "jq not found. Installing..."; sudo apt-get update >/dev/null && sudo apt-get install -y jq; fi
    if ! command -v bc &> /dev/null; then warn "bc not found. Installing..."; sudo apt-get install -y bc; fi

    init_benchmark

    warn "This benchmark takes approx. 15-25 minutes."
    warn "Please ensure that NO other containers are running!"
    read -p "Continue? (y/n) " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then exit 0; fi

    prewarm_images

    test_idle_resources
    test_container_start
    test_cpu_performance
    test_io_performance
    test_network_performance
    test_memory_performance
    test_build_performance
    test_container_density
    test_compose_performance

    cleanup
    create_summary

    log "${GREEN}Benchmark finished!${NC}"
    log "Results saved in: $RESULT_DIR"
}

main "$@"

(Note: You can simply copy the full script and save it as benchmark.sh. Don't forget to make it executable with chmod +x benchmark.sh and install jq as well as bc.)
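In practice, the setup boils down to three commands (assuming the file was saved as benchmark.sh on a Debian/Ubuntu host):

sudo apt-get install -y jq bc    # jq for the JSON results, bc for float arithmetic
chmod +x benchmark.sh            # make the script executable
./benchmark.sh                   # run the suite (approx. 15-25 minutes)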

What is being tested?

The script performs the following tests:

  1. Idle Resources: How much RAM and storage does the empty OS need?
  2. Container Start: How fast can 100 containers be started and stopped?
  3. CPU Performance: Prime-number calculation (single- and multi-thread) via sysbench; a standalone example follows this list.
  4. I/O Performance: Stress test for the disk (4k Random Writes).
  5. Network: Internal bandwidth between two containers (Bridge Network).
  6. Memory: Read and write speed of the RAM.
  7. Build Performance: Time measurement for a docker build of a Node.js app (without cache).
  8. Container Density: Running 50 Nginx containers simultaneously (RAM consumption).
  9. Docker Compose: Up/Down cycles of a stack (Postgres, Redis, App).
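If you only want a quick spot check rather than the full suite, the individual tests can be run standalone. For example, the single-thread CPU test from the script as a one-liner:

# Single-thread CPU spot check, identical to the call in test_cpu_performance()
docker run --rm severalnines/sysbench \
    sysbench cpu --cpu-max-prime=20000 --threads=1 run

The "events per second" line in the output is the value the script records.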

The Results

The results were not unequivocal: they showed that the choice of hardware has a massive influence on which operating system performs "better".

Scenario 1: Shared vCPU (Small Server)

| Metric | Ubuntu 24.04 | Debian 13 | Difference |
| --- | --- | --- | --- |
| Container Start | 0.43s | 0.46s | Ubuntu -6.4% |
| CPU Multi-Thread | 669 ev/s | 713 ev/s | Debian +6.2% |
| Disk IOPS | 29,074 | 30,914 | Debian +5.9% |
| Build Time (NodeJS) | 26.0s | 24.3s | Debian -7.3% |
| RAM Usage (50 Containers) | 44.96% | 41.87% | Debian more efficient |

Debian scored particularly well in memory efficiency and build time. While Ubuntu was minimally faster at starting empty containers, it lost in almost all load tests.

Scenario 2: Dedicated vCPU (Large Server)

Switching to dedicated AMD EPYC hardware flips the picture: here Ubuntu dominated, winning 7 out of 8 tests. The standout is I/O performance. Ubuntu seems to handle the NVMe drivers or the file system far better on this architecture, reaching almost twice Debian's write speed.

| Metric | Ubuntu 24.04 | Debian 13 | Difference |
| --- | --- | --- | --- |
| Container Start | 0.22s | 0.47s | Ubuntu -51% 🚀 |
| CPU Multi-Thread | 1798 ev/s | 1804 ev/s | Tie |
| Disk IOPS | 57,587 | 29,541 | Ubuntu +94% 🚀 |
| Build Time (NodeJS) | 13.5s | 17.1s | Ubuntu -21% |
| Docker Compose Cycle | 11.08s | 11.58s | Ubuntu -4.3% |

Conclusion: Which OS for Docker?

The outcome is not as clear-cut as one might hope, but based on these results, Ubuntu is certainly not the worse choice.

Note: This is a snapshot. In previous tests, I had runs where Ubuntu won more often on the small servers as well.

Ubuntu's often-cited "bloatware" carries hardly any weight in pure Docker operations on modern hardware.

FAQs

How do I run the benchmark script?

Upload the script to your server, make it executable with `chmod +x scriptname.sh`, and run it with `./scriptname.sh`. Make sure `jq` and `bc` are installed (`apt install jq bc`).

Does Docker affect OS performance?

Docker has very low overhead because it is not full virtualization but uses kernel features (namespaces, cgroups). However, there are differences in the network stack (Bridge/Overlay) and the storage driver (overlay2).
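If you want to see which storage driver and cgroup version your own host uses, docker info exposes both via its Go-template output; a quick check:

# Storage driver (typically overlay2) and cgroup version of the local daemon
docker info --format 'Storage driver: {{.Driver}}'
docker info --format 'Cgroup version: {{.CgroupVersion}}'

Both Ubuntu 24.04 and Debian 13 default to overlay2 and cgroup v2, so they start from the same baseline here.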

Can I use the script on a Raspberry Pi?

Yes. While the script doesn't explicitly check the architecture, it uses standard Docker images (nginx:alpine, sysbench) that are mostly available as multi-arch builds (ARM64). The results are, of course, only comparable with other Pis.
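To check what platform the daemon actually reports before starting a run, for example on a Pi, two commands suffice:

# Architecture as seen by the kernel and by the Docker daemon
uname -m                                          # e.g. aarch64 on 64-bit Raspberry Pi OS
docker info --format 'Arch: {{.Architecture}}'    # platform the daemon reports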

Are the results universally valid?

No, benchmarks are always snapshots. Performance depends heavily on the hoster, the load on the host system (for vServers), and the exact kernel version.
