Astro.js & Docker: Your Path to a Live Website (Static & Dynamic on a VPS)
In this article, you'll learn how to deploy an Astro.js website using Docker on a VPS. I'll show you how to host both static and dynamic websites.

Table of Contents
- Key Takeaways at a Glance
- The Tools: Preparing Your Development Environment
- Creating Your First Astro Project
- Understanding Astro: Building Blocks of Your Site
- Static or Dynamic? The Build Process
- Docker Time: Your App in a Container
- Orchestration with Docker Compose
- Off to the Server: The Deployment
- Important Details & Tips
- Conclusion
Do you want to build a modern, fast website with Astro.js and reliably deploy it on your own server? Docker is the key to making this process clean, repeatable, and efficient. In this post, I’ll guide you through the entire process – from setting up your development environment to the live deployment of your Astro application (both static and dynamic) using Docker and Docker Compose on a Virtual Private Server (VPS).
Key Takeaways at a Glance
- Tools: Visual Studio Code, Node.js (ideally via a version manager like NVM), and Git are the fundamentals.
- Astro Project: Quick start with `npm create astro@latest`, using templates (e.g., `minimal`).
- Astro Concepts: Pages (`.astro`), Layouts (e.g., `BaseLayout.astro` with `<slot />`), and Components (e.g., `Header.astro`, `Footer.astro`) enable a modular structure.
- Docker is essential: Containerization simplifies deployment and ensures consistent environments.
- Multi-Stage Builds: Dockerfiles should use multiple stages (a build stage with Node.js, a runtime stage with Nginx or Node.js) to create lean final images.
- Static vs. Dynamic: Static sites can be served via Nginx. Dynamic routes in Astro require the Node.js adapter and a Node.js runtime in the Docker container.
- Docker Compose: `docker-compose.yml` simplifies starting and managing your Docker containers on the server.
- Deployment Workflow: Manage code with Git (e.g., on GitHub), clone it on the server, and start with `docker compose up -d`.
- Server Setup: Docker, Docker Compose, and Git must be installed on the VPS.
The Tools: Preparing Your Development Environment
Before diving into Astro and Docker, we need a solid foundation. These three tools are essential:
- Text Editor: Visual Studio Code (VS Code) is an excellent choice. It's free, extremely powerful, and has a huge community with countless extensions that facilitate web development.
- Node.js: Astro is based on Node.js. Instead of installing Node.js directly, I strongly recommend using a version manager.
  - For Windows: Use `nvm-windows`. This avoids many permission issues and allows you to easily switch between different Node.js versions if projects require different ones.
  - For Linux/macOS: Use the original `nvm` (Node Version Manager). Installation is usually done via a curl or wget script (an example follows below).

nvm install --lts # Installs the latest Long-Term Support version (nvm-windows: 'nvm install lts')
nvm use --lts # Activates the installed LTS version (nvm-windows: 'nvm use lts')
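If `nvm` itself isn't installed yet on Linux/macOS, the install script is usually fetched with curl, roughly like this (the version tag in the URL is only an example; check the nvm repository for the current release):

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
# Reload your shell configuration afterwards (or open a new terminal) so the 'nvm' command is available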
- Git: The standard tool for version control. Indispensable for tracking changes and getting the code onto the server. Download Git.

# During installation on Windows: Choose VS Code as the default editor for Git.
# Ensure the Default Branch Name is set to "main".

After installing these tools, open your terminal (in VS Code: Ctrl+Shift+` or Terminal > New Terminal) and check the installations:

node -v
npm -v
git --version
nvm --version # For nvm-windows, use 'nvm version' or 'nvm list'
Creating Your First Astro Project
In your terminal, navigate to the folder where you want your projects to reside and create a new Astro project:
# Choose a simple template to start
npm create astro@latest -- --template minimal my-astro-project
# Change into the project directory
cd my-astro-project
# Install dependencies (often done during creation)
npm install
# Start the development server
npm run dev
Your browser should open (or you can manually open http://localhost:4321) and display the default Astro page. Now, if you change code in your project (e.g., in `src/pages/index.astro`) and save, the page in the browser will update automatically (Hot Module Replacement).
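For orientation, the minimal template produces roughly this structure (details can vary slightly between Astro versions):

my-astro-project/
├── public/            # Static assets copied as-is (favicon, images, ...)
├── src/
│   └── pages/
│       └── index.astro
├── astro.config.mjs
├── package.json
└── tsconfig.json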
Understanding Astro: Building Blocks of Your Site
Astro projects are clearly structured. The most important folders and concepts:
- `src/pages/`: This is where your pages live. Each `.astro`, `.md`, or `.html` file here becomes a route on your website. `index.astro` becomes `/`, `about.astro` becomes `/about`.
- `src/layouts/`: Reusable page structures. A typical layout (`BaseLayout.astro`) contains the basic HTML structure (`<html>`, `<head>`, `<body>`), perhaps a header and footer, and a `<slot />`.
- `<slot />`: This is the placeholder in the layout where the specific content of a page is inserted. Each page using the layout "fills" this slot.
- `src/components/`: Reusable UI elements like buttons, navigation bars, cards, etc. These can be imported and used in pages and layouts (`Header.astro`, `Footer.astro` are good examples).
- Styling:
  - Global: CSS files (e.g., `src/styles/global.css`) can be imported into layouts to define styles for the entire site (e.g., body styling, CSS variables, resets).
  - Scoped: Within an `.astro` file, you can use `<style>` tags. By default, the styles here apply only to the HTML in the same file. This prevents conflicts between components.
---
// Example: src/layouts/BaseLayout.astro
import Header from "../components/Header.astro";
import Footer from "../components/Footer.astro";
import "../styles/global.css"; // Import global CSS
interface Props {
title: string;
description?: string; // Optional
}
const { title, description = "My Awesome Astro Site" } = Astro.props;
---
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta name="description" content={description} />
    <link rel="icon" type="image/svg+xml" href="/favicon.svg" />
    <title>{title} | Astro Demo</title>
  </head>
  <body>
    <Header />
    <main class="container">
      <slot />
      {/* Content from pages goes here */}
    </main>
    <Footer />
  </body>
</html>
<style>
.container {
max-width: 1100px;
margin-left: auto;
margin-right: auto;
padding-left: 1rem;
padding-right: 1rem;
flex-grow: 1; /* Ensures main content pushes footer down */
}
body {
display: flex;
flex-direction: column;
min-height: 100vh;
}
</style>
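To see how a page fills the layout's `<slot />`, here is a minimal sketch of what `src/pages/index.astro` could look like (the heading and paragraph text are just placeholders):

---
// src/pages/index.astro
import BaseLayout from "../layouts/BaseLayout.astro";
---

<BaseLayout title="Home" description="Welcome to my Astro site">
  <h1>Hello, Astro!</h1>
  <p>This content is rendered into the layout's slot.</p>
</BaseLayout>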
Static or Dynamic? The Build Process
Astro can do both: generate highly optimized static pages or render dynamic pages on a server.
The Static Build
This is the default in Astro. When you run `npm run build`, Astro analyzes your project and generates pure HTML, CSS, and JavaScript. The result lands in the `dist/` folder.
npm run build
This `dist/` folder contains everything you need to deploy your website on any static web host (like an Nginx server in Docker). No Node.js server is needed at runtime anymore.
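Before containerizing anything, you can sanity-check the build output locally; the standard Astro templates ship a `preview` script for exactly this:

npm run preview # Serves the freshly built dist/ folder locally so you can click through the production output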
Dynamic Routes & The Node Adapter
What if a page needs dynamic data that can change with each request (e.g., user information from a header)? This is where Server-Side Rendering (SSR) comes in.
- Install Adapter: To enable SSR with Node.js, you need the Astro Node.js adapter.

npx astro add node

This adjusts your `astro.config.mjs` and installs the necessary packages.

- Disable Prerendering: For the dynamic page (e.g., `src/pages/request-info.astro`), you need to explicitly turn off prerendering:

---
// src/pages/request-info.astro
import BaseLayout from "../layouts/BaseLayout.astro";

export const prerender = false; // Opt-out of static generation

// Access request headers (only available in SSR mode)
const userAgent = Astro.request.headers.get("user-agent") || "Unknown";
---

<BaseLayout title="Request Info">
  <h1>Your Request Info</h1>
  <p>Your User-Agent: {userAgent}</p>
</BaseLayout>

- Build Result: Now, when you run `npm run build`, the `dist/` folder contains not only static assets (`client/`) but also server code (`server/entry.mjs`). This server code needs to be executed with Node.js at runtime.
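After `npx astro add node`, the adapter shows up in your config. A rough sketch of what `astro.config.mjs` typically ends up looking like (exact options depend on your Astro version; `standalone` mode is what the dynamic Dockerfile below assumes):

// astro.config.mjs
import { defineConfig } from "astro/config";
import node from "@astrojs/node";

export default defineConfig({
  // Depending on your Astro version you may also set output: "server" (or "hybrid")
  // to make on-demand rendering the default instead of opting out per page.
  adapter: node({
    mode: "standalone", // Bundles a small HTTP server into dist/server/entry.mjs
  }),
});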
Docker Time: Your App in a Container
Docker packages your application and all its dependencies into an isolated container. This makes deployment extremely reliable – “it works on my machine” becomes “it runs anywhere Docker runs.”
Why Docker?
- Consistency: Your app runs in the same environment, whether locally or on the server.
- Isolation: No conflicts with other applications on the server.
- Repeatability: Easy creation and distribution of identical environments.
- Scalability: Easier scaling up and down of your application.
The `.dockerignore`: What Doesn't Belong Inside
Similar to `.gitignore`, this file tells Docker which files and folders should not be copied into the image. This keeps images small and speeds up the build process.
# .dockerignore
node_modules
npm-debug.log
dist
.astro
.env*
*.env
.git
.vscode
Dockerfile*
docker-compose*
README.md
Scenario 1: Static Astro Site with Nginx
For a purely static site, we use a multi-stage Docker build:
- Build Stage: Uses a Node.js image to execute `npm install` and `npm run build`.
- Runtime Stage: Uses a lean Nginx image and copies only the finished `dist/` folder from the build stage and an Nginx configuration file into it.
`Dockerfile` (for static site):
# Dockerfile (for static site)
# ---- Build Stage ----
# Use an official Node.js image as the base for building
# 'alpine' versions are smaller
FROM node:22-alpine AS builder
# Set the working directory inside the container
WORKDIR /app
# Copy package.json and lock file first to leverage Docker cache
COPY package*.json ./
# Install project dependencies
RUN npm install
# Copy the rest of the application code (respects .dockerignore)
COPY . .
# Build the Astro site for production
RUN npm run build
# The static files are now in /app/dist
# ---- Runtime Stage ----
# Use an official Nginx image as the base for serving
FROM nginx:1.27-alpine AS runtime
# Set the working directory for Nginx files
WORKDIR /usr/share/nginx/html
# Remove default Nginx welcome page
RUN rm -rf ./*
# Copy the built static files from the 'builder' stage
COPY --from=builder /app/dist .
# Copy our custom Nginx configuration
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Expose port 80 (standard HTTP port)
EXPOSE 80
# Command to run Nginx in the foreground when the container starts
CMD ["nginx", "-g", "daemon off;"]
`nginx.conf` (Example):
This file configures Nginx to serve the static files, enable Gzip compression, and set caching headers. It also ensures that a 404 error (file not found) serves `index.html`, which can be useful for Single-Page Application (SPA)-like routing in Astro.
# /nginx.conf
server {
listen 80;
server_name _; # Accepts any hostname
root /usr/share/nginx/html; # Root directory for static files
# Standard index file
index index.html;
# Error page for SPA routing (optional, but often useful)
# If a file is not found, try serving index.html instead
error_page 404 /index.html;
# Enable Gzip compression for text-based files
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml application/json application/javascript application/xml+rss application/atom+xml image/svg+xml;
location / {
# Try serving the requested file, then directory, then fallback to index.html
try_files $uri $uri/ /index.html;
# Set long cache expiration for static assets (e.g., 1 year)
location ~* \.(?:css|js|svg|gif|png|jpg|jpeg|webp|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public";
access_log off; # Optional: Disable logging for static assets
}
}
# Deny access to hidden files (like .htaccess)
location ~ /\. {
deny all;
}
}
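Before touching the server, it's worth building and running the image locally once. A quick smoke test might look like this (the image tag is just an example and matches the name used in the Compose file later):

# Build the image from the Dockerfile in the current directory
docker build -t my-astro-project:latest .

# Run it and map local port 8080 to the container's port 80, then open http://localhost:8080
docker run --rm -p 8080:80 my-astro-project:latest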
Scenario 2: Dynamic Astro Site with Node.js
If you're using the Node adapter, you need Node.js at runtime. The `Dockerfile` is simpler since we don't need a separate Nginx stage.
`Dockerfile` (for dynamic site):
# Dockerfile (for dynamic site with Node adapter)
# ---- Build Stage ----
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Build produces output in /app/dist (including server/entry.mjs)
# ---- Runtime Stage ----
# Use the same Node.js version for runtime
FROM node:22-alpine AS runtime
WORKDIR /app
# Set environment variable to signal production mode
ENV NODE_ENV=production
# Copy only the necessary build artifacts and production node_modules
# Note: You might need 'npm ci --omit=dev' in the build stage for smaller node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
# Astro's Node adapter typically runs on port 4321 by default
EXPOSE 4321
# Set HOST env var to make the server listen on all interfaces within the container
ENV HOST="0.0.0.0"
# Set PORT env var (Astro Node adapter reads this)
ENV PORT="4321"
# Command to start the Node.js server produced by the Astro build
CMD ["node", "dist/server/entry.mjs"]
Orchestration with Docker Compose
`docker-compose` is a tool to define and run multi-container applications. Even though we only have one container here, it greatly simplifies starting and configuration.
`docker-compose.yml`:
services:
  astro-app:
    build:
      context: . # Use the current directory as the build context
      dockerfile: Dockerfile # Specify the Dockerfile name
    image: my-astro-project:latest # Name the built image
    container_name: my-astro-container # Name the running container
    ports:
      # Example for STATIC site (Nginx on port 80 inside container)
      - "80:80" # Map host port 80 to container port 80
      # Example for DYNAMIC site (Node adapter on port 4321 inside container)
      # - "4321:4321" # Map host port 4321 to container port 4321
      # - "80:4321" # Map host port 80 to container port 4321 if you want access on default HTTP port
    restart: unless-stopped # Restart policy
Choose the appropriate `ports` mapping depending on whether you want to deploy the static (Nginx, port 80) or dynamic (Node.js, e.g., port 4321) version. The host port (the left number) is the port you access in your browser to reach the app.
Off to the Server: The Deployment
Now let’s bring it all together and deploy the application on a VPS.
1. Server Preparation (VPS, e.g., Hetzner)
- Create a VPS with a provider of your choice (e.g., Hetzner Cloud, DigitalOcean, Linode). Choose an operating system like Ubuntu 24.04.
- Connect to your server via SSH (`ssh root@YOUR_SERVER_IP`).
- Install Docker: Follow the official Docker documentation for your OS (usually involves a few `apt update`, `apt install` commands, and adding the Docker repository).
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
# Install Docker Engine, CLI, Containerd, and Buildx/Compose plugins
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
- Install Docker Compose (if not installed as a plugin): Follow the official documentation (often a `curl` command to download the binary and make it executable). Note: The command above usually installs the `docker-compose-plugin`, allowing `docker compose` (with a space). If you need the standalone `docker-compose` (hyphen), follow separate instructions.

# Check if plugin works (preferred)
docker compose version

# Example for standalone install (if needed, check official docs for latest version!)
# sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.6/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# sudo chmod +x /usr/local/bin/docker-compose
# docker-compose --version # Check standalone version

- Install Git:

sudo apt update
sudo apt install git -y
git --version # Check
2. Get the Code (Git Clone)
Clone your project repository (which you previously pushed to GitHub, for example) onto the server:
git clone YOUR_REPOSITORY_URL
cd your-project-folder
3. The Launch (`docker compose up -d`)
Now comes the magic moment. In the project folder on the server, where your `docker-compose.yml` and `Dockerfile` reside, execute:
# Use 'docker compose' (with space) if using the plugin
sudo docker compose up --build -d
- `sudo`: Docker commands often require root privileges (or add your user to the `docker` group).
- `docker compose up`: Starts the services defined in `docker-compose.yml`.
- `--build`: Forces a rebuild of the image if the `Dockerfile` or code has changed. Necessary the first time.
- `-d`: Starts the containers in "detached" mode (in the background).
Docker will now build your image (this might take a while the first time) and start the container.
Now open `http://YOUR_SERVER_IP:<Host-Port>` (where `<Host-Port>` is the port you specified on the left side of the `ports` mapping in `docker-compose.yml`, e.g., 80 or 4321) in your browser. Your Astro website should be live!
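If the page doesn't come up, checking the container status and logs usually tells you why. Something along these lines (`astro-app` is the service name from the Compose file above):

# List running containers and their port mappings
sudo docker ps

# Follow the logs of the Astro service
sudo docker compose logs -f astro-app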
Important Details & Tips
- Port Mapping: The `Host-Port:Container-Port` mapping is crucial. If you want your site to be accessible directly via the IP (or later a domain) without specifying a port, map to host port 80 (`"80:80"` for Nginx or `"80:4321"` for the Node adapter). Make sure no other service on the server is blocking port 80.
- Multi-Stage Build Advantages: The main advantage is a drastically smaller final Docker image. The runtime stage no longer contains build tools (Node.js, npm, source code, etc.), only the essentials for execution. This saves disk space and improves security.
- Updates: To update your site:
  - Make changes locally, test, commit, and push (Git).
  - Log into the server via SSH, navigate to the project folder.
  - Run `git pull` to fetch the latest changes.
  - Run `sudo docker compose up --build -d` again to rebuild the image and restart the container with the new code.
Conclusion
Congratulations! You’ve successfully deployed an Astro.js website using Docker on a VPS. This setup allows you to host both static and dynamic websites efficiently. With Docker, you can ensure that your application runs consistently across different environments, making it easier to manage and scale.