From Single Server to Multi-VM Infrastructure
Level 2 – Scaling the WordPress Ansible Deployment
In the previous article, we deployed a WordPress website using Ansible on a single virtual machine.
That first architecture was intentionally simple:
- 1 VM
- Docker
- WordPress container
- MariaDB container
- Caddy reverse proxy
Everything ran on the same host.
While this setup works well for learning automation, it does not reflect how infrastructure is usually organized in production.
In Level 2, we evolve toward a multi-server architecture that better resembles real DevOps environments.
Level 1 Architecture Recap
In Level 1, all components ran on the same machine.
Internet
    │
    ▼
Caddy Reverse Proxy
    │
    ▼
WordPress container
    │
    ▼
MariaDB container
The advantages were simplicity and ease of deployment.
However, it had several limitations:
- the reverse proxy and application share the same host
- difficult to scale multiple websites
- limited separation between infrastructure components
To move closer to a real production environment, we redesigned the architecture.
Level 2 Architecture
In Level 2, the infrastructure is split across multiple machines.
- 1 VM dedicated to the reverse proxy
- 1 VM per WordPress site
Example architecture:
              Internet
                  │
                  ▼
           VM-PROXY (Caddy)
             Reverse Proxy
               /      \
              /        \
             ▼          ▼
        VM-WP1         VM-WP2
  WordPress Site 1   WordPress Site 2
This design introduces several improvements:
- clear separation between edge proxy and application servers
- easier scalability
- more realistic infrastructure layout
From an automation perspective, this required changes to our Ansible roles.
The two roles that evolved the most are:
- caddy_proxy
- wordpress_backend
Changes in the Ansible Inventory
Before looking at the roles, the first change appears in the inventory structure.
Level 1 only had one host.
Level 2 introduces host groups.
Example:
[proxy]
vm-proxy ansible_host=10.0.0.10

[wordpress]
vm-wp1 ansible_host=10.0.0.11
vm-wp2 ansible_host=10.0.0.12
This allows Ansible to apply different roles depending on the host type.
Example playbook:
- hosts: proxy
  roles:
    - caddy_proxy

- hosts: wordpress
  roles:
    - wordpress_backend
Now the reverse proxy and WordPress servers are managed independently.
Evolution of the wordpress_backend Role
In Level 1, the WordPress role had two responsibilities:
- deploy WordPress
- expose it through the local reverse proxy
In Level 2, the role focuses only on deploying the application backend.
The reverse proxy is now external.
Level 1 WordPress Exposure
In Level 1, WordPress could be accessed locally by Caddy.
Example configuration:
ports:
  - "8080:80"
Caddy would proxy traffic to:
localhost:8080
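In Caddyfile terms, the Level 1 routing could look like this minimal sketch (the domain is illustrative):

```
example.com {
    reverse_proxy localhost:8080
}
```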
Level 2 Backend Exposure
In Level 2, WordPress exposes its HTTP port only to the internal network.
Example:
ports:
  - "8080:80"
But now Caddy connects using the private VM IP.
Example:
http://10.0.0.11:8080
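A hand-written Level 2 equivalent would therefore point at the backend VM instead of localhost (domain and IP illustrative):

```
site1.example.com {
    reverse_proxy 10.0.0.11:8080
}
```

Later in this article, this block is generated automatically rather than written by hand.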
The backend role therefore becomes simpler.
Responsibilities:
- deploy the MariaDB container
- deploy the WordPress container
- expose the application port
- ensure containers restart automatically
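The responsibilities above can be sketched as a minimal tasks file, assuming the community.docker collection is installed and a compose.yml.j2 template exists in the role (paths are illustrative):

```yaml
# roles/wordpress_backend/tasks/main.yml (sketch)
- name: Create application directory
  ansible.builtin.file:
    path: /opt/wordpress
    state: directory
    mode: "0755"

- name: Render the Compose file from the template
  ansible.builtin.template:
    src: compose.yml.j2
    dest: /opt/wordpress/compose.yml

- name: Start the WordPress stack
  community.docker.docker_compose_v2:
    project_src: /opt/wordpress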
Role Structure
roles/wordpress_backend/
├── tasks/
│   └── main.yml
└── templates/
    └── compose.yml.j2
The template now includes variables describing the site.
Example:
services:
  db:
    image: mariadb:11
    environment:
      # Jinja2 variable names below are illustrative
      MYSQL_DATABASE: "{{ wp_db_name }}"
      MYSQL_USER: "{{ wp_db_user }}"
      MYSQL_PASSWORD: "{{ wp_db_password }}"

  wordpress:
    image: wordpress
    ports:
      - "{{ wp_http_port }}:80"
Each VM therefore runs an independent WordPress stack.
Evolution of the caddy_proxy Role
The biggest architectural change occurs in the Caddy role.
In Level 1, Caddy proxied to localhost.
In Level 2, Caddy must route traffic to multiple backend servers.
This required making the configuration dynamic.
Using a Site List Variable
Instead of defining one static site, we introduce a list of sites in the variables.
Example:
sites:
  - name: site1
    domain: site1.example.com
    backend_host: 10.0.0.11
    backend_port: 8080
  - name: site2
    domain: site2.example.com
    backend_host: 10.0.0.12
    backend_port: 8080
This allows Ansible to generate the proxy configuration automatically.
Dynamic Caddyfile Template
The Caddy configuration is generated using a Jinja2 template.
This template loops over all defined sites and creates a routing block for each domain.
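A minimal sketch of such a template, iterating over the sites variable shown above (the file name, e.g. Caddyfile.j2, is an assumption):

```
{% for site in sites %}
{{ site.domain }} {
    reverse_proxy {{ site.backend_host }}:{{ site.backend_port }}
}
{% endfor %}
```

Each entry in the list becomes one routing block, so the Caddyfile always mirrors the inventory of sites.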
The benefits are significant:
- adding a new website only requires adding a variable entry
- the proxy configuration is generated automatically
- no manual configuration changes are required
Automatic Handling of Multiple Domains
With this system, adding a new WordPress site becomes extremely simple.
Steps:
- create a new WordPress VM
- add it to the inventory
- add a new entry in the sites variable
Example:
- name: site3
  domain: blog.example.com
  backend_host: 10.0.0.13
  backend_port: 8080
When the playbook runs again:
- the WordPress stack is deployed
- Caddy automatically generates the new routing rule
- HTTPS certificates are obtained automatically
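Re-running the deployment can then be a single command (the inventory and playbook file names are assumptions):

```
ansible-playbook -i inventory.ini site.yml
```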
Benefits of the Level 2 Architecture
This evolution introduces several DevOps best practices:
Separation of responsibilities
Proxy and applications run on different servers.
Scalability
New sites can be added easily.
Reusability
Roles are now modular and reusable.
Automation
Infrastructure changes are driven entirely by code.
Preparing for Level 3
Level 2 already resembles a small production infrastructure.
However, several improvements are still possible:
- centralized logging
- monitoring with Prometheus and Grafana
- CI pipelines for Ansible
- automated deployments
These topics will be covered in Level 3, where we introduce observability and infrastructure monitoring.