This Docker tutorial explains how to run a PHP application using Apache and real SSL certificates on any Windows, macOS, or Linux development PC.
PHP may not be the trendiest technology but it’s used by many developers and projects. According to W3Techs, PHP is used on 78% of all websites. That may be an underestimate since sites may not – and ideally shouldn’t – announce their stack. A more reliable statistic is that WordPress powers 43% of the web and the CMS uses PHP.
I rarely embark on new PHP projects but have many legacy sites and apps with folders full of .php files. Installing PHP can be time-consuming and error-prone. There are various versions and you’ll encounter further complexities when integrating PHP with a web server such as Apache to match a real hosting solution.
Additionally, Windows users are offered a confusing array of options although the situation is about to become easier – Microsoft is dropping PHP support in Windows:
“We are not going to be supporting PHP for Windows in any capacity for version 8.0 and beyond.”
Dale Hirt, Microsoft
Someone is likely to compile Windows editions and the Windows Subsystem for Linux provides another option. However, the point remains that maintaining one or more PHP development environments can be difficult…
…unless you use Docker.
Docker is a tool that can install, configure, and manage software. It runs executables inside isolated wrappers known as containers. Containers are launched from pre-configured images – snapshots of an executable and its libraries.
My “Docker for Web Developers” book and video course concisely explains how to adopt Docker for your new and existing projects.
Docker provides pre-built Apache and PHP images which can be downloaded and run on any OS where Docker is installed (see the Docker installation instructions).
The following sections describe how to prepare a Docker development environment which can execute PHP files located on your host PC.
Web apps use HTTPS to ensure communication between the client and the server is encrypted and cannot be intercepted. Google also penalizes content sites which remain on HTTP.
For local development, developers either:
Use HTTP
This means the local and production versions are different. It can be more difficult to spot problems such as linking to insecure assets.
Or use a (fake) self-signed certificate
This is closer to the production version but the browser still treats requests differently. For example, fake SSL assets are not cached.
A third lesser-known option is mkcert. This creates a new locally-trusted authority and SSL certificates. As far as the browser is concerned, the HTTPS connection is fully secure despite running on a local domain.
Configuring certificates need only be done once and creating them on your local machine will also work in Docker containers or WSL2. Follow the mkcert installation instructions then install a new local certificate authority in your browsers:
mkcert -install
Firefox requires some additional configuration:
Locate the rootCA.pem file by entering mkcert -CAROOT in your terminal. In Firefox, open Settings > Privacy & Security > View Certificates, import the rootCA.pem file under the Authorities tab, and restart the browser.
Now create locally-trusted development certificates for your development domain:
mkcert localhost 127.0.0.1 ::1
It’s easier to use localhost, but you can create any domain name as long as it is referenced in your hosts file.
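For example – using a hypothetical mysite.local domain – the certificate and hosts entry could be created like this (the hosts file lives at /etc/hosts on Linux/macOS and C:\Windows\System32\drivers\etc\hosts on Windows):

```shell
# mkcert names the generated files after the first domain passed to it
mkcert mysite.local
# creates mysite.local.pem and mysite.local-key.pem

# map the domain to the loopback address (Linux/macOS)
echo "127.0.0.1 mysite.local" | sudo tee -a /etc/hosts
```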
Rename the generated files: cert.pem for the SSL certificate, and cert-key.pem for the SSL certificate key file.
Create a directory somewhere on your system, e.g. dockerphp, and copy the two .pem files into an ssl sub-directory inside it (the Dockerfile copies them from ./ssl/).
Create a file named 000-default.conf in an apache sub-directory with the following Apache HTTP and HTTPS configuration. This sets the web root to /var/www/html and references the SSL certificates you created with mkcert:
<VirtualHost *:80>
ServerAdmin admin@localhost
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
<VirtualHost *:443>
SSLEngine on
SSLCertificateFile /etc/apache2/ssl/cert.pem
SSLCertificateKeyFile /etc/apache2/ssl/cert-key.pem
ServerAdmin admin@localhost
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Create a file named Dockerfile
in your directory and add the following content to build a PHP and Apache image. You can choose from dozens of starting images at Docker Hub but this example uses php:8-apache
which has the latest version of PHP 8 on Apache 2.4:
FROM php:8-apache
RUN a2enmod ssl && a2enmod rewrite
RUN mkdir -p /etc/apache2/ssl
RUN mv "$PHP_INI_DIR/php.ini-development" "$PHP_INI_DIR/php.ini"
COPY ./ssl/*.pem /etc/apache2/ssl/
COPY ./apache/000-default.conf /etc/apache2/sites-available/000-default.conf
EXPOSE 80
EXPOSE 443
The Dockerfile:

- enables the Apache SSL and rewrite modules
- switches to the development php.ini so errors and warnings are shown
- creates an /etc/apache2/ssl directory and copies the SSL .pem certificate files created above into it
- copies the 000-default.conf Apache site configuration
- exposes ports 80 and 443

If necessary, you can define your own php.ini file and COPY it into the image at /usr/local/etc/php/php.ini.
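As a sketch, such a COPY instruction might look like this (my-php.ini is a hypothetical file in your build context):

```dockerfile
# Hypothetical: replace the image's php.ini with a custom one
COPY ./my-php.ini /usr/local/etc/php/php.ini
```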
Note: the separate Dockerfile RUN commands can be merged onto one line and separated with &&. This produces fewer image layers, making the Docker build process faster and more efficient, although the code is more difficult to read.
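For example, the RUN steps from the Dockerfile above could be merged into a single layer:

```dockerfile
RUN a2enmod ssl && a2enmod rewrite && \
    mkdir -p /etc/apache2/ssl && \
    mv "$PHP_INI_DIR/php.ini-development" "$PHP_INI_DIR/php.ini"
```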
Build a Docker image named php8
from your Dockerfile
by navigating to the directory in a terminal and entering:
docker image build -t php8 .
(The last .
period is important!)
Assuming you don’t have errors, a new Docker image will be built. Run docker image ls
to see php8
in the list of images.
You can now start a Docker container from the php8
image. Navigate to any directory containing a PHP project and run the following docker
command:
docker run \
-it --rm \
-p 8080:80 -p 443:443 \
--name php8site \
-v "$PWD":/var/www/html \
php8
Windows PowerShell users must remove the line-breaks and \ backslashes from the command. Additionally, $PWD references the current directory on Linux and macOS. This cannot be used on Windows, so the full path must be specified in Linux notation, e.g.
-v /c/projects/mysite:/var/www/html
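Putting that together, the full single-line command for PowerShell would look something like this (the project path is an example):

```shell
docker run -it --rm -p 8080:80 -p 443:443 --name php8site -v /c/projects/mysite:/var/www/html php8
```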
The container will continue to run until it is stopped with Ctrl | Cmd + C.
Alternately, you may find it easier to launch the container with Docker Compose. Create a new docker-compose.yml
file in the PHP project directory with the following content:
version: '3'

services:

  php8site:
    image: php8
    container_name: php8site
    volumes:
      - ./:/var/www/html
    ports:
      - "8080:80"
      - "443:443"
The Apache/PHP container can then be launched from that directory with:
docker-compose up
and stopped in another terminal with:
docker-compose down
The host directory where the Docker container is launched is bind-mounted into the container at the Apache /var/www/html
root. The standard port 443
is available for HTTPS connections and port 8080
forwards to HTTP port 80
to avoid conflicts with applications such as Skype.
You can test PHP execution with an example index.php
file:
<?php
phpinfo();
Launch it in your browser at http://localhost:8080/
or https://localhost/
. The HTTPS version will use the mkcert SSL but, unlike self-signed certificates, the browser will not throw a security alert.
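You can also verify the certificate from the terminal while the container is running – a quick check, assuming curl and openssl are installed:

```shell
# Fetch the HTTPS response headers (no security warning expected)
curl -I https://localhost/

# Show the subject and issuer of the certificate Apache serves
openssl s_client -connect localhost:443 -servername localhost </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```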
A little knowledge of Docker is all that’s required to create a secure Apache and PHP development environment. The benefits:
Finally…
You’re not limited to PHP and Apache! Docker can manage whatever server, language runtimes, databases, or other software dependencies your project needs.
This is the third chapter of the “Docker for Web Developers” book. It explains how to install Docker on all popular operating systems. The full course can be purchased from DockerWebDev.com.
Docker can be installed on Linux, macOS, or Windows.
Requirements and installation instructions can be found on the Docker Docs help pages.
Docker Desktop for Linux can be downloaded from Docker Hub. The installer includes the Docker server, CLI, Docker Compose, Docker Swarm, and Kubernetes.
Alternatively, the Docker command-line tool is available in official Linux repositories although these are often older editions. The latest edition is supported on recent 64-bit editions of popular Linux distros:
Static binaries are available for other distros, although Googling “install Docker on [your OS]” may provide easier instructions, e.g. “install Docker on a Raspberry Pi”.
Follow the Docker documentation for your distro. For example, Docker for Ubuntu is installed with the following commands:
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
Convenience scripts are also available to run these commands for you, but the Docker documentation warns they are a security risk and should not be used in production environments:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
To run Docker commands as a non-root user (without sudo), create and add yourself to a docker group:
sudo groupadd docker
sudo usermod -aG docker $USER
Then reboot to apply all changes.
Docker Desktop for macOS 10.13 (High Sierra) and above can be downloaded from Docker Hub. The package includes the Docker server, CLI, Docker Compose, Docker Swarm, and Kubernetes.
Two editions are available: stable and edge with experimental features. The stable version is best for most developers.
Double-click Docker.dmg
to open the installer, then drag the Docker icon to the Applications folder. Double-click Docker.app in that folder to launch Docker.
After completion, the whale icon in the status bar indicates Docker is running and commands can be entered in the terminal.
Docker Desktop for Windows requires either WSL2 or Hyper-V.
WSL allows you to run full Linux environments directly on Windows 10 or Windows 11.
IMPORTANT!
You cannot install the Linux edition of Docker within a WSL-powered Linux distro. You must install Docker Desktop for Windows, which allows Docker commands to be run in all Windows and Linux terminals.
WSL2 is the recommended default option for Docker on Windows. It is faster than Hyper-V and available in all editions of Windows 11 and Windows 10 from the May 2020 update (version 2004, OS build 19041).
Docker cannot be installed on Windows in S mode, but you can normally switch to Windows Home at no additional cost in the Settings.
To install WSL2:
Enable hardware virtualization support in your BIOS.
This will be active on most devices, but check by rebooting and accessing your PC’s BIOS panels – typically by hitting DEL, F2, or F10 as your system starts. Look for Virtualization Technology, VTx or similar options. Ensure they are enabled, save, and reboot.
WARNING! Be careful when changing BIOS settings – one wrong move could trash your PC.
Enable the Virtual Machine Platform and Windows Subsystem for Linux options in the Turn Windows features on or off panel:
This can be accessed by hitting the Start button and typing the panel name or from Programs and Features in the classic Control Panel.
Reboot, then enter the following command in a Windows PowerShell or cmd
prompt to set WSL2 as the default:
wsl --set-default-version 2
Download and install your preferred distro by searching for “Linux” in the Microsoft Store app. Ubuntu is a good choice.
To complete the installation, launch your distro by clicking its Store’s Launch button or choosing its icon from the Start menu.
You may be prompted to install a kernel update – follow the instructions and launch the distro again.
Enter a Linux username and password. These are separate from your Windows credentials although choosing the same ones can be practical.
Ensure your distro is up-to-date. For example, on an Ubuntu bash prompt enter:
sudo apt update && sudo apt upgrade
You can now install Docker Desktop (see below). For the best performance and stability, store development files in your Linux file system and run Docker from your Linux terminal.
More information about installing and using WSL2:
The Microsoft Hyper-V hypervisor is provided free with Windows 10 and 11 Professional and Enterprise. (Windows Home users must use WSL2.)
To install Hyper-V:
Enable hardware virtualization support in your BIOS.
This will be active on most devices, but check by rebooting and accessing your PC’s BIOS panels – typically by hitting DEL, F2, or F10 as your system starts. Look for Virtualization Technology, VTx or similar options. Ensure they are enabled, save, and reboot.
WARNING! Be careful when changing BIOS settings – one wrong move could trash your PC.
Enable the Hyper-V option in the Turn Windows features on or off panel then reboot.
This can be accessed by hitting the Start button and typing the panel name or from Programs and Features in the classic Control Panel.
You can now install Docker Desktop.
Docker Desktop for Windows 10 and 11 can be downloaded from Docker Hub. The installer includes the Docker server, CLI, Docker Compose, Docker Swarm, and Kubernetes.
Two editions are available: stable and edge with experimental features. The stable version is best for most developers.
Double-click Docker Desktop Installer.exe
to start the installation process. After completion and launch, the whale icon in the notification area of the taskbar indicates Docker is running and ready to accept commands in the Windows PowerShell/cmd
terminal (and Linux if using WSL2).
Docker uses WSL2 as the default engine when available. You will be prompted to confirm this choice during installation and after WSL2 is installed.
Alternatively, WSL2 can be enabled by checking Use the WSL 2 based engine in the General tab of Settings accessed from the Docker task bar icon. Unchecking the option reverts to Hyper-V.
When using WSL2, at least one Linux distro must be enabled – the default is chosen. You can also permit Docker commands in other distros by accessing the WSL integration panel in the Resources section of the Docker Settings:
When using Hyper-V, Docker must be granted access to the Windows file system. Select the drives it is permitted to use by accessing the File Sharing panel in the Resources section of the Docker Settings:
(This option was named Shared Drives in previous editions of Docker Desktop.)
Check Docker has successfully installed by entering the following command in your terminal:
docker version
A response similar to the following is displayed:
Client: Docker Engine - Community
Version: 19.03.12
API version: 1.40
Go version: go1.13.10
Git commit: abcdef0
Built: Mon Jun 22 15:45:36 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.12
API version: 1.40 (minimum version 1.12)
...etc...
Ensure Docker Compose is working by entering:
docker-compose version
To receive something like:
docker-compose version 1.27.2, build 8d51620a
docker-py version: 4.3.1
CPython version: 3.7.7
OpenSSL version: OpenSSL 1.1.1c 10 Sep 2019
Optionally, try entering:
docker run hello-world
to verify Docker can pull an image from Docker Hub and start containers as expected…
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:f9dfddf63636d84ef479d645ab5885156ae030f611a56f3a7ac
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows your installation appears to be working correctly.
What you’ve learned in this chapter:
The following chapters demonstrate how to use Docker during development…
…but to continue reading, you need to buy the book.
More than 40% of all sites run the WordPress Content Management System. As a web developer, you’re almost certain to have encountered it.
WordPress requires Apache, PHP, MySQL, and the WordPress source code. A lot of dependencies are required for your development environment.
You could choose to install the applications:
These options take time and there’s no guarantee you’ll be able to match the versions of each dependency used on your live server. You may also encounter issues running two or more sites, especially if you require different editions of PHP or MySQL.
Docker solves WordPress woes. It can:
Make sure you have Docker and Docker Compose installed then create a new project directory, e.g.
mkdir wpsite
cd wpsite
Create a new file named docker-compose.yml
with the following content:
version: '3'

services:

  mysql:
    image: mysql:5
    container_name: mysql
    environment:
      - MYSQL_DATABASE=wpdb
      - MYSQL_USER=wpuser
      - MYSQL_PASSWORD=wpsecret
      - MYSQL_ROOT_PASSWORD=mysecret
    volumes:
      - wpdata:/var/lib/mysql
    ports:
      - "3306:3306"
    networks:
      - wpnet
    restart: on-failure

  wordpress:
    image: wordpress
    container_name: wordpress
    depends_on:
      - mysql
    environment:
      - WORDPRESS_DB_HOST=mysql
      - WORDPRESS_DB_NAME=wpdb
      - WORDPRESS_DB_USER=wpuser
      - WORDPRESS_DB_PASSWORD=wpsecret
    volumes:
      - wpfiles:/var/www/html
      - ./wp-content:/var/www/html/wp-content
    ports:
      - "8001:80"
    networks:
      - wpnet
    restart: on-failure

volumes:
  wpdata:
  wpfiles:

networks:
  wpnet:
(The indentation is important – YAML files must be indented with spaces, not tabs. Port 8001 can be changed if it conflicts with another application.)
Now run docker-compose up
from your terminal to launch WordPress. It will take several minutes on the first run since all dependencies are downloaded and initialized.
A new wp-content
sub-directory will appear in your project folder. This contains the WordPress theme and plugin code you can edit and test. Those using Linux, macOS, and Windows WSL2 will find it’s been created by the root
user. Grant read and write privileges to you and WordPress by running this command in another terminal:
sudo chmod 777 -R wp-content
Open http://localhost:8001/
in your browser and follow the WordPress installation process:
You will then be prompted to log on at http://localhost:8001/wp-admin
using the ID and password you chose during installation:
You can now create content and edit themes as you would do for any other WordPress installation.
Code in your wp-content
sub-directory can be backed-up or added to version control.
WordPress database data is stored in a Docker volume named wpdata
mounted in the mysql
container. You can export the data to a file using the WordPress Export option in the Tools menu.
Alternatively, you can back-up the data using mysqldump. First, find the ID Docker Compose assigned to the MySQL container:
docker container ls
The MySQL container should appear in the list with the name mysql (set by container_name in docker-compose.yml). Use this name or its ID in place of <ID> in the following Linux/macOS command run from another terminal:
docker exec <ID> /usr/bin/mysqldump -u root -pmysecret wpdb > backup.sql
The equivalent command on Windows PowerShell:
docker exec <ID> /usr/bin/mysqldump -u root -pmysecret -r wpdb | Set-Content backup.sql
The root
user and mysecret
password will need to be changed if you used different credentials in the docker-compose.yml
file.
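To restore a dump created this way, pipe the SQL back into the mysql client running inside the container – a sketch using the same example credentials and database name from the docker-compose.yml file:

```shell
docker exec -i <ID> /usr/bin/mysql -u root -pmysecret wpdb < backup.sql
```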
To shutdown WordPress, enter docker-compose down
in another terminal window. Starting again with docker-compose up
will be almost instantaneous and the application will be in the same state you left it.
“Docker for Web Developers” provides further information about running WordPress with Docker and explains how to add and develop your own custom theme.
Docker makes no configuration changes to your system … but it can use a significant volume of disk space. Use it for a short while and you may be shocked to see some scary usage statistics returned when entering:
docker system df
Fortunately, Docker allows you to reclaim disk space from unused images, containers, and volumes.
To safely remove stopped containers, unused networks, and dangling images it’s a good idea to run the following command every so often:
docker system prune
A slightly more risky option is:
docker system prune -a
This also wipes any image not associated with a running container. That can be a little drastic but Docker will re-download any image it requires. The first attempt will be a little slower but the image is then cached for further use.
The following sections describe additional ways to remove specific items.
A Docker image is a disk snapshot of an application such as a web server, language runtime, or database management system. You can view all images, both active and dangling (those not associated with a container), by entering:
docker image ls -a
A Docker image can be deleted by entering:
docker image rm <name_or_id>
Any number of images can be added to this command – separate them with a space character.
A Docker container is a running instance of an image and any number of containers can be started from the same one. Containers are usually small because they are stateless and reference the image’s file system. View all containers, both running and stopped, by entering:
docker container ls -a
You can only delete a container once it has been stopped. Stop containers by entering:
docker container stop <name_or_id>
Containers can then be deleted by entering:
docker container rm <name_or_id>
Again, any number of space-separated container names/IDs can be added to this command.
It’s rarely necessary to retain stopped containers. The --rm
option can be added to any docker run
command to automatically delete a container once it terminates.
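For example, this container is deleted automatically the moment it exits:

```shell
docker run --rm hello-world
# docker container ls -a will not show a stopped hello-world container
```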
Containers can be attached to a Docker-managed network so they can communicate with each other. These are configuration files which do not use much disk space. View all Docker networks by entering:
docker network ls
One or more unused networks can be deleted by entering:
docker network rm <name_or_id>
Again, any number of space-separated network names/IDs can be added to this command.
A Docker volume is a virtual disk image. It must be attached to a running container so it can save files or other state information between restarts. Volume sizes depend on the application using it, but a typical database will require several hundred megabytes of space even when it’s mostly empty.
View all Docker-managed disk volumes by entering:
docker volume ls
Removing a Docker volume will wipe its data forever! There is no going back.
If you’re developing a database-driven application it’s usually practical to retain one or more data dumps which can be used to re-create a specific set of records. Most database client tools provide a dump or export facility, such as the Export link in Adminer.
Most database systems provide a backup tool, such as the mysqldump utility in MySQL. These can be executed on a running container using the docker exec command.
The following Linux/macOS command backs up a MySQL database named mydb
running on a container named mysql
to a file named backup.sql
. The MySQL root
user with the password mysecret
is used:
docker exec mysql /usr/bin/mysqldump -u root -pmysecret mydb \
> backup.sql
The equivalent command for Windows PowerShell:
docker exec mysql /usr/bin/mysqldump -u root -pmysecret -r mydb | \
Set-Content backup.sql
You can also copy data files to or from a running container with the docker cp
command. This is passed source and destination paths where containers are referenced by their name/ID followed by a colon and their path, e.g.
docker cp mycontainer:/some/file ./host/directory
Assuming your data is safe, you can delete any unused volume by entering:
docker volume rm <name>
All unused Docker volumes – those not currently attached to a running container – can be removed with:
docker volume prune
Alternatively, docker volume prune -a will also remove named volumes that are not in use. You did back up first, didn’t you?
Every unused container, image, volume, and network can be wiped with a single command:
docker system prune -a --volumes
Add -f
if you want to force the wipe without a confirmation prompt. Your system will be back to a pristine state without any Docker data.
“An Introduction to Docker” is a LIVE online course delivered over Zoom. Your tutor is Craig Buckler, author of the “Docker for Web Developers” book & video course.
“An Introduction to Docker” is a hands-on one-day course split over two half-days:
Day one: 3.5 hours
An overview of Docker benefits, concepts, and techniques for all participants with live demonstrations, practical examples, and a Q&A session.
Day two: 3.5 hours
Hands-on practical projects including application development, live code editing, debugging, and best-practice techniques. Participants are split into smaller groups for personal assistance.
Please sign-up for my newsletter to find out the dates, times, and prices of the next course.
The course is aimed at developers, DevOps professionals, and IT managers who want to learn more about Docker or improve their application development and deployment processes.
The course specifically refers to web technologies but the Docker concepts can be applied to any stack.
All participants receive:
Docker can revolutionise project development and deployment.
Install project dependencies in minutes
Docker can install and manage all the software your project requires.
Dependencies are lightweight and isolated
Other than disk space, your PC is not changed. Multiple or legacy versions of the same software can be run concurrently without conflicts. Dependencies can be started, stopped, removed, or reinstalled at any time.
Applications become portable
Your project and its dependencies can be distributed to other development machines and production servers. It will work identically even when some software is not supported on that OS. Docker ends “but it works on my PC” complications!
Continue to use your existing OS, tools, and workflows
Developers can continue to use their preferred tools. It does not matter whether they are using Windows, macOS, Linux, or specific IDEs – Docker provides all the benefits of local development and debugging.
Deployments become faster and more robust
Docker can scale your application according to demand and keep it running if instances fail.
To attend the course, you require:
If you would like to follow the hands-on examples and run Docker on your PC, you will also require:
The course demonstrates example web projects using a variety of dependencies. You do not need experience of that software, but a basic understanding of web technologies is useful:
Craig is a freelance web consultant, speaker, writer, and trainer with more than twenty years in the industry. He has worked on many successful projects for organisations including Microsoft, Vodafone, Sky, the UK and European Parliaments and more.
Craig has written more than 1,200 tutorials for SitePoint.com, created video courses for O’Reilly, and has authored many books including:
You can ask Craig anything on Twitter @craigbuckler.
The course is organised by Software Cornwall and is open to attendees worldwide. The final price will be published here shortly, but participants in the UK and EU are eligible for HALF PRICE TICKETS thanks to backing from The European Social Fund.
Please sign-up for my newsletter to be alerted when the next course is announced:
This Docker tutorial has also been published on Medium.com. It provides a quick overview of Docker concepts with an example which launches a MySQL database and Adminer client.
Docker runs applications. That’s it. But by “application”, I’m referring to big web dependency stuff…
It doesn’t matter which Operating System you’re using. If your web project has a dependency, it can be downloaded, configured, and launched on Windows, macOS, or Linux in minutes.
The following sections provide an overview of Docker features which directly benefit web developers.
Docker runs apps in an isolated environment known as a container. In essence, you can think of it as a lightweight virtual machine containing an OS and the installed/running application.
Your host OS is not modified or configured in any way. It’s easy to add, remove, or update a container as necessary.
This also allows you to run different versions of the same dependency – even at the same time. For example, perhaps you require PHP5 for a legacy application but PHP7 for a new project.
A containerized application is still accessed from localhost
. It normally exposes one or more TCP ports, such as 80
or 443
for a web server or 3306
for MySQL. A MySQL client can attach to localhost:3306
in the same way as it would for a local installation.
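For example – assuming the standard mysql command-line client is installed on the host – connecting to the containerized server looks identical to connecting to a local one:

```shell
# Connect to the containerized MySQL server from the host
mysql -h 127.0.0.1 -P 3306 -u root -p
```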
Folders on your host PC can be mounted inside a container. You can edit code locally using your existing editor and tools, but have it update and run inside the container.
Finally, it’s possible to open a shell inside a running container to execute administrative commands.
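In practice this uses docker exec rather than a real SSH connection – a minimal sketch, assuming a running container named mysql:

```shell
# Open an interactive bash shell in the running container
docker exec -it mysql bash
```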
Once you have a good set-up, developing with Docker is easier and safer than developing locally. It encourages risk-free experimentation:
Your Docker development environment is portable. It can be stored and reproduced elsewhere, e.g. uploaded to a Git repository and cloned by others on your team.
It doesn’t matter what OS they use or whether a dependency is available on their platform. Your web app will work identically.
Docker finally ends those “but it works on my PC” conversations!
During development, you will typically run your own web app code in a single container. Optionally, you can run any number of the same app containers on a live server (Kubernetes and Docker Swarm are designed to do just that).
Your application will become faster and more robust. Any instance can fail and be restarted while others keep running. You could also update the application with no downtime.
If Docker is so practical, why do few developers use it?
The main reason: it looks complex. There are many features, numerous options, and it’s not always clear how to get started.
Here’s a quick-start summary to launch the latest version of MySQL and Adminer, a PHP database client.
First, install the latest stable edition of Docker on your OS.
Docker’s command-line interface allows you to start individual containers. Docker Compose is an additional tool which can launch multiple containers in one step using a configuration file normally named docker-compose.yml
.
Create a folder with a file named docker-compose.yml
and add the following content (or download it directly from GitHub):
version: '3'

services:

  mysql:
    image: mysql
    container_name: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=mysecret
    volumes:
      - mysqldata:/var/lib/mysql
    ports:
      - "3306:3306"
    networks:
      - mysqlnet
    restart: on-failure

  adminer:
    image: adminer
    container_name: adminer
    depends_on:
      - mysql
    ports:
      - "8080:8080"
    networks:
      - mysqlnet
    restart: on-failure

volumes:
  mysqldata:

networks:
  mysqlnet:
Spacing is important in YAML files so be careful when copying!
Open a terminal, cd
to that folder, and enter:
docker-compose up
It will take several minutes to download the container images and initialize the database the first time you run this command. MySQL is ready when you see:
mysql | ... [Server] X Plugin ready for connections.
Open http://localhost:8080/
in your browser to launch Adminer. Enter the login credentials:
mysql
root
mysecret
You can then browse, create, edit, or drop databases, tables, indexes, users, and other items.
You could even create a web application which stores data in the MySQL database exposed on localhost:3306.
To stop the containers, press Ctrl | Cmd + C in the terminal or enter:
docker-compose down
in another terminal window. Starting MySQL and Adminer again with docker-compose up
is almost instantaneous.
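If you’d rather keep your terminal free, Compose can also run the containers in the background:

```shell
docker-compose up -d     # start in detached mode
docker-compose logs -f   # follow container output (Ctrl+C to stop watching)
docker-compose down      # stop and remove the containers
```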
It’s incredible that you’ve installed, configured, and launched MySQL, PHP, and Adminer with a few minutes’ effort.
The “Docker for Web Developers” book and video course will revolutionize your web development workflow.
Docker is one of the most useful web development tools you’re not using!
The course has one objective:
to quickly demonstrate how you can use Docker in your web development projects.
It starts with a concise explanation of Docker terminology and concepts before demonstrating typical projects such as local WordPress, Node.js, and Single-Page App (SPA) environments.
You can update source files, execute it instantly, and debug client and server code in Chrome DevTools and VS Code. The examples can be adapted to any technology stack, new projects, or existing apps.
This is the second chapter of the “Docker for Web Developers” book. It provides an overview of Docker concepts and technologies. The full course can be purchased from DockerWebDev.com.
Most tutorials attempt to explain Docker concepts first. That can be daunting so here’s the TL;DR alternative…
Docker runs an application such as MySQL in a single container.
It’s a lightweight virtual machine-like package containing an OS, the application files, and all dependencies.
Your web application will probably require several containers: your code (and language runtime), a database, a web server, etc.
A container is launched from an image.
In essence, it’s a container template which defines the OS, installation processes, settings, etc. in a Dockerfile configuration. Any number of containers can be started from the same image.
Containers start in a clean (image) state and data is not permanently stored.
You can mount Docker volumes or bind host folders to retain state between restarts.
Containers are isolated from the host and other containers.
You can define a network and open TCP/IP ports to permit communication.
Each container is started with a single Docker command.
Docker Compose is a utility which can launch multiple containers in one step using a docker-compose.yml
configuration file.
Optionally, orchestration tools such as Docker Swarm and Kubernetes can be used for container management and replication on production systems.
You’re welcome to skip the rest of this chapter and jump straight into the Docker examples. It’s worth coming back later: the concepts discussed below may change how you approach web development.
Recall how you could use a Virtual Machine (VM) to install a web application and its dependencies. VM software such as VMware and VirtualBox is known as a hypervisor. It allows you to create a new virtual machine, then install an appropriate operating system with the required application stack (web server, runtimes, databases, etc.):
In some cases, it may not be possible to install all applications in a single VM so multiple VMs become necessary:
Each VM is a full OS running on emulated hardware in a host OS with access to resources such as networks via the hypervisor. This is a considerable overhead, especially when a dependency could be tiny.
Docker launches each dependency in a separate container. It helps to think of a container as a mini VM with its own operating system, libraries, and application files.
In reality:
A container is effectively an isolated wrapper around an executable so Docker requires far fewer host OS resources than a VM.
It’s technically possible to run all your application’s dependencies in a single container, but there are no practical benefits for doing so and management becomes more difficult.
Always use separate containers for your application, the database, and any other dependencies you require.
Each container is available at localhost or 127.0.0.1, but a TCP port must be exposed to communicate with the application it runs, e.g.:
80 or 443 for HTTP or HTTPS web servers
3306 for MySQL
27017 for MongoDB
Docker also allows you to access the container shell to enter terminal commands and expose further ports to attach debuggers and investigate problems.
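As a quick sketch of port mapping, any web server image can be used; NGINX is chosen here purely for illustration, and the host port 8080 is an arbitrary choice:

```shell
# Run the official NGINX image and map host port 8080
# to container port 80 (the format is host:container):
docker run -d --name webtest -p 8080:80 nginx

# The default NGINX page is now at http://localhost:8080/

# Stop and remove the container when finished:
docker stop webtest
docker rm webtest
```

Any unused host port can be substituted for 8080 if that port is already taken.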
Data written to the container’s file system is lost the moment it shuts down!
Any number of containers can be launched from the same base image (see below). This makes scaling easy because every container instance is identical and disposable.
This may change the way you approach application development if you want to use Docker on production servers. Presume your application has a variable which counts the number of logged-in users. If it’s running in two containers, either could handle a login so each would have a different user count.
Dockerized web applications should therefore avoid retaining state data in variables and local files. Your application can store data in a database such as Redis, MySQL, or MongoDB so state persists between container instances.
It may be impractical to deploy an existing application using Docker containers if it was not designed to be stateless from the start. However, you can still run the application in Docker containers during development.
That raises the question: what if your database is running in a container?
It would also lose data when it restarts, so Docker offers volumes and host folder bind mounts.
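The two mechanisms look similar on the command line. In this sketch, myapp and the host path are hypothetical placeholders; only the mysql image is a real Docker Hub image:

```shell
# Named volume: Docker manages where the data is stored.
# Database files survive container restarts.
docker run -d --name db -v mysqldata:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=mysecret mysql

# Bind mount: a host folder is mapped into the container,
# so file edits on the host appear inside it immediately.
docker run -d --name app -v "$PWD/src":/home/app/src myapp
```

Bind mounts are typically used for source code during development; volumes for data the container itself writes.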
You may be thinking:
“ahh, I can get around the state issue by never stopping a container!”
That’s true. Presuming your application is 100% bug-free. And your runtime is 100% reliable. And the OS never crashes. And you never need to update the host OS or the container itself.
It doesn’t matter what host OS you’re using: Docker containers run natively on Linux. Even Windows and macOS run Docker containers inside Linux…
The macOS edition of Docker runs containers inside a lightweight Linux virtual machine.
The Windows edition of Docker allows you to switch between either:
the Windows Subsystem for Linux (WSL) 2: a highly-integrated seamless VM which is available on all editions of Windows, or
Hyper-V: the Microsoft hypervisor provided with Windows 10/11 Professional and Enterprise.
It is therefore more efficient to run Docker on Linux but this rarely matters on a development PC. Use whatever OS and tools you prefer.
However, if you are using Docker to deploy your application, Linux is the best choice for your live server.
A Docker image is a snapshot of a file system with operating system libraries and application executables. In essence, an image is a recipe or template for creating a container. (In a similar way, some computer languages let you define a reusable class template for instantiating objects of the same type.)
Any number of containers can be started from a single image. This permits scaling on production servers, although you’re unlikely to launch multiple containers from the same image during development.
The Docker Hub provides a repository of commonly-used images for:
Reminder: sign up for a Docker Hub account if you’d like to publish your own images.
An image is configured using a Dockerfile. It typically defines the base OS, installation processes, settings, and any ports to expose at localhost on the host. In some cases, you will use an image as-is from Docker Hub, e.g. MySQL. However, your application will require its own custom Dockerfile.
It is possible to create two Dockerfile configurations for your application:
one for development.
It would typically activate logging, debugging, and remote access. For example, during Node.js development, you might want to launch your application using Nodemon to automatically restart it when files are changed.
one for production.
This would run in a more efficient and secure mode. For Node.js deployment, it’s likely to use the standard node
runtime command.
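As a sketch of the difference, a development Dockerfile for a Node.js application might end by launching Nodemon, while the production version runs node directly. The file names and paths here are illustrative, not a prescribed layout:

```dockerfile
# Dockerfile.dev - development image (illustrative)
FROM node:16
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY . .
# restart the app automatically when source files change
CMD ["npx", "nodemon", "index.js"]
```

The production equivalent would typically install only production dependencies and use CMD ["node", "index.js"].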
However, a simpler process is described throughout this book.
Docker Hub is to Docker images what GitHub is to Git repositories.
Any image you create can be pushed to Docker Hub. Few developers do this, but it may be practical for deployment purposes or when you want to share your application with others.
Images are name-spaced with your Docker Hub ID to ensure no one else can use the same name. They also have a tag so you can create multiple versions of the same image, e.g. 1.0, 1.1, 2.0, latest, etc.
<Your-Docker-ID>/<Your-Docker-Hub-Repository>:<tag>
Examples: yourname/yourapp:latest
, craigbuckler/myapp:1.0
.
Official images on Docker Hub don’t require a Docker ID, e.g. mysql
(which presumes mysql:latest
), mysql:5
, mysql:8.0.20
, etc.
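As an illustration, pulling, tagging, and pushing follow this pattern; yourname/yourapp and the local image name myapp are placeholders:

```shell
# Pull a specific MySQL version from Docker Hub:
docker pull mysql:8.0.20

# Tag a locally-built image with your Docker Hub ID,
# then push it to your repository:
docker tag myapp yourname/yourapp:1.0
docker push yourname/yourapp:1.0
```

Pushing requires you to be logged in first with docker login.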
Containers do not retain state between restarts. This is generally a good thing; any number of containers can be started from the same base image and each can handle incoming requests regardless of how or when they were launched (see Orchestration).
However, some containers – such as databases – absolutely must retain data, so Docker provides two storage mechanisms: volumes and bind mounts.
Either can map to a directory on the container, such as /data/db
for MongoDB storage.
Volumes are the recommended way to persist data. In some cases, it’s the only option – for example, MongoDB does not currently support bind mounts on Windows or macOS file systems.
However, bind mounts are practical during development. An application folder on the host OS can be mounted within the container so any file changes trigger an application restart, browser refresh, etc.
It is possible to mount the same volume or bind mount on two or more containers. Read-only access should be fine, but you could encounter issues if more than one container attempted to write to the same file at the same time!
Any TCP/IP port can be exposed on a container, such as 3306
for MySQL. This allows the applications on the host to communicate with the database system at localhost:3306
.
An application running in another container could not communicate with MySQL because localhost resolves to the container itself. For this reason, Docker creates a virtual network and assigns each running container a unique IP address. It then becomes possible for one container to communicate with another using its address.
Unfortunately, Docker IP addresses can change every time a container is launched. An easier option is to create your own Docker virtual network. Any container added to that network can communicate with another using its name, i.e. mysql:3306
resolves to the correct address.
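For example, a user-defined network can be created and containers attached to it by name; the myphpapp image here is a hypothetical placeholder for your own application image:

```shell
# Create a user-defined virtual network:
docker network create mynet

# Containers attached to it can reach each other by name:
docker run -d --name mysql --network mynet \
  -e MYSQL_ROOT_PASSWORD=mysecret mysql
docker run -d --name phpapp --network mynet myphpapp

# phpapp can now reach the database at mysql:3306
```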
Container TCP/IP ports can be exposed to other containers on the same Docker network, to the host OS, or to both.
Presume you are running two containers on the same Docker network:
phpapp, which exposes a web application on port 80
mysql, which exposes a database on port 3306.
During development, you would want both ports exposed to the host. The application can be launched in a web browser at http://localhost/ (port 80 is the default) and MySQL clients can connect to localhost:3306.
In production environments, the mysql
port need not be exposed to the host. The phpapp
container can still communicate with mysql:3306
, but unscrupulous crackers would not be able to probe port 3306
on the host.
With careful planning, it’s possible to create complex Docker networks which heighten security, e.g. mysql
and redis
containers can be accessed by phpapp
but they cannot access each other.
A single container is launched with a single docker
command. An application requiring several containers – say Node.js, NGINX, and MongoDB – must be started with three commands. You could launch each in three terminals in the correct order (probably MongoDB, then the Node.js application, then NGINX).
Docker Compose is a tool for managing multiple containers with associated volumes and networks. A single configuration file, normally named docker-compose.yml
, defines the containers and can override Dockerfile settings where necessary.
It’s practical to create a Docker Compose configuration for development. You could also create one for production, but there are better options…
Containers are portable and reproducible. A single application can be scaled by launching identical containers on the same server, another server, or even a different data center on the other side of the world.
The process of managing, scaling, and maintaining containers is known as orchestration. Docker Compose can be used for rudimentary orchestration, but it’s better to use specialist tools such as:
Cloud hosts offer their own orchestration solutions, such as AWS Fargate, Microsoft Azure, and Google Cloud. These are often based on Kubernetes but may have custom options or tools.
Docker is a client-server application. The server is responsible for container management and is controlled via a REST API. The command-line interface (CLI) communicates with this API, so it’s possible to run a server daemon anywhere and connect from another device.
This rarely matters during development: the Docker server and CLI is installed on the same PC.
You can communicate with the API using any HTTP client such as cURL. This is beyond the scope of this book, but it allows you to programmatically run any Docker process.
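As a brief illustration, on Linux and macOS the Docker server listens on a Unix socket by default, so a container listing (the equivalent of docker ps) can be requested like this:

```shell
# Query the Docker Engine REST API directly via its socket:
curl --unix-socket /var/run/docker.sock \
  http://localhost/containers/json
```

The response is a JSON array describing each running container.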
You can use Docker and containers in any way that is practical for your project.
This book suggests you always use Docker during development. It allows you to create robust and portable environments where your application and each dependency run in separate containers. Chapters 4, 5, 6, and Appendix D provide recipes you can adapt to your projects.
However, deploying your application to a live server raises further options to consider…
Docker is used to emulate your live server’s production environment on your development PC. The live server itself does not use containers.
This may be practical when you’re using infrastructures, platforms, or software as a service (IaaS, PaaS, SaaS) where a pre-built environment is provisioned for you. Possible examples include serverless and WordPress hosts.
Your live production server uses Docker containers for some – but not all – dependencies. Your application is likely to be a good candidate, but a database could be provided by a cloud service, and a load balancer could be supplied by the hosting company.
Your development PC can still emulate this environment using Docker containers. That said, a test database could be provided by the same cloud service to eliminate compatibility issues.
You use mostly identical Docker containers in both development and production. It may be necessary to create slightly different live server configurations or consider orchestration options.
Runtimes such as Node.js and Python run scripts on a single processing thread. A server with 16 CPU cores executing a single instance of an application will have fifteen cores sitting idle!
Note: some stacks alleviate this situation with a web server. PHP is single-threaded, but Apache launches a separate process or thread for each user request so multiple PHP processes run in parallel. This method has its own resourcing problems, though.
Multiple instances of Node.js applications can be launched on the same server using clustering or process managers such as PM2. However, it is generally more practical to use Docker to launch and manage multiple application containers as resources permit. Each container is isolated so, if an individual instance crashes, it will not affect others and can be restarted.
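Two Docker features sketch how this works in practice; myapp and the app service name are placeholders for your own image and Compose service:

```shell
# A restart policy relaunches a container if it crashes:
docker run -d --restart on-failure myapp

# Docker Compose can run several identical containers
# of one service to use more CPU cores:
docker-compose up -d --scale app=4
```

Each scaled container is identical and disposable, which is why the stateless design described above matters.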
This book uses the following approach where practical:
An image can therefore be used as-is on production servers regardless of whichever orchestration or deployment process is adopted.
Don’t worry about this for now – the process will become clearer in the following chapters.
Using Docker during development has no downsides. It enables you to install dependencies on any OS and emulate a live system. You can easily share that isolated environment with others while retaining your favorite editor and tools.
However, Docker is not a magical solution which solves all your production woes! There are situations when Docker may not be appropriate…
Your application is not stateless
Dockerizing an existing monolithic application can be difficult if it was not originally designed for a container-based deployment. Programs which store state in variables or files will need to be adapted to use other data stores.
You’re using a Windows Server
Docker is native on Linux but Windows runs containers in a Hyper-V virtual machine or WSL2 (effectively another VM). It’s an additional overhead and, although Docker lets you run Linux dependencies, it may be more practical to provision a Linux server.
Performance is critical
Docker containers have imposed CPU and RAM limits. These are configurable, but an application running on the host OS will always be faster.
That said, Docker can implement parallel processing by scaling horizontally if your application generally runs on a single CPU core.
Stability is important
Docker is mature, but it’s another dependency to install, update, and manage. Do you have in-house container management expertise?
Your application may seem more robust since containers can be scaled and automatically restarted. That doesn’t mean it’s crashing less often than before!
To store mission-critical data
Volumes and bind mounts can store persistent data, but these are more difficult to manage and back-up than standard file system options.
To improve security
Containers are isolated but, unlike a real VM, they are not fully sandboxed from the host OS. Docker provides options for hiding dependencies, but it’s not a substitute for robust security.
To create GUI applications
Someone, somewhere will have created a cross-platform graphical interface application using containers. That doesn’t make Docker the ideal solution!
Because Docker is cool
Jumping on a technology bandwagon without proper investigation and justification is doomed to fail.
Docker is the most-used container solution but it’s not the only option. Alternatives include:
What you’ve learned in this chapter:
The Docker server manages containers.
It’s an isolated wrapper around an application, which seems similar to a virtual machine but is more lightweight.
Containers are launched from a single image template configured by a Dockerfile.
Images for hundreds of applications are available on Docker Hub.
Containers are stateless, but can attach to Docker disk volumes or bind-mounted folders on the host OS.
Containers can expose application ports and communicate over internal Docker networks.
Ports can also be exposed to the host OS.
Docker Compose can be used to launch multiple containers at once.
Orchestration tools such as Docker Swarm and Kubernetes can be used to launch and scale containers across multiple systems in production environments.
Docker is practical during development.
However, it’s not necessarily essential or practical to use it for every application component on production systems.
Enough theory. It’s time to install Docker…
This is the first chapter of the “Docker for Web Developers” book. It explains the benefits of using Docker for web development. The full course can be purchased from DockerWebDev.com.
Does our web development stack really need another technology?
Modern web development involves a deluge of files, systems, and components:
Managing this stack can be a challenge.
How many hours do you spend installing, configuring, updating, and managing software dependencies on your development PC?
Imagine your latest application has become successful. You’ve had to hire another developer to give you more time to rake in money. They turn up at work on day one, clone your repository, launch the code, and – BANG – it fails with an obscure error message.
Debugging may help, but your environments are not the same…
The differences mount up.
You may be able to solve these issues within a few hours, but…
Some companies would implement a locked-down device policy, where you’re prevented from using the latest or most appropriate tools. (Please don’t be that boss!)
Rather than restricting devices and software, the application could be run within a Virtual Machine (VM). A VM allows an operating system to be installed in an emulated hardware environment; in essence, it’s a PC running on your PC.
Cross-platform VM options include VMware and VirtualBox. You could create a Linux (or other) VM with your application and all its dependencies. The VM is just data: it can be copied and run on any real Windows, macOS, or Linux device. Every developer – and the live server – could run the same environment.
Unfortunately, VMs quickly become impractical:
Docker solves all these problems and more. Rather than installing dependencies on your PC, you run them in lightweight isolated VM-like environments known as containers.
In a single command, you can download, configure, and run whatever combination of services or platforms you require. Yes, a single command. (Admittedly, it can be quite a complicated command, but that’s where this book comes in!)
Development benefits include:
Similar Docker environments can also be deployed in production:
Neither was I.
When I first encountered Docker, it seemed like an unnecessary and somewhat daunting hurdle. I had plenty of experience running VMs and configuring software dependencies – surely I didn’t need it?
Docker documentation is comprehensive but it has a steep learning curve. Tutorials are often poor and:
presume the reader fully understands all the jargon,
fail to explain or over-explain esoteric points, and
rarely address how Docker can be used during development.
When I started, I presumed Docker couldn’t handle dynamic application restarts or debugging. Tutorials often claimed every code change required a slow and cumbersome application rebuild.
I gave up.
I was eventually shown the light by another developer (thanks, Glynne!). That led to several months deep-diving into Docker and I realised what I’d been missing.
Example: I’ve created many WordPress-based websites.
I’d usually develop these directly on Windows or an Ubuntu VM, where it’s necessary to install/update Apache, SSL, PHP, MySQL, and WordPress itself. All before commencing the real development work.
The equivalent Docker process takes minutes to initialize and can be cloned for every new project (see WordPress development with Docker). Each installation exists in its own isolated environment which can be source-controlled and distributed to other developers.
That said, I’ve never deployed WordPress to a production server using Docker. WordPress hosting is ubiquitous and inexpensive; I’m happy to let someone else manage those dependencies. However, potential problems are minimized because I replicated the production server environment on my development PC.
It is considerably easier to build applications with Docker. Without wanting to sound like a salesperson, Docker will revolutionize your development!
Docker helps regardless of which web development approach and stack you’re using. It provides a consistent environment at build time and/or closely matches the dependencies on your production server(s).
Your Docker environment:
Monolithic applications contain a mix of front-end and back-end code. Typically, the application uses a web server, server language runtime, data stores, and client-side HTML, CSS, JavaScript and frameworks to render pages and provide APIs. WordPress is a typical example.
Docker can be used to replicate that environment so all dependencies are available on your development PC.
Serverless applications implement most functionality in the browser typically with a JavaScript framework to create a Single Page Application (SPA). The core site/application is downloaded once.
Additional data and services are provided by small APIs perhaps running as serverless functions. Despite the name, servers are still used – but you don’t need to worry about managing them. You create a function which is launched on demand from a JavaScript Ajax request, e.g. code that emails form data to a sales team.
Docker can be used in development environments to:
A static site is constructed using a build process which places content (markdown files, JSON data, database fields, etc.) into templates to create folders of static HTML, CSS, JavaScript, and media files. Those pre-rendered files can be deployed anywhere: no server-side runtime or database is required.
Static sites are often referred to as the JAMstack (JavaScript, APIs, and Markdown). All content is pre-rendered where possible, but dynamic services such as a site search can adopt server-based APIs.
Docker can be used to provide a reproducible build environment on any development PC.
What you’ve learned in this chapter:
Docker can launch all your application’s dependencies in individual containers.
This includes servers, databases, language runtimes, etc. In most cases, these will require little or no configuration.
Docker is cross-platform.
It runs on Windows, macOS, and Linux. Your application will work on any PC.
Docker can – and should – be used in your development environment.
You can also use it in production systems if it’s practical to do so.
The next chapter describes Docker concepts in more detail.