Building in Public: Deploy a PHP application with Kamal, part 2

“Those fools at Radio Shack called me mad!”

This is the second part of a series about deploying a non-Rails application with Kamal 2. Read the first article here.

First, Some Housecleaning

Kamal places the generated deployment configuration file into config/deploy.yml. For our project this is a minor problem.

The sherldoc project contains the application code in the root of the repository and includes its own config/ directory, which is used for the PHP application configuration.

Adding the Kamal deployment configuration here muddies the water, mixing our application and deployment configuration. I move the file out of the config/ directory to the root of the project.

Since Kamal looks for config/deploy.yml by default, I now need to specify the location of the config file with every command. For example:

kamal deploy -c ./deploy.yml
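
To avoid typing the flag on every invocation, a small shell function can append it automatically. This is just a convenience of my own, not something Kamal provides:

# Wrap kamal so every invocation gets the relocated config file appended
kamal() { command kamal "$@" -c ./deploy.yml; }

kamal deploy   # now expands to: kamal deploy -c ./deploy.yml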

Towards a Minimally Functional, Deployed Web App

I really want to see the sherldoc application deployed and reachable over HTTP at the public IP address of our VM. Along the way I can observe the state of the web application image both locally and during deployments.

Since the sherldoc application isn’t yet configured and prepared by the deployment process, it will not respond to any HTTP requests. That means kamal-proxy will not detect the app as healthy and will not forward requests to the running container.
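
A few standard Kamal subcommands help observe that state on the server; this is how I’d expect to use them against this setup:

# Logs and container status for the deployed app
kamal app logs -c ./deploy.yml
kamal app details -c ./deploy.yml

# What kamal-proxy reports, including failing health checks
kamal proxy logs -c ./deploy.yml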

The logs for the container show the state of the web application during the container execution:

jdjeffers@snappy-13g:~/kamal/sherldoc$ curl localhost:80
<br />
<b>Warning</b>:  require(/app/public/../vendor/autoload.php): Failed to open stream: No such file or directory in <b>/app/public/index.php</b> on line <b>13</b><br />
<br />
<b>Fatal error</b>:  Uncaught Error: Failed opening required '/app/public/../vendor/autoload.php' (include_path='.:') in /app/public/index.php:13
Stack trace:
#0 {main}
  thrown in <b>/app/public/index.php</b> on line <b>13</b><br />

I think it’s clear there are more steps we need to take to get the web application ready for action.

The scripts/ directory contains sets of commands that indicate what preparation the web application needs before it can serve requests. When using Docker Compose as the deployment tool, these scripts perform the preparation steps.

scripts/
├── dockerbuild
├── dockerdown
├── dockerfirst
├── dockerstart
├── dockerstop
├── dockerup
├── shell
└── storagesetup.sh

Peeking into scripts/dockerfirst, we can see the commands issued via docker compose against the app container. Specifically, I need to install PHP packages with composer, then run migrations and cache generation with the artisan utility. Finally, the frontend JavaScript tooling should be started with npm.

cp -n .env.example .env;
./scripts/dockerbuild;
docker compose -f docker-compose.yml up -d;
./scripts/storagesetup.sh;
time docker compose exec app composer install;
time docker compose exec supervisor composer install;
time docker compose exec app php artisan migrate:fresh --seed --force;
#time docker compose exec app php artisan route:cache;
#time docker compose exec app php artisan config:cache;
docker compose exec app npm install;
docker compose exec app npm run dev;
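
As an aside, Kamal has a rough equivalent of these docker compose exec calls for running one-off commands against the deployed app. I note it here only as a sketch, since the next section bakes the preparation steps into the image instead:

# One-off commands run through Kamal on the server (rough compose exec equivalent)
kamal app exec -c ./deploy.yml "composer install"
kamal app exec -c ./deploy.yml "php artisan migrate --force"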

Modifying the PHP web application Dockerfile

To keep things simple, I modify the PHP Dockerfile to split the preparation steps between RUN and CMD directives. I can use RUN for actions that can occur before the container is running, like composer package installation and storage directory creation. CMD covers the steps that should run once the container starts in the deploy environment, like database migrations.

WORKDIR /app
RUN mkdir -p storage storage/framework storage/framework/cache storage/framework/sessions storage/framework/testing storage/framework/views storage/logs;
RUN composer install;
...
CMD php artisan migrate --force; \
    php artisan route:cache; \
    php artisan config:cache; \
    npm install; \
    { npm run dev & }

Next, I kick off another deployment attempt with kamal deploy -c ./deploy.yml. The results:

#20 1.649   Database file at path [/app/database/database.sqlite] does not exist. Ensure this is an absolute path to the database. (Connection: sqlite, SQL: select name from sqlite_master where type = 'table' and name not like 'sqlite_%' order by name)
#20 1.649 
#20 1.649   at vendor/laravel/framework/src/Illuminate/Database/Connection.php:813
#20 1.654     809▕                     $this->getName(), $query, $this->prepareBindings($bindings), $e
#20 1.654     810▕                 );
#20 1.654     811▕             }
#20 1.654     812▕ 
#20 1.654   ➜ 813▕             throw new QueryException(
#20 1.654     814▕                 $this->getName(), $query, $this->prepareBindings($bindings), $e
#20 1.654     815▕             );
#20 1.654     816▕         }
#20 1.654     817▕     }
#20 1.654 
#20 1.654       +30 vendor frames 
#20 1.654 
#20 1.654   31  artisan:13
#20 1.654       Illuminate\Foundation\Application::handleCommand(Object(Symfony\Component\Console\Input\ArgvInput))
...
--------------------
  38 |     RUN mkdir -p storage storage/framework storage/framework/cache storage/framework/sessions storage/framework/testing storage/framework/views storage/logs;
  39 |     RUN composer install;
  40 | >>> RUN php artisan migrate:fresh --seed --force;
  41 |     RUN npm install;
  42 |     RUN npm run dev;
--------------------
ERROR: failed to solve: process "/bin/sh -c php artisan migrate:fresh --seed --force;" did not complete successfully: exit code: 1

Adding a Postgres Database

Based on the logs from the last deployment failure, I know the next step is to add a database to our deployment process. The included .env.example indicates that the preferred database is Postgres:

#DB_CONNECTION=sqlite
DB_CONNECTION=pgsql
DB_HOST=pgsql
DB_PORT=5432
DB_DATABASE=sherldoc
DB_USERNAME=sherldoc
DB_PASSWORD=sherldoc

Kamal makes it easy to prepare a database with its accessories declarations. The result looks like this in deploy.yml:

accessories:
  postgres:
    host: 159.203.76.193
    image: pgvector/pgvector:pg16
    port: 5432
    env:
      POSTGRES_PASSWORD: sherldoc
      POSTGRES_USER: sherldoc
      POSTGRES_DB: sherldoc
    directories:
      - ./docker/pgdata161:/var/lib/postgresql/data

I selected the pgvector image because it’s based on the postgres 16 container builds but already includes the vector extension required by sherldoc.
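
With the accessory declared, it has its own lifecycle separate from app deploys. A sketch of the standard accessory subcommands I’d use to bring it up and inspect it:

kamal accessory boot postgres -c ./deploy.yml      # create and start the postgres container on the host
kamal accessory details postgres -c ./deploy.yml   # confirm it is running
kamal accessory logs postgres -c ./deploy.yml      # check the database logs if anything looks off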

Testing HTTP requests against the running container shows me another problem. The original web-serving infrastructure uses nginx and php-fpm, but currently there is no way to serve the PHP files.

No web service for you!

Alternatives to Nginx/PHP-FPM

At this point I have to decide whether to continue with nginx somewhere in our deployment configuration. Does it belong in the web app image, or does it live as an accessory? It’s not clear to me how kamal-proxy would handle gapless deployments with nginx running as an accessory. It also may not make much sense to add nginx to the web app image; I can’t see what would be gained with that approach.

I found a standalone PHP web server called frankenphp. There is a container image for this server that I could base the sherldoc web app on, but I found the resulting image bloat to be more than I’d like (nearly twice the size of the php-fpm base image).

Some containers are just too fat.

I stick with the php-fpm base image; since we are already overriding the container’s command, we start the standalone frankenphp server from there as well.

I amend the sherldoc Dockerfile to install frankenphp and then start the web server:

RUN curl https://frankenphp.dev/install.sh | sh \
    && mv frankenphp /usr/local/bin/
...
EXPOSE 80
CMD php artisan migrate --force; \
    php artisan route:cache; \
    php artisan config:cache; \
    npm install; \
    { npm run dev & }; \
    cd /app/public; \
    frankenphp php-server
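
Before deploying again, I can sanity-check the image locally. This is only a sketch with a made-up tag and host port, and without a local database the CMD’s migrations will complain and npm will take a while before frankenphp starts listening:

docker build -t sherldoc-smoke .
docker run --rm -p 8080:80 sherldoc-smoke

# in another terminal, once frankenphp is up:
curl -I http://localhost:8080/main/login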

There is another adjustment to make to the proxy health check configuration. I change the health check path to a path the application actually serves, /main/login:

proxy:
  #ssl: true
  host: 159.203.76.193
  # kamal-proxy connects to your container over port 80, use `app_port` to specify a different port.
  healthcheck:
    interval: 3
    path: /main/login
    timeout: 3

Another deployment attempt finally shows a stable image:

  ...
  INFO Container is healthy!
  INFO [fdcb4fa8] Running docker tag registry.digitalocean.com/team-james-demo/sherldoc:dd83294d6a178fa10cb8797b1584f7e14b04f280_uncommitted_ef152c1c348ee1bc registry.digitalocean.com/team-james-demo/sherldoc:latest on 159.203.76.193
  INFO [fdcb4fa8] Finished in 0.690 seconds with exit status 0 (successful).
Prune old containers and images...
  INFO [84bd3457] Running docker ps -q -a --filter label=service=sherldoc --filter status=created --filter status=exited --filter status=dead | tail -n +6 | while read container_id; do docker rm $container_id; done on 159.203.76.193
  INFO [84bd3457] Finished in 0.453 seconds with exit status 0 (successful).
  INFO [109e763d] Running docker image prune --force --filter label=service=sherldoc on 159.203.76.193
  INFO [109e763d] Finished in 0.476 seconds with exit status 0 (successful).
  INFO [3e2925c5] Running docker image ls --filter label=service=sherldoc --format '{{.ID}} {{.Repository}}:{{.Tag}}' | grep -v -w "$(docker container ls -a --format '{{.Image}}\|' --filter label=service=sherldoc | tr -d '\n')registry.digitalocean.com/team-james-demo/sherldoc:latest\|registry.digitalocean.com/team-james-demo/sherldoc:<none>" | while read image tag; do docker rmi $tag; done on 159.203.76.193
  INFO [3e2925c5] Finished in 0.650 seconds with exit status 0 (successful).
Releasing the deploy lock...
  Finished all in 299.5 seconds

I can finally reach the web application through the VM’s public IP address, as kamal-proxy is forwarding the requests to the sherldoc web application container.

Your Reward for Your Hard Work is Yet More Hard Work

Sometimes your purpose in life is just to serve as an example to others.

It occurs to me that, while there is this momentary victory, I still have a ways to go until sherldoc is fully functional. For example, I still need to stand up the application queue workers, the Apache Tika service, and a Redis queue.

The original sherldoc deployment uses Docker volumes to share a deployed set of files into each concerned container (as /app). Since Kamal expects a different container orchestration paradigm, this approach is not ideal.

Instead of sharing the application files among different containers (each provisioned for a distinct concern), I adopt the Kamal expectation that the same web application image is deployed, but with distinct “roles”. The differentiating factor is the configuration for each of the non-“web” roles.

In this case, for example, I want to provide a specific entrypoint command for the web app vs a job queue worker container.

I separate the common web application preparation steps into a bash script that is always invoked (docker/app/php/prepare_app.sh). I restrict the script to the steps that are common between the web app role and any other role using the same image.
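
The script itself is essentially the shared portion of the earlier CMD pulled out of the Dockerfile. A rough sketch of what docker/app/php/prepare_app.sh contains (the version in the repository is authoritative):

#!/usr/bin/env bash
# Preparation steps shared by every role that runs this image
cd /app

php artisan migrate --force
php artisan route:cache
php artisan config:cache

# Frontend tooling carried over from the original CMD; whether this stays
# in the shared script or belongs only to the web role is my guess.
npm install
npm run dev &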

The web app role now includes a specific “cmd” configuration:

servers:
  web:
    hosts:
      - 159.203.76.193
    cmd: bash -c "/app/prepare_app.sh && cd /app/public; frankenphp php-server"

I check the deployment again:

...
Prune old containers and images...
  INFO [5eaef66c] Running docker ps -q -a --filter label=service=sherldoc --filter status=created --filter status=exited --filter status=dead | tail -n +6 | while read container_id; do docker rm $container_id; done on 159.203.76.193
  INFO [5eaef66c] Finished in 2.491 seconds with exit status 0 (successful).
  INFO [044dc517] Running docker image prune --force --filter label=service=sherldoc on 159.203.76.193
  INFO [044dc517] Finished in 1.208 seconds with exit status 0 (successful).
  INFO [74bda6c6] Running docker image ls --filter label=service=sherldoc --format '{{.ID}} {{.Repository}}:{{.Tag}}' | grep -v -w "$(docker container ls -a --format '{{.Image}}\|' --filter label=service=sherldoc | tr -d '\n')registry.digitalocean.com/team-james-demo/sherldoc-web:latest\|registry.digitalocean.com/team-james-demo/sherldoc-web:<none>" | while read image tag; do docker rmi $tag; done on 159.203.76.193
  INFO [74bda6c6] Finished in 3.217 seconds with exit status 0 (successful).
Releasing the deploy lock...
  Finished all in 50.6 seconds

It’s looking good! Again, I check the web service with an HTTP request through the public IP:

curl -v -s http://159.203.76.193/main/login 1> /dev/null
*   Trying 159.203.76.193:80...
* Connected to 159.203.76.193 (159.203.76.193) port 80 (#0)
> GET /main/login HTTP/1.1
> Host: 159.203.76.193
> User-Agent: curl/7.81.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
...
* Connection #0 to host 159.203.76.193 left intact

Next, I plan to add the additional role for the job queue worker and other accessories.

Follow along at my fork of sherldoc on GitHub.
