Building in Public: Deploy a PHP Application with Kamal, part 3

This is the third part of a series about deploying a non-Rails application with Kamal 2. Read the previous article here. Follow this journey at https://github.com/jjeffers/sherldoc.

Quitters don’t win and winners don’t quit. Or until they pass out from blood loss.

Deploying a Queue Worker

I need to provision a “version” of sherldoc that runs queue workers to process asynchronous jobs for the web application. The original sherldoc Docker Compose layout used a shared Docker volume and the same sherldoc web container to run the queue workers.

In our configuration the containers don’t share a mounted volume at runtime. Instead, each role uses the same container image with similar startup commands. The final Docker entrypoint commands diverge: the queue worker starts the artisan worker process instead of the web application server.

Our deploy.yml includes the following additional server entry:

...
image: sherldoc-web 
...  
servers:
  web:
    hosts:
      - 159.203.76.193
    cmd: bash -c "/app/prepare_app.sh && cd /app/public; frankenphp php-server"

  workers:
    hosts:
      - 159.203.76.193
...

Kamal deploys a container of the image entry, “sherldoc-web”, for every servers entry. I can override the “workers” container with its own cmd.
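
Sketched in full, the workers role carries its own cmd override. The exact artisan invocation here is my assumption, inferred from the queue:work behavior observed after deploying; prepare_app.sh is the shared preparation script:

```yaml
servers:
  web:
    hosts:
      - 159.203.76.193
    cmd: bash -c "/app/prepare_app.sh && cd /app/public; frankenphp php-server"

  workers:
    hosts:
      - 159.203.76.193
    cmd: bash -c "/app/prepare_app.sh && php artisan queue:work"
```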

After another deploy, I check kamal app details:

kamal app details -c ./deploy.yml 
  INFO [006857a0] Running docker ps --filter label=service=sherldoc --filter label=role=web on 159.203.76.193
  INFO [006857a0] Finished in 2.108 seconds with exit status 0 (successful).
App Host: 159.203.76.193
CONTAINER ID   IMAGE                                                                                                                          COMMAND                  CREATED              STATUS              PORTS              NAMES
bc7b3029bda6   registry.digitalocean.com/team-james-demo/sherldoc-web:[...]   "docker-php-entrypoi…"   About a minute ago   Up About a minute   80/tcp, 9000/tcp   sherldoc-web-[...]

  INFO [4c339292] Running docker ps --filter label=service=sherldoc --filter label=role=workers on 159.203.76.193
  INFO [4c339292] Finished in 0.229 seconds with exit status 0 (successful).
App Host: 159.203.76.193
CONTAINER ID   IMAGE                                                                                                                          COMMAND                  CREATED              STATUS              PORTS              NAMES
a913b7fcd417   registry.digitalocean.com/team-james-demo/sherldoc-web:[...]   "docker-php-entrypoi…"   About a minute ago   Up About a minute   80/tcp, 9000/tcp   sherldoc-workers-[...]

Both containers are up, but the container with the name “sherldoc-workers” started with the artisan queue:work command.

Why No Supervisor?

“If I can’t see you working, how do I know if you are getting anything done?”

I debated adding supervisor (the process control utility) indicated in the source project’s Docker Compose configuration.

I suspect that if the sherldoc-workers container halted, kamal-proxy would then restart it. I am not aware of any restart policy set for the Docker containers on the server, so any restart would have to come from outside the Docker engine.

I tested this theory by attaching to the sherldoc-workers container and killing the main process, artisan queue:work. Within a few moments a new sherldoc-workers container was up and running.

Given this, I decide not to install or run supervisor.

Booting Additional Accessories

Next, I stand up Redis and Apache Tika.

...
accessories:
  ...
  redis:
    host: 159.203.76.193
    image: redis:6
    directories:
      - data.redis:/data

  tika:
    host: 159.203.76.193
    image: apache/tika:2.9.2.1-full
    ports:
      - 9998:9998

With that configuration change, I start the Redis instance:

kamal accessory boot redis -c ./deploy.yml 
  INFO [2f687e13] Running /usr/bin/env mkdir -p .kamal on 159.203.76.193
  INFO [2f687e13] Finished in 1.466 seconds with exit status 0 (successful).
Acquiring the deploy lock...
  INFO [bdb656a9] Running docker login registry.digitalocean.com/team-james-demo -u [REDACTED] -p [REDACTED] on 159.203.76.193
  INFO [bdb656a9] Finished in 0.936 seconds with exit status 0 (successful).
  INFO [34829155] Running docker network create kamal on 159.203.76.193
  INFO [16f49764] Running /usr/bin/env mkdir -p $PWD/sherldoc-redis/data.redis on 159.203.76.193
  INFO [16f49764] Finished in 0.210 seconds with exit status 0 (successful).
  INFO [bf65cb9c] Running /usr/bin/env mkdir -p .kamal/apps/sherldoc/env/accessories on 159.203.76.193
  INFO [bf65cb9c] Finished in 0.171 seconds with exit status 0 (successful).
  INFO Uploading .kamal/apps/sherldoc/env/accessories/redis.env 100.0%
  INFO [f431adff] Running docker run --name sherldoc-redis --detach --restart unless-stopped --network kamal --log-opt max-size="10m" --env-file .kamal/apps/sherldoc/env/accessories/redis.env --volume $PWD/sherldoc-redis/data.redis:/data --label service="sherldoc-redis" redis:6 on 159.203.76.193
  INFO [f431adff] Finished in 1.865 seconds with exit status 0 (successful).
Releasing the deploy lock...

And finally, I spin up the Tika instance:

kamal accessory boot tika -c ./deploy.yml 
  INFO [32435277] Running /usr/bin/env mkdir -p .kamal on 159.203.76.193
  INFO [32435277] Finished in 1.398 seconds with exit status 0 (successful).
Acquiring the deploy lock...
  INFO [42be2d9b] Running docker login registry.digitalocean.com/team-james-demo -u [REDACTED] -p [REDACTED] on 159.203.76.193
  INFO [42be2d9b] Finished in 0.537 seconds with exit status 0 (successful).
  INFO [e9abb23f] Running docker network create kamal on 159.203.76.193
  INFO [b36fa775] Running /usr/bin/env mkdir -p .kamal/apps/sherldoc/env/accessories on 159.203.76.193
  INFO [b36fa775] Finished in 0.207 seconds with exit status 0 (successful).
  INFO Uploading .kamal/apps/sherldoc/env/accessories/tika.env 100.0%
  INFO [94a1e92e] Running docker run --name sherldoc-tika --detach --restart unless-stopped --network kamal --log-opt max-size="10m" --publish 9998:9998 --env-file .kamal/apps/sherldoc/env/accessories/tika.env --label service="sherldoc-tika" apache/tika:2.9.2.1-full on 159.203.76.193
  INFO [94a1e92e] Finished in 16.586 seconds with exit status 0 (successful).
Releasing the deploy lock...

So far everything looks like it’s working, or at least the parts are operational. I’ll dive into the application itself next and see what is working under the hood.

“I have no idea what I’m doing. Please come again.”

Building in Public: Deploy a PHP application with Kamal, part 2

“Those fools at Radio Shack called me mad!”

This is the second part of a series about deploying a non-Rails application with Kamal 2. Read the first article here.

First, Some Housecleaning

Kamal places the generated deployment configuration file into config/deploy.yml. For our project this is a minor problem.

The sherldoc project contains the application code in the root of the repository and includes its own config/ directory, which is used for the PHP application configuration.

Adding the Kamal deployment configuration there muddies the waters by mixing application and deployment configuration. I move the file out of the config/ directory to the root of the project.

Since Kamal looks for config/deploy.yml by default, I now specify the location of the config file for every command. For example:

kamal deploy -c ./deploy.yml
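
Typing the -c flag for every command gets old; a small shell wrapper can supply it automatically. The function name kamal_d is my own invention, not part of Kamal:

```shell
# Hypothetical wrapper: forward all arguments to kamal and always
# append the relocated configuration file.
kamal_d() {
  kamal "$@" -c ./deploy.yml
}
```

Then kamal_d deploy or kamal_d app details behaves like the longhand commands.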

Towards a Minimally Functional, Deployed Web App

I really want to see the sherldoc application deployed and reachable over HTTP at the public IP address of our VM. I can observe the state of the web application image both locally and during deployments.

Since the sherldoc application isn’t yet configured and prepared by the deployment process, it will not respond to any HTTP requests. That means kamal-proxy will not detect the app as healthy and will not forward requests to the running container.

An HTTP request to the container shows the state of the web application:

jdjeffers@snappy-13g:~/kamal/sherldoc$ curl localhost:80
<br />
<b>Warning</b>:  require(/app/public/../vendor/autoload.php): Failed to open stream: No such file or directory in <b>/app/public/index.php</b> on line <b>13</b><br />
<br />
<b>Fatal error</b>:  Uncaught Error: Failed opening required '/app/public/../vendor/autoload.php' (include_path='.:') in /app/public/index.php:13
Stack trace:
#0 {main}
  thrown in <b>/app/public/index.php</b> on line <b>13</b><br />

I think it’s clear there are more steps we need to take to get the web application ready for action.

The scripts/ directory contains sets of commands that indicate what preparation the web application needs before it can serve requests. When using Docker Compose as the deployment tool, these scripts perform the preparation steps.

scripts/
├── dockerbuild
├── dockerdown
├── dockerfirst
├── dockerstart
├── dockerstop
├── dockerup
├── shell
└── storagesetup.sh

Peeking into scripts/dockerfirst, we can see the commands issued via Docker Compose on the app container. Specifically, I need to install PHP packages with composer, then run migrations and cache generation with the PHP artisan utility. Finally, the frontend JavaScript build should be started with npm.

cp -n .env.example .env;
./scripts/dockerbuild;
docker compose -f docker-compose.yml up -d;
./scripts/storagesetup.sh;
time docker compose exec app composer install;
time docker compose exec supervisor composer install;
time docker compose exec app php artisan migrate:fresh --seed --force;
#time docker compose exec app php artisan route:cache;
#time docker compose exec app php artisan config:cache;
docker compose exec app npm install;
docker compose exec app npm run dev;

Modifying the PHP Web Application Dockerfile

To keep things simple, I modify the PHP Dockerfile to split the preparation steps between RUN and CMD directives. I use RUN for actions that can occur before the container is running, like composer package installation and storage directory creation. CMD covers the steps that must run once the container is up in the deploy environment, like database migrations.

WORKDIR /app
RUN mkdir -p storage storage/framework storage/framework/cache storage/framework/sessions storage/framework/testing storage/framework/views storage/logs;
RUN composer install;
...
CMD php artisan migrate --force; \
    php artisan route:cache; \
    php artisan config:cache; \
    npm install; \
    { npm run dev & }

Next, I kick off another deployment attempt with kamal deploy -c ./deploy.yml. The results:

#20 1.649   Database file at path [/app/database/database.sqlite] does not exist. Ensure this is an absolute path to the database. (Connection: sqlite, SQL: select name from sqlite_master where type = 'table' and name not like 'sqlite_%' order by name)
#20 1.649 
#20 1.649   at vendor/laravel/framework/src/Illuminate/Database/Connection.php:813
#20 1.654     809▕                     $this->getName(), $query, $this->prepareBindings($bindings), $e
#20 1.654     810▕                 );
#20 1.654     811▕             }
#20 1.654     812▕ 
#20 1.654   ➜ 813▕             throw new QueryException(
#20 1.654     814▕                 $this->getName(), $query, $this->prepareBindings($bindings), $e
#20 1.654     815▕             );
#20 1.654     816▕         }
#20 1.654     817▕     }
#20 1.654 
#20 1.654       +30 vendor frames 
#20 1.654 
#20 1.654   31  artisan:13
#20 1.654       Illuminate\Foundation\Application::handleCommand(Object(Symfony\Component\Console\Input\ArgvInput))
...
--------------------
  38 |     RUN mkdir -p storage storage/framework storage/framework/cache storage/framework/sessions storage/framework/testing storage/framework/views storage/logs;
  39 |     RUN composer install;
  40 | >>> RUN php artisan migrate:fresh --seed --force;
  41 |     RUN npm install;
  42 |     RUN npm run dev;
--------------------
ERROR: failed to solve: process "/bin/sh -c php artisan migrate:fresh --seed --force;" did not complete successfully: exit code: 1

Adding a Postgres Database

Based on the logs showing the last deployment failure, I know the next step is to add a database to our deployment process. The included .env.example indicates that the preferred database is Postgres:

#DB_CONNECTION=sqlite
DB_CONNECTION=pgsql
DB_HOST=pgsql
DB_PORT=5432
DB_DATABASE=sherldoc
DB_USERNAME=sherldoc
DB_PASSWORD=sherldoc

Kamal makes it easy to prepare a database with the accessories declarations. The result looks like this in the deploy.yml:

accessories:
  postgres:
    host: 159.203.76.193
    image: pgvector/pgvector:pg16
    port: 5432
    env:
      POSTGRES_PASSWORD: sherldoc
      POSTGRES_USER: sherldoc
      POSTGRES_DB: sherldoc
    directories:
      - ./docker/pgdata161:/var/lib/postgresql/data

I selected the pgvector image because it is based on the Postgres 16 container builds but already includes the vector extension required by sherldoc.

Testing HTTP requests to the running container shows me there is another problem. The original web serving infrastructure uses nginx and php-fpm, but currently there is no way to serve the PHP files.

No web service for you!

Alternatives to Nginx/PHP-FPM

At this point I have to decide whether to continue with nginx somewhere in our deployment configuration. Does it belong in the web app image, or does it live as an accessory? It’s not clear to me how kamal-proxy would handle gapless deployments in that arrangement. It also may not make much sense to add nginx to the web app image; I can’t see what would be gained with that approach.

I found a standalone PHP web server called FrankenPHP. There is a container image for this server that I could base the sherldoc web app on, but I found the resulting image bloat for sherldoc to be more than I’d like (nearly twice the size of the php-fpm base image).

Some containers are just too fat.

I stick with the php-fpm image, but since we are overriding the entrypoint command we still have to start the FrankenPHP server ourselves.

I amend the sherldoc Dockerfile to install frankenphp and then start the web server:

RUN curl https://frankenphp.dev/install.sh | sh \
    && mv frankenphp /usr/local/bin/
...
EXPOSE 80
CMD php artisan migrate --force; \
    php artisan route:cache; \
    php artisan config:cache; \
    npm install; \
    { npm run dev & }; \
    cd /app/public; \
    frankenphp php-server

There is another adjustment I make to the proxy configuration: the health check. I change the health check path to an expected route, /main/login:

proxy:
  #ssl: true
  host: 159.203.76.193
  # kamal-proxy connects to your container over port 80, use `app_port` to specify a different port.
  healthcheck:
    interval: 3
    path: /main/login
    timeout: 3

Another deployment attempt finally shows a stable image:

  ...
  INFO Container is healthy!
  INFO [fdcb4fa8] Running docker tag registry.digitalocean.com/team-james-demo/sherldoc:dd83294d6a178fa10cb8797b1584f7e14b04f280_uncommitted_ef152c1c348ee1bc registry.digitalocean.com/team-james-demo/sherldoc:latest on 159.203.76.193
  INFO [fdcb4fa8] Finished in 0.690 seconds with exit status 0 (successful).
Prune old containers and images...
  INFO [84bd3457] Running docker ps -q -a --filter label=service=sherldoc --filter status=created --filter status=exited --filter status=dead | tail -n +6 | while read container_id; do docker rm $container_id; done on 159.203.76.193
  INFO [84bd3457] Finished in 0.453 seconds with exit status 0 (successful).
  INFO [109e763d] Running docker image prune --force --filter label=service=sherldoc on 159.203.76.193
  INFO [109e763d] Finished in 0.476 seconds with exit status 0 (successful).
  INFO [3e2925c5] Running docker image ls --filter label=service=sherldoc --format '{{.ID}} {{.Repository}}:{{.Tag}}' | grep -v -w "$(docker container ls -a --format '{{.Image}}\|' --filter label=service=sherldoc | tr -d '\n')registry.digitalocean.com/team-james-demo/sherldoc:latest\|registry.digitalocean.com/team-james-demo/sherldoc:<none>" | while read image tag; do docker rmi $tag; done on 159.203.76.193
  INFO [3e2925c5] Finished in 0.650 seconds with exit status 0 (successful).
Releasing the deploy lock...
  Finished all in 299.5 seconds

I can finally reach the web application through the VM’s public IP address, as kamal-proxy is forwarding requests to the sherldoc web application container.

Your Reward for Your Hard Work is Yet More Hard Work

Sometimes your purpose in life is just to serve as an example to others.

It occurs to me that while this is a momentary victory, I still have a ways to go until sherldoc is fully functional. For example, I still need to stand up the application queue workers, the Apache Tika service, and a Redis queue.

The original sherldoc deployment uses Docker volumes to share a deployed set of files into each concerned container (as /app). Since Kamal expects a different paradigm of container orchestration, this approach is not ideal.

Instead of sharing the application files among different containers, each provisioned for a distinct concern, I adopt the Kamal expectation that the same web application image is deployed, but with distinct “roles”. The differentiating factor is the configuration for each of the non-“web” roles.

In this case, for example, I want to provide a specific entrypoint command for the web app vs a job queue worker container.

I separate the common web application steps into a bash shell script that is always invoked (docker/app/php/prepare_app.sh). I restrict the script to only the steps common to the web app role and any other role using the same image.
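
The repository version of the script is the authoritative one; as a sketch, assuming it mirrors the CMD steps from the earlier Dockerfile, docker/app/php/prepare_app.sh could be created like this:

```shell
# Write a hypothetical prepare_app.sh; the contents are assumed from the
# Dockerfile CMD steps, not copied from the repository.
cat > prepare_app.sh <<'EOF'
#!/usr/bin/env bash
set -e                       # abort on the first failed step

php artisan migrate --force  # apply pending database migrations
php artisan route:cache      # prime the route cache
php artisan config:cache     # prime the config cache
npm install                  # install frontend dependencies
npm run dev &                # start the frontend build in the background
EOF
chmod +x prepare_app.sh
```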

The web app role now includes a specific “cmd” configuration:

servers:
  web:
    hosts:
      - 159.203.76.193
    cmd: bash -c "/app/prepare_app.sh && cd /app/public; frankenphp php-server"

I check the deployment again:

...
Prune old containers and images...
  INFO [5eaef66c] Running docker ps -q -a --filter label=service=sherldoc --filter status=created --filter status=exited --filter status=dead | tail -n +6 | while read container_id; do docker rm $container_id; done on 159.203.76.193
  INFO [5eaef66c] Finished in 2.491 seconds with exit status 0 (successful).
  INFO [044dc517] Running docker image prune --force --filter label=service=sherldoc on 159.203.76.193
  INFO [044dc517] Finished in 1.208 seconds with exit status 0 (successful).
  INFO [74bda6c6] Running docker image ls --filter label=service=sherldoc --format '{{.ID}} {{.Repository}}:{{.Tag}}' | grep -v -w "$(docker container ls -a --format '{{.Image}}\|' --filter label=service=sherldoc | tr -d '\n')registry.digitalocean.com/team-james-demo/sherldoc-web:latest\|registry.digitalocean.com/team-james-demo/sherldoc-web:<none>" | while read image tag; do docker rmi $tag; done on 159.203.76.193
  INFO [74bda6c6] Finished in 3.217 seconds with exit status 0 (successful).
Releasing the deploy lock...
  Finished all in 50.6 seconds

It’s looking good! Again, I check the web service with an HTTP request through the public IP:

curl -v -s http://159.203.76.193/main/login 1> /dev/null
*   Trying 159.203.76.193:80...
* Connected to 159.203.76.193 (159.203.76.193) port 80 (#0)
> GET /main/login HTTP/1.1
> Host: 159.203.76.193
> User-Agent: curl/7.81.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
...
* Connection #0 to host 159.203.76.193 left intact

Next, I plan to add the additional role for the job queue worker and other accessories.

Follow along at my fork of sherldoc on github.

Building in Public: Deploy a PHP application with Kamal

The Challenge

After Michael Kimsal released his proof of concept project sherldoc, I wondered whether I could deploy it using Kamal, a deployment tool typically used for Rails web applications. Supposedly, Kamal is application and framework agnostic.

Getting this to work is like looking for Bigfoot or that Giant Squid you see in those late night shows on the History Channel. It should be possible!

First, let us look at sherldoc’s description:

Web service endpoint to scan a document for

  • existence of keyword or phrase
  • absence of keyword or phrase

The project provides a Docker Compose configuration for deployment. Several assumptions are built into the current configuration, which I will outline later. The challenge will be to convert these assumptions into Kamal configuration directives and then get a running sherldoc instance.

Why Kamal?

Kamal promises:

Kamal offers zero-downtime deploys, rolling restarts, asset bridging, remote builds, accessory service management, and everything else you need to deploy and manage your web app in production with Docker. Originally built for Rails apps, Kamal will work with any type of web app that can be containerized.

and

Kamal basically is Capistrano for Containers, without the need to carefully prepare servers in advance. 

Capistrano was the old reliable deployment tool for many years in the Rails world. The idea that we can use the same ease of deployment with containers on a naked VM is attractive.

The whole point is to make it easy to put your application on a low-cost VM, either in some hosted environment or perhaps an on-premise machine. Consider this to be a potent weapon in the crusade against BigVM™.

I am not a PHP expert, so getting this to work will push me outside of my comfort zone. I may fail. I may not find the Giant Squid or Bigfoot. In the worst case, I’ll learn something new.

Analyzing the Docker Compose configuration

I start by examining the original docker-compose.yml file for clues:

services:
  nginx:
    image: nginx:stable-bullseye
...
  app:
    image: sherldoc/web:1.0
...
  redis:
    image: redis:6
...
  tika:
    image: apache/tika:2.9.2.1-full
...
  supervisor:
...
  psql:
    image: postgres:16.1-bullseye
...

We start with the entry “app”, the PHP web application. This will be the source of our Kamal application image. Kamal builds this image and then stores it in a registry we specify. Once successfully connected to our provisioned virtual machine, Kamal installs Docker and then downloads the image from the registry.

We can see that the original configuration includes a supervisor process container, “supervisor”. Since we will only require a single image, the PHP application, the supervisor is not needed. Therefore, we omit it in our Kamal configuration. [Edit: This is not correct. The supervisor image is actually the sherldoc application “jobs” worker process. We will need to replicate this and, as we will see, Kamal anticipates this need.]

The “nginx” entry hints that the PHP web application depends on the Nginx application proxy. We can peek inside the associated Nginx configuration (docker/app/nginx/conf.d/app.conf) and see the application directives. Kamal provides an application proxy (“kamal-proxy”, https://kamal-deploy.org/docs/configuration/proxy/) which, in theory, provides the same capabilities.

The “redis”, “tika”, and “postgres” entries indicate additional services that the web application relies on. Each of these services has an associated container image.

Kamal provides configuration options for “accessory” services as well (https://kamal-deploy.org/docs/configuration/accessories/). As long as we can use the same images and apply similar configuration options to match the original values in the docker-compose.yml file it should work.
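
As a sketch of that translation (image names taken from the compose file above; the host is assumed to be the same VM, and more options will surely be needed):

```yaml
accessories:
  redis:
    host: 159.203.76.193
    image: redis:6

  tika:
    host: 159.203.76.193
    image: apache/tika:2.9.2.1-full
```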

Preparing for Kamal

I forked the project to avoid bombarding the original project with pull requests. Perhaps my work will be merged in later.

Next I installed Kamal and initialized the local workspace with kamal init.

Next, I edited the default Kamal configuration (config/deploy.yml) with the following, removing all the comments for easier reading:

service: sherldoc

# Name of the container image.
image: sherldoc

# Deploy to these servers.
servers:
  web:
    - 159.203.76.193

registry:
  server: registry.digitalocean.com/team-james-demo
  username: my-user
  password:
    - KAMAL_REGISTRY_PASSWORD

builder:
  arch: amd64
  dockerfile: docker/app/php/php.dockerfile

Let’s review the contents:

service: sherldoc

# Name of the container image.
image: sherldoc

The name is just the name of the original project. No magic here.

servers:
  web:
    - 159.203.76.193

The IP address is the address of the VM provisioned at my provider of choice, Digital Ocean. This is the cheapest configuration I could find. It might be too small or under-provisioned, but we can fix that later.

registry:
  server: registry.digitalocean.com/team-james-demo
  username: my-user
  password:
    - KAMAL_REGISTRY_PASSWORD

Kamal will generate a Docker image, then push that image into your registry. Because I am using Digital Ocean I can use the Digital Ocean registry service. I could have also used Docker Hub, AWS Elastic Container Registry, or any other container registry.

The KAMAL_REGISTRY_PASSWORD is an environment variable set to the credentials (an authentication token) provided by Digital Ocean. For security reasons, I don’t want to commit the actual value to the configuration file, so I supply it at runtime.
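
In practice that means exporting the variable in the shell before running Kamal; the value shown here is a placeholder, not a real token:

```shell
# Placeholder token: substitute the authentication token generated in
# the Digital Ocean control panel.
export KAMAL_REGISTRY_PASSWORD="dop_v1_replace_with_your_token"
```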

First deployment attempt

With all these things in place, we kick off the build with “kamal setup”.

INFO [fe0776d2] Running /usr/bin/env mkdir -p .kamal on 159.203.76.193
INFO [fe0776d2] Finished in 1.702 seconds with exit status 0 (successful).
Acquiring the deploy lock...
Ensure Docker is installed...
INFO [edde3944] Running docker -v on 159.203.76.193
INFO [edde3944] Finished in 0.186 seconds with exit status 0 (successful).
Log into image registry...
INFO [8eb7c038] Running docker login registry.digitalocean.com/team-james-demo -u [REDACTED] -p [REDACTED] as jdjeffers@localhost
... (lots of logs cut out here) 
INFO [9ce887de] Running docker container ls --all --filter name=^sherldoc-web-7ceb4de587a2119c9b007f40973a40cd7eb88b8e$ --quiet | xargs docker inspect --format '{{json .State.Health}}' on 159.203.76.193
INFO [9ce887de] Finished in 0.249 seconds with exit status 0 (successful).
 ERROR null
INFO [54c5ab35] Running docker container ls --all --filter name=^sherldoc-web-7ceb4de587a2119c9b007f40973a40cd7eb88b8e$ --quiet | xargs docker stop on 159.203.76.193
INFO [54c5ab35] Finished in 0.404 seconds with exit status 0 (successful).
  Finished all in 571.2 seconds
Releasing the deploy lock...
  Finished all in 573.5 seconds
ERROR (SSHKit::Command::Failed): Exception while executing on host 159.203.76.193: docker exit status: 1
docker stdout: Nothing written
docker stderr: Error: target failed to become healthy

This result is expected for several reasons. The original application:

  1. doesn’t provide a default 200 OK to the kamal health check request at “/up”,
  2. expects a redis instance,
  3. expects an Nginx application proxy,
  4. expects a tika server process,
  5. expects a PostgreSQL database.

Without these other services, the sherldoc PHP application is probably not going to work! We’ll fix these issues next.

Want to follow this quest? Read part 2!