I'm looking for the recommended way to build a docker container for Volto with addons
This thread hints at it and shows a possible approach: Is there a way to tell the plone/plone-frontend container to include additional Volto addons? - #16 by pigeonflight
Failed attempt
Since, in my setup, my addons are defined in the package.json of my Volto package, and the Dockerfile lives in that same package, I tried adding a line to the Dockerfile that copies over the package.json.
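For context, a Volto project declares its addons in package.json roughly like this (the addon names here are placeholders, not my actual packages):
{
  "name": "frontend",
  "addons": [
    "some-npm-addon",
    "my-local-addon"
  ]
}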
I also had some weirdness where I had both a jsconfig.json and a tsconfig.json. My solution, for now, is to delete the tsconfig.json.
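Volto resolves local addons through the paths in jsconfig.json; a minimal sketch (the addon name is again a placeholder):
{
  "compilerOptions": {
    "baseUrl": "src",
    "paths": {
      "my-local-addon": ["addons/my-local-addon/src"]
    }
  }
}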
I also abandoned the heredoc formatting because the Docker builder in my CI/CD on GitLab doesn't support it, and I was feeling too lazy to spend the time to wrap my head around buildx at the moment.
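For anyone curious, the heredoc style in question looks roughly like this; it requires BuildKit and the syntax directive:
# syntax=docker/dockerfile:1
RUN <<EOT
set -e
/setupAddon
yarn install --network-timeout 1000000
yarn build
EOT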
Here's the Dockerfile that I most recently tried and failed with:
# syntax=docker/dockerfile:1
ARG VOLTO_VERSION
FROM plone/frontend-builder:${VOLTO_VERSION} as builder
# Copy jsconfig.json configuration
COPY --chown=node:node jsconfig.json /app/
# <====== COPY the package.json
COPY --chown=node:node package.json /app/
# We copy here to install all addons (package.json)
COPY --chown=node:node src/addons/ /app/src/addons/
RUN set -e && \
    rm -rf tsconfig.json && \
    /setupAddon && \
    yarn install --network-timeout 1000000 && \
    yarn build && \
    rm -rf cache omelette .yarn/cache
FROM plone/frontend-prod-config:${VOLTO_VERSION}
LABEL maintainer="Plone Foundation <conf@plone.org>" \
      org.label-schema.name="ploneconf-2023-frontend" \
      org.label-schema.description="Plone Conference 2023 frontend." \
      org.label-schema.vendor="Plone Foundation"
COPY --from=builder /app/ /app/
Didn't work.
I'm clearly missing something that should be staring me in the face.
/setupAddon is the name of the helper script that I thought would do what I wanted with the package.json.
Simple Plone 6 deployment is not yet ready for the "distracted" developer.
So with a bit more motivation, attention, reading, and research, I've been able to get a reliable deployment scenario for GitLab CI.
I got the big ideas from reading the Dockerfile in the source of plone-frontend and the Dockerfile used for the PloneConf 2023 site.
The Dockerfile
Important note: Since I have some customizations and components in my root Volto package, it was necessary to copy my src/customizations folder, src/components folder, and src/config.js file to my target container. Perhaps in the future I'll restrict all customizations to the add-ons, which will lead to a simpler Dockerfile.
# syntax=docker/dockerfile:1
ARG VOLTO_VERSION
FROM plone/frontend-builder:${VOLTO_VERSION} as builder
# Copy jsconfig.json configuration
COPY --chown=node:node jsconfig.json /app/
# We copy here to install all addons (package.json and src/addons)
COPY --chown=node:node src/addons/ /app/src/addons/
COPY --chown=node:node package.json /app/
# We copy src/components, src/customizations, src/config.js, and public/ (favicons and other static files) from the root of the package
COPY --chown=node:node src/components/ /app/src/components/
COPY --chown=node:node src/customizations/ /app/src/customizations/
COPY --chown=node:node src/config.js /app/src/
COPY --chown=node:node public/ /app/public/
# We use a jsconfig.json, so below, we remove the tsconfig.json
RUN set -e && \
    rm -rf tsconfig.json && \
    /setupAddon && \
    yarn install --network-timeout 1000000 && \
    yarn build && \
    rm -rf cache omelette .yarn/cache
FROM plone/frontend-prod-config:${VOLTO_VERSION}
LABEL maintainer="Plone Foundation <conf@plone.org>" \
      org.label-schema.name="ploneconf-2023-frontend" \
      org.label-schema.description="Plone Conference 2023 frontend." \
      org.label-schema.vendor="Plone Foundation"
COPY --from=builder /app/ /app/
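To sanity-check this outside CI, the image can be built locally with the same build argument (the tag name is arbitrary):
docker build --build-arg VOLTO_VERSION=17.14.0 -t frontend:test .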
GitLab for CI
I use GitLab for my CI/CD, while the default approach I see in the community now is GitHub Actions. As a result, the process of building an image in CI is a bit different from one based on a .github/workflows setup.
One big difference I see is that there's a special buildx image used on the GitHub Actions side of things.
I had to configure my .gitlab-ci.yml file explicitly with the buildx command. Here's my full .gitlab-ci.yml file:
# This is a variation on something created by Florent CHAUVEAU <florent.chauveau@gmail.com>
# Mentioned here: https://blog.callr.tech/building-docker-images-with-gitlab-ci-best-practices/
stages:
  - build
  - test
  - push
  - deploy

build image:
  stage: build
  image: docker:25.0.3
  services:
    - docker:25.0.3-dind
  script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    # The '>' folds the docker build command across multiple lines;
    # the trailing '.' is the build context, not a terminator
    - >
      docker build
      --pull
      --cache-from $CI_REGISTRY_IMAGE:latest
      --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
      .
    # RETAG IMAGE LATEST WITH DIGEST FROM PREVIOUS IMAGE
    #- IMAGE_ID=$(docker images | grep $CI_REGISTRY/$CI_PROJECT_PATH\/$IMAGE_NAME | awk '{print $3}')
    #- docker tag $IMAGE_ID $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:latest
    # PUSH IMAGE COMMIT SHA
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

build frontend image:
  variables:
    VOLTO_VERSION: "17.14.0"
  stage: build
  image: docker:25.0.3
  services:
    - docker:25.0.3-dind
  script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    # The '>' folds the docker buildx build command across multiple lines;
    # the trailing '.' is the build context
    - cd frontend/
    - >
      docker buildx build
      --tag $CI_REGISTRY_IMAGE/frontend:$CI_COMMIT_SHORT_SHA
      --build-arg VOLTO_VERSION=$VOLTO_VERSION
      --provenance=false
      .
    # RETAG IMAGE LATEST WITH DIGEST FROM PREVIOUS IMAGE
    #- IMAGE_ID=$(docker images | grep $CI_REGISTRY/$CI_PROJECT_PATH\/$IMAGE_NAME | awk '{print $3}')
    #- docker tag $IMAGE_ID $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:latest
    # PUSH IMAGE COMMIT SHA
    - docker push $CI_REGISTRY_IMAGE/frontend:$CI_COMMIT_SHORT_SHA

test-image:
  stage: test
  services:
    - name: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
      alias: site
      # this command must run successfully to confirm that the image is viable
      #command: ["bin/zconsole", "run", "/etc/relstorage.conf", "scripts/cors.py"]
  image: python:3.8
  script:
    - curl http://site:8080/

# Finally, the goal here is to Docker tag any Git tag
# GitLab will start a new pipeline every time a Git tag is created, which is pretty awesome
Push latest:
  variables:
    # We are just playing with Docker here.
    # We do not need GitLab to clone the source code.
    GIT_STRATEGY: none
  stage: push
  image: docker:20.10.10
  services:
    - docker:20.10.10-dind
  only:
    # Only "master" should be tagged "latest"
    - master
  script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    # Because we have no guarantee that this job will be picked up by the same runner
    # that built the image in the previous step, we pull it again locally
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    # Then we tag it "latest"
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_REGISTRY_IMAGE:latest
    # Annnd we push it.
    - docker push $CI_REGISTRY_IMAGE:latest

Push tag:
  variables:
    # We are just playing with Docker here.
    # We do not need GitLab to clone the source code.
    GIT_STRATEGY: none
  stage: push
  image: docker:20.10.10
  services:
    - docker:20.10.10-dind
  only:
    # We want this job to be run on tags only.
    - tags
  script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

Deploy:
  stage: deploy
  only:
    # We want this job to be run on tags only.
    - tags
  script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - mkdir -p ~/.ssh
    - chmod 400 $SSH_PRIVATE_KEY
    - ssh-add "$SSH_PRIVATE_KEY"
    - chmod 700 ~/.ssh
    - ssh-keyscan $VM_IP_ADDRESS >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - ssh $SSH_USER@$VM_IP_ADDRESS "cd deployment_folder && ./update_container -t $CI_COMMIT_SHORT_SHA"
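The update_container script referenced in the Deploy job isn't shown above. A minimal sketch of what such a script could look like, assuming a Docker Compose setup where the image tag is supplied via an environment variable (all names here are assumptions, not my actual script):
#!/bin/sh
# Hypothetical helper: pull and restart containers at a given tag.
# Assumes docker-compose.yml references ${IMAGE_TAG} in its image: lines.
set -e
while getopts "t:" opt; do
  case "$opt" in
    t) IMAGE_TAG="$OPTARG" ;;
    *) echo "usage: $0 -t <tag>" >&2; exit 1 ;;
  esac
done
export IMAGE_TAG="${IMAGE_TAG:?missing -t tag}"
docker compose pull    # fetch the images pushed by CI
docker compose up -d   # recreate containers on the new tag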
Thoughts/Considerations
- If I switch from using jsconfig.json to tsconfig.json, I may have to revisit this approach
- I definitely will be doing all my customisations in src/addons in the future
- There's probably a better way, but I found the best way to customise my favicons was to put them in the public folder at the root of my Volto package. Once I've sorted that out, I may be able to do without that strategy.
Everything now works for me; the last part is a script that deploys the updated containers when releases are tagged in the repo. Maybe this is the beginning of a Plone talk.
Why this approach matters to me:
The benefit is that builds happen away from the server. This results in the following:
- Easier deployments: just update a configuration file and pull the new containers (see the compose sketch below)
- Less RAM required to run the Volto site. Volto builds can take 6 to 8 GB of RAM, while running Volto is doable on 4 GB or less.
- Catch broken code in CI/CD, rather than when building and deploying in production.
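To illustrate the "update a configuration file and pull" point above, the compose service can pin the frontend image by tag (a sketch; the registry path, environment, and port mapping are assumptions):
services:
  frontend:
    image: registry.example.com/myproject/frontend:${IMAGE_TAG:-latest}
    environment:
      # public URL of the Plone backend API
      RAZZLE_API_PATH: "https://plone.example.com"
    ports:
      - "3000:3000"  # Volto listens on 3000 by default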
This brings the benefits of containers to smaller, non-Kubernetes deployments.