Docker and Plone for add-on development

This is inspired by this discussion: Ease installation of Plone at hosters
Hopefully someone can point me in the right direction.

I mostly need guidance on best practice for using docker locally against source code in src for add-on development.
I'm able to create my own custom dockerfiles and I followed an example to get docker-compose to work.

For the sake of context, I've added a bit more background below.


I had a discussion with one of our devs yesterday and he almost expected that we should have a docker image for a client project. This has me looking into best practices for Docker and Plone.

It felt a bit fuzzy in 2016 but ... more than a year later, it seems like a very sensible way forward.

From the perspective of on-boarding a new developer (with something like docker-compose).
My dream workflow might look like this:

  1. git clone
  2. docker-compose up
  3. some solution for working with the src folder, still using the editor of their choice (I need to look at using volumes properly)
  4. run tests (in docker)
  5. deploy to a Docker-based host
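
That workflow could be sketched as a compose file along these lines (a sketch only; the service name, image tag, and environment variables are assumptions based on the plone.docker image, not from a real project):

```yaml
# Hypothetical docker-compose.yml: runs Plone in a container while the
# add-on source in ./src stays on the host, editable with any editor.
version: "2"

services:
  plone:
    image: plone:5                      # assumed plone.docker base image
    ports:
      - "8080:8080"
    environment:
      - PLONE_DEVELOP=src/              # develop packages found under src/
    volumes:
      - ./src:/plone/instance/src       # host src folder mounted into the container
```

With something like this, `git clone` plus `docker-compose up` covers steps 1-2, and the volume mount is one candidate mechanism for step 3.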

This approach looks like a win to me, but I'm not 100% there on the mechanisms for step 3.

update 1: A few notes for those working with Docker:

Modify the Dockerfile provided by plone.docker (if the defaults don't fit your needs)

For my use case I slightly modified the file: I stripped a lot of extras from my buildout.cfg and saved the result as a separate docker.cfg file.

I then altered the buildout line in the Dockerfile to use docker.cfg instead of buildout.cfg:

COPY docker.cfg /plone/instance/docker.cfg
... snip...
 && sudo -u plone LIBRARY_PATH=/lib:/usr/lib bin/buildout -v -c docker.cfg \
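
A minimal docker.cfg along these lines might just extend the stock configuration and trim it down (a sketch; the part name is an example, not my actual file):

```ini
# Hypothetical docker.cfg: reuse the stock buildout but drop the extras
[buildout]
extends = buildout.cfg

# keep only the parts needed inside the container (part name is an example)
parts =
    instance
```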

Beware of the pypi http vs https issue

Add this to your buildout.cfg/docker.cfg (or equivalent file)

index =

It fixes this issue:
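
For reference, the setting points buildout at the HTTPS package index; the exact URL I used was lost from the post above, but it would be something like this (the URL here is my assumption, using the current canonical index):

```ini
[buildout]
# Force the HTTPS package index; pypi.org/simple is the current
# canonical index (URL is an assumption, the original value was lost)
index = https://pypi.org/simple/
```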

may I ask why? can you elaborate? I'm trying to understand your use case.

I'd like to reduce the number of moving parts needed to get started on development. Managing dependencies across different development environments (Ubuntu, Arch Linux, Centos, Mac OS X) gets tiresome. One person uses Arch Linux, another uses Mac OS X and another uses Windows (well Ubuntu in a VM on Windows). Each person loses "X" number of minutes/hours spinning up their environment. In the past I've just tried to have everyone use the same thing, the latest Ubuntu LTS, but this is still, believe it or not, a high bar (or a time sink). I'd like to get back the "X" number of minutes we're losing per developer per project.

Docker is emerging as a way for us to have a common target, managed in a container. Everyone is then developing against the exact environment that will be deployed to production. I know you can do something similar with Vagrant, but I like the deployment story that containers promise.

When I started I had a few years of Linux tinkering under my belt, so I knew a good deal of *nix. Dealing with makefiles, permissions, etc. was just part of getting things done. I think *nix knowledge is still important, but not for all the devs: let the *nix folks set up the containers and let the devs consume them. Then our devs just need to worry about pulling the right "base" container and getting to work.

This is the first time I'm gathering my thoughts into words so I hope they make some sense.


makes sense.

this is pretty important, IMO, because it helps you understand the problem you're trying to solve and whether your proposed solution will actually help.

good luck!

To be clear, it's the first time I'm putting into words why I think Docker could be the solution to the issue.

update 2: the missing piece for me is getting mr.developer to work with a dockerized plone

I think I'm looking for the best way to connect mr.developer into a sensible docker workflow. The best clue I've found so far is here:

It assumes a git url which may be acceptable for my use case (we will see).
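
In buildout terms, wiring mr.developer to a checkout under src/ might look like this (a sketch; the package name and URL are hypothetical, not from my project):

```ini
[buildout]
extensions = mr.developer
sources = sources
auto-checkout = *

[sources]
# hypothetical add-on checked out from git into src/
my.addon = git https://example.com/my.addon.git
```

The open question for me is how this interacts with a volume-mounted src/ inside the container rather than a fresh git checkout.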

Docker and Plone for production is a no-go in my own projects. I am using Docker for running some Plone-based demo sites and some internal projects, however I refuse to use Docker in customer projects for deployments.

Building Docker images, and in particular debugging them, is very time-consuming and error-prone. On a normal system you can re-run buildout in case of a buildout error and it will continue with the installation instead of repeating the whole buildout from scratch. A buildout that works nicely on your local system does not necessarily install the same way on some Docker image with some base image. It usually takes me several hours to update a particular Dockerfile in order to make it work with new code, a new Plone version, new dependencies. Lots of trial and error. Debugging a failed buildout under Docker is a major pain in the @@@. One workaround in some cases is to build a dedicated Plone base image with all the bits and pieces and use this as the base layer for a more specific container with e.g. additional add-ons.

My point on when to use Docker and Plone: for stuff like demo instances that you want to restart arbitrarily.
Traditional deployment, either manual or using some kind of automation, still feels safer and more reproducible, and in particular much faster, than tinkering around with Docker.


This is the direction I'm exploring now. Spend time getting a base image right and then build customizations on that. The base image becomes the core for new projects and we improve it at intervals.

update 3: plone.docker is suffering from the pypi http/https issue

As a quick experiment I ran the following on a server with docker

docker run -p 8080:8080 \
    -e PLONE_ADDONS="" \
    -e PLONE_DEVELOP="src/" \
    -v $(pwd)/src:/plone/instance/src \
    plone fg

My package for the custom site includes as a dependency.

I'm getting the following error, which I'm sure is related to the fact that pypi doesn't accept http traffic anymore. I just need to figure out how to "convince" dockerized plone to use the https index.

/plone/instance/lib/python2.7/site-packages/pkg_resources/ RuntimeWarning: You have iterated over the result of pkg_resources.parse_version. This is a legacy behavior which is inconsistent with the new version class introduced in setuptools 8.0. In most cases, conversion to a tuple is unnecessary. For comparison of versions, sort the Version instances directly. If you have another use case requiring the tuple, please file a bug with the setuptools project describing that need.
Couldn't find index page for '' (maybe misspelled?)
Getting distribution for ''.
Couldn't find index page for '' (maybe misspelled?)
  Installing instance.
  Getting distribution for ''.
Error: Couldn't find a distribution for ''.

update 4: workaround for the pypi http/https issue

I had to add a custom buildout file custom.cfg

docker run -p 8080:8080 \
    -e PLONE_ADDONS="" \
    -e PLONE_DEVELOP="src/" \
    -v $(pwd)/src:/plone/instance/src \
    -v $(pwd)/custom.cfg:/plone/instance/custom.cfg \
    plone

In my custom.cfg I set the index in the [buildout] section

.. snip ...
index =

While this works, I don't like it. I'll use it for quick tests, but I'm going to need my own base image very soon! Next stop will be this -->

update 5: creating a base image for my project

I needed an image that was closer to what my add-on will be deployed against. Without such an image I'd constantly have to build against a custom.cfg (noted above).

So I followed the guide at

And ended up with something like this:

A site.cfg

[buildout]
extends =
show-picked-versions = true
index =
extensions = mr.developer
parts +=
auto-checkout = *

[sources]
Products.OneTimeTokenPAS = git

[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
http-address = 8080
resources = ${buildout:directory}/resources
eggs +=

and a Dockerfile
I needed git, so I added it to my build dependencies in the Dockerfile.

FROM plone/plone:5.0.8-alpine

COPY site.cfg /plone/instance/
COPY ./ /plone/instance/src/
USER root
RUN apk add --no-cache --virtual .build-deps \
    curl \
    bzip2-dev \
    gcc \
    libc-dev \
    ncurses-dev \
    openssl-dev \
    readline-dev \
    zlib-dev \
    sudo \
    make \
    libxml2-dev \
    libxslt-dev \
    libjpeg-turbo-dev \
    libpng-dev \
    openssl \
    git \
    && cd /plone/instance \
    && chown -R plone:plone /plone /data \
    && sudo -u plone LIBRARY_PATH=/lib:/usr/lib bin/buildout -c site.cfg \
    && apk del .build-deps \
    && apk add --no-cache --virtual .run-deps \
    bash \
    libxml2 \
    libxslt \
    libjpeg-turbo \
    rsync \
    && find /plone \( -type f -a -name '*.pyc' -o -name '*.pyo' \) -exec rm -rf '{}' +
USER plone

My new problem is that after building

docker build -t my-base-image .

When I run

docker run -p 8080:8080 my-base-image

I can access the front page, where I am prompted to add a Plone site.

... but when I try to add a Plone site, the container exits.

new problem to solve......
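
When a container dies like this, the first thing I'd reach for is the exit status and logs (standard Docker commands; the container id and image name are placeholders):

```shell
# List recently exited containers and their exit codes
docker ps -a --filter "status=exited"

# Read the process output from the dead container
docker logs <container-id>

# Re-run in the foreground to watch the crash as it happens
docker run -it -p 8080:8080 my-base-image
```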

Everything that I have "discovered" so far is already in these slides


@zopyx There are ways around the buildout starting from scratch. The way we use is that you have a recursive image.

  1. You have a Dockerfile-bootstrap that just installs system packages and buildout.
  2. You build it and tag it "myapp"
  3. You have a Dockerfile that uses "myapp" as a base image and runs buildout.
  4. You build and tag that image "myapp"

Any time you want a compact repeatable build, do steps 1-4. Any time you want to debug the buildout, or just do a quick build with only the changes, do steps 3-4.
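
The two-Dockerfile setup could be sketched like this (names and contents are illustrative, not the actual setup described above):

```dockerfile
# Dockerfile-bootstrap: system packages and a bootstrapped buildout only.
# Build and tag it first:  docker build -f Dockerfile-bootstrap -t myapp .
FROM plone/plone:5.0.8-alpine
USER root
RUN apk add --no-cache gcc libc-dev git
USER plone

# ---- Dockerfile: starts FROM the image tagged above and runs buildout ----
# Build it with the same tag:  docker build -t myapp .
# FROM myapp
# COPY site.cfg /plone/instance/
# RUN bin/buildout -c site.cfg
```

Because the second build starts from the already-tagged image, a failed buildout can be retried without redoing the system-package layer.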

Works well for us. We do use Docker in production, currently with Rancher. We have it set up so VHM controls the routing to various versions of Plone, ZRS for replication, etc.
But everything is still more fiddly than it should be.

I know the feeling, but it sounds like you're moving in the right direction.


That's what I do but even updating a base image often leads to strange issues that are hard to debug.


may I ask why you're not using Vagrant for this? that's the recommended way of doing things according to our own training:

I would like to know the reasons your developer gave you for pushing Docker into the equation because, according to your own thread, you're just making things worse by adding _just-another-moving-part_™ :wink:

this is good reading:


For me, Docker is more lightweight. I can see how that may not be so on an OS like Windows though.
I expect to have concluded this exploration by early December. I can then say whether it is working for us.

Update. We are now using docker for some deployments. In terms of development, not yet.

Could you detail your data persistence / storage stack?