I mostly need guidance on best practice for using docker locally against source code in src for add-on development.
I'm able to create my own custom dockerfiles and I followed an example to get docker-compose to work.
For the sake of context, I've added a bit more background below.
Background
I had a discussion with one of our devs yesterday and he almost expected that we should have a docker image for a client project. This has me looking into best practices for Docker and Plone.
It felt a bit fuzzy in 2016 but ... more than a year later, it seems like a very sensible way forward.
From the perspective of onboarding a new developer (with something like docker-compose), my dream workflow might look like this:
1. git clone
2. docker-compose up
3. some solution for working with the src folder, still using the editor of their choice (I need to look at using volumes properly)
4. run tests (in Docker)
5. deploy to a Docker-based host
This approach looks like a win to me, but I'm not 100% there on the mechanisms for step 3.
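For step 3, the usual answer is a bind-mounted volume. A minimal docker-compose.yml sketch of the kind of setup described above (the service name, image tag, and container-side src path are assumptions, not something tested against this project):

```yaml
version: "3"
services:
  plone:
    image: plone:5.1          # assumed base image; swap in your custom image
    ports:
      - "8080:8080"
    volumes:
      # step 3: bind-mount the local src folder into the container so
      # developers keep editing with their own tools on the host while
      # the container runs the same files
      - ./src:/plone/instance/src
```

With src mounted as a volume, step 4 could then be something along the lines of `docker-compose run --rm plone bin/test` (the exact test command depends on how the image is built).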
I'd like to reduce the number of moving parts needed to get started on development. Managing dependencies across different development environments (Ubuntu, Arch Linux, CentOS, Mac OS X) gets tiresome. One person uses Arch Linux, another uses Mac OS X, and another uses Windows (well, Ubuntu in a VM on Windows). Each person loses "X" minutes/hours spinning up their environment. In the past I've just tried to have everyone use the same thing, the latest Ubuntu LTS, but even this is, believe it or not, a high bar (or a time sink). I'd like to get back the "X" minutes we're losing per developer per project.
Docker is emerging as a way for us to have a common target, managed in a container. Everyone is then developing against the exact environment that will be deployed to production. I know you can do something similar with Vagrant, but I like the deployment story that containers promise.
When I started I had a few years of Linux tinkering under my belt, so I knew a good deal of *nix. Dealing with makefiles, permissions, etc. was just part of getting things done. I think *nix knowledge is still important, but not for all the devs: let the *nix guys set up the containers and let the devs consume them. Then our devs just need to worry about pulling the right "base" container and getting to work.
This is the first time I'm gathering my thoughts into words so I hope they make some sense.
This is pretty important, IMO, because it helps you understand the problem you're trying to solve and whether your proposed solution will actually help you.
Docker and Plone for production is a no-go in my own projects. I am using Docker for running some Plone-based demo sites like plone-demo.info and some internal projects; however, I refuse to use Docker for deployments in customer projects. Building Docker images, and in particular debugging them, is very time-consuming and error-prone. On a normal system you can re-run buildout after a buildout error and it will continue with the installation instead of repeating the whole buildout from scratch. A buildout that works nicely on your local system will not necessarily install the same way in a Docker image built on some base image. It usually takes me several hours to update a particular Dockerfile to make it work with new code, a new Plone version, or new dependencies. Lots of trial and error. Debugging a failed buildout under Docker is a major pain in the @@@. One workaround in some cases is to build a dedicated Plone base image with all the bits and pieces and use this as the base layer for a more specific container with e.g. additional add-ons.
My take on when to use Docker and Plone: for things like demo instances that you want to be able to restart arbitrarily.
Traditional deployment, whether manual or via some kind of automation, still feels safer, more reproducible, and in particular much faster than tinkering around with Docker.
This is the direction I'm exploring now. Spend time getting a base image right and then build customizations on that. The base image becomes the core for new projects and we improve it at intervals.
My setup.py for custom site includes plone.app.mosaic as a dependency.
I'm getting the following error, which I'm sure is related to the fact that PyPI no longer accepts plain HTTP traffic. I just need to figure out how to "convince" dockerized Plone to use the HTTPS index.
/plone/instance/lib/python2.7/site-packages/pkg_resources/__init__.py:193: RuntimeWarning: You have iterated over the result of pkg_resources.parse_version. This is a legacy behavior which is inconsistent with the new version class introduced in setuptools 8.0. In most cases, conversion to a tuple is unnecessary. For comparison of versions, sort the Version instances directly. If you have another use case requiring the tuple, please file a bug with the setuptools project describing that need.
stacklevel=1,
Couldn't find index page for 'plone.app.mosaic' (maybe misspelled?)
Getting distribution for 'plone.app.mosaic'.
Couldn't find index page for 'plone.app.mosaic' (maybe misspelled?)
While:
Installing instance.
Getting distribution for 'plone.app.mosaic'.
Error: Couldn't find a distribution for 'plone.app.mosaic'.
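For what it's worth, the usual fix for the HTTP-to-HTTPS PyPI change is to point buildout at the HTTPS simple index explicitly. A sketch of the relevant buildout.cfg fragment (where exactly it goes depends on how the image's buildout is laid out):

```ini
[buildout]
# PyPI dropped plain-HTTP access; force the HTTPS simple index so
# "Couldn't find index page" errors from the old URL go away
index = https://pypi.org/simple/
```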
I needed an image that was closer to what my add-on will be deployed against. Without such an image I'd have to constantly be building against a custom.cfg (noted above).
@zopyx There are ways around the buildout starting from scratch. The way we do it is with a recursive image:
1. You have a Dockerfile-bootstrap that just installs system packages and buildout.
2. You build it and tag it "myapp".
3. You have a Dockerfile that uses "myapp" as a base image and runs buildout.
4. You build and tag that image "myapp".
Any time you want a compact, repeatable build, do steps 1-4. Any time you want to debug the buildout or just do a quick build with only the changes, do steps 3-4.
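A sketch of the two-Dockerfile arrangement described above (file names, the "myapp" tag, paths, and package lists are assumptions for illustration):

```dockerfile
# Dockerfile-bootstrap: system packages and buildout only (steps 1-2)
FROM python:2.7
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential libxml2-dev libxslt1-dev
WORKDIR /app
COPY bootstrap.py buildout.cfg /app/
RUN python bootstrap.py
```

```dockerfile
# Dockerfile: builds FROM the bootstrap image and runs buildout (steps 3-4).
# Because the tag is reused, this re-runs only buildout on top of the
# already-installed system layer instead of starting from scratch.
FROM myapp
COPY . /app/
RUN bin/buildout
```

Built with something like `docker build -f Dockerfile-bootstrap -t myapp .` followed by `docker build -t myapp .`; re-running only the second command gives the quick debug cycle.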
Works well for us. We do use Docker in production, currently with Rancher. We have it set up so VHM controls the routing to various versions of Plone, ZRS handles replication, etc.
But everything is still more fiddly than it should be.
I would like to know the reasons your developer gave you for pushing Docker into the equation because, according to your own thread, you're just making things worse by adding _just-another-moving-part_™.
For me, Docker is more lightweight. I can see how that might not be the case on an OS like Windows, though.
I expect to have concluded this exploration by early December. I can then say whether it is working for us.