Benefits of Docker

I use Docker only for WordPress (basically for developing themes).

When developing for Plone, the 'new way' of using pip makes it very easy… just add a policy product and the only thing I copy is the var folder.

Without knowing too much about docker: what are the benefits of developing with docker?

For newcomers: someone else figures out the build-time requirements, like libjpeg for Pillow, for you. It lets you concentrate on the main problem at hand instead of the system-requirements side of Plone.

Using Plone and Docker for development makes no sense to me. You often have buildout failures, and restarting a buildout inside Docker restarts the complete build process from scratch, whereas a buildout on a normal system resumes where it left off. Analyzing buildout errors within a container is a pain. Docker is an efficiency killer.

I'd assume one replaces buildout with Docker. Both solve roughly the same problem: 'this needs to be here and there on the filesystem for x to work'.

Docker for the sake of development only makes sense for components that need to interact with Plone, like the database, external services, etc. Docker for Plone development alone, as said, makes no sense to me because it does not solve any problem. Where and how would Docker help me with Plone development instead of using Buildout?

There are solutions for this.
For example, you can have a Dockerfile that doesn't fail on buildout failure and builds on top of its own image.
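A minimal sketch of that idea, assuming an initial bootstrap image tagged `myplone` already exists (the image name is an assumption for illustration, not the actual setup):

```dockerfile
# Build repeatedly with: docker build -t myplone .
# Each rebuild uses the previous image as its base, so the work
# buildout already completed is preserved in earlier layers.
FROM myplone
# '|| true' means a buildout failure does not fail the docker build,
# so the layers produced so far are still committed to the image.
RUN bin/buildout || true
```

The trade-off is that failed attempts keep adding layers until you rebuild the bootstrap image from scratch.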

Exactly.
The only way I could get a newcomer who wasn't familiar with Python, buildout, and decoding build errors up and running was with Docker. Especially on Windows.
For example, I had someone writing robot tests for me using a few docker commands. It's especially easy with Docker images like selenium/standalone-chrome.
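For reference, a sketch of how such a setup might look (the port is the image's documented default; the env var name is hypothetical, standing in for however your test runner is told where Selenium lives):

```shell
# Start a standalone Chrome with a Selenium server listening on port 4444
docker run -d -p 4444:4444 --name chrome selenium/standalone-chrome

# Point the robot tests at it; ROBOT_SELENIUM_URL is an illustrative
# variable name, not one the original setup necessarily used
ROBOT_SELENIUM_URL=http://localhost:4444/wd/hub bin/test --all
```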

I know this is a very old post but please elaborate for us docker newbies.

I used this post to get access to a pdb, but every time I make a change to the add-on I must restart the instance. And plone.reload only refreshes templates because it thinks it's in production mode.

It looks like the restart is taking long because I have collective.exportimport in my add-ons. Is there a way to create a dev image with collective.exportimport included and 'boot' from that?
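One way to do that, sketched here under the assumption that you're starting from the plone/plone-backend image (adjust the tag to your version):

```dockerfile
FROM plone/plone-backend:6.0
# Install the add-on at build time so it is baked into the image
# and startup no longer has to pip-install it.
RUN ./bin/pip install collective.exportimport
```

You'd build it with something like `docker build -t myproject-backend .` and point your docker-compose.yml at `myproject-backend` instead of the stock image.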

@mikemets what I'm talking about is not, I think, a solution to your problem.
I was talking about speeding up build time by not making buildout restart from scratch each time it hits a build issue, i.e. making buildout work like it does outside of Docker.

I don't really understand your issue. Autoreloading add-ons in Plone has never worked well. Are you saying that Docker is taking a while to start not because buildout is running on startup (which it can be set to do) but because of some other reason?
If you want to create a different Dockerfile, just edit the Dockerfile and build from that.

WRT plone.reload, I use it for Python code and templates all the time and only restart the instance after ZCML changes. So I was wondering if there is a way to tell plone.reload in Docker that this is a dev instance, so the 'reload code' option becomes available.

Hopefully this will become obvious to me at some point, but at the moment it's kind of confusing. I only have a docker-compose.yml file in my directory, no Dockerfile.

Reading more in the Plone and Docker docs and looking at your reply again @djay, how do I prevent buildout from running every time I restart?

Maybe relevant? The Big List of Small Volto Rules · Issue #2810 · plone/volto · GitHub

It's old so it won't work as-is, but here is the code:

Dockerfile-bootstrap

FROM plone:5.0 as robot_tests
USER root
RUN apt-get update && apt-get install -y --no-install-recommends curl git libyaml-dev zlib1g-dev libjpeg-dev gcc build-essential autoconf libxml2-dev libxslt1-dev libsasl2-dev libssl-dev

ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 6.1

RUN ["/bin/bash","-c", "curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
    && ls /usr/local/nvm \
    && source $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default \
    && npm i -g npm@3 \
    && npm i -g typescript@2.1.4 \
    && npm cache clean \
    && npm set registry http://registry.npmjs.org \
    && npm update -g"]
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH      $NVM_DIR/v$NODE_VERSION/bin:$PATH
ENV ZSERVER_HOST  0.0.0.0

Dockerfile

FROM robot_tests as robot_tests
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 6.1
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH      $NVM_DIR/v$NODE_VERSION/bin:$PATH

COPY --chown=plone:plone ./src ./ide ./*.cfg ./*.sh ./*.js ./*.py ./*.md ./*.rst ./docs /plone/instance/
COPY --chown=plone:plone ./buildout-cache ./.npm /plone/

SHELL ["/bin/bash", "-c"]
USER plone

RUN mkdir -p /plone/.buildout && printf "[buildout]\neggs-directory=/plone/buildout-cache/eggs\ndownload-cache=/plone/buildout-cache/downloads\n" > /plone/.buildout/default.cfg

RUN ls -al src
RUN cd /plone/instance \
    && bin/buildout -c buildout.cfg annotate
RUN cd /plone/instance \
    && bin/buildout -c buildout.cfg install test resources robot instance \
    && source $NVM_DIR/nvm.sh && cd /plone/instance/ide  \
    && npm install \
    && npm run build \
    || true
USER root
RUN chown -R --from=root:root plone:plone .
USER plone

The basic idea is that you build the bootstrap one first; from then on, you use the other Dockerfile.
The main Dockerfile builds on top of its own image, since both files reference the same image name.
And buildout is set to always succeed, so its output is never rolled back.

The consequence is that you get an image that keeps accumulating layers as you try to debug why your buildout is failing. But you can always run the bootstrap again and reset it back to zero.
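The two-step flow described above would look roughly like this (the `robot_tests` tag matches the `FROM robot_tests` line in the posted Dockerfiles; the file names are the ones shown above):

```shell
# One-time bootstrap: build the base image with system deps and node
docker build -f Dockerfile-bootstrap -t robot_tests .

# Day-to-day: rebuild on top of the previous image; failed buildouts
# still commit their layers because the buildout RUN ends with '|| true'
docker build -f Dockerfile -t robot_tests .

# If the layers pile up too much, rerun the bootstrap to reset to zero
docker build -f Dockerfile-bootstrap -t robot_tests .
```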

@mikemets sorry, I answered the wrong question.

I don't know how you prevent buildout running on start. You need to read the Dockerfile; it defines the ENTRYPOINT command, which is what runs when the container starts up.

Thanks @tiberiuichim, this will save me a lot of time! To get runwsgi to work, however, I had to create my own dev.conf file (copied from etc/zope.conf) that defines values for the vars, because wsgi could not find values for them. Unless I am doing something wrong.

Here is my conf file:

%define INSTANCE /app
instancehome $INSTANCE

%define DEBUG_MODE on
%define SECURITY_POLICY_IMPLEMENTATION python
%define VERBOSE_SECURITY on
# %define DEFAULT_ZPUBLISHER_ENCODING
%define ZODB_CACHE_SIZE 30000

debug-mode $DEBUG_MODE
security-policy-implementation $SECURITY_POLICY_IMPLEMENTATION
verbose-security $VERBOSE_SECURITY
# default-zpublisher-encoding $(DEFAULT_ZPUBLISHER_ENCODING)

<environment>
    CHAMELEON_CACHE $INSTANCE/var/cache
</environment>

<zodb_db main>
    # Main database
    cache-size $ZODB_CACHE_SIZE
    # Blob-enabled FileStorage database
    <blobstorage>
      blob-dir $INSTANCE/var/blobstorage
      # FileStorage database
      <filestorage>
        path $INSTANCE/var/filestorage/Data.fs
      </filestorage>
    </blobstorage>
    mount-point /
</zodb_db>
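For completeness, a minimal PasteDeploy-style ini that points runwsgi at such a dev.conf might look like the sketch below. The file names and listening port are assumptions; your image's etc/zope.ini will differ in detail:

```ini
[app:zope]
use = egg:Zope#main
zope_conf = %(here)s/dev.conf

[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 8080

[pipeline:main]
pipeline =
    egg:Zope#httpexceptions
    zope
```

You would then start it with something like `bin/runwsgi -v dev.ini`.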

That's because inside the container you need to execute docker-entrypoint.sh.

That's why my trick was to run

./docker-entrypoint.sh bash

which sets the proper env variables and starts a subshell. Then, in that subshell, I can start/stop/debug etc. directly, and the lengthy pip install that happens when docker-entrypoint.sh first runs will not happen again.

All this can be found in the entrypoint file: https://github.com/plone/plone-backend/blob/6.0.x/skeleton/docker-entrypoint.sh

@mikemets keep in mind this Docker image is designed to make it easy for newbies to just add add-ons using an env var, without having to deal with custom Docker layers. But you really don't want to run pip on startup in a real prod setup if you can avoid it, so you should be building your own custom image that installs add-ons at build time, not in the entrypoint.

Thanks @tiberiuichim, however when I set the command and tty vars and run

docker compose up -d frontend

It says

service "backend" is not running

Hence I went back to Manually start plone 6 instance inside docker container - Stack Overflow, which starts the backend in one terminal; then in the second I can get to the bash prompt.
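An alternative that avoids the two-terminal dance might be `docker compose run`, which starts a one-off container for a service with whatever command you give it (the service name `backend` is assumed from the compose setup discussed above):

```shell
# Start a fresh backend container with a shell instead of the normal
# entrypoint command, publishing the ports declared in docker-compose.yml
docker compose run --rm --service-ports backend bash

# Inside that shell you can then run the entrypoint steps manually,
# e.g. ./docker-entrypoint.sh bash, or start the instance directly.
```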

Yes indeed. Thanks @djay

Containers offer some level of isolation, but they still share access to the host kernel's subsystems.

If not properly managed, this can lead to security vulnerabilities.