[Solved] Debugging the Plone backend of a Cookieplone-based Docker Swarm stack – how to grep for errors in the log!

TL;DR:

Even though stdout and stderr appear combined in the live terminal output of docker service logs --follow backend, piping behaves differently!

When piping the output of docker service logs, grep only sees stdout out of the box.

This works to filter errors out of stderr as well:

docker service logs backend 2>&1 | grep -i "error"

This is not mentioned in the official Docker docs!


Original issue

When debugging the backend of a Cookieplone-based Docker Swarm stack, you can use the make commands to watch the backend logs.

These commands only offer output from instance-access.log, but not from instance.log.

How do I get a grip on the instance.log in the containers? I cannot find any definitions for it in the docker-compose.yaml or in the generated stacks in the devops folder. Any ideas?

Purpose: I have 6000 lines of log errors for one event on my localhost backend running in the foreground, caused by URL retries, and I want to check whether they also occur on staging.

Known commands

make stack-logs-backend

resulting in a command equivalent to executing as root on the server:

docker service logs [stackname]_backend

To keep the tail following, you can use:

docker service logs --follow [stackname]_backend

or grep for errors:

docker service logs [stackname]_backend | grep -i error
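A couple of variations of the same idea; a small sketch, assuming GNU grep (the solution further down explains why plain piping like this can still miss errors sent to stderr):

# limit to recent history and show 3 lines of context around each match
docker service logs --since 1h [stackname]_backend | grep -i -C 3 error

# while following, force grep to flush each match immediately
docker service logs --follow [stackname]_backend | grep -i --line-buffered error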

We had a similar problem once. I'm not sure if this will solve your problem, but you can try disabling output buffering for Python in the Dockerfile.

FROM plone/plone-backend:6.1.3

[..]

# Turns off output buffering (e.g. to show print() statements in docker logs)
ENV PYTHONUNBUFFERED=1
[..]

RUN  ..

I will give this a try. I am not sure where exactly to enter that in the stack config YAML of the Cookieplone-based swarm, and how to make sure it does not break the deployment ;-).

docker-compose.yml # this is cookieplone generated

devops/stacks/my-fqdndomain.com.yml # this is created by the make commands in the deploy folder and finally deployed

There is an environment section under the backend service in both files:

…
  backend:
    …
    environment:
      RELSTORAGE_DSN:
      PYTHONUNBUFFERED: "1"   # could go here?
…
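Once the value is in the deployed stack file, you can check it inside the running container; a minimal sketch, assuming the stack name my-fqdn-com from below, run on the node that hosts the backend task:

# find the backend container of the stack on this node
CID=$(docker ps --filter "name=my-fqdn-com_backend" --format '{{.ID}}' | head -n1)

# verify the variable actually reached the container environment
docker exec "$CID" env | grep PYTHONUNBUFFERED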

There is a devops/.env provided to the GitHub Actions via environment variables:

ANSIBLE_REMOTE_PORT=22
DEPLOY_ENV=prod
DEPLOY_HOST=my.fqdn.com
DEPLOY_PORT=22
DEPLOY_USER=plone
DOCKER_CONFIG=.docker
STACK_NAME=my-fqdn-com
PYTHONUNBUFFERED=1   # could go here?
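A caveat here (my assumption, worth verifying): this .env feeds the deploy tooling on the GitHub Actions runner, not the container itself, so a value placed here only reaches Zope if the stack file forwards it, e.g. via variable interpolation:

  backend:
    environment:
      # hypothetical forwarding; requires the deploy step to perform
      # variable substitution on the stack file before docker stack deploy
      PYTHONUNBUFFERED: ${PYTHONUNBUFFERED:-1}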

We have a staging and a production environment.

Staging Actions and environment were generated manually from the original prod setup.

I need to dig deeper to check exactly where to enter the ENV value you suggest, and then deploy.

This surprises me. The docker container logs include everything that the container’s main process outputs to stdout or stderr. The plone-backend image has a WSGI server configuration which outputs the Zope access log to stdout and the Zope event log to stderr (plone-backend/skeleton/etc/zope.ini at 6.1.x · plone/plone-backend · GitHub). So I would not expect there to be separate instance-access.log and instance.log files when you are running Zope in this container.
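That split is easy to verify from the shell, since docker service logs reproduces the container's stdout on the client's stdout and its stderr on the client's stderr; a small sketch, using the service name from above:

# access log only: container stdout reaches the pipe, stderr is discarded
docker service logs [stackname]_backend 2>/dev/null | head

# event log only: swap the streams so that only container stderr reaches the pipe
docker service logs [stackname]_backend 2>&1 1>/dev/null | head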


Thank you for your note. I missed a separate instance.log file when poking around inside the container. There are empty log files, but only for a non-existing filestorage.

I could not find any lines obviously related to instance.log when grepping the docker service logs output. I will look deeper. It is hard to understand without a more serious Docker background.

I will try again to look and grep for lines obviously belonging to the output usually seen on the terminal when running the Plone backend in the foreground.

For example: where do I find the actual access log for archiving / log rotation? And so on.

In my case I am looking for an error I have on my local instance and hope to never find on the Docker instance. But if you do not know how to look and find nothing, that does not mean there is nothing.

When using the official Docker container, it outputs both the access and the instance logs at the same time.

That's why we use a custom zope.ini file to suppress the access log:
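A rough sketch of the idea (section names follow the plone-backend skeleton; check them against your image version): the stock zope.ini wires a Paste translogger into the WSGI pipeline to produce the access log, so a pipeline without it stays silent on stdout:

[app:zope]
use = egg:Zope#main
zope_conf = %(here)s/zope.conf

[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 8080

# no [filter:translogger] section and no translogger entry here,
# so no access log is written to stdout
[pipeline:main]
pipeline =
    zope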

There is no access or instance log file as such; it is something that belongs to the 'outside world' of the container. You should handle it in some other way, such as sending the logs to a Prometheus server, handling them using Docker Swarm options, …
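For the archiving / log rotation question above, one of those Swarm-level options is a per-service logging configuration in the stack file; a minimal sketch, assuming the default json-file driver (the values are illustrative):

  backend:
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate the log file after 10 MB
        max-file: "5"     # keep at most 5 rotated files per container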


I was still struggling to find errors in the logs. Grepping via docker service logs … | grep -i error just refused to filter out errors I knew were there.

Cause: stderr was missing from the output piped to grep.

Solution

How to watch the errors in the docker logs came from this Stack Overflow answer:

answered Jul 5, 2016 at 15:16 by @Edoardo
(edited to match my Plone backend challenge)

this can happen if the container is logging to stderr; piping works only for stdout, so try:

docker service logs backend 2>&1 | grep -i "error"
  • "2>&1" will tell the shell to redirect stderr to stdout
  • "|" to give the "unified" output to grep using pipe

Another part for my Cookieplone DevOps tips, tricks & troubleshooting essay.

@mliebischer
When you are not experienced with other Plone Docker setups, your hint is a bit cryptic and ambiguous.

The backend Dockerfile in current Cookieplone full setups is located at ./backend/Dockerfile

uvx cookieplone

all defaults confirmed -> Result: Cookieplone (0.9.10) and cookieplone-templates (697694b)

The Dockerfile looks like this:

ARG PLONE_VERSION=6.1.3
FROM plone/server-builder:${PLONE_VERSION} AS builder

WORKDIR /app


# Add local code
COPY scripts/ scripts/
COPY . src

# Install local requirements and pre-compile mo files
RUN <<EOT
    set -e
    bin/pip install mxdev uv
    sed -i 's/-e .\[test\]/./g' src/mx.ini
    cd /app/src
    # remove potentially existing virtualenv from local build
    rm -rf .venv
    ../bin/mxdev -c mx.ini
    ../bin/uv pip install -r requirements-mxdev.txt
    ../bin/python /compile_mo.py
EOT

FROM plone/server-prod-config:${PLONE_VERSION}

LABEL maintainer="me <me@me.com>" \
      org.label-schema.name="project-title-backend" \
      org.label-schema.description="Project Title backend image." \
      org.label-schema.vendor="my org"

# Copy /app from builder
COPY --from=builder /app /app

RUN <<EOT
    set -e
    ln -s /data /app/var
EOT

Although I found a better solution, I am still curious where to add an ENV key/value pair in this Dockerfile (there are two RUN statements).
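For what it is worth, my understanding (worth verifying against the Docker docs): an ENV only persists in the stage it is declared in, and values set in the builder stage are not carried over into the final image, so for the runtime it belongs in the second stage, independent of the RUN statements; a minimal sketch based on the file above:

FROM plone/server-prod-config:${PLONE_VERSION}

# ENV must live in the final stage; the builder stage above is discarded
ENV PYTHONUNBUFFERED=1

COPY --from=builder /app /app

RUN <<EOT
    set -e
    ln -s /data /app/var
EOT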