How to access the ZMI inside a Docker-based Volto deployment

I want to access the backend server/ZMI inside a Docker container of a running Plone 6 Volto deployment from my local machine via a tunnel. Port 8080 does not expose the backend directly on the host server running the Docker stack. I can access the host via SSH and know how to set up a tunnel to services running directly on the host. You could start a container with the option `-p 8080:8080` to forward the internal port to the host, but I do not want to disrupt the running setup. Is there a way to make this happen in a running system? What I mean is (a rough sketch of what I have in mind follows the list):

  • Opening the container for access from the host on the same or a translated port.
  • Using a standard tunnel to access that port locally.
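To illustrate, here is a rough sketch of the kind of tunnel I have in mind, assuming the backend container sits on a plain bridge network whose internal IP is reachable from the host (this does not hold for overlay networks in swarm mode); the container name, IP and plone@my-host are placeholders:

# on the host: find the container's internal IP (bridge networks only)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <backend-container>

# on the local machine: forward a local port to that IP through the host
ssh -L 8080:<container-ip>:8080 plone@my-host
# then open http://localhost:8080/ locally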

I am already reading the Docker documentation on docker network connect, but that mostly covers connections inside the Docker stack.

As far as I can follow the available hints, you need to enable port publishing to the host when the container is started. I have not found any instructions for adding that later to an already running container. Having SSH command-line access to the container may not give you the ability to open a tunnel from there, and allowing this kind of trick can be a serious security risk. Even if the port is published to the host, networking could (and should?) be limited to localhost by design. If there is a way to allow this during deployment, an example for the docker-compose.yml would be appreciated; see the sketch right below for what I imagine.
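Something along these lines is what I would hope for, a minimal sketch assuming the backend service in docker-compose.yml is simply called backend; binding the published port to 127.0.0.1 keeps it off the public interfaces, so it is only reachable through an SSH tunnel to the host (swarm mode ignores such an IP binding for published ports):

services:
  backend:
    # ... existing image, environment and network settings ...
    ports:
      # publish Zope only on the host's loopback interface
      - "127.0.0.1:8080:8080"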

After a hint by @yurj on Discord to use something like classic ui/manage, I found this post related to a Cookieplone-based setup:

Adding https://my-public-domain.com/ClassicUI and then, after HTTP auth, opening https://my-public-domain.com/ClassicUI/manage_main gave access to the ZMI, but only at the Plone site level, not at the ZMI root.

One of the targets of the mission was to change the default admin:admin credentials at the Zope level of the backend from the ZMI.

addzopeuser command

In the past you could easily use the addzopeuser command on the command line.

mtrebronNorbert shared in Dec 2022:

I use this when I need to create a new instance with a custom admin password:

/usr/local/Plone/venv/bin/addzopeuser admin secret

It expects the zope.conf file to be in /usr/local/Plone/venv/etc

Entering the Docker container

You can get a shell inside the running Docker container like this (a condensed session sketch follows the list):

  1. log in to the host machine as the plone user via SSH
  2. list the running containers: `docker ps`
  3. note down the container ID from the first column of the ..._backend1 entry
  4. open a bash shell inside the container: `docker exec -it [container ID] bash`
  5. the prompt is now: root@[container ID]:/app
  6. you can now inspect /app/etc/zope.conf
  7. the addzopeuser command is at `/app/bin/addzopeuser`
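Condensed into one session it looks roughly like this (a sketch: hostname, container ID and password are placeholders, and whether the last command succeeds depends on addzopeuser finding the right zope.conf, see below):

ssh plone@my-host                            # 1. log in to the host as the plone user
docker ps                                    # 2. note the container ID of ..._backend1
docker exec -it [container ID] bash          # 4. shell into the backend container

# inside the container:
cat /app/etc/zope.conf                       # 6. inspect the Zope configuration
/app/bin/addzopeuser admin <new password>    # 7. try to set the Zope root user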

Looking ahead

Because addzopeuser is not adjusted to the context of the zope.conf in this container layout, the command fails. No idea yet whether I will get it to work, but I just wanted to share the approach of entering the container for now.
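A possible workaround I have not verified yet: Zope also ships the zconsole script, which takes the path to zope.conf as an explicit argument, so the Zope root password could be changed in an interactive session. A sketch, assuming the script sits next to addzopeuser at /app/bin/zconsole and the backend uses a separate database container (with a direct file storage the running instance would hold the ZODB lock):

# inside the backend container: open a debug prompt against the configured database
/app/bin/zconsole debug /app/etc/zope.conf

# at the Python prompt, change the password of the existing Zope root user:
#   app.acl_users._doChangeUser('admin', '<new password>', ['Manager'], [])
#   import transaction; transaction.commit()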

For now this does not provide a straightforward way to open a tunnel to the ZMI root.

By the way: does anybody know the places where the /ClassicUI URL access from a Volto site is explicitly mentioned in the official Plone docs, the trainings, or the docs of a package? Or is it officially buried by the holy inquisition of the Volto grail?

I did not find a resource at first sight.

The VHM URL rewrite for the ClassicUI magic is configured in the docker-compose.yml of a Cookieplone project with Traefik and Varnish, around line 99, below the comment:

## VHM rewrite /ClassicUI/

      - "traefik.http.middlewares.mw-backend-vhm-classic.replacepathregex.regex=^/ClassicUI($$|/.*)"
      - "traefik.http.middlewares.mw-backend-vhm-classic.replacepathregex.replacement=/VirtualHostBase/http/[Voltoproject Slug].localhost/Plone/VirtualHostRoot/_vh_ClassicUI$$1"

I never heard of it. It might be an undocumented and unsupported feature, so I would not rely on it.

I always look at the console output when starting the backend to get the URL. This is shown in the documentation, although it does not always say so explicitly.

2024-09-25 16:47:17,772 INFO    [waitress:486][MainThread] Serving on http://[::1]:8080
2024-09-25 16:47:17,772 INFO    [waitress:486][MainThread] Serving on http://127.0.0.1:8080

and

Proxying API requests from http://localhost:3000/++api++ to http://localhost:8080/Plone
🎭 Volto started at 0.0.0.0:3000 πŸš€

followed by

Note that the Plone frontend uses an internal proxy server to connect with the Plone backend. Open a browser at the following URL to visit your Plone site.

http://localhost:3000

@stevepiercy my challenge is not a local development machine. It's the deployed production machine, and accessing the default ZMI user setup just in case. That is not reachable from outside the Docker containers and the firewall setup.

I was thinking of the same approach we used in the past after closing off ZMI access without a tunnel: open a kind of tunnel into the Docker swarm.
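For a swarm stack there also seems to be a way to publish a port on an already running service without redeploying, although the port then opens on the routing mesh on all interfaces, so a firewall rule is still required. A sketch, with the service name assumed to be <stack>_backend:

# publish the backend's internal port 8080 on the swarm routing mesh
docker service update --publish-add published=8080,target=8080 <stack>_backend

# remove the published port again afterwards with --publish-rm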

Hi Armin,

I haven't gone back to my 2022 attempts to deploy Plone 6 with Docker yet. Last year, I added some comments to my notes from back then. FWIW...

While installing, we made the mistake of creating the virtual environment in /usr/local/Plone/venv and putting our own custom packages in its ./src subfolder.

root@plone6:~# cd /usr/local/Plone
root@plone6:/usr/local/Plone# source ./venv/bin/activate
(venv) root@plone6:/usr/local/Plone# runwsgi ./instance/etc/zope.ini

Whoops! The normal setup would be to install the virtual environment in the Plone root, with a ./src subfolder for the packages next to it.

That explains why we had problems with incorrect file locations.

Also, maybe useful:

Note: in order to change the port (from 8080 to 8686), we need to edit both instance.yaml and zope.ini, since the ini file is generated from the yaml by cookiecutter during the initial install.
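Concretely, the two places look roughly like this; a sketch, assuming the generated zope.ini contains the usual waitress server section (section names and paths may differ in your files):

# instance.yaml
default_context:
    wsgi_listen: 10.0.10.32:8686

# instance/etc/zope.ini
[server:main]
use = egg:waitress#main
listen = 10.0.10.32:8686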

I also came across a note about the ZMI password having been set in the yaml configuration file:

Our yaml configuration file for testing is:

default_context:
    initial_user_name: 'admin'
    initial_user_password: 'secret'

    load_zcml:
        package_includes: ['collective.easyform']

    db_storage: direct

    wsgi_listen: 10.0.10.32:8080

Hey @davisagli, I will get in touch with you about this asap. Until then I am removing your post for a reason; I will include its content back in my message then!