I want to access the backend server/ZMI inside a Docker container of a running Plone 6 Volto deployment from a local machine via a tunnel. Port 8080 does not offer the backend directly on the host server running the Docker stack. I can access the host via ssh and know how to set up a tunnel to services running plainly on the host. You can start a container with the option `-p 8080:8080` to forward the internal port to the host, but I do not want to disturb the existing setup. Is there a way to make this happen in a running system? What I mean is:
Opening the container for access from the host on the same or a translated port.
As far as I can follow the available hints, you need to activate port publishing to the host when the container is started; I have not found any instructions for enabling it later on an already running container. Having ssh command-line access into the container may not give me the ability to set up a tunnel from there, and allowing that kind of trick can be a serious risk anyway. Even if the port is exposed to the host, networking could (and should?) be limited to localhost by design. If there is a way to allow this during deployment, an example for the docker-compose.yml would be appreciated.
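To make it concrete, something like the following sketch is what I imagine for deployment time (service name, image, and port are placeholders, not taken from an actual setup). Binding the published port to 127.0.0.1 would keep it limited to localhost on the host, as mentioned above:

```yaml
# docker-compose.yml (sketch) -- publish the backend port on the host,
# but only on the loopback interface, so it stays invisible from outside
services:
  backend:
    image: plone/plone-backend:6.0   # assumed image, adjust to your stack
    ports:
      - "127.0.0.1:8080:8080"        # host 127.0.0.1:8080 -> container port 8080
```

From the local machine the ZMI would then be reachable through a plain ssh tunnel to the host, e.g. `ssh -L 8080:127.0.0.1:8080 plone@my-plone-host` (user and hostname are placeholders), followed by opening http://localhost:8080/manage locally.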
It expects the zope.conf file to be in /usr/local/Plone/venv/etc
Entering the Docker container
You can enter a shell in the running Docker container like this (the full command sequence is sketched below):
log in to the host machine as the plone user via ssh
command to list containers: `docker ps`
note down the container ID from the first column for ..._backend1
enter a bash shell inside the container: `docker exec -it [container ID] bash`
the prompt is now: `root@[container ID]:/app`
you can now inspect `/app/etc/zope.conf`
the addzopeuser command is at `/app/bin/addzopeuser`
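Put together, the whole sequence looks roughly like this (hostname and container ID are placeholders):

```shell
# log in to the host machine as the plone user (hostname is a placeholder)
ssh plone@my-plone-host

# list the running containers and note the ID of the ..._backend1 container
docker ps

# open a bash shell inside that container (the ID is a placeholder)
docker exec -it <container-id> bash

# inside the container, the prompt becomes root@<container-id>:/app
cat /app/etc/zope.conf        # the Zope configuration
ls -l /app/bin/addzopeuser    # the addzopeuser script
```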
Looking ahead
Because it is not adjusted to the context of this zope.conf, the addzopeuser command fails. No idea yet whether I will get it to work, but I just wanted to share the approach of entering the container for now.
For now, this does not give me a straightforward way to open a tunnel to the ZMI root.
By the way: does anybody know where the /ClassicUI URL access from a Volto site is explicitly mentioned in the official Plone docs, trainings, or the docs of a package? Or is it officially buried by the holy inquisition of the Volto grail?
The VHM URL rewrite for the ClassicUI magic is configured in the docker-compose.yml of a Cookieplone project with Traefik and Varnish, around line 99, below the comment:
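For orientation, such a rule is typically expressed as Traefik labels on the backend service, roughly like the sketch below. This is written from memory, not copied from the generated file: the router/middleware names, the domain, and the exact regex are assumptions, so compare with your own docker-compose.yml.

```yaml
# sketch: VirtualHostMonster rewrite for /ClassicUI via Traefik labels (names are illustrative)
services:
  backend:
    labels:
      - traefik.http.routers.rt-backend-classicui.rule=Host(`example.com`) && PathPrefix(`/ClassicUI`)
      - traefik.http.routers.rt-backend-classicui.middlewares=mw-backend-vhm-classicui
      - traefik.http.middlewares.mw-backend-vhm-classicui.replacepathregex.regex=^/ClassicUI(.*)
      # $$ escapes compose variable interpolation; Traefik receives $1
      - traefik.http.middlewares.mw-backend-vhm-classicui.replacepathregex.replacement=/VirtualHostBase/https/example.com/Plone/VirtualHostRoot/_vh_ClassicUI$$1
```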
I have never heard of it. It might be an undocumented and unsupported feature, so I would not rely on it.
I always look at the console output when starting the backend to get the URL. This is included in the documentation, although it does not always say so explicitly.
2024-09-25 16:47:17,772 INFO [waitress:486][MainThread] Serving on http://[::1]:8080
2024-09-25 16:47:17,772 INFO [waitress:486][MainThread] Serving on http://127.0.0.1:8080
and
Proxying API requests from http://localhost:3000/++api++ to http://localhost:8080/Plone
Volto started at 0.0.0.0:3000
followed by
Note that the Plone frontend uses an internal proxy server to connect with the Plone backend. Open a browser at the following URL to visit your Plone site.
@stevepiercy my challenge is not a local development machine. It's the deployed production machine, where I want access to the default ZMI user setup just in case. That is not reachable from outside the Docker containers and the firewall setup.
I was thinking of the same approach we used in the past after closing ZMI access without a tunnel: open some kind of tunnel into the Docker swarm (see the sketch below).
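Roughly, something like this might already work without changing the running stack at all, assuming the backend container sits on a regular bridge network (container name, IP, user, and hostname below are placeholders): the container's internal IP is reachable from the host itself, so the ssh tunnel can target it directly.

```shell
# on the host: look up the backend container's internal IP (container name is a placeholder)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <backend-container>

# on the local machine: forward local port 8080 to that IP via the host
# (user, hostname, and 172.18.0.5 are placeholders)
ssh -L 8080:172.18.0.5:8080 plone@my-plone-host

# then open http://localhost:8080/manage in a local browser
```

For a Swarm overlay network this direct route may not exist, since those addresses are not routed from the host; there, a short-lived helper container attached to the same network and forwarding the port would be the analogous trick.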
I haven't gone back to my 2022 attempts to deploy Plone6 with Docker yet. Last year, I added some comments to my notes from back then. FWIW...
While installing, we made the mistake of creating the virtual environment in /usr/local/Plone/venv and putting our own custom packages in its ./src subfolder.
root@plone6:~# cd /usr/local/Plone
root@plone6:/usr/local/Plone# source ./venv/bin/activate
(venv) root@plone6:/usr/local/Plone# runwsgi ./instance/etc/zope.ini
Whoops! The normal setup would be to install the virtual environment in the Plone root, with a ./src subfolder for its packages.
That explains why we had problems with incorrect file locations.
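To illustrate the two layouts (paths taken from the notes above; the "intended" side is just my reading of them):

```shell
# accidental layout (what we did)
/usr/local/Plone/venv/              # the virtual environment
/usr/local/Plone/venv/src/          # our custom packages

# intended layout: virtual environment created directly in the Plone root
/usr/local/Plone/                   # project root == virtual environment
/usr/local/Plone/src/               # custom packages
/usr/local/Plone/instance/etc/zope.ini
```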
Also, maybe useful:
Note: in order to change the port (from 8080 to 8686) we need to edit both the instance.yaml and the zope.ini file, since the ini is based on the yaml and created by cookiecutter during the initial install.
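Roughly what those two places look like, sketched from memory of a cookiecutter-zope-instance setup (the wsgi_listen option name and the exact sections are assumptions, so verify against the generated files):

```yaml
# instance.yaml (sketch) -- the input for cookiecutter-zope-instance
default_context:
  wsgi_listen: localhost:8686   # was localhost:8080
```

```ini
# instance/etc/zope.ini (sketch) -- the generated WSGI config has to match
[server:main]
use = egg:waitress#main
listen = localhost:8686
```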
I also came across a note about the ZMI password having been set in the yaml configuration file: