Recommended setup for multiple sites

Hello everyone!

Today I have a Plone 5.2 (Python 3.9) buildout setup with almost 50 Plone sites, each with its own mount point (plone.recipe.filestorage) for easy management. Also, each site uses one of three themes we developed.

We are considering strategies for a future migration to Plone 6 (without Volto initially), and tinkering with the Docker images for plone-backend and plone-zeo. Since those images are pip-based, we can't use plone.recipe.filestorage.

What do you recommend? A container for each site? Or just stick with buildout?

Any experiences with similar setups are appreciated.


Roughly, w/o more details given, I would say:

  • run one central PostgreSQL server with one database per site and use RelStorage. Also use a connection pooler like pgbouncer to deal with the connections (see the sketch after this list).

  • I would go with one container per site. I tend to run it on a small Kubernetes cluster (like K3s), but that is a matter of taste; depending on the size and redundancy needs of the sites, a single-node Docker Swarm setup can be enough.

  • Check out cookiecutter-zope-instance as a replacement for configuring Zope/Plone and its storage. I am looking forward to feedback on it, so it can be improved for your use case. I use it with a small tweak and a script on top of the official Docker images.
    A brief introduction on how to use it in Docker is here: Cookiecutter-zope-instance and enviroment variables - split sensitive data in an extra file - #2 by jensens
    But meanwhile the cookiecutter has more features AFAIK, and this might be achievable with simpler methods on the command line.
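
To make the first bullet concrete, here is a minimal sketch of opening one site's database through RelStorage, with the DSN pointed at a pgbouncer in front of PostgreSQL. The host, database name, and credentials are placeholders; in a real deployment the same DSN would go into the generated Zope instance configuration rather than into Python code.

```python
# Minimal sketch: open one site's ZODB via RelStorage over PostgreSQL,
# with the connection going through pgbouncer (placeholder host/credentials).
from relstorage.adapters.postgresql import PostgreSQLAdapter
from relstorage.options import Options
from relstorage.storage import RelStorage
from ZODB import DB

options = Options(keep_history=False)  # history-free storage, one database per site

adapter = PostgreSQLAdapter(
    dsn=(
        "dbname='plone_site1' user='plone' "
        "host='pgbouncer.internal' port='6432' password='verysecret'"
    ),
    options=options,
)

storage = RelStorage(adapter, options=options)
db = DB(storage)

with db.transaction() as conn:
    print(conn.root())  # prove we can reach the site's root object

db.close()
```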


Awesome! That looks like a nice setup, I will certainly check it out.

What is the recommended way in a scenario with Volto sites?
If we have, say, 50 backend containers, is it recommended to also have 50 frontend containers?

If the sites have really low traffic/request volume: is it possible/advisable to have one frontend container serving more than one backend container? Is this technically feasible at all? Does this make sense?

No, not right now. One frontend container for each site, due to technical limitations. There is an initiative to solve this issue, but it's stalled right now.

And what if we have different sites on the same instance?

I've been playing with this, and I was somehow able to access the different sites at localhost:3000/Plone1 and localhost:3000/Plone2. But when trying to log in, the URLs were confusing the server.

Am I correct assuming that this scenario too is not yet feasible?

The problem is not the Plone backend. It's like this:

  • your browser connects to the Volto nodejs server
  • which runs the Volto config registry as a singleton
  • so the server-side generation of the HTML, when handling multiple websites, is wrong, because the config is not supposed to be shared between websites.

Now, you may say "all my websites are configured the same", or you may have logic that can accommodate an identical config. But the problem is that the apiPath is held in the config. So the task would be to untangle the apiPath, always compute it in the helpers, and make sure the existing code is properly updated.
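
Volto's registry is JavaScript, of course, but the underlying issue is language-agnostic. Here is a toy Python sketch (not Volto code, just an analogy) of why a process-wide singleton config holding something like apiPath cannot correctly serve two sites from one server process:

```python
# Toy analogy (not Volto code): a module-level singleton config, shared by
# every request handled by one server process, much like Volto's registry.
config = {"apiPath": "https://site-a.example/++api++"}  # set once, process-wide

def server_side_render(page_title: str) -> str:
    # SSR reads the singleton; it has no way to know which site the request was for.
    return (
        f"<html><head><title>{page_title}</title></head>"
        f"<body data-api='{config['apiPath']}'></body></html>"
    )

# A request for a second site hitting the same process still gets site A's apiPath:
print(server_side_render("Welcome to Site B"))
# -> data-api='https://site-a.example/++api++', i.e. the wrong backend for site B
```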


@tiberiuichim Thank you very much for your explanation and the links to the code!

The overhead of running two instances/containers with one site each, versus one with multiple sites, is minimal. But the headache of multiple sites in one Zope really hurts.

You probably mean the nightmare with different versions of add-ons and their dependencies. I agree!

Also, ZODB caching, shared RAM usage, RAM caching (and some more) get really painful.

Warning: this is not a supported combination. I think quite a few fixes were needed in Zope to support Python 3.9, and those have only gone into Zope 5, which can only be used with Plone 6. Some fixes in Plone were likely needed as well, and those may also be available only in Plone 6. And you will need to upgrade the pyScss package to 1.4.0, otherwise Plone won't start up.
Also, on our central testing server, Jenkins, you can see which Python versions Plone 5.2 is tested with. Python 3.9 is not one of them.

So you should use Python 3.8. But... if your 50 Plone sites currently run without problems, then you can choose to ignore this advice. :slight_smile:
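
For what it's worth, a small sanity check along the lines of the advice above can be scripted; the thresholds below are simply the ones mentioned in this thread (Python 3.8 as the highest tested version for Plone 5.2, pyScss 1.4.0):

```python
# Hedged sanity check for the advice above: warn on Python >= 3.9 and on an
# outdated pyScss. Thresholds are taken from this thread, not from setup metadata.
import sys
import pkg_resources

if sys.version_info >= (3, 9):
    print("Warning: Plone 5.2 is not tested on Python %s.%s" % sys.version_info[:2])

pyscss_version = pkg_resources.get_distribution("pyScss").version
if pkg_resources.parse_version(pyscss_version) < pkg_resources.parse_version("1.4.0"):
    print("Warning: pyScss %s is too old; 1.4.0 or later is needed" % pyscss_version)
```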

I set up a PostgreSQL cluster too, with each site having its own DB. I am working on using pgbouncer for connection pooling, but I can't tell from the logs whether my pooler is working correctly. My clients are pointing at the DB server on port 5432... how did you manage to get the Zope clients to talk on, say, 6432? Would you be able to describe (briefly) how you did your setup? Once I finish my documentation, I can contribute it to the new docs if the use case warrants.
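
To be concrete about the "is the pooler actually being used" part, this is roughly the check I have in mind against pgbouncer's admin console (a rough sketch; host, user and password are placeholders, and the user has to be listed in pgbouncer's stats_users or admin_users):

```python
# Rough sketch: ask pgbouncer itself whether pools exist and have clients,
# via its virtual "pgbouncer" admin database (placeholder connection details).
import psycopg2

conn = psycopg2.connect(
    dbname="pgbouncer",          # pgbouncer's built-in admin/stats database
    host="db.example.internal",  # placeholder: the pgbouncer host
    port=6432,                   # pgbouncer's listen_port
    user="plone_stats",          # placeholder: must be in stats_users or admin_users
    password="verysecret",
)
conn.autocommit = True           # the admin console does not support transactions

with conn.cursor() as cur:
    cur.execute("SHOW POOLS;")
    for row in cur.fetchall():
        # One row per database/user pool; the client counts show whether the
        # Zope clients are really coming in through the pooler.
        print(row)

conn.close()
```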

FWIW, I'm keen on Docker as a dev tool, but I don't feel comfortable with it in production. Our team is a little pressed learning how to use the new tooling in the Plone 6 universe; for now I'd rather stick with supervisor or pm2 running WSGI than worry about the surface area Docker presents.

so any ideas, thoughts are appreciated.

cheers!

The DSN host part allows you to pass a port, divided from the domain part by a colon, like

dbname='myplone' user='plone' host='some.host.tld:6423' password='verysecret'


wow, that was easy :blush:

The RelStorage docs are a little out of date and didn't mention the how-to part, just what not to use, lol.

By any chance, have you used pg_auto_failover in your travels? My PostgreSQL cluster lives in on-premise VMware, so I am planning on straddling both datacenters within the building, using pgbouncer to make sure connections play nicely with the server, and the aforementioned failover plugin to keep things nice if something were to go south (sorta my own flavor of ZRS?).

Thanks again for the assistance!

I never used this one.

thanks again for your help.

I found that

> dbname='myplone' user='plone' host='some.host.tld:6423' password='verysecret'

had to be adjusted to

dbname='myplone' user='plone' host='some.host.tld' port='6432' password='verysecret'

works a treat now.
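
For anyone else hitting this, the difference is visible just from parsing the two DSN forms (a quick sketch using psycopg2; nothing is connected to):

```python
# Parse-only comparison of the two DSN forms; no connection is made.
from psycopg2.extensions import parse_dsn

colon_form = "dbname='myplone' user='plone' host='some.host.tld:6432' password='verysecret'"
port_form = "dbname='myplone' user='plone' host='some.host.tld' port='6432' password='verysecret'"

print(parse_dsn(colon_form))
# host comes back as the literal 'some.host.tld:6432', which is not a resolvable host name
print(parse_dsn(port_form))
# host 'some.host.tld' plus a separate port '6432' -- the form libpq expects here
```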

going to duplicate my cluster and try integrating pg_auto_failover on my test instance and then try for production. If this is successful, I'll be sure to share.

Our migration to P6/Volto is underway!

cheers! :beers:
