Roughly, without more details given, I would say:
- Run one central PostgreSQL server with a separate database per site and use RelStorage. Also put a connection pooler like PgBouncer in front to keep the number of connections under control.
- I would go with one container per site. I tend to run this on a small Kubernetes cluster (like K3s), but that is a matter of taste; depending on the size and redundancy needs of the sites, a single-node Docker Swarm setup can be enough.
- Check out cookiecutter-zope-instance as a replacement for the classic way of configuring Zope/Plone and its storage. I am looking forward to getting feedback on it, to improve it for your use case. I use it with a small tweak and a script on top of the official Docker images.
  A brief introduction on how to use it in Docker is here: Cookiecutter-zope-instance and environment variables - split sensitive data in an extra file - #2 by jensens
  Meanwhile cookiecutter-zope-instance has gained more features AFAIK, and this might now be achievable with simpler methods on the command line.
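To make the first point concrete, here is a minimal sketch of the two pieces involved. The database names (`site1`, `site2`), hosts, and credentials are placeholders, not values from this thread. PgBouncer pools the client connections; each Zope instance then points its RelStorage DSN at PgBouncer instead of at PostgreSQL directly:

```
; pgbouncer.ini -- one logical database per site, all on one PostgreSQL server
[databases]
site1 = host=postgres port=5432 dbname=site1
site2 = host=postgres port=5432 dbname=site2

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; RelStorage keeps per-connection state, so stay with session pooling
pool_mode = session
```

A matching zope.conf fragment for one site would then look roughly like:

```
%import relstorage
<zodb_db main>
  mount-point /
  <relstorage>
    <postgresql>
      dsn dbname='site1' host='pgbouncer' port='6432' user='zope' password='secret'
    </postgresql>
  </relstorage>
</zodb_db>
```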
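For the one-container-per-site idea, a single-node sketch with Docker Compose could look like the following. The service names and ports are placeholders; the `RELSTORAGE_DSN` environment variable is from the official plone/plone-backend image, but verify the exact variable names against the image documentation for the version you use:

```yaml
version: "3.8"
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret   # placeholder
    volumes:
      - pgdata:/var/lib/postgresql/data

  site1:
    image: plone/plone-backend:6.0
    environment:
      # point RelStorage at the database (or a pooler); DSN values are placeholders
      RELSTORAGE_DSN: "host='postgres' dbname='site1' user='zope' password='secret'"
    ports:
      - "8081:8080"

  site2:
    image: plone/plone-backend:6.0
    environment:
      RELSTORAGE_DSN: "host='postgres' dbname='site2' user='zope' password='secret'"
    ports:
      - "8082:8080"

volumes:
  pgdata:
```

On a Swarm or K3s cluster the same picture applies, just with one deployment/stack per site instead of one Compose service.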
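As a rough sketch of the cookiecutter-zope-instance workflow: you keep the answers in a YAML file and regenerate the instance configuration from it. The invocation below follows the project's README; the options inside `instance.yaml` are illustrative only, so check the README for the current option names:

```
# generate an instance configuration non-interactively from instance.yaml
pip install cookiecutter
cookiecutter -f --no-input --config-file instance.yaml gh:plone/cookiecutter-zope-instance
```

In a Docker setup, a small entrypoint script can run this at container start so the generated configuration always reflects the mounted `instance.yaml`.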