I have this mostly working in a dev environment and wanted to share my experience. This is how I set up the Python environment and the configs needed to run ZEO/WSGI and all of the database management tools (everything that was in buildout in Plone 5). Some of our systems use ZODB mount points with a single site per database, so those are ~10-12 Plone Sites running on the same ZEO/Zope.
- Install Plone and ZEO in a Python venv. For this dev environment I also used mxdev to install packages in edit mode. We have a lot of in-house packages with small, specific scopes, and we generally want to be able to edit several of them at the same time; mxdev is designed for exactly that. A sketch of the workflow is below.
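A minimal sketch of the install (the file names are mxdev's defaults; mx.ini lists our in-house packages and isn't reproduced here):

```sh
python3 -m venv venv
./venv/bin/pip install mxdev
# requirements.txt / constraints.txt pin Plone; mx.ini lists the in-house
# packages to check out as editable sources
./venv/bin/mxdev -c mx.ini
./venv/bin/pip install -r requirements-mxdev.txt
```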
- Instead of using the mkwsgiinstance and mkzeoinstance scripts, I used cookiecutter templates to build the config files needed for the WSGI clients and the ZEO server. We will probably end up with a lot of very similar YAML files that differ only by port number; to be fair, that was easier to manage in buildout because buildout configs could inherit from each other.
This fork of cookiecutter-zope-instance (github.com/ewohnlich/cookiecutter-zope-instance) adds some default env variables to work around a bug with defaults of dict data types, and adds "mounted_dbs" as an option; I'll make separate MRs for each. ZODB mount point management was previously handled in buildout with collective.recipe.filestorage. There's no cookiecutter template for that recipe, and I don't know if it even makes sense to have two separate templates that both need to modify zope.conf, so I included it here. I also created a minimal ZEO template (github.com/ewohnlich/cookiecutter-zeo-server). It does not try to do everything that plone.recipe.zeoserver did; it is just enough for my own needs. Creating a custom cookiecutter template was easy to do and easy to use.
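For context, a ZODB mount point ends up as an extra zodb_db section in zope.conf (and, with ZEO, a matching storage in zeo.conf). A rough sketch of that kind of config, with illustrative names and paths rather than anything the template emits verbatim:

```
# zope.conf (client side): mount the db01 storage at /db01
<zodb_db db01>
  mount-point /db01
  <zeoclient>
    server localhost:8100
    storage db01
    blob-dir /path/to/blobs-db01
    shared-blob-dir on
  </zeoclient>
</zodb_db>

# zeo.conf (server side): the matching storage
<filestorage db01>
  path /path/to/filestorage/db01/Data.fs
</filestorage>
```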
I had the ZEO template read the same YAML file. This is my instance1.yaml, which builds alpha_zeo/etc and alpha_client1/etc and configures the var areas:
```yaml
default_context:
  target: 'alpha_client1'
  zeo_target: 'alpha_zeo'
  wsgi_listen: $SERVER:8083
  environment:
    "CHAMELEON_CACHE": "{{ cookiecutter.location_clienthome }}/cache"
    "ENTREZ_EMAIL": "wohnlice@imsweb.com"
    "ENTREZ_API_KEY": [REDACTED]
    "OPENSEARCH_HOSTS": [REDACTED]
    "OPENSEARCH_HTTP_USERNAME": [REDACTED]
    "OPENSEARCH_USE_SSL": true
  initial_user_name: 'admin'
  initial_user_password: 'admin'
  db_storage: zeo
  db_blobs_mode: shared
  db_blobs_location: $FILEBASE/alpha/blobs
  db_zeo_server: $SERVER:8100
  db_filestorage_location: $FILEBASE/alpha/filestorage
  mounted_dbs: db01,db02
  debug_mode: false
```
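Generating the configs is then just two cookiecutter runs against that file (a sketch; cookiecutter's --no-input and --config-file options are standard, while the exact output directories depend on the templates):

```sh
cookiecutter --no-input --config-file instance1.yaml gh:ewohnlich/cookiecutter-zope-instance
cookiecutter --no-input --config-file instance1.yaml gh:ewohnlich/cookiecutter-zeo-server
```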
- I can now run the ZEO server with "runzeo -C alpha_zeo/etc/zeo.conf" and the WSGI clients with "runwsgi alpha_client1/etc/zope.ini". TODO: daemonize with supervisord. That is what I used before, and in fact I used buildout to create the supervisor.conf. I plan to have one supervisor for the whole server (managing multiple ZEO servers and WSGI clients). If possible, it would be nice if cookiecutter could be set up to read from multiple YAML files; otherwise I'll just create the conf manually, along the lines of the sketch below.
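A hand-written supervisord.conf would be something like this (a sketch only; the venv path, log location, and program names are placeholders matching the example layout above):

```ini
[supervisord]
logfile=/var/log/plone6/supervisord.log

[program:alpha_zeo]
command=/path/to/venv/bin/runzeo -C /path/to/alpha_zeo/etc/zeo.conf
autostart=true
autorestart=true
priority=1

[program:alpha_client1]
command=/path/to/venv/bin/runwsgi /path/to/alpha_client1/etc/zope.ini
autostart=true
autorestart=true
priority=2
```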
- For backup and recovery I created bash scripts per site/db using repozo and rsync. This example is for a dev site where I don't actually take backups, but we do want a "recovery" script that pulls from production.
```sh
#!/bin/sh
# Recover the main database from the production repozo backup
FILESTORAGE_DIR=/plndata/plone6/alpha/filestorage
BACKUP_FILESTORAGE_DIR=/sprj/btp_zope_plone5_backup/plone5_main_alpha/backups/
repozo --recover -o $FILESTORAGE_DIR/Data.fs -r $BACKUP_FILESTORAGE_DIR

# Recover the db01 mount point (filestorage plus blobs)
FILESTORAGE_DIR=/alpha/filestorage/db01/
BACKUP_FILESTORAGE_DIR=/prod_backup/plone5_main_alpha/backups_db01/
BLOBSTORAGE_DIR=/alpha/blobs-db01/
BACKUP_BLOBSTORAGE_DIR=/prod_backup/plone5_main_alpha/blobstoragebackups_db01/blobstorage-db01.0/blobstorage-db01/
repozo --recover -o $FILESTORAGE_DIR/Data.fs -r $BACKUP_FILESTORAGE_DIR
rsync -a $BACKUP_BLOBSTORAGE_DIR $BLOBSTORAGE_DIR
```
I just made this manually, but it seems like something that could easily be created with cookiecutter, maybe even as part of cookiecutter-zeo-server (can it create a dynamic number of script files?). Also TODO: parameterize a timestamp. Newer Zope allows blob backups to be saved with a timestamp, so it should be trivial to pass that timestamp to repozo and to refer to e.g. blobstorage-db01.2023-06-10/blobstorage-db01 as the source location for blobs; a rough sketch of what I mean follows.
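Roughly this, assuming the variables from the recovery script above and repozo's -D/--date option (the directory naming follows my own backup layout and may differ elsewhere):

```sh
# e.g. ./recover.sh 2023-06-10
TS=${1:?usage: recover.sh YYYY-MM-DD}
BACKUP_BLOBSTORAGE_DIR=/prod_backup/plone5_main_alpha/blobstoragebackups_db01/blobstorage-db01.$TS/blobstorage-db01/
repozo --recover -D $TS -o $FILESTORAGE_DIR/Data.fs -r $BACKUP_FILESTORAGE_DIR
rsync -a $BACKUP_BLOBSTORAGE_DIR $BLOBSTORAGE_DIR
```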
- For database packing, I originally looked at CLI options but decided to use the through-the-web option instead. The reasoning is simple: it's already built in, and I don't risk accidentally misconfiguring something. This Python script reads from the same YAML file:
```python
import os

import requests
import yaml

from imsplone.utils import get_password

username = 'admin_cron'
pw = get_password('admin_cron')


def main():
    with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'instance1.yaml')) as file:
        config = yaml.safe_load(file)['default_context']
    data = {
        'days:float': 1,
    }
    # Pack main db
    url = f'http://{config["wsgi_listen"]}/Control_Panel/Database/main/manage_pack'
    requests.post(url, data=data, auth=(username, pw))
    # Also pack all mounted dbs
    for db in config['mounted_dbs'].split(','):
        url = f'http://{config["wsgi_listen"]}/Control_Panel/Database/{db}/manage_pack'
        requests.post(url, data=data, auth=(username, pw))


if __name__ == '__main__':
    main()
```
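This is the sort of thing that gets scheduled with cron; a hypothetical crontab entry (the paths and script name are placeholders, not my actual setup):

```sh
# pack the main db and all mounted dbs every Sunday at 02:00
0 2 * * 0 /path/to/venv/bin/python /path/to/pack_databases.py
```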
I think that's everything we were using buildout for, now moved to a couple of YAML files and scripts. The Python virtual environment is completely separate from this whole process. I did see that cookiecutter-zope-instance has some ZCML settings, but I didn't see a use for them in my setup - perhaps that is only for packages that aren't pip installed?
Thanks for your work on all this @jensens !