Scaling out Plone with RelStorage

We currently have a Plone 4.3 system using RelStorage. Our setup includes:

  • 1 AWS RDS PostgreSQL database server
  • 1 EC2 server running nginx/Varnish/HAProxy and 8 Zope instances

My question is: is it possible to scale out the system to several small EC2 instances, each running 1 Zope instance? I think this will scale better and be easier and more cost-effective than adding more power to a single machine. The main issue I run into is blob storage. When 8 instances run on one machine, they can all share the blobstorage; what happens when each instance runs on a different machine? I have some ideas, but none of them sounds ideal to me:

  • having blobstorage on each machine and syncing them? Is this really possible?
  • having blobstorage on NFS, so all instances can access it? Does it perform well and scale?

Any opinions are welcome!

Scale out and splitting services is probably a good idea.

Blob storage can (and should) be shared even if instances are on different machines (see the shared-blob option of plone.recipe.zope2instance). Having the blobs transferred over ZEO has disadvantages in both bandwidth/speed and disk usage on the ZEO-client machine.
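A minimal sketch of what that looks like in a buildout instance part, assuming a ZEO setup with the blob directory on a shared mount (the part name, ZEO address, and paths are placeholders, not from the original post):

```
[instance1]
recipe = plone.recipe.zope2instance
zeo-client = on
zeo-address = 10.0.0.10:8100
# The blob directory is a shared filesystem, so blobs are read
# directly from disk instead of being fetched over the ZEO protocol.
shared-blob = on
blob-storage = /mnt/shared/blobstorage
```

Each EC2 instance would carry the same shared-blob/blob-storage settings, pointed at the same mount.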

If on Amazon EC2 I would use an Amazon EFS for the shared blob storage part.
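For reference, an EFS filesystem mounts like any NFSv4.1 share; a sketch of an /etc/fstab entry (the filesystem ID, region, and mount point are placeholders):

```
# /etc/fstab -- mount the EFS share that holds the blobstorage
# fs-XXXXXXXX is a placeholder for your EFS filesystem ID
fs-XXXXXXXX.efs.eu-west-1.amazonaws.com:/  /mnt/shared  nfs4  nfsvers=4.1,hard,timeo=600,retrans=2  0  0
```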

@jensens have you seen any test data on the differences in performance between using ZEO to cache blobs locally vs NFS, Gluster, EFS or any other form of shared filesystem?
From a configuration perspective ZEO caches are certainly easier to set up, so it would be good to know the empirical tradeoff. We used to use glusterfs and it was a pain in the arse, so we took it out.

We are using NFS for our own stuff. This is well documented and works. I never tried Gluster. If we host at customer sites, we use their infrastructure's network mounts (which can be anything, including VMware, containers). On Amazon, EFS is what they provide, with an NFS 4.1 protocol.

With RelStorage one does not want to store blobs in the DB (except with Oracle, where they say it's not a problem; I have never tried this). Thus, you need a storage server anyway.
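A sketch of the corresponding RelStorage section in zope.conf (the DSN, paths, and credentials are placeholders); with shared-blob-dir set to true, blobs live on the shared filesystem and are not written into PostgreSQL:

```
<zodb_db main>
  <relstorage>
    # Keep blobs on the shared mount instead of in the database
    blob-dir /mnt/shared/blobstorage
    shared-blob-dir true
    <postgresql>
      dsn dbname='plone' host='db.example.internal' user='plone' password='secret'
    </postgresql>
  </relstorage>
  mount-point /
</zodb_db>
```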

My measurements regarding ZEO blob transfer versus shared blobs with NFS are about 5 years old, and I cannot find my protocol from that time. But under load, the blobs slowed down the ZEO connection significantly. These days there is a newer ZEO in the latest Plone 5.1, and this might change all of this. We need new measurements.

The second disadvantage is that the local blob disk cache needs plenty of space. So every (virtual) machine (or container) needs a large cache for blobs. This can be downsized, but at the cost of speed. If this is no problem in your infrastructure, that is fine. Here it was often a problem, also at customer hosting centers: most of them outsource their infrastructure, so they pay for all disk space.
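If you do go the non-shared route, RelStorage can cap that local cache via blob-cache-size; a sketch of the relevant fragment (the path and size are arbitrary placeholders, and downsizing trades disk for re-fetches):

```
<relstorage>
  # Local, per-client blob cache -- blobs are fetched on a cache miss
  blob-dir /var/local/blobcache
  shared-blob-dir false
  # Cap the cache; smaller values save disk at the cost of speed
  blob-cache-size 2gb
</relstorage>
```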

If you want to get all binary load away from Plone/Zope and ZEO, there is also collective.xsendfile. We developed/used it successfully for some Plone 4.3 sites with large blobs. It utilizes the web server's capability to turn a backend response header into a file delivery. I suppose it needs some update to work with Plone 5; that said, I never tried it.
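The web-server side of that pattern, for nginx, is an internal location that the backend's X-Accel-Redirect response header points at; a sketch (the location name and path are assumptions, not from collective.xsendfile's docs):

```
# Zope answers with an "X-Accel-Redirect: /blobs/<file>" header
# instead of streaming the blob; nginx then serves the file itself.
location /blobs/ {
    internal;   # only reachable via X-Accel-Redirect, not by clients directly
    alias /mnt/shared/blobstorage/;
}
```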

We fixed c.xsendfile for Plone 5 a while back. Works fine.


Oh, great. This should be reflected in the README and classifiers.

Have a look at https://www.slideshare.net/GuidoStevens/highperformance-highavailability-plone

@jensens
Any updates or new learning regarding RelStorage with EFS?