Advice on caching

I am in the process of setting up two sites for a school on the same, new, Ubuntu virtual server.

One site will be just for 'documentation': basically only a few users (3-4) will be working on writing and editing (long) pages.
There will be almost no other visitors (basically only editors), so what counts here is that editing pages does not feel 'sluggish'.

The other will have content added, just a few pages a day, and almost no updates to content already added, except for two content types which 'automatically' get updated every night at about 1 o'clock.
Maximum visitors will be 100-200 at the same time, but this could increase a lot in the near future (= next year, if other schools are interested in the project).

I currently have a 4 GB Ubuntu 16 virtual server with Apache in front, and mod_cache / mod_cache_disk (and Webmin) installed.

Advice would be welcome.
For example: is there any point in setting up Varnish for the site that is only for a few users? Is it better than just using mod_cache in Apache? Will it run as 'nicely' standalone, since it will only have a few users at a time?

Indeed, no Varnish for the 'documentation' site. Keep that one simple; one instance with 4 threads should be OK. Configure cache settings so all stable/static assets stick in the browser cache.
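
Since Apache is already in front, something like this keeps static assets in the browser cache. This is a minimal sketch, not a tested setup; the types and lifetimes are assumptions to adjust:

```
# Hypothetical sketch: browser caching of static assets via mod_expires.
# Enable the module first: a2enmod expires
<IfModule mod_expires.c>
    ExpiresActive On
    # Long lifetimes for assets that rarely change; tune per asset type.
    ExpiresByType image/png              "access plus 1 month"
    ExpiresByType text/css               "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```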

I have no recent experience with mod_cache. I doubt it beats Varnish. If you collect experience, please share it with us!
In case of Varnish: try to cache as much as possible for the public site in Varnish. Use Munin or similar to monitor Varnish hit rates. Try to get to hit rates of 90-95% (or more).
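
To spot-check the hit rate outside of Munin, varnishstat's counters can be used; the hit rate is hits divided by hits plus misses. The counter names below assume Varnish 4.x (older versions drop the MAIN. prefix):

```
# One-shot dump of the two counters behind the hit rate:
# hit rate ≈ cache_hit / (cache_hit + cache_miss)
varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss
```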

By one instance, do you mean 1 ZEO client with:

```
server-threads = 4
```

I usually take a different approach; in summary:

  • use nginx, as it uses less memory than Apache and you have limited resources (I can share a configuration with you in case you have never tried it)
  • enable caching on nginx and don't use Varnish; with 4 GB you don't have enough memory for it (see the sketch after this list)
  • install and configure plone.app.caching on both sites
  • adjust the number of objects in the ZODB cache so you use your memory efficiently
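
A minimal sketch of the kind of nginx setup meant above: nginx proxies to a local Zope instance and caches responses on disk. The hostname, paths, and timings here are assumptions, not a tested configuration; plone.app.caching's response headers should drive the actual lifetimes:

```
# Hypothetical sketch: nginx as caching reverse proxy in front of Zope.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=plone:10m max_size=512m;

server {
    listen 80;
    server_name www.example-school.org;  # assumed hostname

    location / {
        # Standard Zope virtual host monster URL rewriting.
        proxy_pass http://127.0.0.1:8080/VirtualHostBase/http/$host:80/Plone/VirtualHostRoot/;
        proxy_cache plone;
        proxy_cache_valid 200 10m;  # fallback when no caching headers are sent
    }
}
```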

you don't tell us how many processors your virtual server has; I would recommend no less than 2 (4 would be better).

with 4 GB of memory you can use no more than 3 GB for Zope/Plone instances; so if you have 4 processors you can use 3 instances with just one thread each.
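
In buildout terms that could look roughly like this. A sketch only, assuming plone.recipe.zope2instance with ZEO; part names, ports, and the cache size are placeholders:

```
# Hypothetical sketch: three single-threaded ZEO clients.
# (Remember to list the three parts in ${buildout:parts}.)
[instance1]
recipe = plone.recipe.zope2instance
zeo-client = on
zeo-address = 127.0.0.1:8100
http-address = 8080
zserver-threads = 1
zodb-cache-size = 30000   # objects per connection; tune against available RAM

[instance2]
<= instance1
http-address = 8081

[instance3]
<= instance1
http-address = 8082
```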

@jensens: never use more than one thread on your instances, it will be slower; @davisagli wrote an interesting post some time ago about the implications of the Python GIL.

on the other hand, I never mix sites on the same server; tuning Zope/Plone would be more difficult.


I've lately been using nginx and CloudFlare. Definitely use plone.app.caching.


I take it you've gone through http://docs.plone.org/manage/deploying/index.html and particularly http://docs.plone.org/manage/deploying/caching/index.html

I doubt that the database will be bigger than 1 GB, probably only 500 MB if I pack it every week.
The main reason for having many instances is if you have many users(?).
Currently I have only 2 processors: would you still go for 3 instances? The site will be for one (large) school, but might (quite likely) serve many within a year, in which case I can upgrade to more processors (there will be a bigger budget if that happens).

About Cloudflare: basically all the users will be from the same area (town).

It's not as simple as saying never to use more than one thread. Even if it has a CPU overhead, using multiple threads saves memory, since the memory used by the Plone code is shared. It only makes sense to have a single thread per instance if your app is 100% CPU bound or you have plenty of RAM.


@djay is right: I should have been less authoritative while giving advice to @jensens; as is so often the case, the answer is "it depends…".

if your site is small then you can try different configurations, playing with the server resources you have; the key is to monitor them and try to maximize the use of RAM and minimize the use of CPU: change one thing, check the results, and iterate.

It will still cache and accelerate delivery of page content and reduce load on your own server. FOR FREE. :slight_smile:

This really depends; be careful about generalizing it. And do not overestimate the impact of the Global Interpreter Lock.

If you really need to tune performance, it's worth having varnish and haproxy in the mix (rather than trying to do it all with nginx). They give you a lot more control over the way caching and load balancing work, as well as better instrumentation.

An example: haproxy allows you to make sure that you aren't feeding too many connections to any given zope client. Zope clients will accept new connections even when they're busy with old ones. A naive load balancer will be fooled by that behavior into giving new connections to a busy zope client while other clients sit idle. The effect can be long stalls (for the end user) while load is still low on the server.
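
A minimal haproxy sketch of that idea, with assumed ports and names: capping each server's maxconn at its thread count means haproxy itself holds the queue and never hands a request to a client that cannot start working on it:

```
# Hypothetical sketch: queue in haproxy instead of in busy Zope clients.
backend plone
    balance leastconn
    # maxconn matches each client's thread count (1 here); excess
    # requests wait in haproxy's queue rather than behind a busy client.
    server instance1 127.0.0.1:8080 check maxconn 1
    server instance2 127.0.0.1:8081 check maxconn 1
    server instance3 127.0.0.1:8082 check maxconn 1
```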

Thanks a lot for all the answers.
To try to 'sum it up': is it OK to go for
Apache => Varnish => ZEO with 3 instances,

then monitor 'hit rate', CPU, and memory, and if I don't have enough, add haproxy to the mix (or get more RAM)?

as you can see, there are many possible solutions; after reading the arguments from the others, I would say:

  • Apache consumes more memory than nginx, but you may want to use it because you know it
  • a Zope instance for a small site can easily consume 512 MB of memory in the standard configuration with 30,000 cache objects; you have to monitor that, as it can consume even more because of the way memory allocation works. You may even have to restart instances if they grow too much (use Supervisor's memmon for that, as sketched after this list), or reduce the number of cache objects, but that will affect your performance
  • a Varnish instance for a small site with 512 MB of memory for cache could easily consume up to 1 GB (that's why I think you may not have enough memory to run it)
  • I don't think HAProxy makes sense for such a small site; both nginx (ngx_http_upstream_module) and Apache (mod_proxy_balancer) support least-connections load balancing. Just take into account that with this method you'll waste memory in the instance caches, as the same objects will end up cached in all the instances; also, adding more moving parts to your site will only complicate your memory situation
  • use Cloudflare or something similar if you can, to reduce the load on your site and to protect you against DoS attacks
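
For the memmon suggestion above, a minimal supervisord sketch; memmon ships with the superlance package, and the program name and limit here are assumptions:

```
; Hypothetical sketch: restart an instance that grows past 800 MB.
; memmon comes from superlance (pip install superlance).
[eventlistener:memmon]
command = memmon -p instance1=800MB
events = TICK_60
```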

To sum up: start with one Zope instance with 4 threads and Apache with its load-balancing and caching modules, and monitor it; I think that could be good enough for your use case. But if you find a bottleneck, change something and monitor again to see whether your change resulted in an improvement or not; then ask for advice again, but bring more information on what's going on.

also, don't forget to install and configure plone.app.caching in any case; a single Zope instance can give you a req/sec rate that could be enough for your site.

I'll try to run some benchmarks this week using separate instances versus one instance with many threads, to see the results.

If all requests were created equal, or if a ZEO client actually worked on all requests it received (rather than just the number for which it has threads), then a least-request balancing algorithm would work. Since neither is true, I still think the best balancing algorithm is for the load balancer to maintain the queue and to only dispense requests to clients that are handling no more requests than they have threads available.

One exception to this analysis is that a ZEO client creates streaming threads to handle blob returns. Those do not tie up the main threads. If a load balancer could distinguish between when a client was returning blobs and when not, we could increase efficiency. Does anyone know of a load balancer that can do that trick?

we have a site that had been using the hash algorithm without issues until we implemented a subscribers area; now I have a lot of requests hitting one specific instance, and sometimes it gets sick.

I'm willing to give least connections a try and compare the results, as we're using 2 identical servers with nginx, Varnish, and 4 Zope instances (1 thread each) running ZEO.
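
For reference, switching nginx from hash-based to least-connections balancing is a one-directive change in ngx_http_upstream_module; the upstream name and ports below are assumptions:

```
# Hypothetical sketch: least-connections balancing across 4 ZEO clients.
upstream zope {
    least_conn;            # replaces e.g. "hash $request_uri consistent"
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
    server 127.0.0.1:8084;
}
```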

@smcmahon I raised this as a potential feature request with the author of haproxy. If enough people asked, perhaps he would consider it. The feature was to consider a backend instance "free" on first byte received (or end of headers) rather than on last byte received.

His answer from 2012 was

```
That's quite a strange software architecture, isn't it ? Do you know what
makes the request parsing limited to one connection if the machine is
capable of processing the remaining data asynchronously?

I'm not opposed to finding a solution to make haproxy capable of improving
such a use case, but it would be nice to understand why such bottlenecks
exist in the first place, so that we know whether we should cover other
use cases at the same time.

Regards,
Willy
```

You can use collective.xsendfile (https://github.com/collective/collective.xsendfile), which delivers blobs directly through your webserver using X-Sendfile / X-HTTP-ACCEL, to free haproxy from the streaming of a file. Apache or nginx will read the blob directly from the blobstorage folder (a Zope streaming thread is not even used).
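
Roughly how the nginx side of that works, as a sketch (the location name and blobstorage path are assumptions): Zope answers with an X-Accel-Redirect header naming the file, and nginx streams the blob itself:

```
# Hypothetical sketch: nginx serves blobs pointed to by X-Accel-Redirect.
location /blobstorage/ {
    internal;                           # reachable only via X-Accel-Redirect
    alias /srv/plone/var/blobstorage/;  # assumed path to the blob files
}
```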

That is a very interesting idea. I may have a testable case...

Does blobstorage still have a weird permission issue if you use different users for instances and front-end?

Sean