Volto Waitress Queue

The Volto (15.x) Docker container has very poor response time, with requests getting queued by Waitress.

Any suggestions on how to improve this performance? I'm currently using a single Plone backend instance; performance on Classic UI is fine.

backend_1     | 2022-05-03 08:25:41 WARNING [waitress.queue:114][MainThread] Task queue depth is 1
backend_1     | 2022-05-03 08:25:41 WARNING [waitress.queue:114][MainThread] Task queue depth is 2
backend_1     | 2022-05-03 08:25:41 WARNING [waitress.queue:114][MainThread] Task queue depth is 3
backend_1     | 2022-05-03 08:25:41 WARNING [waitress.queue:114][MainThread] Task queue depth is 4
backend_1     | 2022-05-03 08:25:41 WARNING [waitress.queue:114][MainThread] Task queue depth is 5
backend_1     | 2022-05-03 08:25:41 WARNING [waitress.queue:114][MainThread] Task queue depth is 6 

With Volto you have multiple parallel requests, so one instance will block. Classic UI runs on the same thread, so one instance is OK if you have only one or two concurrent users. So, yes, use multiple ZEO clients. If it is a public site, you can also put a web cache in front to improve things further.
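For example, a minimal buildout sketch of a ZEO server with two clients (untested; the part names, ports, and the bare Plone egg are assumptions to adapt to your setup):

[zeoserver]
recipe = plone.recipe.zeoserver
zeo-address = 127.0.0.1:8100

[client1]
recipe = plone.recipe.zope2instance
zeo-client = true
zeo-address = ${zeoserver:zeo-address}
http-address = 8081
eggs = Plone

# second client on another port; point your balancer at 8081 and 8082
[client2]
<= client1
http-address = 8082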

Thanks Yuri. Do you have any pointers to caching documentation?

Does creating a ZEO cluster to handle multiple requests also work?

plone.recipe.varnish should handle Volto caching.
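A minimal buildout sketch of the recipe (untested; option names vary between recipe versions, and the addresses and cache size here are assumptions):

[varnish]
recipe = plone.recipe.varnish
# where Varnish listens and which backend it caches
bind = 127.0.0.1:8000
backends = 127.0.0.1:8080
cache-size = 256M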

plone.app.caching also has a restapi profile.

Yes, creating a cluster will help, but you need to balance the clients. plone.recipe.varnish can do balancing too. The simplest way to balance between ZEO clients is Pound, which is very easy to set up. Here is a recipe:

GitHub - collective/plone.recipe.pound (old, but it should still work, or you can just copy the templates).
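For reference, a minimal sketch of a Pound configuration balancing two ZEO clients (untested; addresses and ports are assumptions, and the recipe's templates generate something along these lines):

ListenHTTP
    Address 0.0.0.0
    Port    8080

    # requests are distributed across the backends listed here
    Service
        BackEnd
            Address 127.0.0.1
            Port    8081
        End
        BackEnd
            Address 127.0.0.1
            Port    8082
        End
    End
End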


Balance with Apache:

RewriteRule ^(.*) balancer://cluster/VirtualHostBase/https/yourdomain.local:443/plone/VirtualHostRoot/$1 [L,P]
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://cluster>
    BalancerMember http://127.0.0.1:8081 route=1
    BalancerMember http://127.0.0.1:8082 route=2
    BalancerMember http://127.0.0.1:8083 route=3
    BalancerMember http://127.0.0.1:8084 route=4
    # stickysession pairs with the ROUTEID cookie set above to keep a client on one backend
    ProxySet lbmethod=bybusyness stickysession=ROUTEID
</Proxy>
ProxyPass / balancer://cluster/
ProxyPassReverse / balancer://cluster/

I'm thinking of launching multiple instances using Docker Swarm. I believe Docker Swarm automatically does the load balancing; if not, I will use your Apache proxy rules.
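A minimal sketch of a Swarm stack file with replicated backends (the image name/tag and replica count are assumptions); Swarm's routing mesh round-robins connections to the published port across the replicas:

version: "3.8"
services:
  backend:
    image: plone/plone-backend:6.0
    ports:
      # published via the routing mesh, which balances across replicas
      - "8080:8080"
    deploy:
      replicas: 4

Deployed with something like docker stack deploy -c stack.yml plone.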

I wonder if running multiple instances would cause data conflicts in the database. I am using RelStorage.

No, it does not. I recommend using the Traefik web proxy instead of NGINX, because Traefik can handle multiple backends dynamically.
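A minimal sketch of Traefik v2 labels on a Swarm service (assuming a Traefik instance with the Docker provider is already running; the router/service names, domain, and port are assumptions):

services:
  backend:
    image: plone/plone-backend:6.0
    deploy:
      replicas: 4
      # in Swarm mode, Traefik reads labels from the deploy section
      labels:
        - traefik.enable=true
        - traefik.http.routers.backend.rule=Host(`plone.example.org`)
        - traefik.http.services.backend.loadbalancer.server.port=8080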

No: everything is fully transactional (ACID), with decent conflict resolution. Under very high write load unresolvable conflicts may happen; this results in a transaction rollback, the data won't be written, and an error is raised. To avoid this, keep your transactions as short as possible. If you have lots of BLOB data and you use RelStorage 3, store the blobs within RelStorage (not in a separate shared filesystem) to reduce synchronization time between the different storages.
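A minimal ZConfig sketch of that setup (the DSN values are placeholders; shared-blob-dir false keeps the blobs in the RDBMS and uses blob-dir only as a local cache):

%import relstorage
<zodb_db main>
    mount-point /
    <relstorage>
        # blobs live in the database; this directory is just a per-instance cache
        blob-dir ./var/blobcache
        shared-blob-dir false
        <postgresql>
            dsn dbname='plone' host='db' user='plone' password='secret'
        </postgresql>
    </relstorage>
</zodb_db>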


Apart from these warning messages, which are more of a logging configuration issue, did you do any other tests? Maybe actual load testing that shows that performance is slow?

At the conference last year I heard from people running larger Volto setups that they didn't even need to set up a cache like Varnish, because the REST API responses are so fast that nobody complains.

For smaller deployments, Volto might be fast enough. But once the database scales to a large number of objects created with custom content types, caching seems to be needed.