Waitress: "Client disconnected while serving ...."

Hi,

Since I updated my app from Zope 2.13 to Zope 4, I get lots of log entries like this:

2020-01-26 09:40:50,981 INFO    [waitress:359][waitress] Client disconnected while serving /VirtualHostBase/https/myapp.de:443/bliss/VirtualHostRoot//bliss/++resource++assets/packages/jquery/jquery.js

The file is available via
https://myapp.de/bliss/++resource++assets/packages/jquery/jquery.js - and it loads properly.

Can anybody explain what this means? It is at info level... but disconnected does not sound good :slightly_smiling_face:

Thank you!

I'm seeing the same thing, preceded by this kind of warning. I'm not sure how there could be a queue at all, since I have HAProxy in front with a maxconn limit of 2 and only 2 threads running. Or am I misunderstanding how waitress works with Zope now?

2021-06-03 23:40:19,078 WARNING [waitress.queue:116][MainThread] Task queue depth is 13
2021-06-03 23:40:19,801 WARNING [waitress.queue:116][MainThread] Task queue depth is 14
2021-06-03 23:40:19,801 WARNING [waitress.queue:116][MainThread] Task queue depth is 15
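
For what it's worth, the task queue being warned about is internal to waitress and is sized by waitress's own settings, not by HAProxy's maxconn. A minimal sketch of those knobs (the values below are illustrative assumptions, not taken from any setup in this thread):

```
# Sketch of waitress's concurrency settings (illustrative values only).
# `threads` bounds how many requests are processed in parallel, while
# `connection_limit` bounds how many sockets waitress will accept; requests
# accepted beyond the free worker threads wait in waitress's internal task
# queue, which is what the "Task queue depth is N" warning reports.
from waitress import serve

def app(environ, start_response):
    # trivial WSGI app standing in for the real Zope/Plone pipeline
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok\n']

if __name__ == '__main__':
    serve(app, host='127.0.0.1', port=8080, threads=2, connection_limit=100)
```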

I see this message frequently on a high-volume Plone site, but only for one particular browser view (serving some SVG or other resources from the filesystem).

I have seen the "Task queue depth..." messages in former times, but at some point they went away (the site runs on 5 ZEO clients with Apache and HAProxy in front, no caching proxy involved).
Our haproxy config is like this:

```
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    timeout connect 5s
    timeout client 50s          # should be equal to server timeout
    timeout server 50s          # should be equal to client timeout
    timeout queue 25s           # discard requests sitting too long in the queue
    option httpchk GET /VirtualHostBase/http/www.onkopedia.com:80/onkopedia/VirtualHostRoot/ HTTP/1.0\r\nUser-agent:\ HAProxy-LB-Check

backend stats
    stats uri /admin/stats
    stats refresh 5s

backend zope
    server onkopedia1 localhost:10001 weight 1 check inter 15s rise 2 fall 1 maxconn 2
    server onkopedia2 localhost:10002 weight 1 check inter 15s rise 2 fall 1 maxconn 2
    server onkopedia3 localhost:10003 weight 1 check inter 15s rise 2 fall 1 maxconn 2
    server onkopedia4 localhost:10004 weight 1 check inter 15s rise 2 fall 1 maxconn 2
    server onkopedia5 localhost:10005 weight 1 check inter 15s rise 2 fall 1 maxconn 2
```

This message informs you that the (HTTP) client (browser or proxy) closed the connection to Zope before the response was sent. This may be due to a (response) timeout.
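
If the entries are just noise for you, the log line itself shows they come from the `waitress` logger at INFO level, so they can be silenced with standard Python logging configuration. A minimal sketch (where and how you hook this into your Zope logging setup is up to you):

```
# Sketch: raise the level of the "waitress" logger so that the purely
# informational "Client disconnected while serving ..." entries are dropped,
# while warnings such as "Task queue depth is N" are still emitted.
import logging

logging.getLogger('waitress').setLevel(logging.WARNING)
```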


I get that client disconnects happen if there are delays, which seems to be the main root cause of the problems I'm seeing. What puzzles me is that I have a similar setup to @zopyx, especially the maxconn 2, so how can there be any internal queue inside Zope with only 2 connections and 2 threads?
The only way I can think this could happen is if HAProxy thinks the client disconnected while waitress thinks the connection is still fine. Then HAProxy would send in another request, which would then get queued. Is there a way that is possible?
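
As a side note, waitress does queue on its own, independent of whatever proxy sits in front. A small self-contained sketch (port, timings, and the number of clients are arbitrary assumptions): with `threads=2`, a burst of four requests still gets accepted, and the two extra ones simply wait in waitress's task queue.

```
# Sketch: waitress accepts more connections than it has worker threads and
# parks the extra requests in its internal task queue until a thread is free.
import threading
import time
import urllib.request

from waitress import serve

def slow_app(environ, start_response):
    time.sleep(5)  # stand-in for a slow Zope view
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'done\n']

# run waitress with only two worker threads
threading.Thread(
    target=lambda: serve(slow_app, host='127.0.0.1', port=8081, threads=2),
    daemon=True,
).start()
time.sleep(1)  # give the server a moment to start

def fetch(i):
    with urllib.request.urlopen('http://127.0.0.1:8081/') as resp:
        print(i, resp.status)

# four concurrent requests: two are served immediately, two are queued
clients = [threading.Thread(target=fetch, args=(i,)) for i in range(4)]
for t in clients:
    t.start()
for t in clients:
    t.join()
```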

The "client disconnected while serving" message comes from waitress. Thus, it detects connection closes. If request processing has started, it may not be able to react immediately (because Python does not allow to abort a running thread). Thus, there is a limited possibility that haproxy can view a request as aborted while it is still being served. However, I do not think that requests being served are still in the task queue.

Yes, I think this is what's going on. There is something causing an extremely long request. HAProxy times out that request and sends another, then times out that one and sends another, and so on.
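
To pin down which request is that slow, one low-tech option is to wrap the WSGI app in a small timing middleware and log everything above a threshold. This is just a sketch (the middleware, its name, and the threshold are mine, not something shipped with Zope or waitress):

```
# Sketch: log unusually slow requests so the view that makes HAProxy time
# out (and retry) can be identified. Measures until the app returns its
# iterable, which is usually good enough for a quick diagnosis.
import logging
import time

logger = logging.getLogger('request.timing')

class TimingMiddleware:
    def __init__(self, app, threshold=5.0):
        self.app = app
        self.threshold = threshold  # seconds

    def __call__(self, environ, start_response):
        start = time.monotonic()
        try:
            return self.app(environ, start_response)
        finally:
            elapsed = time.monotonic() - start
            if elapsed >= self.threshold:
                logger.warning(
                    'slow request: %s %s took %.1fs',
                    environ.get('REQUEST_METHOD'),
                    environ.get('PATH_INFO'),
                    elapsed,
                )

# Usage, assuming `app` is the WSGI application handed to waitress:
# serve(TimingMiddleware(app), host='127.0.0.1', port=8080, threads=2)
```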