Plone.app.async: asyncworker not un-/re-registering in queue

I have an existing project where plone.app.async is required by add-on products (collective.documentviewer and/or eea.pdf) to do asynchronous work in the background. We already used plone.app.async in a previous project, where it works fine on the server.

There are some new customisations for which I also needed async, so I am re-using plone.app.async. I've seen a lot of problem reports about it, and alternatives like collective.taskqueue exist, but it seems a bit silly to have two async solutions in one site. plone.app.async has been set up following the instructions in collective.documentviewer and in plone.app.async itself, with the extra zcml needed and with ZC_ASYNC_UUID pointing to separate files for the zeoclient and the worker. I'm using the single_db_instance setup until now, because it's not a heavily used site in terms of background jobs queued.
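As an aside on that ZC_ASYNC_UUID setup: each process reads its dispatcher UUID from the file that the ZC_ASYNC_UUID environment variable points to, so the zeoclient and the worker must each get their own file, or they will try to register under the same dispatcher UUID in the queue. Below is a minimal stdlib-only sketch of that idea (file names and the helper are my own, not part of plone.app.async):

```python
import os
import tempfile
import uuid

def ensure_uuid_file(path):
    """Write a fresh UUID4 to path on first use; always return the stored value.

    zc.async reads this value at startup from the file named in the
    ZC_ASYNC_UUID environment variable.
    """
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write(str(uuid.uuid4()))
    with open(path) as f:
        return f.read().strip()

# One file per process; the paths are illustrative.
base = tempfile.mkdtemp()
client_uuid = ensure_uuid_file(os.path.join(base, "zeoclient-uuid.txt"))
worker_uuid = ensure_uuid_file(os.path.join(base, "worker-uuid.txt"))
```

If both processes were accidentally pointed at the same file, they would share one UUID, which produces exactly the "already activated in queue" conflict described below.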

While developing my own async jobs I have a recurring problem: the asyncworker zeoclient starts, but does not register itself in the (default) queue, so nothing put in the worker queue gets processed. The zope processes are managed by supervisor.

The error I get in the log is:

2017-12-21T14:36:08 INFO zc.async.events attempting to activate dispatcher 82a32d28-e4c5-11e7-80f7-acbc328c9d67
------
2017-12-21T14:36:08 ERROR zc.async.events UUID 82a32d28-e4c5-11e7-80f7-acbc328c9d67 already activated in queue  (oid 117153): another process?  (To stop poll attempts in this process, set ``zc.async.dispatcher.get().activated = False``.  To stop polls permanently, don't start a zc.async.dispatcher!)

Upon shutdown of the asyncworker I do see the dispatcher being deactivated from the queue with a log message, but not always:

2017-12-21T14:13:37 INFO zc.async.events deactivated dispatcher 82a32d28-e4c5-11e7-80f7-acbc328c9d67

Once it gets stuck failing to register, the unregistering doesn't happen anymore either; but if I wait a while and then restart, it does register again.
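That "after some waiting it registers again" behaviour looks consistent with how zc.async detects dead dispatchers: a stale registration is only considered dead, and therefore reclaimable, once its last ping is older than the ping interval plus the ping-death interval stored on the queue's dispatcher record. A sketch of that logic as I understand it (this is not zc.async's actual code, and the interval values shown are assumptions, not verified defaults):

```python
from datetime import datetime, timedelta

# Illustrative values; the real ones live on the queue's DispatcherAgents
# record in the ZODB (ping_interval / ping_death_interval) and may differ.
PING_INTERVAL = timedelta(seconds=30)
PING_DEATH_INTERVAL = timedelta(seconds=60)

def dispatcher_seems_dead(last_ping, now):
    """A registration is reclaimable only once the last ping is older
    than ping_interval + ping_death_interval."""
    return last_ping + PING_INTERVAL + PING_DEATH_INTERVAL < now

now = datetime(2017, 12, 21, 14, 36, 8)
# Worker crashed 10 seconds ago: still looks alive, re-registration fails.
recent = dispatcher_seems_dead(now - timedelta(seconds=10), now)
# Worker crashed 5 minutes ago: considered dead, a restart can take over.
stale = dispatcher_seems_dead(now - timedelta(minutes=5), now)
```

This would explain why restarting immediately after an unclean shutdown hits the "already activated in queue" error, while restarting after a wait succeeds.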

Also, after a successful startup of the asyncworker, an error is thrown on shutdown while it tries to deregister from the queue:

2017-12-21T14:39:30 CRITICAL zc.async.events Error while cleanly deactivating UUID 82a32d28-e4c5-11e7-80f7-acbc328c9d67
Traceback (most recent call last):
  File "/Users/fred/.buildout/eggs/zc.async-1.5.4-py2.7.egg/zc/async/utils.py", line 316, in try_five_times
    res = call()
  File "/Users/fred/.buildout/eggs/zc.async-1.5.4-py2.7.egg/zc/async/dispatcher.py", line 507, in deactivate_das
    queues = self.conn.root().get(zc.async.interfaces.KEY)
  File "/Users/fred/.buildout/eggs/ZODB3-3.10.7-py2.7-macosx-10.12-x86_64.egg/ZODB/Connection.py", line 366, in root
    return RootConvenience(self.get(z64))
  File "/Users/fred/.buildout/eggs/ZODB3-3.10.7-py2.7-macosx-10.12-x86_64.egg/ZODB/Connection.py", line 248, in get
    p, serial = self._storage.load(oid, '')
AttributeError: 'NoneType' object has no attribute 'load'
------
2017-12-21T14:39:30 INFO zc.async.events deactivated dispatcher 82a32d28-e4c5-11e7-80f7-acbc328c9d67

Did anybody else see this issue with plone.app.async? I'm developing on a Mac and haven't moved this part to production yet. Another Plone site where plone.app.async is pulled in by the same collective.documentviewer has worked fine in production, but I'm getting a bit nervous after seeing this behaviour while developing.

Should I always use the multi_db_instance setup instead, even for low volumes, to make this more reliable? Is it Mac-related, or is there something else I could do (short of removing p.a.async altogether, which would be a costly alternative with big existing add-ons depending on it)?

Did you manage to find a solution or an explanation for this error?
I'm seeing the same error and can't find the cause of the problem.

2019-07-03T15:05:58 ERROR trollius Exception in callback close_threadsafe(<Future ...=pending>, False)
handle: <Handle close_threadsafe(<Future ...=pending>, False)>
Traceback (most recent call last):
  File "/data/plone/buildout-cache/eggs/trollius-2.1-py2.7.egg/trollius/events.py", line 136, in _run
    self._callback(*self._args)
  File "/data/plone/buildout-cache/eggs/ZEO-5.1.1-py2.7.egg/ZEO/asyncio/client.py", line 666, in close_threadsafe
    self.close()
  File "/data/plone/buildout-cache/eggs/ZEO-5.1.1-py2.7.egg/ZEO/asyncio/client.py", line 398, in close
    self.cache.close()
  File "/data/plone/buildout-cache/eggs/ZEO-5.1.1-py2.7.egg/ZEO/cache.py", line 404, in close
    f.close()
IOError: close() called during concurrent operation on the same file object.
------
2019-07-03T15:05:58 ERROR ZODB.Connection Couldn't load state for plone.scale.storage.ScalesDict 0x8139
Traceback (most recent call last):
  File "/data/plone/buildout-cache/eggs/ZODB-5.3.0-py2.7.egg/ZODB/Connection.py", line 796, in setstate
    p, serial = self._storage.load(oid)
AttributeError: 'NoneType' object has no attribute 'load'

@keabk No, unfortunately I haven't found a fix for my problem. And although you see the same error, I'm not sure they are related: mine was an error in Plone 4 with the ZServer implementation, while your error is occurring in Plone 5.1 or Plone 5.2 with the new server implementation.

When does your error occur? When you stop the zeoserver instance? It seems one part of the server is already closing database files (and the cache, per the traceback above) while another part is still trying to load objects from the database, where the _storage object no longer has a connection. But this is high-level guessing.

(This kind of error is one of the reasons why, in another post on this forum (Calling (slow) rest from Plone), I recently advised against using plone.app.async if you need async tooling, and recommended something like collective.taskqueue instead, which is more actively used, simpler, and at least somewhat maintained.)
