Scan the ZODB for broken BLOBs (POSKeyError)

I had a few broken objects in my Plone 4.3 site, and I found that old How-To by Mikko Ohtamaa (which deletes all objects with a broken FileField in a grok view and simply writes to stdout).

I like to have some more control, so I created a BrowserView which builds an HTML list of the broken objects and then allows deleting the checked ones.

I could imagine a view as well which does a local check (current context only, no recursion) and, if broken fields are found, allows uploading new contents or deleting the broken value for each of them. Currently, both the view and edit actions fail in such cases.

Now I wonder whether we perhaps have some Plone extension already which contains such functionality, or whether I should create one. IMO, it would be important to have such a tool at hand; those nasty POSKeyErrors can be real show-stoppers ...
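For illustration, here is a minimal sketch of the core logic such a scanning view could use. The names (`find_broken`, `read_data`) are my invention, and the blob-touching callable would depend on whether the content uses Archetypes (Plone 4) or Dexterity fields; the pure logic is kept separate from the Zope wiring so it can be reasoned about on its own:

```python
# Sketch only: the scanning logic, decoupled from the BrowserView wiring.
try:
    from ZODB.POSException import POSKeyError
except ImportError:  # not inside a Zope environment; define a stand-in
    class POSKeyError(KeyError):
        pass


def find_broken(brains, read_data):
    """Return [(path, error_repr)] for brains whose blob data is unreadable.

    `brains` is a catalog query result; `read_data(obj)` should touch every
    file/image field (e.g. read the first byte of each blob) and will raise
    POSKeyError for a blob whose file is missing from the blobstorage.
    """
    broken = []
    for brain in brains:
        try:
            read_data(brain.getObject())
        except POSKeyError as error:
            broken.append((brain.getPath(), repr(error)))
    return broken
```

In the view's `__call__` you would feed it the results of a `portal_catalog` query and render the returned list with checkboxes for the delete step.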

And, by the way: is it possible to update the visible HTML page from a long-running backend process, e.g. telling about the start and end of operation, and of any hit found?

You can't do it nicely without a queuing system and a separate worker instance.
Running a long-running process on a normal instance isn't a great idea.

Something like plone.app.async (on PyPI) has a status update mechanism, so you could create some JS that polls for the current status and updates the current web page.
I'm not sure if collective.taskqueue or other queuing solutions have a way to return a custom job status.
Alternatively you can just write the status somewhere in the DB and poll for changes in that.
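That "status in the DB" pattern can be sketched in a few lines. The annotation key and helper names below are made up; in Plone the `storage` mapping would be `IAnnotations(portal)`, the writer would be called by the long-running job (committing after each update), and the reader would be the body of a tiny JSON view the page polls:

```python
# Sketch of the polling pattern: job writes status, a view reads it back.
import json

STATUS_KEY = "my.addon.scan_status"  # hypothetical annotation key


def write_status(storage, done, total, hits):
    # Called by the long-running job. In Plone you would commit the
    # transaction after each update so polling requests see fresh values.
    storage[STATUS_KEY] = {"done": done, "total": total, "hits": hits}


def read_status(storage):
    # Body of a minimal polling view: return the current status as JSON.
    default = {"done": 0, "total": 0, "hits": []}
    return json.dumps(storage.get(STATUS_KEY, default))
```

On the client side, a few lines of JS calling the polling view every couple of seconds would be enough to show start, progress, and end of the operation.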

Yes, that's what I was after; thank you.

I looked around a little bit further and found there to be at least three different options for asynchronous tasks in Plone:

  • plone.app.async
    • Requires a multi-worker (ZEO or RelStorage) setup
    • Uses a dedicated Zope worker for background tasks
    • No additional non-Zope servers required
    • latest version 1.7.0 (2016-02-25)
    • Based on zc.async (latest version 1.5.4; 2011-03-03)
  • collective.taskqueue
    • No dependency on a multi-worker setup
    • latest version 1.0 (2020-02-10)
    • Optionally uses a Redis server, via the redis Python client
  • collective.celery
    • “Celery for Plone”
    • latest version 1.1.4 (2018-12-06)
    • Requires a Celery server (written in Python; latest version 5.0.2, 2020-11-02), which may in turn use Redis as well (see above)

Phew. From the installation point of view, the plone.app.async option looks easiest to me, because no non-Zope servers seem to be involved; but the last release of zc.async is really quite old, which makes me unsure whether this would be the way to go; it should work with Plone 4, at least.

Yes those are all the options :slight_smile: well researched.

I've used the first two. It's annoying that there is still no official Plone API to abstract out the differences.

p.a.async at least works on Plone 5.0, but I'm not sure about 5.2. Pretty sure it hasn't been updated to Python 3 and possibly won't be.

c.taskqueue is possibly the simplest to set up, because p.a.async still requires a specially mounted ZODB to work, I think. If all you want is a background thread and you don't need a single worker shared by all your ZEO instances, then it is pretty simple.
Even if it doesn't have an official status update mechanism, you can use an annotation somewhere and multiple transactions to provide updates.
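A hedged sketch of that combination, assuming collective.taskqueue's documented `taskqueue.add()` API; the view path and annotation key are illustrative, and the imports are guarded so the pure progress logic also runs outside Plone:

```python
# Sketch: queue a worker view, and report progress via annotation + commit.
try:
    from collective.taskqueue import taskqueue
    import transaction
except ImportError:  # sketch only; dispatching needs a Plone instance
    taskqueue = transaction = None

PROGRESS_KEY = "my.addon.progress"  # hypothetical annotation key


def queue_scan():
    # Dispatch an asynchronous request to a (hypothetical) worker view;
    # without Redis configured, it runs in a thread of the same instance.
    if taskqueue is not None:
        taskqueue.add("/Plone/@@scan-broken-blobs")


def report_progress(storage, current, total):
    # Write progress into an annotation (storage = IAnnotations(portal) in
    # Plone) and commit, so a polling view in another thread sees it mid-job.
    storage[PROGRESS_KEY] = (current, total)
    if transaction is not None:
        transaction.commit()
```

Committing between batches is the key trick: each commit makes the annotation update visible to concurrent polling requests.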

For the record, there is one other way to do something async in Plone (also written by @datakurre): https://pypi.org/project/collective.futures/. But I can't see how it could help you get progress back to the client, since the future has neither a transaction to write with nor a response object.

Oh, there is one more option: Asko's experimental Python 3 version of ZServer, which supported WebSockets: https://www.youtube.com/watch?v=mV2FfZdyNrU. It was a cool idea, but I don't think it got picked up, which is a real shame. It would have been nice to say: just use 5.2 and you get pub/sub, WebSockets and queuing all built in :frowning:

My latest lightweight take on this problem was to use Celery directly: start an independent (from Plone code) Celery worker somewhere, send messages to the queue from Plone, and use plone.restapi or other views to tell Plone what to do (or to get data from there to process). This is very straightforward. Celery is a mature project, and since I use Redis as its storage, it is easy to install.
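As an illustration of that setup (not the poster's actual code), a standalone worker module might look like the following; the broker URL, credentials, task name and environment variable are all assumptions:

```python
# tasks.py for an independent Celery worker (no Plone code imported).
# Assumes: a Redis broker on localhost and a Plone site with plone.restapi.
import os

PLONE_URL = os.environ.get("PLONE_URL", "http://localhost:8080/Plone")


def restapi_headers():
    # plone.restapi expects an Accept header asking for JSON responses
    return {"Accept": "application/json"}


try:
    from celery import Celery
    import requests

    app = Celery("plone_tasks", broker="redis://localhost:6379/0")

    @app.task(bind=True, max_retries=3)
    def process_item(self, path):
        # Hypothetical task: fetch an item from Plone via plone.restapi,
        # do the heavy work here, and return something for the result store.
        response = requests.get(PLONE_URL + path,
                                headers=restapi_headers(),
                                auth=("admin", "secret"))
        response.raise_for_status()
        return response.json().get("UID")
except ImportError:
    pass  # celery/requests not installed; this file is a sketch only
```

From the Plone side you would then enqueue work with something like `process_item.delay("/front-page")`, or push the message to the queue through a small helper, and let the worker do the rest.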

For some inspiration, here is the code:



Looks like the interesting feedback to my side question totally hijacked this thread :wink:

Well, if it turns out to be the best solution - did I get you right that this one does provide an “official status update mechanism”? - someone, perhaps me, might one happy day look into this. Currently I'm still on Plone 4.3 ...

I had a further look in our docs (Clock and asynchronous tasks, Plone 4 version) and found the information there to be 10 years old. Perhaps we could add some information about the current options and a few working examples there.

So you don't need any collective.celery functionality and use Celery directly to access ElasticSearch, right?
Yes, this looks interesting. Perhaps it should be mentioned in our “Using external catalogs” chapter?
