Cron for pack Data.fs

I would like to set up a cron job to pack my Data.fs directly in Zope 5 (I don't use Plone). Can you guide me on this?
Can the collective.recipe.backup product be used directly in Zope 5 for this?
Thank you in advance.

Zope 5 does not have its own cron job mechanism out of the box. The easiest way is to use the system cron daemon, e.g. via z3c.recipe.usercrontab on PyPI.
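If you use buildout, a part like the following registers a cron entry with z3c.recipe.usercrontab. This is a hedged sketch: the part name `pack-cron`, the schedule, and the `zeopack` arguments (host/port of your ZEO server) are assumptions you will need to adapt to your deployment.

```shell
# buildout.cfg fragment (assumption: you run a ZEO server on 127.0.0.1:8100)
# [buildout]
# parts = ... pack-cron
#
# [pack-cron]
# recipe = z3c.recipe.usercrontab
# times = 0 3 * * 0
# command = ${buildout:bin-directory}/zeopack -h 127.0.0.1 -p 8100 -d 1
```

On `bin/buildout`, the recipe installs the entry into your user crontab; `-d 1` keeps one day of history when packing.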

Thank you. A follow-up question: is it possible to define a Folder object in Zope that is not versioned? For my needs I create and destroy folders, but this makes my ZODB grow because of the undo history: the ZODB has grown to 8 GB, while my hosted content (once the ZODB is packed) is just over 200 MB.
Do you have an idea?

History is an intrinsic feature of (some) ZODB storages. It cannot be enabled or disabled on a per-class basis.

A "normal" Zope folder writes all of its first-level content to the ZODB, with children represented by persistent references (consisting of an 8-byte object id and the class path). If your folders are large, this results in large transactions and rapid storage growth. There is a specialized folder for large content (--> Products.BTreeFolder2). It stores its content in a tree structure; on modification, only the tree nodes that have actually been modified are written to the ZODB, not all the content. This gives smaller transactions and reduced storage growth.
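To make the storage-growth difference concrete, here is a small stdlib-only sketch (not Zope code; the function names, the bucket size, and the pickle-based size measurement are illustrative assumptions). A flat folder re-serializes its whole id-to-object mapping on every change, while a tree-structured folder only rewrites the bucket holding the changed key plus a small index above it:

```python
import pickle

BUCKET_SIZE = 30  # illustrative bucket capacity, not the real BTrees value

def flat_write_size(n_items):
    """Bytes written per change if the whole mapping is re-serialized
    (the 'normal' folder behavior: cost grows with folder size)."""
    folder = {f"doc{i}": i for i in range(n_items)}
    folder["new"] = n_items  # one modification...
    return len(pickle.dumps(folder))  # ...rewrites everything

def tree_write_size(n_items):
    """Bytes written per change if content is split into buckets
    (the BTreeFolder2 idea: only touched nodes are rewritten)."""
    buckets = {}
    for i in range(n_items):
        buckets.setdefault(i // BUCKET_SIZE, {})[f"doc{i}"] = i
    index = sorted(buckets)              # tiny top-level node
    touched = buckets[max(buckets)]      # only the bucket receiving the item
    touched["new"] = n_items
    return len(pickle.dumps(touched)) + len(pickle.dumps(index))

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        print(n, flat_write_size(n), tree_write_size(n))
```

Running this shows the flat cost growing roughly linearly with folder size while the tree cost stays small, which is why BTreeFolder2 keeps transactions (and storage growth) modest for large folders.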

Thanks for this information.
In the meantime, I found how to pack my ZODB via a cron task, using ZEO.
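For anyone else landing here, a plain crontab entry doing this might look as follows. This is a sketch under assumptions: the virtualenv path and the ZEO host/port are placeholders for your own setup, and `zeopack` is the packing script shipped with ZEO.

```shell
# m h dom mon dow  command
# Pack the ZEO-served storage every Sunday at 03:00, keeping 1 day of history.
# Assumptions: ZEO listens on 127.0.0.1:8100; adjust the path to your install.
0 3 * * 0  /path/to/venv/bin/zeopack -h 127.0.0.1 -p 8100 -d 1
```

Packing through ZEO has the advantage that the Zope instance can keep running while the storage is packed.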
