Plone memory issue

Hi,

I am running 4 Plone sites (5.2.4) served via 2 ZEO clients. I have a bunch of monitoring scripts set up on the server to monitor memory, CPU usage, etc. and do graceful restarts.

I have cron jobs to restart Plone every Sunday night.

Once restarted, each client uses 10% memory, no biggie. Every few hours each client goes up by about 1%. Usually by the end of the week, Saturday, I start getting pinged about low server memory (80% in use).

Now, prior to this I had these exact 4 sites running 5.0.4 without this memory growth issue.

Has anyone else been experiencing this?

Thanks,
David

Now it is happening every 3 days. Not sure what to do but switch to nightly restarts.

David

...which is not an unusual usage pattern with Python applications.

Hi David.
We moved a 2.5 million object system from 4.3 to 5.2 (Python 2 -> Python 3 as well!) 18 months ago.
During the process we learned how our own Python code impacts RAM usage.

Could you add some more details: the size of your Data.fs, your Python version, the number of threads for each ZEO client, and maybe also the number of objects?

Then maybe we could give you a starting point for your own inspection of what impacts RAM usage.
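For reference, most of those numbers can be read from a debug prompt (e.g. bin/client1 debug). A minimal sketch, assuming the default "main" ZODB mount and that "app" is the Zope root object provided by the debug session:

import sys

db = app._p_jar.db()  # the ZODB behind the Zope root
print("Data.fs size (bytes):", db.getSize())
print("Total objects:", db.objectCount())
print("Python version:", sys.version)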

/Niels

What's of most interest to me is that the only change was an upgrade from 5.0.4 to 5.2.4.
No change from Python 2.7 to Python 3?

Thanks everyone. Here is some more information.

Plone: Upgraded from 5.0.4 to 5.2.4
Python: 2.7 to 3.8
Data.fs: 967.1MB
Total Objects: 1,672,715
Zeoclients: 2
Threads: Whatever the Plone default is.
Add-ons: A few simple products with only Dexterity models, nothing special. EasyForm, PlonePAS.

All objects were freshly created rather than moving over the old Data.fs: a pipeline was used to export the data as JSON and create new objects in 5.2.4.
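For illustration only (a sketch, not the pipeline that was actually used), such an export could walk the portal catalog and dump basic fields as JSON:

import json

from plone import api


def export_site_json(out_path):
    # Illustrative sketch: dump path, type and title of every catalogued object.
    catalog = api.portal.get_tool("portal_catalog")
    items = []
    for brain in catalog():
        obj = brain.getObject()
        items.append({
            "path": "/".join(obj.getPhysicalPath()),
            "portal_type": obj.portal_type,
            "title": obj.Title(),
        })
    with open(out_path, "w") as fp:
        json.dump(items, fp, indent=2)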

David

In Python 3, sys._debugmallocstats() gives you information about Python's own memory management (pymalloc), which handles the smaller memory blocks; a debug build of Python adds extra detail. The information allows you to estimate the memory fragmentation for those blocks.
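A minimal sketch of dumping those statistics from a running process (the report is printed to stderr):

import sys

# Print pymalloc arena/pool/block statistics to stderr.
sys._debugmallocstats()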

If you are on Linux and use a fairly new glibc (2.33 or later), there is the C library function mallinfo2. You can call it from Python via ctypes. It provides information about the C memory management, which is used for the management of large blocks. With it, you can check whether large blocks are leaked.

With older glibc versions there is only mallinfo. It provides the same information as mallinfo2 but uses only 4-byte fields, so once memory use exceeds 4 GB the important values wrap around and are effectively truncated.
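For illustration, a sketch of calling mallinfo2 via ctypes (assumes Linux with glibc 2.33+; the field layout below matches glibc's struct mallinfo2, where every counter is a size_t):

import ctypes


class Mallinfo2(ctypes.Structure):
    # Field layout of glibc's struct mallinfo2 (all fields are size_t).
    _fields_ = [
        ("arena", ctypes.c_size_t),     # non-mmapped space allocated from the OS
        ("ordblks", ctypes.c_size_t),   # number of free chunks
        ("smblks", ctypes.c_size_t),    # number of free fastbin blocks
        ("hblks", ctypes.c_size_t),     # number of mmapped regions
        ("hblkhd", ctypes.c_size_t),    # bytes in mmapped regions
        ("usmblks", ctypes.c_size_t),   # unused
        ("fsmblks", ctypes.c_size_t),   # bytes in freed fastbin blocks
        ("uordblks", ctypes.c_size_t),  # total allocated bytes
        ("fordblks", ctypes.c_size_t),  # total free bytes
        ("keepcost", ctypes.c_size_t),  # releasable space at the arena top
    ]


libc = ctypes.CDLL(None)  # the running process is already linked against glibc
libc.mallinfo2.restype = Mallinfo2

info = libc.mallinfo2()
print("allocated:", info.uordblks, "free:", info.fordblks, "mmapped:", info.hblkhd)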


Thank you! That is perfect. I am going to get that set up right now to track this down.

For the ZEO clients we used decorators based on the following code:

import inspect
import logging

from plone.api import portal

logger = logging.getLogger(__name__)


def cacheit(method):
    """Decorator that logs ZODB cache details after every call."""

    def cachedme(*args, **kw):
        """Call ``method`` and then log the main database cache statistics."""
        # Only forward keyword arguments that the wrapped method accepts.
        newkw = {}
        variables = inspect.getfullargspec(method)[0]

        if kw:
            for element in kw.items():
                if element[0] in variables:
                    newkw[element[0]] = element[1]

        if newkw:
            result = method(*args, **newkw)
        else:
            result = method(*args)

        # Look up the main ZODB through the Zope application root.
        self = portal.get()
        APP_ = self.getPhysicalRoot()
        MAIN_DB = APP_.Control_Panel.Database["main"]._getDB()

        logs = "###### ZODB CACHE DETAILS ###########\n"
        logs += "getCacheSize: {0}\n".format(MAIN_DB.getCacheSize())
        #logs += "getCacheSizeBytes: {0}\n".format(MAIN_DB.getCacheSizeBytes())
        logs += "cacheSize: {0}\n".format(MAIN_DB.cacheSize())
        logs += "cacheDetailSize: {0}\n".format(MAIN_DB.cacheDetailSize())
        #logs += "cacheDetail: {0}\n".format(MAIN_DB.cacheDetail())
        #logs += "cacheExtremeDetail: {0}\n".format(
        #    MAIN_DB.cacheExtremeDetail())
        logs += "######## DONE #############"

        logger.info(logs)

        return result

    return cachedme
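A hypothetical usage example: wrap any method whose calls you want to trace, and the cache details are logged after each call.

@cacheit
def build_report(context, query=None):
    """Hypothetical method; ZODB cache details are logged after every call."""
    catalog = context.portal_catalog
    return [brain.getPath() for brain in catalog(SearchableText=query)]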