Plone install breaks on first attempt to customise buildout.cfg

I wanted to try Plone out as the basis for a new project, and installed 5.2. When I added just collective.easyform to buildout.cfg, ran bin/buildout and restarted Plone, the site was no longer served.
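For reference, the change was of this shape (a minimal sketch of a stock Unified Installer buildout's eggs section; the surrounding entries may differ in your generated buildout.cfg):

```ini
[buildout]
# ... rest of the generated configuration unchanged ...
eggs =
    Plone
    Pillow
    collective.easyform
```

After editing, the usual sequence is to run bin/buildout and then restart the instance.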

I looked through var/log but there were no errors there to say why the site wasn't being served. I have lynx installed on the Linux server, and visiting http://localhost:8080 works if I don't customise the buildout.cfg, but fails once I add a product.

After reverting to the original buildout.cfg, running buildout and restarting the server, I can once again connect to it with lynx on localhost or over the network.

I tried installing a different product (Products.PloneFormGen) and that too causes the server to stop working. In each case buildout registers no errors, and there are no errors in the server logs on restart. The last line in the log on restart is "Zope Ready to handle requests". Since I can't even connect to the Zope server on port 8080, that is clearly not the case.

I tried starting the server in debug mode after installing a product, but again no error appears in the console, yet there is still no way to connect to the server.

I tried installing the last of the Plone 5.1 releases (5.1.5) and found the same behaviour on adding any add-on to the eggs section of the cfg file.

I have an old Plone 5.0.7 installation on this server with some add-ons installed via buildout, so it doesn't look like it's a problem with software outside the Plone install. I can start that old version and connect to it via localhost:8080 or over the network.

I am rather mystified by the lack of error messages when the 5.1.5 and 5.2 servers cease to work.

In both installs (5.1.5 and 5.2) I used the Unified Installer, and let the installer download and compile the necessary Python version.

I don't really know what to do next to try to understand why these two products break two separate installs of different versions of Plone.

Here's the version info from one of the installs:

  • Plone 5.1.5 (5115)
  • CMF 2.2.12
  • Zope 2.13.27
  • Python 2.7.15 (default, Aug 22 2019, 16:01:46) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
  • PIL 5.4.1 (Pillow)

Maybe you could share your buildout.cfg?

You may be facing a so-called "startup problem" (note that the logfile is not cleared on restart, so your "Ready to handle requests" line might be a message from an earlier successful start). Startup problems are best analysed by starting Zope/Plone in the foreground ("bin/instance fg"), which causes log messages to be written to the console (in normal mode, log messages during the startup phase might get lost).

Thanks Dieter and Espen for the quick responses.

Today I tried "bin/instance fg", and both the 5.1.5 and 5.2 servers work (with some deprecation warnings from collective.easyform). As I hadn't tried this foreground startup mechanism yesterday, I went back to the startup procedures I was using then, and the servers now work with those too. I reproduced all the steps I tried yesterday with the same buildout.cfg, and the problem has gone. For me this is the worst kind of problem; I'd much prefer to at least be able to identify it as user error. With all the test coverage and automation that goes into configuring a Plone server, I was scratching my head for hours yesterday about what the problem could be.

So whilst I now have working Plone servers, I have no idea what could possibly have caused the problems yesterday. The Linux server itself was running overnight, so the only difference is that for approximately 14 hours there was no Zope/Plone server running. If it weren't for the fact that I was consistently able to restart the Plone servers using the default buildout.cfg, I'd have expected the problem to be that the Zope server was not shutting down properly and was blocking any subsequent restarts. But since I got consistently repeatable failures only after adding the products to the cfg, I assumed that wasn't the case.

Now that the problem is gone, I'll have to assume it was user error. And sadly the user is too dumb to even identify what that might have been.

As Dieter said: when you have problems like this, start your (Plone) instance with

bin/instance fg

Usually, things are much clearer then.