Paragon vs ploneawesome

One of the goals of paragon was to make recommendations based on more than popularity... overall reputation, translations, documentation, upgrade support, longevity, test stability, etc. I'm not sure that is automatable. But you're right: we do need to kick it in gear and get going.

If you think there are still add-ons missing from the list, please submit them - I think the form should still work.

I only just came across this concept of "awesome-x" listings.
It looks like there are already "awesome-django", "awesome-flask", "awesome-pyramid" and of course "awesome-python" listings.

Did the paragon project bring results, or is the hunt still on?


I think having a ranking of the add-ons is very important for newbies who want to begin with Plone!
And maybe one important criterion should be the quality of the add-on docs...
I, for instance, get lost... I've asked this in another post; can someone help?
-> With Plone 5, are collective.dexteritytextindexer or collective.folderishtypes needed?
Do they add some new feature/functionality to the P5 default content types?
Also, by the way, are there any recommended add-ons that extend or add important and/or relevant features to the P5 dexterity content types?

(I believe these are the kinds of questions a newbie will ask...?)

I registered a while ago and wanted to create something like ( ) which lists all repos in github/plone and github/collective, ranked by GitHub stars or watchers or PyPI download counts.
But that's still on my TODO list; I haven't started it yet for lack of time.
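The ranking idea above can be sketched in a few lines. This is a minimal illustration, assuming the repo list would come from the GitHub API (e.g. `GET https://api.github.com/orgs/collective/repos`, which returns JSON objects with a `stargazers_count` field); the sample data and star counts here are made up.

```python
# Sketch: sort repositories by star count.
# A real tool would fetch the repo list from the GitHub API
# (paginated), which exposes "stargazers_count" per repo.

def rank_by_stars(repos):
    """Return repo names ordered by descending star count."""
    return [
        r["name"]
        for r in sorted(repos, key=lambda r: r["stargazers_count"], reverse=True)
    ]

# Made-up example data:
sample = [
    {"name": "collective.cover", "stargazers_count": 80},
    {"name": "collective.easyform", "stargazers_count": 120},
    {"name": "collective.folderishtypes", "stargazers_count": 15},
]
print(rank_by_stars(sample))
# ['collective.easyform', 'collective.cover', 'collective.folderishtypes']
```

The same function would work unchanged for watcher or download counts by swapping the sort key.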

IMO that would be a nice thing alongside the curated paragon site.

It would be better not to have yet another place to look for Plone-related add-ons; let's avoid another domain.

One issue I'm wondering about: why count the number of views on GitHub? Only developers will be looking at add-ons there. PyPI similarly counts only downloads, not popularity of use, or the size of a site that an add-on is used on.

Let's try to incorporate your thoughts into paragon, and let's kick paragon back in gear.

PyPI download numbers have been broken since the beginning.
Ranking by GitHub stars... that's something for developers. If you focus on Plone users, then other criteria are more important, in addition to the fact that you need to distinguish between add-ons (providing visible additional functionality to Plone) and "internal" packages that are not of interest to Plone users.


Right, totally agree with @zopyx, that distinction is important.

By the way, I did not manage to install collective.cover in Plone 5:
/Plone5/buildout-cache/eggs/", line 5.4-11.10
ImportError: No module named intid.interfaces

Maybe related to this?

2016-01-31 13:57:30 ERROR should NOT be installed with this version of You may experience problems running this configuration. now has dexterity support built-in.

Anyway, my time testing Plone is coming to an end... tomorrow a decision will be made for the new project, and unfortunately the winner seems to be Drupal :frowning:
Plone is very good, but right now it is too much trouble!
I hope to come back in the future and see if things have improved.

Are there other options?

We might also have a look at Django Packages and its source code on GitHub.

It seems there was already some effort to create a Plone variant in 2005. See
Unfortunately it seems to be down.

I think Django Packages works great, especially the cross table comparison.

Plone 5 is clearly not listed as supported in collective.cover:

I would have to start adding more restrictions to packages to avoid people trying to install add-ons under Plone 5 until a clear support path is defined:

Yes, my mistake, but Plone and its ecosystem are just too confusing, probably not for normal people.
I wonder how many developers actually know the internals of the Plone core?
Good luck; I will come back when there is a new major release to see if things have improved.

@ebrehault:

> "But anyway, if you inspect your page in your browser, you should see the 2 resources."

No, I still don't see them, but maybe my Plone install is broken?
Anyway, thanks, but that doesn't matter anymore.

An update :slight_smile:

The page, which you get to from , is one curated list that came out of Paragon v1.

You can recommend additions to the list by filing an issue at the new repo

Let's try this and see what happens.

To recap, we do not have reliable, meaningful statistics on which to base an automated ranking. Our attempt at thumbs-up/down manual voting on the old did not get a good response (too sparse, not a good quality measure). What we have now is a good start, with a promisingly workable process that incorporates human judgement.

The list mixes Plone 4 and Plone 5 add-ons. For example, collective.contentleadimage doesn't make sense in Plone 5, since that is now a core feature. Some of these plugins do not work in Plone 5. I think we need to clearly point out Plone compatibility.


I don't get whether this short list takes into account the long-dead or not.
For example, I can't find a very common add-on like

Do we need to repeat the voting process on ?

We can fix it using a simple table with Plone 4/Plone 5 columns and a check mark.

I told @tkimnguyen the same thing yesterday; I added a proposal:

"vertical markets" separation is a bad idea, imho.

  • who decides what are the verticals?
  • many add-ons serve multiple verticals
  • many sites/integrators won't recognize themselves as being in a specific vertical

and yes, that could theoretically be solved with tags instead of separate lists. Although tags describing what the add-on actually does would be much more helpful.

But then again, remember what happened to the old Plone product centre: nobody maintained it, and nobody had any clue whether any of the add-ons mentioned there would totally blow up your site and mix red socks with your white laundry. In effect, it was useless.

So, paragon was an attempt to have a small, human-curated list. Yes, the paragon initiative failed, for multiple reasons. Still, we need something where (human) maintainability is the prime concern, above grand (technical) designs.

So: separating things into Plone 4 and Plone 5 is a totally good idea. But only if we can get the info directly from PyPI; otherwise that will be a burden on whoever gets to maintain this site in a year's time.


The current list is a bad one, as those are not the most popular add-ons by any measure; such a list can only be valid using real download statistics.

I can buy the idea of tags; now, trying to get the supported Plone versions from PyPI is by no means the only solution: classifiers may or may not be accurate, as already pointed out here.
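For what it's worth, reading the trove classifiers is cheap even if they aren't always accurate. A minimal sketch, assuming the classifiers come from PyPI's JSON API (`https://pypi.org/pypi/<name>/json`, under `info.classifiers`); the sample list here is invented:

```python
# Sketch: derive supported Plone versions from a package's trove
# classifiers. Classifiers are only as accurate as the author
# keeps them, which is exactly the caveat raised above.

def plone_versions(classifiers):
    """Extract version numbers from 'Framework :: Plone :: X.Y' entries."""
    prefix = "Framework :: Plone :: "
    return sorted(
        c[len(prefix):]
        for c in classifiers
        if c.startswith(prefix) and c[len(prefix):][:1].isdigit()
    )

# Invented example:
sample = [
    "Framework :: Plone",
    "Framework :: Plone :: 4.3",
    "Framework :: Plone :: 5.0",
    "Programming Language :: Python",
]
print(plone_versions(sample))  # ['4.3', '5.0']
```

The digit check skips non-version entries such as the bare "Framework :: Plone" classifier.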

Question: how do we obtain those? We don't have access to every PyPI mirror in the known universe. And over what time period?

And the original premise was: a "most-downloaded" list does not necessarily translate into "best", or even into "understandable and useful guidance for new site admins and integrators", but that is another discussion.

That, of course, can mean that having two (or more) lists is better:

  • most downloaded
  • subjective, hand-curated list

so if you have tips on how to generate that most-downloaded one, that's most welcome.

@keul this initial list is the result of @sally's work with the Jazkarta team: they did evaluate a bunch of add-ons and this was their recommendation. So, yes, this is partly a result of Paragon.

The process in the github repo is not voting. Voting becomes a popularity contest, and as Paul wrote, that is not what we are aiming for.

Download statistics do not tell us whether an add-on is well maintained, whether it has tests, whether it has been translated, etc. It would be far more work to create the infrastructure to get meaningful stats than it is to get a human-curated list up.

If someone would like to implement a "phone home" (opt-in?) feature that tells us which add-ons are actively used, that would be nice, but that's a separate issue. Ploud also tried to provide error statistics on add-ons (among other things).

TL;DR: please file issues in the repo and we will try to move this forward.

We don't need access to all mirrors in the known universe: we can work with what we have; for instance:

What else can we add? We can use passing tests on Travis CI for different versions of a package as proof of compatibility; for instance:

we can use Coveralls or Landscape information as proof of quality; for instance:

Summing up: we use the information we have at hand: cold, objective, and verifiable information, maybe in the form of badges.

you can add more criteria here.
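The badge idea above could be as simple as mapping each objective signal to a label. A minimal sketch; the signal names (`ci_passing`, `coverage`, `downloads`) and the thresholds are illustrative assumptions, not an agreed policy:

```python
# Sketch: combine objective signals (CI status, coverage, downloads)
# into a per-add-on badge summary. Field names and thresholds are
# made up for illustration.

def badges(info):
    """Return a list of badge labels for one add-on."""
    out = []
    if info.get("ci_passing"):
        out.append("tests: passing")
    coverage = info.get("coverage")
    if coverage is not None:
        out.append("coverage: %d%%" % coverage)
    if info.get("downloads", 0) >= 1000:
        out.append("popular")
    return out

example = {"ci_passing": True, "coverage": 87, "downloads": 2500}
print(badges(example))  # ['tests: passing', 'coverage: 87%', 'popular']
```

Adding a new criterion would just mean adding another signal-to-label rule, which keeps the list cheap to maintain.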