“AI-first” open source makes me choke

I nearly lost my cool at the Stellenbosch sprint. It was bad enough that one of the participants kept trying to offload our brainstorms to an AI chatbot. The whole point of the strategy session was the process of jointly converging on shared core values, and internalizing that convergence. If you short-circuit that process and outsource the outcome, the outcome has no value. It would be just words, instead of a deeply felt conviction that we’re on a mission together.

What pushed me over the edge was another participant, who repeatedly insisted Plone should be an “AI-first CMS”. Surely we’ve all seen the massive community backlash when Firefox declared itself to be an “AI-first browser”? My blood was boiling. After the third such statement, I shouted: “AI is toxic!” He was clearly surprised by that. As I was surprised by his surprise. Because everybody knows that AI is toxic, right? It’s friggin obvious, innit?

Apparently not.

Let’s talk

Thankfully, the Plone people are a bunch of nice guys. Over lunch, my AI-first colleague and I got together and quickly found that despite our differences, we have a lot of common ground when it comes to AI in open source. We started working on a joint position statement, and figuring out a process to widen our conversation to the whole community.

I wrote a blog post to document where I’m coming from: the stuff I read that makes it so blindingly obvious to me that AI is toxic. You can read the full post on my blog:

The next post in this series tries to outline a way forward: how do we relate to the current AI hype, without damaging our community?

I hope to start a constructive and productive community dialogue about this topic. This is not about blocking AI or whatever. It’s about engaging with it in a way that supports our shared values. And it’s definitely about communicating about AI in a way that makes Plone stronger.

10 Likes

This is how it usually works. A new thing gets hyped, everyone hears about it and tries it, it attracts a lot of resources, and 1% can use it to make a real change. Isn't that how cars and mobile phones were introduced and then became the standard?

When I started looking at job advertisements, they were full of "driver's license needed". Now they simply expect it. Same for the mobile/smartphone. Wars are fought by collecting info about your enemy and killing hundreds just to score some points.

People are usually scared and adapt, until they find themselves at the top of the list of those hit by the change.

We used to bash other societies because they didn't have fancy cars and commodities. The hype is an outcome of commodity fetishism. We already know this.

Your blog posts are missing "what is to be done". That is the hard part.

2 Likes

I’m in.

Here is where I’m coming from:

  • LLMs are here to stay,
  • future iterations will be only marginally better,
  • they are useful to some degree: mostly for collecting / summarizing data and common approaches,
  • their output always requires vetting and refinement by a human with domain knowledge,
  • they have limited application in mission-critical situations (finance, scheduling, accounting),
  • they cannot generate “new” insights,
  • they pose a huge data collection and exfiltration risk by the usual suspects,
  • prolonged usage threatens the loss of specialist acuity.

What is a “Plone” to us and can we agree on “our shared values” as a community?

2 Likes

I have read your blog posts completely, and they literally made me think, especially about the psychological trauma part and the ecological and social damage. Climate change was never a human problem. It has always been a capitalist problem. I have subscribed to your newsletter and am waiting for the upcoming post. Until then, I would love to explore and research this topic further.

1 Like

Thanks Yuri. My second blog post will go deeper into the “what is to be done?” question. But most importantly I think we need the community hive mind to find the answer to that!

I am more or less with you.

Let's talk about the good things that we achieved (those that I know of) lately:

  • @jensens came up with a new native ZODB storage with JSON as storage format in Postgres
  • @zopyx developed a complete new solution for doing forms and surveys in Plone which is ages ahead of what we have in Plone so far. See https://demo.privacyforms.studio/

AI is not about intelligence; it's a statistical parrot.

I studied computer science and AI in the 90s, and the AI of the 80s and 90s is completely different from the AI of today. At that time, parts of AI research were about modelling knowledge: building expert systems and researching knowledge representation.

20-30 years later, this research became more or less a dead end. The "old" AI approach has not helped much in solving real-world problems. The current statistical parrots, in the form of LLMs, can be very helpful and beneficial.

2 Likes

Thanks Andreas. I was aware of Jens’ AI-fueled breakthrough, but not of yours. I sense some fascinating talks coming up for the next PloneConf ;-).

As to knowledge network engineering: it’s what Marcus calls “neuro-symbolic AI”: combining the pattern-matching strengths of LLMs with the reasoning capabilities of Good Old-Fashioned AI.

Thanks Gangadhar. Please share any interesting findings. I hope some of my references will be useful to you.

Hey Norbert, I find myself in agreement with all your bullets. My upcoming follow-up post elaborates some of them, but also misses some. I look forward to hearing your insights.

Missing from the discussion are the legal implications of using AI. The following applies only to the US; laws in other jurisdictions may vary. The Plone Foundation Board of Directors ought to take notice, since it holds the copyright to the Plone code base.

First, there's copyright.

From the "Report on Copyright and Artificial Intelligence, Part 2: Copyrightability"

Based on an analysis of copyright law and policy, informed by the many thoughtful comments in response to our NOI, the Office makes the following conclusions and recommendations:

  • Questions of copyrightability and AI can be resolved pursuant to existing law, without the need for legislative change.
  • The use of AI tools to assist rather than stand in for human creativity does not affect the availability of copyright protection for the output.
  • Copyright protects the original expression in a work created by a human author, even if the work also includes AI-generated material.
  • Copyright does not extend to purely AI-generated material, or material where there is insufficient human control over the expressive elements.
  • Whether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis.
  • Based on the functioning of current generally available technology, prompts do not alone provide sufficient control.
  • Human authors are entitled to copyright in their works of authorship that are perceptible in AI-generated outputs, as well as the creative selection, coordination, or arrangement of material in the outputs, or creative modifications of the outputs.
  • The case has not been made for additional copyright or sui generis protection for AI-generated content.

Another legal issue is ownership of intellectual property. Who is the author when AI is used? Yet to be tested in courts is whether the acceptance of code that lies somewhere between purely AI-generated and AI-assisted would invalidate the code owner's copyright, thus turning their entire code base over to the public domain. In other words, if Plone accepts AI-assisted code, then the Plone Foundation may expose itself to the risk of being unable to protect its entire code base. Until there's legal clarity, I'd err on the side of caution to avoid the risk.

2 Likes

AI Is Here

It's definitely disruptive. It's definitely a hype. Nobody knows where the journey will go.

Open Source and AI

I ask myself how the two can work together. I think they can. But we need to move forward. Or die out.

People who profile themselves by how many GitHub PRs got merged and how many issue comments they wrote are flooding the communities. I think these metrics won't last long; they're a short-lived phenomenon.

But we can improve our projects with AI.

The Iron Man Suit

As a developer I can produce the software for my FOSS project I dreamed about for years. I got an Iron Man suit.

I can fix bugs I would never have time to fix. I understand the problem, I can guide an agent to provide the fix, I understand the solution.

Tackling Huge Tasks

Or I can tackle huge tasks. I know what I want, I know the internals of our software, the needs, the pitfalls, the desires, the impact, the customers.

Now I write this down in plain English (or German!) and let the agent research, follow different paths, explore. I read the report, ask questions, read answers. I decide on a direction, let it dig deeper. I find a path to go and then I let it write a rough plan.

I read the plan, let the agent refine it. Divide it into phases, plan a phase in depth with the agent, review the plan, refine it with the agent.

And then I let it write the code with almost no interaction. TDD, self-verifying, >90% coverage. Unit tests, integration tests; it starts up the code, pokes around with the REST API and curl, finds bugs and fixes them, writing tests along the way.

Then I have a first version of the executed plan. Code.

Next cycles: Let it write benchmarks, run profilers, find bottlenecks. Discuss solutions, decide, rinse and repeat. Let it review the code for best practices and refactor, rinse and repeat. Give it the role of a security engineer and pentest the software. Review. Let it fix it, rinse and repeat.

I did all this without a special setup. Just Claude Opus 4.6 and a naked Plone checkout. No skills files, no sophisticated CLAUDE.md, no MCP servers.

A Creative Process

So, this is a creative process — I just don't write a single line of code. And I get in a week software better than what I would be able to produce in a year as a senior dev with 40 years of programming experience and more than 20 years of Python — not to speak of code in languages where I only took an introductory course, like Rust.

IANAL, but it's a work in the sense of Urheberrecht (which is different from Anglo-American copyright — I don't know enough about the latter to judge). Lawyers need to decide at some point...

Plone and AI

Do we need to make Plone an "AI-first CMS"? What does that even mean? Actually, with the REST API an AI like Claude can remote-control Plone very well. I did it without any special setup — Claude just explored by itself what's possible. MCP servers are overrated in my opinion. CLI tools and APIs are way more valuable.
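For the curious, here is a minimal sketch of what "remote-controlling Plone over the REST API" boils down to. The site URL and content path below are made up; the one real ingredient is plone.restapi's convention that a request carrying an `Accept: application/json` header gets a JSON representation of the content object back:

```python
# Sketch only: builds (but does not send) a GET request against a
# hypothetical local Plone site. plone.restapi serves JSON whenever
# the request asks for it via the Accept header.
import urllib.request


def build_plone_request(base_url: str, path: str) -> urllib.request.Request:
    """Prepare a GET request for a Plone content object as JSON."""
    url = f"{base_url.rstrip('/')}/{path.lstrip('/')}"
    return urllib.request.Request(
        url,
        headers={"Accept": "application/json"},
        method="GET",
    )


req = build_plone_request("http://localhost:8080/Plone", "/front-page")
print(req.full_url)              # http://localhost:8080/Plone/front-page
print(req.get_header("Accept"))  # application/json
```

An agent does essentially this in a loop: fetch JSON, read what it finds there, and decide where to GET or POST next. No MCP server required.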

Plone is the data layer, AI is the processing layer. And nobody wants yet another chatbot in Plone — looking at the market, nobody wants yet another chatbot, period.

What does this mean for Plone's roadmap? Instead of chasing "AI features," let's invest in what already works: a clean REST API, good documentation for humans and agents, structured content types, a frontend an agent can understand and expand with custom features. These are exactly what AI agents consume well.

The boring infrastructure is the AI strategy.

8 Likes

AMEN!

I had a very similar experience while building Privacy Forms Studio (for Plone):

  • a full-fledged, polished solution that brings a modern form/survey experience to Plone
  • at its core, a complete replacement for collective.easyform—and it goes far beyond easyform’s feature set
  • solid options for storing, exporting, and distributing collected form/survey data
  • AI support for generating forms from prompts
  • AI support for generating forms from mockups (PDF)
  • AI support for converting fillable PDF forms into online forms
  • a strong security model to reduce misuse and abuse
  • a complete project website (EN and DE)
  • an almost full replacement of the Plone UI for the end-user UX (no toolbar, etc.)
  • a demo site with lots of examples, fully translated into 10 languages
  • full CI/CD
  • automatic deployment of updates

Disclaimer: the underlying forms framework is SurveyJS.

Built with AI in roughly four to five weeks—with almost no manual intervention.

Success factors

  • I had a clear vision for the project because I understood the full scope and the problems I wanted to solve.
  • As a long-time Plone developer, I had the expertise to iterate through a larger project in the right steps.
  • Knowing your tools matters: in 2025 I experimented extensively with a wide range of AI solutions.

Downsides

  • Some loss of control over code and architecture due to the speed—there are definitely areas that need refactoring.
  • I partially lost the overview of the codebase.
  • Debugging AI-introduced issues (especially in Plone-specific context) can be time-consuming—though it didn’t happen too often.

5 Likes

I studied computer science and AI in the 90s, and the AI of the 80s and 90s is completely different from the AI of today. At that time, parts of AI research were about modelling knowledge: building expert systems and researching knowledge representation.

I think that this research is used in the part (Transformers) that helped make LLMs really work now. Actually, AIs don't use the raw input but a preprocessed one that removes noise and standardises the input, so the LLMs can work on a better input. This part is done by algorithms developed over the last 30 years.

See the famous 2017 Google paper: [1706.03762] Attention Is All You Need

I like this part a lot. Thanks for this, and for showing the community how to leverage AI and produce revolutionary packages (I am chomping at the bit to try both out).

My efforts are becoming AI assisted but not AI executed yet. Things like Digital Asset Management with AI (tagging and categorizing images), BI dashboards (slicing relational data and surfacing exceptions), CRM-ish things (ad-hoc projects, tasks). Applications that a small company like ours would get a subscription to and only use marginally can now be developed to match our usage patterns and tweaked to fit our working methods.

The Plone API is central here: anything represented in JSON can be shared between systems. So far, I am ignoring the lighter alternatives to Plone (e.g. FastAPI) because I am focused on delivering and Plone happens to be the hammer we use. This is why I asked: what is a “Plone” to us?

You say it is the data layer; I agree and raise you: Plone is the data orchestrator, the manager of all kinds of different activities that have to do with security, access and publishing (for human and machine use). It handles data collation and transformation, and presents the results to its consumer.

This is what systems at EEA, and things like BIKA, ONKOPEDIA, Quaive, and the work of RedTurtle, iMio etc. focus on. They all bring a huge amount of disparate data from varied sources together, “make it all make sense” (to use a modernism) and present it in a digestible format. To me, the cataloguing and organizational part aligns quite well with what AI is good at.

AI is the cart, not the horse.

2 Likes

Yes, there are potential legal issues with AI-generated code. When discussing AI, it may be helpful to separate the developer / code contribution perspective from the user / product owner perspective. I’m more focused on the latter in my writing.

1 Like

The boring infrastructure is the AI strategy.

That is super well said, and also very close to one of the discussions we had in Stellenbosch about product strategy and branding. I said there that “boring is good”. Because it means mature, solid, battle-tested, secure, stable. That’s a major selling point. And I like how that approach is validated by providing a solid foundation for AI work, even the radical stuff you’ve been doing.

1 Like

Hello there,

I’ve appreciated the discussion so far. That said, there’s one aspect of the original post that has been on my mind since it was published.

As one of the participants in the Stellenbosch sprint, this paragraph made me uncomfortable. I don’t think referring to specific behaviors of unnamed individuals in a public post is the best way to make a broader point — especially if the concern wasn’t raised directly with the person at the time.

On a more personal note, being neurodiverse, I found myself spending far too much time wondering whether I was the person being referenced. That kind of ambiguity can be quite unsettling.

Regarding the “AI-first CMS” discussion: I clearly remember that conversation, and I also remember that both @gyst and I were openly opposed to framing Plone in that way. There was a healthy disagreement in the room, and from my perspective, it remained within the bounds of a normal strategic debate.

As I mentioned during the sprint, I believe our views align far more than they diverge. My only suggestion going forward would be to avoid indirectly singling out individuals when reflecting publicly on intense discussions. It helps keep the focus on ideas rather than people.

Best,
ea

2 Likes

Yes, you are right. I was called out on that by the first participant. I apologized and am making it up to him. I had not considered the unsettling ambiguity this introduces for other people, and now I’m apologizing to you and anybody else who felt uncomfortable about that.

As to the second passage, about “AI-first”: that’s deliberate rhetoric that I checked beforehand with the person involved, and that he actively leaned into on LinkedIn, in a coordinated communication strategy that worked out pretty nicely. I already took care to describe the constructive dynamic and agreement we found in the article itself. I hope the takeaway for everyone is that we have a healthy diversity of viewpoints and nice interpersonal dynamics.

But yes, I need to be more careful. Thanks for bringing this up in a clear and constructive way.

3 Likes