Following Stellenbosch, we tried to publish a page about AI in Plone, and that surfaced the need for a more wide-ranging discussion about the topic. I’d like to see if we can have a conversation about how we, as a community, relate to AI. While we can of course discuss that in the forum here, I’d also like to try and get a working group together to work on this in video calls and in person at sprints.
Ideally, this will result in a consensus vision and roadmap that we can present at PloneConf in September. Please let me know if you’d like to be involved in that AI policy team.
I’ve given the topic some thought and hope the following provides a good starting point for this conversation.
Resisting AI whiplash in Plone
Working with AI in open source requires that we grapple with its downsides. We need to acknowledge the ethical problems and minimize negative impacts; ensure quality and security despite fundamental limitations of AI technology; and resist the centralizing power dynamic inherent in the current AI trajectory.
AC/DC: Acceleration/Deceleration

Whiplash is the colloquial term for Cervical Acceleration-Deceleration Syndrome. The typical cause is that your car is rear-ended by another car. You get a sudden acceleration, then immediately afterwards an intense deceleration when your own car crashes to a standstill. As a result, you may suffer long-term damage to your spine and cognitive functioning.
Whiplash provides a useful metaphor for the current moment in tech: the AI boom presents an acceleration-deceleration challenge. The movement behind AI is explicitly accelerationist (Gebru and Torres 2024), with hallmarks of a cult worshiping the machine god (and of course the money god).
Brace for impact
We’re currently at the end of the acceleration stage. A forceful deceleration is sure to follow, as I have argued in my previous post. What goes up must come down. The money pile is nearly burnt through and the end of the runway is in sight.
The economic destruction that AI deceleration will cause will compound the societal and ecological damage the acceleration phase is already causing.
Damage control
In open source, our challenge is to navigate both the acceleration and deceleration phases while minimizing permanent damage. How do we pick up the good bits, the ways machine learning can improve our stacks? Without hitching our wagons to the fascist agenda? Without wasting our efforts and our credibility on tainted technologies?
Product focus
AI in open source has two sides: using AI in development, and shipping AI in software products. This article is firmly focused on the latter, taking a product owner perspective. While I touch on the developer experience in some places, that’s just to illustrate my main argument, which is about empowering open source product users to make choices that align with their ethics, threat models and quality requirements.
The reason for this is simple: we need to align on product vision and roadmap, because that’s what we ship. Moreover, what we ship impacts our users and the wider world.
In contrast, the developer side is more of an internal debate. There is no need at all to align on developer workflows. The risk there is that it quickly becomes a your-word-against-my-anecdote equivalent of the Emacs versus Vi editor wars. I also think that aligning on a product roadmap first will reduce noise in the developer-side discussion.
Finally, there are legal and quality-control considerations on the contribution side of creating software that need attention, but not as first-class citizens of the external-facing communications of our projects. This article is way too long already, without also addressing those concerns.
Working with AI in open source requires that we grapple with its downsides
There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.

And as those of us who are not currently tripping well understand, our current system is nothing like that. Rather, it is built to maximize the extraction of wealth and profit – from both humans and the natural world – a reality that has brought us to what we might think of as capitalism’s techno-necro stage. In that reality of hyper-concentrated power and wealth, AI – far from living up to all those utopian hallucinations – is much more likely to become a fearsome tool of further dispossession and despoliation.
AI is a toxic menace, a cancerous outgrowth of what Klein in the quote above calls capitalism’s techno-necro stage.
This is not a normal tech-adoption debate where we can have a civilized discussion and agree to disagree. Making the wrong choices here is not merely suboptimal. It corrodes ethics and destroys our future.
The only valid starting point, in my opinion, is that we acknowledge the horrible shadow of AI technologies, and consciously find ways to mitigate their evil tendencies.
We need to:
- Acknowledge the ethical problems and minimize negative impacts
- Ensure quality and security despite fundamental limitations of AI technology
- Resist the centralizing power dynamic inherent in the current AI trajectory
Read the full blog post on darkedge.world.