This last week a colleague pointed me to the Skill collection and the django-expert skill, so I decided to create a proof-of-concept of a "Plone Expert Developer" skill and have been experimenting with it over the past few days.
The skill has managed to tell the agent how to create the Plone project, add content types, vocabularies, a Diazo theme, create views, etc. quite well. It needs improvements of course (I drafted it initially using Gemini and then polished it with Claude Code), especially in the Volto part, and some other sections may need rewriting.
If anyone is interested, I have published the skill in our GitHub repo; if you find it useful, I will be happy to transfer it to the Plone organization.
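For anyone unfamiliar with the format: a Claude Code skill is a folder containing a `SKILL.md` file with YAML frontmatter plus markdown instructions. A minimal sketch of what such a Plone skill could look like (the name, description, and section contents here are invented placeholders, not the published skill):

```markdown
---
name: plone-expert
description: Guidance for developing Plone projects - content types, vocabularies, Diazo themes, and browser views.
---

# Plone Expert Developer

## Creating a content type
- Define the schema as a Dexterity XML model or a zope.schema interface.
- Register the FTI under `profiles/default/types/` in your add-on package.
```

Claude Code loads the frontmatter to decide when the skill is relevant, and only pulls in the markdown body when it is needed.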
You are absolutely right — this is a very good idea.
I saw a lighter, simpler version of this concept a few months ago at a Google for Startups / Scaler event, where tools like Gemini Gems let you predefine instructions and behavior, and the user's questions are then answered based on that setup.
What you are proposing goes a step further and is clearly more advanced. Instead of focusing mainly on answers, this approach creates a Plone AI expert with real development capabilities, such as project setup, code generation, and following an actual developer workflow.
That difference makes the scope and value very clear. I’d be happy to contribute or help improve this in any way possible.
I just started to use skills in Claude Code, so I cannot assess their impact yet. Until now, I have used multiple agents and comprehensive global and local CLAUDE.md files that also describe the tasks/work very well.
I have ended up with about 10 skills:

- Plone REST API
- Classic browser views
- Theming with Plone
- Service integration
- etc.
Thanks for publishing! I’ll try it for sure and see how it performs compared to “just” agents and my defined skills.
I still have some Plone-specific skills active, but especially for an existing code base (brownfield), the following instructions for Claude were very helpful and had the most impact.
Especially for the workflow, I had to repeat certain points multiple times; Claude would not obey them all the time.
## GIT
- Make atomic commits (one logical change per commit)
- Propose commits first; they need approval
- Always add Co-Authored-By: Claude $version of model
- Use Markdown format for documentation unless told otherwise
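As a sketch of what the atomic-commit rule plus the co-author trailer produce in practice (the repo, file, and commit message below are invented for the example; in real use Claude Code writes the trailer itself, with the model version filled in):

```shell
# Create a throwaway repo so the example is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# One logical change per commit (atomic), with the co-author trailer
# attached as a second -m paragraph so git treats it as a trailer.
echo "def feature(): pass" > feature.py
git add feature.py
git commit -q -m "Add feature module" \
  -m "Co-Authored-By: Claude <noreply@anthropic.com>"

# Show the full commit message, including the trailer.
git log -1 --format=%B
```

Keeping the trailer in its own `-m` paragraph matters: git only recognizes `Co-Authored-By:` lines as trailers when they sit in the final block of the message.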
## Workflow
- Always use custom agents
- **Default agent behavior**: When no specific agent is mentioned, always use the **codebase-explorer** agent first to understand the relevant code, then use the **feature-planner** agent to propose a plan — never implement directly unless the user explicitly says "execute directly" or similar
- The feature-planner agent must never implement directly; always present the plan first for approval
- Use `/agents` with code-refactor-analyzer after completing feature implementations
- Always verify tests pass after refactoring
- After a working initial implementation, always run the code-refactor agent to optimize the code
## Testing Patterns
- Prefer end-to-end functional tests over unit tests
- Unit tests only where there is no DB interaction
- Only mock if really necessary; explain why and propose alternatives. Mocks need approval
- Never assert on a string in the full HTML page; always use a specific selector
- Prevent `True is not False` error messages in tests by adding an appropriate message to assertions
- Test against real services like Keycloak, Redis, Elasticsearch, etc. (if available)
- Split large test files by functionality
- No docstrings/comments in test methods if the method name is descriptive enough
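As a minimal illustration of the selector-plus-message patterns above (the helper and the markup are invented for the example; a real Plone test would use a browser or REST API client instead):

```python
import unittest


def extract_title(html: str) -> str:
    # Naive "selector": pull out just the <h1> text instead of
    # asserting against the whole rendered page.
    start = html.index("<h1>") + len("<h1>")
    end = html.index("</h1>", start)
    return html[start:end]


class TitleViewTest(unittest.TestCase):
    def test_title_rendered(self):
        page = "<html><body><h1>My Document</h1><p>body</p></body></html>"
        # Assert on the specific element, and attach a message so a
        # failure reads better than a bare "True is not False".
        self.assertEqual(
            extract_title(page),
            "My Document",
            msg="h1 title not rendered as expected",
        )
```

The `msg=` argument is appended to `unittest`'s own diff output, so the failure names the broken behavior rather than just the mismatched values.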
I’m running 5 agents:

- Code explorer
- Feature planner
- Refactor analyzer
- Tester
- Documentation architect
Running `/insights` from time to time also helps to expand your config.