I suspect the topic was discussed before, but I'm too lazy to dig out answers.
It took me a while to build a Plone web site as a part-time activity. I started with Plone 5.1 and ended with Plone 5.2. Now that the web site is online I must document the useful parts of my adventure (before I completely forget), but I must also find a reproducible method to reinstall the web site exactly as it is deployed, because it may not be maintained regularly.
The use of a VM for deployment helps with backing up a copy of the whole site (including the OS), but what I'm looking for is a way to redeploy Python packages in situations like upgrading the OS or changing the hosting method. My hacks and small fixes could break if I don't use the same versions of packages.
Buildout is nice, and I guess I could use its version-pinning features, but I'm afraid pinning alone is not enough to ensure that some important packages will not change.
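For what it's worth, buildout can at least be told to report every version it picks on its own and record it, so that nothing stays unpinned between runs. A minimal sketch (the file name and the pin are just examples, not my actual configuration):

```ini
[buildout]
extends = versions.cfg
# Report any package whose version buildout picked itself
# instead of finding it pinned...
show-picked-versions = true
# ...and append those picks to versions.cfg, so the next run
# is fully pinned and reproducible.
update-versions-file = versions.cfg

[versions]
# example pin -- use whatever is actually deployed
Plone = 5.2.9
```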
Maurits mentioned pip-tools in his description of a dream:
Pip-tools does look useful, but it does not prevent repositories from going offline or packages from becoming obsolete. So my dream is an offline backup of dependencies for Plone 5.2.
There's no short-term plan to migrate the site to Plone 6...
I don't want to start a long thread, but any information/opinion is welcome.
I too have been frustrated by the lack of a good user story for reproducible custom installations of Plone. I'd like to have three deployments: one local, one staging site to share with the client so they can preview changes before they go into production, and one production site. I think that story has improved, but a lot is left to the developer to fill in the gaps.
For Plone 6, the Documentation and Training Teams and contributors are actively working on improving the documentation for deployment. The Docker story for Plone 6 is pretty good, and there are several good examples of a basic stack. I have not yet seen Ansible for Plone 6, although it might exist somewhere.
I used to have no interest in Docker because it was changing and breaking faster than I cared to follow, so I stuck with what I knew. These days its documentation is useful and easy to follow, and I've been working with teams who have helped me learn enough to be dangerous. I would suggest using Docker Compose (or equivalent, such as Podman) for deploying a stack.
This would be my preferred training course to attend at the next PloneConf, if only there were a pool of talented and experienced Plone administrators (there is) who could be persuaded to do such a training (pretty please).
I should check the Plone 5.2 Ansible book... My earlier experience with Ansible was not conclusive (it looked good at first); I started with a beta version (I have an old t-shirt), so it has certainly improved since I gave up, but I still prefer simple scripts. I figured that ssh can be used to provision remote servers (without the levels of abstraction imposed by Ansible).
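To give an idea of the kind of simple script I have in mind: push the buildout configuration over scp and run it through ssh. This is only a hypothetical sketch (the host name and the `/opt/plone` paths are placeholders, not my actual setup):

```shell
# Hypothetical push-style provisioning with plain ssh/scp -- no extra
# abstraction layer, just copy the buildout config and run it remotely.
provision() {
    host="$1"   # e.g. plone@example.org (placeholder)
    scp buildout.cfg versions.cfg "$host:/opt/plone/"
    ssh "$host" 'cd /opt/plone && ./bin/buildout -N'
}

# usage: provision plone@myhost
```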
I have similar feelings about Docker; each time I tried to use it, it felt wrong (no details). I prefer to use VMs or LXC or no virtualization (I'm a dinosaur). My humble excuse is that I'm not in charge of the hosting infrastructure, so it's easy to avoid proposing Docker (or Podman).
At this point I'm reluctant to refactor my provisioning method; all I want is a method to back up remote resources that could go away (Python packages, source code, etc.). I was thinking about a sort of download proxy that could back up (and even "replay") everything downloaded while provisioning. I could try to hack something together using mitmproxy... Maybe a "wayback machine" for Python packages and git repos is useless (or impossible), so I may just back up the Plone installation and hope for the best.
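For the git half of that backup problem, `git bundle` already acts as a small wayback machine: a mirror clone packed into one self-contained file that can be cloned from later, fully offline. A sketch (the function name and the URL you would feed it are my own placeholders):

```shell
# Pack a complete mirror of a git repository into one bundle file.
backup_repo() {
    url="$1"    # any git URL or local path
    name="$2"   # basename for the bundle file
    git clone --quiet --mirror "$url" "$name-mirror"
    # Write all refs from the mirror into a single portable file.
    git -C "$name-mirror" bundle create "../$name.bundle" --all
    rm -rf "$name-mirror"
}

# Restoring later needs no network at all:
#   git clone my.addon.bundle restored-addon
```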