@MrTango what do you mean with:
As for checking links on docs.plone.org: we do run a linkcheck over the whole docs. Currently the check is 'only' reporting and not failing, simply because we still have way too many broken links.
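For reference, a reporting-only linkcheck of that kind could look roughly like this in a Sphinx setup. This is just a sketch; the option values are made-up examples, not the actual docs.plone.org configuration:

```python
# conf.py fragment -- hypothetical values, not the real docs.plone.org config.
# Run with: sphinx-build -b linkcheck . _build/linkcheck
# (or `make linkcheck`); the builder writes a broken-link report to output.txt.

# Skip hosts that are known-broken or rate-limited, so the report
# stays focused on links that can actually be fixed.
linkcheck_ignore = [
    r"https://old\.plone\.org/.*",
]

# Be a bit forgiving with slow or flaky servers.
linkcheck_retries = 2
linkcheck_timeout = 30
```

As long as the linkcheck build is not wired into CI as a failing step, the report stays informational, which matches the 'reporting, not failing' situation described above.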
Here, for example, is the ticket regarding plone.org -> https://github.com/plone/documentation/issues/618
The situation is far from perfect, we know that.
At least for plone.org, this comes down to the fact that, for various reasons, the new plone.org was released in a bit of a hurry, and people forgot, or did not have the resources, to do it more carefully. One of the many perks of a small community where a small number of people do too much at the same time, but that is a different story.
So yeah, I guess we have to deal with the fact that we will not get it perfect, given the resources [people and time] we have. Sad but true, that is how it is right now ....
Usually we try to fix a couple of broken links with every new release of the docs, but like I said we have a lot of them, so it will take time. It would already help if people stepped up and pitched in.
I do not understand the other part of the posting; anyone can run a scraper over a website and host that copy, so there is not much we can fix there.
IMHO what we have for now is not really cool, but still good enough: you end up on the main site of docs.plone.org, which is better than ending up on old.plone.org.
Mapping all the old content to the right places on docs.plone.org also has some disadvantages; for example, we started to improve the user experience by moving content to different locations.
What we maybe could do is add a new template on docs.plone.org and configure the webserver so that, let's say, if your request comes in via a plone.org redirect, you get a template telling you that docs.plone.org is the 'new place' and so on ....
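The idea above could be sketched as a tiny WSGI middleware that detects requests arriving via a plone.org referral and marks the response so a banner template could be rendered. Everything here (the header name, the function names) is hypothetical, purely for illustration:

```python
# Hypothetical sketch of the "new place" notice idea: detect requests
# referred from plone.org (via the Referer header) and add a marker
# header so a banner template could be shown. All names are made up.

NOTICE_HEADER = ("X-Docs-New-Place", "1")

def came_from_plone_org(environ):
    """True if the request was referred from plone.org."""
    referer = environ.get("HTTP_REFERER", "")
    return referer.startswith((
        "http://plone.org", "https://plone.org",
        "http://www.plone.org", "https://www.plone.org",
    ))

def new_place_middleware(app):
    """Wrap a WSGI app; add a marker header for plone.org referrals."""
    def wrapper(environ, start_response):
        def _start_response(status, headers, exc_info=None):
            if came_from_plone_org(environ):
                headers = list(headers) + [NOTICE_HEADER]
            return start_response(status, headers, exc_info)
        return app(environ, _start_response)
    return wrapper
```

A frontend or templating layer could then check that header and render the 'docs.plone.org is the new place' banner; doing the same check directly in the webserver config (e.g. an nginx/Apache rule on the Referer header) would be equivalent.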