Also note that Google will get URLs from Chrome users who navigate to the pages, and will attempt to start crawling from there.
Besides hinting via robots.txt that those documents should not be indexed, you can add X-Robots-Tag: noindex, noarchive
headers to the documents as they are served. See the Robots Meta Tags Specifications on Google Search Central for the full documentation.
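For example, if the PDFs are served through Apache with mod_headers enabled, something along these lines (a sketch; adapt the file match to your setup) attaches the header to every PDF response:

# Send the noindex/noarchive directives with every PDF response (requires mod_headers)
<FilesMatch "\.pdf$">
    Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>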
Once the pages include a noindex directive, you can either remove them manually via Google Search Console or wait until the pages are next crawled.
Additionally, in your front-end server you could prevent these documents from being fetched other than by following a link, by rejecting requests where the HTTP_REFERER
header passed by the browser is empty. E.g. in Apache, something like this (untested):
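# Return 403 Forbidden for .pdf requests that arrive without a Referer header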
RewriteCond %{HTTP_REFERER} ^$
RewriteRule \.pdf$ - [F]
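As far as I know, crawler requests carry no Referer header, so this keeps Googlebot out, but it will also return 403 to anyone opening a PDF from a bookmark or by typing the URL directly.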
Personally, apart from excluding such items from internal search, I would not try to solve this kind of problem within Plone.