How much RAM should a system have to build the front end?
I've got an AWS Lightsail instance with 1 GB of RAM, and it's been churning away for a long time on make build-frontend (from Install Plone from its packages – Install — Plone Documentation v6.0) with no apparent progress.
Or maybe there is progress being made, but really, really slowly? It's been sitting there for 15 minutes.
That same page (Install Plone from its packages – Install — Plone Documentation v6.0) says 256 MB, but that seems really small. I don't think I was able to build Plone 5.2 on a Raspberry Pi with that much RAM.
➤ YN0013: │ @webassemblyjs/helper-api-error@npm:1.11.1 can't be found in the cache and will be fetched from the remote registry
➤ YN0013: │ @webassemblyjs/helper-api-error@npm:1.9.0 can't be found in the cache and will be fetched from the remote registry
➤ YN0013: │ @webassemblyjs/helper-buffer@npm:1.11.1 can't be found in the cache and will be fetched from the remote registry
➤ YN0013: │ @webassemblyjs/helper-buffer@npm:1.9.0 can't be found in the cache and will be fetched from the remote registry
➤ YN0013: │ @webassemblyjs/helper-code-frame@npm:1.9.0 can't be found in the cache and will be fetched from the remote registry
➤ YN0000: ⠴ =================---------------------------------------------------------------
14:52:03 up 3 days, 13:55, 3 users, load average: 27.69, 35.49, 27.95
One option is to build the image on a more powerful machine (like maybe a GitHub Actions runner using GitHub - docker/build-push-action: GitHub Action to build and push Docker images with Buildx), then push to a container registry, and let the target machine pull the pre-built image from there.
That makes sense. I was hoping to build and run everything without Docker... is that too old school?
Oh sorry -- you can do that, I guess I just jumped to thinking about Docker when you said "build the front end," which may just be revealing the bias in how I am doing things these days.
I'm not sure how much RAM is needed but I would think 1GB would be enough to not have extreme slowness. Maybe it's a network issue instead?
top might help you verify whether running out of RAM is the problem.
I just saw the uptime line. That is a high load average. That might mean the CPU is underpowered, or it might mean the CPU is waiting on some other resource (like disk I/O).
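If you can get a second shell, a quick way to tell those cases apart is something like this (a sketch, assuming a stock Ubuntu image with the usual procps tools installed):

```shell
# Quick memory-pressure check. In free, the "available" column is the
# number that matters; in vmstat, nonzero si/so (swap in/out) or a large
# wa (I/O wait) column points at memory or disk rather than raw CPU.
free -m
vmstat 1 3
```

If available memory is near zero while node is running, you have your answer.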
I've been lazy and have been using just the Lightsail web console so I can't run top at the same time, but whenever I stop the build and check uptime the load is huge. I didn't think to check vmstat.
Turns out it's not possible to grow a Lightsail instance's RAM so I will make a new one with 2GB of RAM out of a snapshot and see if it gets past this. If so, I'll have my answer...
Oh yeah: I made a new 2GB instance and it's getting past the point where it was just sitting...
FYI, it's not possible to increase the RAM of a Lightsail instance... you have to make a new one out of a snapshot: amazon web services - Upgrade / Increase AWS Lightsail virtual machine's memory for better performance - Stack Overflow
It would be good to update the docs page with these more detailed RAM requirements... but I'm afraid of @stevepiercy
Yeah, definitely using more than 1GB to build the front end.
top - 15:35:14 up 13 min, 2 users, load average: 2.61, 2.49, 1.56
Tasks: 123 total, 3 running, 120 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20.2 us, 1.0 sy, 0.0 ni, 13.1 id, 0.0 wa, 0.0 hi, 0.2 si, 65.4 st
MiB Mem : 1921.4 total, 818.6 free, 532.5 used, 570.3 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 1188.8 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1842 ubuntu 20 0 1008736 189296 33408 R 177.1 9.6 0:30.88 node
A bit more is used while yarn build runs:
top - 15:37:00 up 15 min, 2 users, load average: 1.79, 2.29, 1.59
Tasks: 123 total, 2 running, 121 sleeping, 0 stopped, 0 zombie
%Cpu(s): 19.1 us, 0.9 sy, 0.0 ni, 25.9 id, 1.0 wa, 0.0 hi, 0.1 si, 53.0 st
MiB Mem : 1921.4 total, 594.1 free, 723.0 used, 604.2 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 998.3 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1842 ubuntu 20 0 1221292 409416 33408 R 139.0 20.8 3:00.36 node
1824 ubuntu 20 0 881212 100024 31616 S 1.7 5.1 0:07.35 node
...and the front end build is complete
Get off my lawn, punk!
This is the first verification of system requirements to build, not run, Plone. It should go in the docs.
Both install methods refer to the hardware requirements to run, not build, Plone. I'd accept a PR with text that clarifies the difference. Of course, if one person builds a container locally and pushes it to a repo for others to consume, then the build step is avoided and consumers don't need the extra RAM.
The impression I have is that you end up needing more RAM solely to support your builds.
One trick would be to create a swap file on your VM for this purpose.
Maybe try this swap trick -> How to setup Docassemble with less than 4GBs of RAM
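For the record, on Ubuntu the swap setup is just a few commands (the 2G size here is an arbitrary choice for illustration, not a Plone requirement):

```shell
# Create and enable a 2 GB swap file (size is a guess; adjust to taste).
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it survive reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

Swap on a small VM will be slow, but slow usually beats the OOM killer ending your build.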
That said, building your Docker images ahead of time is another approach: your server pulls a ready-made image and will likely need less RAM. Also, if you have a semi-useful test of your builds in CI/CD, then failed builds will never propagate to your production machine.
(Update: I just read @stevepiercy's response, and he is already making the distinction between build-time RAM needs and run-time RAM needs. The only thing I'm adding here is the use of the swap "trick".)