Future of JavaScript in Plone

Yes, I've seen how grunt/gulp work, but I really dislike how manual everything feels. I have a project called repodono.storage that implements an extension to the mockup structure pattern, and I was working on the mockup code at the same time because of the various fixes I had to make to it. Sure, I could just copy/paste the scripts, but the setup is so specific to that repo/directory layout that it isn't portable. I like having tools that expose common modifications as command line arguments (like specifying the browser to run the tests in, which packages to extract the tests from, and which package to test), that aren't bound to the development source tree, and that can easily run their tests against production.
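
To make that concrete, here is a purely hypothetical sketch of the kind of interface I mean - the common variations are plain command line arguments, not edits to a build script tied to one source tree:

```python
# Hypothetical interface sketch only - not any existing tool.
import argparse

parser = argparse.ArgumentParser(
    description='run the JavaScript tests for a given package')
parser.add_argument('package', help='the package to test')
parser.add_argument('--browser', default='PhantomJS',
                    help='the browser to run the tests in')
parser.add_argument('--test-with', action='append', default=[],
                    help='additional packages to extract tests from')
args = parser.parse_args()
print('would run tests for %s in %s' % (args.package, args.browser))
```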

As for dealing with npm: if I needed more dependencies than what mockup provides, I would have to copy/paste its package.json (and bower.json) into my project root and extend them (rather than just lazily symlinking them). Yes, if mockup were on npm this problem might go away, but that still doesn't make it easy to work with in development mode in conjunction with my extension (namely, selective live reloading). I would also like a way to communicate this to Python packages (and avoid the nested dependency issue that npm introduces), as mentioned.
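
To illustrate the manual merging that the copy/paste approach forces on me, here is a sketch (the file layout and the extra dependency are made up):

```python
import json

# read the package.json copied over from mockup (assumed location)
with open('mockup/package.json') as fd:
    base = json.load(fd)

# extend it with the extra dependency this project needs; the name
# and version here are purely illustrative
base.setdefault('dependencies', {})['some-extra-widget'] = '~1.2.0'

# write the merged result out as this project's own package.json
with open('package.json', 'w') as fd:
    json.dump(base, fd, indent=2)
```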

At the very least, provide a way to link a Python (server side) package to explicit JavaScript package versions. Without this in place, everyone is forced to dig for this information in VCS, rather than being able to find it from pre-installed Python packages.
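
For instance, a lookup along these lines should be possible against any installed package, with no trip to the VCS (this sketch assumes the JavaScript requirements are written out as package metadata, e.g. a package.json inside the egg-info, which is roughly how Calmjs records them):

```python
import json
import pkg_resources

def npm_dependencies(package_name):
    """Return the npm dependencies declared by an installed package."""
    dist = pkg_resources.get_distribution(package_name)
    if dist.has_metadata('package.json'):
        metadata = json.loads(dist.get_metadata('package.json'))
        return metadata.get('dependencies', {})
    return {}

# e.g. {'upstream.js': '~1.0.0'} for the example further down
print(npm_dependencies('the.upstream'))
```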

The use case I had in mind was this: we have C++ libraries that provide Python bindings for working with 3D, including export to WebGL. While the bulk of the JavaScript code is provided by libraries sourced from npm (like ThreeJS), that alone doesn't provide many UI controls within the browser. This is where the bits of JavaScript code will be added alongside the relevant Python code, so they work as one cohesive unit.

The fact is that the PhD students here don't exactly have a lot of time to do their own research plus learn the ever-deprecating Node.js tooling (plus how to craft/modify the various configuration files); why would I want to force them to learn webdev tooling when they don't want to do webdev, and don't have time for it? If I could give them a single command line tool that generates/builds/opens their result page in their browser, without them having to know all of the various Node.js tooling, wouldn't that be better (because my tooling actually wraps around the Node.js tooling; for a more concrete example, please run through this example, starting from the Installing Prerequisites section - I will need to build one for webpack)? Isn't programming about automation and solving repetitive problems? I hate being forced to go along with the endless churn that the Node.js community loves to wallow in, and those PhD students sure as heck wouldn't want to take part in it.
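
As a rough sketch of what that single command could look like (everything here is assumed: the entry point, the output paths, and a webpack binary on the PATH taking the old positional entry/output arguments):

```python
import subprocess
import webbrowser

def build_and_view(entry='src/index.js', bundle='build/bundle.js',
                   page='build/index.html'):
    # delegate the actual bundling to the underlying Node.js tooling,
    # so the user never touches webpack or its configuration directly
    subprocess.check_call(['webpack', entry, bundle])
    # then open the generated page in their default browser
    webbrowser.open(page)

if __name__ == '__main__':
    build_and_view()
```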

I also feel that a good tool should solve problems and get out of the way, and in my experience Node.js tools simply get in my way too much while not really solving the problems I have. The r.js optimizer is one example, and I've explained this somewhat in my previous post (how the user has to edit some config file, instead of just easily stubbing out the dependencies).
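
To make the r.js point concrete: stubbing a module out of a bundle means hand-editing the build config to map it to 'empty:' (the r.js paths convention for excluding a module); a sketch of what generating that config could look like instead, with illustrative module names:

```python
import json

def build_config(entry, stubs):
    # map each stubbed module to 'empty:' so r.js leaves it out
    return {
        'name': entry,
        'out': 'bundle.js',
        'paths': {module: 'empty:' for module in stubs},
    }

print(json.dumps(build_config('main', ['jquery', 'underscore']), indent=2))
```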

Yeah, similar, though I would also need to provide an API of some sort to expose those pre-generated artifacts from the installed Python packages.
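
Something along these lines, say; the 'artifacts' location inside the package is a made-up convention for this sketch, since nothing is settled yet:

```python
import pkg_resources

def artifact_path(package_name, filename):
    # resolve a pre-built bundle shipped inside an installed Python
    # package, so the server can serve it without any rebuild step
    return pkg_resources.resource_filename(
        package_name, 'artifacts/' + filename)

# e.g. the filesystem path to the bundle inside the.upstream
print(artifact_path('the.upstream', 'bundle.js'))
```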

No. I make no requirements on anything, aside from the ability to ask a Python package what dependencies it needs from npm. I am sorry for the confusion, so please let me spell this out some more. If a developer wants a Python package (the.upstream) for only the Python stuff and a completely separate JavaScript package (upstream.js) for their client side stuff, and wants to use Calmjs for that, they simply include that upstream.js package in the package.json declaration mechanism. Ideally, they would also declare the JavaScript files (typically webpack bundles) they might require (an example, although I am currently thinking of streamlining and reworking this bit).
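
In setup.py terms, that declaration looks something like this (the package names and version are from the hypothetical example above):

```python
from setuptools import setup

setup(
    name='the.upstream',
    # pull in calmjs so the package_json keyword is understood
    setup_requires=['calmjs'],
    # declare what this Python package needs from npm
    package_json={
        "dependencies": {
            "upstream.js": "~1.0.0",
        },
    },
    # ... the usual setup() arguments follow ...
)
```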

Now comes a developer who wants to consume the.upstream package (as downstream). Once they have declared their dependency on the.upstream (by putting that package in install_requires), they can simply invoke calmjs npm --install downstream and, lo and behold, the required Node.js environment is now installed into the current directory, which will include the separately developed upstream.js package in node_modules (plus its npm dependencies). At that point, if the downstream package developer wishes, they can, with Calmjs's help, develop and use JavaScript directly within their Python source tree as a single package, in conjunction with the sources/packages acquired via npm. This also doesn't stop anyone from generating the bits required to build a package for deployment to npm from the same source tree that builds the Python package.
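
Continuing the same hypothetical example, the downstream package only declares its Python-level dependency, and can layer its own npm requirements on top:

```python
from setuptools import setup

setup(
    name='downstream',
    setup_requires=['calmjs'],
    # the npm requirements of the.upstream come along with this
    install_requires=['the.upstream'],
    # downstream-specific npm additions (illustrative)
    package_json={
        "dependencies": {
            "underscore": "~1.8.0",
        },
    },
)
```

Running calmjs npm --install downstream against this is what would pull the combined requirements into one package.json and perform the npm install described above.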

As I noted previously, the final generation of the artifact for inclusion with Python wheels still needs to be done, which is partly why I had hesitated to introduce Calmjs; but given how fast everything related to Node.js moves, I felt that I should at least show my hand, to get some more words and ideas on how it could work better.