Future of JavaScript in Plone

First of all, I want to thank you guys for taking care of this; I can recognize parts of my reasoning and proposals from the rejected PLIP I prepared months ago to try to simplify the Resource Registry.

I can understand why you want to distribute all JS/CSS as npm modules, but I disagree with forcing everybody to do this: it makes no sense to me, and it will add another layer of complexity and another point of failure.

I see two main use cases here:

End user/integrator

This person downloads Plone and a bunch of add-ons and wants a working site; he doesn't want to learn webpack at all, and will probably include his own JS/CSS code hard-coded in the theme.

Hard-core developer/integrator

This person downloads Plone and a bunch of add-ons, and she wants to add and bundle some other JS/CSS code her own way, using webpack or some other super cool, brand-new technology.

The first case is solved with a simplified version of the Resource Registry that only takes care of serving resources in the order they were registered; this Resource Registry only has the option to define whether some bits of code are served to anonymous or authenticated users under certain conditions. There is no need to release or download anything from npm, and it will generate one request for each new resource (which is what is desired with HTTP/2). When I add or update an add-on, I simply serve the new resources from it; when I remove an add-on, I just erase those entries from the Resource Registry. I love this use case.
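Purely as an illustration of the shape such a simplified registry could take (nothing below is an actual Plone API; the record fields and names are invented for this sketch):

# illustrative sketch only, not a real Plone API: an ordered list of
# resources, each with a simple visibility condition.
RESOURCES = [
    {'name': 'myaddon-js', 'url': '++resource++myaddon/bundle.js', 'authenticated_only': False},
    {'name': 'editor-js', 'url': '++resource++editor/edit.js', 'authenticated_only': True},
]

def resources_for(authenticated):
    # serve resources in registration order; with HTTP/2, one request
    # per resource is fine.
    return [r['url'] for r in RESOURCES
            if authenticated or not r['authenticated_only']]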

The second case is solved by disabling the Resource Registry and providing a way to download the source code of the JS/CSS resources the way you defined above; the user has to know how to configure and run webpack, and all the resources are served in a few requests (which is not what is desired with HTTP/2). When I add, update or remove an add-on, I have to reconfigure and rebuild the whole thing again and again. You love this use case.

The only issue I see right now is how to patch JS/CSS code in Plone core after a release: Plone depends on package A, package A contains JS/CSS code with a bug, and a new release is published to solve that issue; how can I release and replace it so that the person in my first use case can use this release? We need to think about this.

These are my thoughts on this and, finally, I want to ask everybody to keep calm and be open to both new ideas and criticism.

The reason the Plone 5 Resource Registry story is so sad is that core developers never tried to understand use cases besides their own, ignored early concerns and criticism, and tried to force everybody to use a broken feature until it was obvious to everyone that it was not working.

Fixing a broken feature is very hard; quitting is very easy. We need maturity, self-criticism and responsibility.


I think it's more complex than this, because there is another use case:

An add-on developer wants to have an add-on with dependencies.

Let's say your buildout has both easyform and plomino, and both now require React + jQuery. The add-on developers shouldn't have the extra burden of having to release every version of React they want to use as an egg.
Next, the integrator wants a simple build process that results in a Zope instance she can then deploy to hundreds of Plone sites, some with plomino, some with easyform and some with both.
Finally, each of those sites should have reasonable loading times for visitors.

This is why the RR is complex: it was trying to solve this case. I believe this is also what the new proposal is trying to do (but I'm still trying to work out how it handles the multi-site case).

I hate that buildout is already complex and has too many moving parts that can stop a newbie in their tracks. And I totally get that adding npm into that mix will make this worse. On the other hand, how do we deal with the increasingly complex modularisation of JS code and the fact that plugins now reuse shared libraries? I suspect they might be right that either a two-step process (buildout + npm) or a buildout that runs npm is needed, so that JS modules can be downloaded and requirement constraints resolved between multiple Plone plugins at the command line.

I don't think that's it. I do think, however, there is a case where you want a highly optimised site with a fixed set of plugins and a lot of custom theme JS. Here you will likely want to use webpack on the command line to build a custom theme that overrides the default RR CSS/JS in order to reduce load times etc.

Can't we just copy what other CMSs do? For example:

https://www.drupal.org/docs/7/api/javascript-api/managing-javascript-in-drupal-7
https://www.drupal.org/docs/7/api/javascript-api

Contrary to JS faddishness, I wonder if "Namespaces are one honking great idea -- let's do more of those!"

The problem of imports and dependencies is still a late-binding problem, not a packaging problem. If we make it the latter outside a narrow path of what core ships, I think we are doing something wrong.

There needs to be an easy way to flag exports in anything that gets built by the toolchain(s), whether AMD or just stuffed into a zero-namespace IIFE, so that those dependencies can be accessed by add-on JS via a namespace, even a non-global one obtained by something like require('plonestuff').jQuery in a compatibility script, exposing that core library to your other JS library/app that depends on namespace access, without it needing to know about Plone's packaging of this stuff.

My wishlist, in abstract:

  • No dupes
  • No magic incantation
  • No Plone-specific hacks to third-party libraries
  • No requirement of tight coupling.

For example, let's suppose I have multiple add-ons that use D3, and I use webpack to build (from ES6) their respective JS. It does not seem far-fetched to build JS bundles for each add-on both with and without D3, copy the "without" (shimmed) full/minified versions into each Plone add-on, and have a common package provide D3 (via RequireJS, a global namespace, or whatever).

Likewise, a React or jQuery dependency that might be shipped by core could carry the same assumption: if you build your JS in its own package, you would push the built (shimmed) assets into your add-on (assuming each maintains a separate VCS repository).

In the simple cases, where you don't need a build tool for each add-on and you just write plain old ES5 that works in all browsers, you want to just write JS and reload, and have a simple (non-packaging) way to depend on libraries that ship in core. And in the just-write-and-reload case, whether you minify is up to you and a manual task, but it should be a decision made and implemented with whatever workflow you, the add-on developer, choose.

I am a bit late to this party, and I see a lot of good points by everyone here that I can respond to. In order to do so, first I am going to air a few "grievances" with regard to the current Plone JS development and tools, then lead into the solution I have worked on for the last half year or so, which may be of use to everyone here.

One thing I find after working with Plone mockup is that the existing (Plone 5) way of providing/using JavaScript has a reuse story quite specific to Plone. Reusing the JS code in a more generalized Python environment (i.e. with other Python web frameworks that are not Plone) requires a lot of hand-holding and configuration-file crafting to make sure the development environment can execute the relevant tests. (In my case, my project was able to symlink to the mockup directory to pick up the library, but this is not portable or cross-platform (Windows users be damned); it is an absolutely terrible solution, given that it is at the mercy of some file structure rather than depending on what Python speaks: modules.) The alternative is for every project to write a custom, special-snowflake configuration file that is minimally reusable/portable outside that Python project (which is what everyone else does, and it drives me mad); some use Bower, but this leads to duplicates between the Python and Bower installations, which is a different form of madness.

Secondly, the integration story. It is a great thing that Plone operates on a loosely coupled principle; however, in practice (with how things are currently done) this also leads to a lack of cohesion between the projects, such that integration testing suffers. There was a case where the Plone upstream library required an API change, but the mockup library did not update the test data for those tests, and a new pull request that was merged simply commented them out because those errors looked irrelevant. Then an XSS issue came along, a fix was written, tests passed, but the feature was silently broken by it until those tests were uncommented and corrected, before the actual XSS fix was proven to function correctly. The point is, if the mockup library had used actual test data generated by Products.CMFPlone, the API mismatch could have been picked up earlier. This is where I realized that versions must be pinned against each other somewhere, especially for testing.

Thirdly, there is no unified rendering source for both server- and client-side rendering. Specific to Plone, the folder_contents view is JavaScript-only. I like the idea of a dedicated module that deals with navigable tabular data with all the client-side bells and whistles, except for the lack of server-side rendering. I get it, people think server-side rendering is legacy; however, aside from Google, no other search engine will run client-side JavaScript to trigger the rendering, thus damaging the SEO prospects (never mind the usability prospects for users who don't want to enable JavaScript at all, for security reasons). With Plone mockup, the rendering was done using the Backbone library, and as far as I know no Python library can render that. I've heard of solutions such as passing it through PhantomJS on the server side to trigger the initial rendering, but seriously, that sounds like a headache. Why not have a templating language/system that can be used on both the Python server side and the JavaScript client side?

I could go on (I actually forgot a couple more points I wanted to add while writing this; I could go on about inadequate testing with mockup, but that would just reiterate earlier points), but in the meantime, a solution would be nice.

To fix the last point, I abandoned mockup and decided to roll a solution that brings together Jinja2 for Python server-side rendering and nunjucks for JavaScript client-side rendering, and did it in a way that keeps Plone out of the picture until I really need it, by building on top of Python directly (i.e. using distutils entry points for registry-like features). Note that I picked Jinja2 mostly because the greater Python web development community is most familiar with it.
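To illustrate the shared-template idea (a minimal sketch; the template and data are made up, and nunja's actual API is not shown here), the very same template text can be rendered by Jinja2 on the server and by nunjucks on the client:

from jinja2 import Template

# this template source is valid for both Jinja2 (Python) and nunjucks
# (JavaScript), so the server can render the initial page and the
# client can re-render it with fresh data, addressing the SEO/no-JS
# concern above.
listing = Template(
    '<ul>'
    '{% for item in items %}'
    '<li><a href="{{ item.href }}">{{ item.title }}</a></li>'
    '{% endfor %}'
    '</ul>'
)
print(listing.render(items=[{'href': '/docs', 'title': 'Documents'}]))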

However, templating alone does not solve the development and deployment story. I need a way to support the live-reloading use case for rapid prototyping; to build and run tests not only against the code, but also against the resulting artifacts generated as part of the system; and the Python packages I create must have a way to export the various npm packages and versions they require. Lastly, it must not be coupled to any specific JavaScript framework, but needs to be extensible to support future frameworks that may come along.

After about three months of work, I created and released calmjs as the core system to glue together what I have. It provides basic integration with npm by introducing a package_json key for setuptools/distutils, so that this ends up as package metadata that can be reused by other packages through the included calmjs npm tool, which will generate a package.json for a given Python package in a way that also includes the npm dependencies declared by its Python dependencies. It also introduces another keyword that allows the declaration of JavaScript sources to be exported under some specific key out of node_modules. Finally, it allows the declaration of Python namespaces within the package that should also export JavaScript modules, creating a framework-agnostic way to export JavaScript sources embedded within a Python module.
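As a concrete sketch (the package name and the dependency pin are made up; package_json is the setuptools keyword described above, and the calmjs.module entry-point group follows my reading of the calmjs documentation), such a declaration might look like:

from setuptools import setup

setup(
    name='example.widget',           # hypothetical package
    setup_requires=['calmjs'],       # provides the package_json keyword
    # npm dependencies become package metadata; `calmjs npm example.widget`
    # can then generate a package.json that also folds in the npm
    # dependencies declared by this package's Python dependencies.
    package_json={
        'dependencies': {
            'jquery': '~3.1.0',
        },
    },
    # declare that a Python namespace in this package also exports
    # JavaScript modules (entry-point group per the calmjs docs).
    entry_points={
        'calmjs.module': [
            'example.widget = example.widget',
        ],
    },
)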

Using that as the foundation, I have also created calmjs.rjs, the integration package for RequireJS (which I learned to use, love and hate with Plone 5). calmjs.rjs provides a tool (calmjs rjs <package>) that will generate an AMD artifact containing ALL the required JavaScript (provided that all npm dependencies have been installed) for the given Python package(s). I did plan to provide something similar for webpack, but due to time constraints I have not been able to start on this (yet).

As for testing, I have created calmjs.dev, which extends calmjs so that a test registry is declared, and introduces Karma integration (along with the suggested minimum devDependencies, which can be seen in the setup.py file). To run the JavaScript tests provided by a Python package (to test the JavaScript exported by that module) through the RequireJS AMD framework, calmjs karma rjs <package> will build and execute the tests for the Python <package> and generate the artifact only on success. All of the above packages have been released and available via pip for some time now, with more information in the top-level README file, which goes a bit deeper into the points I've made, along with a description of how the packages work and their goals.

With calmjs in place, I continued the development of the Jinja2/nunjucks integration package, nunja. This package is not released yet, but I've had success building it on top of calmjs. Also, to facilitate the live-reloading development methodology, nunja.serve was created for integration with various serving mechanisms, most notably hot reloading without rebuilding (this only applies to nunja templates and scripts, as I have neglected to do it for generic modules declared under the calmjs registry). The first implemented serving mechanism was Sanic; Plone support will be done if/when I figure out how my sponsors are going to address Plone going forward.

For demonstration, I have also created a separate project, nunja.stock, where a number of useful reusable templates have been created, one of which (nunja fsnavtree) is reminiscent of the structure pattern from mockup. Navigation on the client side triggers pushState and client-side rendering, with server-side rendering also available at any given path. To demonstrate the entire system, I have an example written; under the prerequisites section, a description of how the entire stack can be set up is available. The nunja bits are very alpha/beta quality (i.e. no release on PyPI yet), but everything is fully tested (note the test coverage; yes, this includes the JavaScript code) and fully cross-platform (tested under Linux/Windows/OSX, Firefox/Chrome/IE/Safari). Test data for the template rendering is used and exported by both the Python and JavaScript tests, making it easier to pick out API changes that result in potential breakage, addressing the second "grievance" point.

Working at the Python package level ensures compatibility and reusability with other Python frameworks and packages, and also with other JavaScript frameworks, provided the compatibility packages are written at the correct layer. This is the approach I've taken, and so far it is working out.

It is still not perfect; there are still things I need to do. There needs to be a way to declare pre-built artifacts, so that when the Python wheel is generated, the relevant JavaScript artifact(s) will also be generated, included, and exported/declared in a way that makes reuse from other frameworks straightforward (i.e. the goal is to make npm/Node.js completely optional in production; only Python wheels are required for deployment). Also, CSS declarations (or SASS or whatever CSS framework) are not done yet, so those who have tried the example and are attentive will note that the CSS is declared in a rather ad hoc way. While I would like other developers to adopt the system I've created, having others simply see the idea, that a more generic, lower-level compatibility layer bridging Python/PyPI and JavaScript/Node.js/npm can be reusable for all Python packages, is basically one of my main goals here. Perhaps the Plone community might find the work done in calmjs and nunja interesting and of use.


This deserves a new thread :slight_smile:

That's a lot to process.

But I gather what you are saying is that any plugin author would release the required npm packages on PyPI? Is that sustainable?
What happens when author A needs 1.1 of blahjs and author B needs 1.2? Do they need a shared password on PyPI?

I created Calmjs originally to avoid the problem of having to split apart a repository that contains both the Python server-side components and the JavaScript client-side components that achieve a single unified goal. Granted, an argument can be made to split it into three packages (Python only onto PyPI, JavaScript only onto npm, and some glue package that brings the two together), but I personally find it annoying to manage dependency problems within what is ultimately a single package. So no, to be very clear: I hate the idea that Python package authors cannot ship JavaScript code with their Python package, and I hate the idea of being forced to split the JavaScript onto npm. I want a single Python wheel containing everything required to start the web server, and I am working towards that goal (until artifacts can be declared within the calmjs framework, this goal is incomplete).

Anyway, the fact that the Python package requires/contains specific client-side JavaScript code that works directly with the included Python code in that package, ultimately targeting a Python environment, means the JavaScript code would have minimal value on npm anyway. So ultimately the Python wheel (package) that gets generated will also contain the raw JavaScript code (or whatever variants), and the tools that integrate with Calmjs (currently only calmjs.rjs) will be able to extract it from the package, together with the dependencies from npm, and generate the target artifact that gets served to the client.

As for dealing with versions, this is where the downstream packages have to make a decision. They can provide explicit version pins, which gets away from the nested dependency structure that npm implicitly introduces, at the cost of having to manually test the viability of their generated bundle. This is why I also made it much easier to declare/build/generate the test harness during the build process (where the JavaScript tests are extracted from the provided Python packages) and run it against the development sources or even the generated artifact(s). If the tests pass (and with sufficient test coverage), it really doesn't matter too much which explicit versions the upstream packages have pinned. So, for instance, if some package extending nunja really needs nunjucks 3.1.0, it can simply specify that in its package_json attribute (as sketched below) and it will shadow the version defined in nunja.
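A minimal sketch of that shadowing (the downstream package name is hypothetical; the pin is the nunjucks 3.1.0 from the example above):

from setuptools import setup

setup(
    name='example.nunja.addon',      # hypothetical package extending nunja
    setup_requires=['calmjs'],
    install_requires=['nunja'],
    package_json={
        'dependencies': {
            # explicit pin; shadows whatever version nunja itself declares
            'nunjucks': '3.1.0',
        },
    },
)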

Also, if this behavior is undesirable, another package can simply consume the package.json in the egg-info (via the setuptools/distutils metadata API) generated by the calmjs-supporting packages and create new ways to harmonize the versions, or whatever. I am simply using and extending the core Python libraries to expose JavaScript development files, something no current Python packages/frameworks have even tried to do (putting a package.json/bower.json/webpack.json or whatever static, hard-coded configuration specific to a project's git repo/tools in place is really not a portable way to drive development and integration between projects).

I agree that this is all very annoying and a lot to process, because nobody has really thought hard about communicating Node.js package dependencies (i.e. npm) down to, and together with, Python package dependencies (i.e. PyPI); the two package management frameworks differ in how pretty much everything is done, and putting all of this together is a lot to take in at one time. (Spoiler: I had been thinking about how to bridge Python and JavaScript package dependencies for about three years, on and off, before I finally grokked the problem and built calmjs, and I don't even claim to fully understand the whole problem, so I only tackled what was blatantly obvious to me, and that alone is a huge problem already. My attempt to solve this basically does it at a level that works for me and in a way that can be extended, using core Python libraries and working with existing things from other software/development ecosystems to avoid inventing completely new standards.)

I do like the idea of (optionally) being able to release both Python backend code and JS/CSS frontend resources in the same package, in a way compatible with both buildout/pip and npm. I've done that myself as well.

The main issue IMO is to find good-enough conventions to map the required metadata for both Python code and npm. CalmJS does that, but I still wonder whether we could do it with just the stdlib. E.g. define everything JS-related in package.json, but then read it in setup.py into somewhere available at Python runtime. @metatoaster probably tried that and found issues?

There is nothing stopping anyone from having a package.json in their project root and having setup.py read it into the package_json field that Calmjs understands, if using Calmjs is permitted. Much like how most Python projects read README.rst into the long_description field (in their setup.py), one can do the same for reading package.json into the Calmjs package_json field (there is no need even to decode it with JSON; Calmjs validates that field on the string). As the stdlib (distutils) is really limited in what it can understand/convey, setuptools was created to rectify the situation; likewise, since setuptools doesn't understand things like package.json or other Node.js conventions, a way to convey that had to be created, which is one of the many reasons the calmjs package exists.
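A minimal sketch of that pattern (assuming a package.json sits next to setup.py, and calmjs is available to provide the keyword; the package name is hypothetical):

import io
from setuptools import setup

# read the raw package.json text, just like reading README.rst into
# long_description; Calmjs validates the string as-is, so there is no
# need to decode it with the json module first.
with io.open('package.json', encoding='utf8') as fd:
    package_json = fd.read()

setup(
    name='example.package',          # hypothetical package
    setup_requires=['calmjs'],       # provides the package_json keyword
    package_json=package_json,
)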

If a pure Python stdlib solution is desirable, simply commit and include the package.json in the package's egg-info (or dist-info) directory (or link the root-level copy into it). In fact, all Calmjs really does in handling the package_json flag for the setup method is exactly that (plus some small sanity checks), using a couple of helpers from pkg_resources (provided by setuptools) to do the heavy lifting. The code to access it from Python is simply this (the example assumes calmjs.rjs is installed in the running Python environment):

from pkg_resources import Requirement
from pkg_resources import working_set

# locate the installed distribution in the current working set, then
# read the package.json committed into its metadata directory.
pkg = working_set.find(Requirement.parse('calmjs.rjs'))
package_json = pkg.get_metadata('package.json')

Since the JavaScript tooling for standardized packaging is pretty lacking in features compared to what Python has to offer, going in the reverse direction (i.e. using pure Node.js-based packaging tools, whatever they are, to feed back into Python) is more problematic.


@tisto @sneridagh @tkimnguyen What do you think of CalmJS-approach?

I like the basic idea that we could use package.json to define frontend dependencies and entry points in the package, and then include package.json in the Python package metadata as well, to make that data accessible from Python. (And those who want to completely ignore JS tooling could just define the same info directly in setup.py.)

CalmJS adds a lot of tooling on top of that basic idea. I think I understand its reasons, but now that I've become comfortable with JS tooling, the CalmJS wrappers on top of it feel complex.

A large part of the complexity comes from my desire to expose the module/namespace structure present within a Python environment to the accompanying JavaScript environment, so that the layout of the modules and associated tests feels a bit more natural for a stubborn Python programmer like me. That said, the tooling is designed to interoperate with existing JavaScript tools; this is accomplished by generating the configuration files that build the tests and artifacts, a task I've found annoying to do using standard Node.js tools due to the need to manually manage all those configuration files. (I guess I should mention that I hate writing configuration files and would rather write programs that write them for me; this also applies to building the configuration files for starting the tests against the specific packages I want to test.)

Example: let's suppose there is an existing framework that already provides all the standard client-side JavaScript libraries (e.g. jQuery, Underscore and, say, MathJax). We want to introduce a new $library (that I or anyone could have written) that uses some of those provided libraries, plus a bunch more that we don't already have (Handlebars + Backbone + whatever else). In the traditional way, hard-coded configuration files (such as grunt/gulp) would be provided to bundle everything together for that $library, which may or may not be structured in a way that is compatible with the existing framework or its way of loading things. So, from what I understand, someone has to craft a new configuration file that bridges the two for deployment with the framework. Okay, manually building configuration files is a perfectly fine way of working with just Node.js packages, but the problem comes when we are talking about interoperability with Python packages: Python packaging tools by default do not provide a way to communicate all things Node.js to other Python packages, especially these pesky configuration files. If the problems were limited to this, Calmjs wouldn't offer much, especially for a workflow that is open to git clones (to grab the grunt/gulp files, as Python wheels don't typically ship them) and Node.js tooling.

The really complicated part is allowing stubborn Python programmers (like me) to develop/ship JavaScript with their Python code as complete Python packages, but in a way that is at least open and compatible with existing JavaScript tooling and module frameworks. A few people commenting here (like @zopyx) have expressed that being forced to split JS/CSS from their Python package into a separate Node.js/npm package is undesirable, and this is precisely why I created calmjs, just to reiterate this point once more in a more direct way (also, the nunja templating engine was the driver for this). One major thing I still need to do to fully achieve this goal is to figure out how to declare, generate (with the Node.js tooling integration done by Calmjs) and ship complete JavaScript artifacts with Python wheels, so that only Python wheels (and pip) are required for deployment, making Node.js not a requirement on the server.


I haven't looked into CalmJS. Though, in my experience, mixing JS and Python code/packaging is never a good idea.

In the end, it all comes down to the question of whether Plone/Python web developers need to master the JS toolchain. IMHO, it is an illusion to think you can get away without understanding the JS toolchain as a web developer in 2017.


"Never" seems like too strong a word, no? We have many people in the larger Python webdev community doing this for reasons that are too varied and complex to dismiss out of hand as historical baggage, I think?

Edit: Maybe I am misunderstanding and conflating packaging with distribution?

JS tools and their configuration may be complex, but that complexity is shared, documented and hopefully eventually solved by the JS developer community.

Yet, I understood that CalmJS would allow developers familiar with JS tools and practices to keep following them, while those afraid of JS tools would have a "Pythonic" alternative. That would sound good to me, but the real question is: would CalmJS be easy enough for the critics of the Plone 5 resource registry and bundling tools?

Isn't this a similar issue to compiling .po files to .mo files during packaging? Plone does that currently with zest.releaser, which supports plugins like

Still confused. Doesn't this still require a plugin developer to release all their JS dependencies as Python packages, which is impractical? I get packaging your own custom JS code, but remaking JS packages in Python seems like a lot of work.

Thanks Asko but I'm out of my depth here, and I defer to those of you who have been trying to address the issue! "better minds" etc. :slight_smile:

Yes, I've seen how grunt/gulp work, but I really dislike how manual everything feels. I have this project called repodono.storage that implemented some extensions to the mockup structure pattern, while at the same time I was working on the mockup code because of the various fixes I had to make to it. Sure, I just ended up copy/pasting the scripts, but the setup is so specific to that repo/directory layout that it isn't portable. I like having tools that make common modifications available via command-line arguments (like specifying the browser to run the tests in, which packages to extract the tests from, which package to test), not bound to the development source tree, with the option to run the tests easily against production.

As for dealing with npm, if I had needed more dependencies than what mockup has, I would have had to copy/paste its package.json (and bower.json) into my project root and extend it (rather than just lazily symlinking it). Yes, if mockup were on npm this problem might go away, but that doesn't make it easy to work with in development mode in conjunction with my extension (namely, selective live reloading). I also like having a way to communicate this to Python packages (and to avoid the nested dependency issue that npm introduces), as mentioned.

At the very least, provide a way to link a Python (server-side) package with explicit JavaScript package versions. Not having this in place forces everyone to dig for this information in VCS, rather than being able to find it from pre-installed Python packages.

The use case I had in mind was this: we have C++ libraries that provide Python bindings for working with 3D, and this includes export to WebGL. While the bulk of the JavaScript code is provided by libraries sourced from npm (like ThreeJS), that alone doesn't provide many UI controls within the browser. This is where bits of JavaScript code are added along with the relevant Python code, so they work as one cohesive unit. The fact is that the PhD students here don't exactly have a lot of time to do their own research plus learn the ever-deprecating Node.js tooling (plus knowing how to craft/modify the various configuration files); why would I want to force them to learn webdev tooling when they don't even want, or have time, to do webdev? If I can give them a single command-line tool that will generate/build/open their result page in the browser without them having to know all the Node.js tooling, isn't that better (my tooling actually wraps the Node.js tooling; for a more concrete example, please run through this example, starting from the Installing Prerequisites section; I will need to build one for webpack)? Isn't programming about automation and solving repetitive problems? I hate being forced to go along with the endless churn the Node.js community loves to wallow in, and those PhD students sure as heck wouldn't want to take part in it either.

Also, I feel that a good tool should solve problems and get out of the way, and in my experience Node.js tools simply get in my way too much while not really solving the problems I have. For example, the r.js optimizer; I've explained this somewhat in my previous post (how the user has to edit some config file, instead of just easily stubbing out the dependencies).

Yeah, similar, though I would also need to provide an API of some sort to expose those pre-generated artifacts from the installed Python packages, too.

No. I make no requirements on anything, aside from the ability to ask a Python package what dependencies it needs from npm. I am sorry for the confusion, so please let me spell this out some more. If a developer wants a Python package (the.upstream) for only the Python stuff and a completely separate JavaScript package (upstream.js) for the client-side stuff, and wants to use Calmjs for that, they simply include that upstream.js package in the package.json declaration mechanism. Ideally, they also declare the JavaScript files (typically webpack bundles) they might require (an example, although I am currently thinking of streamlining and reworking this bit).

Now comes a developer who wants to consume the.upstream package (as downstream). Once they have declared their dependency on the.upstream (by putting that package in install_requires), they can simply invoke calmjs npm --install downstream and, lo and behold, the required Node.js environment is installed into the current directory, including the separately developed upstream.js package in node_modules (plus its npm dependencies). At that point, if the downstream package developer wishes, they can, with Calmjs's help, develop and use JavaScript directly within their Python source tree as a single package, in conjunction with the sources/packages acquired via npm. This also doesn't stop anyone from generating, from the same source tree that builds the Python package, the bits required to build a package that is also deployed to npm. A sketch of this upstream/downstream arrangement follows.
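A minimal sketch of that arrangement, using the package names from above (the version pin is made up; the mechanism is just package_json plus a normal install_requires, shown here as two separate setup.py files):

from setuptools import setup

# setup.py for the.upstream: Python-only code, with the separately
# developed upstream.js package declared as an npm dependency.
setup(
    name='the.upstream',
    setup_requires=['calmjs'],
    package_json={'dependencies': {'upstream.js': '~1.0.0'}},
)

# setup.py for downstream (a separate source tree): depending on
# the.upstream is enough; `calmjs npm --install downstream` then builds
# a node_modules containing upstream.js and its npm dependencies.
setup(
    name='downstream',
    setup_requires=['calmjs'],
    install_requires=['the.upstream'],
)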

As I noted previously, the final generation of the artifact for inclusion in Python wheels still needs to be done, which is partly why I hesitated to introduce Calmjs; but given how fast everything related to Node.js moves, I felt I should at least show my hand, to get some more words and ideas on how it could work better.

@datakurre This is the problem with any of these efforts to simplify JS in Plone: the people most affected (simple integrators) can't understand the options. I don't really understand them myself.

The only way around this that I can see is some documentation first.

  • If you are installing some plugins, here are the commands you will have to run.
  • If you are creating a Plone plugin that has JS dependencies, here is what you have to run.
  • If you are creating a custom theme for one site, here is what you have to run.


Documentation is something I will have to work on, especially for creating a brand new package that extends either calmjs or nunja (only for the latter do I have an example, and only for deployment, not so much for building a brand new thing). Also, I am not doing this to simplify JS in Plone, nor really to simplify anything, but rather to provide the information needed to use the Node.js ecosystem from Python (specifically, integrated with the things provided by setuptools). (Simplifying everything is unfeasible; I just settled for making things less painful for myself, and for writing something that has the option to support webpack later, even though I've focused on RequireJS.)

Another key point is this: I built Calmjs so that specific and explicit commands can be run to create the environment needed to build whatever extensions are required (like the nunja.stock examples I linked previously), and also to generate the artifacts that are to be served. This seems to be what you want; however, I still need to build the Plone integration bits.

Heck, even within Zope/Plone itself, building pure Python integration packages isn't all that easy either; the documentation for some of the internal stuff was pretty lacking too (where it existed, it was hard to find). While I was building pmr2.oauth (an OAuth 1.0 provider for Plone), I basically had to read through various source code and trace execution through the PAS framework to see what was actually happening before I was able to put it together.
