Saturday, August 23, 2008

Native JavaScript? Almost...

For several years it's been clear that standard practice in software architecture has been moving away from client/server (with big, clunky application programs that have to be installed on each user's computer) toward browser-based applications (where nothing specific to the application has to be installed on the client computer). This is true not only of the public Internet (think Google Maps, Yahoo Mail, or any one of thousands of similar web sites) but also of corporate applications. Browser-based applications are still mostly HTML-based, with JavaScript confined mainly to a little user interface “glue” where plain old HTML won't do the trick.

Increasingly, though, browser-based applications are making extensive use of the JavaScript programming environment built into the web browser. Generally the first step in this direction is to put all the rendering code – the code that turns data into something visual that the user can interact with – into the browser. Applications written this way exchange essentially just data between client and server, not the HTML that renders the user interface.
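
To make the pattern concrete, here's a minimal sketch (the /data/customers URL, the customer-list element, and the name field are all hypothetical, purely for illustration): the server sends raw data as JSON, and every bit of HTML is generated by JavaScript in the browser.

    // Fetch raw data from the server -- no HTML comes over the wire.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/data/customers", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // 2008-era JSON parsing; a library like json2.js is safer than eval.
            var customers = eval("(" + xhr.responseText + ")");
            render(customers);
        }
    };
    xhr.send(null);

    // All rendering happens client-side: turn the data into DOM nodes.
    function render(customers) {
        var list = document.getElementById("customer-list");
        for (var i = 0; i < customers.length; i++) {
            var item = document.createElement("li");
            item.appendChild(document.createTextNode(customers[i].name));
            list.appendChild(item);
        }
    }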

Another step in this direction is to move the application's logic into JavaScript on the browser. Like any architectural choice, this one has pros and cons. Two reasons seem to dominate when the choice is made: (1) scalability is improved because CPU cycles are moved from the central server to the distributed users' machines, and (2) the application's responsiveness is improved because many visible decisions can be made entirely on the user's computer, with no communication to the server required.
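
As a tiny illustration of point (2), here's a sketch of a form field validated entirely in the browser (the field and the 1-to-100 rule are made up): the user gets feedback on every keystroke, and the server is never consulted.

    // Validate on every keystroke -- no round-trip to the server needed.
    function validateQuantity(input) {
        var value = parseInt(input.value, 10);
        var ok = !isNaN(value) && value > 0 && value <= 100;
        // Tint the field red when the value is out of range.
        input.style.backgroundColor = ok ? "" : "#fcc";
        return ok;
    }

Wired up with something like <input onkeyup="validateQuantity(this)">, the feedback is effectively instantaneous.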

This logical next step in the evolution of browser-based architectures has been held up to some extent by something that software developers have little control over: the speed of the JavaScript environment. By “speed”, I mean the number of instructions per second that can be executed. JavaScript is one of the sloths of the programming world, and that means that computationally intensive application programs suffer from sluggishness when run in a browser.
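
A crude way to see that sluggishness for yourself is a timing harness like the one below (the iteration count is arbitrary, and the numbers will vary enormously from browser to browser and machine to machine):

    // A crude timing harness: how long does a computationally
    // intensive loop take in this browser's JavaScript engine?
    var start = new Date().getTime();
    var sum = 0;
    for (var i = 0; i < 5000000; i++) {
        sum += Math.sqrt(i);
    }
    var elapsed = new Date().getTime() - start;
    alert("5,000,000 iterations took " + elapsed + " ms");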

But that may be changing, and sooner than I'd have thought: the Mozilla team is readying a new optimization technology that promises to make JavaScript much faster. The promise is made more believable by the fact that they're already demonstrating better than a 2:1 improvement overall, and better than 20:1 in specific areas that matter greatly to some kinds of applications. From the Ars Technica article:

The theories behind tracing optimization were pioneered by Dr. Michael Franz and Dr. Andreas Gal, research scientists at the University of California, Irvine. The tracing mechanism records the path of execution at runtime and generates compiled code that can be used next time that a particular path is reached. This makes it possible to flatten out loops and nested method calls into a linear stream of instructions that is more conducive to conventional optimization techniques. Tracing optimization is particularly effective in dynamic languages and also has a very light memory footprint relative to alternative approaches.

...

To get a real-world performance increase right now, Mozilla has adapted the tracing technology and Adobe's nanojit so that they can be integrated directly into SpiderMonkey, the JavaScript interpreter that is used in Firefox 3. This has produced a massive speedup that far surpasses what is currently possible with Tamarin-tracing. In addition to empowering web developers, the optimizations will also improve the general performance of the browser itself and many extensions because many components of the program are coded with JavaScript.
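
To see why tracing fits JavaScript so well, consider the kind of code it handles best (my own example, not one from the article): a hot loop whose nested method call can be recorded, inlined, and compiled into one linear stream of native instructions after the first few iterations.

    function Point(x, y) { this.x = x; this.y = y; }
    Point.prototype.lengthSquared = function () {
        return this.x * this.x + this.y * this.y;
    };

    // A hot loop like this is a tracing JIT's best case: after a few
    // iterations the engine has recorded the path through lengthSquared(),
    // inlined it, and compiled the whole loop body to native code.
    var total = 0;
    for (var i = 0; i < 1000000; i++) {
        total += new Point(i, i + 1).lengthSquared();
    }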

This is very welcome news for anyone who (like me!) writes web applications for a living. It's yet another reason to move to Firefox, if you haven't done so already!
