In a previous post I explained how to embed the Rhino JavaScript engine into a Java application. Back then I didn’t choose the new Nashorn engine because it depended on Java 8, which was not yet available. Well, Java 8 has just been released, so I’ve decided to explore this new implementation.

What I have done is simply clone the previous post’s repo and refactor it to use the new engine:

  • remove the external Maven dependencies, as Nashorn ships with Java 8,
  • rename some packages for consistency,
  • and rename the engine class from RhinoEcmaEvaluator to NashornEcmaEvaluator.

Executing scripts with Nashorn

Let’s have a look at what executing a script with Nashorn looks like.
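The actual snippet is embedded from the project repo (the line numbers mentioned below refer to it). Condensed, and leaving out my EcmaValue wrapping, it amounts to something like this:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class NashornSketch {

    public static Object evaluate(String script) throws ScriptException {
        // Get the Nashorn engine: it ships with the JDK, so no extra dependency
        ScriptEngineManager manager = new ScriptEngineManager();
        ScriptEngine engine = manager.getEngineByName("nashorn");
        // Execute the script; the result comes back as a plain Java object
        return engine.eval(script);
    }

    public static void main(String[] args) throws ScriptException {
        System.out.println(evaluate("'Hello ' + 'Nashorn'")); // prints Hello Nashorn
    }
}
```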

 

As we see, executing a script with Nashorn is a matter of two steps:

  1. get the Nashorn engine (lines 3 and 4),
  2. and execute the script on the engine (line 8).

Some considerations:

  • The EcmaValue class is not part of the Nashorn engine; it is my own class that wraps values going into and out of the ECMA engine.
  • Line 7 passes Java variables to the JavaScript context (as we will see), but this step is not necessary if no binding needs to be performed.

Passing Java objects to Nashorn interpreter

The way Nashorn handles the binding between the Java and JavaScript contexts is simpler than in Rhino. There is no need to set up or manipulate JavaScript contexts or to create wrapping objects; just:

  1. instantiate a Bindings object, a Map-like class (line 4),
  2. add the Java objects with a proper name (line 20),
  3. and finally set the Bindings object into the engine in the proper scope (line 10).
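A sketch of those three steps (with hypothetical variable names, and leaving out my project’s own classes) could look like this:

```java
import javax.script.Bindings;
import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import javax.script.SimpleBindings;

public class NashornBindingSketch {

    public static void main(String[] args) throws ScriptException {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        // 1. Instantiate a Bindings object, a Map-like class
        Bindings bindings = new SimpleBindings();
        // 2. Add the Java objects under a proper name
        bindings.put("message", "Hello from Java");
        // 3. Set the Bindings object into the engine in the proper scope
        engine.setBindings(bindings, ScriptContext.ENGINE_SCOPE);
        // The script can now use the Java object through its bound name
        System.out.println(engine.eval("message.toUpperCase()")); // prints HELLO FROM JAVA
    }
}
```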

Handling JavaScript errors

The way Nashorn handles JavaScript execution errors is similar to Rhino. The ScriptEngine#eval method throws a generic exception (an instance of ScriptException) and you must inspect its content to determine what ECMA error produced the problem.

The following snippet shows how we determine a JavaScript reference error.
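The real snippet is embedded from the repo; a simplified stand-in that inspects the ScriptException message (my project’s actual code may differ) looks like this:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class NashornErrorSketch {

    // Hypothetical helper: translate a missing-variable error into an IllegalArgumentException
    public static Object evaluate(String script) {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        try {
            return engine.eval(script);
        } catch (ScriptException e) {
            // Nashorn reports the ECMA error name at the start of the message,
            // e.g. ReferenceError: "i" is not defined
            if (e.getMessage() != null && e.getMessage().contains("ReferenceError")) {
                throw new IllegalArgumentException("Unknown variable in script", e);
            }
            throw new RuntimeException(e);
        }
    }
}
```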

 

Conclusions

  • JDK8 has been released and with it, the brand new Nashorn JavaScript engine.
  • It is the new standard Java engine (the last published release of Rhino dates from 2012-06-18).
  • It provides a more concise API and less boilerplate than Rhino to make the interpreter work (there is no need to handle contexts and scopes or to initialize standard JavaScript objects).

Nashorn provides many other new features that I have yet to explore, for example better performance, powerful console interaction, easier Java and JavaScript API integration, and other features that take advantage of the new Java 8 possibilities.

The challenge

In a Java project that I am working on, we allow the user to extend the features of the tool by providing ECMA (JavaScript) scripts at different extension points. Those scripts are used for different purposes, for example:

  • defining arbitrarily complex boolean expressions for the conditions on control flow structures, in the context of a workflow edition feature,
  • accessing and manipulating data generated by the application.

Those extension points are to be considered isolated islands that don’t share execution scopes between them.

To build this feature we then need:

  1. a JavaScript interpreter to execute those scripts,
  2. and be able to pass Java objects to the JavaScript execution context.

As building a home-made JavaScript interpreter would be overkill, the natural step is to see what modern interpreters we could use for our feature. Some options are the following:

The choice was quite easy:

  1. Nashorn is built on JDK 8, which is not currently production ready, so it was discarded.
  2. We don’t really have high performance requirements, because this is a side feature that is executed on a heavyweight client. This discards V8: although it is the most modern and fastest implementation, it requires a C++ to Java integration.
  3. Between two versions of the same implementation I prefer to use the most stable, so I finally chose Rhino 1.7R4.

In the real project I have hidden the specific implementation behind an abstract factory, so we can easily switch to another version when required. Here I will concentrate on how to implement it with Rhino, removing all the uninteresting noise.

The solution

Setting up Rhino

To illustrate the solution I will walk through an example (which I have published at GitHub). It is a standalone Maven project. Let’s first explore the interesting parts of the POM file. Basically, you only need to specify the Rhino dependency.
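For reference, the relevant dependency block looks like this (version 1.7R4, the release chosen earlier):

```xml
<dependency>
    <groupId>org.mozilla</groupId>
    <artifactId>rhino</artifactId>
    <version>1.7R4</version>
</dependency>
```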

Executing JavaScript

The class RhinoEcmaEvaluator illustrates how to use Rhino to execute JavaScript. It contains just one public method (evaluateExpression) that receives the script to execute and a list of Java variables to be used by the script. Let’s see how it works.

As we see, executing a script with Rhino is a matter of three steps:

  1. get the initial context (line 13),
  2. get a scope by initializing the standard JavaScript objects (line 25),
  3. and then evaluate the script in the context using the proper scope (line 15). This evaluation returns the value of the script as a Java object.

Some considerations:

  1. EcmaVariable and EcmaValue are classes in my project (not part of Rhino) that simply act as wrappers for the Java objects that go into and come out of a JavaScript execution.
  2. On line 26 I add the Java objects within the Rhino scope. I will discuss that later.
  3. The execution must be wrapped in a try / finally block to ensure that all resources are freed (line 20) even if some error occurs.
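Setting aside the embedded snippet (whose line numbers the text refers to), those three steps boil down to something like this simplified sketch, without my EcmaValue wrapping:

```java
import org.mozilla.javascript.Context;
import org.mozilla.javascript.Scriptable;

public class RhinoSketch {

    public static Object evaluate(String script) {
        // 1. Get the initial context for the current thread
        Context context = Context.enter();
        try {
            // 2. Get a scope by initializing the standard JavaScript objects
            Scriptable scope = context.initStandardObjects();
            // 3. Evaluate the script in the context using that scope;
            //    the result comes back as a Java object
            return context.evaluateString(scope, script, "script", 1, null);
        } finally {
            // Free the context resources even if an error occurred
            Context.exit();
        }
    }
}
```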

Let’s see some tests to understand how this works.

These are basic examples that execute very simple scripts:

  1. No variables are passed to the JavaScript engine so the SymbolTable is empty.
  2. The returned EcmaValue wraps the actual Java object, which is obtained through the getValue method.

Passing Java objects to Rhino interpreter

Rhino provides a nice and easy Java to JavaScript integration. Thanks to its LiveConnect feature, access to Java from within the JavaScript execution is almost trivial. To add a Java object, the only thing you need to do is put the instance as a property with a given name in the proper scope.

Here is the code:

As we saw in the code snippet “Execute script”, we called the method putJavaVariablesIntoEcmaScope to put our Java objects in the JavaScript execution scope. Remember that EcmaVariable and EcmaValue are not Rhino classes but my own, modeling values that travel between the two languages. Let’s see how it works:

  1. we get the variable name and the native Java object (lines 12-14),
  2. we wrap the value (line 16),
  3. and finally add it to the scope (line 17).
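A condensed sketch of those three steps, with the EcmaVariable unwrapping replaced by plain parameters:

```java
import org.mozilla.javascript.Context;
import org.mozilla.javascript.Scriptable;
import org.mozilla.javascript.ScriptableObject;

public class RhinoBindingSketch {

    public static Object evaluateWith(String script, String name, Object javaValue) {
        Context context = Context.enter();
        try {
            Scriptable scope = context.initStandardObjects();
            // Wrap the native Java object so Rhino can handle it...
            Object wrapped = Context.javaToJS(javaValue, scope);
            // ...and add it to the scope under the given name
            ScriptableObject.putProperty(scope, name, wrapped);
            return context.evaluateString(scope, script, "script", 1, null);
        } finally {
            Context.exit();
        }
    }
}
```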

Let’s see it in action with some tests.

Handling errors

What happens if the script tries to use a variable that is not accessible in any JavaScript context? In our case this could happen if, for example, we invoke the evaluateExpression method with a script that refers to a variable called “i” but no “i” variable has been passed within the SymbolTable.
As we want to control this case in our application, we have decided that this scenario should be signaled by an IllegalArgumentException. The following test illustrates this expectation:

Rhino throws an EcmaError (an unchecked exception) whenever a problem occurs during the JavaScript execution. This EcmaError mimics the analogous error in the ECMA specification. Rhino’s EcmaError has no subclasses, so to determine the actual cause you have to inspect the errorName field. Let’s see how we handle our requirement:
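In outline (simplified with respect to the project code, which lives in the repo), the handling looks like this:

```java
import org.mozilla.javascript.Context;
import org.mozilla.javascript.EcmaError;
import org.mozilla.javascript.Scriptable;

public class RhinoErrorSketch {

    public static Object evaluate(String script) {
        Context context = Context.enter();
        try {
            Scriptable scope = context.initStandardObjects();
            return context.evaluateString(scope, script, "script", 1, null);
        } catch (EcmaError e) {
            // The error name tells us which ECMA error was raised
            if ("ReferenceError".equals(e.getName())) {
                throw new IllegalArgumentException("Unknown variable in script", e);
            }
            throw e;
        } finally {
            Context.exit();
        }
    }
}
```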

Conclusions

  • If you need to integrate a JavaScript engine within your Java project there are several options
  • Nashorn will be the native implementation in JDK 8 and promises new features and high performance
  • In the meanwhile, Rhino is a good option, very easy to integrate and with powerful interoperability features

I’m a programmer and I like the Java language. I know that this can sound a little old-fashioned these days, but I really think it is a great language with a huge ecosystem of tools and libraries, and I’m quite sure it will enjoy a long life yet. Having said that, I feel entitled to criticize what I don’t like about the platform.

One of the things that bothers me most when distributing a Java app is all the hassle related to dependencies and CLASSPATH settings. They are different things, but intimately related ones.

When distributing a desktop app, the usual procedure consists of creating an executable jar file, which is done by adding a proper MANIFEST.MF file that includes the path to the class containing the well-known main method. The main problem is that a standard jar file cannot contain dependencies (as other packaging units can, for example war files). That means that if our application uses other libraries, we have to distribute them separately and then make sure that the final user sets the CLASSPATH properly. This is tedious both for developers and for users. Obviously you can always distribute your app together with some scripts that set up the environment for you, but in my opinion this is just a workaround and not a real solution (for how many different platforms are you going to prepare scripts?).
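For reference, the MANIFEST.MF of such an executable jar contains entries along these lines (the class and jar names here are made up):

```
Manifest-Version: 1.0
Main-Class: com.example.app.Main
Class-Path: lib/dependency-one.jar lib/dependency-two.jar
```

The Class-Path entries point at the separately distributed dependency jars, which is exactly the fragile part of the setup.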

Solution 1: Assemble a new jar that contains all the code

A first working solution (not a very elegant one) consists of creating a new jar file that contains the code of our application and all the required dependencies, i.e. we must unpack all the jars and then repackage them into a new one. Obviously this solution works, but it is not the best choice if you want to keep a healthy development cycle (artifact manipulation overhead, namespace collisions between different dependencies, etc.). Having said that, it may be a good solution for a little project with just a few dependencies. If we want to do it this way, we don’t have to implement the whole procedure ourselves: we can take advantage of an existing Maven plugin that performs exactly this.

I’ve created a simple sample project to show how it works. Let’s have a look at the most important lines of the POM:

  • the plugin is set up on lines 5 to 25,
  • on lines 16 to 20 we specify which class contains the main method; this will be written into the generated manifest to make the jar executable,
  • and finally we include some dependencies that will be assembled on the target jar.
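The configuration described above is roughly the following sketch of the maven-assembly-plugin setup (the main class name is a placeholder):

```xml
<plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <archive>
            <manifest>
                <mainClass>com.example.HelloWorld</mainClass>
            </manifest>
        </archive>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```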

After executing mvn install we will get the file HelloWorldAssembly-1.0.0-SNAPSHOT-jar-with-dependencies.jar in the target folder. This artifact contains our code and also the classes of our dependency. To test it we can simply execute java -jar .

Solution 2: one-JAR

It would be nicer to have a lib directory within our jar where we could include all the required dependencies, as is done with other packaging artifacts such as war files. When executing the jar, the classloader should be able to find those dependencies. That solution would combine a good modular approach with ease of distribution and execution.

This is exactly the strategy that the open source project one-JAR follows. The magic consists of using a custom classloader that is able to load classes and resources directly from within the jar file instead of from the file system, without having to write any additional code to make it work!

To assemble this jar, the project provides an Ant task and a Maven plugin. In this repository you can find the previous example rewritten with this new approach. The most important lines of the POM file are the following:

  • lines 5 to 21 configure the plugin,
  • line 12 tells the plugin which is the main class of our application,
  • and line 15 specifies the classifier of the generated artifact.

Once again, after executing mvn install, we’ll get the artifact HelloWorldOneJar-1.0-SNAPSHOT.one-jar.jar in the target folder. Let’s check that it works as expected:

As we’re curious beings, we want to understand how one-JAR works, so let’s explore the inner parts of our new jar:

  • in line 4 we see the jar that contains our application’s code,
  • in line 5, within the lib folder, we can see that our dependency has been included,
  • in line 42 we have the open source license of the tool,
  • and finally the rest of classes constitute the one-JAR core that performs the magic by replacing the classic classloader.

As we have seen, the JVM knows where the main method is when executing a jar thanks to an entry in the MANIFEST.MF file. The tool replaces our main class with its own in order to perform the bootstrap (replacing the classloader) before calling our main method. Let’s have a look at the generated MANIFEST.MF:
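It contains something along these lines; our own main class is recorded under a separate one-JAR attribute, while Main-Class points at the tool’s boot class (the application class name is a placeholder, and exact attribute names may vary between one-JAR versions):

```
Manifest-Version: 1.0
Main-Class: com.simontuffs.onejar.Boot
One-Jar-Main-Class: com.example.app.Main
```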

Conclusion

  • The standard distribution artifact on the Java world is the executable jar file.
  • A classic classloader doesn’t allow including a jar within another one, which makes distribution and execution harder.
  • A solution is to unpack our application jar and all the dependencies and repackage it all together into a new jar. This procedure can be performed by the Maven assembly plugin.
  • A better solution is to use the one-JAR project, which allows packaging jars within jars by replacing the classic classloader with a more powerful one that is able to load classes and resources directly from within a jar.