
.NET JavaScript Engine Performance Results - Updated 2016

A little over a year ago I ran some performance tests against six .NET JavaScript engines. Here are the updated results for this year. Things that have changed are:

  • Upgraded engines. Not all have updates from last year though.
  • Upgraded some of the JavaScript libraries. Again, not all have had updates since last year.
  • Upgraded the project to run on .NET 4.6.1 because of NiL.JS. The NiL.JS project page says it's supposed to run on .NET 4.5, but I got compiler errors telling me otherwise.

Due to such major changes I expected the results to change quite a bit, and they did.

Engines

Results

* All times are in milliseconds

The biggest improvement came from Jint, but it's still the slowest overall.

IronJS is slow as well. It hasn't been updated in years. I think it was a "just for fun" project. Use with caution!

Javascript.NET is definitely the fastest, followed closely by ClearScript (V8).

Jurassic came in at about the middle. Not bad.

NiL.JS showed major improvements across the board and now is a little faster than Jurassic.

.NET JavaScript Engine Performance Results

Back in August of last year I did some tests to determine which .NET JavaScript engine was the fastest. I wanted to get a better picture of the overall performance of each so I went back and grabbed all of the tests from Dromaeo to run. Below are the engines I compared and how fast they ran each test.

Engines

Results

* All times are in milliseconds

If I didn’t have a timeout then NiL.JS would never have finished loading Knockout. Jurassic’s timeout on v8-earley-boyer is okay; it just runs really slowly.

I’ve been thinking of adding more tests which show the performance of .NET types being used in JavaScript and JavaScript variables being retrieved by .NET after the script has run. Stay tuned.

The source code for these tests is on GitHub.

Evolution of the JavaScript Rule Engine

I was working on RulePlex this week and came across a couple of things I wanted to share. First is a change in the way that rules are “compiled”. You can’t really compile JavaScript per se, but here is how the process came to work the way it does…

In the first iteration, JavaScript rules were executed individually. If there were 100 rules in a Policy I would execute 100 little JavaScript snippets for an incoming request. The snippets were “compiled” when the call to the API was made. I soon realized that this might be okay if a Policy had a few rules but for large Policies it was slow – even if I executed all of the snippets in parallel.

For the next iteration I took all of the rules and compiled them into one big script. In order to do this I had to wrap the rules with some advanced JavaScript techniques. Because this big script contains the results of every rule I had to append something unique to each result’s variable name – the rule’s Id. This makes the script look horrifying but I am okay with it for now (it’s not hurting performance). Executing one big script increased performance tremendously. Here is an example of what a Policy with 1 rule that simply returns true looks like compiled:
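
Roughly speaking – this sketch assumes a results object keyed by rule Id and a function wrapper around the user’s rule body; the real generated script is more involved:

    // Sketch of a compiled Policy containing one rule that returns true.
    // 'results', the wrapper, the Id suffix, and 'input' (the data object the
    // engine injects) are illustrative - the real generated script differs.
    var results = {};
    results["rule_a1b2c3d4"] = (function (data) {
        // --- rule body exactly as the user wrote it ---
        return true;
    })(input);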

At the same time I had extended this technique to the C# rule engine. I took it a step further though and actually compiled C# rules into a DLL. I took the binary code for the DLL and stored it along with the Policy. I did the compilation whenever a rule in the Policy changed – not when the API was called like I had been doing with the JavaScript engine. When the API was called, I got the DLL’s binary data and loaded it into memory to be executed against the incoming data.
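
A minimal sketch of that flow, assuming the classic CodeDOM compiler (the post doesn’t say which compiler API was used, and the storage code is omitted):

    // Sketch: compile C# rule source to a DLL, keep the DLL's bytes with the
    // Policy, and load them back into memory when the API is called.
    using System;
    using System.CodeDom.Compiler;
    using System.IO;
    using System.Reflection;
    using Microsoft.CSharp;

    static class RuleCompiler
    {
        // Run whenever a rule in the Policy changes; the returned bytes are
        // stored alongside the Policy.
        public static byte[] Compile(string ruleSource)
        {
            var output = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".dll");
            var parameters = new CompilerParameters { OutputAssembly = output };
            using (var provider = new CSharpCodeProvider())
            {
                var results = provider.CompileAssemblyFromSource(parameters, ruleSource);
                if (results.Errors.HasErrors)
                    throw new InvalidOperationException("Rule compilation failed.");
            }
            return File.ReadAllBytes(output);
        }

        // Run at API time: load the stored bytes and execute against the data.
        public static Assembly Load(byte[] dllBytes)
        {
            return Assembly.Load(dllBytes);
        }
    }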

I mimicked the binary compilation and loading of the C# rules into the JavaScript engine as well. The thing is, I never really liked doing it in the JavaScript engine because I had to convert the compiled script (text) to binary, so I could store it in the same database field, and then from binary back to text when it was time to be executed. In C# it made sense but not in JavaScript. Now that the C# engine is gone I had a chance to go back and change this.
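
That round trip amounted to something like this (assuming UTF-8; the actual encoding isn’t stated):

    // Old design: text -> binary so the compiled script could share the same
    // database field as the C# DLL bytes, then binary -> text at execution time.
    byte[] blob = System.Text.Encoding.UTF8.GetBytes(compiledScript);
    string script = System.Text.Encoding.UTF8.GetString(blob);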

Presently, when rules are changed (or added, deleted, etc.), RulePlex compiles its big script during the save of the rule. It saves it to the Policy as text. When the API is called the script is retrieved and executed.

I haven’t thought about tweaking this process any more, but I may in the future. Instead I have been thinking about how this affects the workflow from a business perspective. The more I think about it, the more I like the changes that I’ve made. If I ever change how I “compile” the big script, it won’t affect policies that are currently working (a certain way). What if I’ve got a bug in the script that you’ve accounted for in your rules, knowingly or unknowingly? If I change how the script is compiled, and it’s being compiled during the API request, then it could behave differently day-by-day without any action by you. This is bad because I may have fixed or introduced a bug that changes the results. Now the application you’ve integrated with RulePlex is broken!

The ideal workflow is that there are two copies of the same Policy, maybe even three, or N. One copy would be designated as a Production copy, while the others are for Dev/Staging/whatever. When the engine changes, you want to test those changes in a non-Production environment first. When you’ve verified that the changes do not affect your application, that non-Production copy can be promoted to Production. This applies to the workflow of building out a Policy too, not just to back-end changes to the engine. The concept of environments will be included in the next version of RulePlex.

Supporting One Language Makes RulePlex Intrinsically Faster

In my previous post I wrote that one of the decisions I made about RulePlex was to only support one rule language. This will make the engine intrinsically faster and I’ll show you why.

When you create a Policy (a set of rules) one of the options you are given is to choose a rule language. When it’s time to run your rules you provide the data and the Name or Id of the Policy. From that Name/Id the Policy is found, which tells us the rule language. Once we know the language we can begin processing the rules.

At first you might think of writing code for this that fits into a switch pattern. It might look something like this:
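
(Sketch only – the enum, Policy, and engine types here are illustrative stand-ins, not RulePlex’s actual code.)

    // Illustrative stand-ins only: RuleLanguage, Policy, RuleResults and the
    // engine classes are not RulePlex's actual types.
    public enum RuleLanguage { JavaScript, CSharp, Python }

    public class RuleService
    {
        public RuleResults Execute(Policy policy, object data)
        {
            switch (policy.Language)
            {
                case RuleLanguage.JavaScript:
                    return new JavaScriptRuleEngine().Execute(policy, data);
                case RuleLanguage.CSharp:
                    return new CSharpRuleEngine().Execute(policy, data);
                case RuleLanguage.Python:
                    return new PythonRuleEngine().Execute(policy, data);
                default:
                    throw new NotSupportedException("Unknown rule language: " + policy.Language);
            }
        }
    }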

Some things to note here – I am using enums, and I left out the implementation of how the rules are processed. That’s because I just want to talk about the switch pattern. It’s not the cleanest way of writing this. My example only has 3 cases, but the worst switch I’ve ever seen had over 80 cases, used strings for the case values, and averaged 12 lines per case – over 1000 lines in total! That’s an extreme example, but it makes the point clear: switches can become unreadable and hard to maintain. Whenever you have a switch statement you should ask yourself if using a Strategy pattern makes more sense.

In RulePlex I used the Strategy Pattern, something to the effect of this:
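
(Again a sketch, with the same illustrative types as above.)

    // Each engine implements a common interface and is looked up by language
    // instead of switched on.
    public interface IRuleEngine
    {
        RuleResults Execute(Policy policy, object data);
    }

    public class RuleService
    {
        private static readonly IDictionary<RuleLanguage, IRuleEngine> Engines =
            new Dictionary<RuleLanguage, IRuleEngine>
            {
                { RuleLanguage.JavaScript, new JavaScriptRuleEngine() },
                { RuleLanguage.CSharp,     new CSharpRuleEngine() },
                { RuleLanguage.Python,     new PythonRuleEngine() }
            };

        public RuleResults Execute(Policy policy, object data)
        {
            // Adding a language is just another dictionary entry;
            // this lookup code never changes.
            return Engines[policy.Language].Execute(policy, data);
        }
    }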

The downside to this pattern is that a dictionary lookup is slower than a switch statement. The upside is that it cleans up code very nicely. I could have added support for 10 new rule languages and the lookup code would have stayed the same. It’s up to you to decide between these trade-offs. My original goal was to support as many languages as possible in RulePlex so using the Strategy Pattern would have saved me headache down the road.

That all changes now that RulePlex is only using JavaScript rules. I don’t need the strategy pattern and I don’t even need a switch statement. Instead I can new-up a JavaScriptRuleEngine and call Execute() on it. Faster and cleaner!
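
In code, the whole dispatch layer collapses to something like this (same illustrative names as above):

    // One language means one engine - created directly, no lookup, no switch.
    var results = new JavaScriptRuleEngine().Execute(policy, data);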

On a side note (back to my comment on using enums): You should never use “magic strings”. Your world will be a much better place without them.

Introducing RulePlex

Almost a year ago I set out on a mission to create the first cloud-based rules engine. I called it RulePlex. After a few months of work I had succeeded. A lot of rules engines can be run (on virtual machines) in the cloud, but mine was designed with a cloud-first approach in mind. On top of being cloud-based I had a couple of other goals I wanted for the engine:

[Image: the Wright brothers’ first flight]

  • Allow rules to be written in any language
  • Allow integration with any app
  • Make it incredibly secure
  • Keep the cost low

I accomplished all of these. The engine was awesome. When I was at a good place to ease up on the development, I did, and started to connect with as many people as I could, trying to drum up business. I contacted all of my developer buddies, business contacts, previous employers… but no one was interested. I tried to generate business by presenting my software at business start-up events like 1MillionCups. I tried Google AdWords. I threw a Hail Mary to Scott Hanselman hoping he would do a Hanselminutes episode with me. I even gave the service away for free just to get signups so people would look at it… but all of my attempts at getting the business going failed. I don’t think I did enough, even though it may sound like I had.

I’m not giving up.

I’m changing things up!

Instead of being cloud-based I am falling in line with all of the other rule engines. RulePlex will be an on-premise install. It can still be run in the cloud, but it won’t be touted as the first cloud-based rules engine any more. It’s nice that I was able to accomplish that feat, but ultimately I think proximity to where it will be used will benefit companies more. The latency between the cloud and a corporate network is too much when apps need instant results.

Another thing I am changing is the ability to write rules in any language. I had support for JavaScript, C#, VB.NET, Ruby, and Python – but from now on rules will only be written in JavaScript. Going with only one language will save development time tremendously. I chose JavaScript because it is the most widely used language today. It’s also not going anywhere anytime soon. It has become an incredible server-side language thanks to V8 and the popularity of Node.js. It’s flexible and forgiving thanks to dynamic typing, and business users can learn the basics easily.

The last thing I will be doing is blogging a lot more about RulePlex. I’ve been working on it for almost a year and I haven’t written anything (directly) about it. I want people to follow along with its progress and see how it is evolving and why things work the way they do. I want you to see what is influencing my design choices. Hopefully this will grow into a community of RulePlex users, but I won’t get too far ahead of myself. Let’s just start with clicking publish on this post.

Stopwatch Class in JavaScript

I needed the equivalent of .NET’s Stopwatch class in JavaScript today. I did a quick search and could only find actual stopwatches so I figured it would be faster to write it myself.

Here is a fiddle that shows the usage

and here is the code for the Stopwatch class
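
A minimal sketch of such a class, modeled on the Start/Stop/Reset and ElapsedMilliseconds members of .NET’s Stopwatch (member names here are my own, not necessarily the original gist’s):

    // A Stopwatch modeled on .NET's System.Diagnostics.Stopwatch.
    // Member names are illustrative, not necessarily the original gist's.
    function Stopwatch() {
        var startTime = null; // timestamp of the most recent start()
        var elapsed = 0;      // milliseconds accumulated while stopped

        this.start = function () {
            if (startTime === null) {
                startTime = Date.now();
            }
        };

        this.stop = function () {
            if (startTime !== null) {
                elapsed += Date.now() - startTime;
                startTime = null;
            }
        };

        this.reset = function () {
            startTime = null;
            elapsed = 0;
        };

        this.elapsedMilliseconds = function () {
            return startTime === null
                ? elapsed
                : elapsed + (Date.now() - startTime);
        };
    }

    // Usage:
    // var sw = new Stopwatch();
    // sw.start();
    // doSomethingExpensive();
    // sw.stop();
    // console.log(sw.elapsedMilliseconds() + " ms");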

Which .NET JavaScript Engine is the fastest?

UPDATE 6/17/2015: Added NiL.JS, updated to the latest version of ClearScript and Jint, fixed ClearScript “compiled” test, and updated results for all.

UPDATE 6/18/2015: Reran with more tests

In RulePlex users are allowed to write rules in JavaScript, make an API call which passes in data, and execute those rules in the cloud. RulePlex is written in .NET. So how do we execute JavaScript in .NET? It turns out there are a bunch of JavaScript engines that can do this, but which one is the fastest?

I took an inventory of the more popular .NET JavaScript engines:

My initial thoughts were that JavaScript.Net would be fast since it is just a wrapper for Google’s V8 engine which is the fastest JavaScript engine currently. I also thought IronJS would be fast since it uses Microsoft’s Dynamic Language Runtime. jint and Jurassic I was skeptical about.

The Tests

I created a project and referenced each engine by using NuGet. I called each engine 5 times to execute a snippet of code and took the average. The snippet of code I executed came from a suite of array tests I found at Dromaeo. You can view the tests in this gist.
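
For context, executing a snippet with one of these engines looks roughly like this (Jint shown; the others have broadly similar APIs):

    // Execute a snippet with Jint and read a value back out of the engine.
    // (Illustrative - the actual benchmark ran the Dromaeo array tests.)
    var engine = new Jint.Engine();
    engine.Execute("var total = [1, 2, 3].reduce(function (a, b) { return a + b; }, 0);");
    var total = engine.GetValue("total"); // JsValue wrapping 6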

I also did another test where I loaded the linq.js library (one of my favorite, lesser-known JavaScript libraries), but this time I ran it 50 times.

The Results

Array test results:

jint                      14,028 ms
IronJS                     1,622 ms
JavaScript.Net                20 ms
Jurassic                     237 ms
ClearScript                  263 ms
ClearScript (compiled)       111 ms
NiL.JS                     1,680 ms

Linq.js load results:

jint                          17 ms
IronJS                       176 ms
JavaScript.Net                13 ms
Jurassic                     114 ms
ClearScript                   37 ms
ClearScript (compiled)        22 ms
NiL.JS                        17 ms

If you come across any other .NET JavaScript engines feel free to let me know and I’ll add them to my comparison.

One More Test

I wasn’t entirely happy with the tests I had done so I added one more. The script I executed only does one small thing – set a variable to true. This shows more or less the overhead of each engine. I ran this test 5000 times for each engine and took the average.

One variable results:

jint                          <1 ms
IronJS                        <1 ms
JavaScript.Net                 9 ms
Jurassic                       3 ms
ClearScript                   31 ms
ClearScript (compiled)        22 ms
NiL.JS                        <1 ms

Here is the complete script I used. I swapped out currentScript and changed N as needed.
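
A minimal sketch of that kind of harness (not the exact script; Jint stands in for whichever engine is being measured):

    // A sketch of the harness, not the exact script. Jint stands in for
    // whichever engine is being measured; each engine's call looks similar.
    using System;
    using System.Diagnostics;

    class Program
    {
        static void Main()
        {
            // Swapped out per test: the Dromaeo/linq.js scripts run 5-50 times,
            // or this tiny script run 5000 times.
            string currentScript = "var x = true;";
            const int N = 5000;

            long totalMs = 0;
            for (int i = 0; i < N; i++)
            {
                var sw = Stopwatch.StartNew();
                new Jint.Engine().Execute(currentScript);
                sw.Stop();
                totalMs += sw.ElapsedMilliseconds;
            }

            Console.WriteLine("Average: {0} ms", (double)totalMs / N);
        }
    }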

Target Framework was .NET 4.5.1, Target Platform was x86. Run on a quad-core i7 CPU @ 2.40 GHz.

Updated Results 6/17/2015:

  • Jint does well with small scripts. Has become faster since last August.
  • IronJS did okay
  • JavaScript.Net looks like it is the overall fastest
  • Jurassic did pretty good
  • ClearScript is fast but has a lot of overhead. Might want to try ClearScript.Manager to help with this. (Even so, I’ve had problems getting it to scale.)
  • NiL.JS did okay