Presenting RulePlex at 1 Million Cups

Today I presented my startup, RulePlex, at 1 Million Cups. Here is a video of the presentation.

I’ve gotten a few comments that it was too technical and that I should give a demo of the product while presenting. This was my first presentation on the service to a large audience, and I think those ideas are great. I’ll be working them in next time.

Expiring links after a given timeframe

Here is one way to expire a link/web page after a certain amount of time has passed. Instead of tracking whether a link is valid through a database look-up, this approach verifies that the expiration date in the URL generates the same token that is also passed in through the URL. If the user tries to change the date value, the token no longer matches. Users cannot generate the token without the secret key you keep on your server. Here is how it’s done.

First we need to create a model for our expiration date and token:

public class ExpiresModel
{
    public DateTime ExpiresOn { get; set; }
    public string Token { get; set; }
}

Next we need a utility for generating and checking tokens:

public class TokenHelper
{
    private const string HashKey = "secret";

    public static string GetToken(ExpiresModel model)
    {
        if (model == null)
            throw new ArgumentNullException("model");

        return Crypto.HashSHA512(String.Format("{0}_{1}", HashKey, model.ExpiresOn.Ticks));
    }

    public static bool IsValidToken(ExpiresModel model)
    {
        if (model == null)
            throw new ArgumentNullException("model");

        return model.Token == GetToken(model);
    }
}
Notice that the secret key lives in the TokenHelper class. The Crypto class I used can be found here.
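The linked Crypto class isn’t reproduced in this post. As a stand-in (my assumption about its behavior, not the original code), a SHA-512 hash helper could look like this:

```csharp
using System.Security.Cryptography;
using System.Text;

public static class Crypto
{
    // Hashes the input with SHA-512 and returns a lowercase hex string.
    public static string HashSHA512(string input)
    {
        using (var sha = SHA512.Create())
        {
            byte[] bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(input));
            var sb = new StringBuilder(bytes.Length * 2);
            foreach (byte b in bytes)
                sb.Append(b.ToString("x2"));
            return sb.ToString();
        }
    }
}
```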

For the front end I have one page which creates the link and another to check the status. Here is the controller for that:

public class HomeController : Controller
{
    public ActionResult Index()
    {
        var model = new ExpiresModel();
        model.ExpiresOn = DateTime.Now.AddSeconds(30);
        model.Token = TokenHelper.GetToken(model);
        return View(model);
    }

    public ActionResult Check(long dateData, string token)
    {
        var model = new ExpiresModel();
        model.ExpiresOn = DateTime.FromBinary(dateData);
        model.Token = token;

        if (!TokenHelper.IsValidToken(model))
            ViewBag.Message = "Invalid!";
        else if (model.ExpiresOn >= DateTime.Now)
            ViewBag.Message = "Still good: Expires in " + (model.ExpiresOn - DateTime.Now);
        else
            ViewBag.Message = "Not good: Expired on " + model.ExpiresOn;

        return View();
    }
}

And the Views for those controller methods…

@model WebApplication4.Models.ExpiresModel
@{
    ViewBag.Title = "Home";
    Layout = "~/Views/Shared/_Layout.cshtml";
}

<p>A new link has been generated that expires in 30 seconds. Use the link below to check the status:</p>
<p><a href="@Url.Action("Check", new { dateData = Model.ExpiresOn.ToBinary(), token = Model.Token })">Check Now</a></p>


@{
    ViewBag.Title = "Check";
    Layout = "~/Views/Shared/_Layout.cshtml";
}

<p>@ViewBag.Message</p>

<p>@Html.ActionLink("Generate another link", "Index", "Home", new { area = "" }, null)</p>

The entire solution can be downloaded here.

Stopwatch Class in JavaScript

I needed the equivalent of .NET’s Stopwatch class in JavaScript today. A quick search turned up only actual stopwatch apps, so I figured it would be faster to write it myself.

Here is a fiddle that shows the usage, and here is the code for the Stopwatch class.
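The embedded fiddle and code aren’t shown here. A minimal reconstruction of such a Stopwatch class (my sketch, not necessarily the original code) might look like this:

```javascript
// Minimal Stopwatch mirroring .NET's Start/Stop/Reset/ElapsedMilliseconds.
function Stopwatch() {
    this.isRunning = false;
    this._startTime = 0; // timestamp of the most recent start()
    this._elapsed = 0;   // milliseconds accumulated from previous runs
}

Stopwatch.prototype.start = function () {
    if (this.isRunning) return;
    this.isRunning = true;
    this._startTime = Date.now();
};

Stopwatch.prototype.stop = function () {
    if (!this.isRunning) return;
    this._elapsed += Date.now() - this._startTime;
    this.isRunning = false;
};

Stopwatch.prototype.reset = function () {
    this.isRunning = false;
    this._elapsed = 0;
};

Stopwatch.prototype.elapsedMilliseconds = function () {
    return this._elapsed + (this.isRunning ? Date.now() - this._startTime : 0);
};
```

Like the .NET version, stopping and restarting accumulates elapsed time instead of discarding it.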

Which .NET JavaScript Engine is the fastest?

UPDATE: Added ClearScript

In RulePlex users are allowed to write rules in JavaScript, make an API call which passes in data, and execute those rules in the cloud. RulePlex is written in .NET. So how do we execute JavaScript in .NET? It turns out there are a bunch of JavaScript engines that can do this, but which one is the fastest?

I took an inventory of the more popular .NET JavaScript engines:

  1. jint
  2. IronJS
  3. JavaScript.Net
  4. Jurassic

My initial thought was that JavaScript.Net would be fast, since it is a wrapper around Google’s V8 engine, currently the fastest JavaScript engine. I also thought IronJS would be fast since it uses Microsoft’s Dynamic Language Runtime. I was skeptical about jint and Jurassic.

The Tests

I created a project and referenced each engine by using NuGet. I called each engine 5 times to execute a snippet of code and took the average. The snippet of code I executed came from a suite of array tests I found at Dromaeo. You can view the tests in this gist.

I also did another test where I loaded the linq.js library (one of my favorite, lesser known, JavaScript libraries).

The Results

Array test results:

jint 31,378 ms
IronJS 2,499 ms
JavaScript.Net 21 ms
Jurassic 494 ms
ClearScript 261 ms
ClearScript (compiled) 24 ms

Linq.js load results:

jint 35 ms
IronJS 245 ms
JavaScript.Net 14 ms
Jurassic 170 ms
ClearScript 49 ms
ClearScript (compiled) 1 ms

It turns out I was right about JavaScript.Net: it is much faster than any of the others. I did run into a quirk with that library, though: I couldn’t use “Any CPU” as my platform target and instead had to target x86. (You can probably target x64 as well; it just can’t be Any CPU, because the correct C++ libraries need to be targeted.)

I was completely wrong in thinking IronJS would perform well. jint was by far the worst, due to its incredibly bad array execution time. Jurassic was a pleasant surprise; it handled both the array test and the linq.js load reasonably fast.

If you come across any other .NET JavaScript engines feel free to let me know and I’ll add them to my comparison.

I was made aware of ClearScript which I’ve now added to the results list. This library also runs V8 but doesn’t need the C++ libraries. It looks almost as impressive as JavaScript.Net.

One More Test

I wasn’t entirely happy with the tests I had done, so I added one more. The script I executed does only one small thing: set a variable to true. This shows, more or less, the overhead of each engine. I ran this test 5,000 times for each engine and took the average.

One variable results:

jint 0 ms
IronJS 1 ms
JavaScript.Net 10 ms
Jurassic 8 ms
ClearScript 29 ms
ClearScript (compiled) 0 ms

Here is the complete script I used. I swapped out currentScript and changed N as needed.
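The complete script isn’t reproduced above. The timing loop at its core (a sketch under my assumptions; runEngine is a hypothetical delegate wrapping each engine’s execute call, not an API from any of these libraries) looked roughly like this:

```csharp
using System;

static class Benchmark
{
    // Average elapsed milliseconds over n runs of a script through one engine.
    static double TimeEngine(Func<string, object> runEngine, string script, int n)
    {
        var sw = new System.Diagnostics.Stopwatch();
        long totalMs = 0;
        for (int i = 0; i < n; i++)
        {
            sw.Restart();
            runEngine(script);
            sw.Stop();
            totalMs += sw.ElapsedMilliseconds;
        }
        return (double)totalMs / n;
    }
}
```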

Choosing a service framework

The release of Web API marks what I count as the 4th service framework Microsoft has released for .NET. In this post I will discuss my reasoning behind which one to use. Here are the 4 in order of when they were released:

  1. ASMX Web Services
  2. WCF Services
  3. OData
  4. Web API

These aren’t all upgrades from one to the other despite what popular culture may dictate. There are actually good cases for using each one of these.

…except ASMX Web Services. You should never create anything using this. When I see vendors who still use this technology I cringe and will not use them at all. It’s a sign that they haven’t kept up with technology. It’s not just old products using this. It can be new products too – ones that have old developers who aren’t up to speed. Old does not necessarily mean “mature” or “weathered”. Actually in the cases I’ve seen it means those services are full of bugs and the developers are slow to fix them.

WCF Services

This is the one technology that was meant to handle all web service scenarios, and it completely replaces ASMX. The pros of using WCF are that you get a contract to code against, it’s fast and flexible, and it works over all kinds of protocols. The downside is that it takes more work to set up, and the configuration choices can be confusing. It’s been out for a while, so I think it’s lost that “new hotness” appeal, but overall WCF is the best option for creating a serious service.

I choose to create WCF services when I am building an internal Service Oriented Architecture, because I can use special protocols which make communication faster. For example, I use net.tcp for internal client-to-server or server-to-server communication; net.tcp does binary serialization, which is fast. For services talking to each other on the same machine it gets even better, because you can use net.pipe. In that case nothing is serialized at all; the communication happens in-memory, which is the fastest way possible. It’s like hooking up your brain to another person’s brain and just thinking to each other.

WCF services can be hosted by any .NET application type, most typically through IIS or as a Windows Service. I prefer hosting through a Windows Service so the service is readily available, whereas IIS application pools can shut down and take a while to start back up when a new request comes in. OData and Web API can’t do any of these things.
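As an illustration, a net.tcp endpoint in a service’s config might look like the following (the service and contract names here are placeholders, not from any real project):

```xml
<system.serviceModel>
  <services>
    <service name="MyCompany.OrderService">
      <!-- Binary-serialized TCP endpoint for internal server-to-server calls -->
      <endpoint address="net.tcp://localhost:8523/OrderService"
                binding="netTcpBinding"
                contract="MyCompany.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```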


OData

This was meant to provide broad access to shared data. OData hooks a database up directly to a web service, making the data queryable by manipulating the query string. A good example for OData would be the US Census Bureau sharing yearly survey results with the public over the internet, or a library sharing its catalog of books and media. This type of web service is less common. It should never be used with a transactional database.
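For example, an OData consumer can filter, sort, and page results purely through query-string options (the endpoint below is hypothetical):

```
GET https://example.com/odata/Books?$filter=Year ge 2010&$orderby=Title&$top=10
```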


Web API

I’m not exactly sure why this technology was created, other than so Microsoft could say it has a REST-based API solution. It has evolved into a useful solution over time, though, and it is a good choice for public services accessed over the internet. The two most prominent features are:

  1. Content coming from the client is deserialized based on the Content-Type header and serialized back to the client in the same format. The most common content types are XML, JSON, and BSON (binary JSON).
  2. It follows the same pattern as ASP.NET MVC, so it’s easy to pick up for developers familiar with that technology.

Because Web API is the “new hotness”, a lot of folks have been creating these types of services instead of WCF, even when WCF is more appropriate. It’s not an upgrade: Web API and WCF are useful for different scenarios. Usually when I create a Web API service I put a WCF service layer behind it, because you normally don’t have just a standalone API. Most of the time you have an accompanying application which uses the same functionality as the API.
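A minimal Web API controller (illustrative names, not from the original post) looks much like an MVC controller:

```csharp
using System.Web.Http;

public class OrdersController : ApiController
{
    // GET api/orders/5
    // The response is serialized as JSON, XML, etc. based on the
    // request's Accept header; incoming bodies are deserialized the same way.
    public Order Get(int id)
    {
        return new Order { Id = id };
    }
}

public class Order
{
    public int Id { get; set; }
}
```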

Run SQL Query and Email Results

The program below will run a SQL script, convert the results to an HTML table, and email it to you. Everything is passed in via command line parameters, which makes it great for running on the fly from a batch file or as a scheduled task. Here is how you use it:


sql_emailer.exe <script> <title> <email> <conn>


  1. script – Path to a SQL script containing the query to run.
  2. title – Used as the email subject and as a header within the body.
  3. email – A single email address or comma-delimited list of email addresses to send the query results to.
  4. conn – A reference to an appSettings key in the config file containing the connection string of the database to connect to. If this parameter is omitted, “DefaultConnectionString” is used.

In the config file you will need to add the appropriate appSettings keys with connection strings to your database(s). You will also need to configure the SMTP section in the config with your SMTP server settings.


Here is the source code minus the table styling from the exe:
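The original download isn’t embedded above. The core of such a program (a sketch under my assumptions, minus argument validation and the table styling) might look like this:

```csharp
using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Net.Mail;
using System.Text;

class Program
{
    static void Main(string[] args)
    {
        string script = System.IO.File.ReadAllText(args[0]);
        string title = args[1];
        string email = args[2];
        string connKey = args.Length > 3 ? args[3] : "DefaultConnectionString";
        string connString = ConfigurationManager.AppSettings[connKey];

        var html = new StringBuilder("<h1>" + title + "</h1><table>");
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(script, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // Header row built from the result set's column names
                html.Append("<tr>");
                for (int i = 0; i < reader.FieldCount; i++)
                    html.Append("<th>" + reader.GetName(i) + "</th>");
                html.Append("</tr>");

                // One table row per result row
                while (reader.Read())
                {
                    html.Append("<tr>");
                    for (int i = 0; i < reader.FieldCount; i++)
                        html.Append("<td>" + reader[i] + "</td>");
                    html.Append("</tr>");
                }
            }
        }
        html.Append("</table>");

        // SMTP host, credentials, and From address come from the config file
        var message = new MailMessage { Subject = title, Body = html.ToString(), IsBodyHtml = true };
        foreach (var address in email.Split(','))
            message.To.Add(address.Trim());
        new SmtpClient().Send(message);
    }
}
```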


TurboObject

Sometimes I want a dynamic variable that never throws an exception, no matter what property I try to access. ExpandoObject doesn’t do this. Here is an example:

dynamic source = new ExpandoObject();
source.Property1 = "test";

Trying to access source.Property3 throws a RuntimeBinderException, which is what I am trying to avoid. What I want is for source.Property3 to return either null or some other value (like an empty string). The solution I created to solve this problem is called TurboObject. ExpandoObject is a cool name, so I tried to come up with an equally, if not more, cool name. Here it is!
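The original class isn’t embedded above. A minimal version (my reconstruction based on the behavior described, not the original source) can derive from DynamicObject and return a default value for members that were never set:

```csharp
using System.Collections.Generic;
using System.Dynamic;

public class TurboObject : DynamicObject
{
    private readonly Dictionary<string, object> _values = new Dictionary<string, object>();
    private readonly object _defaultValue;

    // defaultValue is returned for any property that was never set (null if omitted).
    public TurboObject(object defaultValue = null)
    {
        _defaultValue = defaultValue;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        // Never fail the bind: fall back to the default instead of
        // letting the runtime throw RuntimeBinderException.
        if (!_values.TryGetValue(binder.Name, out result))
            result = _defaultValue;
        return true;
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _values[binder.Name] = value;
        return true;
    }
}
```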

Now when I try accessing source.Property3 I get back an empty string…

dynamic source = new TurboObject(string.Empty);
source.Property1 = "test";

If you create a new TurboObject() with no constructor parameters, null is returned; that is the default.

Cast Hacking

Have you ever wanted to cast a base class into a subclass? Consider these two classes:

public class Animal { }
public class Dog : Animal { }

Because of the inheritance we can treat a Dog as an Animal, since we know that Dog has everything Animal has. Quick example:

var dog = new Dog();
var animal = (Animal) dog;

This works in .NET. But what if you wanted to create a Dog from an Animal? Unfortunately Dog may have some properties that Animal doesn’t and so you will get an exception at runtime when trying to cast from an Animal to a Dog:

var animal = new Animal();
var dog = (Dog) animal;

Causes this runtime exception:

An unhandled exception of type 'System.InvalidCastException' occurred.
Additional information: Unable to cast object of type 'Animal' to type 'Dog'.

Getting around this casting issue means you would have to create a new Dog, then for every property the two classes share, set the value from the corresponding property on the Animal. THAT SUCKS. Who has the time or the memory to keep that conversion up to date? What happens when you add a new property to Animal? Will you remember to update this property-copying code as well?

It wasn’t intended specifically for getting around casting issues, but for property copying AutoMapper is AWESOME! It does pretty much what you would expect: it maps the properties of one class onto another so you don’t have the problem of manually maintaining the mappings. Here is how you can create a Dog from an Animal using AutoMapper:

Mapper.CreateMap<Animal, Dog>();
var animal = new Animal();
var dog = Mapper.Map<Animal, Dog>(animal);

For the properties of Dog that Animal doesn’t have, value-type properties get the type’s default value and reference-type properties get null. Simple, huh?

Generate an Enum from a SQL Server Database Table

I have a lookup table in SQL Server and a corresponding enum in my code – two different sources that represent the same thing. The problem I was having was that one would become out of sync with the other. To solve this I created a T4 template that will generate an enum based on database values. The database becomes the master source and I just have to run the T4 template to update my code… easy!

Here is the T4 template I created if you would like to use it as well. It generates the enum in C#. The enum name is determined by the name of the .tt file, so if you want an enum named LanguageType, rename the .tt file to “LanguageType.tt”. Customize the variables at the top of the script with your connection string, table name, and the columns used for the enum member name and value.

Linq.js CRUD Methods

One of my favorite JavaScript libraries is Linq.js (LINQ for JavaScript). Unfortunately the author is not responding to pull requests so I’m going to post my update here.

I’ve added CRUD methods to the library so that you can Add, Delete, and Update items easily. I’ve found a very useful pattern here: load a lot of data when the page loads, convert it to an Enumerable using Linq.js, then as the user modifies data, update it locally and send the updates back to the server asynchronously.

Here are examples of each method I’ve added:

var data = Enumerable.From([
    { "id": 1, "url": "", "title": "Amazon" },
    { "id": 2, "url": "", "title": "Google" },
    { "id": 3, "url": "", "title": "Wrong" }
]);

data.Add({ "id": 4, "url": "", "title": "MSN" });

data.Delete(function(x) { return x.id == 1; });

data.Update(function(x) { return x.id == 3; }, { "id": 3, "url": "", "title": "Yahoo!" });

You can download my linq.js file here. You can also play with a fiddle I created using the example above.

Alternatively you can pass a function to Update instead of an object (makes sense if you are updating more than one item at a time):

data.Update(function(x) { return x; }, function(x) {
    x.title = "updated " + x.title;
    return x;
});