I wanted to run .NET on Linux since I’ve seen people talking about it so much recently. I couldn’t find a start-to-finish tutorial, though, so I am attempting to provide one here.

  1. In VMware Workstation (you can use VirtualBox just the same) create a VM and install the latest version of Ubuntu Desktop.
  2. Once it’s installed and you log in, update everything that needs updating in the Ubuntu Software Center and reboot
  3. Type everything in the sub-lists below as commands in a terminal…
  4. Install Mono:
    1. sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
    2. echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
    3. sudo apt-get update
    4. sudo apt-get install mono-complete
  5. Install libuv:
    1. sudo apt-get install automake libtool curl
    2. curl -sSL https://github.com/libuv/libuv/archive/v1.4.2.tar.gz | sudo tar zxfv - -C /usr/local/src
    3. cd /usr/local/src/libuv-1.4.2
    4. sudo sh autogen.sh
    5. sudo ./configure
    6. sudo make
    7. sudo make install
    8. sudo rm -rf /usr/local/src/libuv-1.4.2 && cd ~/
    9. sudo ldconfig
  6. Install the .NET Version Manager (DNVM):
    1. curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh
  7. Install the .NET Execution Environment (DNX):
    1. dnvm upgrade
  8. More installs for the next part:
    1. sudo apt-get update
    2. sudo apt-get upgrade
    3. sudo apt-get install build-essential openssl libssl-dev curl git
  9. Install NVM:
    1. git clone git://github.com/creationix/nvm.git ~/.nvm
  10. To load NVM whenever a terminal is opened:
    1. echo '[[ -s "$HOME/.nvm/nvm.sh" ]] && source "$HOME/.nvm/nvm.sh"' >> ~/.bash_profile
  11. Start NVM in the current terminal:
    1. . ~/.nvm/nvm.sh
  12. Install Node.js (0.12.6 is the latest version as of this post; replace it with whatever the current version is when you install):
    1. nvm install v0.12.6
    2. nvm alias default 0.12.6
  13. Install Yeoman (Yo) and the scaffolding template for ASP.NET projects:
    1. npm install -g yo generator-aspnet
  14. Generate an empty ASP.NET project. A wizard will come up and ask you what type of project you want to create. I created a Simple website and named it “MyFirstDotNetAppOnLinux”:
    1. yo aspnet
  15. Switch to the new directory that was created and restore packages:
    1. cd MyFirstDotNetAppOnLinux
    2. dnu restore
  16. Start Web Server:
    1. dnx . kestrel
  17. Open Firefox and go to http://localhost:5000
  18. To kill the web server hit Ctrl+Z then enter kill %1

Here are the articles I referenced when putting together this start-to-finish guide:

Don’t forget to install Visual Studio Code so you can edit your project in Ubuntu!

There have been a couple of times recently where I wanted to implement double-checked locking so that I could pull data from cache and fall back on a database lookup. This would be simple if I had just one thread, but I am doing it in the context of a multi-threaded application (a RESTful API). If I placed a lock on a single object it would block all other threads. Because requests include a key (think an int Id property, a Guid, or a unique string name) I would rather lock on the key, so that other threads can continue being processed unless they pertain to the same key. This way I am only doing one database lookup per key. I didn’t find anything baked into .NET that would allow me to do this. I also wanted it to look as much like the typical lock(object){} syntax as possible so that it could be easily understood by other developers. Here is the solution I came up with:
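A minimal sketch of one way to write such a KeyLocker – a static dictionary holds one reference-counted lock object per key, so Monitor.Enter only blocks callers that are using the same key:

using System;
using System.Collections.Generic;
using System.Threading;

public sealed class KeyLocker : IDisposable
{
    private sealed class Entry { public int RefCount; }

    // One entry per key; the entry itself is the object we Monitor.Enter on.
    private static readonly Dictionary<string, Entry> Locks = new Dictionary<string, Entry>();

    private readonly string _key;
    private readonly Entry _entry;

    public KeyLocker(string key)
    {
        _key = key;

        Entry entry;
        lock (Locks)
        {
            if (!Locks.TryGetValue(key, out entry))
            {
                entry = new Entry();
                Locks.Add(key, entry);
            }
            entry.RefCount++;
        }
        _entry = entry;

        // Blocks only callers that are locking on the same key.
        Monitor.Enter(_entry);
    }

    public void Dispose()
    {
        Monitor.Exit(_entry);

        lock (Locks)
        {
            // Clean up the entry once no other thread is using this key.
            if (--_entry.RefCount == 0)
                Locks.Remove(_key);
        }
    }
}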

You can combine this with a using statement to achieve the desired feel of the lock syntax:

using (new KeyLocker("mykey"))
{
    // only one thread per key will execute code in this block
}

Here is an example:
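Something like the following, where Cache and Database stand in for whatever caching and data-access helpers you use (hypothetical names):

public Customer GetCustomer(string key)
{
    var customer = Cache.Get<Customer>(key);
    if (customer == null)
    {
        using (new KeyLocker(key))
        {
            // Check again - another thread may have populated the cache while we waited for the key lock.
            customer = Cache.Get<Customer>(key);
            if (customer == null)
            {
                customer = Database.GetCustomer(key);   // only one database lookup per key
                Cache.Set(key, customer);
            }
        }
    }
    return customer;
}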

Back in August of last year I did some tests to determine which .NET JavaScript engine was the fastest. I wanted to get a better picture of the overall performance of each so I went back and grabbed all of the tests from Dromaeo to run. Below are the engines I compared and how fast they ran each test.
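Each test boils down to loading the script for that test and timing how long the engine takes to execute it. Roughly, with Jint as the example engine (the file name here is just a placeholder):

using System;
using System.Diagnostics;
using System.IO;
using Jint;

class EngineTimer
{
    static void Main()
    {
        // Hypothetical test file from the Dromaeo suite.
        var source = File.ReadAllText("dromaeo-core-eval.js");

        var stopwatch = Stopwatch.StartNew();
        new Engine().Execute(source);   // parse and run the script
        stopwatch.Stop();

        Console.WriteLine("dromaeo-core-eval: {0} ms", stopwatch.ElapsedMilliseconds);
    }
}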

Engines

Results

* All times are in milliseconds

  Test Jint IronJS JS.NET Jurassic ClearScript NiL.JS
dromaeo-3d-cube 744 649 34 287 163 164
dromaeo-core-eval 138 79 19 28 48 12
dromaeo-object-array 12958 1306 20 205 76 1646
dromaeo-object-regexp 14494 1998 225 1754 264 2511
dromaeo-object-string 9712 ERROR 42 999 161 1228
dromaeo-string-base64 1368 287 16 253 48 150
v8-crypto 30578 ERROR 29 1465 58 1666
v8-deltablue 1051 415 28 212 50 168
v8-earley-boyer 19396 24271 42 TIMEOUT 80 1898
v8-raytrace 4368 8564 34 1489 68 609
v8-richards 654 167 15 98 42 93
sunspider-3d-morph 508 35 13 33 44 32
sunspider-3d-raytrace 946 437 21 168 49 98
sunspider-access-binary-trees 743 68 13 94 35 89
sunspider-access-fannkuch 1685 59 14 77 37 72
sunspider-access-nbody 757 139 14 78 38 72
sunspider-access-nsieve 2611 56 13 164 36 193
sunspider-bitops-3bit-bits-in-byte 1353 28 12 26 35 83
sunspider-bitops-bits-in-byte 1253 30 13 16 36 78
sunspider-bitops-bitwise-and 362 17 12 9 40 14
sunspider-bitops-nsieve-bits 1586 49 13 80 36 66
sunspider-controlflow-recursive 3116 73 13 66 36 177
sunspider-crypto-aes 4505 351 18 347 45 226
sunspider-crypto-md5 684 233 16 106 40 43
sunspider-crypto-sha1 638 72 14 59 43 36
sunspider-date-format-tofte 430 48 14 183 39 31
sunspider-date-format-xparb 923 98 15 49 40 22
sunspider-math-cordic 526 38 12 28 36 34
sunspider-math-partial-sums 143 33 13 20 47 12
sunspider-math-spectral-norm 710 41 13 46 40 44
sunspider-regexp-dna 372 416 19 385 48 ERROR
sunspider-string-fasta 758 88 15 100 43 70
sunspider-string-tagcloud 438 7822 20 135 47 97
sunspider-string-unpack-code 642 311 27 138 55 100
sunspider-string-validate-input 972 66 16 75 42 102
d3.min 143 ERROR 34 766 68 ERROR
handlebars-v3.0.3 86 560 27 204 52 58
knockout-3.3.0 38 1070 30 326 49 TIMEOUT
lodash.min 161 776 25 362 55 38
qunit-1.18.0 50 203 19 116 44 86
underscore-min 22 344 16 93 42 20


If I didn’t have a timeout then NiL.JS would never have finished loading knockout. Jurassic’s timeout on v8-earley-boyer is okay – it just runs really slowly.

I’ve been thinking of adding more tests which show the performance of .NET types being used in JavaScript and JavaScript variables being retrieved by .NET after the script has run. Stay tuned.

The source code for these tests is on GitHub.

I am a little old school in that I’ve used Winamp since the 90’s. At the beginning of last year it was bought by Radionomy from AOL. I thought that meant they would finally update it, but it’s been over a year and I haven’t heard any news of that happening. I decided to switch to AIMP as my audio player instead. It looks a lot like Winamp but it’s much more usable.

Anyways, back in the day when I used to use AIM (I still use AIM occasionally, on Pidgin now) there was a Winamp plugin that would update your AIM profile with what you were listening to in Winamp. There are more modern plugins that do basically the same thing but post to Twitter instead.

I use Lync at work. I thought it would be neat to create something that would update Lync’s “Personal Note” with what I am listening to in AIMP. Lync, for some reason, doesn’t allow plugins – I read that somewhere, so I gave up on that approach quickly. That’s when I looked into how to create AIMP plugins… again, I didn’t find much. AIMP was developed by some Russians and it’s written in C, which I don’t know well enough to be writing plugins in. So what I did was create my own app. It starts when I log into Windows and runs in the background. It takes about 2MB of memory and 0% CPU. All it does is monitor whether a file has changed and then call the Lync API to update my note. Monitor a file? Yes… there is a plugin you need to install in AIMP called “Current Track info to file v3.1”. It writes the currently playing track info to a file.
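Stripped down, the app is little more than a FileSystemWatcher plus one call into the Lync 2013 client SDK (Microsoft.Lync.Model). A rough sketch – not the exact code from the download below, and the note text is just an example:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.Lync.Model;   // Lync 2013 client SDK

class LyncAimpUpdater
{
    static void Main()
    {
        var docs = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
        var watcher = new FileSystemWatcher(docs, "CurrentTrackInfo.txt");
        watcher.Changed += (sender, e) =>
        {
            try
            {
                UpdatePersonalNote(File.ReadAllText(e.FullPath).Trim());
            }
            catch (IOException)
            {
                // AIMP may still be writing the file; just wait for the next change event.
            }
        };
        watcher.EnableRaisingEvents = true;

        Console.ReadLine();   // keep the process alive while it waits for file changes
    }

    static void UpdatePersonalNote(string track)
    {
        var self = LyncClient.GetClient().Self;
        var note = new Dictionary<PublishableContactInformationType, object>
        {
            { PublishableContactInformationType.PersonalNote, "Listening to: " + track }
        };
        self.BeginPublishContactInformation(note, ar => self.EndPublishContactInformation(ar), null);
    }
}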

Here is what you need to do in order to get this working on your Windows machine.

  1. Install AIMP 3 if you haven’t already.
  2. Install the “Current Track info to file v3.1” plugin for AIMP
  3. In the Current Track Info plugin settings, the path should point to your user account’s “My Documents” folder and the file name should be “CurrentTrackInfo.txt” (e.g. C:\Users\rfrisby\Documents\CurrentTrackInfo.txt)
  4. For the plugin’s template use this:
    %IF(%R,%R - %T,%Replace(%F,.mp3,))
  5. The setting for “Remember list of files” should be 1. None of the other options should be checked.
  6. Download the LyncAimpUpdater zip file and extract it to your hard drive
  7. In Windows Explorer go to the startup folder for your user account. (ex. C:\Users\rfrisby\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup) In this folder right click and choose New > Shortcut.
  8. Where it says “Type the location of this item” paste this line:
    C:\Windows\System32\cmd.exe /c start /min C:\Apps\LyncAimpUpdater\LyncAimpUpdater.exe ^& exit

    Then change the path to LyncAimpUpdater.exe so that it points to where you extracted the zip on your hard drive.

That’s it. You can open that shortcut to run the app right away; if not, it will start the next time you log into your computer.

.NET 4.5 is required to run the app.

* Download LyncAimpUpdater.zip

UPDATE:
This also works for “Skype for Business”.

protobuf-net cannot serialize everything you throw at it. It’s picky about what it does because it wants to be fast. If it were to accommodate every type of object it would have to sacrifice speed. The author did however create a hook so that things it can’t serialize can be turned into something it can serialize. This is done with what it calls Surrogates. As you can tell from the name, we tell protobuf-net that for a given type (that it can’t serialize) we want to use a surrogate type (that it can serialize).

Take this class for example:
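It is a simple name/value pair whose Value property is typed as object – something along these lines (property names assumed):

using ProtoBuf;

[ProtoContract]
public class MyNameValueInfo
{
    [ProtoMember(1)]
    public string Name { get; set; }

    [ProtoMember(2)]
    public object Value { get; set; }   // protobuf-net has no serializer for System.Object
}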

MyNameValueInfo can’t be serialized because protobuf-net doesn’t know how to serialize the Value property (of type object). It will throw an exception: “No Serializer defined for type: System.Object”

To get around this we need to provide a surrogate for MyNameValueInfo that protobuf-net can serialize. First register the surrogate type (only needs to be done once):

RuntimeTypeModel.Default.Add(typeof(MyNameValueInfo), false).SetSurrogate(typeof(MyNameValueInfoSurrogate));

Then implement MyNameValueInfoSurrogate so that it can be transformed from/to MyNameValueInfo and is serializable by protobuf-net:
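One way to write the surrogate is with implicit conversion operators between the two types, round-tripping the untyped Value through BinaryFormatter as a byte array (a sketch; this is also where the type information mentioned in the warning below comes from):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using ProtoBuf;

[ProtoContract]
public class MyNameValueInfoSurrogate
{
    [ProtoMember(1)]
    public string Name { get; set; }

    [ProtoMember(2)]
    public byte[] Value { get; set; }   // the untyped Value, serialized with BinaryFormatter

    // protobuf-net uses these conversion operators to swap between the real type and the surrogate.
    public static implicit operator MyNameValueInfoSurrogate(MyNameValueInfo source)
    {
        if (source == null) return null;
        return new MyNameValueInfoSurrogate { Name = source.Name, Value = ToBytes(source.Value) };
    }

    public static implicit operator MyNameValueInfo(MyNameValueInfoSurrogate surrogate)
    {
        if (surrogate == null) return null;
        return new MyNameValueInfo { Name = surrogate.Name, Value = FromBytes(surrogate.Value) };
    }

    private static byte[] ToBytes(object value)
    {
        if (value == null) return null;
        using (var stream = new MemoryStream())
        {
            new BinaryFormatter().Serialize(stream, value);
            return stream.ToArray();
        }
    }

    private static object FromBytes(byte[] bytes)
    {
        if (bytes == null) return null;
        using (var stream = new MemoryStream(bytes))
        {
            return new BinaryFormatter().Deserialize(stream);
        }
    }
}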

WARNING
Doing binary serialization like this will include Type information in the serialized byte array. This is only useful if the receiving system is also in .NET. For a more universal approach you could use JSON serialization.

Bootstrap is a handy tool and I use it a lot. I decided to use it with a WordPress plugin I am developing, but when I included Bootstrap’s CSS file in my plugin page it blew up the WordPress admin panel’s design. Thus started my journey of hacks to get it working. Here is how it’s done…

This is what my plugin folder looks like:

  • Stylesheets (and a LESS file, as you will soon find out) live in the “css” folder.
  • Bootstrap fonts in the “fonts” folder.
  • JavaScript in the “scripts” folder.

In your plugin file/installer/whatever you probably have a line which loads the Bootstrap CSS in the HTML head element…

wp_enqueue_style('admin_css_bootstrap', plugins_url('/myplugin/css/bootstrap.min.css'), false, '1.0.0', 'all');

Get rid of that. You can load the bootstrap javascript file this way but for the stylesheet we begin the hacks…

Create a script called bootstrap-hack.js and load it with your plugin.

wp_enqueue_script('admin_js_bootstrap_hack', plugins_url('/myplugin/scripts/bootstrap-hack.js'), false, '1.0.0', false);

The content of that file is this:

As you can see, first we dynamically add a .less (LESS CSS) file. Next we load the LESS JavaScript to transform the .less file. Then we load any stylesheets that may override Bootstrap styles. This is the main part of the hack, but there is a little more to it, as I will explain.

The content of bootstrap-wrapper.less is this:

What this does is load the Bootstrap CSS file as LESS and then output it with all of the styles wrapped in the “.bootstrap-wrapper” class. This means you have to add a div that wraps your content so that the Bootstrap styles will be available to it. It will look something like this:

Now back to bootstrap-hack.js… It loads the less.js file, so download it and include it in your scripts folder.

Make sure you load any stylesheets that override Bootstrap styles the same way, after Bootstrap’s CSS is loaded. You don’t have to load those stylesheets using LESS – we only did that because we needed to wrap Bootstrap’s styles in another class so that they won’t conflict with WordPress’ styles. Don’t forget that your overrides must be prefixed with .bootstrap-wrapper now as well.

I was working on RulePlex this week and came across a couple of things I wanted to share. First is a change in the way that rules are “compiled”. You can’t really compile JavaScript per se, but here is how the process came to work the way it does…

In the first iteration, JavaScript rules were executed individually. If there were 100 rules in a Policy I would execute 100 little JavaScript snippets for an incoming request. The snippets were “compiled” when the call to the API was made. I soon realized that this might be okay if a Policy had a few rules but for large Policies it was slow – even if I executed all of the snippets in parallel.

For the next iteration I took all of the rules and compiled them into one big script. In order to do this I had to wrap the rules with some advanced JavaScript techniques. Because this big script contains the results of every rule I had to append something unique to each result’s variable name – the rule’s Id. This makes the script look horrifying but I am okay with it for now (it’s not hurting performance). Executing one big script increased performance tremendously. Here is an example of what a Policy with 1 rule that simply returns true looks like compiled:

At the same time I had extended this technique to the C# rule engine. I took it a step further though and actually compiled C# rules into a DLL. I took the binary code for the DLL and stored it along with the Policy. I did the compilation whenever a rule in the Policy changed – not when the API was called like I had been doing with the JavaScript engine. When the API was called, I got the DLL’s binary data and loaded it into memory to be executed against the incoming data.

I mimicked the binary compilation and loading of the C# rules into the JavaScript engine as well. The thing is, I never really liked doing it in the JavaScript engine because I had to convert the compiled script (text) to binary, so I could store it in the same database field, and then from binary back to text when it was time to be executed. In C# it made sense but not in JavaScript. Now that the C# engine is gone I had a chance to go back and change this.

Presently, when rules are changed (or added, deleted, etc.), RulePlex compiles its big script when the rule is saved. It saves the script to the Policy as text. When the API is called the script is retrieved and executed.

I haven’t thought about tweaking this process any further, but I may in the future. Instead I have been thinking about how this affects the workflow from a business perspective. The more I think about it the more I like the changes that I’ve made. If I ever change how I “compile” the big script it won’t affect policies that are currently working (a certain way). What if I’ve got a bug in the script that you’ve accounted for in your rules, knowingly or unknowingly? If the script is compiled during the API request, then it could change from day to day without any action on your part. That is bad, because I may have fixed or introduced a bug that changes the results. Now the application you’ve integrated with RulePlex is broken!

The ideal workflow is that there are two copies of the same Policy, maybe even three, or N. One copy would be designated as the Production copy, while the others are for Dev/Staging/whatever. When the engine changes, you want to test those changes in a non-Production environment first. Once you’ve verified that the changes do not affect your application, that non-Production copy can be promoted to Production. This also applies to the workflow of building out a Policy, not just to back-end changes to the engine. The concept of environments will be included in the next version of RulePlex.

If you wanted to launch a REST based API today, what technology would you use? For most the answer would be Web API. Have you ever thought about what Web API consists of though? I mean, do you know how much code a request has to go through until it reaches your controller method?

While designing an API recently I forwent Web API and tried to get as low-level as possible in hopes that my service would be faster. I read an interesting tutorial on how this could be done in Azure using OWIN’s self-hosting capabilities, but, for no good reason, I am not a fan of OWIN. I get the sense that there is still a lot of other people’s code between the incoming request and my code. In my quest to get as low-level as possible I stumbled upon the HttpListener class, which is essentially a wrapper for HTTP.sys. Surely this is as low-level as I can get without getting too carried away.

So, which of these three methods will serve HTTP requests the fastest: Web API, OWIN Self Host, or HttpListener? My hypothesis is that HttpListener will be, because it is the most low-level. The test for each method consists of returning the current date and time. There will be no input (no post data or query string) and the result will be returned serialized as JSON. JSON.NET will be used for serialization in each of the projects for consistency. You can get faster performance by using Jil but we’ll leave it alone for this run. I want the out-of-the-box Web API project to be the baseline because that’s what most people are using. The Web API project will be hosted in IIS while the others will be hosted by a console app. A fourth project will be created which makes 1000 HTTP requests to each host and records the results. The requests will be made from the same machine the servers are running on to eliminate network latency.
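To give a sense of how bare-bones the HttpListener option is, a server like the one in the test looks roughly like this (a sketch, not the exact code in the solution below; the port is arbitrary):

using System;
using System.Net;
using System.Text;
using Newtonsoft.Json;

class HttpListenerServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");   // hypothetical port
        listener.Start();

        while (true)
        {
            var context = listener.GetContext();   // blocks until a request arrives
            var json = JsonConvert.SerializeObject(new { now = DateTime.Now });
            var buffer = Encoding.UTF8.GetBytes(json);

            context.Response.ContentType = "application/json";
            context.Response.ContentLength64 = buffer.Length;
            context.Response.OutputStream.Write(buffer, 0, buffer.Length);
            context.Response.OutputStream.Close();
        }
    }
}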

Here is my solution with source code for the servers and tester if you’d like to try it for yourself – WebServerComparison.zip

Here are my results: (all times are in seconds)

Web API OWIN HttpListener
Run 1 0.8059442 0.6924348 0.2742231
Run 2 0.6600578 0.3289284 0.1906594
Run 3 0.640202 0.3297216 0.1872897
Run 4 0.6189885 0.3406656 0.1953822
Run 5 0.6118996 0.3280714 0.1898794
Avg 0.66741842 0.40396436 0.20748676

It looks like the first run primed our servers, since it took considerably longer to complete compared to the ensuing runs. I won’t throw this run out because in the real world you will have to prime your servers for various reasons. It doesn’t affect our outcome either. My hypothesis was correct: HttpListener was the fastest option. Keep in mind that the difference between HttpListener and Web API/IIS is less than half a millisecond per request, but it is a difference nonetheless. I did not show the raw responses, but the Web API responses were larger in size because IIS tacks on a couple of headers. This would have made a greater difference if we weren’t making requests from the same machine.

As with anything there are some trade-offs. With IIS you get a lot of management features that you would never get by running your own web server. It also has a lot more security and is more robust. It will log requests and handle errors for you. Writing your own web server will give you faster responses, but you’ll have to spend time solving problems that IIS has already solved. The trade-off is yours to decide upon. In the case of RulePlex or other extremely performance-sensitive services I think it’s better to go with the faster option.

The OWIN self-hosting option is neither the fastest nor does it give you any management features. It does mean you can set up your server in more of a Web API way and it gives you some added security, but I don’t think this middle-of-the-road option is worth much. You either want the performance or you want the management. Right?

Other notes

If you have an API that is used by your web app’s front-end via AJAX requests, and both are on the same domain, you should pay attention to the cookies being sent in the request. If possible, host the API on a different domain to keep the cookies from being sent with the request.

Compression may also play a factor in larger requests. My next post will explore compression options.

In my previous post I wrote that one of the decisions I made about RulePlex was to only support one rule language. This will make the engine intrinsically faster and I’ll show you why.

When you create a Policy (a set of rules) one of the options you are given is to choose a rule language. When it’s time to run your rules you provide the data and the Name or Id of the Policy. From that Name/Id the Policy is looked up, which tells us the rule language. Once we know the language we can begin processing the rules.

At first you might think of writing code for this that fits into a switch pattern. It might look something like this:
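A sketch with hypothetical names – the Policy, RuleResult, and RuleLanguage types and the ProcessXxxRules methods stand in for the actual rule processing, which I’m leaving out:

public RuleResult[] ProcessRules(Policy policy, object data)
{
    switch (policy.Language)
    {
        case RuleLanguage.JavaScript:
            return ProcessJavaScriptRules(policy, data);

        case RuleLanguage.CSharp:
            return ProcessCSharpRules(policy, data);

        case RuleLanguage.Python:
            return ProcessPythonRules(policy, data);

        default:
            throw new NotSupportedException("Unsupported rule language: " + policy.Language);
    }
}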

Some things to note here – I am using enums, and I left out the implementation of how the rules are processed because I just want to talk about the switch pattern. It’s not the cleanest way of writing this. My example only has 3 cases, but the worst switch I’ve ever seen had over 80 cases, used strings for the case values, and averaged 12 lines per case – over 1000 lines in total! That is an extreme example, but it makes my point very clear: switches can become unreadable and hard to maintain. Whenever you have a switch statement you should ask yourself if using a Strategy pattern makes more sense.

In RulePlex I used the Strategy Pattern, something to the effect of this:
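A sketch of the idea – each engine implements a common interface, and a dictionary maps each rule language to its engine (apart from JavaScriptRuleEngine, the type names here are placeholders):

using System.Collections.Generic;

public interface IRuleEngine
{
    RuleResult[] Execute(Policy policy, object data);
}

public class RuleEngineRunner
{
    // One engine per language; adding a language is just one more entry here.
    private static readonly Dictionary<RuleLanguage, IRuleEngine> Engines =
        new Dictionary<RuleLanguage, IRuleEngine>
        {
            { RuleLanguage.JavaScript, new JavaScriptRuleEngine() },
            { RuleLanguage.CSharp,     new CSharpRuleEngine() },
            { RuleLanguage.Python,     new PythonRuleEngine() }
        };

    public RuleResult[] ProcessRules(Policy policy, object data)
    {
        // The dictionary lookup replaces the switch.
        return Engines[policy.Language].Execute(policy, data);
    }
}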

The downside to this pattern is that a dictionary lookup is slower than a switch statement. The upside is that it cleans up code very nicely. I could have added support for 10 new rule languages and the lookup code would have stayed the same. It’s up to you to decide between these trade-offs. My original goal was to support as many languages as possible in RulePlex, so using the Strategy Pattern would have saved me headaches down the road.

That all changes now that RulePlex is only using JavaScript rules. I don’t need the strategy pattern and I don’t even need a switch statement. Instead I can new-up a JavaScriptRuleEngine and call Execute() on it. Faster and cleaner!

On a side note (back to my comment on using enums): You should never use “magic strings”. Your world will be a much better place without them.

Almost a year ago I set out on a mission to create the first cloud-based rules engine. I called it RulePlex. After a few months of work I had succeeded. A lot of rules engines can be run (on virtual machines) in the cloud, but mine was designed with a cloud-first approach in mind. On top of being cloud-based I had a few other goals for the engine:

The Wright brothers’ first flight.

  • Allow rules to be written in any language
  • Allow integration with any app
  • Make it incredibly secure
  • Keep the cost low

I accomplished all of these. The engine was awesome. When I was at a good place to ease up on the development, I did, and started to connect with as many people as I could, trying to drum up business. I contacted all of my developer buddies, business contacts, previous employers… but no one was interested. I tried to generate business by presenting my software at business start-up events like 1MillionCups. I tried Google AdWords. I threw a Hail Mary to Scott Hanselman hoping he would do a Hanselminutes episode with me. I even gave the service away for free just to get signups so people would look at it… but all of my attempts at getting the business going failed. I don’t think I did enough, even though it may sound like I did.

I’m not giving up.

I’m changing things up!

Instead of being cloud-based I am falling in line with all of the other rule engines. RulePlex will be an on-premise install. It can still be run in the cloud, but it won’t be touted as the first cloud-based rules engine any more. It’s nice that I was able to accomplish that feat, but ultimately I think proximity to where it will be used will benefit companies more. The latency between the cloud and a corporate network is too much when apps need instant results.

Another thing I am changing is the ability to write rules in any language. I had support for JavaScript, C#, VB.NET, Ruby, and Python – but from now on rules will only be written in JavaScript. Going with only one language will save development time tremendously. I chose JavaScript because it is the most widely used language today. It’s also not going anywhere anytime soon. It has become an incredible server-side language thanks to V8 and the popularity of Node.js. It’s flexible and forgiving thanks to dynamic typing, and business users can learn the basics easily.

The last thing I will be doing is blogging a lot more about RulePlex. I’ve been working on it for almost a year and I haven’t written anything (directly) about it. I want people to follow along with its progress and see how it is evolving and why things work the way they do. I want you to see what is influencing my design choices. Hopefully this will grow into a community of RulePlex users, but I won’t get too far ahead of myself. Let’s just start with clicking publish on this post.