Error TF54000 when creating a new team project in TFS 2013

I set up a brand new TFS 2013 server and was in the process of creating my first team project when I received this error:

TF54000: Cannot update data because the server clock may have been set incorrectly. Contact your Team Foundation Server Administrator.

I had changed the time zone on the server after installing TFS, so I figured that was the cause. TFS has a table called tbl_Changeset with a CreationDate column that records when each changeset was checked in. When TFS is installed it creates the first changeset automatically, and after the time zone change the date on that first record was 3 hours ahead of the current time. I couldn’t create a team project because changeset 2 would have appeared to occur before changeset 1; the changeset IDs and CreationDate values need to be in the same chronological order for things to work.

To fix this I ran an update statement to set the first record back a day (it could have been 3 hours but 1 day works just as well). This is the query I used:

UPDATE tbl_Changeset SET CreationDate=CreationDate-1 WHERE ChangeSetId=1

I went back to Visual Studio and was then able to create my team project without issues.

Online markdown editor encrypting files to the cloud

I use text editors a lot. I use them for taking notes, writing code, writing long emails before I send them out, and much more. For a long time I’ve used Notepad++ because it can handle all kinds of text-based files. I also often encrypt the files I create in Notepad++, using a plugin called SecurePad.

Lately I have been using MarkdownPad to replace a lot of what I do in Notepad++. Markdown (the syntax) is great because it gives you conventions for how your text should be formatted. It’s used all over the place: Reddit, GitHub, and Stack Overflow, to name a few. Because of those conventions, markdown can easily be transformed into HTML and rendered in an eye-appealing way. MarkdownPad splits its window into two panes: the left pane is a text editor and the right pane is the HTML-formatted view. MarkdownPad is very basic, as is markdown in general, because it was intended to be simple and used only for basic text formatting. MarkdownPad also has no way of encrypting files.

So I got to thinking: why is MarkdownPad a Windows application? It’s so basic that it could easily be browser-based. It would also be nice if it encrypted files; in today’s landscape you can never be too careful when it comes to protecting your personal data, thoughts, and daily habits. I also thought, if you could make a browser-based markdown editor, where would you open and save files from? …the cloud!

After doing a quick search on Google I couldn’t find anything that remotely resembled the online tool I wanted, so I made it myself.

It’s called Secure Markdown: https://securemarkdown.azurewebsites.net

It’s a work in progress but the basics are there:

  • Text editor (left pane)
  • Markdown syntax is shown in HTML format as you type (right pane)
  • Files are read and written to Dropbox
  • Files are encrypted (in the browser) before being saved to Dropbox

In the future I would like to add:

  • Buttons for markdown-formatting text, like MarkdownPad.
  • Option for saving unencrypted.
  • Overwriting files (currently, saving a file with the same name as an existing file in Dropbox appends a number to the file name)
  • Separate save / save as options.
  • Option for viewing HTML source and downloading the HTML version.
  • Option for saving HTML view as a PDF
  • Move to own domain name if warranted

If you have any suggestions please leave a comment.

Improving WebAPI performance with Jil and Protocol Buffers

One of the reasons I don’t use WebAPI for internal services is the performance hit incurred when serializing and deserializing JSON and XML messages. Even for public APIs this is somewhat of a pain. After digging around I’ve found ways to improve the performance of WebAPI serialization.

In .NET there are lots of serializers, the most popular being Json.NET, which Microsoft chose as the default JSON serializer for WebAPI. Json.NET is great because it is fast, robust, and configurable. There are serializers that are faster than Json.NET, though. Take a look at these benchmarks:

(Serialization and deserialization benchmark charts; source: theburningmonk.com)

ServiceStack.Text, Jil, and protobuf-net are all faster than Json.NET. Protobuf-net is not a JSON serializer, but I’ll get to that in a minute. ServiceStack.Text slightly edges out Json.NET in terms of speed and, from what I can tell, is just as robust. I frown on using it, however, because it’s not entirely free: the free version will only deserialize 10 object types and serialize 20, so I will never use it. Json.NET is free, and the performance gain from ServiceStack isn’t enough for me to pay.

There is one more option we need to look at, though: Jil. Jil is optimized for speed over memory. The author of Jil goes into detail on his blog about the optimizations he made to get the best performance; it’s interesting reading if you have time. Jil is also used by the Stack Exchange API, which gets plenty of real-world use. I almost forgot to mention: it’s free!

Protocol Buffers, or Protobuf for short, is Google’s serialization format, and they use it heavily for their internal services. Unlike JSON it is a binary format. I was interested to see the payload size of binary serialization versus JSON, so I did a quick test: Json.NET produced a payload of 728 bytes, while the same object serialized with protobuf-net was 444 bytes (39% smaller). Serialization itself is also faster, as you can see in the benchmarks above. The combination of the two means you can receive and send WebAPI messages faster than with the default serializer.
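If you want to see the size difference yourself, here is a rough sketch of how the measurement can be done with Json.NET and protobuf-net (the Person type is just a stand-in, not the object from my test):

// Rough payload-size comparison between Json.NET and protobuf-net.
// Person is only an illustration; any [ProtoContract]-decorated type works.
using System;
using System.IO;
using System.Text;
using Newtonsoft.Json;
using ProtoBuf;

[ProtoContract]
public class Person
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
}

class PayloadSizeDemo
{
    static void Main()
    {
        var person = new Person { Id = 1, Name = "Test" };

        // Json.NET: measure the UTF-8 byte count of the JSON string
        string json = JsonConvert.SerializeObject(person);
        int jsonBytes = Encoding.UTF8.GetByteCount(json);

        // protobuf-net: serialize to a MemoryStream and measure its length
        long protoBytes;
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, person);
            protoBytes = ms.Length;
        }

        Console.WriteLine("JSON: {0} bytes, Protobuf: {1} bytes", jsonBytes, protoBytes);
    }
}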

So how can we use these serializers in WebAPI? The great thing about WebAPI is that swapping out serializers and adding support for new ones is a breeze. In my projects I am now swapping out Json.NET for Jil and adding support for protobuf. WebAPI will return the result to the caller in whichever supported format they ask for via the Accept header. Here are some articles on how you can do this too!
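As a starting point, here is a sketch of a Jil-backed MediaTypeFormatter and how it could be registered. Treat it as an outline rather than production code; I’m assuming Jil’s TextReader/TextWriter overloads here, so double-check them against the version you install:

// Sketch: a Jil-based JSON formatter for Web API (assumes the Jil NuGet package).
using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Jil;

public class JilFormatter : MediaTypeFormatter
{
    public JilFormatter()
    {
        SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json"));
        SupportedEncodings.Add(new UTF8Encoding(false, true));
    }

    public override bool CanReadType(Type type) { return true; }
    public override bool CanWriteType(Type type) { return true; }

    public override Task<object> ReadFromStreamAsync(Type type, Stream readStream,
        HttpContent content, IFormatterLogger formatterLogger)
    {
        // Let Jil deserialize straight from the request stream.
        var reader = new StreamReader(readStream);
        object result = JSON.Deserialize(reader, type);
        return Task.FromResult(result);
    }

    public override Task WriteToStreamAsync(Type type, object value, Stream writeStream,
        HttpContent content, TransportContext transportContext)
    {
        // Serialize directly to the response stream and flush (don't dispose the stream).
        var writer = new StreamWriter(writeStream);
        JSON.SerializeDynamic(value, writer);
        writer.Flush();
        return Task.FromResult(0);
    }
}

// In WebApiConfig.Register, swap the default Json.NET formatter for Jil:
// config.Formatters.Remove(config.Formatters.JsonFormatter);
// config.Formatters.Insert(0, new JilFormatter());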

Using a custom default font color in Lync 2010

I use Lync 2010 at work and wanted to use orange as my default font color. The problem is that it’s not available through the settings dialog box. These are the options it gives me…

lync2010_colors_dialog

No orange!?!

I figured the settings must be stored in the Windows Registry, so I opened up the Registry Editor and navigated to the key [HKEY_CURRENT_USER\Software\Microsoft\Communicator]. There is a setting called "IM CharFormat" that stores the default font information. It is a binary value and you can modify it manually. If you look closely, the color is stored as an HTML hex color code within the binary data. Here it is…

lync2010_default_font_custom_color

All you need to do is get the HEX value for your custom color and update this part of the value with it. #F26522 is the orange color I want. You will have to completely close down Lync and then restart it after you save the registry value. Now open up the default font dialog in Lync again. You will see that your custom color is now being used.

lync2010_colors_dialog_with_orange

If you choose one of the other colors then the “Custom Color” option will go away and you’ll have to edit the registry again. If you edit the font type, size, or other options and keep the custom color selected it will stay.

The color picker in Lync 2013 is different and I believe you can select a custom color with it but you can still do this registry hack if you wish. It will also work with earlier versions of Lync/Communicator.
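If you’d rather inspect the value programmatically before editing it, a few lines of C# will dump the current bytes as hex so you can find the color. This is just a convenience sketch; the exact position of the color bytes can vary, so compare the output against the screenshot above:

// Dump Lync's "IM CharFormat" registry value as hex to locate the color bytes.
using System;
using Microsoft.Win32;

class DumpImCharFormat
{
    static void Main()
    {
        using (var key = Registry.CurrentUser.OpenSubKey(@"Software\Microsoft\Communicator"))
        {
            var data = key == null ? null : key.GetValue("IM CharFormat") as byte[];
            if (data == null)
            {
                Console.WriteLine("IM CharFormat value not found.");
                return;
            }
            Console.WriteLine(BitConverter.ToString(data));
        }
    }
}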

Presenting RulePlex at 1 Million Cups

Today I presented my startup company, RulePlex, at 1 Million Cups. Here is a video of the presentation.

I’ve gotten a few comments about it being too technical and that I should give a demo of the product while presenting. This was my first presentation to a large audience on the service and I think those ideas are great. I’ll be working them in for next time.

Expiring links after a given timeframe

Here is one way to expire a link/web page after a certain amount of time has passed. Instead of keeping track of whether a link is valid through a database look-up, this approach verifies that the expiration date in the URL generates the same token that is also passed in through the URL. If the user tampers with the date value, the token will no longer match, and users cannot generate a valid token without the secret key you keep on your server. Here is how it’s done.

First we need to create a model for our expiration date and token:

public class ExpiresModel
{
    public DateTime ExpiresOn { get; set; }
    public string Token { get; set; }
}

Next we need a utility for generating and checking tokens:

public class TokenHelper
{
    private const string HashKey = "secret";

    public static string GetToken(ExpiresModel model)
    {
        if(model == null)
            throw new ArgumentNullException("model");

        return Crypto.HashSHA512(String.Format("{0}_{1}", HashKey, model.ExpiresOn.Ticks));
    }

    public static bool IsValidToken(ExpiresModel model)
    {
        if (model == null)
            throw new ArgumentNullException("model");

        return model.Token == GetToken(model);
    }
}

Notice that the TokenHelper class is where the secret key lives. The Crypto class I used can be found here.
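If you don’t want to pull that class in, a minimal stand-in works just as well for this sample, since the same helper both creates and validates the token (the exact hashing details below are my own choice, not necessarily what the linked class does):

// Minimal stand-in for the linked Crypto class: SHA-512 over UTF-8 bytes, hex output.
using System;
using System.Security.Cryptography;
using System.Text;

public static class Crypto
{
    public static string HashSHA512(string input)
    {
        using (var sha = SHA512.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(input));
            return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }
    }
}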

For the front end I have one page which creates the link and another to check the status. Here is the controller for that:

public class HomeController : Controller
{
    public ActionResult Index()
    {
        var model = new ExpiresModel();
        model.ExpiresOn = DateTime.Now.AddSeconds(30);
        model.Token = TokenHelper.GetToken(model);
        return View(model);
    }

    public ActionResult Check(long dateData, string token)
    {
        var model = new ExpiresModel();
        model.ExpiresOn = DateTime.FromBinary(dateData);
        model.Token = token;

        if (!TokenHelper.IsValidToken(model))
            ViewBag.Message = "Invalid!";

        else if (model.ExpiresOn >= DateTime.Now)
            ViewBag.Message = "Still good: Expires in " + (model.ExpiresOn - DateTime.Now);

        else
            ViewBag.Message = "Not good: Expired on " + model.ExpiresOn;

        return View();
    }
}

And the Views for those controller methods…
Index:

@model WebApplication4.Models.ExpiresModel
@{
    ViewBag.Title = "Home";
    Layout = "~/Views/Shared/_Layout.cshtml";
}

<h3>Home</h3>

<p>A new link has been generated that expires in 30 seconds. Use the link below to check the status:</p>
<p><a href="@Url.Action("Check", new { dateData = Model.ExpiresOn.ToBinary(), token = Model.Token })">Check Now</a></p>

Check:

@{
    ViewBag.Title = "Check";
    Layout = "~/Views/Shared/_Layout.cshtml";
}

<h3>Check</h3>

<p>@ViewBag.Message</p>

<p>@Html.ActionLink("Gernate another link", "Index", "Home", new { area = "" }, null)</p>

The entire solution can be downloaded here.

Stopwatch Class in JavaScript

I needed the equivalent of .NET’s Stopwatch class in JavaScript today. I did a quick search and could only find actual stopwatch apps, so I figured it would be faster to write it myself.

Here is a fiddle that shows the usage

and here is the code for the Stopwatch class

Which .NET JavaScript Engine is the fastest?

UPDATE: Added ClearScript

In RulePlex users are allowed to write rules in JavaScript, make an API call which passes in data, and execute those rules in the cloud. RulePlex is written in .NET. So how do we execute JavaScript in .NET? It turns out there are a bunch of JavaScript engines that can do this, but which one is the fastest?

I took an inventory of the more popular .NET JavaScript engines:

My initial thoughts were that JavaScript.Net would be fast since it is just a wrapper for Google’s V8 engine which is the fastest JavaScript engine currently. I also thought IronJS would be fast since it uses Microsoft’s Dynamic Language Runtime. jint and Jurassic I was skeptical about.

The Tests

I created a project and referenced each engine by using NuGet. I called each engine 5 times to execute a snippet of code and took the average. The snippet of code I executed came from a suite of array tests I found at Dromaeo. You can view the tests in this gist.
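The harness itself was nothing fancy. Here is a sketch of the approach: wrap each engine call in an Action, time it with Stopwatch, and average the runs. Jurassic is shown as the example, the script file name is a placeholder, and the other engines were wrapped the same way:

// Sketch of the timing harness: run each engine 5 times and average the elapsed time.
using System;
using System.Diagnostics;
using System.Linq;

class EngineBenchmark
{
    static double AverageMilliseconds(Action run, int iterations = 5)
    {
        var times = new double[iterations];
        for (int i = 0; i < iterations; i++)
        {
            var sw = Stopwatch.StartNew();
            run();
            sw.Stop();
            times[i] = sw.Elapsed.TotalMilliseconds;
        }
        return times.Average();
    }

    static void Main()
    {
        // Placeholder file containing the Dromaeo-based array tests.
        string script = System.IO.File.ReadAllText("array-tests.js");

        double jurassicMs = AverageMilliseconds(() =>
        {
            var engine = new Jurassic.ScriptEngine();
            engine.Execute(script);
        });

        Console.WriteLine("Jurassic: {0:N0} ms", jurassicMs);
    }
}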

I also did another test where I loaded the linq.js library (one of my favorite, lesser known, JavaScript libraries).

The Results

Array test results:

jint 31,378 ms
IronJS 2,499 ms
JavaScript.Net 21 ms
Jurassic 494 ms
ClearScript 261 ms
ClearScript (compiled) 24 ms

Linq.js load results:

jint 35 ms
IronJS 245 ms
JavaScript.Net 14 ms
Jurassic 170 ms
ClearScript 49 ms
ClearScript (compiled) 1 ms

It turns out I was right about JavaScript.Net: it is much faster than any of the others. I did run into a bit of a quirk with that library, though: I couldn’t use "Any CPU" as my platform target and instead had to target x86. (You can probably target x64 as well; it just can’t be Any CPU, because the correct C++ libraries need to be targeted.)

I was completely wrong in thinking IronJS would perform well. jint was by far the worst due to its incredibly bad array execution time. Jurassic was a pleasant surprise; it performed both the array test and the linq.js load reasonably fast.

If you come across any other .NET JavaScript engines feel free to let me know and I’ll add them to my comparison.

I was made aware of ClearScript which I’ve now added to the results list. This library also runs V8 but doesn’t need the C++ libraries. It looks almost as impressive as JavaScript.Net.

One More Test

I wasn’t entirely happy with the tests I had done, so I added one more. The script I executed does only one small thing: set a variable to true. This shows, more or less, the overhead of each engine. I ran this test 5,000 times per engine and took the average.

One variable results:

jint 0 ms
IronJS 1 ms
JavaScript.Net 10 ms
Jurassic 8 ms
ClearScript 29 ms
ClearScript (compiled) 0 ms

Here is the complete script I used. I swapped out currentScript and changed N as needed.

Choosing a service framework

The release of Web API marks what I count as the 4th service framework Microsoft has released for .NET. In this post I will discuss my reasoning behind which one to use. Here are the 4 in order of when they were released:

  1. ASMX Web Services
  2. WCF Services
  3. OData
  4. Web API

These aren’t all upgrades from one to the other despite what popular culture may dictate. There are actually good cases for using each one of these.

…except ASMX Web Services. You should never create anything using this. When I see vendors who still use this technology I cringe and will not use them at all. It’s a sign that they haven’t kept up with technology. It’s not just old products using this. It can be new products too – ones that have old developers who aren’t up to speed. Old does not necessarily mean “mature” or “weathered”. Actually in the cases I’ve seen it means those services are full of bugs and the developers are slow to fix them.

WCF Services

This is the one technology that was meant to handle all web service scenarios, and it completely replaces ASMX. The pros of using WCF are that you get a contract to code against, it’s fast and flexible, and it works over many different protocols. The downside is that it takes more work to set up and the configuration choices can be confusing. It’s been out for a while, so it has lost that "new hotness" appeal, but overall WCF is the best option for creating a serious service.

I choose WCF when I am building an internal Service Oriented Architecture, because I can use special protocols that make communication faster. For example, I use net.tcp for internal client-to-server or server-to-server communication; net.tcp uses binary message encoding, which is fast. For services talking to each other on the same machine it gets even better, because you can use net.pipe. In that case the messages never touch the network at all; they travel over named pipes on the local machine, which is about as fast as it gets. It’s like hooking up your brain to another person’s brain and just thinking to each other.

WCF services can be hosted by any .NET application type, most typically IIS or a Windows Service. I prefer hosting in a Windows Service so the service is always ready, whereas IIS application pools can shut down and take a while to spin back up when a new request comes in. OData and Web API can’t do any of these things.
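To make the net.tcp and self-hosting points concrete, here is a sketch of what that looks like. IOrderService, OrderService, and the address are placeholders, and in a real deployment the Open/Close calls would live in a Windows Service's OnStart/OnStop rather than a console app:

// Sketch: self-hosting a WCF service over net.tcp.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetStatus(int orderId);
}

public class OrderService : IOrderService
{
    public string GetStatus(int orderId) { return "Shipped"; }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(OrderService), new Uri("net.tcp://localhost:8523/orders"));
        host.AddServiceEndpoint(typeof(IOrderService), new NetTcpBinding(), string.Empty);
        host.Open();

        Console.WriteLine("Service listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}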

OData

This was meant to be used for broad access to shared data. OData hooks a database up directly to a web service and makes the data queryable by manipulating the query string. A good example for OData would be the US Census Bureau sharing yearly survey results with the public over the internet, or a library sharing its catalog of books and media. This type of web service is less common, and it should never be used with a transactional database.

Web API

I’m not exactly sure why this technology was created, other than so Microsoft could say it has a REST-based API solution, but it has evolved into a useful option over time. It is a good choice for public services accessed over the internet. Its two most prominent features are:

  1. Content coming from the client is deserialized based on the Content-Type header, and results are serialized back in the format the client asks for. The most common content types are XML, JSON, and BSON (binary JSON).
  2. It follows the same pattern as ASP.NET MVC, so it’s easy to pick up for developers familiar with that technology.

Because Web API is the "new hotness", a lot of folks have been creating these types of services instead of WCF, even when WCF is more appropriate. It’s not an upgrade; Web API and WCF are useful for different scenarios. Usually when I create a Web API service I put a WCF service layer behind it, because you normally don’t have just a standalone API. Most of the time you have an accompanying application which uses the same functionality as the API.
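For reference, here is what a minimal Web API controller might look like (the Order type and values are placeholders); the framework serializes the return value as JSON or XML depending on the client’s Accept header:

// Sketch: a minimal ASP.NET Web API controller relying on built-in content negotiation.
using System.Web.Http;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class OrdersController : ApiController
{
    // GET api/orders/5
    public Order Get(int id)
    {
        return new Order { Id = id, Status = "Shipped" };
    }
}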

Run SQL Query and Email Results

The program below will run a SQL script, convert the results to an HTML table, and email them to you. Everything is passed in via command line parameters, which makes it great for running on the fly from a batch file or as a scheduled task. Here is how you use it:

Usage:

sql_emailer.exe <script> <title> <email> <conn>

Parameters:

  1. script – Path to a SQL script which contains the query to be run.
  2. title – Used as the email subject and as a header within the email body.
  3. email – A single email address or comma delimited list of email addresses to send the query results to.
  4. conn – A reference to an appSettings key in the config file which contains the connection string of the database to connect to. If this parameter is omitted then “DefaultConnectionString” is used.
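
For example, a call might look like this (the script path, subject, addresses, and key name are just placeholders):

sql_emailer.exe "C:\scripts\daily_orders.sql" "Daily Orders Report" "me@example.com,team@example.com" ReportsDb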

In the config file you will need to add the appropriate appSettings keys with connection strings to your database(s). You will also need to configure the SMTP section in the config with your SMTP server settings.

Download sql_emailer.zip

Here is the source code minus the table styling from the exe: