Authly July 2020 Update

Development on Authly is going well. My first task was setting up the architecture and the Azure resources I needed. When I first created the solution I had brought in Identity Server 3.x but since then they've come out with 4.0, so I upgraded to that a few days ago. Doing so immediately affected what I was working on because the persisted grant models changed. I am in the middle of writing my own storage libraries currently. Grants will be stored both in Redis and SQL Azure: Redis for quick read access and SQL Azure for historical and billing purposes... which brings me to the billing model I've been thinking of.
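As a sketch of that dual-store idea, here's roughly what the write path could look like. This assumes IdentityServer4's IPersistedGrantStore interface; IRedisGrantRepository and ISqlGrantRepository are hypothetical abstractions over Redis and SQL Azure that I'm using for illustration.

```csharp
// Sketch only: IRedisGrantRepository and ISqlGrantRepository are hypothetical
// wrappers around StackExchange.Redis and SQL Azure respectively.
public class DualPersistedGrantStore : IPersistedGrantStore
{
    private readonly IRedisGrantRepository _redis; // fast read access
    private readonly ISqlGrantRepository _sql;     // history and billing

    public DualPersistedGrantStore(IRedisGrantRepository redis, ISqlGrantRepository sql)
    {
        _redis = redis;
        _sql = sql;
    }

    public async Task StoreAsync(PersistedGrant grant)
    {
        // SQL is the durable record for billing; Redis serves reads.
        await _sql.SaveAsync(grant);
        await _redis.SaveAsync(grant);
    }

    public Task<PersistedGrant> GetAsync(string key) => _redis.GetAsync(key);

    // GetAllAsync, RemoveAsync, and RemoveAllAsync omitted for brevity.
}
```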

Most identity services charge around $2 per active user per month, a price based on a constantly active user. If you have a user who logs in once a month, they are considered active and you're paying $2 for them. It's essentially free money for the identity service. Conversely, you could have multiple users share the same account and be charged only $2 total. Neither case accurately reflects the cost of the services rendered.

I am thinking of charging on a per-login or per-token basis. Doing so would reflect actual usage more closely than what other companies are doing. I'm still working on the cost model but it will involve multiple tiers.
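To make the tiered idea concrete, here's a toy sketch of a per-login cost calculation. The tier boundaries and prices below are made up purely for illustration; I haven't settled on real numbers yet.

```csharp
using System;

// Hypothetical tiers: the per-login price drops as monthly volume grows.
public static class LoginPricing
{
    // (volume threshold, price per login in USD) - illustrative numbers only
    private static readonly (int UpTo, decimal Price)[] Tiers =
    {
        (10_000, 0.005m),
        (100_000, 0.002m),
        (int.MaxValue, 0.001m)
    };

    public static decimal MonthlyCost(int logins)
    {
        decimal cost = 0m;
        int priced = 0;
        foreach (var (upTo, price) in Tiers)
        {
            // Price only the logins that fall inside this tier.
            int inTier = Math.Min(logins, upTo) - priced;
            if (inTier <= 0) break;
            cost += inTier * price;
            priced += inTier;
        }
        return cost;
    }
}
```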

One struggle I am having is using a dynamic list of Identity Providers. All usage I've seen expects IdPs to be defined in the app Startup. I'm still digging into this.
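One avenue I'm exploring (not settled on it yet) is ASP.NET Core's IAuthenticationSchemeProvider, which lets you add authentication schemes at runtime instead of only in Startup. A rough sketch, assuming the IdP details come from storage (IdpRecord is a hypothetical entity):

```csharp
// Sketch: register an OpenID Connect IdP at runtime rather than in Startup.
// IdpRecord is a hypothetical record loaded from a database.
public void AddIdentityProvider(IdpRecord idp,
    IAuthenticationSchemeProvider schemeProvider,
    IOptionsMonitorCache<OpenIdConnectOptions> optionsCache)
{
    schemeProvider.AddScheme(new AuthenticationScheme(
        idp.Scheme, idp.DisplayName, typeof(OpenIdConnectHandler)));

    // Seed the options cache so the handler can find its configuration.
    optionsCache.TryAdd(idp.Scheme, new OpenIdConnectOptions
    {
        Authority = idp.Authority,
        ClientId = idp.ClientId,
        ClientSecret = idp.ClientSecret
    });
}
```

I still need to verify how well this plays with Identity Server, but it at least avoids baking every IdP into Startup.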

I will try to blog about my progress once a month.

Tags: authly

Doin it again

Like LL Cool J said back in '96: I'm doin it, and doin it, and doin it again!


At the end of 2016 I said I was doin it again too, but never did (follow the link in that post to an empty GitHub repo), and by now I've already learned .NET Core. I've also learned Identity Server and have integrated one of my previous attempts at an SSO with IS4. I've learned a lot of other things since then too. My need now is mostly about freedom. I want to be free of working for someone else, to do what I please, whenever I want. To get there I have to start making money on my own. This will allow me to do that.

I know I can compete with the major players in the Identity Management space. I've built this solution so many times already that I can do it in my sleep. I may not be able to beat them on number of features right away, but I can beat them on price. The other guys are expensive at $2/active user/month. It's why a lot of companies choose to roll their own instead of using a provider. I'm hoping to change that by charging per-login (and per refresh token). It will be cheap - like pennies per login cheap. I'm a single developer doing this in my own free time, so I can afford to. I have some other tricks up my sleeve, but this is the plan at its most basic.

As far as the tech goes, I have already started working on it. It's on .NET Core 3.1 using SQL Azure and hosted as an App Service in Azure. I'm also using the latest version of Identity Server for the OpenID Connect and OAuth stuff. It will auth against local users, social media, and other Identity Providers. The solution is mostly about user, client, authN, authZ, and access control management. It's not just a hosted version of Identity Server.

I'm calling it Authly. Not 100% sold on the name yet. It could be memorable with some witty tie-ins to the word awfully. We'll see. More to come soon!

InterfaceRpc 2.0

This past week my Interface RPC library hit version 2.0 and I want to take a minute to go over how I got there.

I used the first version for a few services that my company depends on. It ran in production just fine, for a while. After a couple months, though, I found an issue. The problem was with how my code handled exceptions, which in turn caused an additional exception. This caused threads to hang and eventually starve the system. It was a simple fix, and the services continued to run fine.

I ran into a second problem a few months later. This one, however, was much more serious and caused a service to hang until it was restarted. I knew which method was being called to cause it, but that's about it. I didn't dig much further than that because I didn't want to maintain the HttpListener implementation of this library any more. It was a stop-gap for how I really wanted it to work anyway...

Ideally I wanted to host the service on ASP.NET Core and plug into that ecosystem of things, like dependency injection, configuration, and security.

So that's what I did. Instead of fixing a bug I chucked most of my code, refactored what was left (A LOT), and made a 2.0 I felt a whole lot better about.

I've had 2.0 running in production for months now with no issues. This past week a co-worker tidied up one last thing, which pushed me over the edge: I removed the pre-release moniker and published the official 2.0 NuGet package! Let's take a look.

Service Setup

This is done during Startup within the Configure method. The RPC service is now essentially middleware in the ASP.NET pipeline.

public void Configure(IApplicationBuilder app)
{
    app.UseRpcService<IDemoService>(options => {...});
}

The options you can configure are:

  • Prefix - hosts the service under a virtual subdirectory. This is useful for hosting multiple services in the same app.
  • AuthorizationScope - either None, Required, or AdHoc (default). If AdHoc then authorization will only be checked when your method is decorated with AuthorizeAttribute.
  • ServiceFactory - a function that returns the instance of the interface. You can grab this from ASP.NET's DI container by calling app.ApplicationServices.GetService<IDemoService>()
  • AuthorizationHandler - a function that returns true for authorized. See below for an example.
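Putting those options together, a fuller configuration might look like this (the prefix and service names are just examples):

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseRpcService<IDemoService>(options =>
    {
        // Host at /demo so another service can live beside it in the same app.
        options.Prefix = "demo";
        // Only check authorization on methods decorated with [Authorize].
        options.AuthorizationScope = AuthorizationScope.AdHoc;
        // Resolve the implementation from ASP.NET's DI container.
        options.ServiceFactory = () => app.ApplicationServices.GetService<IDemoService>();
    });
}
```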

Here we are looking for a security token in the Authorization header to determine if a user is authorized to access the service.

options.AuthorizationHandler = (methodName, instance, context) =>
{
    if(!context.Request.Headers.TryGetValue("Authorization", out StringValues authHeader))
    {
        return false;
    }
    var parts = authHeader.ToString().Split(" ".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
    if(parts.Length != 2)
    {
        // expecting "Bearer <token>"
        return false;
    }
    var bearerCredentials = parts[1];
    var tokenHandler = new JwtSecurityTokenHandler();

    var validationParameters = new TokenValidationParameters
    {
        IssuerSigningKey = _signingCredentialStore.GetSigningCredentialsAsync().GetAwaiter().GetResult().Key,
        ValidAudience = "demo-service",
        ValidIssuer = "https://sso.domain.com"
    };

    var user = tokenHandler.ValidateToken(bearerCredentials, validationParameters, out SecurityToken securityToken);
    var token = securityToken as JwtSecurityToken;
    // ValidTo is in UTC, so compare against UtcNow
    return token != null && token.ValidTo >= DateTime.UtcNow;
};

_signingCredentialStore is defined in the Startup class as private static ISigningCredentialStore _signingCredentialStore; and is set in the ConfigureServices method by calling serviceProvider.GetService<ISigningCredentialStore>().
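Spelled out, that wiring looks something like this (how ISigningCredentialStore gets registered depends on your IdentityServer setup, so treat this as a sketch):

```csharp
public class Startup
{
    private static ISigningCredentialStore _signingCredentialStore;

    public void ConfigureServices(IServiceCollection services)
    {
        // ISigningCredentialStore is registered by the IdentityServer setup
        // (details omitted here).
        var serviceProvider = services.BuildServiceProvider();
        _signingCredentialStore = serviceProvider.GetService<ISigningCredentialStore>();
    }
}
```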

IDemoService is also configured in ConfigureServices just like any other implementation would.

services.AddTransient<IDemoService, DemoService>();

Creating a Client

The client code hasn't changed much.

var client = RpcClient<IDemoService>.Create(new RpcClientOptions
{
    BaseAddress = "https://demo.domain.com",
    SetAuthorizationHeaderAction = () =>
    {
        return new RpcClientAuthorizationHeader
        {
            Credentials = "access token",
            Type = "Bearer"
        };
    }
});

SetAuthorizationHeaderAction will be called before every request.

Wrapping Up

Everything else pretty much works the same as before. The service responds to POST requests where the method name in the URL translates to a method on the interface. Serialization can be any type defined in the SerializerDotNet library (JSON and Protobuf currently).

One thing that is not apparent is that the service and client both handle async methods.
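For example, an interface method returning Task works on both sides, and the client call can simply be awaited:

```csharp
public interface IDemoService
{
    Task<string> EchoAsync(string echo);
}

// On the client side:
var echoed = await client.EchoAsync("hello");
```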

I hope you find this useful and choose to use it for your next project! If you'd like to contribute you can find the source on GitHub.

Interface to RPC Service

I'm currently working on a project to quickly stand up an RPC service based on an interface. It's a .NET Standard library. Here's how it works.

Let's say you have an interface,

public interface IEchoService
{
    string Echo(string echo);
}

, and you have an implementation that needs to be exposed so that others can call it. I chose HTTP for this.

On the server I use HttpListener to create a web server and listen for requests. When a request comes in the path maps to a method name. For example, if my web server is listening on http://localhost:6000/ and a request for http://localhost:6000/Echo comes in - that maps to the Echo method on the interface... then I use reflection to call it on the implementation that was provided.

If Echo didn't map to a method on the interface then a 404 response is returned.

In my library all HTTP requests are POSTs. The content is deserialized based upon the Content-Type header value of the request. I am using another library I created, called SerializerDotNet, to handle this. At the time of writing it supports JSON and Protobuf. The same serializer used for deserialization is also used to return data in the response, and the response's Content-Type is the same as the incoming request's.
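Conceptually, the request handling boils down to something like this simplified sketch. Error handling is omitted and the serializer calls are stand-ins, not the real SerializerDotNet API:

```csharp
// Simplified dispatch: map the URL path to an interface method, deserialize
// the POST body with a serializer chosen by Content-Type, invoke via
// reflection, and hand the result back for serialization.
object HandleRequest(string methodName, string contentType, byte[] body, object implementation)
{
    var method = typeof(IEchoService).GetMethod(methodName);
    if (method == null)
    {
        // the caller turns this into a 404 response
        throw new InvalidOperationException("No such method: " + methodName);
    }

    // Hypothetical serializer selection; the real library keys off Content-Type.
    ISerializer serializer = contentType == "application/x-protobuf"
        ? (ISerializer)new ProtobufSerializer()
        : new JsonSerializer();

    var args = serializer.Deserialize<object[]>(body); // sketch; real code maps parameters
    var result = method.Invoke(implementation, args);
    return result; // serialized with the same serializer for the response
}
```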

Remember when I said "quickly stand up an RPC service"? This is how quickly it's done...

var svc = new RpcService<IEchoService>(new EchoService());
svc.Start();

To use this in your own project add InterfaceRpc.Service to your project using NuGet.

There is a file called rpcsettings.json where you can specify additional settings - currently only the web server prefixes and the number of connections.

To create a client it's just as easy...

var client = RpcClient<IEchoService>.Create("http://localhost:6000/");

To use this in your own project add InterfaceRpc.Client to your project using NuGet.

By default it uses JsonSerializer. There is an optional second argument where you can specify the type of ISerializer to use (like ProtobufSerializer).

Then you can call any method defined in your interface:

var echoed = client.Echo("hello");

Limitations

(stuff I could use help on)

  • I punted on handling SSL certs, so it only runs over plain HTTP right now. This seems reasonable given that it's common to set up a load balancer or proxy in front of web sites and services.

  • Method signatures are limited to 8 parameters - this could expand in the future if needed, but I think once you have that many it's better to create a class with a property for each and use one parameter in the signature.

  • Because of limitations with ValueTuple, text-based serialization reads a little funky, but I'm sure Microsoft will fix this in future versions of .NET Standard. In the meantime I'm curious to see what people think about having "Item1", "Item2", etc... map to method parameters. There could be some work done to replace these with parameter names, but it would add (unnecessary?) overhead.

  • A good name. Any suggestions?

  • See the issues list on github for more

Why?

This will help tremendously in moving off of WCF / .NET Framework and onto .NET Core. WCF clients aren't supported in .NET Core. The good thing about WCF is that it is interface-based, so swapping out a server implementation is relatively easy. Assuming a ChannelFactory was used for the client, it can be replaced even more easily.

If you'd like to help in fleshing this idea out more please contribute through the github repo!

Tags: rpc services

Losing Our Memory

I read this post by Scott Hanselman a while back and agree with him that memory management in .NET is not a concern for developers. I have not thought directly about any of the items he listed within the last 10 years. I learned low-level programming early on, but once I moved into higher-level languages, memory management became something I thought about less and less. There are three factors that contribute to this: memory capacity, cost, and management.

Memory capacity took a huge jump in the late 2000s. It became common for run-of-the-mill servers to have 16GB+ of RAM. The amount of memory available now is so excessive that database servers can run entirely in memory. In the unlikely event that memory runs out, the emergence of the cloud and auto-scaling saves us - which brings me to my next point.

Memory costs nothing! It's so cheap that maxing out a server with the most RAM possible is expected. If your server is running in the cloud it's not even worth mentioning "memory". The cost of cloud servers is cheap and auto-scaling means you're only paying for what you need anyways. If you need more memory the cloud will give it to you automatically and for the most part without you even knowing!

The third reason why, and most specifically for .NET developers, is that the garbage collector does a darn good job of memory management. This was one of the original selling features of the framework! Let the GC handle what it was meant to do and let developers solve the problem at hand.

Recently I've thought a lot about memory management. That's because my company develops firmware in C on handheld devices with a small amount of memory. Hardware specs aren't something companies change regularly, and we have a good sourcing cost for the chips we get. Let's just say there is no incentive to add more memory, so developers have to make do with what they've got.

I'm not writing C, so I'm not thinking about memory directly, but I'm writing web services that are consumed by these devices. Here are some things I have to keep in mind during development:

  • The serialization format must be compatible and performant on our device.
  • The data types returned must be platform agnostic.
  • The data structures must adhere to a strict contract.
  • The size of each piece of data could be an issue.
  • The overall size of requests and responses must be as small as possible.

If all of these criteria are met then I have a pretty good chance of the device being able to work well with the service.

As far as the technologies I chose, I went with Protobuf for serialization. It's fast and produces small output. It's supported on all of the systems & languages my company is working with. Both the software and firmware teams can utilize .proto definitions as contracts and generate structures/classes from them. Protobuf isn't a silver bullet though. I still need to be mindful of the data I am working with...

For example, before our device can communicate with the service it must obtain a security token (a string). I found a bug in the way the security token was generated, so I fixed it and pushed a new version of the service out. Suddenly our devices stopped working! What happened?

It turns out that the token, before my fix, was 1000 characters long. The firmware team had allocated 1000 characters for that field in their code. After my fix, the generated token was 1200 characters long. The device was truncating the string, and subsequent requests to the service couldn't be validated because the token wasn't correct. I hadn't given one thought to the size of the token value, because a string in .NET can hold a variable-length (fairly large) value no problem. On our devices, though, that 1000-character limit was critical!

In conclusion, I don't expect .NET developers will actively pay attention to memory management. They should keep it in the back of their mind at all times though. It's important to know how the framework you are working with deals with memory. It would be a good investment to dig deep into it when beginning to learn a new framework. Once .NET Core becomes more prevalent I expect memory management will garner some attention, but ultimately it will end up in the state that .NET Framework developers are in now - ignorance.

Hello

Rush Frisby

Rush is a software architect who loves solving problems. You'll usually find him writing about security concepts and sharing solutions to problems he's faced.