Interface to RPC Service

I'm currently working on a project to quickly stand up an RPC service based on an interface. It's a .NET Standard library. Here's how it works.

Let's say you have an interface:

public interface IEchoService  
{
    string Echo(string echo);
}

You also have an implementation of it that needs to be exposed so that others can call it. I chose HTTP for this.

On the server I use HttpListener to create a web server and listen for requests. When a request comes in, the path maps to a method name. For example, if my web server is listening on http://localhost:6000/ and a request for http://localhost:6000/Echo comes in, that maps to the Echo method on the interface, and I use reflection to call it on the implementation that was provided.
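To make the dispatch idea concrete, here is a rough sketch of what that loop could look like. This is not the library's actual code - MiniRpcHost is a made-up name, and for brevity it only handles methods that take a single string parameter:

using System;
using System.IO;
using System.Net;
using System.Text;

public class MiniRpcHost<TInterface>
{
    private readonly TInterface _implementation;
    private readonly HttpListener _listener = new HttpListener();

    public MiniRpcHost(TInterface implementation, string prefix)
    {
        _implementation = implementation;
        _listener.Prefixes.Add(prefix); // e.g. "http://localhost:6000/"
    }

    public void Start()
    {
        _listener.Start();
        while (true)
        {
            var context = _listener.GetContext();

            // Map the path to a method name: "/Echo" -> "Echo"
            var methodName = context.Request.Url.AbsolutePath.Trim('/');
            var method = typeof(TInterface).GetMethod(methodName);

            if (method == null)
            {
                // No matching method on the interface: return 404
                context.Response.StatusCode = 404;
                context.Response.Close();
                continue;
            }

            // The real library deserializes the POST body into the method's
            // parameters; this sketch just passes the raw body as one string.
            string body;
            using (var reader = new StreamReader(context.Request.InputStream))
            {
                body = reader.ReadToEnd();
            }
            var result = method.Invoke(_implementation, new object[] { body });

            // The real library serializes the result with the serializer chosen
            // from the request's Content-Type; plain text is used here.
            var bytes = Encoding.UTF8.GetBytes(result?.ToString() ?? string.Empty);
            context.Response.StatusCode = 200;
            context.Response.OutputStream.Write(bytes, 0, bytes.Length);
            context.Response.Close();
        }
    }
}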

If the path doesn't map to a method on the interface, a 404 response is returned.

In my library all HTTP requests are POSTs. The request body is deserialized based on the Content-Type header of the request. I handle this with another library I created called SerializerDotNet, which at the time of writing supports JSON and Protobuf. The same serializer used for deserialization is also used to serialize the response, and the response's Content-Type matches that of the incoming request.
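On the wire, a call to Echo looks roughly like this (the body shapes are illustrative - see the ValueTuple note under Limitations below):

POST /Echo HTTP/1.1
Host: localhost:6000
Content-Type: application/json

{"Item1":"hello"}

and the response comes back with the same Content-Type:

HTTP/1.1 200 OK
Content-Type: application/json

"hello"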

Remember when I said "quickly stand up an RPC service"? This is how quickly it's done...

var svc = new RpcService<IEchoService>(new EchoService());  
svc.Start();  

To use this in your own project, add the InterfaceRpc.Service NuGet package.

There is a file called rpcsettings.json where you can specify additional settings. Currently it only covers the web server prefixes and the number of connections.
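The shape of the file is something like the following - the property names here are hypothetical, so check the repo for the exact schema:

{
  "prefixes": [ "http://localhost:6000/" ],
  "maxConnections": 4
}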

To create a client it's just as easy...

var client = RpcClient<IEchoService>.Create("http://localhost:6000/");  

To use this in your own project, add the InterfaceRpc.Client NuGet package.

By default it uses JsonSerializer. There is an optional second argument where you can specify the type of ISerializer to use (such as ProtobufSerializer).
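For example, switching the client to Protobuf might look like the following - the exact shape of the second argument may differ, so treat this as illustrative:

var client = RpcClient<IEchoService>.Create("http://localhost:6000/", typeof(ProtobufSerializer));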

Then you can call any method defined in your interface:

var echoed = client.Echo("hello");  

Limitations

(stuff I could use help on)

  • I punted on handling SSL certs, so it only runs over plain HTTP right now. This seems reasonable given that it's common to set up a load balancer or proxy in front of web sites and services.

  • Method signatures are limited to 8 parameters. This could expand in the future if needed, but I think once you have that many parameters it's better to create a class with a property for each and use a single parameter in the signature.

  • Because of limitations with ValueTuple, text-based serialization reads a little funky (see the example after this list), but I'm sure Microsoft will fix this in future versions of .NET Standard. In the meantime I'm curious what people think about having "Item1", "Item2", etc. map to method parameters. There could be some work done to replace these with parameter names, but it would add (unnecessary?) overhead.

  • A good name. Any suggestions?

  • See the issues list on GitHub for more.
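To illustrate the ValueTuple point above: a hypothetical two-parameter method, say Concat(string first, string second), would serialize its arguments into a JSON request body like the one below, where the original parameter names are lost:

{"Item1":"abc","Item2":"def"}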

Why?

This will help tremendously in moving off of WCF / .NET Framework and onto .NET Core, since WCF clients aren't supported in .NET Core. The good thing about WCF is that it is interface-based, so swapping out a server implementation is relatively easy. Assuming a ChannelFactory was used for the client, it can be replaced even more easily.

If you'd like to help flesh this idea out more, please contribute through the GitHub repo!


Losing Our Memory

I read this post by Scott Hanselman a while back and agree with him that memory management in .NET is not a concern for developers. I have not thought directly about any of the items he listed within the last 10 years. I learned low-level programming early on, but once I moved into higher-level languages, memory management became something I thought about less and less. There are three factors that contribute to this: memory capacity, cost, and management.

Memory capacity took a huge jump in the late 2000s. It became common for run-of-the-mill servers to have 16GB+ of RAM. The amount of memory available now is so excessive that database servers can run entirely in memory. In the unlikely event that memory runs out, the emergence of the cloud and auto-scaling saves us - which brings me to my next point.

Memory costs nothing! It's so cheap that maxing out a server with the most RAM possible is expected. If your server is running in the cloud it's not even worth mentioning "memory". The cost of cloud servers is cheap, and auto-scaling means you're only paying for what you need anyway. If you need more memory the cloud will give it to you automatically, and for the most part without you even knowing!

The third reason why, and most specifically for .NET developers, is that the garbage collector does a darn good job of memory management. This was one of the original selling features of the framework! Let the GC handle what it was meant to do and let developers solve the problem at hand.

Recently I've thought a lot about memory management. That's because my company develops firmware in C on handheld devices with a small amount of memory. Hardware specs aren't something companies change regularly, and we have a good sourcing cost for the chips we get. Let's just say that there is no incentive to add more memory, so developers have to make do with what they've got.

I'm not writing C, so I'm not thinking about memory directly, but I'm writing web services that are consumed by these devices. Here are some things I have to keep in mind during development:

  • The serialization format must be compatible and performant on our device.
  • The data types returned must be platform agnostic.
  • The data structures must adhere to a strict contract.
  • The size of each piece of data could be an issue.
  • The overall size of requests and responses must be as small as possible.

If all of these criteria are met then I have a pretty good chance of the device being able to work well with the service.

As far as the technologies I chose, I went with Protobuf for serialization. It's fast and produces small output. It's supported on all of the systems & languages my company is working with. Both the software and firmware teams can utilize .proto definitions as contracts and generate structures/classes from them. Protobuf isn't a silver bullet though. I still need to be mindful of the data I am working with...
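As a made-up illustration, a shared contract might be as simple as the following (the message and field names are hypothetical):

// Hypothetical .proto contract; C# classes and C structs are both generated from it.
syntax = "proto3";

message DeviceStatus {
    string device_id = 1;
    int32 battery_percent = 2;
}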

For example, before our device can communicate with the service it must obtain a security token (a string). I found a bug in the way the security token was generated, so I fixed it and pushed a new version of the service out. Suddenly our devices stopped working! What happened?

It turns out that the token, before my fix, was 1000 characters long. The firmware team had allocated 1000 characters for that field in their code. After my fix the generated token was 1200 characters long. The device was truncating the string, and subsequent requests to the service couldn't be validated because the token wasn't correct. I hadn't given one thought to the size of the token value, because a string type in .NET can hold a variable-length (fairly large) string no problem. On our devices, though, that 1000-character limit was critical!

In conclusion, I don't expect .NET developers will actively pay attention to memory management. They should keep it in the back of their mind at all times though. It's important to know how the framework you are working with deals with memory. It would be a good investment to dig deep into it when beginning to learn a new framework. Once .NET Core becomes more prevalent I expect memory management will garner some attention, but ultimately it will end up in the state that .NET Framework developers are in now - ignorance.

Are Coding Bootcamps Worth It?

I watched a John Sonmez video today titled "Why Are Coding Bootcamps SO EXPENSIVE?".

Normally John gives great advice, but I have to disagree with him on this topic. Yes, bootcamps are expensive, but the real question is: are they worth it? I don't think so, and here is why...

I am the Software Development Manager at LSQ. Over the past six months I've had both front-end and back-end software development positions open. I've read thousands of resumes and interviewed hundreds of candidates who have come straight out of coding bootcamps looking for a job. The reason I haven't given any of them a job offer is the problems I describe below.

The first problem I've found with bootcamps is that 3 months (or less) is not enough time to gain the knowledge needed to make an impact on the job. During my interview process I ask candidates to write code (using CoderPad), and most of the time they fail. For example, I asked one candidate to lay out a list of items using HTML and CSS, where each item had one element aligned to the left and one to the right. The candidate was having trouble doing this, so I suggested using floats. The response I got was "What are floats?". Incredible! Three months just isn't enough time. That is basic stuff, though - I wonder why they don't cover it?

The second problem is that candidates have no drive. They think they deserve everything even though they've done nothing. Their thinking is, "If I shell out $10K for a bootcamp, companies will be pounding on my door with $100K/yr job offers," which is such a lazy mentality that it's scary to think that's how our society thinks. You won't get a (good) job with no experience. On top of that, candidates don't do any research into the company they are interviewing with. I want candidates who are passionate about what LSQ is doing. I'd like to see someone figure out what our problems are, and potential solutions, and tell me when they come to the interview: study and be prepared!

The third problem is that every candidate looks exactly the same. Exactly. The. Same. They all have GitHub accounts with the same repos for the same projects they completed during the same bootcamp. All of the repos have the same boring code in them. There is no Art, Creativity, or Craftsmanship in the code candidates write. If you want to distinguish yourself from everyone else you need to stand out. Work on open source projects outside of the bootcamp; create a fictional company and build solutions to the problems that company would face; whatever it is you need to do to stand out - do it!

In conclusion, bootcamps aren't helping anyone. They're not get-rich-quick schemes. Software development is a craft. If you're interested in doing it as a career, you're better off learning on your own for a longer (than 3 months) period of time. Take an internship to get your foot in the door. Start small and build a foundation that a career can be built on.

Follow Up

I should have mentioned this in my original post, but a couple of nationwide coding bootcamps have shut down recently. Both "Dev Bootcamp" and "The Iron Yard" will cease to exist. This is a good indication that bootcamps are not worth it! You can read more about the closures here.

Meetup: Using Protobuf in .NET

Protocol Buffers is a method of serializing structured data. It is useful in developing programs to communicate with each other over a wire or for storing data.

On August 10th I will be speaking at the Orlando .NET User Group meetup on Using Protobuf in .NET. Come check it out if you are interested in this cool technology! The meetup starts at 6PM at the Melrose Tech Center inside the downtown Orlando Library. Afterwards the group heads over to Harp & Celt for food and drinks over more tech talk!

Application Security and Single Sign On

In 2017 I will be developing a new, open source, application security and single sign on solution. It will compete directly with OneLogin and Okta. Since 2000 I have built four solutions just like this.

In 2000 I took my first stab at this for EVLogix. My solution consisted of a few classes and a few web pages that secured our main web app, using classic ASP. I'm sure it was completely insecure. I hope it's not still being used.

In 2008 I had a grand idea to create an "application framework" which provided a dashboard for users to launch their web applications and gave developers a way of managing apps and accounts. I was building user-management-type stuff into every app I wrote, and this would allow me to skip that and just get to the point of what each app was meant to do. I created an LLC and developed the source under it. Eventually I abandoned the project when I realized what I had built could be done better. It was .NET based but used ASP.NET Web Forms, and the API requests were parsed and constructed in a non-standard way.

In 2010 I built a new solution for the company I was working for at the time, Digital Risk. The web portion was built using ASP.NET MVC and the API moved to a WCF service. This was much improved; however, some design choices were poor, and I started securing other WCF services with it. This resulted in security overload and made it frustrating for other developers to use.

In 2014 I built another solution for Derive Systems (where I currently work). The web portion and API are again written in ASP.NET MVC and WCF, but I didn't repeat the mistakes I had made previously. It's secure and easy for developers to use with web, mobile, and desktop apps.

So why am I developing yet another solution?

My primary reason is to use this as a way of learning .NET Core. .NET Core is cross-platform so it can be hosted on Windows and Linux. I also want the solution to be open-source to promote broader use and gain input from different people. I am a big fan of GitHub so I am hosting the source on there.

I haven't decided on one thing yet - whether the solution will also be used for profit. Exceptionless currently works on this model and they do a good job of it. I recently listened to a podcast from Andreessen Horowitz in which they said this model is gaining popularity. Using the solution for profit will affect the type of open-source community members who choose to contribute, so I am taking that into consideration as well.

I am looking for community members who want to get involved early. If that's you, shoot me an email, hit me up on Twitter, or create some issues on GitHub!