Customer Service is about the customer, not you

Like many of you, last night I was lying on my couch watching the latest Mad Men episode when it happened again: in the last 10 minutes of the show, my TV screen went blank. Same thing that happened last week. So, once again, I missed the end of the episode. I was really pissed. And I did what everybody in this situation would do: I went to vent on Twitter.

timewarner_1

As I expected, I got the usual “we are really sorry bla bla bla. Please DM info so we can contact you.” response back. Meanwhile, I was curious and checked Time Warner’s mentions, as I didn’t believe this was an isolated issue. And I was right:

timewarner_2

timewarner_3

timewarner_4

timewarner_6

And my favorite:

timewarner_5

This is clearly not an isolated issue. And it doesn’t take much to realize that there is a high incidence of users from Austin complaining. Is it because people here in Austin like to complain more on Twitter (a totally valid hypothesis), or is it because there is an issue in the area? If I were the one diagnosing the issue, I would try the latter first.

It’s your job, not your customer’s job, to fix your mess

Fast forward to the next morning: I am driving to work when my phone rings. It’s a Time Warner Customer Service representative. She goes on with “I’m sorry bla bla bla” and then asks me if the problem occurred with this channel only, to which my answer was: “I don’t know. I know I was watching AMC and this happened.” That’s when things derailed: “As everything seems to be working now, sir, there is nothing we can do. What I would recommend is that the next time this happens you call Tech Support instead of using Social Media, so we can try to help you.” And when I say things derailed, it is not because Time Warner is still in 2007 and doesn’t consider Twitter a proper medium for support requests. I say it derailed because that’s when Time Warner started putting on me, their customer, the responsibility of diagnosing their own problems. And, to be fair to that representative, the folks behind TW’s Twitter account did the same:

timewarner_7 timewarner_8 timewarner_9

Time Warner can’t track their customers’ issues. They don’t have the tools to detect or diagnose problems in their own network. Their first reaction is to pull out a 1998 tech support script and ask the user to reboot some equipment in the hope that it will fix the issue. This is putting the burden of solving your problem on your customer. Diagnosing what went wrong is your job, not your customers’. Heck, your customer shouldn’t even need to tell you that something went wrong; you should be able to detect it even when there is no complaint. It’s 2013 already, for Christ’s sake.

Only after I did their job and told Time Warner to check their Twitter mentions around 10PM last night did they realize that this was not an isolated issue that could be fixed by rebooting my box. Only then did they decide to let their local network engineers know about the issue. Apparently those engineers can’t be bothered to monitor their network, or be asked to check it when a customer complains there is a problem. The default protocol seems to be to put the customer in charge of diagnosing the issue and, only as a last resort, have some engineers look into it.

There is a better way

Here is how I wish this issue was handled:

Me on Twitter: “Hey TW, your screen went blank during Mad Men. yadda yadda yadda”

Customer Service Rep reply on Twitter: “That’s really unfortunate, Pedro. I would be really annoyed if I had lost the final moments of my favorite show. Even worse, this happened two weeks in a row.”

See what happened here: the Customer Service Rep put himself in my shoes. He showed me that he listened to what I said and was sympathetic to the issue.

Then he could go on: “I will figure out what went wrong so that we can make sure it doesn’t happen in the future. Can you DM us your account # so that I can send you updates on this?”

Now he is telling me what his next steps are going to be and also that he is going to fix the issue.

And now, what I wish the lady who called me this morning had told me:

Rep: “Hi Pedro, this is Jane from Time Warner. Is this a good time for us to talk about the issue you had during Mad Men last night? This shouldn’t take longer than 10 minutes.”

What did she do here? She made sure this was a good time for us to talk. With that, she acknowledged that calling me was a disruptive action and allowed me to decide whether I had the time to continue or not. She also told me upfront how long the call could take, to help me decide.

Me: “Sure, Jane, go on”

Rep: “We understand how bad it is to have a show that you are watching interrupted. It is in our best interest that you are as satisfied as possible while using our services, and we failed to do that last night.”

Here she demonstrates that they care about the issue and that they know they have screwed up.

Rep: “As we started investigating the issue, we noticed that this was not an isolated problem; in fact, we had a few other customers like you sending us messages complaining about the same issue last night. It seems that this is an issue in Austin, but we can’t tell for sure at this point. Our Network Engineering team is currently investigating the problem and I will let you know as soon as we have more information about it.”

By saying this, she kept me in the loop and now I feel that they are working to solve the problem.

Rep: “What else can I help you with?”

And finally she gives me another opportunity to talk. During this whole fake Customer Service interaction I, as a customer, was put first. Throughout the entire process I would be feeling like Time Warner cared about me and was working hard to solve my problem. This was about me. Unfortunately, given how they actually handled the situation, my perception couldn’t be further from that.

We are all doomed, unless there is a new player

It’s sad to say, but I’m skeptical that Time Warner can change how they handle customer service. Unless you put the customer first from the beginning, this is something that is really hard to change decades down the road. And, to be fair, Time Warner is not alone in sucking at Customer Service. This is a common thing among Telecom companies. Check the reply I got back from AT&T after I mentioned them in my original tweet.

att_is_bad_too

Yes, they apologized as if the mistake was theirs. If you think Customer Service reps don’t listen to you on the phone, well, they don’t even read a 140-character message. The only way I see us getting out of this mess is if a new disruptive player gets into this market.

Google Fiber can’t come soon enough.

Git alias to delete all local branches

In my current project we have to use TFS as our “remote” repo. Locally I use git-tfs so that I can still be productive and do, ya know, work. Jimmy has a post describing in detail the workflow that we use here, but the TL;DR version is:

  • All work is done on local topic branches;
  • You push to TFS from the topic branch;
  • TFS, after running the build/tests, commits the changes that were pushed;
  • You pull the committed changes to master;
  • You create a new topic branch, rinse and repeat.

I’ve been working with this workflow for more than a year and it’s working great. It has one side effect, though: it can leave tons of non-merged branches if you don’t delete the topic branch after you push. So, once in a while, it is time for some branch cleanup.

At first I was doing the cleanup manually, but I’m really lazy and I’d rather tell the computer to do the work for me. So, here is my git alias to delete all local topic branches except “master”:

dab = !git checkout master && git branch | grep -v "master" | xargs git branch -D

Now, every time I want to do some branch cleanup I simply do git dab and all the junk branches are gone.
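One caveat with that grep: grep -v "master" drops any line that merely contains “master”, so a branch named, say, hotfix-master would survive the cleanup. Here is a stricter variant of the alias. This is a sketch, and it assumes git 2.13+ (for --format) and GNU xargs (for -r):

```shell
# List bare branch names, drop exactly "master", delete the rest.
# -r keeps xargs from running `git branch -D` with no arguments.
dab = !git checkout master && git branch --format='%(refname:short)' | grep -vx master | xargs -r git branch -D
```

The -x flag makes grep match the whole line, so only the branch literally named “master” is spared.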


WebApiContrib

The announcement of the ASP.NET Web API got a lot of people interested in playing with, bending, and learning the framework to see what was possible to do with it. Some folks were doing that already, back when it was WCF Web API. But the move to the ASP.NET team, and being part of the ASP.NET MVC 4 beta, definitely brought a lot of attention to the Web API framework.

The WebApiContrib organization

With all that attention, and people playing with it, different contrib efforts were springing up all around. There was the original WcfWebApiContrib maintained by Darrel Miller. We at Headspring created our own contrib. Folks from Thinktecture created their own. Some other authors were pushing their own extensions to their own repositories. Basically, it was a mess.

Ryan Riley then started the effort to bring everybody together to contribute to a single contrib project. He created an organization on GitHub and a mailing list. After Ryan contacted me, I talked with the other Headspring folks and we decided to transfer our WebApiContrib repository from our HeadspringLabs organization to the WebApiContrib one.

This was the first move; after that, the Thinktecture extensions were moved, and the handlers from WcfWebApiContrib were moved as well.

How can I get it?

Right now, you can head to the GitHub repository and pull the code. I know that’s kinda ghetto in these NuGet and OpenWrap days.

In order to fix this (and Chris Missal is working on it), we need to solve a few issues:

  • Add the packaging task to our build – not a big deal
  • Have a CI server to generate the packages for us – Chris talked with the CodeBetter folks and we should have it done approximately by the end of the week
  • Agree on how we are going to organize and distribute the packages.

The latter issue has two parts: package organization and distribution.

Package Organization: We are structuring the code so that one can install a core package that has no dependencies. This will give you all the extensions that have no extra dependencies. For extensions that depend on other packages, we are going to create specific packages.

For example, if you want to use the NotAcceptableMessageHandler, which has no dependencies, all you have to do is install the core package. On the other hand, if you want to use the ServiceStackTextFormatter, a media type formatter that uses ServiceStack.Text to handle JSON serialization, you will have to install the WebApiContrib.Formatting.ServiceStackTextFormatter package. Our goal is to keep contrib users’ package dependencies clean, containing only what is really necessary for what the user is trying to do.

Distribution: NuGet currently has two WebApiContrib packages. Those were created for the WcfWebApiContrib project. This means that replacing those packages would break everything for people who were relying on the WCF version of the package. This might be OK to do, as WCF Web API no longer exists and we are going to deprecate the WCF contrib as well. But we haven’t decided that yet. And I’m willing to get some ideas on what we should do to make this as frictionless as possible.

I expect all this to be fixed by next week.

What’s next

We don’t have an official roadmap besides the issues list on GitHub. Beyond what’s there, I have some ideas floating around in my head that I haven’t started working on yet:

Provide a better way of creating/embedding links on resources. The ASP.NET Web API has no built-in helper to embed links into resources. My goal is to have a helper that makes it easier, hopefully in a strongly typed way, to configure your resource so that the media type formatter can then pull that information and include the links in the resource representation.

API documentation generation. It’s key to the success of an API that it be well documented. The ASP.NET Web API team’s future roadmap includes something to help with this documentation generation. But, until they get there, we still need to document the web APIs that we are creating. My initial idea is to implement the Swagger specification and use Swagger-UI to expose the documentation.

Remember it’s a Contrib effort

The goal of a Contrib effort is to group and curate extensions to the ASP.NET Web API created by the people using it. So, if you have an idea or have created something that extends the ASP.NET Web API, please come talk to us. Send us an email on the mailing list; open an issue to discuss a new idea; send us a pull request. Even if you don’t have a specific idea or feature that you want to discuss but are willing to help our effort, come talk to us. The WebApiContrib is a community effort and we need your help.

ASP.NET Web API Routing needs a hug

One of my goals with the WebApiContrib is to be able to write as little code as possible in my API application and let the contrib help me with the boring boilerplate code.

Looking through the ASP.NET Web API samples, tutorials, and blog posts out there, the first thing that jumps out at me is the whole HttpResponseMessage noise.

I don’t want to write all this status code and Location header setting all over my code. It is boring and adds noise to what is actually important in my controller action. So, my idea was to create objects to represent each status code, something similar to what Restfulie does, and encapsulate all this noise inside these specialized HttpResponseMessages.

I imagined doing something similar to:
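The snippet that originally lived here is gone, so here is a hedged reconstruction of the idea (the class names are mine, not the actual WebApiContrib types): one tiny HttpResponseMessage subclass per status code.

```csharp
using System.Net;
using System.Net.Http;

// Hypothetical sketch: each status code gets its own response type, so
// controller code can say "return new NotFoundResponse()" instead of
// fiddling with status code enums inline.
public class OkResponse : HttpResponseMessage
{
    public OkResponse() : base(HttpStatusCode.OK) { }
}

public class NotFoundResponse : HttpResponseMessage
{
    public NotFoundResponse() : base(HttpStatusCode.NotFound) { }
}
```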

But, in the create response I also want to set the Location Header of the new resource. As a first pass, I went with:
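Again, the original gist is lost; as a sketch of the shape (my reconstruction, not the original code), the first pass takes the new resource’s URI as a plain string:

```csharp
using System;
using System.Net;
using System.Net.Http;

// Hypothetical first pass: the caller hands us the URI of the new
// resource, and we tuck it into the Location header alongside the 201.
public class CreatedResponse : HttpResponseMessage
{
    public CreatedResponse(string location) : base(HttpStatusCode.Created)
    {
        Headers.Location = new Uri(location, UriKind.RelativeOrAbsolute);
    }
}
```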

Now, in the API code I can have:
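Roughly like this (a sketch, not the original gist: ClientsController and Save are illustrative, and CreatedResponse stands for a hypothetical HttpResponseMessage subclass that sets the 201 status and Location header):

```csharp
public class ClientsController : ApiController
{
    public HttpResponseMessage Post(Client client)
    {
        Save(client); // assumed persistence call

        // The hand-built URL here is exactly what the rest of
        // this post is unhappy about.
        return new CreatedResponse("/api/clients/" + client.Id);
    }
}
```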

Although this achieves my goal of setting the status code and Location header in the HttpResponseMessage, I’m still not happy, mainly because of manually building the Location URI. I shouldn’t have to hardcode URLs when I can use the type system to help me with that.

Embracing the type system, or not.

I’m really not happy with hardcoding the URI to set the Location Header. A better way of setting that would be:
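Here is a sketch of what I mean (hypothetical types again): the generic response captures an expression pointing at the action that serves the new resource, so a URL can be generated from it later instead of being hardcoded by the caller.

```csharp
using System;
using System.Linq.Expressions;
using System.Net;
using System.Net.Http;

// Hypothetical strongly-typed variant: instead of a URL string, the
// caller points at the controller action for the new resource.
public class CreatedResponse<TController> : HttpResponseMessage
{
    public Expression<Action<TController>> Action { get; private set; }

    public CreatedResponse(Expression<Action<TController>> action)
        : base(HttpStatusCode.Created)
    {
        // Deferred: the Location header URL gets generated from this
        // expression later, by routing.
        Action = action;
    }
}
```

At the call site this would read something like return new CreatedResponse&lt;ClientController&gt;(c =&gt; c.Get(client.Id));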

With that, all I would have to do is generate the URL for that controller action and set it in the Location header. Piece of cake: just use the built-in UrlHelper. Or so I thought.

The System.Web.Http.Routing.UrlHelper has a Route method that returns a URL for a set of route values. OK, so at first glance I just need to translate my Expression<Action<TController>> into a dictionary of route values. This is not a problem; MVCContrib, for example, uses a helper from the MVCFutures assembly to do this. As I’m working on the WebApiContrib, I didn’t want to take a dependency on MVCFutures just for this, so I ended up coding the reflection part myself.
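The reflection part boils down to tearing a MethodCallExpression apart. A minimal sketch of it (my reconstruction; the real code fed the result into a RouteValueDictionary):

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// Turns c => c.Get(42) into { controller = "Clients", action = "Get", id = 42 }.
public static class RouteValuesBuilder
{
    public static IDictionary<string, object> FromExpression<TController>(
        Expression<Action<TController>> action)
    {
        var call = (MethodCallExpression)action.Body;

        var values = new Dictionary<string, object>
        {
            ["controller"] = typeof(TController).Name.Replace("Controller", string.Empty),
            ["action"] = call.Method.Name
        };

        var parameters = call.Method.GetParameters();
        for (var i = 0; i < parameters.Length; i++)
        {
            // Evaluate each argument expression to get the route value,
            // e.g. the 42 in c => c.Get(42).
            var value = Expression.Lambda(call.Arguments[i]).Compile().DynamicInvoke();
            values[parameters[i].Name] = value;
        }

        return values;
    }
}
```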

OK, now I have a RouteValueDictionary; getting the URL back is just a matter of calling urlHelper.Route(routeValues), right? Not so fast.

Y’all gotta know the names. All the names

The Route method on UrlHelper is not like the Action method on its System.Web.Mvc sibling (more on this Mvc vs. Http thing later). As you can see in the MSDN documentation, the Route method signature is:
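From memory of the beta bits (paraphrased; the MSDN page is the authority), it looks roughly like this:

```csharp
public string Route(string routeName, IDictionary<string, object> routeValues)
```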

Together with the route values, you also have to supply the route name. As I’m building a library for others to use while building their APIs, I don’t know which route to use. My first thought was: I’ll just iterate over the HttpRouteCollection and the first URL I get back wins. That’s when I started learning more about the ASP.NET Web API routing implementation than I wanted to.

No foreach for you

When I tried to iterate over the HttpRouteCollection, the following exception was thrown:

Checking on Google, I found out that I had to get a read lock before iterating over the collection. This is done through the GetReadLock method, which returns a disposable lock object. :SadTrombone: Fooled one more time by the Mvc vs. Http identity issues: only System.Web.Routing.RouteCollection has the GetReadLock method. The HttpRouteCollection counterpart doesn’t have it. So, I had to fall back to for loops to get the routes out of the HttpRouteCollection.
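That fallback looks about how you’d expect (a sketch, assuming the collection’s count and integer indexer; error handling elided):

```csharp
// Index-based access avoids the enumerator that foreach would use.
for (var i = 0; i < routes.Count; i++)
{
    var route = routes[i];
    // ... try to generate a URL from this route ...
}
```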

Then, another surprise. To refresh your memory, we are doing all this looping in order to get the route name so that we can use the UrlHelper to get the Location header URI. So, after I was able to get a route object instance, I found out that there is no way to get its name. Yep: you create a route with a name. The UrlHelper asks you for a route name. You can even check in the collection whether there are any routes with a given name. But you can’t get the name from a route instance. With dotPeek’s help I figured out that the name is used as the key in the private dictionary that the HttpRouteCollection wraps. But the HttpRouteCollection doesn’t expose the Keys property of that dictionary to the outside world. That’s it. There is no way for you to programmatically find a route’s name.

Ah, the ControllerContext

At this point I was already pretty frustrated. I was working on this during the Dallas Day of .NET and I was afraid I wouldn’t be able to keep cursing only in my head anymore. Actually, I believe I said one or two “WTF!”s out loud. Anyway, I gave up on the whole let’s-find-out-the-route-name thing and decided to hard-code the default one instead, just so that I could at least see the thing working.

To my despair, I realized that System.Web.Http.Routing.UrlHelper depends on the HttpControllerContext, not on the RequestContext like System.Web.Mvc.UrlHelper does. As I can’t create an instance of HttpControllerContext as easily as one of RequestContext, I just changed the constructor of CreateResponse to also take the HttpControllerContext.

I’m definitely not happy with this and I won’t include any of it, as it is, in the WebAPIContrib.

Least Surprise who?

Let’s be clear: this whole System.Web.Mvc vs. System.Web.Http thing is confusing as hell. Things have the same name but are different types. The APIs for those types are different. They behave differently. I can see a lot of people struggling, as I did, when trying to use in a Web API project whatever they are used to using in their MVC projects.

But the Web API is still in beta and this is what a beta is for, right?

Yes, the ASP.NET Web API is still in beta and I expect a lot of the library API to change. As Henrik Nielsen asked me to do, I’ll register the issues I had on UserVoice. And I can only hope they will fix them. Or maybe accept a pull request ;-)

Extending ASP.NET Web API Content Negotiation

The ASP.NET team released the beta version of the ASP.NET Web API, previously known as WCF Web API, as part of the beta release of ASP.NET MVC 4. Having experience implementing web APIs with Restfulie, I was curious and decided to check how the ASP.NET Web API works and compare it with Restfulie.

The first thing I noticed was a difference in the Content Negotiation implementation. I don’t intend to do a full comparison here, but rather to describe how to use one of the extension points in the Web API to add the behavior that I wanted.

If you are not interested in the reasons why the Content Negotiation implementations differ and why I prefer Restfulie’s, and just want to see DA CODEZ, feel free to jump to the “Message Handlers to the Rescue” part.

Content Negotiation

Content Negotiation, simply put, is the process of figuring out the media-type to be used in the Resource Representation that is going to be returned in an HTTP Response. This negotiation between client and server happens through the interpretation of the HTTP Accept Header values.

When a client makes a request, if the client wants to specify a set of media-types that it accepts the resource to be formatted into, then these media-types should be included in the Accept Header. All the semantics and options regarding the usage of the Accept header can be found in section 14.1 of RFC 2616 (the HTTP specification). In the Accept Header definition there is a part that reads:

If no Accept header field is present, then it is assumed that the client accepts all media types. If an Accept header field is present, and if the server cannot send a response which is acceptable according to the combined Accept field value, then the server SHOULD send a 406 (not acceptable) response.

The SHOULD keyword in RFC 2616 is, according to section 1.2 of that RFC, to be interpreted as described in RFC 2119. The description of SHOULD in RFC 2119 is:

SHOULD   This word, or the adjective “RECOMMENDED”, mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.

With this definition, I interpret the Accept Header behavior as: unless you have a specific reason not to, if the HTTP Request contains an Accept Header and the server does not support the requested media-type, a 406 – Not Acceptable response should be returned. Returning another status code might be OK, as it’s not mandatory to return 406, but this should be the exception, not the default behavior.

ASP.NET Web API Content Negotiation Implementation

The System.Net.Http.Formatting namespace has a MediaTypeFormatter class whose instances are used by the Web API (I’m going to omit the ASP.NET part from now on) to format the resource representation into a specific media-type.

The Web API ships with JsonMediaTypeFormatter and XmlMediaTypeFormatter implementations. And it defaults to the JSON formatter when no specific media-type is requested. So, if the Accept Header is not included in the HTTP Request, Web API will use the JSON formatter to format the Resource Representation that is going to be returned.

You can add your own media type formatter by adding your formatter implementation to the MediaTypeFormatter collection of the HttpConfiguration object in the GlobalConfiguration class.
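For example, at application start (CsvMediaTypeFormatter being a hypothetical custom formatter, not something that ships in the box):

```csharp
GlobalConfiguration.Configuration.Formatters.Add(new CsvMediaTypeFormatter());
```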

It seems like something is not right here

While I was playing with the Web API, I decided to test the behavior of the Content Negotiation implementation in the scenario where the media-type in the Accept Header is not accepted by the server. That is, when the HTTP Request Accept Header specifies a media-type for which the Web API doesn’t have a MediaTypeFormatter implementation that can handle it. By the definition described in the Content Negotiation section above, I expected to receive a 406 response back. To my surprise, I got a 200 – OK response back with a Resource Representation in the text/json content-type. Web API fell back to the default MediaTypeFormatter instead of returning 406.

This was a surprise to me, so I filed a bug report in the Web API Forum. Henrik Frystyk Nielsen replied saying:

It’s a reasonable to respond either with a 200 or with a 406 status code in this case. At the moment we err on the side of responding with a 2xx but I completely agree that there are sceanarios where 406 makes a lot of sense. Ultimately we need some kind of switch for making it possible to respond with 406 but it’s not there yet.

Fine. The 406 response is not mandatory, and the Web API is still in beta, so I understand the 406 handling not being there yet. And the last thing I want to do is get into a discussion of RFC semantics with one of the authors of the specification :P

Message Handlers to the Rescue

When an HTTP Request is made to a Web API application, that request is passed to an instance of the HttpServer class. This class derives from HttpMessageHandler and delegates the processing of the HTTP request to the next handler present in the HttpConfiguration.MessageHandlers collection. All handlers in the collection will be called to process the request, the last one being the HttpControllerDispatcher which, as you can tell, dispatches the call to an Action in a Controller. FubuMVC’s Behavior Chains implement a similar approach. (Fubu guys, I know it’s not the same thing, but y’all get my point :))

With that said, in order to return a response with a 406 status code, what I need to do is implement an HttpMessageHandler that does that.
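Here is a sketch of such a handler (a hedged reconstruction, not the exact WebApiContrib code; it treats a missing Accept header as “accept anything” and, for brevity, ignores wildcards like */*):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Threading;
using System.Threading.Tasks;

// If the request carries an Accept header and no registered formatter
// supports any of the requested media types, short-circuit with 406.
public class NotAcceptableMessageHandler : DelegatingHandler
{
    private readonly IEnumerable<MediaTypeFormatter> _formatters;

    public NotAcceptableMessageHandler(IEnumerable<MediaTypeFormatter> formatters)
    {
        _formatters = formatters;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var accept = request.Headers.Accept;

        var acceptable = accept.Count == 0 || accept.Any(requested =>
            _formatters.Any(formatter =>
                formatter.SupportedMediaTypes.Any(supported =>
                    supported.MediaType == requested.MediaType)));

        if (acceptable)
            return base.SendAsync(request, cancellationToken);

        // No Task.FromResult in .NET 4, hence the TaskCompletionSource.
        var tcs = new TaskCompletionSource<HttpResponseMessage>();
        tcs.SetResult(new HttpResponseMessage(HttpStatusCode.NotAcceptable));
        return tcs.Task;
    }
}
```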

With the handler created, the next step is to configure Web API to include this handler in the HTTP Request processing pipeline. To do so, in the Application_Start method in the Global.asax, add an instance of the new handler to the MessageHandlers collection.

With the application configured to use the custom handler, when that same request is made, a response with a 406 status code is now returned.

The power of Russian Dolls

The Web API HTTP Request pipeline provides a powerful extension point to introduce specialized, SRP-adherent handlers while keeping the controller action implementation simple and focused.

Installing Less.css on OSX Lion

Today I was following the instructions to upgrade to the latest version of the Twitter Bootstrap.

It’s pretty straightforward. It is really awesome, actually. All you have to do is open the terminal, pull the changes, and run make.

But Twitter Bootstrap uses Less.css, so one of the steps of the update script is to compile the .less files into .css. I didn’t have the Less compiler installed on my MacBook Pro, so instead of successfully upgrading Bootstrap, I got this in my terminal:

lessc: command not found

So, my first reaction was to try installing less using Homebrew:

Homebrew doesn't have a formula for less.css

Dang it. There is no Homebrew formula for Less. I’ll have to open my browser to install software, damn it. So I headed to the Less.css website and saw that the easiest way to install the compiler for server-side use is via the node package manager, npm. So I ran: npm install less

npm installed less

All right, now let’s try running make again.

less not found again

Ok. It’s still not found. Maybe installing it globally will solve the problem: npm install less --global

npm install less with global option

All right, now that less is installed globally, let’s try running make again to update Twitter Bootstrap.

Bootstrap upgraded succesfully. Yay!

Yay, it worked. The --global option did the trick: a global install puts the lessc executable on your PATH, while a plain npm install leaves it inside the local node_modules directory, where make can’t find it.

Running TeamCity on EC2

 

One of Headspring’s core mantras is to outsource everything that is not part of our core business. We are true believers in running the business in the cloud. We are even giving a talk about it. We use many cloud service providers on a daily basis. We use Google Apps for Email, Calendar, Voice and Intranet. We use Salesforce as our CRM. We use BitBucket to host our source code.

We use all these services to do things that are necessary in order to run the company but that we don’t want to manage or build ourselves. So, we decided to do the same thing with our build system. As we were already using TeamCity as our CI server, and it has integration with EC2 out of the box, we decided to go with EC2.

First, TeamCity is composed of two pieces: the build server, which runs TeamCity’s dashboard, and the build agents, which actually run the builds. As the TeamCity server is a Java application, a small Linux instance on EC2 is enough to run it. The server itself won’t leverage the elastic features of EC2, but having the server running in the same region and availability zone as the agents is important in order to avoid Data Transfer charges.

Server

To install the TeamCity server on Linux, follow these steps:

1) Get TeamCity from JetBrains.

2) Unpack the file Teamcity<version number>.tar.gz as documented in the docs.

3) Elevate your privileges to root:

> sudo su

This is needed because we want the server to listen on port 80, and only the root user can listen on ports below 1024.

4) Install Sun/Oracle JDK.

The EC2 instance runs a Red Hat-based distribution, which uses OpenJDK by default. For the cloud integration in TeamCity to work properly, it needs the Sun/Oracle JDK.

5) With the Sun/Oracle JDK installed, set the JAVA_HOME environment variable to use it instead of OpenJDK:

> export JAVA_HOME=/usr/java/jdk<version>

6) Change TeamCity server port to 80 (if you need to)

7) Start the Server. On TeamCity’s bin directory, run:

> ./teamcity-server.sh start

That’s it. The server should be up and running. If you hit the instance’s public URL in your browser, after the server finishes starting up you should get a screen to accept TeamCity’s EULA.

TeamCIty_EULA

After accepting the EULA, you will be prompted to create an Administrator Account. With the Administrator Account created, TeamCity server is installed and running properly.

Installing the Build Agent


With the Server up and running it’s time to create the agents that will run the builds.

To do so, first launch an instance (I’m going to use a Windows one, as we are doing .NET here) on EC2 and configure it to work as a TeamCity agent. In our case, we chose a Large Windows x64 instance.

As it’s necessary to remote into the instance to configure it as a TeamCity agent, wait approximately 15 minutes after the instance is launched before remoting into it; otherwise you won’t be able to get the password from AWS.

Wait_for_instance

With the instance credentials in hand, follow these steps:

1) Remote into the instance

2) Open the browser and go to the TeamCity Server URL. After authenticating, go to the Agents tab and download the MS Windows Installer.

Agents_Tab

3) Run the installer. After the agent is installed, a “Configure Build Agent Properties” form will be shown. Change the serverUrl property to point to the correct URI.

4) TeamCity agent-server communication uses port 9090 by default, unless you changed it when configuring the Build Agent properties in the previous step. So, in order to enable communication between agent and server, create an inbound rule on the Windows firewall to allow communication through port 9090.
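If you’d rather script this than click through the firewall UI, a rule like the following should do (a sketch; run it in an elevated prompt on the agent instance, and adjust the port if you changed the default):

```shell
netsh advfirewall firewall add rule name="TeamCity Build Agent" dir=in action=allow protocol=TCP localport=9090
```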

5) If everything is configured properly, you should have an Unauthorized agent showing up on the TeamCity server.

unauthorized_agent

6) Authorize the agent and check that it’s capable of running your builds properly.

Making the Build Agent Elastic

Right now we have an EC2-based TeamCity server and a TeamCity agent running on an EC2 instance, but we’re not leveraging TeamCity’s elastic features yet. To enable elastic, cloud-based build agents, one has to create an AMI image of the instance running the agent.

1) On AWS, select the running instances and right-click the instance running the Build Agent. On the context menu, select “Create Image (EBS AMI)”.

Create_Image_EBS

2) When the AMI becomes available on AWS, go to the Cloud tab under the TeamCity Agents page:

cloud

3) On the configuration page, create a new Cloud Profile. As of now, the only Cloud Type option is Amazon EC2. Choose your location and availability zone to match the ones the server instance is running in, in order to avoid extra Data Transfer costs.

cloud_profile


That’s it. TeamCity is configured to use Elastic Build Agents.

When a build is triggered and there are no agents available to run it, the build will be placed in the Build Queue and an instance will be requested from EC2. It will take approximately 10 minutes for the instance to be running and authorized as a Build Agent on TeamCity.

instance_starting

My goal with this post was to document the process of migrating Headspring’s build system to leverage EC2. I hope this helps not only me.

One last thing: can you do me a favor and vote for this TeamCity feature request?

HotFix for Checkdisk hanging at 1 sec

I have a Dell Studio XPS 1640 running Windows 7 Home Premium x64.

Yesterday it crashed (sigh), and when I rebooted, Check Disk was triggered. To my not-so-pleasant surprise, Check Disk hung at 1 second left on the countdown for pressing a key to skip the disk check. After several hard reboots I was able to start Windows again.

Well, after some googling I found HOTFIX KB 975778, which solved the issue like a charm for me.

Handling Content Type as we should, through HTTP Headers

In the previous post I spiked a solution to return different result types depending on the requested format. While the solution I came up with was sufficient to address the points raised in a thread at .NetArchitects, Eric Hexter hit the nail on the head with the following tweet:

@pedroreys nice post on the formatter. I would consider http headers instead of routing, although that is hard to manually debug

The right way to specify the types accepted for the response is by using the Accept request header. It’s been a while since Eric’s tweet, but here is the solution I came up with.
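One caveat worth noting before we start: real-world Accept headers rarely contain a single media type. Browsers send lists with quality values, e.g. text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8. The simple Contains-based matching used in this post works for the common cases, but a more faithful reading of the header would order the media types by their q parameter. Here is a minimal sketch of such a parser (the AcceptHeaderParser class is my own illustration, not part of the solution in this post):

```csharp
using System;
using System.Globalization;
using System.Linq;

public static class AcceptHeaderParser
{
    // Returns the media types in the Accept header ordered by descending
    // quality value. A media range without a q parameter defaults to 1.0.
    public static string[] ParseAccept(string acceptHeader)
    {
        if (string.IsNullOrEmpty(acceptHeader))
            return new string[0];

        return acceptHeader
            .Split(',')
            .Select(part =>
            {
                var pieces = part.Split(';');
                var mediaType = pieces[0].Trim();
                var q = 1.0;
                foreach (var parameter in pieces.Skip(1))
                {
                    var kv = parameter.Split('=');
                    if (kv.Length == 2 && kv[0].Trim() == "q")
                        double.TryParse(kv[1].Trim(), NumberStyles.Float,
                                        CultureInfo.InvariantCulture, out q);
                }
                return new { mediaType, q };
            })
            .OrderByDescending(x => x.q)
            .Select(x => x.mediaType)
            .ToArray();
    }
}
```

With a typical browser header, ParseAccept returns “text/html” first, so the HTML handler built below would still win; I’ll stick with the simpler Contains checks for the rest of the post.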

Let me first define the expectations:

If the Accept header of the request

- Contains “text/html”, a ViewResult should be returned.

- Contains “application/json” but not “text/html”, a JsonResult should be returned.

- Contains “text/xml” but not “text/html”, an XmlResult should be returned.

If the requested content type is not handled by any of the previous rules, a response with HTTP status code 406 – Not Acceptable should be returned.

For this post, I will start with the same controller class that I used in the previous one:

 

public class ClientController : Controller
{
    public ActionResult Index()
    {
        var clients = new[] {
            new Client {FirstName = "John", LastName = "Smith"},
            new Client {FirstName = "Dave", LastName = "Boo"},
            new Client {FirstName = "Garry", LastName = "Foo"}
        };

        return View(clients);
    }
}

public class Client
{
    public string FirstName { get; set; }

    public string LastName { get; set; }
}

 

In order to format the result based on the content type requested in the HTTP headers, I will create a base controller class that provides this functionality to the controller. I will call it ContentTypeAwareController.

 

public class ContentTypeAwareController : Controller
{
    public virtual IContentTypeHandlerRepository HandlerRepository { get; set; }

    protected ActionResult ResultWith(object model)
    {
        var handler = HandlerRepository.GetHandlerFor(HttpContext);
        return handler.ResultWith(model);
    }
}

 

That’s it. This base class is responsible just for asking the handler repository for the right handler for the given HttpContext and calling the ResultWith() method on the returned handler.

The IContentTypeHandler interface is pretty simple as well:

 

public interface IContentTypeHandler
{
    bool CanHandle(HttpContextBase context);
    ActionResult ResultWith(object model);
}

 

Simple and intuitive, right?

Let’s implement this interface then, but first let’s set the expectations as unit tests:

 

[TestFixture]
public class JsonHandlerTest : HandlerTestBase
{
    [Test]
    public void should_return_a_JsonResult()
    {
        var handler = new JsonHandler();
        var result = handler.ResultWith("whatever");
        result.ShouldBeType<JsonResult>();
    }

    [Test]
    public void should_Handle_request_when_content_type_requested_contains_Json()
    {
        Headers.Add("Accept", "application/json");

        var handler = new JsonHandler();
        handler.CanHandle(HttpContext).ShouldBeTrue();
    }

    [Test]
    public void should_not_handle_request_when_content_type_requested_does_not_contain_json()
    {
        Headers.Add("Accept", "text/xml");

        var handler = new JsonHandler();
        handler.CanHandle(HttpContext).ShouldBeFalse();
    }
}

 

To keep the post short, I will show only the tests above; all the other tests are in the solution on GitHub.

The implementation of the JsonHandler is below:

 

public class JsonHandler : IContentTypeHandler
{
    public const string _acceptedType = "application/json";

    public bool CanHandle(HttpContextBase context)
    {
        var acceptHeader = context.Request.Headers.Get("Accept");

        // Guard against requests that carry no Accept header at all
        return acceptHeader != null && acceptHeader.Contains(_acceptedType);
    }

    public ActionResult ResultWith(object model)
    {
        return new JsonResult
                   {
                       Data = model,
                       JsonRequestBehavior = JsonRequestBehavior.AllowGet
                   };
    }
}

 

It’s important to set the value of the JsonRequestBehavior property on the JsonResult object. Otherwise you will end up with an InvalidOperationException.

The other handlers are pretty straightforward as well.

The one to handle requests for Xml:

 

public class XmlHandler : IContentTypeHandler
{
    public const string _acceptedType = "text/xml";

    public bool CanHandle(HttpContextBase context)
    {
        var acceptHeader = context.Request.Headers.Get("Accept");

        return acceptHeader != null && acceptHeader.Contains(_acceptedType);
    }

    public ActionResult ResultWith(object model)
    {
        return new XmlResult(model);
    }
}

 

And the other, the default HTML handler:

 

public class HtmlHandler : IContentTypeHandler
{
    public const string _acceptedType = "text/html";

    public bool CanHandle(HttpContextBase context)
    {
        var acceptHeader = context.Request.Headers.Get("Accept");

        return acceptHeader != null && acceptHeader.Contains(_acceptedType);
    }

    public ActionResult ResultWith(object model)
    {
        return new ViewResult
                   {
                       ViewData = new ViewDataDictionary(model)
                   };
    }
}

 

 

And finally, we will have a handler to return the 406 status code in case none of the other handlers can handle the requested content type.

 

public class NotAcceptedContentTypeHandler : IContentTypeHandler
{
    public bool CanHandle(HttpContextBase context)
    {
        return true;
    }

    public ActionResult ResultWith(object model)
    {
        return new NotAcceptedContentTypeResult();
    }
}
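The post doesn’t show an implementation of IContentTypeHandlerRepository. Given how the handlers are written, a minimal sketch could simply iterate over the handlers in order of precedence, with NotAcceptedContentTypeHandler as the catch-all at the end. This implementation is my own assumption, not necessarily the one in the GitHub solution:

```csharp
public class ContentTypeHandlerRepository : IContentTypeHandlerRepository
{
    // Order matters: HtmlHandler comes first so that browser requests
    // containing both "text/html" and other types still get a ViewResult,
    // and the catch-all NotAcceptedContentTypeHandler comes last.
    private readonly IContentTypeHandler[] _handlers =
        {
            new HtmlHandler(),
            new JsonHandler(),
            new XmlHandler(),
            new NotAcceptedContentTypeHandler()
        };

    public IContentTypeHandler GetHandlerFor(HttpContextBase context)
    {
        return _handlers.First(handler => handler.CanHandle(context));
    }
}
```

Because NotAcceptedContentTypeHandler.CanHandle always returns true, First() never throws; the ordering of the array encodes the precedence rules defined at the beginning of the post.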

 

With all that code in place, all we have left to do is make ClientController inherit from ContentTypeAwareController and call the ResultWith() method:

 

public class ClientController : ContentTypeAwareController
{
    public ActionResult Index()
    {
        var clients = new[] {
            new Client {FirstName = "John", LastName = "Smith"},
            new Client {FirstName = "Dave", LastName = "Boo"},
            new Client {FirstName = "Garry", LastName = "Foo"}
        };

        return ResultWith(clients);
    }
}

That’s it. The ClientController is now able to respond to JSON, XML, or HTML requests. Let’s check it.

The web page is still being rendered as expected. Fine.

 

html

 

But now, let’s see what response I get when I change the Accept header to “application/json”:

 

json

 

Yeah, a JSON result. Cool. What about “text/xml”?

 

xml

 

Yep, that works too. Finally, let’s see if the 406 status code is returned when an invalid content type is requested:

 

406

 

So, with not much code, the ClientController is now able to format its result based on the content type requested in the HTTP headers, as it should have been all along. Of course, this is a spike and the code can be improved. Feel free to grab it from GitHub and improve it.

Mimicking Rails formatter behavior in ASP.NET MVC

I was reading this (pt-BR) thread at the Brazilian .NET mailing list – dotNetArchitects – which, at first, had nothing to do with Rails or output formats. But, as usual, the thread deviated from the initial subject – which is not a bad thing – and somehow got to the fact that it would be nice to have in ASP.NET, more specifically in ASP.NET MVC, behavior similar to what Rails has by default to handle output formats.

In Rails it is possible to handle the format that will be returned by the controller, as in the following example:

class PeopleController < ApplicationController
  def index
    @people = People.find(:all)

    respond_to do |format|
      format.html
      format.json { render :json => @people.to_json }
      format.xml  { render :xml => @people.to_xml }
    end
  end
end

With the controller above, if one requests the URL /people/index the result will be HTML. But if /people/index.json is requested instead, the result will be JSON.

That’s a great feature when one is developing an API, as it lets the client code decide the format in which it wants to receive the data. Unfortunately, we can’t do that out of the box with ASP.NET MVC. But ASP.NET MVC gives us lots of extension points, and that allows us to quite easily implement the functionality to mimic this neat behavior.

First, a disclaimer. The solution I will present is heavily based on the solution in the routing chapter of the book ASP.NET MVC in Action. The new edition of the book, now covering MVC 2, is, at the time I write this post, in public review. You can get more information about it in this post from Jeffrey Palermo. If you are an ASP.NET developer and haven’t read the book yet, stop everything you are doing and go buy it now.

All that said, let’s have some fun.

In our application we have this simple Person entity:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

 

The application itself is really simple; it just shows a list of people. And yes, I was lazy and used the MVC sample.

Rails_MVC_1 

I will omit the view code for the sake of brevity as it has nothing to do with the goal of this post.

The PeopleController is pretty dumb too; it just returns a collection of Person objects to the view:

public class PeopleController : Controller
{
    public ActionResult Index()
    {        
        var people = new[]
                {
                    new Person{FirstName = "Joao", LastName = "Silva"},
                    new Person{FirstName = "John", LastName = "Doe"},
                    new Person{FirstName = "Jane", LastName = "Smith"}
                 };
        return View(people);
    }
}

So, it works, pretty straightforward with the default routing. Now, we want to mimic the Rails formatter behavior: we want the URL /People/Index.json to return JSON instead of the HTML page. The first thing we need to do is check whether this new URL will be routed to the correct action with the current route configuration.

With the help of the MvcContrib project, we can write the following tests to check whether our routes work as we expect:

[TestFixtureSetUp]
public void Setup()
{
    MvcApplication.RegisterRoutes(RouteTable.Routes);
}

[Test]
public void Should_map_People_url_to_people_with_default_action()
{
    "~/people".Route().ShouldMapTo<PeopleController>(x => x.Index());
}

[Test]
public void Should_map_People_index_json_url_to_people_matching_index_action()
{
    "~/people/index.json".Route().ShouldMapTo<PeopleController>(x => x.Index());
}

As expected, the former test passes, but the latter, the one that tests the new behavior, doesn’t. More importantly, it fails exactly as we expected it to fail.

Rails_MVC_2

In order to make it work, we need to add a new route to our Route Dictionary.

routes.MapRoute(
    "Format",
    "{controller}/{action}.{format}/{id}",
    new { id = "" });
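One detail to watch out for: route registration order matters. If the default “{controller}/{action}/{id}” route were matched first, a request for /people/index.json would be captured by it with “index.json” as the action name and would never reach the new route. A sketch of what RegisterRoutes would look like, assuming the standard MVC template (the exact default route values may differ in your project):

```csharp
public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    // Must be mapped before the default route; otherwise "index.json"
    // would match the {action} segment of the default route.
    routes.MapRoute(
        "Format",
        "{controller}/{action}.{format}/{id}",
        new { id = "" });

    routes.MapRoute(
        "Default",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = "" });
}
```

The route tests shown above are exactly what protects us from getting this ordering wrong.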

With this new route defined, we now get a green test. Neat.

Rails_MVC_3

With the route working and the request being routed to the right method, the controller now needs to extract the format information from the route data and handle the output format accordingly. To achieve this, I’ll create a Layer Supertype: an abstract class deriving from the base Controller class. The PeopleController will then derive from this new class instead of the base Controller.

This new controller class, which I will name RailsWannabeController, will override the OnActionExecuting method of the Controller base class in order to extract the requested format from the RouteData. It will also have a property named Format that will store the format information. Finally, it will have a FormatResult method that returns the right ActionResult according to the requested format.

Here is the test to ensure that RailsWannabeController correctly extracts the format information from the RouteData:

[Test]
public void Should_extract_format_information_from_RouteData()
{
    var expectedFormat = "json";
    var routeData = Stub<RouteData>();
    routeData.Values.Add("format",expectedFormat);
    
    var filterContext = 
        new ActionExecutingContext()
        {
            Controller = Stub<RailsWannabeController>(),
            RouteData = routeData
         };

    var controller = new StubController();
    controller.ActionExecuting(filterContext);
    var requestedFormat = controller.RequestedFormat;
    Assert.AreEqual(expectedFormat,requestedFormat);
}

As the RailsWannabeController will be an abstract class, a StubController concrete class has to be created. This class will derive from RailsWannabeController and will have a public method to enable the OnActionExecuting method to be called. It will also have a RequestedFormat property to expose the value of the requested format extracted from the RouteData.

public class StubController : RailsWannabeController
{
    public string RequestedFormat { get { return base.Format; } }
    
    public void ActionExecuting(ActionExecutingContext filterContext)
    {
        base.OnActionExecuting(filterContext);
    }
}

Now, all we need to do to make this test pass is code the RailsWannabeController class.

public class RailsWannabeController : Controller
{
    protected static string[]  ValidFormats = new [] {"html","json","xml"};
    protected string Format { get; set; }

    private const string formatKey = "format";

    protected override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        base.OnActionExecuting(filterContext);
        ExtractRequestedFormat(filterContext.RouteData.Values);
    }
    
    private void ExtractRequestedFormat(RouteValueDictionary routeValues)
    {
        if(routeValues.ContainsKey(formatKey))
        {
            var requestedFormat = routeValues[formatKey].ToString().ToLower();
            if(ValidFormats.Contains(requestedFormat))
            {
                Format = requestedFormat;
                return;
            }
        }
        Format = "html";
    }
}

Now that we have a green bar, we have to test whether the ActionResult returned by the FormatResult method is of the requested type. That means that if the “json” format is passed in the RouteData, the object returned by the FormatResult method must be of type JsonResult.

[Test]
public void Should_return_the_correct_Action_Result()
{
    var expectedFormat = "json";
    var routeData = Stub<RouteData>();
    routeData.Values.Add("format", expectedFormat);

    var filterContext = new ActionExecutingContext()
    {
        Controller = Stub<RailsWannabeController>(),
        RouteData = routeData
    };

    var controller = new StubController();
    controller.ActionExecuting(filterContext);

    var people = new[]{
                         new Person{FirstName = "Joao", LastName = "Silva"},
                         new Person{FirstName = "John", LastName = "Doe"},
                         new Person{FirstName = "Jane", LastName = "Smith"}
                     };

    var formattedResult = controller.GetFormattedResult(people);
    Assert.AreEqual(typeof(JsonResult),formattedResult.GetType());
}

 

There is a LOT of code in this test, much more than I’d like it to have. But to keep the code as explicit as possible, for the sake of clarity, I left it that way. I’ll leave the refactoring of this code as an exercise for the reader.

As you may guess from the code above, the StubController class had to be modified to include the GetFormattedResult method.

public class StubController : RailsWannabeController
{
    public string RequestedFormat { get { return base.Format; } }
    
    public void ActionExecuting(ActionExecutingContext filterContext)
    {
        base.OnActionExecuting(filterContext);
    }

    public ActionResult GetFormattedResult(object model)
    {
        return base.FormatResult(model);
    }
}

 

In the RailsWannabeController we have to create a FormatResult method that, as its name states, formats the result according to the requested format.

protected ActionResult FormatResult(object model)
{
    switch (Format)
    {
        case "html":
            return View(model);
        case "json":
            return Json(model);
        case "xml":
            return new XmlResult(model);
        default:
            throw new FormatException(
                string.Format("The format \"{0}\" is invalid", Format));
    }
}

ASP.NET MVC framework gives us the Json method out of the box. The XmlResult is provided by the MvcContrib project.

Now, all we have to do is change the PeopleController class to derive from the Layer Supertype instead of the Controller base class, and change the View() method call to a call to the FormatResult method.

public class PeopleController : RailsWannabeController
{
    public ActionResult Index()
    {        
        var people = new[]
            {
                new Person{FirstName = "Joao", LastName = "Silva"},
                new Person{FirstName = "John", LastName = "Doe"},
                new Person{FirstName = "Jane", LastName = "Smith"}
             };
        return FormatResult(people);
    }
}

OK, all the tests are green and we are done coding. Let’s check that everything we’ve done works properly.

Rails_MVC_4

Using the default route we get the same result. Great.

Rails_MVC_6

If the “json” format is provided in the URL, we get JSON instead. Woot.

Rails_MVC_7

Finally, requesting XML, we get the result as an XML file. That’s it, we’re done.

I know the post is a little long, but that’s because my intention was to make explicit the whole process of mimicking the behavior of the Rails formatter. And, as you should do as well, I kept the code I was working on covered by tests, which gave me the confidence that I wasn’t breaking anything while making the changes. It is especially important to cover your routes with tests, as route changes can introduce some hard-to-find bugs. I hope this post helps show that, although ASP.NET MVC may not have all the features we want it to have, its high extensibility allows us to extend it and easily introduce new behaviors.

As I’m not a native English speaker, I ask and encourage you to point out not only technical mistakes but also any language-related mistakes I’ve made in this post.

UPDATE: As Giovanni correctly pointed out in his comment, the MockController class is not a Mock but a Stub. I renamed the class to StubController to avoid misunderstandings. Thanks, Giovanni.