• Resolving Angular $http promises in services vs. controllers


    For some time now I’ve been asking myself one thing when it comes to resolving $http promises in Angular – “would it be a better practice to resolve them in services that make the calls, or in controllers that call these services?”. I always simply went with whatever the practice was on the current project, as I didn’t want to introduce inconsistencies. Well, I finally sat down, played around with it a bit and gave it some active thinking. Turns out the answer hugely depends on the context and there is no right or wrong way to do it, but I’ll explain how I decided to do it from now on.

    This post is not about basic usage of $http, then, success, error, callbacks or promises in general. For that I recommend a very nice blog post by dwmkerr.

    Now, one “sub”-question I had here was whether I should use then or success. Although a lot of people seem to dislike the success and error callbacks because their signatures are inconsistent with the then callback (they are only a thin wrapper around it), I actually find it very useful that I don’t have to extract the “data” from the response object on my own. If I need to do something like that, I still have the option of falling back to then (which is fine). Some people are bothered by this enough to wrap their responses in new promises using $q just to match the then signature but, as Rick Strahl wrote, in this case I don’t really mind trading a bit of inconsistency for simplicity. I don’t see the point in adding an extra chunk of wrapper code to every API call just for the sake of it. So, I decided to go with the success/error combination.
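    To illustrate the difference (this is only a mock of the idea – not Angular’s actual implementation), success can be seen as a thin wrapper that unwraps the data from the response object before handing it to your callback:

```javascript
// Mock of the idea behind $http's success() – NOT Angular's real source.
// makeHttpLikePromise simulates an already-resolved $http call synchronously.
function makeHttpLikePromise(response) {
    var promise = {
        // then() hands you the whole response object
        then: function (cb) { cb(response); return promise; },
        // success() unwraps data and status for you
        success: function (cb) { cb(response.data, response.status); return promise; }
    };
    return promise;
}

var viaThen, viaSuccess;
makeHttpLikePromise({ data: { id: 1 }, status: 200 })
    .then(function (response) { viaThen = response.data; })
    .success(function (data) { viaSuccess = data; });
// both end up holding { id: 1 }, but success() saves the manual unwrapping
```

    Both callbacks see the same data in the end – the only difference is who does the unwrapping.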

    Back to the main question… I never make $http requests directly from controllers; along with any additional “client-side business logic”, that code goes into services. As a rule of thumb, I decided to go with a very simple approach. Since what happens after the success callback kicks in is most often the controller’s concern, my services return $http promises and the success callbacks are then resolved from within controllers. If there really is a need (and it makes sense) to resolve the success callback in the service, I will do it there instead. The whole thing looks something like this:
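    The original embedded snippet is no longer available here, but the pattern itself is simple. A minimal sketch with a mocked, synchronous $http (the service, URL and model names are made up for illustration):

```javascript
// Mocked, synchronous stand-in for $http – just enough to show the pattern.
var $http = {
    get: function (url) {
        var response = { data: [{ id: 1, name: "Product" }], status: 200 };
        return {
            success: function (cb) { cb(response.data, response.status); return this; },
            error: function (cb) { return this; }
        };
    }
};

// The service only makes the call and returns the $http promise...
var productService = {
    getProducts: function () {
        return $http.get("/api/products");
    }
};

// ...and the controller is the one that resolves the success callback.
var vm = {};
productService.getProducts().success(function (data) {
    vm.products = data;
});
```

    The service stays a thin data-access layer, and each controller decides what to do with the data it receives.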

    Now you’re probably wondering – what about the error callbacks? I could think of a few scenarios of what could go wrong here:

    1. Unhandled exceptions
    2. 404’s
    3. “Expected exceptions” such as unauthorized (401), forbidden (403) or anything else you might knowingly return from the back-end
    4. Back-end model validation (I decided to go with 422 for this)


    To make my life easier, for the first three I decided to go with an http-interceptor-service which is in charge of handling WebAPI exceptions. This way I don’t have to rewrite the same error callback code for every $http request. It’s nice, centralized and provides enough flexibility (assuming you’re taking good care of your WebAPI and return proper http statuses). I will explain how to make an http interceptor in my next post.

    As for the last, fourth case, I created a couple of directives that wrap html input elements (text, textarea, dropdown…), WebAPI model state and validation messages (which follow the format of Foundation Abide). For this to work, model state is needed inside a controller, and since $http treats the 422 status code as an error, this was so far the only situation where I had to resolve error callbacks inside controllers. In this case the http interceptor simply skips any 422 it encounters so it can be taken care of elsewhere. I will explain this in more detail in my next-next post. Pinky swear. ;)
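    Until those posts arrive, the gist of that logic can be sketched in plain JS (a real Angular interceptor would be registered via $httpProvider.interceptors and use $q.reject – the handler map below is entirely made up for illustration):

```javascript
// Simplified mock of a responseError interceptor. "handlers" maps the http
// statuses we knowingly return from the back-end to centralized handlers.
function createErrorInterceptor(handlers) {
    return {
        responseError: function (rejection) {
            // 422 (back-end model validation) is deliberately skipped here –
            // it gets resolved inside controllers / validation directives
            if (rejection.status !== 422 && handlers[rejection.status]) {
                handlers[rejection.status](rejection);
            }
            // propagate so individual error callbacks can still fire
            throw rejection;
        }
    };
}

var handled = [];
var interceptor = createErrorInterceptor({
    401: function () { handled.push("unauthorized"); },
    404: function () { handled.push("not found"); }
});

try { interceptor.responseError({ status: 401 }); } catch (e) { /* propagated */ }
try { interceptor.responseError({ status: 422 }); } catch (e) { /* skipped, still propagated */ }
// handled now contains only "unauthorized" – the 422 passed through untouched
```

    The point is the single branch on the status code: everything the interceptor recognizes is handled centrally, while 422 falls through to whoever needs the model state.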

    The approach explained above might not be the best way to cope with the whole problem, but it has worked well for me so far, so I hope I was at least able to provide a couple of useful ideas. I did try to google other blog posts / SO threads about this, but I only found a few that dealt with something similar, not the exact question. If you know of any good ones, please feel free to drop a link down in the comments. Also, if you have a different approach which works for you, or you see any problems with mine, do let me know! :)

    Cheers!

  • Enhancing RESTful WebAPI controllers with RPC style endpoints


    During the setup stage of the new project I’m working on, the decision was made to try and use RESTful WebAPI controllers that would support RPC style endpoints as well. I did a bit of research and found a nice post by Carolyn Van Slyck that explains how this can be achieved by creating a few different routing rules in the WebApiConfig file. However, I wasn’t fully satisfied with that approach, so I tried to do it in a different way.

    If you follow .NET’s WebAPI conventions, you can simply write action methods that start with http verbs (GET, POST, PUT, DELETE) and everything will work out of the box with the default WebApiConfig setup. For example, you can name your RESTful action methods something like GetAllProducts, GetProduct, PostProduct, etc. No extra routing attributes (such as RoutePrefix, Route, HttpGet/Post/Put/Delete) are needed for this approach – WebAPI will in this case expect the correct http verb.

    However, as soon as you add a custom action, it will start causing problems (you will start getting the infamous “Multiple actions were found that match the request” response). Say you add a CustomGetEndpoint method – this will cause GetAllProducts and GetProduct to stop working. Luckily, by adding a few things we can make it all work.

    The first step is to enable attribute routing in your WebApiConfig, which comes down to a single config.MapHttpAttributeRoutes() call:

    The second step is to add the RoutePrefix attribute to your controller and the Route and http verb attributes to all your custom RPC actions (they of course don’t have to start with “Custom” – you can name them whatever you want):

    In this case ProductModel is very simple:

    Keep in mind that if you are using a BaseApiController class which inherits .NET’s ApiController, all its methods should be protected. If you make your BaseApiController methods public, this will mess up the routing and you will start getting the “Multiple actions were found that match the request” response (took me long enough to figure that one out!).

    That’s all folks, you can now enjoy both worlds at the same time. Enjoy!

  • Automatic WebAPI property casing serialization


    If you work a lot with WebAPIs and JavaScript and would like to follow the convention of lowerCamelCasing your JSON and UpperCamelCasing your .NET model properties, you can do that by using the Newtonsoft.Json CamelCasePropertyNamesContractResolver. This works in both directions, “WebAPI -> client” and “client -> WebAPI”, so that’s cool as well. You simply set it up in your Global.asax and you’re good to go.

    Well, that was an easy one. It was also my chance to try out the Gist GitHub ShortCode plugin. I think it works great and it’s easier to use – I like it! If you’re using WP, you should try it as well. Hopefully it won’t take me too long to migrate everything from the Code Colorer plugin that I’ve been using so far. :)

  • TF300T Easter brick resurrection


    Since Asus stopped providing official updates for the TF300T tablet, leaving it at the old Android 4.2.1, I decided it was finally time to root it and put a brand new shiny Lollipop ROM onto it. I had rooted all my previous phones and it went well every time, so I figured it would go smoothly this time as well. Except it didn’t. :D

    Somehow I managed to follow an outdated tutorial for unlocking and flashing the TF300T that recommended using the ClockWorkMod recovery. All went well up to a point – I unlocked the tablet and flashed CWM without any trouble, but when I tried to flash the CyanogenMod ROM, CWM complained about not being able to mount any partitions. That meant I couldn’t select the ROM zip file from an internal location or sideload it with ADB. Funky.

    So I did a bit more research and it turned out that CM does not really support CWM for Lollipop (not sure if only for this device or in general, but it doesn’t matter anyway). Solution – flash the TWRP recovery instead. Ok, so I went back to fastboot and tried to flash TWRP over CWM, but it failed every time. This is where it all started to go downhill…

    My first mistake was that I downloaded the *-JB version of TWRP instead of the *-4.2. I was quite lucky here, as it was only later that I read that getting this part wrong guarantees a hardbrick. Instead of everything going up in smoke right away, flashing TWRP simply failed every time (RCK would stop blinking and rebooting using fastboot -i 0x0B05 reboot didn’t work) and each reboot into recovery would load CWM again and again. I could still boot into Android as well. I thought that was strange so I tried a couple more times, but of course it didn’t help.

    My second (even bigger) mistake was selecting the “wipe” option from the fastboot screen… Fool of a Took. ‘#$&%*!… I thought this would somehow help, but instead I got stuck inside an infinite CWM loop. I had ADB access, but rebooting to fastboot using adb reboot-bootloader simply didn’t work. After ~6 hours of trying I almost gave up, but after a bit more research I found a slightly different version of the same command – adb reboot bootloader (without the dash) – and BINGO! That was my way back. So happy! Now I could boot into fastboot again.

    Well, now I only had to figure out how to get rid of CWM… This SE answer stated that I should restore the device to defaults by flashing the Asus stock ROM (download the firmware for your language, unzip it and flash the *.blob file). Running fastboot erase before flashing the Asus firmware looked reeeally scary, but it was the only option I had, so I went for it… It all went great, and after that it was very easy to flash TWRP and then CM and Gapps from TWRP.

    Problem solved! After ~8 hours of not giving up, my tablet was resurrected and alive again! Easter day of 2015. True story.

    Well, lesson learned – do the research and RTFM in advance. And I hope the post helps some other impatient bricker! :)

    PS – the device works faster/smoother/better with Lollipop. *thumbsup* for the Android team.

    Cheers!

  • Secret Arcade Jam – FireWallCade


    So, last weekend a Secret Arcade Jam was held, organized by Erik Svedäng for his else Heart.break() game. The goal was to create a mini-game that can be run on computer terminals inside the game. Erik created a simple Ruby-inspired programming language called Sprak which is used to code the mini-games. What you get inside the game is a terminal with an editor (it even has syntax-highlighting and everything!), compiler and runner to try your code out..

    This all sounded like a whole lot of awesomesauce to my friend Dalibor and me, so we decided to give it a go and try and make something. After two days of (haaard) work, we had our own mini game – *drumroll* FireWallCade. It’s a very simple game. You have good and bad “network packets” (green and red blocks) falling down from the top of the screen at increasing speed. Then there are two “ports” at the bottom which can be opened and closed by using left and right keys. The goal here is to block the bad packets and to let the good ones pass through. We even created a splash screen in ASCII, a menu and a GameOver screen. Here are a few screenshots..

    [screenshots: the else Heart.break() shell · the code editor (very cool!) · the ASCII splash · the about screen (SPACE handlers eveeerywheeereee) · gameplay :)]

    Well anyway, we won! :D People were able to vote on their favorite game and apparently we got over 40% of the votes. Thanks everyone! :)

    The prize for the first place was – your game ends up in else Heart.break() itself. That means we now actually need to make it even better! Optimize it a bit, perhaps make it look a bit nicer as well, we’ll see… I hope Erik’s game ends up good and interesting.

    Bye-bye-bye-bye-bye-byeeeeeee


  • Dynamically resolving function shared arguments in JavaScript


    Sometimes we have functions which expect the same arguments as other functions – all fine there. But sometimes these arguments are obtained/resolved asynchronously, and if a lot of functions share the same resource, we can end up with a lot of unnecessary boilerplate.

    Imagine having a function like resolveMetaData() which asynchronously obtains fresh data every time it’s called (to keep the code at a very simple level, for the purpose of this post I’ll be using setTimeout() instead of something a bit more complex like an AJAX call):

    function resolveMetaData(callback) {
        // for the purpose of the demo we'll simply mock the metaData object    
        var metaData = { message: "Meta message", start: new Date() };

        // async business logic example
        setTimeout(function () {
            metaData.end = new Date();
            // after metaData is ready, resolve callback
            callback(metaData);
        }, 1000);
    }

    And two functions that require a new instance of metaData upon their execution:

    function fnOne(data, metaData) {
        console.log(data);
        console.log(metaData);
    }

    function fnTwo(id, data, metaData) {
        console.log(id);
        console.log(data);
        console.log(metaData);
    }

    The simplest way to provide the latest metaData to these functions would be to use callbacks like this:

    resolveMetaData(function(metaData) {
        fnOne("fnOne", metaData);
    });

    resolveMetaData(function(metaData) {
        fnTwo(1, "fnTwo", metaData);
    });

    If you had to write a lot of functions similar to fnOne() and fnTwo() (i.e. 10 or more) and all of them required the latest metaData, you would most probably be tempted to somehow reduce the code and get rid of the callback boilerplate. The first two ideas that came to my mind were function overloads and/or having a base function that would handle metaData resolving. Since JS doesn’t really support overloading (in the same way as, say, C# does), having a base function to handle metaData resolving seems like a safe bet. The only question is – how do we call a function in JS with the parameters we got, and resolve the shared parameters asynchronously?

    Fortunately, Function.prototype.apply() comes to the rescue! It allows us to call a function with its arguments given as an array, which is quite handy. Since functions in JS are objects, we can create a base function which accepts the function object of the function we wish to call and the args we have at that point. It then resolves metaData, appends it to the arguments array and calls the passed function with these arguments. This is what the base function looks like:

    function fnBase(fn, args) {
        resolveMetaData(function (metaData) {
            args.push(metaData);
            fn.apply(this, args);
        });
    }

    And this is how we can now call fnOne() and fnTwo() through fnBase():

    fnBase(fnOne, ["fnOne"]);
    fnBase(fnTwo, [1, "fnTwo"]);

    It would be possible to place metaData as the first argument in the fnOne() and fnTwo() signatures, but that would require additional argument position handling in fnBase(), so it is probably best to keep metaData as the last argument.
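    For completeness, that extra handling would boil down to prepending instead of appending – a sketch of a metaData-first variant, with a synchronous mock resolver (made up for this example) so the snippet is self-contained:

```javascript
// Synchronous stand-in for resolveMetaData(), just for this sketch.
function resolveMetaDataSync(callback) {
    callback({ message: "Meta message" });
}

function fnBaseFirst(fn, args) {
    resolveMetaDataSync(function (metaData) {
        args.unshift(metaData); // prepend instead of push
        fn.apply(null, args);
    });
}

var seen;
fnBaseFirst(function (metaData, id) { seen = [metaData.message, id]; }, [42]);
// seen is now ["Meta message", 42]
```

    It works, but every function signature then has to agree on metaData coming first, which is why keeping it last felt more natural to me.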

    That’s it, hope it helps. Enjoy! :)
