
More from me at introspective.io

Hi folks,

This has always been the home of my writing on technical matters (the name of the blog is the clue). Recently, I've been wanting to write more and more about my thoughts and learnings in business: applying Lean Startup principles, growth hacking, product management and more. The Joy of Code seemed like an odd place to host such content, so I created a new blog: introspective.io.

Head on over to read my second post, a review of my recent visit to the Lean Startup conference in San Francisco.

Tags: LeanStartup

 
Post By Josh Twist
7:59 PM
15 Dec 2014

In-App Purchase and Mobile Services.

At CocoaConf Dallas this weekend I attended a great presentation by Manton Reece on In-App purchases and subscriptions. In it, he cited a few key points:

  1. Ideally you would load the list of product IDs from a cloud service rather than baking them into your application
  2. You should verify your receipts on the server and not trust the client (it's extremely easy to cheat if you do).
  3. Ideally, the enforcement of your subscription should also occur on the server, because we never trust the client.

Since Mobile Services is…

  1. Good at storing and fetching data in the cloud
  2. Capable of executing server code and making HTTP requests from the server to other services
  3. Able to understand users and allow developers to enforce an authorization policy

... I thought it would be fun to explore how easy it would be to implement this with Windows Azure Mobile Services.

Note - this is not a tutorial on In-App Purchase. Manton mentioned that he would blog the content from his presentation, in which case I'll link to it from here. In the meantime, here are the Apple docs, which are pretty thorough.

To demonstrate the idea, we're going to take the Mobile Services quickstart and make it subscription based. That is, in order to use the app you must pay a weekly fee; otherwise the server will deny access. This will be validated and enforced on the server and associated with a particular user, so if I log in on another iOS device and have already paid my subscription, the server will automatically let me use the service based on who I am.

Since we're handling users, we need to add login to our application. To save me from boring you with that code here, check out our easy-to-follow authentication tutorial or watch Brent's 10-minute video: Authenticating users with iOS and Mobile Services.

Now for the in-app purchase stuff. Assumptions:

  • you've configured an app in the iOS provisioning portal for in-app purchase
  • you've created some auto-renewable subscriptions for you to play with (I called mine com.foo.weekly, com.foo.monthly)
  • you've created some test users in iTunes Connect to 'buy' subscriptions

Phase I - retrieving the Product IDs

This is straightforward since reading data from Mobile Services is extremely easy. I created a new table 'products' and set all permissions except READ to 'Only Scripts and Admins' (private). READ I set to 'Only Authenticated Users' because users shouldn't be buying stuff without logging in - that's a requirement of my example, though not of every app; others may choose to make this data completely public.

image

First, we need to add some new code to the QSTodoService class, including these properties (not public, so just in the .m file)

    // add an MSTable instance to hold the products
    @property (strong, nonatomic) MSTable *productsTable; 

    // add an MSTable instance to hold the receipts
    @property (strong, nonatomic) MSTable *receiptsTable;

    // add an array property to store the product data loaded from your mobile service
    @property (strong, nonatomic) NSArray *productData;

    // add an array property to the top of QSTodoService.m
    @property (strong, nonatomic) NSArray *products;

    // add a completion block for the loadProducts callback
    @property (nonatomic, copy) QSCompletionBlock loadProductsComplete;

And don't forget to initialize those tables in the init method:

    self.productsTable = [self.client tableWithName:@"products"];
    self.receiptsTable = [self.client tableWithName:@"receipts"];

We now need to adjust the code that runs after a successful login to load the products data into an array. Add the following method to your QSTodoService.m and the appropriate signature to your .h file. Note that you'll need to link StoreKit.framework.

QSTodoService will also need to implement two protocols to complete the walkthrough. I added these in QSTodoService.m as follows:

@interface QSTodoService() <SKProductsRequestDelegate, SKPaymentTransactionObserver, UIAlertViewDelegate>

Notice I added UIAlertViewDelegate too, to save time later. We'll need to implement two methods to satisfy these protocols.

Notice how we store the products in our products array property. Finally, we add a call to loadProducts to the login callback in QSTodoListViewController.

Phase II - enforce access based on receipts stored in the server

We'll need to prevent people from reading their todo lists if they haven't paid up. We can do this with a read script on the TodoItem table as follows:
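A minimal sketch of what that read script might look like is below - it assumes a receipts table with userId and expires_date columns, which is roughly what we'll end up storing in Phase IV; adjust it to whatever receipt schema you actually keep:

function read(query, user, request) {
    // look for a stored, verified receipt belonging to the current user
    var receipts = tables.getTable('receipts');
    receipts.where({ userId: user.userId }).read({
        success: function(results) {
            var now = new Date().getTime();
            // treat the user as subscribed if any receipt is still in its period
            // (expires_date is assumed to be Apple's ms-since-epoch expiry value)
            var subscribed = results.some(function(receipt) {
                return Number(receipt.expires_date) > now;
            });
            if (subscribed) {
                request.execute();
            }
            else {
                // no active subscription - the client will prompt for purchase
                request.respond(403, { error: "no active subscription" });
            }
        }
    });
}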

Phase III - add purchasing to the client

Now, we'll add code to the client that listens for the status code 403 and, if it is received, prompts the user to subscribe to the service. We'll do this inside refreshDataOnSuccess in QSTodoService.m by adding this code:

To handle the response, you'll need to implement UIAlertViewDelegate's didDismissWithButtonIndex in QSTodoService.m (this should probably be on the controller, but this blog post is focused on the concepts - the correct pattern for your app is left as an exercise for the reader). We'll also need to complete our implementation of SKPaymentTransactionObserver above.

If this is ever to fire, you'll need to add the QSTodoService class as an observer in the init method:

	[[SKPaymentQueue defaultQueue] addTransactionObserver:self];

For this to work, we'll need to create a receipts table. In this case, all operations are locked down except for INSERT, which is set to 'Only Authenticated Users'. Now for the really important part. It's here that we must check the validity of the receipt. We'll also use a technique that I call 'data-swizzling'. It's a lot like method swizzling (with which both JavaScript and Objective-C developers are often familiar). In data-swizzling, the client inserts one thing, but the server script changes the data that actually gets inserted.

In this case, we'll have the client pass the transaction receipt to the receipts table. But then we'll fire this to the App Store to check if it's valid. If it is, we'll receive a detailed response from Apple with contents of the receipt. This is the data we'll insert into the receipts table, along with the current user's userId.

Phase IV - verifying the receipt with Apple and storing it in Mobile Services
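The insert script on the receipts table ends up looking something like the sketch below: it sends the receipt to Apple's verifyReceipt endpoint and, if Apple accepts it, swaps the inserted data for the decoded receipt plus the current userId. The shared secret is a placeholder, and I'm using the sandbox URL here - switch to https://buy.itunes.apple.com/verifyReceipt for production:

function insert(item, user, request) {
    var httpRequest = require('request');

    // the client inserts { receipt: <base64 transaction receipt> }
    var body = {
        'receipt-data': item.receipt,
        password: 'YOUR-SHARED-SECRET' // placeholder - your iTunes Connect shared secret
    };

    httpRequest.post({
        uri: 'https://sandbox.itunes.apple.com/verifyReceipt', // sandbox while testing
        body: JSON.stringify(body)
    }, function(err, response, responseBody) {
        if (err) {
            request.respond(500, { error: 'could not reach the App Store' });
            return;
        }
        var result = JSON.parse(responseBody);
        if (result.status !== 0) {
            // Apple rejected the receipt - don't store anything
            request.respond(400, { error: 'invalid receipt' });
            return;
        }
        // 'data-swizzling': replace what the client sent with Apple's decoded
        // receipt data, plus the current user's id, then let the insert proceed
        var decoded = result.receipt;
        delete item.receipt;
        for (var key in decoded) {
            item[key] = decoded[key];
        }
        item.userId = user.userId;
        request.execute();
    });
}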

This isn't intended to be a full tutorial on In-App Purchase, and we haven't covered a number of topics, including restoration of purchases. Hopefully, though, this post has shown how you can leverage Mobile Services to provide a comprehensive In-App Purchase server and protect your services from customers who haven't paid.

Here's a snap of some of the data stored in my receipts table, returned from Apple (including our own userId):

image

And here’s an animated gif showing the flow:

InApp

 
Post By Josh Twist
8:58 AM
08 Apr 2013

Building your app version defense strategy – Part II

In Part I, I talked about building the smarts into your client to give you a chance to provide your users with some advice if you start to make breaking changes to your service, namely “upgrade please!”.

However, sometimes you don’t need to be quite so aggressive. In some cases you’re making a change to the server but would be happy to continue supporting older clients, they just might miss out on some new feature. I’m working on an upgrade to doto now which falls squarely in this category. Basically, I added timestamps to some records in the insert, a la:

function insert(item, user, request) {
    item.created = new Date();
    request.execute();
}

Easy enough (of course, the actual script is a little more complicated, but I removed the unrelated guff to keep this post tidy). I built an offline capability in doto that syncs to Core Data to allow you to work with your data even when you’re disconnected from the internet (more on that in another post; it’s surprisingly simple). Now this change broke the older client, because it tries to save an additional column ‘created’ to my old Core Data store. CRASH. Oh well, the old client doesn’t need that information so I can just strip it out. However, the new client could take advantage of that data.

So the second key part of my versioning defense strategy is to pass information about the current client app version in every request. This way I can add code in my script to smartly remove properties that I know will break my client, e.g.

function insert(item, user, request) {
    item.created = new Date();
    request.execute({
        success: function() {
            if (request.parameters.build < 1.2) {
                delete item.created;
            }
            request.respond();
        }
    });
}

And in the read function:

function read(query, user, request) {
    request.execute({
        success: function(results) {
            if (request.parameters.build < 1.2) {
                results.forEach(function(r) {
                    delete r.created;
                });
            }
            request.respond();
        }
    });
}

To add this information to every request, I added another filter to my client (naturally) that encodes the additional data onto the URL.

Note – Ideally I’d have used headers for this but Mobile Services doesn’t currently allow you to control HTTP headers in scripts. If you’d like this feature too, get voting on our uservoice. I logged a feature request for this. My team is very customer led so this stuff does matter. We can’t make any promises about the order we’ll tackle things (there are usually multiple variables driving prioritization) but it really does count. Anyway, for now, some query strings on the URL will do just fine.

Here’s my Objective-C filter to add some information about the app to each and every request.

In this case, I add three query string values:

  1. build – this is the bundle’s CFBundleShortVersionString
  2. version – this is the bundle’s CFBundleVersion
  3. p – this indicates the platform (e.g. iOS)

You may want to add more information like OSVersion and maybe device specific information but I’ll leave that as an exercise for the reader.

My last post incited a few great comments and questions - I look forward to more of that with this post. In one case, Nate noted how it's hard to ship breaking changes as you don't know exactly when your app will land in the store. If you genuinely have to break old clients (and the above, backwards compatible approach won't work for you) AND you can't be sure when you can roll your service forward then you’ll need to use a combination of the patterns in this series. That is, your server effectively supports both modes of operation for some time (switching behavior based on the version passed in the querystring). Then, once your new version has bedded in, you can send out those upgrade messages!

In reality I believe we tend to spread our changes over a number of ‘bridging’ releases where in addition to some tolerance in the server, a new client also has a degree of tolerance with respect to server features ‘lighting up’; this helps to pave over these cracks. Some combination of all of the above is applied based on the specific scenarios.

Of course, a key thing to avoid is supporting many versions with significantly different logic and schemas if at all possible.

Versioning is challenging, and there is no silver bullet. But if you loosely follow the advice in this post and Part I (whether you use Mobile Services or not), you’ll be in a much better place to handle those challenges.

 
Post By Josh Twist
3:06 AM
18 Mar 2013

Building your app version defense strategy – Part I

A few weeks back I shipped v1 of my first ever iOS app, doto, with accompanying mini-site, video, the lot. The goal was two-fold: build an app for my wife and me that would be actually useful and, more importantly, walk in the footsteps of the iOS developer – soup to nuts. It’s one thing to play around building apps and experimenting, but entirely another to go end-to-end and actually ship and support an app.

image

http://doto.mobi

To ensure I got this all right, I found as many iOS App Store checklists as I could; I wanted to get my app through certification first time, with no silly mistakes. And I’m pleased to report that I succeeded: 7 days later, the app arrived in the store. However, I can see in hindsight there were a couple of things missing from those checklists - things I knew would be a problem but chose to ignore anyway, in the heated excitement of shipping. Chief among them: building an app versioning strategy.

One of the challenges of modern app development is how little control you have over people updating to the latest version. I’ve released two subsequent minor updates to doto since launch yet almost 50% of the install base remain on 1.0.0, which is fine, until I start to make server changes. There are two scenarios I want to consider:

  • I make a non-breaking change to the server and want to ‘downgrade’ the server behavior for older clients (backwards compatibility)
  • I make a breaking change to the server and want to force older clients to upgrade

In this post, we’ll look at my solution for the latter. In the next post, we’ll look at how I added code to help me support older clients where backwards compatibility is possible.

Forcing the upgrade

In iOS, there’s no way to actually *force* somebody to upgrade. However, I could make the client very aware of the need to upgrade or they’ll experience issues. To do this, I decided to implement a startup request that posts information to the service about the client (version, build etc) and the server can send back a set of messages. Each message can contain three properties:

  • Title – the title of the alert box
  • Message – the text to display to the user
  • Link [optional] – an optional link that, if present, will be invoked.

The idea behind the link is that I can force the app to open the App Store, making it even easier for the user to upgrade to the latest version (and it’s free!).

To do this, I created a ‘virtual table’, which I discussed in my recently posted video giving an overview of the Mobile Services HTTP API (it’s worth watching and is less than 20 minutes long). I called the table vLoadMessages (the v prefix is a convention I use for virtual tables) and it had a single script on insert:

function insert(item, user, request) {

    var messages = [];

    // if less than 1.2, request to upgrade
    if (compareVersions(item.version, "1.2") < 0) {
        messages.push({
            title: "Please upgrade",
            message: "Please upgrade to the latest version of doto available in the store",
            link: "http://itunes.apple.com/us/app/doto/id590291737?mt=8&uo=4"
        });
    }

    request.respond(200, { messages : messages });
}

// this function helps me compare version numbers, such as 1.0 and 1.2.3
function compareVersions(a, b) {
    var i, cmp, len, re = /(\.0)+[^\.]*$/;
    a = (a + '').replace(re, '').split('.');
    b = (b + '').replace(re, '').split('.');
    len = Math.min(a.length, b.length);
    for (i = 0; i < len; i++) {
        cmp = parseInt(a[i], 10) - parseInt(b[i], 10);
        if (cmp !== 0) {
            return cmp;
        }
    }
    return a.length - b.length;
}

Notice how we only send messages back if the version of the client is < 1.2 (which is my new version that will go out shortly). Now all I have to do is load the messages when the application starts up. The first thing is to build the payload I’ll send to the server to identify the client:

NSDictionary *bundleInfo = [[NSBundle mainBundle] infoDictionary];
NSDictionary *clientDescription = @{
                                    @"version" : [bundleInfo objectForKey:@"CFBundleShortVersionString"],
                                    @"build" : [bundleInfo objectForKey:@"CFBundleVersion"],
                                    @"platform" : @"iOS",
                                    @"region" : [bundleInfo objectForKey:@"CFBundleDevelopmentRegion"],
                                    @"bundle" : [bundleInfo objectForKey:@"CFBundleIdentifier"],
                                    @"platformname" : [bundleInfo objectForKey:@"DTPlatformName"],
                                    };

Next, I ‘insert’ it into the vLoadMessages table (it’s not really going to insert - that’s why it’s a virtual table):

[self.vLoadMessages insert:clientDescription completion:^(NSDictionary *item, NSError *error) {
    if (error) {
        // do nothing, just log it using your logging methodology
        NSLog(@"Oh no, bad things: %@", error);
    }
    else if (item && [item objectForKey:@"messages"]) {
        // now let's display the messages
        [self displayMessageAtIndex:0 array:[item objectForKey:@"messages"]];
    }
}];

We expect an array of messages in the response, from 0...n (you never know, 1 might not be enough!)

- (void) displayMessageAtIndex:(int) index array:(NSArray *)messages {
    NSDictionary *message = [messages objectAtIndex:index];
    // Util is a simple class I created to help with showing alerts and 
    // providing block support
    [Util displayDialogWithTitle:[message objectForKey:@"title"]
                         message:[message objectForKey:@"message"]
                      completion:^{
                          // when the user clicks OK...
                          NSString *link = [message objectForKey:@"link"];
		      // if there's a link, open it up
                          if (link) {
                              NSURL *url = [NSURL URLWithString:link];
                              [[UIApplication sharedApplication] openURL:url];
                          }
                          int newIndex = index + 1;
                          if (newIndex == messages.count) {
                              [self setupWelcomeView];
                          }
                          else {
                              [self displayMessageAtIndex:newIndex array:messages];
                          }
                      }];
}

As an example, here's the experience when a user tries to use the app with a version older than 1.2:

upgradedoto

Check out Part II of this series

 
Post By Josh Twist
3:51 PM
07 Mar 2013

Video: Overview of the Mobile Services HTTP API

You’ve probably heard me (sorry) talk about Simplicity with Enablement with respect to Mobile Services before. One thing we strive for in the team is simplicity at every level, and this includes our HTTP API (the client SDKs are basically thin conveniences over this HTTP API). To explain the HTTP API best, I thought I’d put together a short video (18 mins).

UPDATE: You can access and download high quality versions of the embedded video above here: http://channel9.msdn.com/posts/Overview-of-the-Mobile-Services-HTTP-API.

In the video, you’ll learn how to:

  • Create your own client for Mobile Services (if you want to :))
  • Consume a Mobile Service from another backend server using the Master key
  • Create 'virtual tables' that short-circuit the database altogether
  • Understand how the Mobile Services HTTP API works, including authentication
  • Learn more about the application key
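To give a flavor of how thin that HTTP API is, here's a rough Node.js sketch of reading a table directly - the account name, table and keys are placeholders, and the X-ZUMO-* headers are how the application key, a user's auth token or the master key travel with the request:

var https = require('https');

// placeholder account and table - substitute your own
var options = {
    host: 'your-app.azure-mobile.net',
    path: '/tables/todoitem?$top=10',
    headers: {
        // the application key identifies your app to the service
        'X-ZUMO-APPLICATION': 'YOUR-APPLICATION-KEY'
        // from another backend you could send the master key instead:
        //   'X-ZUMO-MASTER': 'YOUR-MASTER-KEY'
        // and an authenticated user's JWT goes in:
        //   'X-ZUMO-AUTH': 'the-users-auth-token'
    }
};

https.get(options, function(response) {
    var body = '';
    response.on('data', function(chunk) { body += chunk; });
    response.on('end', function() {
        // the response body is just a JSON array of rows
        console.log(response.statusCode, body);
    });
});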

I hope you find it useful.

 
Post By Josh Twist
3:22 PM
05 Mar 2013

Periodic Notifications with Windows Azure Mobile Services

Periodic Notifications, sometimes called pull or polling notifications, are an additional way to update live tiles in Windows 8. Mobile Services already has awesome support for sending push notifications to tiles (plus toast and badges) as demonstrated in these tutorials:

It’s just a single line of code to pro-actively update a live tile on a user’s device:

push.wns.sendTileSquarePeekImageAndText01(channelUrl, { text1: "boo", image1src: url });

However, Windows Store apps also support an alternative delivery mechanism where the client application can be instructed to poll a specific URL for tile updates on a periodic basis. This is great in a number of scenarios such as when you have frequently updating tile content and the tile will be the same for large groups of users. The canonical example is probably a news app that updates the top news stories throughout the day.

In this case the content of the tiles is fixed for all users and, if your app is popular, that would be a lot of push notifications to move through (though Notification Hubs are another viable option here – especially for Breaking News, where you may not want clients to wait for their next poll). However, for such frequently rolling, less-urgent content, Periodic Notifications are perfect.

Since I’m expecting my app to be popular, I want to avoid having all those requests coming through an API and hitting my database - that’s pointless. Instead, I can create the notification tile XML and store it in blob storage at a URL that I’ll tell the client applications to poll as the periodic notification target.

There are two stimuli that might cause me to regenerate the file:

  1. An event – for example an insert to the ‘story’ table. The idea here is that when an editor updates my Mobile Services database with a new story I could regenerate the file in blob storage.
  2. On a periodic basis. This is the recommended approach in the Guidelines and checklist for periodic notifications and they suggest setting a period of regeneration for the server file that is equal to the period you specify on the client.

In this case, I’m going to go with option 2 and choose an hourly schedule, and naturally I’m going to use Mobile Services’ scheduler to execute a script.

image

And here’s the script that updates (or creates if necessary) the news.xml file that contains the content to be polled. This is a sample script so I set the content to be a toString of the current time. However, I know you’re an imaginative lot so I’ll let you work out where you want to source your data.
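Here's a sketch of that scheduled script (I've assumed the job is called updateNewsTile, and the storage account, key, container and tile XML below are all placeholders to replace with your own) - it uses the azure module to write news.xml into a publicly readable container:

function updateNewsTile() {
    var azure = require('azure');

    // placeholders - use your own storage account name and key
    var blobService = azure.createBlobService('your-account-name', 'YOUR-STORAGE-KEY');

    // sample content: just the current time - source your real tile data here
    var tileXml =
        '<tile><visual>' +
        '<binding template="TileSquareText04">' +
        '<text id="1">' + new Date().toString() + '</text>' +
        '</binding>' +
        '</visual></tile>';

    // make sure the container exists and is publicly readable, then
    // create (or overwrite) tiles/news.xml for clients to poll
    blobService.createContainerIfNotExists('tiles', { publicAccessLevel: 'blob' }, function(err) {
        if (err) {
            console.error('could not create container', err);
            return;
        }
        blobService.createBlockBlobFromText('tiles', 'news.xml', tileXml,
            { contentType: 'text/xml' },
            function(error) {
                if (error) {
                    console.error('tile update failed', error);
                }
                else {
                    console.log('news.xml updated');
                }
            });
    });
}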

The last step is to configure my Windows 8 application to poll this URL, and it couldn’t be easier.

using Windows.UI.Notifications;

// ...

TileUpdater tu = TileUpdateManager.CreateTileUpdaterForApplication();
var uri = new Uri("http://your-account-name.blob.core.windows.net/tiles/news.xml");
tu.StartPeriodicUpdate(uri, PeriodicUpdateRecurrence.HalfHour);

You should put the code above somewhere that runs every time the application starts or returns from suspension. And in no time at all, you can have a tile that’s always relevant and creates almost zero load on your backend.

It should be noted that there are some features in Periodic Notifications that require custom headers to be sent from the server (x-wns-tag, for example) which you can’t achieve using this approach with blob storage. As ever, stay tuned as the Mobile Services team does move quickly!

 
Post By Josh Twist
6:12 AM
18 Feb 2013

Working with Making Waves and VGTV – Mobile Services

We’ve just published a great case study:

image

Working with Christer at Making Waves was a blast and they’ve created an awesome Windows 8 application backed by Mobile Services.

Check out Christer and team talking about their experience of using Mobile Services in this short video (<5 mins)

image

 
Post By Josh Twist
10:51 PM
28 Jan 2013

Debugging your Mobile Service scripts

During the twelve days of ZUMO, I posted a couple of articles that showed techniques for unit testing your Mobile Service scripts:

And whilst this is awesome, sometimes you really want to be able to debug a script and go beyond console.log debugging. If you follow this approach to unit testing your scripts then you can use a couple of techniques to also debug your unit tests (and therefore your Mobile Service scripts). I recently purchased a copy of WebStorm during their Mayan Calendar end-of-the-world promotion (bargain) and it’s a nice tool with built-in node debugging. I asked my teammate Glenn Block if he knew how to use WebStorm to debug Mocha tests. Sure enough, he went off, did the research and posted a great article showing how: Debugging mocha unit tests with WebStorm step by step – follow these steps if you own WebStorm.

For those that don’t have a copy of WebStorm, you can still debug your tests using nothing but node, npm and your favorite WebKit browser (such as Google Chrome).

The first thing you’ll need (assuming you’ve already installed node, npm and the mocha framework) is node-inspector. To install node-inspector, just run this command with npm:

npm install -g node-inspector

On my Mac, I had to run

sudo npm install -g node-inspector

and enter my password (and note, this can take a while to build the inspector). Next, when you run your unit tests, add the --debug-brk switch:

mocha test.js -u tdd --debug-brk

This will start your test and break on the first line. Now you need to start node-inspector in a separate terminal/cmd window:

node-inspector &

And you’re ready to start debugging. Just point your webkit browser to the URL shown in the node-inspector window, typically:

http://0.0.0.0:8080/debug?port=5858

image

Now, unfortunately the first line of code that node and the inspector will break on will be mocha code and it’s all a little bit confusing here for a few minutes, but bear with it, because once you’re up and running it gets easier.

The first thing you’ll need to do is advance the script past the line ", Mocha = require('../')", which will load all the necessary mocha files. Now you can navigate to the file Runnable.js using the left pane.

image

And in this file, put a break on the first line inside the Runnable.prototype.run function:

image

If you now hit the start/stop button (F8) to run to that breakpoint, the process will have loaded your test files so you can start to add breaks:

image

Here, I’ve found my test file test.js:

image

And we’re away. After this, the WebKit browser will typically remember your breakpoints so you only have to do this once. So there you go, debugging mocha with node-inspector. Or you could just buy WebStorm.

 
Post By Josh Twist
11:34 PM
26 Jan 2013

Dispatching to different query functions in Mobile Services

It’s common to want to trigger differing behaviors inside your read script based on a parameter. For example, imagine we have a table called ‘foo’ and we want to have a default path and two special operations called ‘op1’ and ‘op2’ that do something slightly different (maybe one loads a summary of the objects to reduce the amount of traffic on the wire whilst the other expands a relationship to load child records).

Here’s my approach to this:
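In sketch form, it's a read script on the foo table that dispatches on request.parameters.operation and falls back to the default path for anything it doesn't recognize (the op1/op2 handlers just return the placeholder strings you'll see below):

function read(query, user, request) {
    var operation = request.parameters.operation;

    // dispatch to a special handler if a known operation was requested
    if (operation === 'op1') {
        operation1(query, user, request);
    }
    else if (operation === 'op2') {
        operation2(query, user, request);
    }
    else {
        // default path - execute the query as normal and return a JSON array
        request.execute();
    }
}

function operation1(query, user, request) {
    // e.g. load a summary to reduce the amount of traffic on the wire
    request.respond(200, "this result is from operation1");
}

function operation2(query, user, request) {
    // e.g. expand a relationship to load child records
    request.respond(200, "this result is from operation2");
}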

So now, if I hit the HTTP endpoint for my table

http://todolist.azure-mobile.net/tables/foo

We’ll load the records as normal, returning a JSON array. However, if we add a parameter

http://todolist.azure-mobile.net/tables/foo?operation=op1

Then we’ll get the following response:

"this result is from operation1"

And if we hit ?operation=op2 then we’ll get:

"this result is from operation2"

And, with the script above, if we hit some undeclared operation (?operation=nonsense) then we’ll go back to the default path (you may decide to throw an error instead).

 
Post By Josh Twist
3:18 AM
26 Jan 2013

Using the scheduler to backup your Mobile Service database

Recently I launched my first iOS application called ‘doto’. doto is a todolist app with two areas of focus: simplicity and sharing. I wanted a super simple application to share lists with my wife (groceries, trip ideas, gift ideas for the kids, checklist for the camping trip etc). For more info, check out the mini-site or watch the 90 second video:

image

Now that I have a real app that stores real people’s data, I feel a responsibility to ensure that I take good care of it. Whilst it’s unlikely, it is possible that I could do something silly like drop a SQL table and lose a lot of data that is important to those users. So taking a periodic backup and keeping it in a safe location is advisable.

SQL Azure has a cool export feature that creates a ‘.bacpac’ file that contains your schema and your data – it saves the file to blob storage. And what’s more, they have a service endpoint with a REST API.

This means it’s easy for me to invoke an export from a Mobile Services script, even better, I can use the scheduler to do a daily backup.

Here’s the script I use; notice how the URL of the export service varies depending on the location of your database and server.
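In sketch form, it's a scheduled job that POSTs an export request to the SQL Database Import/Export service. Treat everything below as placeholders and assumptions - the endpoint URL is region specific and the XML payload here only approximates the service's schema (it names the blob to write, the storage access key and the database connection info), so check the Import/Export documentation for the exact values:

function backupDatabase() {
    var request = require('request');

    // placeholder - look up the Import/Export endpoint for your database's region
    var exportEndpoint = 'https://YOUR-REGION-IMPORTEXPORT-ENDPOINT/DACWebService.svc/Export';

    // approximate request body: where to write the .bacpac and how to reach the database
    var exportRequestXml =
        '<ExportInput xmlns="http://schemas.datacontract.org/2004/07/Microsoft.SqlServer.Management.Dac.ServiceTypes" ' +
        'xmlns:i="http://www.w3.org/2001/XMLSchema-instance">' +
        '<BlobCredentials i:type="BlobStorageAccessKeyCredentials">' +
        '<Uri>https://yourstorage.blob.core.windows.net/backups/doto-' + new Date().toISOString() + '.bacpac</Uri>' +
        '<StorageAccessKey>YOUR-STORAGE-KEY</StorageAccessKey>' +
        '</BlobCredentials>' +
        '<ConnectionInfo>' +
        '<DatabaseName>your_database</DatabaseName>' +
        '<Password>YOUR-PASSWORD</Password>' +
        '<ServerName>yourserver.database.windows.net</ServerName>' +
        '<UserName>YOUR-USER</UserName>' +
        '</ConnectionInfo>' +
        '</ExportInput>';

    request.post({
        url: exportEndpoint,
        headers: { 'Content-Type': 'application/xml' },
        body: exportRequestXml
    }, function(err, response, body) {
        if (err || response.statusCode >= 400) {
            console.error('backup request failed', err || body);
        }
        else {
            console.log('backup requested successfully');
        }
    });
}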

And now I just have to set a schedule; I’m going to go for 1 minute past midnight UTC.

image

Restore

If I ever need to restore the backup data I can create a new database from an import, right in the portal:

image

Which opens a cool wizard that even helps me navigate my blob storage containers to find the appropriate .bacpac file. To hook this new database up to my Mobile Service I could do an ETL over to the existing connected database or use the Change DB feature in the Mobile Service CONFIGURE tab:

image


 
Post By Josh Twist
5:23 PM
20 Jan 2013

A quick lick of paint

At the end of my previous post, I promised to update the layout of thejoyofcode.com. Well, the situation was clearly so dire that it needed emergency attention and, with the help of the incredible Bootstrap, we have a new look. OK, so this design won’t be winning any ‘Webbys’ – but it only took a few hours and is much easier to read and more responsive.

Au revoir, 2005!

image

Tags: Other

 
Post By Josh Twist
5:35 AM
03 Jan 2013

Exploring custom identity in Mobile Services (Day 12)

This is the last post in the series the twelve days of ZUMO and we'll pull together many of the themes from the last few days, including unit testing scripts, generating your own auth tokens and setting the auth token.

In today's post, we'll go a little experimental and explore how we could use all these techniques and the built-in flexibility of Mobile Services to implement custom identity for our service, where Mobile Services stores your users' usernames and passwords and allows them to log on without using a social network.

In order to set this up, we'll need a table to store the user's details. I called it accounts and we'll use this table to store the credentials and also to log in using the Mobile Service client.

accounts table

Since we'll allow anyone to register, we set the INSERT permission to only require the application key. All other operations (especially READ) should be set to scripts and admins only.

We'll do all the work in the insert script with two flows:

1. Account creation

A POST (insert) to the /tables/accounts endpoint with a body will start the account creation flow. I'll leave it as an exercise for the reader to decide what other data they might want to store in the accounts table and how to validate it (e-mail, for example? Check out our integration with SendGrid).

In this flow the majority of the work is generating a salt and hashing the password before we write it to storage. 

2. Login

A POST (insert) to the /tables/accounts endpoint with a login parameter set to "true" means this is a login attempt, and we should return a 200 with user and token data if successful; otherwise we'll send a 401 Unauthorized.

In this flow, we load the user account from the database by matching the username. The record loaded should include the salt and hashed password (a unique salt per row helps prevent the use of highly effective lookup tables to crack your hashed passwords, should they ever end up in the wrong hands). We then hash the submitted password and do an equality check against the stored password; if the credentials are good, the hashes will match.

Without further ado, here's the script that does all this and should be uploaded against your accounts table's insert operation.
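What follows is a condensed sketch of that script rather than the full listing - it shows the shape of the two flows with pbkdf2 hashing; the iteration count, key length, slowEquals helper and the token placeholder are illustrative (day 8 covers minting a real ZUMO JWT):

function insert(item, user, request) {
    var accounts = tables.getTable('accounts');

    if (request.parameters.login === 'true') {
        // login flow: load the account, re-hash the submitted password with the
        // stored salt and compare it with the stored hash
        accounts.where({ username: item.username }).read({
            success: function(results) {
                if (results.length === 0) {
                    request.respond(401, 'Unauthorized');
                    return;
                }
                var account = results[0];
                hash(item.password, account.salt, function(err, hashed) {
                    if (!err && slowEquals(hashed, account.password)) {
                        // in the real script you'd mint a ZUMO JWT here (see day 8)
                        request.respond(200, {
                            userId: account.id,
                            token: 'generate-your-zumo-jwt-here'
                        });
                    }
                    else {
                        request.respond(401, 'Unauthorized');
                    }
                });
            }
        });
    }
    else {
        // registration flow: generate a salt, hash the password, store both
        var crypto = require('crypto');
        item.salt = crypto.randomBytes(16).toString('base64');
        hash(item.password, item.salt, function(err, hashed) {
            if (err) {
                request.respond(500, 'Could not create the account');
                return;
            }
            item.password = hashed;
            request.execute();
        });
    }
}

// pbkdf2 key stretching - the iteration count and key length are illustrative
function hash(password, salt, callback) {
    var crypto = require('crypto');
    crypto.pbkdf2(password, salt, 1000, 32, function(err, derived) {
        if (err) { return callback(err); }
        // derived is a Buffer on newer node versions, a binary string on older ones
        callback(null, new Buffer(derived, 'binary').toString('base64'));
    });
}

// compare the whole string rather than bailing at the first difference
function slowEquals(a, b) {
    var diff = a.length ^ b.length;
    for (var i = 0; i < a.length && i < b.length; i++) {
        diff |= (a.charCodeAt(i) ^ b.charCodeAt(i));
    }
    return diff === 0;
}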

Unit testing the script

With a script like this, it's always good to have some tests in place, and so I created a suite of mocha TDD tests to verify the behavior. This is the first script I've shown that we'll unit test and that also uses the 'tables' global in Mobile Services. It's the perfect opportunity to demonstrate a simple mockTables module that I use to mock the tables global in scripts. The idea is simple: first you create the mockTables instance:

var tables = require('./mockTables.js');

Then in each test you clear all data (or whenever appropriate):

tables.clear();

and populate the table with the data required for your test, specifying the name of the table:

tables.addItem('accounts', {
    username: "Josh",
    password: "<hash>",
    salt: "<blah>",
    etc: "…"
});

Now you can search this table in your Mobile Services scripts and it should behave as you'd expect, e.g.

var accounts = tables.getTable('accounts');
accounts.where({ username: "Josh" }).read({
   success: function(results) {
      console.log(results); // will output a single record
   }
}); 

Note this mock is great for reading data but doesn't support setting up particular behaviors (such as returning an error) or verifying the order of invocations - but it's useful nonetheless. You can also use the functional where syntax, e.g.:

accounts.where(function(a) {
    return this.x === a;
}, 1).read( // etc

The full unit test code and the simple mockTables module are shown at the bottom of the post; before that, though - the client.

Implementing the client

Believe it or not that's pretty much all we have to do on the server to implement custom identity for this post. Now for the client. We'll need to support registration and a new login approach, let's take a look at how we'd do this in Objective-C for iOS. I've decided to use categories to extend the MSClient class.

This adds login and register methods to the MSClient, so they feel right at home. And as you can see, the login method simply sets up the user and the token. You could change the register method to automatically log the user in (since, if registration was successful, they obviously have the right credentials).

I created a modified iOS quickstart that you can setup to see this in action. You'll need to set your TodoItem to authenticated for all operations and add the accounts table and script (be sure to create your own hashing key and use your own master key).


You can download the Xcode project here: CustomIdQuickstart.zip (2.6MB)

The client also uses a slightly modified filter from day 11 that uses NSNotificationCenter. 

Unit Tests and MockTables

As promised, here are the full unit tests and that mockTables code:
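The mockTables part boils down to something like this minimal sketch - just enough to support the clear/addItem/getTable/where/read calls used above, including the functional where form; it covers just the mock, not the tests themselves, and (as noted) doesn't simulate errors or verify invocation order:

// mockTables.js - a minimal in-memory stand-in for the Mobile Services 'tables' global
var data = {};

exports.clear = function() {
    data = {};
};

exports.addItem = function(tableName, item) {
    (data[tableName] = data[tableName] || []).push(item);
};

exports.getTable = function(tableName) {

    function rows() {
        return data[tableName] || [];
    }

    function matches(predicate, args) {
        return rows().filter(function(item) {
            if (typeof predicate === 'function') {
                // functional where: the row is 'this', extra arguments are passed in
                return predicate.apply(item, args);
            }
            // object where: every property must match exactly
            return Object.keys(predicate).every(function(key) {
                return item[key] === predicate[key];
            });
        });
    }

    return {
        where: function(predicate) {
            var args = Array.prototype.slice.call(arguments, 1);
            return {
                read: function(options) {
                    options.success(matches(predicate, args));
                }
            };
        },
        read: function(options) {
            options.success(rows().slice());
        }
    };
};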

IMPORTANT: In this post we follow some best practice with regard to password storage by salting and hashing the password value, using a key-stretching algorithm (pbkdf2) and a slow equals comparison. However, security and attacks continue to evolve. Remember, this is code you got from the internet and it comes with no warranty. Check out this article for more detail on password hashing: Salted Password Hashing.

Things we didn't look at

There are many. Perhaps the most obvious is that, if you support custom identity, you'll need to provide a way for users to recover their password in the event that they forget it. This isn't necessary when using Twitter or Facebook as they provide this mechanism for you. Typically, this involves an e-mail loop and, as we integrate with SendGrid, this is entirely possible to implement. Of course, another key thing to remember is that Mobile Services composes extremely well with other services in Azure (and beyond) - so it's easy to augment your Mobile Service with other capabilities as necessary.

Another thing I'd want to do to ensure the integrity of my account data before putting this into production is enforce a unique constraint on the username column, to remove the unlikely race condition of two accounts being registered with the same username.

And this closes the series "the twelve days of ZUMO" - thanks for reading, and I value your feedback. The good news is the team is working hard on making almost everything the series has covered even easier in 2013 - HAPPY NEW YEAR!

PS - it is one of my New Year's Resolutions to fix the layout and design of this blog :)

 
Post By Josh Twist
3:19 AM
01 Jan 2013

Handling expired tokens in your application (Day 11)

UPDATE: My buddy Carlos created an updated article that shows how to use the replacement for ServiceFilters in managed clients, check it out: Caching and handling expired tokens in azure mobile services managed SDK

On day 8 we looked at how you can generate your own Mobile Services JWT tokens to create a custom identity. One thing you may have noticed is that the JWT has an expiry, meaning that at some point the user’s authentication token will become invalid. In day 10’s post we also looked at an approach that would allow you to cache the user’s token to save them logging in repeatedly. However, the combination of an expiry and a cached token has an obvious downfall – what happens when it expires? Well, you have to log your user back in, obviously.

So you need to be ready to receive a 401 Unauthorized response and log the user back in. But you probably don’t want to write this code every time you call Mobile Services, right? Enter filters, which we covered in Day 3. Filters allow me to intercept calls to and responses from the Mobile Service for any given client instance. This will allow us to inspect the response for a 401 status code, trigger the login flow and then, best of all, automatically replay the request that was rejected with a 401.

Here’s the code for C# / Windows 8:

As you can see, all this filter does is inspect the response for a 401 status code. If the call was unauthorized, we log the user in again and then attach the new authentication token to the original request (by modifying the header). Resend the request and then send the response back to the consuming code, as though nothing happened!

You can also do this on iOS / Objective C:

The iOS implementation assumes it’s safe to grab the current rootViewController (from the current UIApplication delegate’s  window). This may not be the case for your application. In reality, you’d probably want to take this concept and use it to trigger your logout/login presentation (e.g. via NSNotificationCenter). I’ll leave this as an exercise for the reader as it is likely to vary by application.

From now on, I’ll be using this in all applications as my primary login flow as it composes extremely well with cached credentials. Join us again tomorrow for the last day of the twelve days of ZUMO.

 
Post By Josh Twist
10:28 PM
31 Dec 2012

Setting the auth token in the Mobile Services client and caching the user’s identity (Day 10)

On Day 8, we looked at how you can generate your own ZUMO authentication token. The good news is, if you do want to generate your own tokens (say you want to create a private identity system or integrate with ADFS) then you can still use the Mobile Services client to work with your tables (and logout will still work too!).

All you have to do is manually ‘log the user in’ rather than use the built in login methods. To do this, you just need to set the user object on the Mobile Service client instance, here’s how – in C#, JS and Objective C:
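The original snippet covered all three; here's just the JavaScript flavor as a rough sketch (the URL, application key, userId and token are placeholders - the C# and Objective-C clients expose an equivalent CurrentUser/currentUser property):

// assumes the standard HTML/WinJS Mobile Services client library is loaded
var client = new WindowsAzure.MobileServiceClient(
    'https://your-app.azure-mobile.net/',
    'YOUR-APPLICATION-KEY');

// manually 'log the user in' with an id and token you generated yourself
client.currentUser = {
    userId: 'custom:12345',                                  // placeholder
    mobileServiceAuthenticationToken: 'your-zumo-jwt-here'   // placeholder
};

// from here the client sends the token with every request as usual
client.getTable('todoitem').read().then(function(items) {
    console.log(items);
});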

So if you build your own login mechanism, you’re not forced to build your own client. And what’s more, many people want to cache the full identity of their user so they don’t have to log in each time the application starts. This is entirely possible now: all you have to do is store the user ID and authentication token locally on the machine*. Next time the app starts, you check the cache and if these values are available, you skip the login flow and rehydrate the client with this data.

Be careful though: the authenticationToken is sensitive data and ideally (depending on the nature of your application and the data it stores), if you store this on the device you should store it encrypted, so that should the device get stolen, the bad guy can’t read the value off the disk and use it. Fortunately, Windows and iOS both provide mechanisms to help with this - check out Windows’ Credential Locker and the iOS Keychain.

This was day 10 of the twelve days of ZUMO.

 
Post By Josh Twist
3:22 AM
31 Dec 2012

Fetching a basic user profile in Mobile Services (Day 9)

When building an application that has users authenticate, you often want to augment that user data with some kind of profile. Perhaps a nickname or real name, or a location. We collect this in the doto sample, and we also use the technique I’m going to post about here today.

image

Above you can see the registration screen in doto, but this is how it appears when you open the app, because we prefill the data based on your Microsoft Account profile when you log in. This is a nice gesture for the user, as most will accept the defaults, but those who want to represent themselves differently in your app can customize the name.

Today, I want to share some server code that can help you do the same based on your users’ public Twitter and Facebook profiles if they log in via Twitter or Facebook. The idea behind the simple function fetchBasicProfile below is that you can simply pass in the user object in your Mobile Service server script, and it will invoke your callback with a simple profile object that has common properties for both Facebook and Twitter (which have wildly different basic profile structures available). Here’s the code:

And that includes a test you can run using Mocha JS (see day 7 for more on this).
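The listing is fairly long, so here's a condensed sketch of the shape of fetchBasicProfile: it works out the provider from user.getIdentities(), fetches the public profile over HTTP and maps it onto the common shape listed below. The Facebook branch uses the Graph API; the Twitter branch is stubbed out here because Twitter's API needs OAuth-signed requests - treat the exact field mappings as assumptions:

function fetchBasicProfile(user, callback) {
    var request = require('request');
    var identities = user.getIdentities();

    if (identities.facebook) {
        // the Facebook userId looks like 'Facebook:1234567890'
        var fbId = user.userId.split(':')[1];
        request.get({
            url: 'https://graph.facebook.com/me',
            qs: { access_token: identities.facebook.accessToken },
            json: true
        }, function(err, response, body) {
            if (err) { return callback(err); }
            callback(null, {
                source: 'Facebook',
                id: fbId,
                name: body.name,
                firstName: body.first_name,
                lastName: body.last_name,
                userName: body.username,
                smallImageUrl: 'https://graph.facebook.com/' + fbId + '/picture?type=square',
                largeImageUrl: 'https://graph.facebook.com/' + fbId + '/picture?type=large'
            });
        });
    }
    else if (identities.twitter) {
        // Twitter's REST API requires OAuth-signed requests; the real implementation
        // signs a call to users/show and maps screen_name, name and the profile
        // image URLs onto the same shape - omitted in this sketch
        callback(new Error('Twitter branch not implemented in this sketch'));
    }
    else {
        callback(new Error('unsupported identity provider'));
    }
}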

The properties returned in the basic profile are:

  • name – the full name of the user, e.g. Josh Twist
  • firstName – the first name, e.g. Josh (not available on Twitter)
  • lastName – the surname or family name, e.g. Twist (not available on Twitter)
  • userName – a username specific to the source, e.g. @joshtwist or joshtwist
  • smallImageUrl – a URL to a small (~50x50) profile image
  • largeImageUrl – a bigger image (~200x200)
  • source – the source of the information, e.g. Facebook or Twitter
  • id – the id of the user in the source system, usually a large number

It’s super easy to use and you can just paste this function into any of your scripts that want to do this.

This was day 9 of the twelve days of ZUMO. Join us tomorrow for day 10.

 
Post By Josh Twist
11:49 PM
29 Dec 2012
© 2005 - 2014 Josh Twist - All Rights Reserved.