Tag Archives: JavaScript

GeoPix: A sample iOS app powered by IBM MobileFirst for Bluemix

In this post I’d like to show a fairly simple application that I put together which demonstrates some of the rich capabilities you get out of the box with IBM MobileFirst for Bluemix – all with an absolutely minimal amount of your own developer effort.  Bluemix, of course, is IBM’s platform-as-a-service offering.

GeoPix is a sample application leveraging IBM MobileFirst for Bluemix to capture data and images on a mobile device, persist that data locally (offline), and replicate that data to the cloud. Since it’s built with IBM MobileFirst, we get lots of things out of the box, including operational analytics, user authentication, and much more.

(full source code at the bottom of this post)

Here’s what the application currently does:

  • User can take a picture or select an image from the device
  • App captures geographic location when the image is captured
  • App saves both the image and metadata to a local data store on the device.
  • App uses asynchronous replication to automatically push any data in the local store up to the remote store whenever the network is available
  • Oh yeah, can’t forget, the user auth is via Facebook
  • MobileFirst provides all the analytics we need.  Bluemix provides the cloud-based server and Cloudant NoSQL data store.
  • All captured data is available on a web-based front-end powered by Node.js

Here’s a video of it in action:

… and you can check out the web interface at geopix.mybluemix.net.


This is powered by the iOS 8 MobileFirst application boilerplate on Bluemix.  With this application template you can have your backend infrastructure set up within minutes, and it includes:

  • User authentication
  • Usage/operational analytics
  • Cloudant NoSQL DB
  • Simplified Push Notifications
  • Node.js backend

In this sample I’m using everything but the Push Notifications service: user authentication, the Cloudant DB (offline/local store and remote/cloud store), and the Node.js backend.  You get the operational analytics automatically.

To get started, you just need to create a new iOS 8 mobile application on Bluemix.  See my video series on Getting Started with IBM MobileFirst for Bluemix for a complete walkthrough of creating a new app using MobileFirst for Bluemix, or check out the Getting Started Guide in the official docs.

You need to initialize your app and make sure you have set up the Facebook identity provider.  You can set up Facebook authentication at https://developers.facebook.com/.  Once the user is authenticated, the client app is fully functional.

The app UI is very simple, basically just two buttons for capturing images (the last captured image shows up in the background):

App’s main UI

There’s also a gallery for viewing local images:

Local gallery view

Capturing Location

Capturing data is very straightforward.  The geographic location is captured using Apple’s Core Location framework.  We just need to implement the CLLocationManagerDelegate protocol:

- (void)locationManager:(CLLocationManager *)manager
     didUpdateLocations:(NSArray *)locations {

    self.currentLocation = [locations lastObject];
    NSDate *eventDate = self.currentLocation.timestamp;
    NSTimeInterval howRecent = [eventDate timeIntervalSinceNow];
    // use fabs() rather than abs(): NSTimeInterval is a double
    if (fabs(howRecent) < 15.0) {
        // If the event is recent, display the coordinates.
        locationLabel.text = [NSString stringWithFormat:@"Lat: %+.5f, Lon: %+.5f\n",
            self.currentLocation.coordinate.latitude,
            self.currentLocation.coordinate.longitude];
    }
}

Then initialize CLLocationManager using our class as the location manager’s delegate:

if (self.locationManager == nil)
  self.locationManager = [[CLLocationManager alloc] init];
self.locationManager.delegate = self;
self.locationManager.desiredAccuracy = kCLLocationAccuracyBest;
self.locationManager.pausesLocationUpdatesAutomatically = YES;

Capturing Images

Capturing images from the device is also very straightforward.  In the app I leverage Apple’s UIImagePickerController to allow the user to either upload an existing image or capture a new one.  See the presentImagePicker and didFinishPickingMediaWithInfo methods below; all of this is standard practice using Apple’s developer SDK:

- (void) presentImagePicker:(UIImagePickerControllerSourceType) sourceType {
 if ( sourceType == UIImagePickerControllerSourceTypeCamera  && ![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
  [logger logErrorWithMessages:@"device has no camera"];
  UIAlertView *myAlertView = [[UIAlertView alloc] initWithTitle:@"Error"
                 message:@"Device has no camera"
                delegate:nil
             cancelButtonTitle:@"OK"
             otherButtonTitles: nil];
  [myAlertView show];
 }

 if ( sourceType != UIImagePickerControllerSourceTypeCamera || [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] ){
  UIImagePickerController *picker = [[UIImagePickerController alloc] init];
  picker.delegate = self;
  picker.allowsEditing = NO;
  picker.sourceType = sourceType;

  [self presentViewController:picker animated:YES completion:NULL];
 }
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {

 [logger logDebugWithMessages:@"didFinishPickingMediaWithInfo"];
 UIImage *image = info[UIImagePickerControllerOriginalImage];
 currentImage.image = image;
 [[DataManager sharedInstance] saveImage:image withLocation:self.currentLocation];
 [picker dismissViewControllerAnimated:YES completion:nil];
}

Persisting Data

If you notice in the didFinishPickingMediaWithInfo method above, there is a call to the DataManager’s saveImage withLocation method. This is where we save data locally and rely on Cloudant’s replication to automatically save data from the local data store up to the Cloudant NoSQL database.  This is powered by the iOS 8 Data service from Bluemix.

The first thing that we need to do is initialize the local and remote data stores. Below is the init method from my DataManager class: the local data store is initialized first, then the remote data store. If either data store already exists, the existing store is used; otherwise it is created.

-(id) init {
 self = [super init];

 if ( self ) {
  logger = [IMFLogger loggerForName:NSStringFromClass([self class])];
  [logger logDebugWithMessages:@"initializing local datastore 'geopix'..."];

  // initialize an instance of the IMFDataManager
  self.manager = [IMFDataManager sharedInstance];

  NSError *error = nil;
  //create a local data store
  self.datastore = [self.manager localStore:@"geopix" error:&error];

  if (error) {
   [logger logErrorWithMessages:@"Error creating local data store %@",error.description];
  }

  //create a remote data store
  [self.manager remoteStore:@"geopix" completionHandler:^(CDTStore *store, NSError *error) {
   if (error) {
    [logger logErrorWithMessages:@"Error creating remote data store %@",error.description];
   } else {
    [self.manager setCurrentUserPermissions:DB_ACCESS_GROUP_MEMBERS forStoreName:@"geopix" completionHander:^(BOOL success, NSError *error) {
     if (error) {
      [logger logErrorWithMessages:@"Error setting permissions for user with error %@",error.description];
     }

     [self replicate];
    }];
   }
  }];

  //start replication
  [self replicate];
 }

 return self;
} 

Once the data stores are created, you can see that the replicate method is invoked.  This starts the replication process to automatically push changes from the local data store to the remote data store “in the cloud”.

Therefore, if you’re collecting data while the app is offline, you have nothing to worry about.  All of the data will be stored locally and pushed up to the cloud whenever you’re back online – all with no additional effort on your part.  When using replication with the Cloudant SDK, you just have to start the replication process and let it do its thing… fire and forget.

In my replicate method, I set up CDTPushReplication for pushing changes to the remote data store.  You could also set up two-way replication to automatically pull new changes from the remote store.

-(void) replicate {
 if ( self.replicator == nil ) {
  [logger logDebugWithMessages:@"attempting replication to remote datastore..."];

  __block NSError *replicationError;
  CDTPushReplication *push = [self.manager pushReplicationForStore: @"geopix"];
  self.replicator = [self.manager.replicatorFactory oneWay:push error:&replicationError];
  if(replicationError){
   // Handle error
   [logger logErrorWithMessages:@"An error occurred: %@", replicationError.localizedDescription];
  }

  self.replicator.delegate = self;

  replicationError = nil;
  [logger logDebugWithMessages:@"starting replication"];
  [self.replicator startWithError:&replicationError];
  if(replicationError){
   [logger logErrorWithMessages:@"An error occurred: %@", replicationError.localizedDescription];
  }else{
   [logger logDebugWithMessages:@"replication start successful"];
  }
 }
 else {
  [logger logDebugWithMessages:@"replicator already running"];
 }
}

Once we’ve set up the remote and local data stores and replication, we are ready to save the data that we’re capturing within our app.

Next is my saveImage withLocation method.  Here you can see that it creates a new CDTMutableDocumentRevision object (a generic document object for the Cloudant NoSQL database) and populates it with the location data and timestamp.  It then creates a JPEG from the UIImage (passed in from the UIImagePicker above) and adds the JPEG as an attachment to the document revision.  Once the document is created, it is saved to the local data store, and we let replication take care of persisting the data to the back end.

-(void) saveImage:(UIImage*)image withLocation:(CLLocation*)location {

 [logger logDebugWithMessages:@"saveImage withLocation"];

 //save in background thread
 dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^(void) {

  [logger logDebugWithMessages:@"creating document..."];

  NSDate *now = [NSDate date];
  NSString *dateString = [NSDateFormatter localizedStringFromDate:now
                 dateStyle:NSDateFormatterShortStyle
                 timeStyle:NSDateFormatterFullStyle];

  // Create a document
  CDTMutableDocumentRevision *rev = [CDTMutableDocumentRevision revision];
  rev.body = @{
      @"sort": [NSNumber numberWithDouble:[now timeIntervalSince1970]],
      @"clientDate": dateString,
      @"latitude": [NSNumber numberWithFloat:location.coordinate.latitude],
      @"longitude": [NSNumber numberWithFloat:location.coordinate.longitude],
      @"altitude": [NSNumber numberWithFloat:location.altitude],
      @"course": [NSNumber numberWithFloat:location.course],
      @"type": @"com.geopix.entry"
      };

  [logger logDebugWithMessages:@"creating image attachment..."];

  NSDate *date = [NSDate date];
  NSString *imageName = [NSString stringWithFormat:@"image%f.jpg", [date timeIntervalSince1970]];

  NSString *tempDirectory = NSTemporaryDirectory();
  NSString *imagePath = [tempDirectory stringByAppendingPathComponent:imageName];

  [logger logDebugWithMessages:@"saving image to temporary location: %@", imagePath];

  NSData *imageData = UIImageJPEGRepresentation(image, 0.1);
  [imageData writeToFile:imagePath atomically:YES];

  CDTUnsavedFileAttachment *att1 = [[CDTUnsavedFileAttachment alloc]
            initWithPath:imagePath
            name:imageName
            type:@"image/jpeg"];

  rev.attachments = @{ imageName: att1 };

  [self.datastore save:rev completionHandler:^(id savedObject, NSError *error) {
   if(error) {
    [logger logErrorWithMessages:@"Error creating document: %@", error.localizedDescription];
   }
   [logger logDebugWithMessages:@"Document created: %@", savedObject];
  }];

  [self replicate];
 });
}

If we want to query data from either the remote or local data stores, we can just use the performQuery method on the data store. Below you can see a method for retrieving data for all of the images in the local data store.

-(void) getLocalData:(void (^)(NSArray *results, NSError *error)) completionHandler {

 NSPredicate *queryPredicate = [NSPredicate predicateWithFormat:@"(type = 'com.geopix.entry')"];
 CDTCloudantQuery *query = [[CDTCloudantQuery alloc] initWithPredicate:queryPredicate];

 [self.datastore performQuery:query completionHandler:^(NSArray *results, NSError *error) {

  completionHandler( results, error );
 }];
}

At this point we’ve captured an image, captured the geographic location, saved that data in our local offline store, and used replication to push that data up to the cloud whenever the network is available.

AND…

We did all of this without writing a single line of server-side logic.   Since this is built on top of MobileFirst for Bluemix, all of the backend infrastructure is set up for us, and we get operational analytics to monitor everything that is happening.

With the operational analytics we get:

  • App usage
  • Active Devices
  • Network Usage
  • Authentications
  • Data Storage
  • Device Logs (yes, complete debug/crash logs from devices out in the field)
  • Push Notification Usage

Sharing on the web

Up until this point we haven’t had to write any back-end code. However, the mobile app boilerplate on Bluemix comes with a Node.js server, so we might as well take advantage of it.

I exposed the exact same data captured within the app on the Node.js service, which you can see at http://geopix.mybluemix.net/.

Web UI

The Node.js back end comes preconfigured to leverage the express.js framework for building web applications.  I added the jade template engine and Leaflet for web-mapping, and was able to crank this out ridiculously quickly.
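Wiring up Jade takes just a couple of lines of Express configuration.  Here’s a minimal sketch, assuming the templates live in a views directory and static assets sit under /public (the paths are illustrative, not from the boilerplate):

var express = require('express');
var app = express();

// tell Express where the templates live and which engine renders them
app.set('views', __dirname + '/views');  // map.jade and list.jade
app.set('view engine', 'jade');

// serve the static assets (CSS, etc.) referenced by the templates
app.use('/public', express.static(__dirname + '/public'));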

The first thing we need to do is make sure we have our configuration variables for accessing the Cloudant service from our Node.js app.  These are environment variables that you get automatically if you’re running on Bluemix, but you need to set them for your local dev environment:

var credentials = {};

if (process.env.hasOwnProperty("VCAP_SERVICES")) {
 // Running on Bluemix. Parse out the port and host that we've been assigned.
 var env = JSON.parse(process.env.VCAP_SERVICES);
 var host = process.env.VCAP_APP_HOST;
 var port = process.env.VCAP_APP_PORT;

 credentials = env['cloudantNoSQLDB'][0].credentials;
}
else {

 //for local node.js server instance
 credentials.username = "cloudant username here";
 credentials.password = "cloudant password here";
 credentials.url = "cloudant url here";
}
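With credentials in hand, we can connect to Cloudant.  Here’s a hedged sketch of how the geopix and database variables used in prepareData below might be initialized with the Cloudant Node.js client (the actual app may wire this up differently):

var Cloudant = require('cloudant');

var database = 'geopix';
var cloudant = Cloudant({
  account: credentials.username,
  password: credentials.password
});
var geopix = cloudant.db.use(database); // db handle used in prepareData()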

Next we’ll add our URL/content mappings:

app.get('/', function(req, res){
  prepareData(res, 'map');
});

app.get('/list', function(req, res){
  prepareData(res, 'list');
});

Next you’ll see the logic for querying the Cloudant data store and preparing the data for our UI templates. You can customize this however you want – caching for performance, refactoring for abstraction, or whatever you need. All interactions with Cloudant are powered by the Cloudant Node.js client.

var prepareData = function(res, template) {
 var results = [];

 //create the index if it doesn't already exist
 var sort_index = {name:'sort', type:'json', index:{fields:['sort']}};
 geopix.index(sort_index, function(er, response) {
  if (er) {
   throw er;
  }

  //perform the search
  //we're just pulling back all
  //data captured ("sort" will be numeric)
  var selector = {sort:{"$gt":0}};
  geopix.find({selector:selector, sort:["sort"]}, function(er, result) {
   if (er) {
    throw er;
   }

   //prepare data for template
   for (var x=0; x<result.docs.length; x++) {
    var obj = result.docs[x];

    for (var key in obj._attachments) {
     obj.image = credentials.url + "/" + database + "/" + obj._id +"/" + key;
     break;
    }

    results.push( obj );
   }
   res.render(template, { results:results});
  });
 });
};

After the prepareData method has prepared the data for the UI, the template is rendered by invoking Express’s res.render method (which uses the Jade engine):

res.render(template, { results:results});

This will render whichever template was passed in – I have two: map.jade (the map template) and list.jade (the list template). You can check out the list template below, and see it in action here: http://geopix.mybluemix.net/list

html
  head
    title GeoPix - powered by Bluemix
    link(href='//maxcdn.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap.min.css' rel='stylesheet')
    link(href='/public/css/index.css' rel='stylesheet')
    meta(name="viewport" content="width=device-width, initial-scale=1")
  body
    div(class='well')
      h1 GeoPix - Powered by Bluemix
      p
        a(href='/') Map
        | &nbsp;|&nbsp;
        a(href='/list') List
    div(class="container-fluid")
      each val, index in results
        div(class="col-md-6")
          div(class="panel panel-default")
            div(class="panel-heading")
              h3= val.clientDate
            div(class="panel-body")
              img(src=val.image)
              p= 'latitude: ' + val.latitude + ', longitude: ' + val.longitude + ', altitude: ' + val.altitude

In the map view I used the Leaflet map engine and Open Street Map data, along with the Leaflet Marker Cluster plugin for displaying clustered results.
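For reference, here’s a rough sketch of how that map view could be wired together, assuming leaflet.js and leaflet.markercluster.js are loaded and that results was rendered into the page by the template (the 'map' element id is hypothetical):

var map = L.map('map').setView([39.5, -98.35], 4); // continental US

L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

// cluster nearby captures instead of drawing hundreds of individual pins
var markers = L.markerClusterGroup();
results.forEach(function (doc) {
  markers.addLayer(L.marker([doc.latitude, doc.longitude]));
});
map.addLayer(markers);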

Source Code

You can check out the web interface live at: http://geopix.mybluemix.net/.  If you want to setup the environment on your own, you can grab the complete source code at:

Helpful Links

Ready to start building your own apps on IBM Bluemix?  Just head over to http://bluemix.net and get a free developer trial today!

Updated: Parallax Effects in Hybrid/Web Apps

A while back I wrote about adding parallax effects to your HTML/JS experiences to make them feel a bit richer and closer to a native experience.  I’ve just added this subtle (key word *subtle*) effect to a new project and made a few changes I wanted to share here.

If you are wondering what I am talking about with “parallax effects” – Parallax movement is where objects in the background move at a different rate than objects in the foreground, thus causing the perception of depth.  Read more about it if you’re interested.

First, here’s a quick video of this latest app in action.  It’s a hybrid MobileFirst app, but this technique could be used in any Cordova/PhoneGap/MobileFirst/mobile web app experience.  The key is to keep it subtle and not too much “in your face”, and yes, it is very subtle in this video.  You have to watch closely.

The techniques that I wrote about in the previous post still apply – I’ve just added a bit more to cover more use cases.

First let’s look at the CSS.

body {
    background-image: url('../images/cloud_advisor_bkgd.png');
    background-attachment: fixed;
    background-position: center;
    background-size: auto 120%;
}

This sets the background image and default position.  The distinct change here is that I set the background size to “auto” width and 120% height.  In this case, you can have a huge image that shrinks down to just slightly larger than the window size, or a small image that scales up to a larger window size.  This way you don’t end up with seams in a repeated background or a background that is too big to highlight the parallax effect effectively.

Next let’s take a quick look at the JS involved.

var position = "center";
var lastPosition = "center";
var contentCSS = "";
var body = $("body");
var content = $(".content");
window.suspendAnimation = false;

var xMovement = 15;
var yMovement = 30;
var halfX = xMovement/2;
var halfY = yMovement/2;

window.ondeviceorientation = function(event) {
 var gamma = event.gamma/90;
 var beta = event.beta/180;

 var temp = 0;

 //fix for holding the device upside down
 if ( gamma >= 1 ) {
  gamma = 2 - gamma;
 } else if ( gamma <= -1) {
  gamma = -2 - gamma;
 }

 // shift values/motion based upon device orientation
 switch (window.orientation) {
  case 90:
   temp = gamma;
   gamma = beta;
   beta = temp;
   break;
  case -90:
   temp = -gamma;
   gamma = beta;
   beta = temp;
   break;

 }

 // update positions to be used for CSS
 var yPosition = -yMovement - (beta * yMovement);
 var xPosition = -xMovement - (gamma * xMovement);
 var contentX = (-xMovement - xPosition)/2;
 var contentY = (-yMovement - yPosition)/2;

 // generate css styles
 position = xPosition.toFixed(1) + "px " + yPosition.toFixed(1) + "px";
 contentCSS = "translate3d( " + (contentX.toFixed(1)) + "px, " + (contentY.toFixed(1)) + "px, " + " 0px)";
}

function updateBackground() {

 if (!window.suspendAnimation) {
  if ( position.valueOf() != lastPosition.valueOf() ) {

   body.css( "background-position", position);
   content.css( "-webkit-transform", contentCSS);
   lastPosition = position;
  }
 } else {
  lastPosition = "translate3d(0px, 0px, 0px)";;
 }

 window.requestAnimationFrame(updateBackground);
}

window.requestAnimationFrame(updateBackground);

The html where this would be used would be something like this:

<body>
  <div class="content">put your foreground stuff here.</div>
</body>

There are some subtle but important changes here:

  1. In the requestAnimationFrame loop, it only applies changes *if* there are changes to apply.  This prevents needless calls to apply CSS even if the CSS styles hadn’t changed.  In this, I also truncate the numeric CSS string so that it isn’t reapplying CSS if the position should shift by 0.01 pixels. Side note: If you aren’t using requestAnimationFrame for HTML animations, you should learn about it.
  2. If you used my old code and were holding the device upside down, it wouldn’t work.  Not even a little bit.  This has that fixed (see comments inline above).
  3. This moves the background in CSS, which doesn’t cause browser reflow operations, and moves the foreground content (inside of a div) using translate3d, which also doesn’t cause browser reflow operations.  This helps keep animations smooth and the UX performing optimally.
  4. I also added a global variable to turn parallax on and off very quickly, if you need it.

The result is a faster experience that is more efficient and less of a strain on CPU and battery.  Feel free to test this technique out on your own.

If you use the code above, you can modify the xMovement and yMovement variables to exaggerate the parallax effect.
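For example, to double the motion range, or to pause the effect entirely (say, while a modal is open), using the variables from the snippet above:

xMovement = 30; // default is 15
yMovement = 60; // default is 30

window.suspendAnimation = true;  // pause parallax updates
// ... later ...
window.suspendAnimation = false; // resume on the next animation frame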

MBaaS – IBM Mobile Cloud Services, Bluemix & MobileFirst

MBaaS, or Mobile Backend as a Service, seems to be a particularly hot topic these days. MBaaS generally refers to backend services for mobile applications that provide data storage, user management, push notifications, and other pertinent mobile APIs.

This is more than just “Cloud Services” which more generally refer to a scalable virtual cluster of computing or storage resources.  Bluemix is IBM’s suite of cloud service offerings, and covers lots of use cases:

Bluemix is an open-standards, cloud-based platform for building, managing, and running apps of all types, such as web, mobile, big data, and smart devices. Capabilities include Java, mobile back-end development, and application monitoring, as well as features from ecosystem partners and open source—all provided as-a-service in the cloud.

You can view the full catalog of Bluemix service offerings here.

MBaaS back-ends, in contrast, include services for data management, user management, notifications, and possibly more depending on the provider – all geared towards powering applications on mobile devices.

Why is it a hot topic? MBaaS enables growth of mobile applications with seamless (and virtually endless) scalability, all without having to manage individual systems for the application server, database, identity management, push notifications, or platform-specific services.

I’ve been writing a lot about IBM MobileFirst lately as a seamless API for delivering mobile apps to multiple platforms, though it has been in the context of an on-premise installation.  However, did you know that many of the exact same MobileFirst features are available as MBaaS services on IBM Bluemix?

IBM’s Mobile Cloud Services include device management, user authentication, offline and back-end data storage, push notifications, and operational analytics, and provide APIs for native iOS, native Android, hybrid apps, web apps, and even Node.js clients for custom backend services.


Here’s a bit more detail on what is currently exposed in IBM’s Mobile Cloud Services:

  • Mobile Data – The mobile data service includes a NoSQL database (powered by IBM Cloudant), file storage capabilities, and appropriate management and analytics features to measure the number of calls, storage usage, time/activity, and OS distribution.
  • Push Notifications – The push notification service allows you to easily push data to the right people at the right time on either Apple APNS or Google GCM platforms – all with a single API. Notifications can be sent by either an app or backend system, and can be sent to a single device, or a group of devices based on their tags/subscriptions.  Of course, with appropriate analytics for monitoring activity, distribution, and engagement.
  • Mobile Application Security – The mobile application security service enables you to provision or block any devices and/or users using your application, provides user authentication, and provides analytics for app/device usage, OS distribution, and time/activity.

Before starting development with IBM’s Mobile Cloud Services, be sure to check out the following resources:

… and don’t forget the platform-specific developer guides:

Ready to get started?

Many of these are the exact same features that you can host in your own on-premise IBM MobileFirst Platform Foundation server – the difference is that you don’t have to maintain the infrastructure.  You can scale as needed through the Bluemix cloud offering.

 

IBM MobileFirst & Remote Client Side Logging in Mobile Apps

One of the many popular features of the IBM MobileFirst SDK is the ability to capture client-side logs from mobile devices out in the wild in a central location (on the server).  That means you can capture information from devices *after* you have deployed your app into production.  If you are trying to track down or recreate bugs, this can be incredibly helpful. Let’s say that users on iOS 7.0, specifically on iPhone 4 models, are having an issue.  You can capture device logs at this level of granularity (or at a much broader scope, if you choose).

The logging classes in the MobileFirst Platform Foundation are similar in concept to Log4J.  You have logging classes that you can use to write out trace, debug, info, log, warn, fatal, or error messages.  You can also optionally specify a package name, which is used to identify which code module the debug statements are coming from.  With the package name, you’ll be able to see whether a log message is coming from a user authentication manager, a data receiver, a user interface view, or any other class, based upon how you set up your loggers.  Once the log file reaches the specified buffer size, it is automatically sent to the server.

On the server you can set up log profiles that determine the level of granularity of the messages that are captured.  Let’s say you have 100,000 devices consuming your app.  You can configure the profiles to collect error or fatal messages for every app instance.  However, you probably don’t want to capture complete device logs for every app instance; you can set up the log profiles to only capture complete logs for a specific set of devices.

As an example, take a look at the screenshot below to see how you can setup log collection profiles:

Configuring Log Profiles on the MobileFirst Server

When writing your code, you just need to create a logger instance, then write to the log.

If you’re curious when you might want a trace statement, vs. a log statement, vs. a debug statement, etc… Here is the usage level guidance from the docs:

  • Use TRACE for method entry and exit points.
  • Use DEBUG for method result output.
  • Use LOG for class instantiation.
  • Use INFO for initialization reporting.
  • Use WARN to log deprecated usage warnings.
  • Use ERROR for unexpected exceptions or unexpected network protocol errors.
  • Use FATAL for unrecoverable crashes or hangs.

For hybrid apps, you use the WL.Logger class in JavaScript:

var logger = WL.Logger.create({pkg: 'mynamespace.mymodule'});

logger.trace('trace', 'another message');
logger.debug('debug', [1,2,3], {hello: 'world'});
logger.log('log', 'another message');
logger.info('info', 1, 2, 3);
logger.warn('warn', undefined);
logger.error('error', new Error('oh no'));
logger.fatal('fatal', 'another message');

For native iOS apps, you will use the OCLogger class:

OCLogger *logger = [OCLogger getInstanceWithPackage:@"UserManager"];

[logger trace:@"this is a trace message"];
[logger debug:@"this is a debug message"];
[logger log:@"this is a log message"];
[logger info:@"this is an info message"];
[logger warn:@"this is a warning message"];
[logger error:@"this is an error message"];
[logger fatal:@"this is a fatal message"];

For native Android apps, you will use the com.worklight.common.Logger class:

private final static Logger logger = Logger.getLogger(MyClass.class.getName());

logger.trace("trace message");
logger.debug("debug message");
logger.log("log message");
logger.info("info message");
logger.warn("warn message");
logger.error("error - OH NOES!");
logger.fatal("fatal - Oops, you broke it");

Then on the server, you can go into the analytics dashboard and access complete logs for a device, or search through all client-side logs.  You can filter on application name, app version, log level, package name, environment, device model, and OS version within an optional date range, and search for keywords in the log message itself.

Log Search Results within the MobileFirst Analytics Dashboard

For a complete reference and additional detail, be sure to check out the latest docs on client side logging with the MobileFirst platform.

IBM Watson, Cognitive Computing & Speech APIs

IBM Watson is a cognitive computing platform that you can use to add intelligence and natural language analysis to your own applications.  Watson employs natural language processing, hypothesis generation, and dynamic learning to deliver solutions for natural language question-and-answer services, sentiment analysis, relationship extraction, concept expansion, and language/translation services… and it is available for you to check out with IBM Bluemix cloud services.

Watson won Jeopardy, tackles genetics, creates recipes, and so much more.  It is breaking new ground on a daily basis.

The IBM Watson™ Question Answer (QA) service provides an API that gives you the power of the IBM Watson cognitive computing system. With this service, you can connect to Watson, pose questions in natural language, and receive responses that you can use within your application.

In this post, I’ve hooked the Watson QA Node.js starter project up to the Web Speech API’s speech recognition and speech synthesis capabilities. Using these APIs, you can now have a conversation with Watson: ask any question about healthcare, and see what Watson has to say. Check out the video below to see it in action.

You can check out a live demo at:

Just click on the microphone button, allow access to the system mic, and start talking.  Just a warning: lots of background noise might interfere with the API’s ability to recognize your speech and generate a meaningful transcript.

This demo only supports Google Chrome at the time of writing. You can check out where the Web Speech API is supported at caniuse.com.
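A quick way to check for support before wiring up the UI (note that Chrome exposes the recognition API with a webkit prefix):

if ('webkitSpeechRecognition' in window && 'speechSynthesis' in window) {
  // Web Speech recognition and synthesis are available
} else {
  // hide the mic button and fall back to text-only input
}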

You can check out the full source code for this sample on IBM Jazz Hub (git):

I basically just took the Watson QA Sample Application for Node.js and started playing around with it to see what I could do…

This demo uses the Watson For Healthcare data set, which contains information from HealthFinder.gov, the CDC, the National Heart, Lung, and Blood Institute, the National Institute of Arthritis and Musculoskeletal and Skin Diseases, the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of Neurological Disorders and Stroke, and Cancer.gov.  Just know that this is a beta service/data set – implementing Watson for your own enterprise solutions requires system training and algorithm development for Watson to be able to understand your data.

Using Watson with this dataset, you can ask conditional questions, like:

  • What is X?
  • What causes X?
  • What is the treatment for X?
  • What are the symptoms of X?
  • Am I at risk of X?

Procedure questions, like:

  • What should I expect before X?
  • What should I expect after X?

General health questions, like:

  • What are the benefits of taking aspirin daily?
  • Why do I need to get shots?
  • How do I know if I have food poisoning?

Or, action-related questions, like:

  • How can I quit smoking?
  • What should I do if my child is obese?
  • What can I do to get more calcium?

Watson services are exposed through a RESTful API, and can easily be integrated into an existing application.  For example, here’s a snippet demonstrating how you can consume the Watson QA service inside of a Node.js app:

var parts = url.parse(service_url + '/v1/question/healthcare');
var options = {
  host: parts.hostname,
  port: parts.port,
  path: parts.pathname,
  method: 'POST',
  headers: {
    'Content-Type'  : 'application/json',
    'Accept'        : 'application/json',
    'X-synctimeout' : '30',
    'Authorization' : auth
  }
};

// Create a request to POST to Watson
var watson_req = https.request(options, function(result) {
  result.setEncoding('utf-8');
  var response_string = '';

  result.on('data', function(chunk) {
    response_string += chunk;
  });

  result.on('end', function() {
    var answers = JSON.parse(response_string)[0];
    var response = extend({ 'answers': answers },req.body);
    return res.render('response', response);
  });
});
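The snippet above only sets up the request; the question itself still has to be written as the POST body.  A minimal sketch, assuming questionText holds the user’s question (treat the exact body format as an assumption based on the QA beta examples):

// send the question as JSON and close the request
watson_req.write(JSON.stringify({
  question: { questionText: questionText }
}));
watson_req.end();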

Hooking into the Web Speech API is just as easy (assuming you’re using a browser that implements it – I built this demo using Chrome on OS X). On the client side, you just need to create a SpeechRecognition instance and add the appropriate event handlers.

var recognition = new webkitSpeechRecognition();
recognition.continuous = true;
recognition.interimResults = true;

recognition.onstart = function() { ... };
recognition.onresult = function(event) {

  var result = event.results[event.results.length - 1];
  var transcript = result[0].transcript;

  // then do something with the transcript
  search( transcript );
};
recognition.onerror = function(event) { ... };
recognition.onend = function() { ... };
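Then kick recognition off from your UI – something like this, where the button id is hypothetical (the browser will prompt the user for microphone access):

document.getElementById('micButton').addEventListener('click', function() {
  recognition.lang = 'en-US'; // match the language you expect to transcribe
  recognition.start();
});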

To make your app talk back to you (synthesize speech), you just need to create a new SpeechSynthesisUtterance object, and pass it into the window.speechSynthesis.speak() function. You can add event listeners to handle speech events, if needed.

var msg = new SpeechSynthesisUtterance( tokens[i] ); 

msg.onstart = function (event) {
    console.log('started speaking');
};

msg.onend = function (event) {
    console.log('stopped speaking');
};

window.speechSynthesis.speak(msg);

Check out these articles on HTML5Rocks.com for more detail on Speech Recognition and Speech Synthesis.

Here are those links again…

You can get started with Watson services for Bluemix at https://console.ng.bluemix.net/#/store/cloudOEPaneId=store