
JavaScript All The Things – Or – Why You Should Pay Attention To JavaScript

This post is inspired by all the comments I’ve seen this week about JS in the enterprise. I would have never imagined this 10 years ago, but JavaScript is now pretty much ubiquitous. Here are a few reasons why you need to pay attention to JavaScript if you aren’t already, and why you should definitely not write it off.

First, I think one of the major reasons for JavaScript’s ubiquity is that JavaScript is approachable. It is relatively easy for beginners to learn JavaScript, and powerful enough for advanced users to build complex and reliable systems.

Second, and this is why you need to pay attention: JavaScript is everywhere.


You can now use JavaScript to develop on virtually any platform: client side applications, server side logic, embedded chips/IoT devices, build scripts and dependency management, and more.

This doesn’t mean you’ll use the exact same code in every case, rather that you can use the same skill set – JavaScript Development – to deliver solutions across multiple paradigms.

The Client Side

JavaScript can be used to power client side apps/user interfaces, and user interactions on numerous platforms and devices.

Web

Of course JavaScript powers the web; this is a given. JavaScript is the primary scripting language for all web browsers. I won’t focus on this much because it’s already well known.

Mobile

JavaScript can also be used to power mobile applications that are natively installed on a device.

  1. Apache Cordova/PhoneGap – You can build natively installed apps with web technology using PhoneGap or Cordova. PhoneGap is Adobe’s branded distribution of Cordova, but from the developer’s perspective, they are basically the same thing. Your app runs within a web view on the mobile device, and you build your user interface the same way you build a dynamic web application: it is implemented in HTML, styled with CSS, and all interactivity is created with JavaScript (see the sketch after this list).
  2. React Native – JavaScript-powered apps don’t have to live inside of a web view. The React Native framework gives developers the ability to write their application using JavaScript and declarative UI elements, and results in a native application running on the mobile device. The logic is interpreted JavaScript at runtime, but everything that the user interacts with (all UI elements) is 100% native, providing a very high quality user experience. It is now available for both iOS and Android applications.
  3. Unity 3D – You can even develop rich & immersive 3D simulation or gaming experiences entirely powered by JavaScript using the Unity 3D engine. These can target web, desktop, or mobile, but the engine is often used in mobile gaming.
  4. NativeScript – A framework for building cross-platform native iOS, Android and Windows mobile apps using JavaScript.
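
To make the Cordova model concrete, here’s a minimal sketch of the JavaScript side of a Cordova app. It assumes the standard deviceready lifecycle event, the cordova-plugin-device plugin, and a hypothetical status element in the app’s HTML:

// wait for Cordova's native bridge to load before touching device APIs
document.addEventListener('deviceready', function () {
    // from here on, it's plain DOM scripting, just like a web page
    var status = document.getElementById('status');
    // the global 'device' object comes from cordova-plugin-device
    status.textContent = 'Running on ' + device.platform;
}, false);
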
Desktop

Yup, desktop apps are not left out of the mix. Most desktop solutions fall into a category similar to Apache Cordova, where the end result is a web view that has access to lower level APIs, and whose content is developed with web based technology.

  1. Electron – Node.js + Chromium desktop app container from GitHub
  2. app.js – Node + Chromium desktop app container
  3. nw.js – Another Node + Chromium desktop app container
  4. CEF – The Chromium Embedded Framework – a framework for embedding the guts of the Chrome browser inside of a desktop app.

… and more… I know Microsoft has a solution for building Windows apps purely out of HTML/JS, and there are more solutions out there that I am forgetting.
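
To give a flavor of the Electron approach, here’s a minimal sketch of a main process script, loosely following Electron’s quick start from this era (index.html is a placeholder for your own UI):

// main.js: the Electron "main process"
var app = require('app');                      // application lifecycle
var BrowserWindow = require('browser-window'); // native window hosting Chromium

var mainWindow = null;

app.on('ready', function () {
  // the "desktop app" is really a Chromium window rendering your web content
  mainWindow = new BrowserWindow({ width: 800, height: 600 });
  mainWindow.loadUrl('file://' + __dirname + '/index.html');
});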

In fact, some of my favorite desktop tools, such as Slack, Atom, and VS Code, are actually based on web technology and implemented in HTML/JS. Heck, even Photoshop can be scripted and extended with the Generator extensibility layer, or have a customized user interface in HTML/JS with Design Spaces.

The Server Side

Most obviously, Node.js – a JavaScript runtime built on Chrome’s V8 JavaScript engine – has made huge inroads into server side development and the enterprise. Node.js, powered by frameworks like Express.js or LoopBack.io, makes server side development of complex enterprise apps with JavaScript possible.
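
For a sense of how lightweight that can be, here’s a minimal Express sketch (the route and port are arbitrary):

var express = require('express');
var app = express();

// a single JSON endpoint
app.get('/api/status', function (req, res) {
  res.json({ status: 'ok', uptime: process.uptime() });
});

app.listen(3000);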

IoT

Pretty much everything that doesn’t fall in the categories above falls in here. You can develop headless apps that run on Arduino, Raspberry Pi or other small boards completely using JavaScript, you can manage infrastructure and information flow of IoT sensors using JavaScript, you can write on-chip programs for embedded systems using JavaScript, you can control robots with it, and you can even power media-centric connected TV experiences using JavaScript.
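
As one concrete example, the Johnny-Five library lets you drive an Arduino from Node.js. A minimal sketch, assuming a board connected over USB and running the standard Firmata firmware:

var five = require('johnny-five');
var board = new five.Board();

board.on('ready', function () {
  // blink an LED on pin 13 every half second, all from JavaScript
  var led = new five.Led(13);
  led.blink(500);
});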

Like I said… It’s everywhere.

Ecosystem

It’s not just about where you can build and run JavaScript for your applications. JavaScript has a massive and thriving developer ecosystem.

JavaScript is the #1 most active language on GitHub in both the total number of active repositories and total number of active pushes/commits.

 

statistics visualization from http://githut.info/

Here are some stats that show the magnitude of growth and adoption for Node.js/npm alone. NPM stats currently show a total of 186,946 packages available for download, 94,978,032 package downloads in the last day, and 2,451,734,737 package downloads in the last month.

NPM Statistics

 

Node.js adoption is massive, and is still growing.

This doesn’t mean that JavaScript is the best language at everything. It also doesn’t mean that you can take a single piece of source code and run it in every device/context imaginable.

It means that you can use your skills in JavaScript to develop for just about any kind of device/context out there. It’s not going to be write once, run everywhere, rather in the words of the React.js team: learn once, write everywhere.

IBM Acquires StrongLoop – Leveling Up Node.js in the Enterprise

Today IBM announced the acquisition of StrongLoop, Inc., leaders in enterprise development on Node.js and major contributors to Express, LoopBack, and other Node.js tools and frameworks.


Node.js is an incredible tool for rapidly building highly performant and scalable back end systems, and you develop with a familiar core language that most front-end developers are already accustomed to: JavaScript. This acquisition is positioned to greatly enhance Node.js in the enterprise, and StrongLoop’s offerings will be integrated into IBM Bluemix, IBM MobileFirst, and WebSphere.

Even though the acquisition is still “hot off the presses”, you can start using these tools together today.

You can read more about this acquisition and the future vision between IBM and StrongLoop on the StrongLoop blog, IBM Bluemix Blog, and IBM MobileFirst Blog.

If you haven’t heard about StrongLoop’s LoopBack framework, it enables you to easily connect and expose your data as REST services. It provides the ability to visually create data models in a graphical (or command line) interface, which are used to automatically generate REST APIs – thus generating CRUD operations for your REST services tier, without having to write any code.
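
To make that concrete, here’s a minimal sketch of what a LoopBack model definition looks like: a hypothetical Note model that would live in common/models/note.json in a LoopBack project:

{
  "name": "Note",
  "base": "PersistedModel",
  "properties": {
    "title": { "type": "string", "required": true },
    "content": { "type": "string" }
  }
}

Point that model at a data source in server/model-config.json and LoopBack generates the full set of CRUD REST endpoints (e.g. GET/POST /api/Notes) without any handler code.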

Why is this important?

It makes API development easier and drastically reduces time from concept to implementation.  If you haven’t yet looked at the LoopBack framework, you should definitely check it out.  You can build API layers for your apps literally in minutes.  Check out the video below for a quick introduction:

Again, be sure to check out the posts on the StrongLoop and IBM blogs mentioned above that detail the integration steps, so you can start using these tools together today.

 

 

IBM Watson QA + Speech Recognition + Speech Synthesis = A Conversation With Your Computer

Back in November I released a demo application here on my blog showing the IBM Watson QA Service for cognitive/natural language computing connected to the Web Speech API in Google Chrome to have real conversational interaction with a web application.  It’s a nice demo, but it always drove me nuts that it only worked in Chrome.  Last month the IBM Watson team released 5 new services, and guess what… Speech Recognition and Speech Synthesis are included!

These two services enable you to quickly add Text-To-Speech or Speech-To-Text capability to any application.  What’s a better way to show them off than by updating my existing app to leverage the new speech services?

So here it is: watsonhealthqa.mybluemix.net!

By leveraging the Watson services it can now run in any browser that supports getUserMedia (for speech recognition) and HTML5 <Audio> (for speech playback).

(Full source code available at the bottom of this post)

You can check out a video of it in action below:

If your browser doesn’t support the getUserMedia API or HTML5 <Audio>, then your mileage may vary.  You can check where these features are supported with these links: <Audio> and getUserMedia.

Warning: This is targeting desktop browsers – HTML5 Audio is a mess on mobile devices due to limited codec support and immature APIs.
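
If you want to guard against unsupported browsers yourself, a quick feature check does the trick (a minimal sketch covering the vendor-prefixed getUserMedia variants of this era):

var hasGetUserMedia = !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia || navigator.msGetUserMedia);
var hasHtml5Audio = !!document.createElement('audio').canPlayType;

if (!hasGetUserMedia || !hasHtml5Audio) {
  // fall back to a text-only experience
}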

So how does this all work?

Just like the QA service, the new Text To Speech and Speech To Text services are now available in IBM Bluemix, so you can create a new application that leverages any of these services, or you can add them to any existing application.

I simply added the Text To Speech and Speech To Text services to my existing Healthcare QA application that runs on Bluemix:

IBM Bluemix Dashboard

 

These services are available via a REST API. Once you’ve added them to your application, you can consume them easily within any of your applications.

I updated the code from my previous example in two ways: 1) to take advantage of the Watson Node.js Wrapper, which makes interacting with Watson a lot easier, and 2) to take advantage of the new speech services.

Watson Node.js Wrapper

Using the Watson Node.js Wrapper, you can now easily instantiate Watson services in a single line of code.  For example:

var watson = require('watson-developer-cloud');
var question_and_answer_healthcare = watson.question_and_answer(QA_CREDENTIALS);
var speechToText = watson.speech_to_text(STT_CREDENTIALS);

The credentials come from your environment configuration; then you just create instances of whichever services you want to consume.
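
On Bluemix those credentials arrive through the VCAP_SERVICES environment variable. Here’s a rough sketch of how something like QA_CREDENTIALS might be pulled out; the 'question_and_answer' key is an assumption that depends on how your service instance is bound:

var services = JSON.parse(process.env.VCAP_SERVICES || '{}');

// adjust the key to match your bound service instance
var QA_CREDENTIALS = services.question_and_answer ?
    services.question_and_answer[0].credentials :
    { username: '<user>', password: '<pass>', url: '<service url>' }; // local dev placeholders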

QA Service

The code for consuming a service is now much simpler than the previous version.  When you want to submit a question to the Watson QA service, you can now simply call the “ask” method on the QA service instance.

Below is my server-side code from app.js that accepts a POST submission from the browser, delegates the question to Watson, and takes the result and renders HTML using a Jade template. See the Getting Started Guide for the Watson QA Service to learn more about the wrappers for Node or Java.

// Handle the form POST containing the question
app.post('/ask', function(req, res){

    // delegate to Watson
    question_and_answer_healthcare.ask({ text: req.body.questionText}, function (err, response) {
        if (err)
            console.log('error:', err);
        else {
          // merge the answer data with the submitted form data for the template
          var templateData = extend({ 'answers': response[0] }, req.body);

          // render the template to HTML and send it to the browser
          return res.render('response', templateData);
        }
    });
});

Compare this to the previous version, and you’ll quickly see that it is much simpler.

Speech Synthesis

At this point, we already have a functional service that can take natural language text, submit it to Watson,  and return a search result as text.  The next logical step for me was to add speech synthesis using the Watson Text To Speech Service (TTS).  Again, the Watson Node Wrapper and Watson’s REST services make this task very simple.  On the client side you just need to set the src of an <audio> instance to the URL for the TTS service:

<audio controls="" autoplay="" src="/synthesize?text=The text that should generate the audio goes here"></audio>
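
Since the text rides along as a query parameter, you can also drive the audio element from JavaScript (a small sketch; the 'player' element id is hypothetical):

function speak(text) {
  var audio = document.getElementById('player');
  audio.src = '/synthesize?text=' + encodeURIComponent(text);
  audio.play();
}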

On the server you just need to synthesize the audio from the data in the URL query string.  Here’s an example how to invoke the text to speech service directly from the Watson TTS sample app:

var textToSpeech = watson.text_to_speech(credentials);

// handle get requests
app.get('/synthesize', function(req, res) {

  // make the request to Watson to synthesize the audio file from the query text
  var transcript = textToSpeech.synthesize(req.query);

  // set content-disposition header if downloading the
  // file instead of playing directly in the browser
  transcript.on('response', function(response) {
    console.log(response.headers);
    if (req.query.download) {
      response.headers['content-disposition'] = 'attachment; filename=transcript.ogg';
    }
  });

  // pipe results back to the browser as they come in from Watson
  transcript.pipe(res);
});

The Watson TTS service supports .ogg and .wav file formats.  I modified this sample so it is set up to use only .ogg files.  On the client side, these are played using the HTML5 <audio> tag.

Speech Recognition

Now that we’re able to process natural language and generate speech, that last part of the solution is to recognize spoken input and turn it into text.  The Watson Speech To Text (STT) service handles this for us.  Just like the TTS service, the Speech To Text service also has a sample app, complete with source code to help you get started.

This service uses the browser’s getUserMedia (streaming) API with socket.io on Node to stream the data back to the server with minimal latency. The best part is that you don’t have to set any of this up on your own; just leverage the code from the sample app. Note: the getUserMedia API isn’t supported everywhere, so be advised.

On the client side you just need to create an instance of the SpeechRecognizer class in JavaScript and handle the result:

var recognizer = new SpeechRecognizer({
  ws: '',
  model: 'WatsonModel'
});

recognizer.onresult = function(data) {

    //get the transcript from the service result data
    var result = data.results[data.results.length-1];
    var transcript = result.alternatives[0].transcript;

    // do something with the transcript
    search( transcript, result.final );
}

On the server, you need to create an instance of the Watson Speech To Text service and set up handlers for the POST request to receive the audio stream.

// modules needed by this handler (normally required once at the top of app.js)
var fs = require('fs');
var watson = require('watson-developer-cloud');

// create an instance of the speech to text service
var speechToText = watson.speech_to_text(STT_CREDENTIALS);

// Handle audio stream processing for speech recognition
app.post('/', function(req, res) {
    var audio;

    if(req.body.url && req.body.url.indexOf('audio/') === 0) {
        // sample audio stream
        audio = fs.createReadStream(__dirname + '/../public/' + req.body.url);
    } else {
        // malformed url
        return res.status(500).json({ error: 'Malformed URL' });
    }

    // use Watson to generate a text transcript from the audio stream
    speechToText.recognize({audio: audio, content_type: 'audio/l16; rate=44100'}, function(err, transcript) {
        if (err)
            return res.status(500).json({ error: err });
        else
            return res.json(transcript);
    });
});

Source Code

You can interact with a live instance of this application at watsonhealthqa.mybluemix.net, and complete client and server side code is available at github.com/triceam/IBMWatson-QA-Speech.

Just set up your Bluemix app, clone the sample code, run npm install, and deploy your app to Bluemix using the Cloud Foundry CLI.
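
Assuming you have the Cloud Foundry CLI installed and are logged in to your Bluemix account, the whole deployment boils down to a few commands (the app name is a placeholder):

git clone https://github.com/triceam/IBMWatson-QA-Speech.git
cd IBMWatson-QA-Speech
npm install
cf push <your-app-name>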

Helpful Links

GeoPix: A sample iOS app powered by IBM MobileFirst for Bluemix

In this post I’d like to show a fairly simple application that I put together, which shows off some of the rich capabilities you get out of the box with IBM MobileFirst for Bluemix, all with a minimal amount of your own developer effort.  Bluemix, of course, is IBM’s platform-as-a-service offering.

GeoPix is a sample application leveraging IBM MobileFirst for Bluemix to capture data and images on a mobile device, persist that data locally (offline), and replicate that data to the cloud. Since it’s built with IBM MobileFirst, we get lots of things out of the box, including operational analytics, user authentication, and much more.

(full source code at the bottom of this post)

Here’s what the application currently does:

  • User can take a picture or select an image from the device
  • App captures geographic location when the image is captured
  • App saves both the image and metadata to a local data store on the device
  • App uses asynchronous replication to automatically save any data in local store up to the remote store whenever the network is available
  • Oh yeah, can’t forget, the user auth is via Facebook
  • MobileFirst provides all the analytics we need.  Bluemix provides the cloud based server and Cloudant NoSQL data store.
  • All captured data is available on a web based front-end powered by Node.js

Here’s a video of it in action:

… and you can check out the web interface at geopix.mybluemix.net.

(full source code at the bottom of this post)

This is powered by the iOS 8 MobileFirst application boilerplate on Bluemix.  With this application template you can have your backend infrastructure set up within minutes, and it includes:

  • User authentication
  • Usage/operational analytics
  • Cloudant NoSQL DB
  • Simplified Push Notifications
  • Node.js backend

In this sample I’m using everything but the Push Notifications service: user authentication, the Cloudant DB (offline/local store and remote/cloud store), and the Node.js backend.  You get the operational analytics automatically.

To get started, you just need to create a new iOS 8 mobile application on Bluemix.  See my video series on Getting Started with IBM MobileFirst for Bluemix for a complete walkthrough of creating a new app using MobileFirst for Bluemix, or check out the Getting Started Guide in the official docs.

You need to initialize your app and make sure you have set up the Facebook identity provider.  You can create your Facebook authentication at https://developers.facebook.com/.  Once the user is authenticated, the client app is fully functional.

The app UI is very simple, basically just two buttons for capturing images (the last captured image shows up in the background):

App’s main UI

There’s also a gallery for viewing local images:

Local gallery view

Capturing Location

Capturing data is very straightforward.  The geographic location is captured using Apple’s Core Location framework.  We just need to implement the CLLocationManagerDelegate protocol:

- (void)locationManager:(CLLocationManager *)manager
     didUpdateLocations:(NSArray *)locations {

    self.currentLocation = [locations lastObject];
    NSDate* eventDate = self.currentLocation.timestamp;
    NSTimeInterval howRecent = [eventDate timeIntervalSinceNow];
    // use fabs() for the double-valued NSTimeInterval (abs() would truncate to an int)
    if (fabs(howRecent) < 15.0) {
        // if the event is recent, update the on-screen coordinates
        locationLabel.text = [NSString stringWithFormat:@" Lat: %+.5f, Lon: %+.5f\n",
            self.currentLocation.coordinate.latitude,
            self.currentLocation.coordinate.longitude];
    }
}

Then initialize CLLocationManager using our class as the location manager’s delegate:

if (self.locationManager == nil)
  self.locationManager = [[CLLocationManager alloc] init];
self.locationManager.delegate = self;
self.locationManager.desiredAccuracy = kCLLocationAccuracyBest;
self.locationManager.pausesLocationUpdatesAutomatically = YES;

// on iOS 8 you must also request authorization (with the matching
// NSLocationWhenInUseUsageDescription key in Info.plist) and start updates
[self.locationManager requestWhenInUseAuthorization];
[self.locationManager startUpdatingLocation];

Capturing Images

Capturing images from the device is also very straightforward.  In the app I leverage Apple’s UIImagePickerController to allow the user to either upload an existing image or capture a new image.  See the presentImagePicker and didFinishPickingMediaWithInfo methods below. All of this is standard practice using Apple’s developer SDK:

- (void) presentImagePicker:(UIImagePickerControllerSourceType) sourceType {
 if ( sourceType == UIImagePickerControllerSourceTypeCamera  && ![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
  [logger logErrorWithMessages:@"device has no camera"];
  UIAlertView *myAlertView = [[UIAlertView alloc] initWithTitle:@"Error"
                 message:@"Device has no camera"
                delegate:nil
             cancelButtonTitle:@"OK"
             otherButtonTitles: nil];
  [myAlertView show];
 }

 if ( sourceType != UIImagePickerControllerSourceTypeCamera || [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera] ){
  UIImagePickerController *picker = [[UIImagePickerController alloc] init];
  picker.delegate = self;
  picker.allowsEditing = NO;
  picker.sourceType = sourceType;

  [self presentViewController:picker animated:YES completion:NULL];
 }
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {

 [logger logDebugWithMessages:@"didFinishPickingMediaWithInfo"];
 UIImage *image = info[UIImagePickerControllerOriginalImage];
 currentImage.image = image;
 [[DataManager sharedInstance] saveImage:image withLocation:self.currentLocation];
 [picker dismissViewControllerAnimated:YES completion:nil];
}

Persisting Data

If you notice, in the didFinishPickingMediaWithInfo method above there is a call to the DataManager’s saveImage:withLocation: method. This is where we save data locally and rely on Cloudant’s replication to automatically save data from the local data store up to the Cloudant NoSQL database.  This is powered by the iOS 8 Data service from Bluemix.

The first thing that we need to do is initialize the local and remote data stores. Below is the init method from my DataManager class: the local data store is initialized first, then the remote data store. If either data store already exists, the existing store is used; otherwise it is created.

-(id) init {
 self = [super init];

 if ( self ) {
  logger = [IMFLogger loggerForName:NSStringFromClass([self class])];
  [logger logDebugWithMessages:@"initializing local datastore 'geopix'..."];

  // initialize an instance of the IMFDataManager
  self.manager = [IMFDataManager sharedInstance];

  NSError *error = nil;
  //create a local data store
  self.datastore = [self.manager localStore:@"geopix" error:&error];

  if (error) {
   [logger logErrorWithMessages:@"Error creating local data store %@",error.description];
  }

  //create a remote data store
  [self.manager remoteStore:@"geopix" completionHandler:^(CDTStore *store, NSError *error) {
   if (error) {
    [logger logErrorWithMessages:@"Error creating remote data store %@",error.description];
   } else {
    [self.manager setCurrentUserPermissions:DB_ACCESS_GROUP_MEMBERS forStoreName:@"geopix" completionHander:^(BOOL success, NSError *error) {
     if (error) {
      [logger logErrorWithMessages:@"Error setting permissions for user with error %@",error.description];
     }

     [self replicate];
    }];
   }
  }];

  //start replication
  [self replicate];
 }

 return self;
} 

Once the data stores are created, you can see that the replicate method is invoked.  This starts up the replication process to automatically push changes from the local data store to the remote data store “in the cloud”.

Therefore, if you’re collecting data when the app is offline, you have nothing to worry about.  All of the data will be stored locally and pushed up to the cloud whenever you’re back online, all with no additional effort on your part.  When using replication with the Cloudant SDK, you just have to start the replication process and let it do its thing… fire and forget.

In my replicate function, I set up a CDTPushReplication for pushing changes to the remote data store.  You could also set up two-way replication to automatically pull new changes from the remote store.

-(void) replicate {
 if ( self.replicator == nil ) {
  [logger logDebugWithMessages:@"attempting replication to remote datastore..."];

  __block NSError *replicationError;
  CDTPushReplication *push = [self.manager pushReplicationForStore: @"geopix"];
  self.replicator = [self.manager.replicatorFactory oneWay:push error:&replicationError];
  if(replicationError){
   // Handle error
   [logger logErrorWithMessages:@"An error occurred: %@", replicationError.localizedDescription];
  }

  self.replicator.delegate = self;

  replicationError = nil;
  [logger logDebugWithMessages:@"starting replication"];
  [self.replicator startWithError:&replicationError];
  if(replicationError){
   [logger logErrorWithMessages:@"An error occurred: %@", replicationError.localizedDescription];
  }else{
   [logger logDebugWithMessages:@"replication start successful"];
  }
 }
 else {
  [logger logDebugWithMessages:@"replicator already running"];
 }
}

Once we’ve set up the remote and local data stores and replication, we are ready to save the data that we’re capturing within our app.

Next is my saveImage:withLocation: method.  Here you can see that it creates a new CDTMutableDocumentRevision object (a generic document object for the Cloudant NoSQL database) and populates it with the location data and timestamp.  It then creates a JPG image from the UIImage (passed in from the UIImagePicker above) and adds the JPG as an attachment to the document revision.  Once the document is created, it is saved to the local data store.  We then let replication take care of persisting this data to the back end.

-(void) saveImage:(UIImage*)image withLocation:(CLLocation*)location {

 [logger logDebugWithMessages:@"saveImage withLocation"];

 //save in background thread
 dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^(void) {

  [logger logDebugWithMessages:@"creating document..."];

  NSDate *now = [NSDate date];
  NSString *dateString = [NSDateFormatter localizedStringFromDate:now
                 dateStyle:NSDateFormatterShortStyle
                 timeStyle:NSDateFormatterFullStyle];

  // Create a document
  CDTMutableDocumentRevision *rev = [CDTMutableDocumentRevision revision];
  rev.body = @{
      @"sort": [NSNumber numberWithDouble:[now timeIntervalSince1970]],
      @"clientDate": dateString,
      @"latitude": [NSNumber numberWithFloat:location.coordinate.latitude],
      @"longitude": [NSNumber numberWithFloat:location.coordinate.longitude],
      @"altitude": [NSNumber numberWithFloat:location.altitude],
      @"course": [NSNumber numberWithFloat:location.course],
      @"type": @"com.geopix.entry"
      };

  [logger logDebugWithMessages:@"creating image attachment..."];

  NSDate *date = [NSDate date];
  NSString *imageName = [NSString stringWithFormat:@"image%f.jpg", [date timeIntervalSince1970]];

  NSString *tempDirectory = NSTemporaryDirectory();
  NSString *imagePath = [tempDirectory stringByAppendingPathComponent:imageName];

  [logger logDebugWithMessages:@"saving image to temporary location: %@", imagePath];

  NSData *imageData = UIImageJPEGRepresentation(image, 0.1);
  [imageData writeToFile:imagePath atomically:YES];

  CDTUnsavedFileAttachment *att1 = [[CDTUnsavedFileAttachment alloc]
            initWithPath:imagePath
            name:imageName
            type:@"image/jpeg"];

  rev.attachments = @{ imageName: att1 };

  [self.datastore save:rev completionHandler:^(id savedObject, NSError *error) {
   if(error) {
    [logger logErrorWithMessages:@"Error creating document: %@", error.localizedDescription];
   } else {
    [logger logDebugWithMessages:@"Document created: %@", savedObject];
   }
  }];

  [self replicate];
 });
}

If we want to query data from either the remote or local data stores, we can just use the performQuery method on the data store. Below you can see a method for retrieving data for all of the images in the local data store.

-(void) getLocalData:(void (^)(NSArray *results, NSError *error)) completionHandler {

 NSPredicate *queryPredicate = [NSPredicate predicateWithFormat:@"(type = 'com.geopix.entry')"];
 CDTCloudantQuery *query = [[CDTCloudantQuery alloc] initWithPredicate:queryPredicate];

 [self.datastore performQuery:query completionHandler:^(NSArray *results, NSError *error) {

  completionHandler( results, error );
 }];
}

At this point we’ve captured an image, captured the geographic location, saved that data in our local offline store, and let replication save that data up to the cloud whenever the network is available.

AND…

We did all of this without writing a single line of server-side logic.   Since this is built on top of MobileFirst for Bluemix, all the backend infrastructure is set up for us, and we get operational analytics to monitor everything that is happening.

With the operational analytics we get:

  • App usage
  • Active Devices
  • Network Usage
  • Authentications
  • Data Storage
  • Device Logs (yes, complete debug/crash logs from devices out in the field)
  • Push Notification Usage

Sharing on the web

Up until this point we haven’t had to write any back-end code. However, the mobile app boilerplate on Bluemix comes with a Node.js server, so we might as well take advantage of it.

I exposed the exact same data captured within the app on the Node.js service, which you can see at http://geopix.mybluemix.net/.

Web UI

The Node.js back end comes preconfigured to leverage the Express.js framework for building web applications.  I added the Jade template engine and Leaflet for web mapping, and was able to crank this out ridiculously quickly.

The first thing we need to do is make sure we have our configuration variables for accessing the Cloudant service from our Node app.  These are environment variables that you get automatically if you’re running on Bluemix, but you need to set them for your local dev environment:

var credentials = {};

if (process.env.hasOwnProperty("VCAP_SERVICES")) {
 // Running on Bluemix. Parse out the port and host that we've been assigned.
 var env = JSON.parse(process.env.VCAP_SERVICES);
 var host = process.env.VCAP_APP_HOST;
 var port = process.env.VCAP_APP_PORT;

 credentials = env['cloudantNoSQLDB'][0].credentials;
}
else {

 //for local node.js server instance
 credentials.username = "cloudant username here";
 credentials.password = "cloudant password here";
 credentials.url = "cloudant url here";
}

Next we’ll add our URL/content mappings:

app.get('/', function(req, res){
  prepareData(res, 'map');
});

app.get('/list', function(req, res){
  prepareData(res, 'list');
});

Next you’ll see the logic for querying the Cloudant data store and preparing the data for our UI templates. You can customize this however you want: caching for performance, refactoring for abstraction, or whatever you need. All interactions with Cloudant are powered by the Cloudant Node.js Client:

//assumes the Cloudant client was initialized earlier, e.g.:
//  var database = 'geopix';
//  var geopix = require('cloudant')(credentials.url).db.use(database);
var prepareData = function(res, template) {
 var results = [];

 //create the index if it doesn't already exist
 var sort_index = {name:'sort', type:'json', index:{fields:['sort']}};
 geopix.index(sort_index, function(er, response) {
  if (er) {
   throw er;
  }

  //perform the search
  //we're just pulling back all
  //data captured ("sort" will be numeric)
  var selector = {sort:{"$gt":0}};
  geopix.find({selector:selector, sort:["sort"]}, function(er, result) {
   if (er) {
    throw er;
   }

   //prepare data for template
   for (var x=0; x<result.docs.length; x++) {
    var obj = result.docs[x];

    for (var key in obj._attachments) {
     obj.image = credentials.url + "/" + database + "/" + obj._id +"/" + key;
     break;
    }

    results.push( obj );
   }
   res.render(template, { results:results});
  });
 });
};

After the prepareData method has prepared the data for the UI, the template is rendered by invoking Express’s res.render method, which delegates to the Jade engine:

res.render(template, { results:results});

This will render whichever template was passed in – I have two: map.jade (the map template) and list.jade (the list template). You can check out the list template below, and see it in action here: http://geopix.mybluemix.net/list

html
  head
    title GeoPix - powered by Bluemix
    link(href='//maxcdn.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap.min.css' rel='stylesheet')
    link(href='/public/css/index.css' rel='stylesheet')
    meta(name="viewport" content="width=device-width, initial-scale=1")
  body
    div(class='well')
      h1 GeoPix - Powered by Bluemix
      p
        a(href='/') Map
        | &nbsp;|&nbsp;
        a(href='/list') List
    div(class="container-fluid")
      each val, index in results
        div(class="col-md-6")
          div(class="panel panel-default")
            div(class="panel-heading")
              h3= val.clientDate
            div(class="panel-body")
              img(src=val.image)
              p= 'latitude: ' + val.latitude + ", longitude:" + val.longitude + ", altitude:" + val.altitude

In the map view I used the Leaflet map engine and Open Street Map data, along with the Leaflet Marker Cluster plugin for displaying clustered results.
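
The client-side mapping code follows the standard Leaflet pattern. A minimal sketch of clustered markers (the tile URL and data shape are illustrative):

var map = L.map('map').setView([39.74, -104.99], 4);

L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

// markerClusterGroup comes from the Leaflet.markercluster plugin
var markers = L.markerClusterGroup();
results.forEach(function (doc) {
  markers.addLayer(L.marker([doc.latitude, doc.longitude]));
});
map.addLayer(markers);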

Source Code

You can check out the web interface live at: http://geopix.mybluemix.net/.  If you want to set up the environment on your own, you can grab the complete source code at:

Helpful Links

Ready to start building your own apps on IBM Bluemix?  Just head over to http://bluemix.net and get a free developer trial today!