Tag Archives: JavaScript

IBM Watson, Cognitive Computing & Speech APIs

IBM Watson is a cognitive computing platform that you can use to add intelligence and natural language analysis to your own applications. Watson employs natural language processing, hypothesis generation, and dynamic learning to deliver solutions for natural language question and answer services, sentiment analysis, relationship extraction, concept expansion, and language/translation services. And it is available for you to check out with IBM Bluemix cloud services.

Watson won Jeopardy, tackles genetics,  creates recipes, and so much more.  It is breaking new ground on a daily basis.

The IBM Watson™ Question Answer (QA) service provides an API that gives you the power of the IBM Watson cognitive computing system. With this service, you can connect to Watson, pose questions in natural language, and receive responses that you can use within your application.

In this post, I’ve hooked the Watson QA node.js starter project to the Web Speech API speech recognition and speech synthesis APIs. Using these APIs, you can now have a conversation with Watson. Ask any question about healthcare, and see what Watson has to say. Check out the video below to see it in action.

You can check out a live demo at:

Just click on the microphone button, allow access to the system mic, and start talking. One warning: lots of background noise might interfere with the API’s ability to recognize & generate a meaningful transcript.

This demo only supports Google Chrome at the time of writing. You can check where Web Speech is supported at caniuse.com.
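Since support varies by browser, it’s worth feature-detecting before wiring up the mic button. A minimal sketch (the helper name is mine, not from the demo):

```javascript
// Returns true when the given window-like object exposes both halves of
// the Web Speech API: recognition (prefixed in Chrome) and synthesis.
function supportsWebSpeech(win) {
  var hasRecognition = 'SpeechRecognition' in win || 'webkitSpeechRecognition' in win;
  var hasSynthesis = 'speechSynthesis' in win;
  return hasRecognition && hasSynthesis;
}
```

In the page itself you’d call `supportsWebSpeech(window)` and only show the microphone button when it returns true.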

You can check out the full source code for this sample on IBM Jazz Hub (git):

I basically just took the Watson QA Sample Application for Node.js and started playing around with it to see what I could do…

This demo uses the Watson For Healthcare data set, which contains information from HealthFinder.gov, the CDC, National Heart, Lung, and Blood Institute, National Institute of Arthritis and Musculoskeletal and Skin Diseases, National Institute of Diabetes and Digestive and Kidney Diseases, National Institute of Neurological Disorders and Stroke, and Cancer.gov. Just know that this is a beta service/data set – implementing Watson for your own enterprise solutions requires system training and algorithm development for Watson to be able to understand your data.

Using Watson with this dataset, you can ask conditional questions, like:

  • What is X?
  • What causes X?
  • What is the treatment for X?
  • What are the symptoms of X?
  • Am I at risk of X?

Procedure questions, like:

  • What should I expect before X?
  • What should I expect after X?

General health questions, like:

  • What are the benefits of taking aspirin daily?
  • Why do I need to get shots?
  • How do I know if I have food poisoning?

Or, action-related questions, like:

  • How can I quit smoking?
  • What should I do if my child is obese?
  • What can I do to get more calcium?

Watson services are exposed through a RESTful API, and can easily be integrated into an existing application.  For example, here’s a snippet demonstrating how you can consume the Watson QA service inside of a Node.js app:

// Node.js core modules used below
var url   = require('url');
var https = require('https');

var parts = url.parse(service_url + '/v1/question/healthcare');
var options = {
  host: parts.hostname,
  port: parts.port,
  path: parts.pathname,
  method: 'POST',
  headers: {
    'Content-Type'  : 'application/json',
    'Accept'        : 'application/json',
    'X-synctimeout' : '30',
    'Authorization' : auth
  }
};

// Create a request to POST to Watson
var watson_req = https.request(options, function(result) {
  result.setEncoding('utf-8');
  var response_string = '';

  result.on('data', function(chunk) {
    response_string += chunk;
  });

  result.on('end', function() {
    var answers = JSON.parse(response_string)[0];
    var response = extend({ 'answers': answers }, req.body);
    return res.render('response', response);
  });
});

// Write the question body and complete the request, otherwise it never fires
// (the questionText field name follows the QA service's question format)
watson_req.write(JSON.stringify({ question: { questionText: req.body.questionText } }));
watson_req.end();

Hooking into the Web Speech API is just as easy (assuming you’re using a browser that implements the Web Speech API – I built this demo using Chrome on OS X). On the client side, you just need to create a SpeechRecognition instance and add the appropriate event handlers.

var recognition = new webkitSpeechRecognition();
recognition.continuous = true;
recognition.interimResults = true;

recognition.onstart = function() { ... }
recognition.onresult = function(event) {
  var result = event.results[event.results.length - 1];
  var transcript = result[0].transcript;

  // then do something with the transcript
  search( transcript );
};
recognition.onerror = function(event) { ... }
recognition.onend = function() { ... }

// start listening (for example, from the microphone button's click handler)
recognition.start();

To make your app talk back to you (synthesize speech), you just need to create a new SpeechSynthesisUtterance object, and pass it into the window.speechSynthesis.speak() function. You can add event listeners to handle speech events, if needed.

var msg = new SpeechSynthesisUtterance( tokens[i] ); 

msg.onstart = function (event) {
    console.log('started speaking');
};

msg.onend = function (event) {
    console.log('stopped speaking');
};

window.speechSynthesis.speak(msg);
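The `tokens[i]` above implies the answer text has already been broken into utterance-sized chunks before being spoken. Here’s one way that chunking might look – the split rule is my own illustration, not the demo’s actual code:

```javascript
// Split long answer text into sentence-sized chunks: grab runs of
// characters ending in ., ?, or ! (or the end of the string).
function tokenizeForSpeech(text) {
  return (text.match(/[^.?!]+[.?!]?/g) || []).map(function (s) {
    return s.trim();
  });
}

// Queue one utterance per chunk; the browser speaks them in order.
function speakTokens(tokens) {
  tokens.forEach(function (token) {
    var msg = new SpeechSynthesisUtterance(token);
    window.speechSynthesis.speak(msg);
  });
}
```

Speaking sentence-by-sentence also gives you natural points to attach `onstart`/`onend` handlers, e.g. to highlight the sentence currently being read.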

Check out these articles on HTML5Rocks.com for more detail on Speech Recognition and Speech Synthesis.

Here are those links again…

You can get started with Watson services for Bluemix at https://console.ng.bluemix.net/#/store/cloudOEPaneId=store

So, What is IBM MobileFirst?

I’m still “the new guy” on the MobileFirst team here at IBM, and right away I’ve been asked by peers outside of IBM: “So, what exactly is MobileFirst/Worklight?  Is it just for hybrid apps?”

In this post I’ll try to shed some light on IBM MobileFirst, and for starters, it is a lot more than just hybrid apps.


IBM MobileFirst Platform is a suite of products that enable you to efficiently build and deliver mobile applications for your enterprise, and is composed of three parts:

IBM MobileFirst Platform Foundation

IBM MobileFirst Platform Foundation (formerly known as Worklight Foundation) is a platform for building mobile applications for the enterprise.  It is a suite of tools and services available either on-premise or in the cloud, which enable you to rapidly build, administer, and monitor secure applications.

The MobileFirst Platform Foundation consists of:

  1. MobileFirst Server – the middleware tier that provides a gateway between back-end systems and services and the mobile client applications.  The server enables application authentication, data endpoints/services, data optimization and transformation, push notification management (streamlined API for all platforms), consolidated logging, and app/services analytics. For development purposes, the MobileFirst server is available as either part of the MobileFirst Studio (discussed below), or as command line tools.

  2. MobileFirst API – both client- and server-side APIs for developing and managing your enterprise mobile applications.
    • The server-side API enables you to expose data adapters to your mobile applications – these adapters could be consuming data from SQL databases, REST or SOAP Services, or JMS data sources. The Server side API also provides a built-in security framework, unified push notifications (across multiple platforms), and data translation/transformation services. You can leverage the server-side API in JavaScript, or dig deeper and use the Java implementation.
    • The client-side API is available for native iOS (Objective-C), native Android (Java), J2ME, native Windows Phone (C#), and JavaScript for cross-platform hybrid or mobile-web applications. For the native implementations, this includes user authentication, encrypted storage, push notifications, logging, geo-notifications, data access, and more.  For hybrid applications, it includes everything from the native API, plus cross-platform native UI components and platform-specific application skinning.  With the hybrid development approach, you can even push updates to your applications that are live, out on devices, without having to push an update through an app store.  Does the hybrid approach leverage Apache Cordova?  YES.

  3. MobileFirst Studio – an optional all-inclusive development environment for developing enterprise apps on the MobileFirst platform.  This is based on the Eclipse platform, and includes an integrated server, development environment, facilities to create and test all data adapters/services, a browser-based hybrid app simulator, and the ability to generate platform-specific applications for deployment.  However, using the studio is not required! Try to convince a native iOS (Xcode) developer that they have to use Eclipse, and tell me how that goes for you… :)  If you don’t want to use the all-inclusive studio, no problem.  You can use the command line tools (CLI).  The CLI provides a command line interface for managing the MobileFirst server, creating data adapters, creating the encrypted JSON store, and more.

  4. MobileFirst Console – the console provides a dashboard and management portal for everything happening within your MobileFirst applications.  You can view which APIs and adapters have been deployed, set app notifications, manage or disable your apps, report on connected devices and platforms, monitor push notifications, view analytics information for all services and adapters exposed through the MobileFirst server, and manage remote collection of client app logs.  All together, an extremely powerful set of features for monitoring and managing your applications.

  5. MobileFirst Application Center – a tool to make sharing mobile apps easier within an organization.  Basically, it’s an app store for your enterprise.
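To make the server-side adapter idea from item 2 a bit more concrete, a JavaScript HTTP adapter procedure roughly follows this shape – a sketch based on the Worklight adapter API, where the procedure name and back-end path are hypothetical:

```javascript
// Build the invocation descriptor for an HTTP adapter call. The descriptor
// shape (method/returnedContentType/path) follows the Worklight HTTP
// adapter API; the back-end path itself is made up for illustration.
function buildAccountRequest(accountId) {
  return {
    method: 'get',
    returnedContentType: 'json',
    path: 'api/accounts/' + encodeURIComponent(accountId)
  };
}

// The adapter procedure invoked by name from the client application;
// the MobileFirst Server proxies the back-end REST call.
function getAccountFeed(accountId) {
  return WL.Server.invokeHttp(buildAccountRequest(accountId));
}
```

The client then invokes `getAccountFeed` through the client-side API, and the server handles authentication, transformation, and logging around the call.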

MobileFirst Platform Application Scanning

MobileFirst Platform Application Scanning is a set of tools that can scan your JavaScript, HTML, Objective-C, or Java code for security vulnerabilities and coding best practices.  Think of it as a security layer in your software development lifecycle.


MobileFirst Quality Assurance

MobileFirst Quality Assurance is a set of tools and features to help provide quality assurance to your mobile applications.  It includes automated crash analytics, user feedback and sentiment analysis, in-app bug reporting, over-the-air build distribution to testers, test/bug prioritization, and more.


So, is MobileFirst/Worklight just for hybrid (HTML/JS) apps? You tell me… if you need clarification or more information, please re-read this post and follow all the links.  ;)

 

DevNexus 2014 PhoneGap Presentations

I’ve just wrapped up my presentations for this year’s DevNexus event in Atlanta – it has been a great event, filled with tons of information on web, mobile, and back-end development. I had three sessions on PhoneGap: one intro, one advanced, and a mobile frameworks panel.

Below are my presentations.  I didn’t record them this time, since they were being recorded by the conference organizers, so expect to see a video once they’re released.

Just press the space bar, or use the arrow keys to view the presentation in your browser.

Getting Started with PhoneGap and Cross Platform Mobile Development

View Presentation …


(Lesson learned: never make changes to your presentation/environment after midnight when you have the first session of the day – it will always bite you)

Designing & Architecting for PhoneGap & the Mobile Web

View Presentation …


Enjoy, and feel free to reach out with any questions!

PhoneGap Presentations from HTML5DevConf

I was searching the web earlier this week for an older presentation from a few months back, and just happened to stumble across my recent presentations from HTML5DevConf this past October. Looks like the videos were posted in November, but I’m just seeing them now. I had two sessions: Designing and Architecting PhoneGap and Mobile Web Apps and Getting Started with PhoneGap and Cross-Platform Mobile Development, and if you weren’t able to attend them, you’re still in luck! Here are the videos from those sessions:

Designing and Architecting PhoneGap and Mobile Web Apps

Tired of Hello World? In this session, we explore best practices to build real-world PhoneGap applications. We investigate the Single Page Architecture, HTML templates, effective Touch events, performance techniques, modularization and more. We also compare and contrast the leading JavaScript and Mobile Frameworks. This session is a must if you plan to build a PhoneGap application that has more than a couple of screens.

Getting Started with PhoneGap and Cross-Platform Mobile Development

Unfortunately, I ran into network issues which prevented some of my samples from working in this one, but you’ll still be able to get the point.

HTML has emerged as a powerful alternative to “native” to enable cross-platform mobile application development. In this session, you learn how to leverage your existing HTML and JavaScript skills to build cross-platform mobile applications, how to access the device features (camera, accelerometer, contacts, file system, etc) using JavaScript APIs, and how to package your HTML application as a native app for distribution through the different app stores.

You can also check out highlights from HTML5DevConf and find my presentation assets and materials online here.

More Device Motion Experiments with HTML & Adobe DPS

To follow up on my last post on 3D parallax effects in HTML & Adobe DPS, I’ve decided to release some of the other experiments that I’ve been exploring with device motion in DPS publications. Check out the video below to see two new samples, and a corrected version of the strawberries example from my last post (the plants were going the wrong way in the last post).

All three of these samples leverage the same basic technique for responding to device motion inside of a DPS publication. The motion-interactive components are implemented using HTML and JavaScript, and are included in publications as web content overlays. In JavaScript, they take advantage of the ondeviceorientation event handler to respond to the physical orientation of the device.

In all three samples, the web content overlay is set to autoplay, with user interaction disabled. This way the HTML & JavaScript automatically loads and the scripting is active, but it doesn’t block interaction or gestures for DPS navigation. I also enabled “Scale Content To Fit” so that HTML content scales appropriately between retina and non-retina devices.

Adobe InDesign – Web Content Overlay Options

Strawberries

The strawberries sample is identical to the one from my previous post. This is just a capture of the updated motion. You can access the full source project to this sample at:
https://github.com/triceam/DPS-HTML-Samples/tree/master/strawberries


Adobe San Francisco

The Adobe/inline content example is implemented in the same manner as the strawberries example. The large city image is a two-layer composition created with Adobe Edge Animate. The foreground building and flag move independently from the background image. I used Photoshop to separate the content into layers and made them animate based on device orientation in the exact same fashion as the strawberries sample. All of the text and image content surrounding the cityscape panorama is laid out with InDesign.


You can check out the Adobe Edge Animate project at:
https://github.com/triceam/DPS-HTML-Samples/tree/master/adobe%20roof

AT&T Park/San Francisco Giants

The AT&T Park/San Francisco Giants example is implemented with basic HTML and JavaScript; no additional tools were used to create this interactive scenario. The content on the left-hand side was all laid out with InDesign. The content on the right side is the interactive HTML.


The image used in this example is a vertical panorama captured from a remote control helicopter. This image contains various perspectives that have been composited in Photoshop. The motion of the device is aligned to match the perspectives in the image/viewport; when the device is facing down, the image is looking down, and when the device is vertical, the image faces forward. You can check out the vertical panorama image below. If you’re interested in creating a vertical panorama, be sure to check out this tutorial from Russell Brown.

Vertical Panorama over AT&T Park

The HTML and JavaScript used in this example is fairly minimal. The image is applied as the background of the root HTML <body> element, and the position of the background is shifted based upon the device motion event. This approach keeps the HTML DOM as flat and simple as possible.

Here’s the HTML that makes up this example:

<html>
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="initial-scale=1, maximum-scale=1, user-scalable=no">
        <title>Parallax Vertical Background</title>
        <link rel="stylesheet" href="assets/styles.css">
        <script src="assets/zepto.js"></script>
    </head>
    <body></body>
</html>

… and the CSS styles used to apply the background image.

body {
    background-image:url('att_park_vert_panorama.jpg');
    background-position: center;
}

… and the JavaScript used to shift the position based on the device orientation event. Note: this also uses the Zepto JavaScript library.

window.ondeviceorientation = function(event) {
  var gamma = event.gamma / 90;
  var beta = event.beta / 180;
  var temp = 0;

  // shift values/motion based upon device orientation
  switch (window.orientation) {
    case 90:
      temp = gamma;
      gamma = beta;
      beta = temp;
      break;
    case -90:
      temp = -gamma;
      gamma = beta;
      beta = temp;
      break;
  }

  // update positions to be used for CSS
  var yPosition = 1200 - (beta * 2200);
  var xPosition = -200 + (gamma * 300);
  xPosition = Math.max( -300, Math.min( 0, xPosition ));
  yPosition = -Math.max( 100, Math.min( 1400, yPosition ));
  //console.log(xPosition, yPosition);

  // apply css styles
  var css = xPosition + "px " + yPosition + "px";
  $("body").css( "background-position", css );
}
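If you want to tune or sanity-check the clamping, the position math from the handler is easy to pull out into a pure function – the function name here is mine, but the arithmetic matches the handler above:

```javascript
// Compute background-position offsets from normalized tilt values
// (beta is event.beta/180, gamma is event.gamma/90, portrait orientation).
// The x offset is clamped to [-300, 0] and the y offset to [-1400, -100].
function backgroundPosition(beta, gamma) {
  var yPosition = 1200 - (beta * 2200);
  var xPosition = -200 + (gamma * 300);
  xPosition = Math.max(-300, Math.min(0, xPosition));
  yPosition = -Math.max(100, Math.min(1400, yPosition));
  return { x: xPosition, y: yPosition };
}
```

The clamps keep the background from sliding past the edges of the panorama no matter how far the device is tilted.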

Textual content used in this example from: http://en.wikipedia.org/wiki/At%26t_park

Source code for the device motion in this example is available at: https://github.com/triceam/DPS-HTML-Samples/tree/master/ATT%20Park

All of the HTML, CSS and JavaScript code used for these examples is available in the GitHub repository at: https://github.com/triceam/DPS-HTML-Samples