
IBM Watson, Cognitive Computing & Speech APIs

IBM Watson is a cognitive computing platform that you can use to add intelligence and natural language analysis to your own applications. Watson employs natural language processing, hypothesis generation, and dynamic learning to deliver solutions for natural language question-and-answer services, sentiment analysis, relationship extraction, concept expansion, and language/translation services. And it is available for you to check out with IBM Bluemix cloud services.

Watson won Jeopardy!, tackles genetics, creates recipes, and so much more. It is breaking new ground on a daily basis.

The IBM Watson™ Question Answer (QA) service provides an API that gives you the power of the IBM Watson cognitive computing system. With this service, you can connect to Watson, pose questions in natural language, and receive responses that you can use within your application.

In this post, I’ve hooked the Watson QA Node.js starter project up to the Web Speech API’s speech recognition and speech synthesis features. Using these APIs, you can now have a conversation with Watson: ask any question about healthcare, and see what Watson has to say. Check out the video below to see it in action.

You can check out a live demo at:

Just click on the microphone button, allow access to the system mic, and start talking. One warning: lots of background noise might interfere with the API’s ability to recognize your speech and generate a meaningful transcript.

This demo supports only Google Chrome at the time of writing. You can check out where the Web Speech API is supported at caniuse.com.

You can check out the full source code for this sample on IBM Jazz Hub (git):

I basically just took the Watson QA Sample Application for Node.js and started playing around with it to see what I could do…

This demo uses the Watson For Healthcare data set, which contains information from HealthFinder.gov, the CDC, the National Heart, Lung, and Blood Institute, the National Institute of Arthritis and Musculoskeletal and Skin Diseases, the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of Neurological Disorders and Stroke, and Cancer.gov.  Just know that this is a beta service/data set – implementing Watson for your own enterprise solutions requires system training and algorithm development for Watson to be able to understand your data.

Using Watson with this dataset, you can ask conditional questions, like:

  • What is X?
  • What causes X?
  • What is the treatment for X?
  • What are the symptoms of X?
  • Am I at risk of X?

Procedure questions, like:

  • What should I expect before X?
  • What should I expect after X?

General health questions, like:

  • What are the benefits of taking aspirin daily?
  • Why do I need to get shots?
  • How do I know if I have food poisoning?

Or, action-related questions, like:

  • How can I quit smoking?
  • What should I do if my child is obese?
  • What can I do to get more calcium?

Watson services are exposed through a RESTful API, and can easily be integrated into an existing application.  For example, here’s a snippet demonstrating how you can consume the Watson QA service inside of a Node.js app:

// Core modules used by the sample; extend is the sample's object-merge helper.
// service_url and auth (a Basic auth header) come from the bound Bluemix
// service credentials, and req/res are the Express route's request/response.
var url   = require('url');
var https = require('https');

var parts = url.parse(service_url + '/v1/question/healthcare');
var options = {
  host: parts.hostname,
  port: parts.port,
  path: parts.pathname,
  method: 'POST',
  headers: {
    'Content-Type'  : 'application/json',
    'Accept'        : 'application/json',
    'X-synctimeout' : '30',
    'Authorization' : auth
  }
};

// Create a request to POST to Watson
var watson_req = https.request(options, function(result) {
  result.setEncoding('utf-8');
  var response_string = '';

  // Buffer the response body as it streams in
  result.on('data', function(chunk) {
    response_string += chunk;
  });

  // Once complete, parse the answers and render the response view
  result.on('end', function() {
    var answers = JSON.parse(response_string)[0];
    var response = extend({ 'answers': answers }, req.body);
    return res.render('response', response);
  });
});

// Write the JSON question payload and send the request
watson_req.write(JSON.stringify(req.body));
watson_req.end();

Hooking into the Web Speech API is just as easy (assuming you’re using a browser that implements it – I built this demo using Chrome on OS X). On the client side, you just need to create a SpeechRecognition instance and add the appropriate event handlers.

var recognition = new webkitSpeechRecognition();
recognition.continuous = true;      // keep listening until explicitly stopped
recognition.interimResults = true;  // emit partial results while the user speaks

recognition.onstart = function() { ... }
recognition.onresult = function(event) {

  // grab the transcript from the most recent result
  var result = event.results[event.results.length - 1];
  var transcript = result[0].transcript;

  // then do something with the transcript
  search( transcript );
};
recognition.onerror = function(event) { ... }
recognition.onend = function() { ... }

// start listening (in the demo, this is triggered by the microphone button)
recognition.start();

To make your app talk back to you (synthesize speech), you just need to create a new SpeechSynthesisUtterance object, and pass it into the window.speechSynthesis.speak() function. You can add event listeners to handle speech events, if needed.

// tokens[i] is one chunk of Watson's answer text to be spoken
var msg = new SpeechSynthesisUtterance( tokens[i] );

msg.onstart = function (event) {
    console.log('started speaking');
};

msg.onend = function (event) {
    console.log('stopped speaking');
};

window.speechSynthesis.speak(msg);
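
In the demo, Watson’s answer text is broken into smaller chunks before being spoken – that’s where the tokens array above comes from. Here’s a minimal sketch of that idea, assuming answerText holds the answer string (the sentence-splitting rule is illustrative, not the demo’s exact logic):

// Split the answer into sentence-sized chunks, then queue each one for speech.
// speechSynthesis.speak() queues utterances and speaks them in order.
var answerText = 'Aspirin can lower the risk of heart attack. Talk to your doctor first.'; // hypothetical answer
var tokens = answerText.match(/[^.?!]+[.?!]?/g) || [ answerText ];

tokens.forEach(function (token) {
  var msg = new SpeechSynthesisUtterance( token );
  window.speechSynthesis.speak( msg );
});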

Check out these articles on HTML5Rocks.com for more detail on Speech Recognition and Speech Synthesis.

Here are those links again…

You can get started with Watson services for Bluemix at https://console.ng.bluemix.net/#/store/cloudOEPaneId=store

So, What is IBM MobileFirst?

I’m still “the new guy” on the MobileFirst team here at IBM, and right away I’ve been asked by peers outside of IBM: “So, what exactly is MobileFirst/Worklight?  Is it just for hybrid apps?”

In this post I’ll try to shed some light on IBM MobileFirst, and for starters, it is a lot more than just hybrid apps.


IBM MobileFirst Platform is a suite of products that enable you to efficiently build and deliver mobile applications for your enterprise, and is composed of three parts:

IBM MobileFirst Platform Foundation

IBM MobileFirst Platform Foundation (formerly known as Worklight Foundation) is a platform for building mobile applications for the enterprise.  It is a suite of tools and services available either on-premise or in the cloud, which enable you to rapidly build, administer, and monitor secure applications.

The MobileFirst Platform Foundation consists of:

  1. MobileFirst Server – the middleware tier that provides a gateway between back-end systems and services and the mobile client applications.  The server enables application authentication, data endpoints/services, data optimization and transformation, push notification management (streamlined API for all platforms), consolidated logging, and app/services analytics. For development purposes, the MobileFirst server is available as either part of the MobileFirst Studio (discussed below), or as command line tools.

  2. MobileFirst API - both client and server-side APIs for developing and managing your enterprise mobile applications.
    • The server-side API enables you to expose data adapters to your mobile applications – these adapters can consume data from SQL databases, REST or SOAP services, or JMS data sources. The server-side API also provides a built-in security framework, unified push notifications (across multiple platforms), and data translation/transformation services. You can leverage the server-side API in JavaScript, or dig deeper and use the Java implementation (see the adapter sketch after this list).
    • The client-side API is available for native iOS (Objective-C), native Android (Java), J2ME, native Windows Phone (C#), and JavaScript for cross-platform hybrid or mobile-web applications. For the native implementations, this includes user authentication, encrypted storage, push notifications, logging, geo-notifications, data access, and more.  For hybrid applications, it includes everything from the native API, plus cross-platform native UI components and platform-specific application skinning.  With the hybrid development approach, you can even push updates to applications that are live, out on devices, without having to push an update through an app store.  Does the hybrid approach leverage Apache Cordova?  YES.

  3. MobileFirst Studio - an optional all-inclusive development environment for developing enterprise apps on the MobileFirst platform.  This is based on the Eclipse platform, and includes an integrated server, development environment, facilities to create and test all data adapters/services, a browser-based hybrid app simulator, and the ability to generate platform-specific applications for deployment.  However, using the studio is not required! Try to convince a native iOS (Xcode) developer that they have to use Eclipse, and tell me how that goes for you… :)  If you don’t want to use the all-inclusive studio, no problem.  You can use the command line tools (CLI).  The CLI provides a command line interface for managing the MobileFirst server, creating data adapters, creating the encrypted JSON store, and more.

  4. MobileFirst Console – the console provides a dashboard and management portal for everything happening within your MobileFirst applications.  You can view which APIs and adapters have been deployed, set app notifications, manage or disable your apps, report on connected devices and platforms, monitor push notifications, view analytics information for all services and adapters exposed through the MobileFirst server, and manage remote collection of client app logs.  All together, an extremely powerful set of features for monitoring and managing your applications.

  5. MobileFirst Application Center - a tool to make sharing mobile apps easier within an organization.  Basically, it’s an app store for your enterprise.
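
To make the adapter idea a bit more concrete, here’s a rough sketch of a JavaScript HTTP adapter procedure and its client-side invocation. The adapter name (healthNews), procedure name (getStories), and backend path are made up for illustration, and the WL.Server/WL.Client calls follow the general shape of the Worklight/MobileFirst JavaScript APIs – treat this as a sketch, not a drop-in implementation.

// Server side (adapter implementation, e.g. healthNews-impl.js):
// a procedure that proxies a JSON feed from a backend service.
function getStories() {
    var input = {
        method : 'get',
        returnedContentType : 'json',
        path : 'api/stories'   // hypothetical backend endpoint
    };
    return WL.Server.invokeHttp(input);
}

// Client side (hybrid/JavaScript app): invoke the adapter procedure.
WL.Client.invokeProcedure({
    adapter : 'healthNews',
    procedure : 'getStories',
    parameters : []
}, {
    onSuccess : function (response) {
        console.log('stories', response.invocationResult);
    },
    onFailure : function (error) {
        console.log('adapter call failed', error);
    }
});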

MobileFirst Platform Application Scanning

MobileFirst Platform Application Scanning is a set of tools that can scan your JavaScript, HTML, Objective-C, or Java code for security vulnerabilities and coding best practices.  Think of it as a security layer in your software development lifecycle.


MobileFirst Quality Assurance

MobileFirst Quality Assurance is a set of tools and features to help provide quality assurance to your mobile applications.  It includes automated crash analytics, user feedback and sentiment analysis, in-app bug reporting, over-the-air build distribution to testers, test/bug prioritization, and more.


So, is MobileFirst/Worklight just for hybrid (HTML/JS) apps? You tell me… if you need clarification or more information, please re-read this post and follow all the links.  ;)

 

Video: Data Visualization With Web Standards

Last week I had the opportunity to present “Data Visualization With Web Standards” to the Data Visualization New York Meetup group.  There was a great turnout, and thanks to everyone who attended.  I’d like to especially thank Christian Lilley and Paul Trowbridge for organizing the event.

My presentation focused on the fundamental techniques of visualizing data within HTML/JS experiences.  You can view my presentation in its entirety below.  Slides and bullet points are below the fold…

Entire meetup video available here.

My slides are available below.  Just press the space bar to advance to the next “slide”.


Key Points

Basically, there are 5 general ways to visualize data using web-standards techniques – here is a brief overview with pros & cons:


<img>

You can embed images that contain server-rendered data visualizations using the HTML <img> tag. This is nothing new… they are very basic, but will certainly work.

  • Not interactive
  • Requires online & round-trip to server
  • No “WOW” factor – let’s face it, they are boring
  • Example: Google Image Charts
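
To make that concrete, here’s a minimal sketch that embeds a server-rendered chart image by pointing an <img> at a charting-service URL. The URL parameters follow the Google Image Charts style mentioned above and are illustrative only:

// Embed a server-rendered pie chart by setting an <img> src to a chart URL
// (parameters are illustrative, in the Google Image Charts style).
var img = document.createElement('img');
img.src = 'https://chart.googleapis.com/chart' +
          '?cht=p3' +        // chart type: 3D pie
          '&chs=250x100' +   // image size
          '&chd=t:60,40' +   // data values
          '&chl=Yes|No';     // slice labels
img.alt = 'Survey results';
document.body.appendChild(img);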

HTML5 <canvas>

You can use the HTML5 <canvas> element to programmatically render content based upon data in-memory using JavaScript. The HTML5 Canvas provides you with an API for rendering graphical content via moveTo or lineTo instructions, or by setting individual pixel values manually.  Learn more about the HTML5 canvas from the MDN tutorials.

  • Can be interactive
  • Dynamic – client side rendering with JavaScript
  • Hardware accelerated on some platforms
  • Can work offline
  • Works in newer browsers: http://caniuse.com/#search=canvas
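
As a quick illustration, here’s a minimal sketch that plots an in-memory data array as a line using moveTo/lineTo (the canvas id and data values are made up for the example):

// Plot a simple line chart from an in-memory data array
var data   = [3, 7, 4, 9, 6, 12];                // sample values
var canvas = document.getElementById('chart');   // assumes <canvas id="chart" width="300" height="150">
var ctx    = canvas.getContext('2d');

var stepX = canvas.width / (data.length - 1);
var maxY  = Math.max.apply(null, data);

ctx.beginPath();
data.forEach(function (value, i) {
  var x = i * stepX;
  var y = canvas.height - (value / maxY) * canvas.height;
  if (i === 0) { ctx.moveTo(x, y); } else { ctx.lineTo(x, y); }
});
ctx.strokeStyle = '#0066cc';
ctx.lineWidth = 2;
ctx.stroke();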

Demos:


Scalable Vector Graphics (SVG)

SVG is a declarative XML-based markup language that is used to create vector graphics content, and can be used to create visual content inside of web experiences.

  • Client or Server-side rendering
  • Can be static or dynamic
  • Can be scripted with JS
  • Can be manipulated via HTML DOM
  • Works in newer browsers (but not on Android 2.x and earlier): http://caniuse.com/#search=SVG
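
For example, here’s a minimal sketch that builds a tiny SVG bar chart by scripting the DOM (the container id and data values are made up for the example):

// Build a tiny SVG bar chart by creating elements in the SVG namespace
var svgNS = 'http://www.w3.org/2000/svg';
var data  = [40, 90, 65];                          // sample values

var svg = document.createElementNS(svgNS, 'svg');
svg.setAttribute('width', 160);
svg.setAttribute('height', 100);

data.forEach(function (value, i) {
  var rect = document.createElementNS(svgNS, 'rect');
  rect.setAttribute('x', i * 50);
  rect.setAttribute('y', 100 - value);             // bars grow up from the bottom
  rect.setAttribute('width', 40);
  rect.setAttribute('height', value);
  rect.setAttribute('fill', '#0066cc');
  svg.appendChild(rect);
});

document.getElementById('viz').appendChild(svg);   // assumes a <div id="viz"> container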

Demos:


HTML DOM Elements

Visualizations like interactive maps or simple charts can be created purely with HTML structures and creative use of CSS styles to control position, visual presentation, etc… You can use CSS positioning to control x/y placement, and percentage-based width/height to display relative values based upon a range of data. For example, a bar chart/table can be created purely using HTML DIV containers with CSS styles, as in the sketch below.
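
Here’s a minimal sketch of that approach, with bars built out of plain DIVs and percentage-based widths derived from the data (the container id and data values are made up for the example):

// Render a horizontal bar chart out of plain DIVs with percentage widths
var data = [
  { label: 'Chrome',  value: 62 },
  { label: 'Firefox', value: 24 },
  { label: 'Other',   value: 14 }
];
var max = Math.max.apply(null, data.map(function (d) { return d.value; }));
var container = document.getElementById('chart');  // assumes a <div id="chart"> container

data.forEach(function (d) {
  var row = document.createElement('div');
  row.textContent = d.label + ' (' + d.value + ')';
  row.style.width = (d.value / max * 100) + '%';    // relative value as a percentage width
  row.style.background = '#0066cc';
  row.style.color = '#fff';
  row.style.margin = '2px 0';
  row.style.padding = '2px 6px';
  container.appendChild(row);
});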

Samples:


WebGL

WebGL is on the “bleeding edge” of interactive graphics & data visualization across the web. WebGL enables hardware-accelerated 3D graphics inside the browser experience. Technically, it is not a standard, and there is varied and/or incomplete support across different browsers (http://caniuse.com/#search=webgl).  There is also considerable debate about whether it will ever be a standard; however, there are some incredible samples out on the web worth mentioning:

Feel free to leave a comment with any questions.
Enjoy!