Tag Archives: Watson

Interview: Gathering & analyzing data with drones & IBM Bluemix

Here’s an interview that I recently did with IBM DeveloperWorks TV at the recent World of Watson conference. In it I discuss a project I’ve been working on that analyzes drone imagery to perform automatic damage detection using the Watson Visual Recognition service, and generates 3D models from the drone images using photogrammetry processes. The best part – the entire thing runs in the cloud on IBM Bluemix.

It leverages the IBM Watson Visual Recognition service with custom classifiers to detect the presence of hail damage on shingled roofs, Cloudant for metadata/record storage, the IBM Cloud Object Storage cross-region S3 API for massively scalable & distributed image/model/asset storage, and Bare Metal servers for high performance computing.
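To make the division of labor concrete, here is a hypothetical sketch of the kind of per-image metadata record such a pipeline might keep in Cloudant, with the heavy image and model assets referenced by their Cloud Object Storage keys. All field names here are illustrative, not the app's actual schema:

```javascript
// Hypothetical per-image metadata record (illustrative field names only):
// Cloudant holds the lightweight JSON, while the large assets (images,
// 3D models) live in Cloud Object Storage and are referenced by key.
var record = {
  _id: 'image-0042',
  capturedAt: '2016-10-24T15:30:00Z',
  gps: { lat: 30.27, lon: -97.74, altitude: 65.2 },
  objectStorageKey: 'flights/flight-17/image-0042.jpg',
  analysis: {
    classifier: 'hail-damage',   // a custom Visual Recognition classifier
    damageDetected: true,
    confidence: 0.87
  }
};

console.log(JSON.stringify(record, null, 2));
```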

Bare Metal servers are dedicated machines in the cloud: not shared, and not virtualized. I’ve got mine set up as a Linux server with 24 cores (48 threads), 64 GB of RAM, an SSD RAID array, multiple GPUs, etc… and it cut my photogrammetry rendering from hours on my laptop down to merely 10 minutes (in my opinion the best part).

I’ve done all of my testing with DJI Phantom and DJI Inspire aircraft, but really, it could work with any images, from any camera that has embedded GPS information.
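On the metadata point: EXIF stores GPS coordinates as degree/minute/second values plus a hemisphere reference, while photogrammetry tools generally want signed decimal degrees. A minimal conversion sketch in JavaScript (the sample coordinates are made up):

```javascript
// Convert EXIF-style GPS (degrees, minutes, seconds + hemisphere ref)
// into the signed decimal degrees photogrammetry tools expect.
function dmsToDecimal(degrees, minutes, seconds, ref) {
  var decimal = degrees + minutes / 60 + seconds / 3600;
  // Southern and western hemispheres are negative
  return (ref === 'S' || ref === 'W') ? -decimal : decimal;
}

console.log(dmsToDecimal(30, 16, 12, 'N')); // latitude
console.log(dmsToDecimal(97, 44, 24, 'W')); // longitude
```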

Check out the video to see it in action.

Drones, Bots, Cognitive Apps, Image Recognition, Motion Analysis, and Photogrammetry (or, what I’ve been up to lately)

It’s been a while since I’ve posted here on the blog…  In fact, I just did the math, and it’s been over 7 months. Lots of things have happened since, I’ve moved to a new team within IBM, built new developer tools, worked directly with clients on their solutions, worked on a few high profile keynotes, built apps for kinetic motion and activity tracking, built a mobile client for a chat bot, and even completed some new drone projects.  It’s been exciting to say the least, but the real reason I’m writing this post is to share a few of the public projects I’ve been involved with from recent conferences.

I recently returned from Gartner Symposium and IBM’s annual World of Watson conference, and it’s been one of the busiest, yet most exciting, two-week stretches I’ve experienced in quite a while.

At both events, we showed a project I’ve been working on with IBM’s Global Business Services team that focuses on the use of small consumer drones and drone imagery to transform Insurance use cases. In particular, it leverages IBM Watson to automatically detect roof damage, in conjunction with photogrammetry to create 3D reconstructions and generate measurements of afflicted areas, expediting and automating claims processing.

This application leverages many of the services IBM Bluemix has to offer… on-demand CloudFoundry runtimes, a Cloudant NoSQL database, scalable Cloud Object Storage (S3 compatible storage), and Bare Metal servers on SoftLayer. Bare Metal servers are *awesome*… I have a dedicated server in the cloud that has 24 cores (48 threads), 64 GB RAM, a RAID array of SSD drives, and 2 high end multi-core GPUs. It’s taken my analysis processes from 2-3 hours on my laptop down to 10 minutes for photogrammetric reconstruction with Watson analysis.

It’s been an incredibly interesting project, and you can check it out yourself in the links below.

World of Watson

World of Watson was a whirlwind of the best kind… I had the opportunity to join IBM SVP of Cloud, Robert LeBlanc, on stage as part of the Cloud keynote at T-Mobile Arena (a huge venue that seats over 20,000 people) to show off the drone/insurance demo, plus 2 more presentations, and an “ask me anything” session on the expo floor.


The official recording is available on IBM Go, but it’s easier to just see the YouTube videos. There are two segments for my presentation: the “set up” starts at 57:16 here: https://youtu.be/VrZMQZSB_UE?t=57m16s and the “end result” starts at 1:08:00 https://youtu.be/VrZMQZSB_UE?t=1h8m0s. I wasn’t allowed to fly inside the arena, but at least I was able to bring the Inspire up on stage as a prop!

You can also check out my session “Elevate Your apps with IBM Bluemix” on UStream to see an overview in much more detail:

.. and that’s not all. I also finally got to see a complete working version of the Olympic Cycling team’s training app on the expo floor, including cycling/biometric feedback, video, etc… I worked with an IBM JStart team and wrote the video integration layer for the mobile app using IBM Cloud Object Storage and Aspera for efficient network transmission.


This app was also showcased in Jason McGee’s general session “Trends & Directions: Digital Innovation in the Era of Cloud and Cognitive”: https://youtu.be/hgd3tbc2eKs?t=11m49s

Gartner Symposium

At the Gartner Symposium event, I showed the end to end workflow for the drone/insurance app…


On this project we’ve been working with a partner, DataWing, who provides drone image/data capture as a service. However, I’ve also been flying and capturing my own data. The app can process virtually any images with appropriate metadata, but I’ve been putting both the DJI Phantom and Inspire 1 to work, and they’re working fantastically.

Here’s a sample point-cloud scan I did of my office. :)

  • Left-click and drag to rotate
  • Right-click and drag to pan
  • Scroll or pinch/pull to zoom

Or check it out fullscreen in a new window.

Mobile Apps, Cognitive Computing, & Wearables

Last week I was in good ol’ Las Vegas for IBM InterConnect – IBM’s largest conference of the year. With over 20,000 attendees, it was a fantastic event that covered everything from technical details for developers to forward-looking strategy and trends for C-level executives. IBM also made some big announcements for developers – OpenWhisk serverless computing and bringing the Swift language to the server – just to name a few. Both of these are exciting new initiatives that offer radical changes & simplification to developer workflows.

It was a busy week to say the least – lots of presentations, a few labs, and even a role in the main stage Swift keynote. You can expect to find more detail on each of these here on the blog in the days/weeks to come.

For starters, here are two “lightning talks” I presented in the InterConnect Dev@ developer zone:

Smarter apps with Cognitive Computing

This session introduces the concept of cognitive computing, and demonstrates how you can use cognitive services in your own mobile apps.  If you aren’t familiar with cognitive computing, then I strongly recommend that you check out this post: The Future of Cognitive Computing.

In the presentation below, I show two apps leveraging services on Bluemix, IBM’s Cloud computing platform, and the iOS SDK for Watson.

Actually, I’m using two Watson SDKs… The older Speech SDK for iOS, and the new iOS SDK.  I’m using the older speech SDK in one example because it supports continuous listening for Watson Speech To Text, which is currently still in development for the new SDK.

You can check out the source code for the translator app here.

Redefining your personal mobile expression with on-body computing

My second presentation highlighted how we can use on-body computing devices to change how we interact with systems and data.  For example, we can use a luxury smart watch (ex: Apple Watch) to consume and engage with data in more efficient and more personal ways.  Likewise, we can also use smart/wearable peripheral devices to access and act on data in ways that were never possible before.

For example, determining gestures or biometric status based upon patterns in raw data transmitted by the on-body devices.  For this, I leveraged the new IBM Wearables SDK, which provides a consistent interface/abstraction layer for interacting with wearable sensors.  This allows you to focus on building apps that interact with the data, rather than learning the ins & outs of a new device-specific SDK.

The wearables SDK also uses data interpretation algorithms to enable you to define gestures or patterns in the data, and use those patterns to act upon events when they happen – without additional user interaction.  For example: you can detect when someone falls down, when someone raises their hand, anomalies in heart rate or skin temperature, and much more.  The system is capable of learning patterns for any type of action or virtually any data being submitted to the system.  Sound interesting?  Then check it out here.
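The Wearables SDK learns these patterns for you, but the underlying idea, flagging readings that deviate from a recent baseline, can be sketched in a few lines of JavaScript (the threshold and data below are invented for illustration):

```javascript
// Toy anomaly detector: flag readings that deviate from a rolling mean
// by more than a fixed threshold. (The IBM Wearables SDK uses learned
// patterns; this only illustrates the general idea.)
function detectAnomalies(readings, windowSize, threshold) {
  var anomalies = [];
  for (var i = windowSize; i < readings.length; i++) {
    var window = readings.slice(i - windowSize, i);
    var mean = window.reduce(function(a, b) { return a + b; }, 0) / windowSize;
    if (Math.abs(readings[i] - mean) > threshold) {
      anomalies.push({ index: i, value: readings[i], baseline: mean });
    }
  }
  return anomalies;
}

// A heart-rate stream with one suspicious spike
console.log(detectAnomalies([72, 74, 73, 75, 74, 118, 76, 74], 5, 20));
```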

The wearables SDK is open source on Github, and contains a sample to help you get started.

I also had some other sessions on integrating drones with cloud services, integrating weather services in your mobile apps, and more.  I’ll be sure to post updates as I make that content publicly available.  I think you’ll find the session on drones + cloud especially interesting – I know I did.

Introducing the new Watson iOS SDK (beta)

I’ve written here in the past on both the impact of cognitive computing, and how you can integrate IBM Watson services into your mobile apps to add cognitive language processing capabilities and more.  I’m happy to share that IBM has just recently released a new beta SDK that makes integrating more Watson services into your iOS applications easier than ever.

If you aren’t familiar with cognitive computing, or the transformative impact that it is already having on entire industries, then I strongly suggest checking out this video and related article on IBM DeveloperWorks.

IBM Watson services, which are based on machine learning algorithms, give you the ability to handle unstructured data, like text analysis or translation, speech processing, and more.  This makes consumption, mining, or responding to unstructured data or “dark data” faster, more efficient, and more powerful than ever.

The new Watson iOS SDK provides developers with an API to simplify integration of the Watson Developer Cloud services into their mobile apps, including the Dialog, Language Translation, Natural Language Classifier, Personality Insights, Speech To Text, Text to Speech, Alchemy Language, or Alchemy Vision services – all of which are available today, and can now be integrated with just a few lines of code.

The Watson iOS SDK makes integration with Watson services *really* easy. For example, if you want to take advantage of the Language Translation service, you first have to set up a service instance. Once the translation service is set up, you’ll be able to leverage translation capabilities within your mobile app:

//instantiate the LanguageTranslation service
let service = LanguageTranslation(username: "yourname", password: "yourpass")

//invoke translation methods
service.translate(["Hello","Welcome"], source: "en", target: "es", callback: { (text: [String], error) in
  //do something with the translated text strings
})
I’ve actually put a sample application together that demonstrates the language translation service integration, which you can access at github.com/triceam/Watson-iOS-SDK-Demo.


Be sure to check out the sample’s readme for additional detail and setup instructions. As with all of the Watson services, you must have a service instance properly configured with authentication credentials in order to consume it within your app.

The new Watson iOS SDK is written in Swift, is open source, and the team encourages you to provide feedback, submit issues, or make contributions.  You can learn more about the Watson iOS SDK, get the source code, and access the open source project here.

Mobile Apps with Language & Translation Services using IBM Watson & IBM MobileFirst

UPDATE 12/22/15:  IBM Recently released a new iOS SDK for Watson that makes integration with Watson services even easier. You can read more about it here.

I recently gave a presentation at IBM Insight on Cognitive Computing in mobile apps.  I showed two apps: one that uses Watson natural language processing to perform search queries, and another that uses Watson translation and speech to text services to take text in one language, translate it to another language, then even have the app play back the spoken audio in the translated language.  It’s this second app that I want to highlight today.

In fact, it gets much cooler than that.  I had an idea: “What if we hook up an OCR (optical character recognition) engine to the translation services?” That way, you can take a picture of something, extract the text, and translate it.  It turns out, it’s not that hard, and I was able to put together this sample app in just under two days.  Check out the video below to see it in action.

To be clear, I ended up using a version of the open source Tesseract OCR engine targeting iOS. This is not based on any of the work IBM research is doing with OCR or natural scene OCR, and should not be confused with any IBM OCR work.  This is basic OCR and works best with dark text on a light background.

The Tesseract engine lets you pass in an image, then handles the OCR operations, returning you a collection of words that it is able to extract from that image.  Once you have the text, you can do whatever you want from it.
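For instance, the handoff between the two steps can be as simple as cleaning and joining the recognized words before sending the text off for translation (a sketch; the word list below is invented):

```javascript
// Join the OCR engine's recognized words into a single string for the
// translation service, dropping whitespace-only fragments.
function wordsToText(words) {
  return words
    .map(function(w) { return w.trim(); })
    .filter(function(w) { return w.length > 0; })
    .join(' ');
}

console.log(wordsToText(['Hello', ' and ', 'welcome!', ''])); // "Hello and welcome!"
```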

So, here’s where Watson Developer Cloud Services come into play. First, I used the Watson Language Translation Service to perform the translation.  When using this service, I make a request to my Node.js app running on IBM Bluemix (IBM’s cloud platform).  The Node.js app acts as a facade and delegates to the Watson service for the actual translation.


You can check out a sample on the web here:


On the mobile client, you just make a request to your service and do something with the response. The example below uses the IMFResourceRequest API to make a request to the server (this can be done in either Objective C or Swift). IMFResourceRequest is the MobileFirst wrapper for networking requests that enables the MobileFirst/Mobile Client Access service to capture operational analytics for every request made by the app.

NSDictionary *params = @{
  // request parameters (e.g. the text to translate and the source & target languages)
};

IMFResourceRequest * imfRequest =
  [IMFResourceRequest requestWithPath:@"https://translator.mybluemix.net/translate"
                      method:@"GET" parameters:params];

[imfRequest sendWithCompletionHandler:^(IMFResponse *response, NSError *error) {
  NSDictionary* json = response.responseJson;
  NSArray *translations = [json objectForKey:@"translations"];
  NSDictionary *translationObj = [translations objectAtIndex:0];
  self.lastTranslation = [translationObj objectForKey:@"translation"];
  // now do something with the result - like update the UI
}];

On the Node.js server, it is simply taking the request and brokering it to the Watson Translation service (using the Watson Node.js SDK):

app.get('/translate', function(req, res){
  language_translation.translate(req.query, function(err, translation) {
    if (err) {
      res.send( err );
    } else {
      res.send( translation );
    }
  });
});
Once you receive the result from the server, then you can update the UI, make a request to the speech to text service, or pretty much anything else.

To generate audio using the Watson Text To Speech service, you can either use the Watson Speech SDK, or you can use the Node.js facade again to broker requests to the Watson Text To Speech service. In this sample I used the Node.js facade to generate FLAC audio, which I played in the native iOS app using the open source Origami Engine library that supports FLAC audio formats.

You can preview audio generated using the Watson Text To Speech service using the embedded audio below. Note: In this sample I’m using the OGG file format; it will only work in browsers that support OGG.

English: Hello and welcome! Please share this article with your friends!

Spanish: Hola y bienvenido! Comparta este artículo con sus amigos!

app.get('/synthesize', function(req, res) {
  var transcript = textToSpeech.synthesize(req.query);
  transcript.on('response', function(response) {
    if (req.query.download) {
      response.headers['content-disposition'] = 'attachment; filename=transcript.flac';
    }
  });
  transcript.on('error', function(error) {
    console.log('Synthesize error: ', error);
  });
  //stream the generated audio back to the client
  transcript.pipe(res);
});

On the native iOS client, I download the audio file and play it using the Origami Engine player. This could also be done with the Watson iOS SDK (much easier), but I wrote this sample before the SDK was available.

//format the URL
NSString *urlString = [NSString stringWithFormat:@"https://translator.mybluemix.net/synthesize?text=%@&voice=%@&accept=audio/flac&download=1", phrase, voice];
NSString* webStringURL = [urlString stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding];
NSURL *flacURL = [NSURL URLWithString:webStringURL];

//download the contents of the audio file
NSData *audioData = [NSData dataWithContentsOfURL:flacURL];
NSString *docDirPath = NSTemporaryDirectory() ;
NSString *filePath = [NSString stringWithFormat:@"%@transcript.flac", docDirPath ];
[audioData writeToFile:filePath atomically:YES];

//pass the file url to the origami player and play the audio
NSURL* fileUrl = [NSURL fileURLWithPath:filePath];
[self.orgmPlayer playUrl:fileUrl];

Cognitive computing is all about augmenting the experience of the user, and enabling users to perform their duties more efficiently and more effectively. The Watson language services enable any app to better facilitate communication and broaden the reach of content across diverse user bases. You should definitely check them out to see how Watson services can benefit you.


So, I mentioned that this app uses IBM MobileFirst offerings on Bluemix. In particular I am using the Mobile Client Access service to collect logs and operational analytics from the app. This lets you capture logs and usage metrics for apps that are live “out in the wild”, providing insight into what people are using, how they’re using it, and the health of the system at any point in time.

Analytics from the Mobile Client Access service

Be sure to check out the MobileFirst on Bluemix and MobileFirst Platform offerings for more detail.


You can access the sample iOS client and Node.js code at https://github.com/triceam/Watson-Translator. Setup instructions are available in the readme document. I intend to update this app with some more translation use cases in the future, so be sure to check back!