Category Archives: Cognitive Computing

Is that me on the company home page?

It’s not every day that you get the opportunity to have your work showcased front and center on the main landing page of one of the largest companies in the world. Well, today is definitely my lucky day. I was interviewed last month about a drone-related project I’ve been working on that focuses on insurance use cases and safety/productivity improvements using cognitive/artificial intelligence via IBM Watson. I knew it was going to be used for some marketing materials, but the last thing I expected was to have my image right there on ibm.com. I see this as a tremendous honor, and I’m humbled by the opportunity and exposure.

[Screenshot: ibm.com home page]

You can check out the complete article/interview at: https://www.ibm.com/thought-leadership/passion-projects/smart-drone/

Interview: Gathering & analyzing data with drones & IBM Bluemix

Here’s an interview I did with IBM DeveloperWorks TV at the recent World of Watson conference. In it I discuss a project I’ve been working on that analyzes drone imagery to perform automatic damage detection using the Watson Visual Recognition service, and generates 3D models from the drone images using photogrammetry. The best part – the entire thing runs in the cloud on IBM Bluemix.

It leverages the IBM Watson Visual Recognition service with custom classifiers to detect the presence of hail damage on shingled roofs, Cloudant for metadata/record storage, the IBM Cloud Object Storage cross-region S3 API for massively scalable & distributed image/model/asset storage, and Bare Metal servers for high performance computing.
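
For a sense of what the damage-detection piece looks like from code, here is a minimal sketch of a classify call against a custom classifier, hitting the Visual Recognition v3 REST API directly from Swift rather than going through an SDK. The endpoint, version string, classifier ID, API key, and image URL below are placeholders/assumptions, not values from the actual project:

import Foundation

// Hypothetical values – substitute your own Visual Recognition API key,
// custom classifier ID, and an image URL that the service can reach.
let apiKey = "YOUR_API_KEY"
let classifierID = "hail_damage_123456789"
let imageURL = "https://example.com/drone/roof-section-042.jpg"

// Build the Visual Recognition v3 "classify" request using the custom classifier.
var components = URLComponents(string: "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify")!
components.queryItems = [
  URLQueryItem(name: "api_key", value: apiKey),
  URLQueryItem(name: "version", value: "2016-05-20"),
  URLQueryItem(name: "url", value: imageURL),
  URLQueryItem(name: "classifier_ids", value: classifierID),
  URLQueryItem(name: "threshold", value: "0.5")
]

// The JSON response contains per-classifier confidence scores for each image.
let task = URLSession.shared.dataTask(with: components.url!) { data, _, error in
  guard let data = data, error == nil else { return }
  if let json = try? JSONSerialization.jsonObject(with: data) {
    print(json)
  }
}
task.resume()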

Bare Metal servers are dedicated machines in the cloud: not shared, and not virtualized. I’ve got mine set up as a Linux server with 24 cores (48 threads), 64 GB of RAM, an SSD RAID array, multiple GPUs, etc… and it cut my photogrammetry rendering from hours on my laptop down to merely 10 minutes (in my opinion, the best part).

I’ve done all of my testing with DJI Phantom and DJI Inspire aircraft, but really, it could work with images from any camera that embeds GPS information.
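
To make the GPS requirement concrete, here is a minimal Swift/ImageIO sketch that reads the embedded coordinates from a single photo. The file path is hypothetical; any JPEG with a GPS EXIF block should work:

import Foundation
import ImageIO

// Hypothetical file path – point this at any geotagged image.
let fileURL = URL(fileURLWithPath: "/path/to/DJI_0042.JPG")

if let source = CGImageSourceCreateWithURL(fileURL as CFURL, nil),
  let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any],
  let gps = properties[kCGImagePropertyGPSDictionary as String] as? [String: Any],
  let latitude = gps[kCGImagePropertyGPSLatitude as String] as? Double,
  let longitude = gps[kCGImagePropertyGPSLongitude as String] as? Double {

  // EXIF stores unsigned coordinates plus N/S and E/W reference letters.
  let latRef = gps[kCGImagePropertyGPSLatitudeRef as String] as? String ?? "N"
  let lonRef = gps[kCGImagePropertyGPSLongitudeRef as String] as? String ?? "E"
  let signedLat = latRef == "S" ? -latitude : latitude
  let signedLon = lonRef == "W" ? -longitude : longitude

  print("Image captured at \(signedLat), \(signedLon)")
}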

Check out the video to see it in action.

Drones, Bots, Cognitive Apps, Image Recognition, Motion Analysis, and Photogrammetry (or, what I’ve been up to lately)

It’s been a while since I’ve posted here on the blog… In fact, I just did the math, and it’s been over 7 months. Lots of things have happened since then: I’ve moved to a new team within IBM, built new developer tools, worked directly with clients on their solutions, worked on a few high-profile keynotes, built apps for kinetic motion and activity tracking, built a mobile client for a chat bot, and even completed some new drone projects. It’s been exciting to say the least, but the real reason I’m writing this post is to share a few of the public projects I’ve been involved with from recent conferences.

I recently returned from Gartner Symposium and IBM’s annual World of Watson conference, and it’s been one of the busiest, yet most exciting, two-week spans I’ve experienced in quite a while.

At both events, we showed a project I’ve been working on with IBM’s Global Business Services team that focuses on using small consumer drones and drone imagery to transform insurance use cases. In particular, it leverages IBM Watson to automatically detect roof damage, in conjunction with photogrammetry to create 3D reconstructions and generate measurements of affected areas, in order to expedite and automate claims processing.

This application leverages many of the services IBM Bluemix has to offer… on-demand CloudFoundry runtimes, a Cloudant NoSQL database, scalable Cloud Object Storage (S3-compatible storage), and Bare Metal servers on SoftLayer. Bare Metal servers are *awesome*… I have a dedicated server in the cloud with 24 cores (48 threads), 64 GB of RAM, a RAID array of SSD drives, and 2 high-end multi-core GPUs. It’s taken my analysis process (photogrammetric reconstruction plus Watson analysis) from 2-3 hours on my laptop down to 10 minutes.
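
To give a feel for the Cloudant piece, here is a minimal sketch of writing a flight/claim metadata record. Cloudant exposes a CouchDB-compatible HTTP API, so a document insert is just a POST of JSON to the database URL; the account, database, credentials, and fields below are hypothetical:

import Foundation

// Hypothetical Cloudant account, database, and credentials.
let url = URL(string: "https://myaccount.cloudant.com/drone_flights")!
var request = URLRequest(url: url)
request.httpMethod = "POST"
request.addValue("application/json", forHTTPHeaderField: "Content-Type")
let credentials = Data("apiUser:apiPassword".utf8).base64EncodedString()
request.addValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")

// Example metadata record: which flight it was, where the source images live in
// object storage, and the classification result that came back from Watson.
let record: [String: Any] = [
  "flightId": "2016-10-24-roof-inspection",
  "imageKeys": ["images/roof-001.jpg", "images/roof-002.jpg"],
  "damageDetected": true,
  "damageScore": 0.87
]
request.httpBody = try? JSONSerialization.data(withJSONObject: record)

// Cloudant answers with {"ok":true,"id":"...","rev":"..."} on success.
URLSession.shared.dataTask(with: request) { data, _, _ in
  if let data = data, let body = String(data: data, encoding: .utf8) {
    print(body)
  }
}.resume()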

It’s been an incredibly interesting project, and you can check it out yourself in the links below.

World of Watson

World of Watson was a whirlwind of the best kind… I had the opportunity to join IBM SVP of Cloud, Robert LeBlanc, on stage as part of the Cloud keynote at T-Mobile Arena (a huge venue that seats over 20,000 people) to show off the drone/insurance demo, plus I gave two more presentations and an “ask me anything” session on the expo floor.


The official recording is available on IBM Go, but it’s easier to just watch the YouTube videos. There are two segments for my presentation: the “set up” starts at 57:16 here: https://youtu.be/VrZMQZSB_UE?t=57m16s and the “end result” starts at 1:08:00 here: https://youtu.be/VrZMQZSB_UE?t=1h8m0s. I wasn’t allowed to fly inside the arena, but at least I was able to bring the Inspire up on stage as a prop!

You can also check out my session “Elevate Your apps with IBM Bluemix” on UStream to see an overview in much more detail:

…and that’s not all. I also finally got to see a complete working version of the Olympic Cycling team’s training app on the expo floor, including cycling/biometric feedback, video, etc… I worked with an IBM JStart team and wrote the video integration layer for the mobile app, using IBM Cloud Object Storage and Aspera for efficient network transmission.

[Image: Olympic cycling training app]

This app was also showcased in Jason McGee’s general session “Trends & Directions: Digital Innovation in the Era of Cloud and Cognitive”: https://youtu.be/hgd3tbc2eKs?t=11m49s

Gartner Symposium

At the Gartner Symposium event, I showed the end-to-end workflow for the drone/insurance app…

Drones

On this project we’ve been working with a partner, DataWing, which provides drone image/data capture as a service. However, I’ve also been flying and capturing my own data. The app can process virtually any images with appropriate metadata, but I’ve been putting both the DJI Phantom and the Inspire 1 to work, and they’re performing fantastically.

Here’s a sample point-cloud scan I did of my office. :)

  • Left-click and drag to rotate
  • Right-click and drag to pan
  • Scroll or pinch/pull to zoom

Or check it out fullscreen in a new window.

Mobile Apps, Cognitive Computing, & Wearables

Last week I was in good ol’ Las Vegas for IBM InterConnect – IBM’s largest conference of the year. With over 20,000 attendees, it was a fantastic event that covered everything from technical details for developers to forward-looking strategy and trends for C-level executives. IBM also made some big announcements for developers – OpenWhisk serverless computing and bringing the Swift language to the server, to name just two. Both are exciting new initiatives that offer radical changes & simplification to developer workflows.

It was a busy week to say the least – lots of presentations, a few labs, and even a role in the main stage Swift keynote. You can expect to find more detail on each of these here on the blog in the days/weeks to come.

For starters, here are two “lightning talks” I presented in the InterConnect Dev@ developer zone:

Smarter apps with Cognitive Computing

This session introduces the concept of cognitive computing, and demonstrates how you can use cognitive services in your own mobile apps.  If you aren’t familiar with cognitive computing, then I strongly recommend that you check out this post: The Future of Cognitive Computing.

In the presentation below, I show two apps leveraging services on Bluemix, IBM’s Cloud computing platform, and the iOS SDK for Watson.

Actually, I’m using two Watson SDKs… the older Speech SDK for iOS, and the new iOS SDK. I’m using the older Speech SDK in one example because it supports continuous listening for Watson Speech To Text, which is still in development for the new SDK.

You can check out the source code for the translator app here.

Redefining your personal mobile expression with on-body computing

My second presentation highlighted how we can use on-body computing devices to change how we interact with systems and data. For example, we can use a luxury smart watch (e.g. the Apple Watch) to consume and engage with data in more efficient and more personal ways. Likewise, we can also use smart/wearable peripheral devices to access and act on data in ways that were never possible before.

For example, determining gestures or biometric status based upon patterns in the raw data transmitted by the on-body devices. For this, I leveraged the new IBM Wearables SDK. The IBM Wearables SDK provides a consistent interface/abstraction layer for interacting with wearable sensors. This allows you to focus on building apps that interact with the data, rather than learning the ins & outs of a new device-specific SDK.

The Wearables SDK also uses data interpretation algorithms that enable you to define gestures or patterns in the data, and to act on those events when they happen – without additional user interaction. For example, you can determine if someone falls down, detect when someone raises their hand, spot anomalies in heart rate or skin temperature, and much more. The system is capable of learning patterns for virtually any type of action or data submitted to it. Sound interesting? Then check it out here.
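
To make the idea of acting on raw motion data concrete, here is a deliberately naive sketch. Note that this is plain CoreMotion on the phone itself, not the Wearables SDK, and it hard-codes a threshold rather than learning a pattern the way the SDK does:

import Foundation
import CoreMotion

// Toy illustration only: watch the accelerometer and flag a spike that could indicate a fall.
let motionManager = CMMotionManager()

if motionManager.isAccelerometerAvailable {
  motionManager.accelerometerUpdateInterval = 0.05   // 20 samples per second

  motionManager.startAccelerometerUpdates(to: .main) { data, error in
    guard let acceleration = data?.acceleration else { return }

    // Magnitude of the acceleration vector, in g.
    let magnitude = sqrt(acceleration.x * acceleration.x +
                         acceleration.y * acceleration.y +
                         acceleration.z * acceleration.z)

    // Naive "event" detection: a large spike could indicate an impact or fall.
    if magnitude > 3.0 {
      print("Possible fall detected (\(magnitude) g)")
    }
  }
}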

The Wearables SDK is open source on GitHub, and contains a sample to help you get started.

I also had some other sessions on integrating drones with cloud services, integrating weather services into your mobile apps, and more. I’ll be sure to post updates as I make that content publicly available. I think you’ll find the session on drones + cloud especially interesting – I know I did.

Introducing the new Watson iOS SDK (beta)

I’ve written here in the past about both the impact of cognitive computing and how you can integrate IBM Watson services into your mobile apps to add cognitive language processing capabilities and more. I’m happy to share that IBM has recently released a new beta SDK that makes integrating more Watson services into your iOS applications easier than ever.

If you aren’t familiar with cognitive computing, or the transformative impact that it is already having on entire industries, then I strongly suggest checking out this video and related article on IBM DeveloperWorks.

IBM Watson services, which are based on machine learning algorithms, give you the ability to work with unstructured data through text analysis, translation, speech processing, and more. This makes consuming, mining, or responding to unstructured or “dark” data faster, more efficient, and more powerful than ever.

The new Watson iOS SDK provides developers with an API that simplifies integration of Watson Developer Cloud services into their mobile apps, including the Dialog, Language Translation, Natural Language Classifier, Personality Insights, Speech To Text, Text to Speech, Alchemy Language, and Alchemy Vision services – all of which are available today, and can now be integrated with just a few lines of code.

The Watson iOS SDK makes integration with Watson services really easy. For example, if you want to take advantage of the Language Translation service, you first have to set up a service instance. Once the translation service is set up, you’ll be able to leverage translation capabilities within your mobile app:

//instantiate the LanguageTranslation service
let service = LanguageTranslation(username: "yourname", password: "yourpass")

//invoke translation methods
service.translate(["Hello", "Welcome"], source: "en", target: "es", callback: { (text: [String], error) in
  //do something with the translated text strings
})

I’ve actually put a sample application together that demonstrates the language translation service integration, which you can access at github.com/triceam/Watson-iOS-SDK-Demo.

[Screenshot: Swift translator sample app]

Be sure to check out the sample’s readme for additional detail and setup instructions. As with all of the Watson services, you must have a properly configured service instance, with authentication credentials, in order to consume it within your app.

The new Watson iOS SDK is written in Swift, is open source, and the team encourages you to provide feedback, submit issues, or make contributions.  You can learn more about the Watson iOS SDK, get the source code, and access the open source project here.