Tag Archives: cloud

Interview: Gathering & analyzing data with drones & IBM Bluemix

Here’s an interview that I recently did with IBM DeveloperWorks TV at the recent World of Watson conference. In it I discuss a project I’ve been working on that analyzes drone imagery to perform automatic damage detection using the Watson Visual Recognition service, and that generates 3D models from the drone images using photogrammetry. The best part: the entire thing runs in the cloud on IBM Bluemix.

It leverages the IBM Watson Visual Recognition service with custom classifiers to detect the presence of hail damage on shingled roofs, Cloudant for metadata/record storage, the IBM Cloud Object Storage cross-region S3 API for massively scalable, distributed image/model/asset storage, and Bare Metal servers for high-performance computing.
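To give a feel for the custom-classifier piece, here's a minimal sketch of how an app might decide "hail damage present" from a Visual Recognition-style response. The classifier name and confidence threshold are illustrative assumptions, not the project's actual values; the nested response shape mirrors the service's documented JSON result.

```python
# Sketch: thresholding Watson Visual Recognition output for hail damage.
# "hail_damage" and the 0.6 cutoff are hypothetical placeholders.

HAIL_DAMAGE_CLASS = "hail_damage"   # assumed custom-classifier class name
SCORE_THRESHOLD = 0.6               # assumed confidence cutoff

def detect_hail_damage(watson_response):
    """Scan a classify() response for a confident hail-damage class."""
    for image in watson_response.get("images", []):
        for classifier in image.get("classifiers", []):
            for cls in classifier.get("classes", []):
                if (cls["class"] == HAIL_DAMAGE_CLASS
                        and cls["score"] >= SCORE_THRESHOLD):
                    return True
    return False

sample = {
    "images": [{
        "classifiers": [{
            "classifier_id": "hail_damage_1234",
            "classes": [{"class": "hail_damage", "score": 0.82}],
        }],
    }],
}
print(detect_hail_damage(sample))  # True
```

In the real app a decision like this would gate whether a claim image gets flagged for an adjuster's review.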

Bare Metal servers are dedicated machines in the cloud: not shared, and not virtualized. I’ve got mine set up as a Linux server with 24 cores (48 threads), 64 GB of RAM, an SSD RAID array, multiple GPUs, and more. It improved my photogrammetry rendering from hours on my laptop down to merely 10 minutes (in my opinion, the best part).

I’ve done all of my testing with DJI Phantom and DJI Inspire aircraft, but it could really work with images from any camera that embeds GPS information.
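The GPS data these cameras embed is stored as EXIF degrees/minutes/seconds rationals, which photogrammetry tools generally want as decimal degrees. Here's a sketch of just the coordinate math; real EXIF parsing would use a library such as Pillow or exifread.

```python
# Sketch: converting EXIF-style GPS coordinates (degrees, minutes, seconds,
# plus an 'N'/'S'/'E'/'W' hemisphere reference) into decimal degrees.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert a (d, m, s) triple and a hemisphere reference to a float."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # South and West hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value

# e.g. 32 deg 46' 48.0" N -> 32.78
print(dms_to_decimal(32, 46, 48.0, "N"))
```

With a decimal latitude/longitude per image, the reconstruction pipeline can position each photo in space before matching features between them.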

Check out the video to see it in action.

Drones, Bots, Cognitive Apps, Image Recognition, Motion Analysis, and Photogrammetry (or, what I’ve been up to lately)

It’s been a while since I’ve posted here on the blog…  In fact, I just did the math, and it’s been over 7 months. Lots of things have happened since then: I’ve moved to a new team within IBM, built new developer tools, worked directly with clients on their solutions, worked on a few high-profile keynotes, built apps for kinetic motion and activity tracking, built a mobile client for a chat bot, and even completed some new drone projects.  It’s been exciting, to say the least, but the real reason I’m writing this post is to share a few of the public projects I’ve been involved with from recent conferences.

I recently returned from Gartner Symposium and IBM’s annual World of Watson conference, and it’s been one of the busiest, yet most exciting, two-week spans I’ve experienced in quite a while.

At both events, we showed a project I’ve been working on with IBM’s Global Business Services team that focuses on the use of small consumer drones and drone imagery to transform insurance use cases. In particular, it leverages IBM Watson to automatically detect roof damage, in conjunction with photogrammetry, to create 3D reconstructions and generate measurements of affected areas to expedite and automate claims processing.

This application leverages many of the services IBM Bluemix has to offer: on-demand Cloud Foundry runtimes, a Cloudant NoSQL database, scalable Cloud Object Storage (S3-compatible storage), and Bare Metal servers on SoftLayer. Bare Metal servers are *awesome*… I have a dedicated server in the cloud with 24 cores (48 threads), 64 GB of RAM, a RAID array of SSD drives, and 2 high-end GPUs. It’s taken my analysis process from 2-3 hours on my laptop down to 10 minutes for photogrammetric reconstruction with Watson analysis.

It’s been an incredibly interesting project, and you can check it out yourself in the links below.

World of Watson

World of Watson was a whirlwind of the best kind… I had the opportunity to join IBM SVP of Cloud, Robert LeBlanc, on stage as part of the Cloud keynote at T-Mobile Arena (a huge venue that seats over 20,000 people) to show off the drone/insurance demo, plus deliver 2 more presentations and an “ask me anything” session on the expo floor.


The official recording is available on IBM Go, but it’s easier to just watch the YouTube video. There are two segments for my presentation: the “set up” starts at 57:16 (https://youtu.be/VrZMQZSB_UE?t=57m16s) and the “end result” starts at 1:08:00 (https://youtu.be/VrZMQZSB_UE?t=1h8m0s). I wasn’t allowed to fly inside the arena, but at least I was able to bring the Inspire up on stage as a prop!

You can also check out my session “Elevate Your apps with IBM Bluemix” on UStream to see an overview in much more detail:

…and that’s not all. I also finally got to see a complete working version of the Olympic cycling team’s training app on the expo floor, including cycling/biometric feedback, video, and more. I worked with an IBM JStart team and wrote the video integration layer for the mobile app using IBM Cloud Object Storage and Aspera for efficient network transmission.


This app was also showcased in Jason McGee’s general session “Trends & Directions: Digital Innovation in the Era of Cloud and Cognitive”: https://youtu.be/hgd3tbc2eKs?t=11m49s

Gartner Symposium

At the Gartner Symposium event, I showed the end-to-end workflow for the drone/insurance app…

Drones

On this project we’ve been working with a partner, DataWing, which provides drone image/data capture as a service. However, I’ve also been flying and capturing my own data. The app can process virtually any images with appropriate metadata, but I’ve been putting both the DJI Phantom and Inspire 1 to work, and they’re performing fantastically.

Here’s a sample point-cloud scan I did of my office. :)

  • Left-click and drag to rotate
  • Right-click and drag to pan
  • Scroll or pinch/pull to zoom

Or check it out fullscreen in a new window.

New Swift Offerings from IBM

In my last post I mentioned some new announcements related to the Swift programming language at IBM.  Upon further thought, I guess it’s probably not a bad idea to re-post more detail here too…

If you didn’t see/hear it last week, IBM unveiled several projects to advance the Swift language for developers, which we think will have a profound impact on developers & developer productivity in the years to come. You can view a replay of the IBM announcement in the video embedded below, or just scroll down for direct links:

Here are quick links to each of the projects listed:

Kitura
A lightweight web framework written in Swift that allows you to easily build web services with complex routes. Learn more…

Swift Package Catalog
The IBM Swift Package Catalog enables the Swift.org developer community to leverage and share code across projects. Learn more…

Updated IBM Swift Sandbox
The Swift Sandbox enables developers to learn Swift, build prototypes and share code snippets. Whatever your Swift ambitions, join the over 100,000 community members using the Sandbox today. Learn more…

OpenWhisk
OpenWhisk is an event-driven compute platform that executes application logic in response to events or through direct invocations from web/mobile apps or other endpoints. Learn more…
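To make "executes application logic in response to events" concrete, here's a minimal OpenWhisk action sketch in Python (OpenWhisk also runs Swift and Node.js actions). The platform invokes a `main` function with a dict of parameters and expects a dict back; the greeting logic is just a placeholder.

```python
# Sketch of an OpenWhisk action: the platform calls main(params) with the
# event/invocation parameters and returns the resulting dict as JSON.

def main(params):
    name = params.get("name", "stranger")
    return {"greeting": "Hello, %s!" % name}

print(main({"name": "World"}))  # {'greeting': 'Hello, World!'}
```

Deployed with the `wsk` CLI, an action like this can be wired to a trigger (say, a Cloudant document change) or invoked directly over HTTP.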

Or, you can read John Ponzo’s official announcement here.


This is more or less a re-share from my original post on the Swift@IBM Blog.

Wearables & IBM MobileFirst – Video & Sample Code

Last week I attended IBM Insight in Las Vegas. It was a great event, with tons of great information for attendees. I had a few sessions on mobile applications. In particular, my dev@Insight session on Wearables powered by IBM MobileFirst was recorded. You can check it out here:

https://youtu.be/d4AEwCOmvug

Sorry it’s not in HD, but the content is still great! (Yes, I am biased.)

In this session I showed how you can power wearable apps, specifically those on smart watch devices, using either the MobileFirst Platform Foundation Server, or the MobileFirst offerings on IBM Bluemix (cloud).

Key takeaways from the session:

  1. Wearables are the most personal computing devices ever. Your users can use them to be notified of information, search/consume data, or even collect environmental data for reporting or actionable analysis.
  2. Regardless of whether you’re developing for a peripheral device like the Apple Watch or Microsoft Band, or a standalone device like Android Wear, you are developing an app that runs in an environment that mirrors that of a native app. So the fundamental development principles are exactly the same: you write native code that uses standard protocols and common conventions to interact with the back-end.
  3. One caveat: your user interface is much smaller. You should design the user interface and services to accommodate the reduced amount of information that can be displayed.
  4. You can share code across both the phone/tablet and watch/wearable experience (depending on the target device).
  5. Using IBM MobileFirst you can easily expose data, add authentication, and capture analytics for both the mobile and wearable solutions.
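The "standard protocols and common conventions" point boils down to ordinary authenticated HTTP/JSON calls from the watch extension. Here's a sketch of that request pattern, written in Python for brevity (on-device it would be native Swift or Java); the URL, resource name, and bearer-token header are illustrative assumptions, and MobileFirst adapters layer their own auth flow on top.

```python
# Sketch: building the kind of authenticated JSON request a wearable app's
# extension makes against its back-end. All names here are hypothetical.

import urllib.request

def build_request(base_url, resource, token):
    """Construct (but don't send) an authenticated JSON GET request."""
    req = urllib.request.Request(base_url + "/" + resource)
    req.add_header("Authorization", "Bearer " + token)
    req.add_header("Accept", "application/json")
    return req

req = build_request("https://example.mybluemix.net", "stocks", "demo-token")
print(req.full_url, req.get_header("Authorization"))
```

The same request-building code can be shared between the phone app and the watch extension, which is point #4 above in practice.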

Demos/Code Samples:

In the session I showed 3 sample wearable apps.  Full source code and setup instructions for each app are available at: https://github.com/triceam/MobileFirst-Wearables/

Stocks

A sample WatchKit (Apple Watch) app powered by IBM MobileFirst Platform Foundation Server.


Contacts

A sample WatchKit (Apple Watch) app powered by IBM MobileFirst on Bluemix.


Heartrate

A simple heart rate monitor using the Microsoft Band, powered by MobileFirst on Bluemix and IBM Cloudant.
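Cloudant stores JSON documents, so readings from the Band can land in the database as small timestamped documents. Here's a sketch of what one such document might look like; the field names are illustrative guesses, not the sample app's actual schema.

```python
# Sketch: the shape of a heart-rate reading document destined for Cloudant.
# Field names ("type", "bpm", etc.) are hypothetical placeholders.

import json
import time

def heartrate_doc(bpm, device="Microsoft Band"):
    """Build a JSON-serializable document for one heart-rate reading."""
    return {
        "type": "heartrate",
        "bpm": bpm,
        "device": device,
        "timestamp": int(time.time()),  # seconds since epoch
    }

doc = heartrate_doc(72)
print(json.dumps(doc))
```

Documents like this would be POSTed to the database over Cloudant's HTTP API, and the "type" field makes it easy to index readings with a view or query.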


 

Thoughts on Cognitive Computing

You may have heard a lot of buzz coming out of IBM lately about Cognitive Computing, and you might have also wondered “what the heck are they talking about?”  You may have heard of services for data and predictive analytics, services for natural language text processing, services for sentiment analysis, and services that understand speech and translate languages, but it’s sometimes hard to see the forest for the trees.

I highly recommend taking a moment to watch this video that introduces Cognitive Computing from IBM:

Those services that I mentioned above are all examples of Cognitive Computing systems, and are all available for you to use today.

From IBM Research:

Cognitive computing systems learn and interact naturally with people to extend what either humans or machines could do on their own.

They help human experts make better decisions by penetrating the complexity of Big Data.

Cognitive systems are often based upon massive sets of data and powerful analytics algorithms that detect patterns and concepts that can be turned into actionable information for end users.  It’s not “artificial intelligence” in the sense that the services/machines act on their own; rather, it’s a system that provides the user with tools or information that enables them to make better decisions.

The benefits of cognitive systems in a nutshell:

  1. They augment the user’s experience
  2. They provide the ability to process information faster
  3. They make complex information easier to understand
  4. They enable you to do things you might not otherwise be able to do

Curious where this will lead?  Now take a moment and watch this video about the industry-transforming opportunities that Cognitive Computing is already beginning to bring to life:

So, why is the “mobile guy” talking about Cognitive Computing?

First, it’s because Cognitive Computing is big… I mean, really, really big.  Cognitive systems are literally transforming industries and putting powerful analytics and insight into the hands of both experts and “normal people”.  When I say “into the hands”, I again mean this literally; much of this cognitive capability is being delivered to end users through their mobile devices.

It’s also because cognitive systems fit nicely with IBM’s MobileFirst product offerings.  It doesn’t matter whether you’re using the MobileFirst Platform Foundation server on-premise, or leveraging the MobileFirst offerings on IBM Bluemix, in both cases you can easily consume IBM Watson cognitive services to augment and enhance the interactions and data for your mobile applications. Check out the Bluemix catalog to see how you might start adding Watson cognitive or big data abilities to your apps today.

Last, and this is purely personal opinion, I see the MobileFirst offerings themselves as providing somewhat of a cognitive service for developing mobile apps.  If you look at it from the operational analytics perspective, you have immediate insight and a snapshot into the health of your system that you would never have seen otherwise.  You can know what types of devices are hitting your system, what services are being used, how long things are taking, and detect issues, all without any additional development effort on your end. It’s not predictive analytics, but it sure is helpful and gets us moving in the right direction.