Say What? Live video chat between iOS & WebRTC with Twilio & IBM Watson Cognitive Computing in Real Time

What I’m about to show you might seem like science fiction from the future, but I can assure you it is not. Actually, every piece of this is available for you to use as a service.  Today.

Yesterday Twilio, an IBM partner whose services are available via IBM Bluemix, announced several new SDKs, including live video chat as a service. This makes live video easy to integrate into your native mobile or web-based applications, and gives you the power to do some very cool things. For example, what if you could add video chat between your mobile and web clients? Now, what if you could take things a step further and add IBM Watson cognitive computing capabilities for real-time transcription and analysis?

Check out this video from yesterday’s Twilio Signal conference keynote, where fellow IBMers Damion Heredia and Jeff Sloyer demonstrate exactly this scenario: the integration of the new Twilio video SDK between an iOS native client and a WebRTC client, with IBM Watson cognitive computing services providing real-time transcription and sentiment analysis.

If it doesn’t automatically jump to the IBM Bluemix demo, skip ahead to 2 hours, 15 minutes, and 20 seconds.

Jeff and Damion did an awesome job showing off both the new video service and the power of IBM Watson. I can also say first-hand that the new Twilio video services are pretty easy to integrate into your own projects; I helped them integrate these services into the native iOS client (the physician’s app) shown in the demo. You just pull in the SDK, add your app tokens, and instantiate a video chat. Jeff is pulling the audio stream from the WebRTC client and pushing it up to Watson in real time for the transcription and sentiment analysis services.

Bidirectional Communication Between An Apple Watch Extension and the Host App

In this entry we’re going to focus on building Apple Watch apps that can communicate back and forth with the host application running on the iPhone. This is extremely important since the Apple Watch provides a second-screen/peripheral experience complementary to the main app running on the iOS device, be it a remote control or a quick glance into what’s happening within the bigger picture.

In my last post I showed how to set up remote logging and instrumentation/analytics in an Apple Watch app using IBM MobileFirst Platform Foundation server. I used the methods described below for communicating between the WatchKit and host apps in the sample app from that previous post.

When we’re talking about bidirectional communication, we’re talking about sending data two ways:

  1. Sending data from the WatchKit app to the host app
  2. Sending data from the host app to the WatchKit app

At first thought, you might think “oh that’s easy, just use NSNotificationCenter to communicate between the separate classes of the application”, but things aren’t exactly that simple.

An Apple Watch app is really made of 3 parts: 1) the main iOS application binary, 2) the user interface on the Apple Watch, and 3) the WatchKit extension binary (on the iOS device).

Apple Watch App – Architectural Components

Yep, you read that correctly: the WatchKit extension (which controls all of the logic inside the Apple Watch UI and resides on the iOS device) is a separate binary from the “main” iOS application binary. These are separate processes, so objects in memory in the main app are not the same objects in memory in the extension, and as a result these processes cannot communicate directly. NSNotificationCenter isn’t going to work.

However, there are definitely ways to make this type of scenario work.

First, WatchKit provides a way to invoke actions on the host application from the WatchKit extension: WKInterfaceController’s openParentApplication:reply: method, which triggers application:handleWatchKitExtensionRequest:reply: in the containing app’s delegate. Together they let you pass data to the containing app and invoke a “reply” code block back in the WatchKit extension after the code in the host application has completed.

For example, in the WatchKit extension, this will invoke an action in the host application and handle the reply:

[WKInterfaceController openParentApplication:@{@"action":@"toggleStatus"} reply:^(NSDictionary *replyInfo, NSError *error) {
    [logger trace:@"toggleStatus reply"];
    [self updateUIFromHost:replyInfo];
}];

Inside the host application we have access to the userInfo NSDictionary that was passed, and we can respond to it accordingly. For example, in the code below I read the “action” string value from the userInfo instance, and take the appropriate action based upon its value.

- (void)application:(UIApplication *)application
handleWatchKitExtensionRequest:(NSDictionary *)userInfo
  reply:(void (^)(NSDictionary *replyInfo))reply {

  //handle this as a background task
  __block UIBackgroundTaskIdentifier watchKitHandler;
  watchKitHandler = [[UIApplication sharedApplication] beginBackgroundTaskWithName:@"backgroundTask"
            expirationHandler:^{
              //end the task if we run out of background time
              [[UIApplication sharedApplication] endBackgroundTask:watchKitHandler];
              watchKitHandler = UIBackgroundTaskInvalid;
            }];

  NSString *action = (NSString*) [userInfo valueForKey:@"action"];
  [logger trace:@"incoming request from WatchKit: %@", action];

  LocationManager * locationManager = [LocationManager sharedInstance];

  NSMutableDictionary *result = [[NSMutableDictionary alloc] init];

  if ([action isEqualToString:@"toggleStatus"]) {
    //toggle whether or not we're actually tracking the location
    [locationManager toggleTracking];
  } else if ([action isEqualToString:@"stopTracking"]) {
    [locationManager stopTracking];
  } else if ([action isEqualToString:@"currentStatus"]) {
    //do nothing for now
  }

  NSString *trackingString = locationManager.trackingActive ? @"true" : @"false";
  [result setValue:trackingString forKey:@"tracking"];
  reply(result);

  dispatch_after( dispatch_time( DISPATCH_TIME_NOW, (int64_t)NSEC_PER_SEC * 1 ), dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0 ), ^{
    [[UIApplication sharedApplication] endBackgroundTask:watchKitHandler];
  } );
}

This covers the “pull” scenario, and is great if you want to invoke actions within your host app from your WatchKit extension, and then handle the responses back in the WatchKit extension to update your Apple Watch UI accordingly.

What about the “push” scenario?  The previous scenario only covers requests that originate inside the WatchKit extension.  What happens if you have a process running inside of your host app, and have updates that you want to push to the WatchKit extension without an originating request?

There is no shared memory and it is not a shared process, so neither NSNotificationCenter nor direct method invocation will work. However, you *can* use Darwin notifications (which work across separate processes via CFNotificationCenter). These enable near-realtime signaling between processes. Note that the Darwin notify center delivers only the notification name, not a userInfo payload, so the notification acts as a trigger: share the actual data through an app group (access group) that both processes can read, and use CFNotificationCenter to tell the other process that something changed.
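Since the notification itself carries no payload, a shared app group is the simplest data channel. Here’s a minimal sketch using NSUserDefaults with a suite name; the group identifier is a placeholder and must match an App Group entitlement enabled on both the app and extension targets:

//host app: write shared state into the app group container
NSUserDefaults *shared = [[NSUserDefaults alloc] initWithSuiteName:@"group.com.example.locationdemo"];
[shared setBool:YES forKey:@"trackingActive"];
[shared synchronize];

//WatchKit extension: read it back when the Darwin notification fires
NSUserDefaults *sharedDefaults = [[NSUserDefaults alloc] initWithSuiteName:@"group.com.example.locationdemo"];
BOOL tracking = [sharedDefaults boolForKey:@"trackingActive"];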

Note: CFNotificationCenter is a C API, so the code below uses C syntax, not Objective-C syntax.

First you’ll need to subscribe to the notifications in the WatchKit extension. Pay attention to the static id instance “staticSelf”; you’ll need it later to invoke Objective-C methods from the C notification callback.

static id staticSelf;

- (void)awakeWithContext:(id)context {
  [super awakeWithContext:context];

  //add your initialization stuff here

  CFNotificationCenterAddObserver(CFNotificationCenterGetDarwinNotifyCenter(), (__bridge const void *)(self), didReceiveTrackingStatusNotification, CFSTR("TrackingStatusUpdate"), NULL, CFNotificationSuspensionBehaviorDrop);

  staticSelf = self;
}

From within your host app you can call CFNotificationCenterPostNotification to post the Darwin notification.

-(void) postTrackingStatusNotificationToWatchKit {

  //the Darwin notify center ignores the object and userInfo arguments,
  //so the notification acts purely as a signal; the actual tracking
  //status lives in shared state (e.g. the app group) that both
  //processes can read
  CFNotificationCenterPostNotification(CFNotificationCenterGetDarwinNotifyCenter(), CFSTR("TrackingStatusUpdate"), NULL, NULL, TRUE);
}

Then in the WatchKit extension, handle the notification and update your watch UI accordingly.

void didReceiveTrackingStatusNotification(CFNotificationCenterRef center, void *observer, CFStringRef name, const void *object, CFDictionaryRef userInfo) {
  //userInfo is always NULL for Darwin notifications; read shared state instead
  [staticSelf respondToPostFromHostApp];
}

We’ve now covered how you can request data or actions in the host application *from* the WatchKit extension, and also how you can push data from the host application to the WatchKit extension.

Now, what if there were a library that encapsulated some of this and made it even easier for the developer? When I wrote the app in my previous post, I used the methods described above. However, I recently stumbled across the open source MMWormhole library, which wraps the Darwin notification approach (above), together with shared app group storage, for ease of use. I’m pretty sure I’ll be using it in my next WatchKit app.
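For comparison, here’s a minimal sketch of the same status update using MMWormhole. This is based on the library’s public API; the app group identifier below is a placeholder, and both the app and extension targets need the matching App Group entitlement:

#import "MMWormhole.h"

//host app: persist the message in the shared app group and signal the extension
MMWormhole *wormhole = [[MMWormhole alloc] initWithApplicationGroupIdentifier:@"group.com.example.locationdemo"
                                                            optionalDirectory:@"wormhole"];
[wormhole passMessageObject:@{@"tracking":@"true"} identifier:@"trackingStatus"];

//WatchKit extension: the listener block replaces the C callback from earlier
[wormhole listenForMessageWithIdentifier:@"trackingStatus" listener:^(id messageObject) {
  [self updateUIFromHost:messageObject];
}];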


Powering Apple Watch Apps with IBM MobileFirst – Part 1

This is the first entry in a multipart series on powering native iPhone and Apple Watch apps using the IBM MobileFirst Platform. In this entry we will cover how to set up the MobileFirst Platform for use within Apple WatchKit apps and leverage the operational analytics and remote logging features.

So, let’s first take a look at the app we’re going to build:

The app is a simple location tracker. Think of it as a much simpler version of RunKeeper that allows you to track your location path over a period of time and show your location on a map. We’re also building a WatchKit app that enables you to quickly start or stop tracking your location without ever having to pull your iPhone out of your pocket. All of this is powered by IBM MobileFirst.

WatchKit apps essentially have 3 parts:

  • The native iOS App on the phone
  • The watch app user interface
  • The WatchKit extension, which is a binary that runs *on the phone* but controls all of the logic for the watch interface

This means that when you run Apple Watch apps, they’re really no different than a native iOS app, because all of the logic is executed on the phone.

So… Setting up the MobileFirst Platform for WatchKit is really no different than setting it up for a native iOS app, with a few exceptions.

Full instructions on how to set up the MobileFirst Platform Foundation server with a native iOS app are available in the platform documentation. Specifically, see the Configuring a Native iOS Application entry.

When you’re setting up your WatchKit app, you need to follow the exact same steps that you did for the native app target, just apply them to your WatchKit extension target.

First you need to add the required frameworks and dependencies (full list here; also be sure to include libWorklightStaticLibProjectNative.a, which is inside the iOS API):

Add MobileFirst Frameworks and Dependencies

Next, add the “-ObjC” linker flag:

Add Linker Flag

Then check the target membership of worklight.plist (which is inside of the MobileFirst API you generated from either the CLI or Eclipse Studio) so that it is included in both the native app and the WatchKit extension.

Worklight.plist Target Membership

This allows you to take advantage of MobileFirst APIs within your WatchKit extension, complete with operational analytics. You can save remote logs, you can access data adapters, and more. The server-side security mechanisms also work, so if you want to shut down your API for specific versions, you have that ability.

As I mentioned earlier, it’s just like a native iOS app, but with a few exceptions. The most important and notable exception is that the UI elements (modal dialogs, alerts, etc.) that you would normally see in the native phone interface do not appear in the WatchKit interface. You don’t get errors; you just don’t see the notification. So, you need to work around any scenarios that rely on this, and make sure you handle errors accordingly.

To invoke MobileFirst APIs, you call them as you would normally in either Objective-C or Swift. For example:

//InterfaceController for WatchKit app
- (void)awakeWithContext:(id)context {
  [super awakeWithContext:context];

  //setup MobileFirst remote logging
  logger = [OCLogger getInstanceWithPackage:@"WatchKit: InterfaceController"];
  [logger trace:@"InterfaceController awakeWithContext"];

  //connect to MobileFirst server
  [[WLClient sharedInstance] wlConnectWithDelegate: self];
}
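Since those alerts never appear on the watch, it helps to handle the connection result explicitly in the delegate callbacks and route everything through the remote logger. Here’s a minimal sketch, assuming the standard WLDelegate protocol and OCLogger’s error-level method; the exact response handling is illustrative:

//WLDelegate callbacks for wlConnectWithDelegate:
- (void)onSuccess:(WLResponse *)response {
  [logger trace:@"connected to MobileFirst server"];
  //safe to invoke protected APIs and adapters from here
}

- (void)onFailure:(WLFailResponse *)response {
  //no modal alert will appear on the watch, so log remotely
  //and update the watch interface to reflect the failure
  [logger error:@"connection failed: %@", response.errorMsg];
}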

Once your app is connected, you’ll be able to access the operational analytics, remote logs, push notification management, etc… from the MobileFirst Platform Foundation server.

For example, the operational analytics dashboard showing app activity:

MobileFirst Operational Analytics with the WatchKit App

… and the remote log search capability, including logs from the WatchKit extension:

MobileFirst Remote Logging with the WatchKit App

That’s all that you need to get started!

Stay tuned! Full source code will be released on my github account in a subsequent post. Also be sure to stay tuned for future entries that cover the MobileFirst platform with offline data, persisting data to the server, push notifications, geo notifications, bidirectional communication between the watch and host app, background processing, and more! I will update this post with links to each subsequent post as it is made available.

Wondering what IBM MobileFirst is? It’s a platform that enables you to deliver and maintain mobile applications throughout their entire lifecycle. This includes tools to easily manage data, offline storage, push notifications, user authentication, and more. Plus, you get operational analytics and remote logging to keep an eye on things once you’ve deployed to the real world, and it’s available as either a cloud or on-premises solution.

Also, did I mention, writing apps for the Apple Watch is *really* fun!


Complete Walkthrough and Source Code for “Building Offline Apps”

I recently put together some content on building “Apps that Work as Well Offline as they do Online” using IBM MobileFirst and Bluemix (cloud services). There was the original blog post, I used the content in a presentation at ApacheCon, and now I’ve opened everything up for anyone to use or learn from.

The content now lives on the IBM Bluemix github account, and includes code for the native iOS app, code for the web (Node.js) endpoint, a comprehensive script that walks through every step of the process of configuring the application, and also a video walkthrough of the entire process from backend creation to a complete solution.

Key concepts demonstrated in these materials:

  • User authentication using the Bluemix Advanced Mobile Access service
  • Remote app logging and instrumentation using the Bluemix Advanced Mobile Access service
  • Using a local data store for offline data access
  • Data replication (synchronization) to a remote data store (see the sketch after this list)
  • Building a web based endpoint on the Node.js infrastructure
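To make the local-store-plus-replication pattern concrete, here’s a rough sketch using the open source Cloudant Sync (CDTDatastore) library, which the Bluemix mobile data services build on. Treat the datastore name, remote URL, and exact signatures as assumptions; they vary by library version:

#import <CloudantSync.h>

NSError *error = nil;
NSString *dir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];

//local datastore for offline data access
CDTDatastoreManager *manager = [[CDTDatastoreManager alloc] initWithDirectory:dir error:&error];
CDTDatastore *datastore = [manager datastoreNamed:@"geopix" error:&error];

//writes always go to the local store, with or without connectivity
CDTDocumentRevision *rev = [CDTDocumentRevision revision];
rev.body = [@{@"lat": @30.2672, @"lon": @-97.7431} mutableCopy];
[datastore createDocumentFromRevision:rev error:&error];

//when online, push (replicate) local changes up to the remote store
CDTReplicatorFactory *factory = [[CDTReplicatorFactory alloc] initWithDatastoreManager:manager];
CDTPushReplication *push = [CDTPushReplication replicationWithSource:datastore
                                                              target:[NSURL URLWithString:@"https://username.cloudant.com/geopix"]];
CDTReplicator *replicator = [factory oneWay:push error:&error];
[replicator startWithError:&error];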

You can download or fork any of the code at:

The repo contains:

  • Complete step-by-step instructions to guide through the entire app configuration and deployment process
  • Client-side Objective-C code (you can do this in either hybrid or other native platforms too, but I just wrote it for iOS).  The “iOS-native” folder contains the source code for a complete sample application leveraging this workflow. The “GeoPix-complete” folder contains a completed project (still needs you to walk through backend configuration). The “GeoPix-starter” folder contains a starter application, with all MobileFirst/Bluemix code commented out. You can follow the steps inside of the “Step By Step Instructions.pdf” file to setup the backend infrastructure on Bluemix, and setup all code within the “GeoPix-starter” project.
  • Backend Node.js app for serving up the web experience.

 

Significant Advances in the Consumer Drone Market

Lately I’ve been so focused on mobile, apps, development, conferences, and more that I haven’t posted much besides IBM work news and projects.  Well, I’m taking a break for just a moment…

If you’ve followed my blog for a while, then you already know that I’m pretty much obsessed with “drones”. It is by far the most fun and exciting recreation that I’ve taken up in a very long time. Not only are they fun to fly, but they get you into some amazing views that were previously inaccessible, and they have applications far beyond just taking pictures. I’ve written how-tos for aerial photography and videography, taken tons of pictures for fun, and even shot some indoor footage for TV commercials.

I’m always following the news feeds, watching the advances in technology, watching prices drop, and am continually blown away by what the industry is offering.  The last week to ten days have been nothing short of amazing.


First let’s start with the latest from DJI, who announced the Phantom 3, a consumer drone with some very impressive specs and performance.

The Phantom 3 is an easy-to-fly copter that sports a 3-axis gimbal (camera stabilizer), up to 4K video footage, an integrated rectilinear (flat) lens camera, live HD first-person view, integrated iOS and Android apps, a vision positioning system (for stabilized indoor flights), and up to a 1.2 mile flight range. All for a cost of under $1300 USD. That’s one heck of a package, and it officially makes my old Phantom look like a dinosaur.


Three days later, 3D Robotics announced the Solo, a direct competitor to the Phantom. The Solo is also very impressive, and it has already won an award for Best Drone at NAB in Las Vegas.

The Solo also has a 3-axis gimbal for stabilized footage, and it is designed to work with GoPro cameras. In fact, it is the only copter that integrates with the camera controller and can control the GoPro remotely. The Solo also has dual processors (one in the controller, one in the copter), HD first-person view, and an upgradeable system that can accept new camera systems or payloads. It doesn’t have an optical stabilization system built in, but one can be added to the expansion bay. What really sets the Solo apart is the intelligent autopilot system that appears to make complex shots very easy. All of this with a price tag starting at $1000 USD.

I currently own DJI products, but this has gotten me seriously considering a purchase.


Both of these are small aircraft targeting consumers, but from the look of it they are definitely capable of high-end applications. Their small size makes them extremely portable and a potential addition in many industries and use cases. Larger copters are always available for larger-scale applications.


Let’s not forget drones for the enterprise… Last week Airware launched their drone operating system. Businesses can now license the operating system for commercial applications and data collection.


Meanwhile, people everywhere still treat drones as a political debate, ignoring their utility and positive value. The rules for commercial use continue to shake out, but oh man, it’s an exciting time.