Category Archives: Apps

Video: Enabling the Next Generation of Apps with IBM MobileFirst

Back in February I had the opportunity to present “Enabling the Next Generation of Apps with IBM MobileFirst” at the DevNexus developer conference in Atlanta.  It was a great event, packed with lots of useful content.  Luckily for everyone who wasn’t able to attend, the organizers recorded most of the sessions – which have just been made available on YouTube.

In my presentation I introduce both the MobileFirst Platform Foundation Server and the MobileFirst services on IBM Bluemix for enabling mobile applications. The video is available below.  In it I cover remote logging, operational analytics, exposing & delivering data, managing push notifications, and more.  Both the platform server and the cloud services are free to try, and they enable developers to deliver more from their mobile apps, more efficiently and more securely.

https://youtu.be/Xcl5phnAVfI

Here’s the session description: Once your app goes live in the app store you will have just entered into an iterative cycle of updates, improvements, and releases, each successively building on features (and defects) from previous versions. IBM MobileFirst Foundation gives you the tools you need to manage every aspect of this cycle, so you can deliver the best possible product to your end user. In this session, we’ll cover the process of integrating a native iOS application with IBM MobileFirst Foundation to leverage all of the capabilities the platform has to offer.

Learn more – IBM Bluemix:

Learn more – MobileFirst Platform Foundation Server:

To get started, just sign up for Bluemix or download MobileFirst Platform Foundation Server today (they’re free to try!).

 

Voice-Driven Native Mobile Apps with IBM Watson & IBM MobileFirst

Using your voice to drive interactions within your app is a powerful concept. It is the primary interaction model behind Apple’s Siri, Microsoft’s Cortana, and Google’s Voice Actions. By analyzing spoken words, voice commands allow you to complete potentially complex actions with minimal interaction with the device. Or they enable entirely different forms of interaction, for example interacting with a remote system over the telephone.

Voice-driven interactions are essentially a two-part process:

  • Transcribe audible signal to text transcript
  • Perform a system action by parsing text transcript

If you think that voice-driven apps are too complicated, or out of your reach, then I have great news for you: They are not! Last week, IBM elevated several IBM Watson voice services from Beta to General Availability – that means you can use them reliably in your own systems too!

Let’s examine the two parts of the system, and see what solutions IBM has available right now for you to take advantage of…

Transcribe audible signal to text transcript

Part one of this equation is converting the audible signal into text that can be parsed and acted upon. The IBM Speech to Text service fits this bill perfectly, and can be called from any app platform that supports REST services… which means just about anything: the browser, the desktop, or a native mobile app. The Watson STT service is very easy to use: you simply post a request to the REST API containing an audio file, and the service returns a text transcript based upon what it is able to analyze from the audio. You don’t have to worry about handling any of the transcription yourself – no concern for accents, etc. Let Watson do the heavy lifting for you.
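To make that concrete, here is a minimal, hedged sketch of what a direct call to the Speech to Text REST API could look like from a native iOS app using NSURLSession. The endpoint URL, the username:password credentials, and the recordingFileURL variable are all placeholders for your own service instance values (in the sample app described later in this post, the request goes through a Node.js middleware tier instead of calling the service directly).

// Hedged sketch: POST a recorded WAV file directly to the Watson Speech to Text
// REST API. The URL, credentials, and recordingFileURL are placeholder values.
NSURL *sttURL = [NSURL URLWithString:@"https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:sttURL];
request.HTTPMethod = @"POST";
[request setValue:@"audio/wav" forHTTPHeaderField:@"Content-Type"];

// Basic auth header built from the service credentials in your Bluemix instance
NSData *credentialData = [@"username:password" dataUsingEncoding:NSUTF8StringEncoding];
NSString *authValue = [NSString stringWithFormat:@"Basic %@",
                       [credentialData base64EncodedStringWithOptions:0]];
[request setValue:authValue forHTTPHeaderField:@"Authorization"];

// recordingFileURL points at the WAV file captured from the microphone
NSData *audioData = [NSData dataWithContentsOfURL:recordingFileURL];

NSURLSessionUploadTask *task = [[NSURLSession sharedSession]
    uploadTaskWithRequest:request
                 fromData:audioData
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            if (error || data == nil) { return; }
            // The service responds with JSON containing the transcription results
            NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
            NSLog(@"Speech to Text response: %@", json);
        }];
[task resume];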

Perform a system action by parsing text transcript

This one is perhaps not quite as simple because it is entirely subjective, and depends upon what you/your app is trying to do. You can parse the text transcript on your own, searching for actionable keywords, or you can leverage something like the IBM Watson Q&A service, which enables natural language search queries to Watson data corpora.

Riding on the heels of the Watson language services promotion, I put together a sample application that enables a voice-driven app experience on the iPhone, powered by both the Speech To Text and Watson Question & Answer services, and have made the mobile app and Node.js middleware source code available on github.

Watson Speech QA for iOS

This native iOS app, which I’m calling “Watson Speech QA for iOS”, allows you to ask Watson questions in natural, spoken language and receive textual responses based on the Watson QA Healthcare data set.

Check out the video below to see it in action:

https://youtu.be/0kedhwC3ikY

Bluemix Services Used

This app uses three services available through IBM Bluemix:

  1. Speech to Text – Convert spoken audio into text
  2. Question & Answer – Natural language search
  3. Advanced Mobile Access – Capture analytics and logs from mobile apps running on devices

App Architecture

IBM Watson Speech QA for iOS App Architecture

The app communicates with the Speech to Text and Question & Answer services through the Node.js middleware tier, and connects directly to the Advanced Mobile Access service to provide operational analytics (usage, devices, network utilization) and remote log collection from the client app on the mobile devices.

For the Speech To Text service, the app records audio from the local device and sends a WAV file to the Node.js tier in an HTTP POST request. The Node.js tier delegates to the Speech To Text service to provide transcription capabilities, then formats the response JSON object and returns the result to the mobile app.

For the QA service, the app makes an HTTP GET request (containing the query string) to the Node.js server, which delegates to the Watson QA natural language processing service to return search results. The Node.js tier then formats the response JSON object and returns the result to the mobile app.

The general flow between these systems is shown in the graphic below:

IBM Watson Speech QA for iOS – Logic Flow

 

Code Explained

Mobile app and Node.js middleware source code and setup instructions are available at: https://github.com/triceam/IBM-Watson-Speech-QA-iOS

The code for this example is really in two main areas: the client-side integration in the mobile app (Objective-C, but it could also be done in Swift), and the application server/middleware implemented in Node.js.

Node.js Middleware

The server-side JavaScript code uses the Watson Node.js Wrapper, which enables you to instantiate Watson services in just a few short lines of code:

var watson = require('watson-developer-cloud');
var question_and_answer_healthcare = watson.question_and_answer(QA_CREDENTIALS);
var speechToText = watson.speech_to_text(STT_CREDENTIALS);

The credentials come from your Bluemix environment configuration; you then just create instances of whichever services you want to consume.

I implemented two methods in the Node.js application tier. The first accepts the audio input from the mobile client as an attachment to an HTTP POST request and returns a transcript from the Speech To Text service:

// Handle the form POST containing an audio file and return transcript (from mobile)
app.post('/transcribe', function(req, res){

  //grab the audio WAV file attachment and prepare to send to Watson
  var file = req.files.audio;
  var readStream = fs.createReadStream(file.path);
  console.log("opened stream for " + file.path);

  var params = {
    audio:readStream,
    content_type:'audio/l16; rate=16000; channels=1',
    continuous:"true"
  };

  //send the audio WAV file to the watson.recognize service
  speechToText.recognize(params, function(err, response) {
    readStream.close();

    if (err) {
      return res.status(err.code || 500).json(err);
    } else {
      //parse the results and return them to the client
      var result = {};
      if (response.results.length > 0) {
        var finalResults = response.results.filter( isFinalResult );
        if ( finalResults.length > 0 ) {
          result = finalResults[0].alternatives[0];
        }
      }
      return res.send( result );
    }
  });
});

Once you have the text transcript on the client, you could do whatever you want with it: parse it to invoke local actions, or delegate to a natural language query service.

The second method does exactly that: it accepts a URL query parameter from an HTTP GET request and uses that parameter in a Watson QA natural language search:

//handle QA query and return json result (for mobile)
app.get('/ask', function(req, res){

  //get a copy of the search query text from the req.query object
  var query = req.query.query;

  if ( query != undefined ) {
    //perform a search using the QA "ask" method
    question_and_answer_healthcare.ask({ text: query}, function (err, response) {
      if (err){
        return res.status(err.code || 500).json(err);
      } else {
        //format the results and return them to the mobile client
        if (response.length > 0) {
          var answers = [];

          for (var x=0; x<response[0].question.evidencelist.length; x++) {
            var item = {};
            item.text = response[0].question.evidencelist[x].text;
            item.value = response[0].question.evidencelist[x].value;
            answers.push(item);
          }

          var result = {
            answers:answers
          };
          return res.send( result );
        }
        return res.send({});
      }
    });
  }
  else {
    return res.status(500).send('Bad Query');
  }
});

Note: I am using the free/open Watson Healthcare data set. However the Watson QA service can handle other data sets – these require an engagement with IBM to train the Watson service to understand the desired data sets.

Native iOS – Objective C

On the mobile side we’re working with a native iOS application. My code is written in Objective C, however you could also implement this using Swift. I won’t go into complete line-by-line code here for the sake of brevity, but you can access the client side code in the ViewController.m file. In particular, this is within the postToServer and requestQA methods.

You can see the flow of the application within the image below:

App Flow: User speaks, transcript displayed, results displayed

 

The native mobile app first captures audio input from the device’s microphone. This is then sent to the Node.js server’s /transcribe method as an attachment to an HTTP POST request (the postToServer method, around line 191). On the server side this delegates to the Speech To Text service as described above. Once the result is received on the client, the transcribed text is displayed in the UI and then a request is made to the QA service.
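For reference, here is a hedged sketch of what that upload might look like. It is not the exact code from ViewController.m (which routes its requests through the IMFResourceRequest class described below); the method and variable names are illustrative, and a plain NSURLSession is used just to show the shape of the request. The WAV file is attached as a multipart form field named “audio”, matching the req.files.audio reference in the Node.js /transcribe handler shown earlier.

// Hypothetical sketch of a postToServer-style upload (names are illustrative).
- (void)postAudioFile:(NSURL *)audioFileURL toServer:(NSString *)serverURL {
  NSString *boundary = [[NSUUID UUID] UUIDString];
  NSURL *url = [NSURL URLWithString:[serverURL stringByAppendingString:@"/transcribe"]];
  NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
  request.HTTPMethod = @"POST";
  [request setValue:[NSString stringWithFormat:@"multipart/form-data; boundary=%@", boundary]
      forHTTPHeaderField:@"Content-Type"];

  // Build a multipart body with a single "audio" part containing the WAV file
  NSMutableData *body = [NSMutableData data];
  [body appendData:[[NSString stringWithFormat:@"--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];
  [body appendData:[@"Content-Disposition: form-data; name=\"audio\"; filename=\"recording.wav\"\r\n" dataUsingEncoding:NSUTF8StringEncoding]];
  [body appendData:[@"Content-Type: audio/wav\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]];
  [body appendData:[NSData dataWithContentsOfURL:audioFileURL]];
  [body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];

  NSURLSessionUploadTask *task = [[NSURLSession sharedSession]
      uploadTaskWithRequest:request
                   fromData:body
          completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
              if (error || data == nil) { return; }
              // The middleware returns the selected transcript alternative as JSON
              NSDictionary *result = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
              NSLog(@"Transcript: %@", result[@"transcript"]);
          }];
  [task resume];
}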

In the requestQA method, the mobile app makes an HTTP GET request to the Node.js app’s /ask method (as shown around line 257). The Node.js app delegates to the Watson QA service as shown above. Once the results are returned to the client, they are displayed within a standard UITableView in the native app.
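And a similarly hedged sketch of the requestQA side of that flow (again, illustrative names rather than the exact code): issue a GET to /ask with the transcribed text as the query parameter, then use the returned answers array to populate the table.

// Hypothetical sketch of a requestQA-style call (names are illustrative).
- (void)askWatson:(NSString *)question server:(NSString *)serverURL {
  NSString *encoded = [question stringByAddingPercentEncodingWithAllowedCharacters:
                          [NSCharacterSet URLQueryAllowedCharacterSet]];
  NSURL *url = [NSURL URLWithString:[NSString stringWithFormat:@"%@/ask?query=%@", serverURL, encoded]];

  NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithURL:url
      completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
          if (error || data == nil) { return; }
          NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
          // "answers" matches the { answers: [...] } payload built by the Node.js /ask handler
          NSArray *answers = json[@"answers"];
          dispatch_async(dispatch_get_main_queue(), ^{
              // reload the UITableView with the returned answers here
              NSLog(@"Received %lu answers", (unsigned long)[answers count]);
          });
      }];
  [task resume];
}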

MobileFirst – Advanced Mobile Access

A few other things you may notice if you decide to peruse the native Objective-C code:

  1. Within AppDelegate.m you will see calls to the IMFClient, IMFAnalytics, and OCLogger classes. These enable operational analytics and log collection within the Advanced Mobile Access service.
  2. All network requests inside of ViewController.m use the IMFResourceRequest class, which enables the collection of analytics for every request made through it.

Together these enable device log collection, automatic crash reporting, and operational analytics, which are among the core strengths of the Advanced Mobile Access service, one of the mobile offerings on IBM Bluemix.
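If you want to wire this up in your own app, the pattern looks roughly like the sketch below. The route, GUID, and URL values are placeholders, and the exact method names may differ slightly between SDK versions, so treat this as a sketch rather than copy-paste code: initialize the IMFClient once (typically in the AppDelegate), then issue requests through IMFResourceRequest so each call is captured by the analytics service.

// Hedged sketch: initialize the Advanced Mobile Access client with the route and
// GUID from your Bluemix app (placeholder values below), then make an instrumented request.
[[IMFClient sharedInstance] initializeWithBackendRoute:@"https://myapp.mybluemix.net"
                                           backendGUID:@"your-app-guid"];

IMFResourceRequest *request =
    [IMFResourceRequest requestWithPath:@"https://myapp.mybluemix.net/ask?query=example"
                                 method:@"GET"];
[request sendWithCompletionHandler:^(IMFResponse *response, NSError *error) {
    if (error) {
        NSLog(@"Request failed: %@", error);
        return;
    }
    // Every request sent this way is also reported to the operational analytics console
    NSLog(@"Response: %@", [response responseText]);
}];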

Source Code

Mobile app and Node.js middleware source code and setup instructions for this app are available at: https://github.com/triceam/IBM-Watson-Speech-QA-iOS

Just create an account on IBM Bluemix, and you have everything that you need to get started creating your own voice-driven apps.

Serving Data to the Apple Watch with IBM MobileFirst

This is the third entry in my series on powering Apple Watch apps using IBM MobileFirst.  In the first post I covered setting up the project, remote logging, and analytics. In the second post I covered bidirectional communication between the WatchKit extension and host app (not really MobileFirst, but still applicable).  In this post we’ll examine how to consume data from the MobileFirst Foundation Server inside of an Apple Watch app.

If you’re already familiar with consuming data using MobileFirst Adapters, then guess what… it is *exactly* the same as consuming an Adapter in a native iOS project. Since the logic for a WatchKit app is executed in the WatchKit extension, which is actually an executable that runs on the phone, there is no difference between the two.

If you aren’t familiar with Adapters, they are server-side code used to retrieve and transfer information between back-end systems and client applications or cloud services.  You can write them in either Java or JavaScript, they can be consumed by any MobileFirst app, and they offer security, data transformation, and reporting metrics out of the box.

In the video below I walk through the process of recreating the Apple Watch Stocks app using data delivered from a MobileFirst Platform Foundation server instance. The data is simulated, so don’t use it for any investments. :)

The basic process was this: build out the Apple Watch app’s user interface in Xcode/Interface Builder, build the adapters to expose the data, then consume the data within the WatchKit extension to deliver it to the watch app interface.

Full source code for this project is available at: https://github.com/triceam/MobileFirst-WatchKit/tree/master/Stocks

The User Interface

So, let’s first look at the app interface.  I have two views that were built in Interface Builder: one is a table that displays rows of data, and the other is a details screen with a number of labels used to display data.

applewatch-ui

In the main interface I have a “loading…” label (which is hidden once the data is loaded) and a table that is used to display the data.  For each row in the table there are three labels to display specific data fields. These are connected to IBOutlet references in the row controller class. All of this is straightforward WatchKit development practice.  Be sure to check out the WKInterfaceTable class reference for more detail on working with WatchKit tables.

Xcode-Interface Builder for Table View
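For reference, the three per-row labels are backed by a simple row controller class. Here is a minimal sketch of what that class can look like, using the same outlet names (stockLabel, priceLabel, changeLabel, containerGroup) and the “stockTableRow” row type that appear in the data-loading code later in this post:

#import <WatchKit/WatchKit.h>

// Row controller for the "stockTableRow" row type, with outlets for the
// per-row interface elements defined in Interface Builder.
@interface StockTableRow : NSObject

@property (weak, nonatomic) IBOutlet WKInterfaceGroup *containerGroup;
@property (weak, nonatomic) IBOutlet WKInterfaceLabel *stockLabel;
@property (weak, nonatomic) IBOutlet WKInterfaceLabel *priceLabel;
@property (weak, nonatomic) IBOutlet WKInterfaceLabel *changeLabel;

@end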

For the details screen, I used a very similar pattern.  I added labels for displaying data, and linked them to IBOutlet references in my interface controller so I can change their values once the data is loaded.

Xcode-InterfaceBuilder Detail View

Serving Data

Loading data into a WatchKit extension is identical to making a request to a MobileFirst server adapter from a native iOS app.  I did use my helper class so I can use code blocks instead of the delegate pattern, but the implementation is exactly the same.

So, here’s how we can create an adapter using the MobileFirst Command Line Interface.  Use the “mfp add adapter” command and follow the prompts:

$ mfp add adapter
[?] What do you want to name your MobileFirst Adapter? StocksAdapter
[?] What type of adapter would you like?
 Cast Iron
 HTTP
 Java
 JMS
 SAP JCo
 SAP Netweaver Gateway
❯ SQL
 [?] Create procedures for offline JSONStore? No
 A new sql Adapter was added at /Users/andrewtrice/Documents/dev/MobileFirst-Stocks/server/MFStocks/adapters/StocksAdapter

Adapters can be used to easily connect back-end systems to mobile clients.  You can quickly expose data from a relational database, or consume data from HTTP endpoints and serialize it into a more compact, mobile-friendly format.  You should definitely read more about MobileFirst adapters in the platform documentation for more detail.

What’s also great about the MobileFirst platform is that you get operational analytics for all adapters out of the box, with no additional configuration.  You can see the number of requests, data payload sizes, response times, the devices/platforms used to consume them, and much more.  Plus, you can also remotely access client log messages from the mobile devices.  Take a look at the screenshots below for just a sample (these are from my dev instance on my laptop).

All of the data I am displaying is simulated.  I’m not actively pulling from a relational database or live service. However, you could use a very similar method to connect to a live data repository.

I exposed two pretty basic procedures on the MobileFirst server: getList – which returns a stripped down list of data, and getDetail – which returns complete data for a stock symbol:

function getList() {

  simulateData();

  var items = [];
  var trimmedProperties = ["symbol","price","change"];

  for (var i=0; i<data.length; i++) {
    var item = {};
    for (var j in trimmedProperties) {
      var prop = trimmedProperties[j];
      item[prop] = data[i][prop];
    }
    items.push(item);
  }

  return {
    "stocks":items
  };
}

function getDetail(symbol) {

  for (var i=0; i<data.length; i++) {
    if (data[i].symbol == symbol) {
      return data[i];
    }
  }
  return null;
}

Once these are deployed to the server using the CLI “mfp bd” command, you can invoke the adapter procedures from a client application, regardless of whether it is a native iOS, native Android, or hybrid application.

Consuming the Data

OK, now we’re back to the native iOS project.  In either Objective-C or Swift you can invoke an adapter directly using the WLResourceRequest or invokeProcedure mechanisms.  In my sample I used a helper class that wraps invokeProcedure to support code blocks, so I can define the response/failure handlers inline.  I invoke the adapter like so:

-(void) getList:(void (^)(NSArray*))callback{

  WLProcedureInvocationData *invocationData =
    [[WLProcedureInvocationData alloc]
      initWithAdapterName:@"StockAdapter"
          procedureName:@"getList"];

  [WLClientHelper invokeProcedure:invocationData successCallback:^(WLResponse *successResponse) {

    NSArray *responseData = [[successResponse responseJSON] objectForKey:@"stocks"];
    //do something with the response data

  } errorCallback:^(WLFailResponse *errorResponse) {

    //you should do better error handling than this
  }];
}
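For comparison, here is a hedged sketch of the standard delegate-based invocation that the helper class wraps. The calling class adopts the WLDelegate protocol and receives the result through onSuccess:/onFailure: callbacks (verify the exact signatures against the MobileFirst SDK version you are using):

// Hedged sketch: the same call without the helper, using the WLDelegate pattern.
// Somewhere in a class that adopts the WLDelegate protocol:
WLProcedureInvocationData *invocationData =
    [[WLProcedureInvocationData alloc] initWithAdapterName:@"StockAdapter"
                                              procedureName:@"getList"];
[[WLClient sharedInstance] invokeProcedure:invocationData withDelegate:self];

// WLDelegate callbacks
- (void)onSuccess:(WLResponse *)response {
    NSArray *stocks = [[response responseJSON] objectForKey:@"stocks"];
    // update the WatchKit interface with the returned data
}

- (void)onFailure:(WLFailResponse *)response {
    // handle the failure (log it, surface a message, retry, etc.)
}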

Once the data is available within the WatchKit extension, we can use it to update the user interface.

For the data table implementation, you simply need to set the number of rows, and then loop over the data to set values for each row based on the WKInterfaceTable specification.

[self.dataTable setNumberOfRows:[self.stocks count] withRowType:@"stockTableRow"];

for (NSInteger i = 0; i < self.dataTable.numberOfRows; i++) {

  StockTableRow* row = [self.dataTable rowControllerAtIndex:i];
  NSDictionary* item = [self.stocks objectAtIndex:i];

  [row.stockLabel setText:[item valueForKey:@"symbol"]];

  NSNumber *price = [item valueForKey:@"price"];
  NSNumber *change = [item valueForKey:@"change"];
  [row.priceLabel setText:[NSString stringWithFormat:@"%-.2f", [price floatValue]]];
  [row.changeLabel setText:[NSString stringWithFormat:@"%-.2f", [change floatValue]]];

  if ([change floatValue] > 0.0) {
    [row.changeLabel setTextColor: [UIColor greenColor]];
    [row.containerGroup setBackgroundColor:[UIColor colorWithRed:0 green:0.2 blue:0 alpha:1]];
  } else if ([change floatValue] < 0.0) {
    [row.changeLabel setTextColor: [UIColor redColor]];
    [row.containerGroup setBackgroundColor:[UIColor colorWithRed:0.2 green:0 blue:0 alpha:1]];
  }
  else {
    [row.changeLabel setTextColor: [UIColor whiteColor]];
    [row.containerGroup setBackgroundColor:[UIColor colorWithRed:0.15 green:0.15 blue:0.15 alpha:1]];
  }
}

The detail screen is even more straightforward.  When the screen is initialized, we request the detail data from the server.  Once we receive that data, we simply assign label values based upon the data that was returned.

[self.nameLabel setText:[stockData objectForKey:@"name"]];

NSNumber *change = [stockData objectForKey:@"change"];
NSNumber *price = [stockData objectForKey:@"price"];
NSNumber *high = [stockData objectForKey:@"high"];
NSNumber *low = [stockData objectForKey:@"low"];
NSNumber *high52 = [stockData objectForKey:@"high52"];
NSNumber *low52 = [stockData objectForKey:@"low52"];
NSNumber *open = [stockData objectForKey:@"open"];
NSNumber *eps = [stockData objectForKey:@"eps"];

float percentChange = [change floatValue]/[price floatValue];

[self.priceLabel setText:[NSString stringWithFormat:@"%-.2f", [price floatValue]]];
[self.changeLabel setText:[NSString stringWithFormat:@"%.02f (%.02f%%)", [change floatValue], percentChange]];

if ([change floatValue] > 0.0) {
	[self.changeLabel setTextColor: [UIColor greenColor]];
} else if ([change floatValue] < 0.0) {
	[self.changeLabel setTextColor: [UIColor redColor]];
}
else {
	[self.changeLabel setTextColor: [UIColor whiteColor]];
}

//update change with percentage

[self.highLabel setText:[NSString stringWithFormat:@"%-.2f", [high floatValue]]];
[self.lowLabel setText:[NSString stringWithFormat:@"%-.2f", [low floatValue]]];
[self.high52Label setText:[NSString stringWithFormat:@"%-.2f", [high52 floatValue]]];
[self.low52Label setText:[NSString stringWithFormat:@"%-.2f", [low52 floatValue]]];

[self.openLabel setText:[NSString stringWithFormat:@"%-.2f", [open floatValue]]];
[self.epsLabel setText:[NSString stringWithFormat:@"%-.2f", [eps floatValue]]];
[self.volLabel setText:[stockData objectForKey:@"shares"]];
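For completeness, the detail request itself can follow the same helper pattern as getList; the only difference is passing the stock symbol as a procedure parameter. A hedged sketch (the setParameters: call carries the arguments for the getDetail(symbol) procedure; method names are per the MobileFirst Objective-C SDK, so verify against your version):

-(void) getDetail:(NSString *)symbol callback:(void (^)(NSDictionary *))callback{

  WLProcedureInvocationData *invocationData =
    [[WLProcedureInvocationData alloc]
      initWithAdapterName:@"StockAdapter"
          procedureName:@"getDetail"];

  //pass the stock symbol as the single argument to the getDetail(symbol) procedure
  [invocationData setParameters:@[symbol]];

  [WLClientHelper invokeProcedure:invocationData successCallback:^(WLResponse *successResponse) {

    NSDictionary *stockData = [successResponse responseJSON];
    callback(stockData);

  } errorCallback:^(WLFailResponse *errorResponse) {

    //you should do better error handling than this
  }];
}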

What next?

Ready to get started?  Just download the free MobileFirst Platform Server Developer Edition and start building.

Complete source code for this project is available on my github account at: https://github.com/triceam/MobileFirst-WatchKit/tree/master/Stocks

Series on Apple WatchKit Apps powered by IBM MobileFirst:

 

Enjoy!

 

 

Say What? Live video chat between iOS & WebRTC with Twilio & IBM Watson Cognitive Computing in Real Time

What I’m about to show you might seem like science fiction from the future, but I can assure you it is not. Actually, every piece of this is available for you to use as a service.  Today.

Yesterday Twilio, an IBM partner whose services are available via IBM Bluemix, announced several new SDKs, including live video chat as a service.  This makes live video very easy to integrate into your native mobile or web based applications, and gives you the power to do some very cool things. For example, what if you could add video chat capabilities between your mobile and web clients? Now, what if you could take things a step further, and add IBM Watson cognitive computing capabilities to add real-time transcription and analysis?

Check out this video from yesterday’s Twilio Signal conference keynote, where fellow IBMers Damion Heredia and Jeff Sloyer demonstrate exactly this scenario: the integration of the new Twilio video SDK between a native iOS app and a WebRTC client, with IBM Watson cognitive computing services providing real-time transcription and sentiment analysis.

If it doesn’t automatically jump to the IBM Bluemix Demo, skip ahead to 2 hours, 15 min, and 20 seconds.

Jeff and Damion did an awesome job showing off both the new video service and the power of IBM Watson. I can also say first-hand that the new Twilio video services are pretty easy to integrate into your own projects (I helped them integrate these services into the native iOS client (the physician’s app) shown in the demo)!  You just pull in the SDK, add your app tokens, and instantiate a video chat.  Jeff pulls the audio stream from the WebRTC client and pushes it up to Watson in real time for the transcription and sentiment analysis services.

Bidirectional Communication Between An Apple Watch Extension and the Host App

In this entry we’re going to focus on building Apple Watch apps that can communicate back and forth with the host application running on the iPhone.  This is extremely important since the Apple Watch provides a second-screen/peripheral experience complementary to the main app running on the iOS device – be it a remote control, or a quick view/glance into what’s happening within the bigger picture.

In my last post I showed how to set up remote logging and instrumentation/analytics in an Apple Watch app using IBM MobileFirst Platform Foundation Server.  I used the methods described below for communicating between the WatchKit and host apps in the sample app from that previous post.

When we’re talking about bidirectional communication, we’re talking about sending data two ways:

  1. Sending data from the WatchKit extension to the host app
  2. Sending data from the host app to the WatchKit extension

At first thought, you might think “oh that’s easy, just use NSNotificationCenter to communicate between the separate classes of the application”, but things aren’t exactly that simple.

An Apple Watch app is really made of 3 parts: 1) the main iOS application binary, 2) the user interface on the Apple Watch, and 3) the WatchKit extension binary (on the iOS device).

Apple Watch App – Architectural Components

Yep, you read that correctly, the WatchKit extension (which controls all of the logic inside the Apple Watch UI and resides on the iOS device) is a separate binary from the “main” iOS application binary.  These are separate processes, so objects in memory in the main app are not the same objects in memory in the extension, and as a result, these processes do not communicate directly. NSNotificationCenter isn’t going to work.

However there are definitely ways you can make this type of a scenario work.

First, WatchKit provides a way to invoke actions on the host application from the WatchKit extension.  The openParentApplication method on WKInterfaceController (called from the extension) and the handleWatchKitExtensionRequest method on the app delegate (implemented in the containing app) together provide the ability to invoke actions and pass data to the containing app, along with a mechanism to invoke a “reply” code block back in the WatchKit extension after the code in the host application has completed.

For example, in the WatchKit extension, this will invoke an action in the host application and handle the reply:

[WKInterfaceController openParentApplication:@{@"action":@"toggleStatus"} reply:^(NSDictionary *replyInfo, NSError *error) {
    [logger trace:@"toggleStatus reply"];
    [self updateUIFromHost:replyInfo];
}];

Inside the host application we have access to the userInfo NSDictionary that was passed, and we can respond to it accordingly. For example, in the code below I read a string value from the userInfo instance and take the appropriate action based upon the value of that string.

- (void)application:(UIApplication *)application
handleWatchKitExtensionRequest:(NSDictionary *)userInfo
  reply:(void (^)(NSDictionary *replyInfo))reply {

  //handle this as a background task
  __block UIBackgroundTaskIdentifier watchKitHandler;
  watchKitHandler = [[UIApplication sharedApplication] beginBackgroundTaskWithName:@"backgroundTask"
            expirationHandler:^{
              watchKitHandler = UIBackgroundTaskInvalid;
            }];

  NSString *action = (NSString*) [userInfo valueForKey:@"action"];
  [logger trace:@"incoming request from WatchKit: ", action];

  LocationManager * locationManager = [LocationManager sharedInstance];

  NSMutableDictionary *result = [[NSMutableDictionary alloc] init];

  if ([action isEqualToString:@"toggleStatus"]) {
    //toggle whether or not we're actually tracking the location
    [locationManager toggleTracking];
  } else if ([action isEqualToString:@"stopTracking"]) {
    [locationManager stopTracking];
  } else if ([action isEqualToString:@"currentStatus"]) {
    //do nothing for now
  }

  NSString *trackingString = [NSString stringWithFormat:@"%s", locationManager.trackingActive ? "true" : "false"];
  [result setValue:trackingString forKey:@"tracking"];
  reply(result);

  dispatch_after( dispatch_time( DISPATCH_TIME_NOW, (int64_t)NSEC_PER_SEC * 1 ), dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0 ), ^{
    [[UIApplication sharedApplication] endBackgroundTask:watchKitHandler];
  } );
}

This covers the “pull” scenario, and is great if you want to invoke actions within your host app from your WatchKit extension, and then handle the responses back in the WatchKit extension to update your Apple Watch UI accordingly.

What about the “push” scenario?  The previous scenario only covers requests that originate inside the WatchKit extension.  What happens if you have a process running inside of your host app, and have updates that you want to push to the WatchKit extension without an originating request?

There is no shared memory, and it is not a shared process, so neither NSNotificationCenter nor direct method invocation will work. However, you *can* use Darwin notifications (which work across separate processes by using CFNotificationCenter).  This enables near-realtime signaling across processes. Note that the Darwin notify center does not actually deliver the CFDictionary userInfo payload to the other process, so if you need to share data (rather than just signal an event) you can write it to a shared app group container and use the CFNotificationCenter notification to tell the other process that new data is available.

Note: CFNotificationCenter is C syntax, not Objective-C syntax.

First you’ll need to subscribe to the notifications in the WatchKit extension. Pay attention to the static id instance “staticSelf”… you’ll need it later when invoking Objective-C methods from the C notification callback.

static id staticSelf;

- (void)awakeWithContext:(id)context {
  [super awakeWithContext:context];

  //add your initialization stuff here

  CFNotificationCenterAddObserver(CFNotificationCenterGetDarwinNotifyCenter(), (__bridge const void *)(self), didReceiveTrackingStatusNotificaiton, CFSTR("TrackingStatusUpdate"), NULL, CFNotificationSuspensionBehaviorDrop);

  staticSelf = self;
}

From within your host app you can invoke CFNotificationCenterPostNotification to invoke the Darwin Notification.

-(void) postTrackingStatusNotificationToWatchKit {

  NSString *trackingString = [NSString stringWithFormat:@"%s", self.trackingActive ? "true" : "false"];

  NSDictionary *payload = @{@"tracking":trackingString};
  CFDictionaryRef cfPayload = (__bridge CFDictionaryRef)payload;

  CFNotificationCenterPostNotification(CFNotificationCenterGetDarwinNotifyCenter(), CFSTR("TrackingStatusUpdate"), (__bridge const void *)(self), cfPayload, TRUE);
}

Then, in the WatchKit extension, handle the notification and update your UI accordingly.

void didReceiveTrackingStatusNotificaiton() {
  [staticSelf respondToPostFromHostApp];
}

We’ve now covered scenarios where you can request data or actions in the host application *from* the WatchKit extension, and also how you can push data from the host application to the WatchKit extension.

Now, what if there were a library that encapsulated some of this and made it even easier for the developer?  When I wrote the app in my previous post, I used the methods described above. However, I recently stumbled across the open-source MMWormhole library, which wraps the Darwin notifications approach (above) for ease of use.  I’m pretty sure I’ll be using it in my next WatchKit app.
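For example, here is a hedged sketch of what the same tracking-status push could look like with MMWormhole (the app group identifier is a placeholder, and both the host app and the WatchKit extension targets need that shared app group enabled):

// In both targets: create the wormhole over a shared app group container (placeholder id).
MMWormhole *wormhole = [[MMWormhole alloc] initWithApplicationGroupIdentifier:@"group.com.example.watchkit"
                                                             optionalDirectory:@"wormhole"];

// Host app: push the updated tracking status to the WatchKit extension.
[wormhole passMessageObject:@{@"tracking" : @"true"} identifier:@"trackingStatusUpdate"];

// WatchKit extension: listen for updates and refresh the UI.
[wormhole listenForMessageWithIdentifier:@"trackingStatusUpdate" listener:^(id messageObject) {
    NSDictionary *payload = (NSDictionary *)messageObject;
    NSLog(@"Tracking status update: %@", payload[@"tracking"]);
}];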

Helpful Links for inter-process communication between WatchKit and host apps:
Series on Apple WatchKit Apps powered by IBM MobileFirst: