Category Archives: Mobile

IBM MobileFirst Platform Foundation 8.0 Beta Now Available!

Back at the end of February, IBM announced an upcoming beta version of MobileFirst Platform Foundation version 8.0. Well, guess what? … As of last week, it is now available!


What is IBM MobileFirst Platform Foundation?

For those stumbling upon this and wondering “What is IBM MobileFirst Platform Foundation?”:

IBM MobileFirst Platform Foundation is an open, comprehensive platform to develop, test, secure, and manage mobile apps.

MobileFirst Platform Foundation provides a middleware solution and SDK that make it easier to expose data to mobile apps. It improves security through encryption, authentication, and handshaking to guarantee app authenticity; provides facilities to easily manage multiple versions of an app and to notify and engage users; and, on top of everything else, delivers operational analytics so that you can monitor the health of your overall system at any point in time.

As a mobile developer catering to the enterprise, it makes your life significantly easier, and it supports any mobile development paradigm you might want to target: native platforms, cross-platform Xamarin apps written in C#, and hybrid Cordova apps (HTML/JS).

What’s new in the IBM MobileFirst Platform Foundation 8.0 Beta?

The recently opened beta has some great new features, and it's now available as a service on Bluemix (IBM's cloud platform). The beta program delivers the next generation of an open, integrated, and comprehensive mobile app development platform, redesigned for cloud agility, speed, and productivity, so that enterprises can accelerate delivery of their mobile strategy.

Those new features include (but are not limited to):

  • Use of NPM on Cordova apps
  • CocoaPods support for iOS apps
  • Gradle and NuGet support for native apps
  • Maven support for backend logic
  • Faster plug-in speed – faster MFPF performance for new and existing apps
  • New and improved sample code, documentation, and guides
  • Automation support and self-service features for faster ramp-up and tear-down of environments for testing and iterations
  • Ability to make changes to app runtime settings without redeployment
  • Middleware redesigned for DevOps efficiency
  • Custom notifications in MobileFirst Operations Analytics
  • New crash analysis tools
  • and more…

Getting involved in the Beta

This is a great opportunity to explore new features and drive business value. We also want your feedback to make sure the MobileFirst Platform has what you need.

You can start using the Mobile Foundation service on Bluemix today, or join the Beta, and tell us what you think.

To join the Beta program, just head over to the MobileFirst Platform Beta home page, scroll down to the “Interested in the Beta Program?” heading, and follow the instructions to sign up.

You can also join the Slack community (channel: #MFPF8_beta) to engage directly with IBM.

New Swift Offerings from IBM

In my last post I mentioned some new announcements related to the Swift programming language at IBM.  Upon further thought, I guess it’s probably not a bad idea to re-post more detail here too…

If you didn’t see/hear it last week, IBM unveiled several projects to advance the Swift language for developers, which we think will have a profound impact on developers & developer productivity in the years to come. You can view a replay of the IBM announcement in the video embedded below, or just scroll down for direct links:

Here are quick links to each of the projects listed:

Kitura
A lightweight web framework written in Swift that allows you to easily build web services with complex routes. Learn more…
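
To give a feel for Kitura, here is a minimal "hello world" route, loosely based on the published Kitura examples; the API is still evolving across these early releases, so treat the exact names as approximate:

//note: exact Kitura API names have shifted across early releases
import Kitura

//create a router and register a simple GET route
let router = Router()

router.get("/hello") { request, response, next in
    //send a plain-text response, then pass control to the next handler
    response.send("Hello from Kitura!")
    next()
}

//start an HTTP server on port 8080 and run the Kitura event loop
Kitura.addHTTPServer(onPort: 8080, with: router)
Kitura.run()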

Swift Package Catalog
The IBM Swift Package Catalog enables the Swift.org developer community to leverage and share code across projects. Learn more…
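
To illustrate how a package you find in the catalog gets pulled into a project, here's a sketch of a Swift Package Manager manifest declaring a dependency; the manifest format is still changing alongside the Swift toolchain, and the dependency below is only an example:

//Package.swift (Swift Package Manager manifest)
//manifest format for early Swift Package Manager releases; newer toolchains differ
import PackageDescription

let package = Package(
    name: "MyServerApp",
    dependencies: [
        //pull in a package discovered through the catalog, e.g. Kitura
        .Package(url: "https://github.com/IBM-Swift/Kitura.git", majorVersion: 1)
    ]
)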

Updated IBM Swift Sandbox
The Swift Sandbox enables developers to learn Swift, build prototypes and share code snippets. Whatever your Swift ambitions, join the over 100,000 community members using the Sandbox today. Learn more…

OpenWhisk
OpenWhisk is an event-driven compute platform that executes application logic in response to events or through direct invocations from web/mobile apps or other endpoints. Learn more…
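
OpenWhisk actions can also be written in Swift: an action is just a main function that accepts a dictionary of parameters and returns a dictionary as the JSON result. Here's a minimal sketch (the "name" parameter is only an illustrative input):

//a minimal OpenWhisk action written in Swift
//OpenWhisk invokes main() with the event parameters and returns the
//dictionary you produce as the action's JSON result
//the "name" parameter is just an example input
func main(args: [String: Any]) -> [String: Any] {
    if let name = args["name"] as? String {
        return ["greeting": "Hello, \(name)!"]
    }
    return ["greeting": "Hello, world!"]
}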

Or, you can read John Ponzo’s official announcement here.


This is more or less a re-share from my original post on the Swift@IBM Blog.

Mobile Apps, Cognitive Computing, & Wearables

Last week I was in good ‘ole Las Vegas for IBM InterConnect – IBM’s largest conference of the year. With over 20,000 attendees, it was a fantastic event that covered everything from technical details for developers to forward-looking strategy and trends for C-level executives. IBM also made some big announcements for developers – OpenWhisk serverless computing and bringing the Swift language to the server – just to name a few. Both of these are exciting new initiatives that offer radical changes & simplification to developer workflows.

It was a busy week to say the least – lots of presentations, a few labs, and even a role in the main stage Swift keynote. You can expect to find more detail on each of these here on the blog in the days/weeks to come.

For starters, here are two “lightning talks” I presented in the InterConnect Dev@ developer zone:

Smarter apps with Cognitive Computing

This session introduces the concept of cognitive computing, and demonstrates how you can use cognitive services in your own mobile apps.  If you aren’t familiar with cognitive computing, then I strongly recommend that you check out this post: The Future of Cognitive Computing.

In the presentation below, I show two apps leveraging services on Bluemix, IBM’s Cloud computing platform, and the iOS SDK for Watson.

Actually, I’m using two Watson SDKs… The older Speech SDK for iOS, and the new iOS SDK.  I’m using the older speech SDK in one example because it supports continuous listening for Watson Speech To Text, which is currently still in development for the new SDK.

You can check out the source code for the translator app here.

Redefining your personal mobile expression with on-body computing

My second presentation highlighted how we can use on-body computing devices to change how we interact with systems and data. For example, we can use a luxury smart watch (e.g. the Apple Watch) to consume and engage with data in more efficient and more personal ways. Likewise, we can also use smart/wearable peripheral devices to access and act on data in ways that were never possible before.

For example, we can determine gestures or biometric status based upon patterns in the raw data transmitted by the on-body devices. For this, I leveraged the new IBM Wearables SDK. The IBM Wearables SDK provides a consistent interface/abstraction layer for interacting with wearable sensors. This allows you to focus on building apps that interact with the data, rather than learning the ins & outs of a new device-specific SDK.

The wearables SDK also uses data interpretation algorithms to enable you to define gestures or patterns in the data, and use those patterns to act upon events when they happen – without additional user interaction. For example: you can determine if someone falls down, you can determine when someone is raising their hand, you can determine anomalies in heart rate or skin temperature, and much more. The system is capable of learning patterns for any type of action or virtually any data being submitted to the system. Sound interesting? Then check it out here.
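
To make that programming model concrete, here is a purely hypothetical sketch of defining and reacting to a learned gesture. These protocol and method names are invented for illustration and are not the actual IBM Wearables SDK API; see the SDK's sample on Github for the real interface:

//hypothetical gesture-detection interface, for illustration only
//(NOT the actual IBM Wearables SDK API)
protocol GestureDetector {
    //learn/register a named gesture from recorded sensor samples
    func learnGesture(named name: String, from samples: [[Double]])
    //invoke the handler whenever the live sensor stream matches the gesture
    func onGesture(named name: String, handler: @escaping () -> Void)
}

func wireUp(detector: GestureDetector, recordedHandRaise: [[Double]]) {
    detector.learnGesture(named: "handRaise", from: recordedHandRaise)
    detector.onGesture(named: "handRaise") {
        print("Hand raised: trigger an action without any extra user interaction")
    }
}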

The wearables SDK is open source on Github, and contains a sample to help you get started.

I also had some other sessions on integrating drones with cloud services, integrating weather services in your mobile apps, and more. I'll be sure to post updates on this content as I make it publicly available. I think you'll find the session on drones + cloud especially interesting – I know I did.

Introducing the new Watson iOS SDK (beta)

I’ve written here in the past on both the impact of cognitive computing, and how you can integrate IBM Watson services into your mobile apps to add cognitive language processing capabilities and more.  I’m happy to share that IBM has just recently released a new beta SDK that makes integrating more Watson services into your iOS applications easier than ever.

If you aren’t familiar with cognitive computing, or the transformative impact that it is already having on entire industries, then I strongly suggest checking out this video and related article on IBM DeveloperWorks.

IBM Watson services, which are based on machine learning algorithms, give you the ability to handle unstructured data, like text analysis or translation, speech processing, and more.  This makes consumption, mining, or responding to unstructured data or “dark data” faster, more efficient, and more powerful than ever.

The new Watson iOS SDK provides developers with an API to simplify integration of the Watson Developer Cloud services into their mobile apps, including the Dialog, Language Translation, Natural Language Classifier, Personality Insights, Speech To Text, Text to Speech, Alchemy Language, or Alchemy Vision services – all of which are available today, and can now be integrated with just a few lines of code.

The Watson iOS SDK makes integration with Watson services really easy. For example, if you want to take advantage of the Language Translation service, you first have to set up a service instance. Once the translation service is set up, you'll be able to leverage translation capabilities within your mobile app:

//instantiate the LanguageTranslation service
let service = LanguageTranslation(username: "yourname", password: "yourpass")

//invoke the translation method
service.translate(["Hello", "Welcome"], source: "en", target: "es", callback: { (text: [String], error) in
  //do something with the translated text strings
})

I’ve actually put a sample application together that demonstrates the language translation service integration, which you can access at github.com/triceam/Watson-iOS-SDK-Demo.


Be sure to check out the sample’s readme for additional detail and setup instructions. As with all of the Watson services, you must have a service instance properly configured with authentication credentials in order to consume it within your app.

The new Watson iOS SDK is written in Swift, is open source, and the team encourages you to provide feedback, submit issues, or make contributions.  You can learn more about the Watson iOS SDK, get the source code, and access the open source project here.

Mobile Apps with Language & Translation Services using IBM Watson & IBM MobileFirst

UPDATE 12/22/15:  IBM Recently released a new iOS SDK for Watson that makes integration with Watson services even easier. You can read more about it here.


I recently gave a presentation at IBM Insight on Cognitive Computing in mobile apps.  I showed two apps: one that uses Watson natural language processing to perform search queries, and another that uses Watson translation and speech to text services to take text in one language, translate it to another language, then even have the app play back the spoken audio in the translated language.  It’s this second app that I want to highlight today.

In fact, it gets much cooler than that.  I had an idea: “What if we hook up an OCR (optical character recognition) engine to the translation services?” That way, you can take a picture of something, extract the text, and translate it.  It turns out, it’s not that hard, and I was able to put together this sample app in just under two days.  Check out the video below to see it in action.

To be clear, I ended up using a version of the open source Tesseract OCR engine targeting iOS. This is not based on any of the work IBM research is doing with OCR or natural scene OCR, and should not be confused with any IBM OCR work.  This is basic OCR and works best with dark text on a light background.

The Tesseract engine lets you pass in an image, then handles the OCR operations, returning a collection of words that it is able to extract from that image. Once you have the text, you can do whatever you want with it.
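
For reference, here is roughly what that call looks like in Swift using the Tesseract-OCR-iOS wrapper. The module and class names below (TesseractOCR, G8Tesseract) are my assumptions about this particular wrapper and may differ depending on the version and how you install it:

import UIKit
import TesseractOCR //Tesseract-OCR-iOS wrapper (module name may vary by install)

//run OCR on a UIImage and return whatever text the engine can extract
func recognizeText(in image: UIImage) -> String? {
    //"eng" loads the English trained data bundled with the app
    guard let tesseract = G8Tesseract(language: "eng") else { return nil }
    tesseract.image = image
    tesseract.recognize()
    return tesseract.recognizedText
}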

So, here’s where Watson Developer Cloud Services come into play. First, I used the Watson Language Translation Service to perform the translation.  When using this service, I make a request to my Node.js app running on IBM Bluemix (IBM’s cloud platform).  The Node.js app acts as a facade and delegates to the Watson service for the actual translation.


You can check out a sample on the web here.

On the mobile client, you just make a request to your service and do something with the response. The example below uses the IMFResourceRequest API to make a request to the server (this can be done in either Objective-C or Swift). IMFResourceRequest is the MobileFirst wrapper for networking requests that enables the MobileFirst/Mobile Client Access service to capture operational analytics for every request made by the app.

NSDictionary *params = @{
  @"text":text,
  @"source":@"en",
  @"target":language
};

IMFResourceRequest * imfRequest =
  [IMFResourceRequest requestWithPath:@"https://translator.mybluemix.net/translate"
                      method:@"GET" parameters:params];

[imfRequest sendWithCompletionHandler:^(IMFResponse *response, NSError *error) {
  NSDictionary* json = response.responseJson;
  NSArray *translations = [json objectForKey:@"translations"];
  NSDictionary *translationObj = [translations objectAtIndex:0];
  self.lastTranslation = [translationObj objectForKey:@"translation"];
  // now do something with the result - like update the UI
}];

On the Node.js server, it is simply taking the request and brokering it to the Watson Translation service (using the Watson Node.js SDK):

app.get('/translate', function(req, res){
  language_translation.translate(req.query, function(err, translation) {
    if (err) {
      console.log(err)
      res.send( err );
    } else {
      console.log(translation);
      res.send( translation );
    }
  });
});

Once you receive the result from the server, then you can update the UI, make a request to the speech to text service, or pretty much anything else.

To generate audio using the Watson Text To Speech service, you can either use the Watson Speech SDK, or you can use the Node.js facade again to broker requests to the Watson Text To Speech service. In this sample I used the Node.js facade to generate FLAC audio, which I played in the native iOS app using the open source Origami Engine library, which supports FLAC audio formats.

You can preview audio generated using the Watson Text To Speech service using the embedded audio below. Note: In this sample I’m using the OGG file format; it will only work in browsers that support OGG.

English: Hello and welcome! Please share this article with your friends!

Spanish:
Hola y bienvenido! Comparta este artículo con sus amigos!

app.get('/synthesize', function(req, res) {
  var transcript = textToSpeech.synthesize(req.query);
  transcript.on('response', function(response) {
    if (req.query.download) {
      response.headers['content-disposition'] = 'attachment; filename=transcript.flac';
    }
  });
  transcript.on('error', function(error) {
    console.log('Synthesize error: ', error)
  });
  transcript.pipe(res);
});

On the native iOS client, I download the audio file and play it using the Origami Engine player. This could also be done with the Watson iOS SDK (much easier), but I wrote this sample before the SDK was available.

//format the URL
NSString *urlString = [NSString stringWithFormat:@"https://translator.mybluemix.net/synthesize?text=%@&voice=%@&accept=audio/flac&download=1", phrase, voice ];
NSString* webStringURL = [urlString stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding];
NSURL *flacURL = [NSURL URLWithString:webStringURL];

//download the contents of the audio file
NSData *audioData = [NSData dataWithContentsOfURL:flacURL];
NSString *docDirPath = NSTemporaryDirectory() ;
NSString *filePath = [NSString stringWithFormat:@"%@transcript.flac", docDirPath ];
[audioData writeToFile:filePath atomically:YES];

//pass the file url to the origami player and play the audio
NSURL* fileUrl = [NSURL fileURLWithPath:filePath];
[self.orgmPlayer playUrl:fileUrl];

Cognitive computing is all about augmenting the experience of the user, and enabling users to perform their duties more efficiently and more effectively. The Watson language services enable any app to better facilitate communication and broaden the reach of content across diverse user bases. You should definitely check them out to see how Watson services can benefit you.

MobileFirst

So, I mentioned that this app uses IBM MobileFirst offerings on Bluemix. In particular I am using the Mobile Client Access service to collect logs and operational analytics from the app. This lets you capture logs and usage metrics for apps that are live “out in the wild”, providing insight into what people are using, how they’re using it, and the health of the system at any point in time.

Analytics from the Mobile Client Access service

Be sure to check out the MobileFirst on Bluemix and MobileFirst Platform offerings for more detail.

Source

You can access the sample iOS client and Node.js code at https://github.com/triceam/Watson-Translator. Setup instructions are available in the readme document. I intend to update this app with more translation use cases in the future, so be sure to check back!