IBM Watson Speech Services Just Got A Whole Lot Easier

UPDATE 12/22/15:  IBM recently released a new iOS SDK for Watson that makes integration with Watson services even easier. You can read more about it here.


IBM’s Watson Developer Cloud speech services just got a whole lot easier for mobile developers.  I just learned about these new SDKs myself, and can’t wait to integrate them into my own mobile applications.

The Watson Speech to Text and Text to Speech services are now available in both native iOS and Android SDKs, making it even easier to integrate language services into your apps.

These native APIs now include audio streaming back to the Watson Speech to Text service, for lower latency responses to spoken languages.

I can guarantee you that my “voice-driven iOS apps” demo will be updated soon, and I’ll be using this for all future language processing services.

JavaScript All The Things – Or – Why You Should Pay Attention To JavaScript

This post is inspired by all the comments I’ve seen this week about JS in the enterprise. I would have never imagined this 10 years ago, but JavaScript is now pretty much ubiquitous. Here are a few reasons why you need to pay attention to JavaScript if you aren’t already, and why you should definitely not write it off.

First, I think one of the major reasons for JavaScript’s ubiquity is that JavaScript is approachable. It is relatively easy for beginners to learn JavaScript, and powerful enough for advanced users to build complex and reliable systems.

Second, and this is why you need to pay attention: JavaScript is everywhere.


You can now use JavaScript to develop on virtually any platform: client side applications, server side logic, embedded chips/IoT devices, manage build scripts and dependencies, and more.

This doesn’t mean you’ll use the exact same code in every case, rather that you can use the same skill set – JavaScript Development – to deliver solutions across multiple paradigms.

The Client Side

JavaScript can be used to power client side apps/user interfaces, and user interactions on numerous platforms and devices.

Web

Of course JavaScript powers the web; this is a given. JavaScript is the primary scripting language for all web browsers. I won’t focus on this much b/c it’s already well known.

Mobile

JavaScript can also be used to power mobile applications that are natively installed on a device.

  1. Apache Cordova/PhoneGap – You can build natively installed apps with web technology using PhoneGap or Cordova. PhoneGap is Adobe’s branded distribution of Cordova, but from the developer’s perspective, they are basically the same thing. Your app runs within a webview on the mobile device, and you build your user interface the same way you build a dynamic web application. Your user interface is implemented in HTML, styled with CSS, and all interactivity is created with JavaScript (see the short sketch after this list).
  2. React Native – JavaScript powered apps don’t just have to be inside of a web view. The React Native framework gives developers the ability to write their application using JavaScript and declarative UI elements, and results in a native application running on the mobile device. The logic is interpreted JavaScript at runtime, but everything that the user interacts with (all UI elements) is 100% native, providing a very high quality user experience, and it is now available for both iOS and Android applications.
  3. Unity 3D – You can even develop rich and immersive 3D simulation or gaming experiences, entirely powered by JavaScript, using the Unity 3D engine. These can target web, desktop, or mobile, but the engine is most often used for mobile gaming.
  4. NativeScript – Framework for building cross-platform native iOS, Android and Windows mobile apps using JavaScript.
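
To give a rough idea of the Cordova approach mentioned above, here’s a minimal sketch of the JavaScript entry point of a Cordova/PhoneGap app – the deviceready event is the standard Cordova signal that the native container is ready, and the handler body is just a hypothetical example:

[js]// Cordova fires "deviceready" once the native container has loaded the webview
// and the device APIs are available to JavaScript.
document.addEventListener('deviceready', function () {
    // From here on you build the UI exactly like a dynamic web app:
    // query the DOM, attach handlers, call plugins, etc.
    document.getElementById('status').textContent = 'Device is ready';
}, false);[/js]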
Desktop

Yup, desktop apps are not left out of the mix. Most desktop solutions fall into a category similar to Apache Cordova, where the end result is a web view that has access to lower level APIs, whose content is developed with web based technology.

  1. Electron – Node.js + Chromium desktop app container from GitHub
  2. app.js  – Node + Chromium for a desktop app container
  3. nw.js – Another framework for Node + Chromium for a desktop app container
  4. CEF – The Chromium Embedded Framework – a framework for embedding the guts of the Chrome browser inside of a desktop app.

… and more… I know Microsoft has a solution for building Windows apps purely out of HTML/JS, and there are more solutions out there that I am forgetting.

In fact, some of my favorite desktop tools, such as Slack, Atom, and VS Code, are actually based on web technology and implemented in HTML/JS. Heck, even Photoshop can be scripted and extended with the generator extensibility layer or have a customized user interface in HTML/JS with design spaces.
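
To give a feel for the Node + Chromium container model described above, here’s a minimal Electron-style sketch (assuming a recent Electron release where require('electron') exposes app and BrowserWindow, and an index.html sitting next to the script):

[js]// main.js – the Node.js side of an Electron app
var electron = require('electron');
var app = electron.app;                     // application lifecycle
var BrowserWindow = electron.BrowserWindow; // native windows hosting Chromium

app.on('ready', function () {
    // Create a window and load the HTML/CSS/JS user interface into it.
    var win = new BrowserWindow({ width: 800, height: 600 });
    win.loadURL('file://' + __dirname + '/index.html');
});[/js]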

The Server Side

Most obviously, Node.js – a JavaScript runtime built on Chrome’s V8 JavaScript Engine – has made huge inroads into server side development and the enterprise. Node.js, powered by frameworks like express.js or loopback.io, makes server side development and complex enterprise apps with JavaScript possible.
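
As a minimal illustration of how little code it takes to stand up a JavaScript HTTP server, here’s a tiny Express sketch (assuming the express package is installed):

[js]var express = require('express');
var app = express();

// A single JSON endpoint – Express handles the routing and HTTP plumbing.
app.get('/hello', function (req, res) {
    res.json({ message: 'Hello from Node.js' });
});

app.listen(3000, function () {
    console.log('Listening on http://localhost:3000');
});[/js]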

IoT

Pretty much everything that doesn’t fall in the categories above falls in here. You can develop headless apps that run on Arduino, Raspberry Pi, or other small boards completely in JavaScript; you can manage infrastructure and information flow of IoT sensors; you can write on-chip programs for embedded systems; you can control robots; and you can even power media-centric connected TV experiences, all with JavaScript.
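
For example, the popular Johnny-Five library lets you script an Arduino from Node.js – a rough sketch, assuming a board connected over USB and the johnny-five package installed:

[js]var five = require('johnny-five');
var board = new five.Board();

board.on('ready', function () {
    // Blink the LED wired to pin 13 every half second.
    var led = new five.Led(13);
    led.blink(500);
});[/js]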

Like I said… It’s everywhere.

Ecosystem

It’s not just about where you can build and run JavaScript for your applications. JavaScript has a massive and thriving developer ecosystem.

JavaScript is the #1 most active language on GitHub in both the total number of active repositories and total number of active pushes/commits.

 

Statistics visualization from http://githut.info/

Here are some stats that show the magnitude of growth and adoption for Node.js/npm alone. NPM stats currently show a total of 186,946 packages available for download, 94,978,032 package downloads in the last day, and 2,451,734,737 package downloads in the last month.

NPM Statistics

 

Node.js adoption is massive, and is still growing.

This doesn’t mean that JavaScript is the best language at everything. It also doesn’t mean that you can take a single piece of source code and run it in every device/context imaginable.

It means that you can use your skills in JavaScript to develop for just about any kind of device/context out there. It’s not going to be write once, run everywhere, rather in the words of the React.js team: learn once, write everywhere.

IBM Acquires StrongLoop – Leveling Up Node.js in the Enterprise

Today IBM announced the acquisition of StrongLoop, Inc., leaders in enterprise development on Node.js and major contributors to Express, LoopBack, and other Node.js tools and frameworks.


Node.js is an incredible tool for rapidly building highly performant and scalable back end systems, and you develop it using a familiar core language that most front-end developers are already accustomed to, JavaScript. This acquisition is positioned to greatly enhance Node.js in the enterprise, and StrongLoop’s offerings will be integrated into IBM Bluemix, IBM MobileFirst, and WebSphere.

Even though the acquisition is still “hot off the presses”, you can start using these tools together today:

You can read more about this acquisition and the future vision between IBM and StrongLoop on the StrongLoop blog, IBM Bluemix Blog, and IBM MobileFirst Blog.

If you haven’t heard about StrongLoop’s LoopBack framework, it enables you to easily connect and expose your data as REST services. It provides the ability to visually create data models in a graphical (or command line) interface, which are used to automatically generate REST APIs – thus generating CRUD operations for your REST services tier, without having to write any code.
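
To give a rough sense of what that looks like, here’s a minimal sketch in the style of the LoopBack 2.x programmatic API (an illustrative example with an in-memory data source, not production configuration):

[js]var loopback = require('loopback');
var app = loopback();

// Define a data source and a model; LoopBack generates the CRUD REST API for it.
var db = app.dataSource('db', { connector: 'memory' });
var Product = db.createModel('Product', { name: String, price: Number });
app.model(Product);

// Mount the generated REST endpoints (e.g. GET/POST /api/Products).
app.use('/api', loopback.rest());
app.listen(3000);[/js]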

Why is this important?

It makes API development easier and drastically reduces time from concept to implementation.  If you haven’t yet looked at the LoopBack framework, you should definitely check it out.  You can build API layers for your apps literally in minutes.  Check out the video below for a quick introduction:

Again, be sure to check out these posts that detail the integration steps so you can start using these tools together today:

 

 

Adaptive mobile apps that change based on personal context

Did that title get your attention?  Yes, it really reads “Adaptive mobile apps that change based on personal context” – with near real-time rules application, without much extra development effort.  If that sounds interesting to you, or like a product you might want to use within your own apps, then you might want to check out this site where you can get involved in the product’s development: http://adaptiveexperience.mybluemix.net/

IBM is looking for your input on creating these types of mobile app experiences: user experiences within a single app that can be dramatically different per user based on location, past behavior, profile information, social media activity, and much more, all driven by configurable rules that can be changed without redeploying an app to the app store.

How it works for your customer

Consider this scenario:

Jon and Andrea download the mobile app for S&W, a retailer known for its attention to providing great customer service. Over the next month, Jon and Andrea use the app to browse and discover content and merchandise differently.

Jon primarily navigates to sports related content for his favorite teams to find gear and clothes for travel to his favorite team’s games. Andrea scours the app for sales and fashion trends and usually ends up following her favorite designers.

Andrea and Jon go to a baseball game together. She’s never enjoyed watching it, so she opens up the S&W app to entertain herself, and her app’s navigation quickly steers her through Spring fashion articles.

Jon however, wants to replace the hat he’s worn the last three times the team lost, and since he’s in the stadium, his S&W app opens right up to the team’s gear page. The app knows he’s out of town and tells him how to get to an S&W store.

How it works for the dev team

Consider another scenario:

One of the developers on the team, George, sets up the system and application. He then gives access to Janet who is responsible for the customer experience.

Janet writes rules defining how the application could adapt and become more personalized based on inputs like social media, geolocation, app usage, or customer information data.

Once Janet has built out her rules, she simply hits ‘Submit’ and can immediately see her clever interactions reflected in the mobile application without having to involve the development team.

Analytics let Janet know which adaptations are working best, and helps her find new opportunities to optimize the app’s user experience.

Sound interesting yet?  Check it out, and get involved in the product development at:  http://adaptiveexperience.mybluemix.net/

We’re not talking about a content management system or translation based on locale, but rather a rules-driven product that can adapt literally every aspect of your app: customize the user interface, enable or disable different features, customize messaging and notifications, and much more, all variable based upon the user context.  This can be used to present contextually relevant information, drive adoption, provide more or less data depending on your physical context, and so much more.

It won’t be tied to a specific UI framework or a specific content management system, and it isn’t attempting to re-create Google Now or Apple Proactive Assistance.  Rather, it is a set of tools and a rules engine that enable you to customize and tailor the app experience to the individual user.

Head over to http://adaptiveexperience.mybluemix.net/ to learn more and get involved!

Video – Smarter Apps with Cognitive Computing

UPDATE 12/22/15:  IBM recently released a new iOS SDK for Watson that makes integration with Watson services even easier. You can read more about it here.


Last week I had the opportunity to present to a great audience at the MoDev DC meetup group on “Smarter Apps with Cognitive Computing”.   In this session I focused on how you can create a voice-driven experience in your mobile apps. I gave an introduction to IBM Bluemix and IBM Watson services (particularly the Watson language services), and demonstrated how you can integrate them into your native iOS apps. I also covered IBM MobileFirst for operational analytics and remote logging to provide insight into your app’s performance once it goes live.  Check out a recording of the complete presentation in the video below:

https://youtu.be/TGRMmf8e-6s

You can read more detail about how this example works and access source code for the sample application in the links below:

Just create an account on IBM Bluemix and you can get started for free!

This app uses three services available through IBM Bluemix, all of which are available for you to try out:

App Architecture

Feel free to poke around the code to learn more!

Video: Enabling the Next Generation of Apps with IBM MobileFirst

Back in February I had the opportunity to present “Enabling the Next Generation of Apps with IBM MobileFirst” at the DevNexus developer conference in Atlanta.  It was a great event, packed with lots of useful content.  Luckily for everyone who wasn’t able to attend, the organizers recorded most of the sessions – which have just been made available on YouTube.

In my presentation I introduce both the MobileFirst Platform Foundation Server and MobileFirst services on IBM Bluemix to enable mobile applications. The video is available below.  In it I cover remote logging, operational analytics, exposing & delivering data, managing push notifications, and more.  Both the platform server and cloud solutions are free to try and enable developers to deliver more from their mobile apps more efficiently and more securely.

https://youtu.be/Xcl5phnAVfI

Here’s the session description: Once your app goes live in the app store you will have entered into an iterative cycle of updates, improvements, and releases, each successive release building on the features (and defects) of previous versions. IBM MobileFirst Foundation gives you the tools you need to manage every aspect of this cycle, so you can deliver the best possible product to your end user. In this session, we’ll cover the process of integrating a native iOS application with IBM MobileFirst Foundation to leverage all of the capabilities the platform has to offer.

Learn more – IBM Bluemix:

Learn more – MobileFirst Platform Foundation Server:

To get started just sign up for Bluemix or download MobileFirst Platform Foundation Server today (they’re free to try!)

 

Voice-Driven Native Mobile Apps with IBM Watson & IBM MobileFirst

Update: The IBM Watson team just announced a new native SDK for both iOS and Android that simplifies and streamlines integration with Speech To Text and Text To Speech services.  Check out more detail here: IBM Watson Speech Services Just Got A Whole Lot Easier.


Using your voice to drive interactions within your app is a powerful concept. It is the primary interaction driving Apple’s Siri, Microsoft’s Cortana, and Google’s Voice Actions. By analyzing spoken words, voice commands allow you to complete possibly complex actions with minimal interaction with the device. Or, they enable entirely different forms of interaction, for example, interacting with a remote system through the telephone.

Voice driven interactions are essentially a two part process:

  • Transcribe audible signal to text transcript
  • Perform a system action by parsing text transcript

If you think that voice-driven apps are too complicated, or out of your reach, then I have great news for you: They are not! Last week, IBM elevated several IBM Watson voice services from Beta to General Availability – that means you can use them reliably in your own systems too!

Let’s examine the two parts of the system, and see what solutions IBM has available right now for you to take advantage of…

Transcribe audible signal to text transcript

Part one of this equation is converting the audible signal into text that can be parsed and acted upon. The IBM Speech to Text service fits this bill perfectly, and can be called from any app platform that supports REST services… which means just about anything. It could be from the browser, it could be from the desktop, and it could be from a native mobile app. The Watson STT service is very easy to use: you simply post a request to the REST API containing an audio file, and the service returns a text transcript based upon what it is able to analyze from the audio file. With this API you don’t have to handle any of the transcription yourself – no concern for accents, etc… Let Watson do the heavy lifting for you.

Perform a system action by parsing text transcript

This one is perhaps not quite as simple because it is entirely subjective, and depends upon what you/your app is trying to do. You can parse the text transcript on your own, searching for actionable keywords, or you can leverage something like the IBM Watson Q&A service, which enables natural language search queries to Watson data corpora.
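
If you go the do-it-yourself route, the parsing can be as simple as scanning the transcript for keywords and mapping them to actions – a trivial, illustrative sketch (the commands and handlers here are made up):

[js]// Map actionable keywords to handler functions.
var commands = {
    'weather':    function () { console.log('show the weather view'); },
    'directions': function () { console.log('open the map'); }
};

function handleTranscript(transcript) {
    var words = transcript.toLowerCase().split(/\s+/);
    for (var i = 0; i < words.length; i++) {
        if (commands[words[i]]) {
            commands[words[i]]();   // invoke the first matching action
            return;
        }
    }
    console.log('no actionable keyword found');
}

handleTranscript('What is the weather like today?');[/js]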

Riding on the heels of the Watson language services promotion, I put together a sample application that enables a voice-driven app experience on the iPhone, powered by both the Speech To Text and Watson Question & Answer services, and have made the mobile app and Node.js middleware source code available on GitHub.

Watson Speech QA for iOS

This native iOS app, which I’m calling “Watson Speech QA for iOS” allows you to ask Watson questions in natural, spoken language, and receive textual responses based on the Watson QA Healthcare data set.

Check out the video below to see it in action:

https://youtu.be/0kedhwC3ikY

Bluemix Services Used

This app uses three services available through IBM Bluemix:

  1. Speech to Text – Convert spoken audio into text
  2. Question & Answer – Natural language search
  3. Advanced Mobile Access – Capture analytics and logs from mobile apps running on devices
IBM Watson Speech QA for iOS App Architecture

The app communicates with the Speech to Text and Question & Answer services through the Node.js middleware tier, and connects directly to the Advanced Mobile Access service to provide operational analytics (usage, devices, network utilization) and remote log collection from the client app on the mobile devices.

For the Speech To Text service, the app records audio from the local device, and sends a WAV file to the Node.js tier in an HTTP POST request. The Node.js tier then delegates to the Speech To Text service to provide transcription capabilities, formats the response JSON object, and returns the result to the mobile app.

For the QA service, the app makes an HTTP GET request (containing the query string) to the Node.js server, which delegates to the Watson QA natural language processing service to return search results. The Node.js tier then formats the response JSON object and returns the results to the mobile app.

The general flow between these systems is shown in the graphic below:

IBM Watson Speech QA for iOS – Logic Flow

 

Code Explained

Mobile app and Node.js middleware source code and setup instructions are available at: https://github.com/triceam/IBM-Watson-Speech-QA-iOS

The code for this example is really in 2 main areas: The client side integration in the mobile app (Objective-C, but could also be done in Swift), and the application server/middleware implemented in Node.js.

Node.js Middleware

The server side JavaScript code uses the Watson Node.js Wrapper, which enables you to easily instantiate Watson services in just a few short lines of code:

[js]var watson = require('watson-developer-cloud');
var question_and_answer_healthcare = watson.question_and_answer(QA_CREDENTIALS);
var speechToText = watson.speech_to_text(STT_CREDENTIALS);[/js]

The credentials come from your Bluemix environment configuration; then you just create instances of whichever services you want to consume.
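
On Bluemix, those credentials typically come from the VCAP_SERVICES environment variable that Cloud Foundry injects into your app. A rough sketch of what that lookup might look like – the exact service labels and fallback values depend on your own environment:

[js]// Illustrative only – service labels and structure depend on your Bluemix app's bindings.
var services = JSON.parse(process.env.VCAP_SERVICES || '{}');

var STT_CREDENTIALS = services.speech_to_text
    ? services.speech_to_text[0].credentials
    : { username: 'local-user', password: 'local-pass' }; // fallback for local development

var QA_CREDENTIALS = services.question_and_answer
    ? services.question_and_answer[0].credentials
    : { username: 'local-user', password: 'local-pass' };[/js]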

I implemented two methods in the Node.js application tier. The first accepts the audio input from the mobile client as an attachment to an HTTP POST request and returns a transcript from the Speech To Text service:

[js]// Handle the form POST containing an audio file and return transcript (from mobile)
app.post('/transcribe', function(req, res) {

    //grab the audio WAV file attachment and prepare to send to Watson
    var file = req.files.audio;
    var readStream = fs.createReadStream(file.path);
    console.log("opened stream for " + file.path);

    var params = {
        audio: readStream,
        content_type: 'audio/l16; rate=16000; channels=1',
        continuous: "true"
    };

    //send the audio WAV file to the watson.recognize service
    speechToText.recognize(params, function(err, response) {
        readStream.close();

        if (err) {
            return res.status(err.code || 500).json(err);
        } else {
            //parse the results and return them to the client
            var result = {};
            if (response.results.length > 0) {
                var finalResults = response.results.filter( isFinalResult );
                if ( finalResults.length > 0 ) {
                    result = finalResults[0].alternatives[0];
                }
            }
            return res.send( result );
        }
    });
});[/js]
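
The isFinalResult helper used in the filter above lives in the sample repo; conceptually it just keeps the results that the Speech To Text service has flagged as final rather than interim – something along these lines:

[js]// Illustrative version – keep only results marked final by the Speech to Text service.
function isFinalResult(result) {
    return result.final === true;
}[/js]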

Once you have the text transcript on the client, you could do whatever you want with it. You could parse it to invoke local actions, or delegate to a natural language query service.

The second method does exactly this: it accepts a URL query parameter from an HTTP GET request and uses that parameter in a Watson QA natural language search:

[js]//handle QA query and return json result (for mobile)
app.get('/ask', function(req, res) {

    //get a copy of the search query text from the req.query object
    var query = req.query.query;

    if ( query != undefined ) {
        //perform a search using the QA "ask" method
        question_and_answer_healthcare.ask({ text: query }, function (err, response) {
            if (err) {
                return res.status(err.code || 500).json(response);
            } else {
                //format the results and return them to the mobile client
                if (response.length > 0) {
                    var answers = [];

                    for (var x = 0; x < response[0].question.evidencelist.length; x++) {
                        var item = {};
                        item.text = response[0].question.evidencelist[x].text;
                        item.value = response[0].question.evidencelist[x].value;
                        answers.push(item);
                    }

                    var result = {
                        answers: answers
                    };
                    return res.send( result );
                }
                return res.send({});
            }
        });
    }
    else {
        return res.status(500).send('Bad Query');
    }
});[/js]

Note: I am using the free/open Watson Healthcare data set. However the Watson QA service can handle other data sets – these require an engagement with IBM to train the Watson service to understand the desired data sets.

Native iOS – Objective C

On the mobile side we’re working with a native iOS application. My code is written in Objective C, however you could also implement this using Swift. I won’t go into complete line-by-line code here for the sake of brevity, but you can access the client side code in the ViewController.m file. In particular, this is within the postToServer and requestQA methods.

You can see the flow of the application within the image below:

App Flow: User speaks, transcript displayed, results displayed

 

The native mobile app first captures audio input from the device’s microphone. This is then sent to the Node.js server’s /transcribe method as an attachment to an HTTP POST request (postToServer method on line 191). On the server side this delegates to the Speech To Text service as described above. Once the result is received on the client, the transcribed text is displayed in the UI and then a request is made to the QA service.

In the requestQA method, the mobile app makes an HTTP GET request to the Node.js app’s /ask method (as shown on line 257). The Node.js app delegates to the Watson QA service as shown above. Once the results are returned to the client they are displayed within a standard UITableView in the native app.

MobileFirst – Advanced Mobile Access

A few other things you may notice if you decide to peruse the native Objective-C code:

  1. Within AppDelegate.m you will see calls to the IMFClient, IMFAnalytics, and OCLogger classes. These enable operational analytics and log collection within the Advanced Mobile Access service.
  2. All network requests inside of ViewController.m use the IMFResourceRequest class, which enables the collection of analytics for every request made within the application.

Together these allow for the collection of device logs, automatic crash reporting, and operational analytics, which are among the strengths of the Advanced Mobile Access service, one of the mobile offerings on IBM Bluemix.

Source Code

Mobile app and Node.js middleware source code and setup instructions for this app are available at:

Just create an account on IBM Bluemix, and you have everything that you need to get started creating your own voice-driven apps.

Apple WWDC Recap for Mobile Devs

I’m sure you’ve already heard Apple’s big announcements from the annual Worldwide Developer Conference this week.  I was lucky enough to snag a ticket in Apple’s lottery and got to check it all out in person. There were lots of great sessions, with tons of content.  Here are the highlights as I saw them from a mobile developer’s perspective – *not* from the general consumer point of view.  For the most part, I think this year’s announcements highlighted the evolution and maturity of existing products and projects – no new amazing breakthroughs, but definitely steps in the right direction.

If you haven’t seen them already, the Keynote and the Platforms State of the Union videos cover most of the announcements, but not in complete detail. Just be warned, the Keynote is loaded with product marketing fluff, not just developer topics.  Once you get to “we’ve got one more thing…” you can turn off the Keynote – the Apple Music announcement has pretty much zero significance for developers.

So let’s get started…

Swift 2.0


There was a tremendous emphasis on the Swift language at this year’s WWDC event.  There was the announcement that Swift is going to be open sourced, plus many language enhancements, and nearly every piece of sample code that was shown was written in Swift.  It is very clear that Swift is Apple’s direction moving forward.

I think the open sourcing of Swift is a big deal b/c it opens up the language for use beyond just iOS and OSX applications.  Think about it… Perhaps another platform might adopt Swift to develop apps (Windows?). Or let’s hypothetically say you really like Node.js on the back end b/c it’s the same language as your web front end (JavaScript, that is). What if you are developing native apps and you’d like to write your back end in the same language as the front end mobile client? Or what if you want an ECMAScript-inspired language that is more structured than Node, with real object-oriented or functional programming constructs (and what if you want something that is really multi-threaded)?  Swift is your answer. I’m willing to bet that we will see server-side Swift not long after it is open sourced.  Let’s just hope that Swift is opened in the truest sense – you know, actually accepting input and contributions from external parties.

The Swift language itself has also evolved quite significantly.  Better error handling, protocol extensions, and improved performance are a great start.  Heck, if I understood one of the speakers correctly, it’s now even faster than Objective C at runtime in some cases.

Want to learn more about Swift?  Check out these session videos from WWDC (requires Safari):

  1. What’s new in Swift
  2. Protocol Oriented Programming in Swift
  3. Optimizing Swift Performance
  4. Swift in Practice
  5. Improve Your Existing Apps with Swift
  6. Swift and Objective-C Interoperability

OS Improvements

New versions of both OS X and iOS were announced and released to developers… OS X El Capitan and iOS 9 respectively.  Both seem to be incremental updates of the previous OSes. New apps, new features, etc… for the end users.  Not necessarily significant changes for developers.  If you’re a graphics programmer, Metal will be a big deal for you (a low level graphics/GPU API), but if you’re not a graphics guru, you probably won’t even know it’s there.

iPad Multitasking


The new iOS 9 multitasking/side-by-side mode for iPad is going to be a great addition which brings the iPad even closer to being a full laptop replacement.  Having the ability to have multiple apps open next to each other will improve the iPad’s “get $h1t done” ability.  You’ll have to ensure that you’ve authored your apps to leverage adaptive layouts, but that’s pretty much all that you need to do to take advantage of iPad Multitasking.

These videos will get you going in the right direction for iOS multitasking and adaptive layouts:

  1. Getting Started with Multitasking on iPad in iOS 9
  2. Multitasking Essentials for Media-Based Apps on iPad in iOS 9
  3. Mysteries of Auto Layout, Part 1
  4. Mysteries of Auto Layout, Part 2
App Thinning


The new “App Thinning” features in Xcode 7/iOS 9 are also a great addition.  Currently if you build an iOS app it gets bundled with lots of resources that may never be used depending on the type of device.  App thinning introduces three concepts that help minimize the footprint and increase the quality of your installed apps: App Slicing, On Demand Resources, and Bitcode. According to the presenters, these can decrease the download/installed size of your apps quite significantly.

If you haven’t seen the App Thinning in Xcode session, you should definitely check it out.

App Slicing is a new feature that creates variants of your app executable depending on the device that you are downloading the app to. So, if your app doesn’t use @3x graphics, or doesn’t use the armv7s architecture on a particular device, then those assets won’t be downloaded.  Likewise, if your device does leverage those assets, then the other smaller scale assets and unused binaries won’t be downloaded.

App Slicing from iOS Docs

On Demand Resources give you the ability to download specific sets of resources from the app store as they are needed.  They are still hosted by the app store, but not part of the initial download. Let’s say you are building a platform game.  Initially the shell/navigation assets will be downloaded.  While the app is running you’ll be able to download assets for level 1, level 2, level 3, etc… incrementally as they are needed.  The system can also clean up ODR resources to conserve space using a least-recently-used cleanup routine.

On-Demand Resources from Apple Docs

Bitcode, according to the docs:

Bitcode is an intermediate representation of a compiled program. Apps you upload to iTunes Connect that contain bitcode will be compiled and linked on the App Store. Including bitcode will allow Apple to re-optimize your app binary in the future without the need to submit a new version of your app to the store.

Bitcode enables the app store to re-compile your code to take advantage of new LLVM optimizations without you even having to recompile and upload a new application binary.

UI Testing

The new UI testing features in Xcode 7 look pretty awesome as far as automated UI testing goes.  They enable you to record/play back steps and generate UI unit tests all from within Xcode.  What’s even better, you can set breakpoints within your tests to debug why they might be failing, or set breakpoints inside of your app, and the automated testing stops there and lets you step through code while inside the automated test.  Definitely do not miss the session on UI Testing in Xcode 7 if you have any (even remote) interest in automated UI testing – it looks pretty darn useful.

Improved Search and Deep Linking

Improved search functionality was also announced for both iOS and OS X.  This enables your apps to index their content, so device search can find information hosted *inside* of the app.  To complement the enhanced search, there are also features that better facilitate deep linking into your app, so apps can be launched directly into the appropriate content/context with greater ease.  I need to look into this more, but it sounded interesting…

Check out these resources for additional detail:

  1. Introducing Search APIs
  2. Seamless Linking To Your App

 

watchOS 2

Last, but certainly by no means least, the announcement of watchOS 2 looks like a massive leap forward for developing for the Apple Watch.

watchOS 2 architecture

watchOS 2 brings the ability to execute code natively on the Apple Watch, not just in the WatchKit extension running on your iPhone. It also brings the ability to implement custom watch complications, access to network connectivity when your phone is not connected, support for multimedia, and direct access to hardware sensors.  If you’re wondering what “watch complications” are, they are the widgets on the watch face that display customized information.

WatchOS Complications

You should definitely check out the videos on developing for the Apple Watch if you have any interest in watchOS:

  1. Building Watch Apps
  2. Introducing WatchKit for watchOS 2
  3. Layout & Animation Techniques for WatchKit
  4. WatchKit in-Depth, Part 1
  5. WatchKit in-Depth, Part 2
  6. Introducing Watch Connectivity
  7. Designing for AppleWatch

Also, don’t forget the watchOS docs, which are chock full of resources and a watchOS 2 transition guide.

There are also new APIs, enhanced features in CloudKit, MapKit, HomeKit, Core Motion, Core Location, updates to Apple Pay, security updates, networking updates, and lots more.  Be sure to check out the complete list of WWDC videos for more.

There was so much to absorb, I’m sure I missed something, so feel free to point anything out that I’ve overlooked!

Serving Data to the Apple Watch with IBM MobileFirst

This is the third entry in my series on powering Apple Watch apps using IBM MobileFirst.  In the first post I covered setting up the project, remote logging, and analytics. In the second post I covered bidirectional communication between the WatchKit extension and host app (not really MobileFirst, but still applicable).  In this post we’ll examine how to consume data from the MobileFirst Foundation Server inside of an Apple Watch app.

If you’re already familiar with consuming data using MobileFirst Adapters, then guess what… it is *exactly* the same as consuming an Adapter in a native iOS project. Since the logic for a WatchKit app is executed in the WatchKit extension, which is actually an executable that runs on the phone, there is no difference between the two.

If you aren’t familiar with Adapters, they are server-side code that is used to transfer and retrieve information from back-end systems to client applications and cloud services.  You can write them in either Java or JavaScript, they can be consumed in any MobileFirst app, and they offer security, data transformation, and reporting metrics out of the box.

In the video below I walk through the process of recreating the Apple Watch Stocks app using data delivered from a MobileFirst Platform Foundation server instance. The data is simulated, so don’t use it for any investments. 🙂

The basic process was this: build out the Apple Watch app’s user interface in Xcode/Interface Builder, build the adapters to expose the data, then start consuming the data within the WatchKit extension to deliver it to the watch app interface.

Full source code for this project is available at: https://github.com/triceam/MobileFirst-WatchKit/tree/master/Stocks

The User Interface

So, let’s first look at the app interface.  I have two views that were built in Interface Builder: one is a table that displays rows of data, and the other is a details screen with lots of labels used to display data.


In the main interface I have a “loading…” label (that is hidden once the data is loaded) and a table that is used to display data.  For each row in the table there are 3 labels to display specific data fields. These were connected to IBOutlet references in the view controller class. All of these are straightforward WatchKit development practices.  Be sure to check out the WKInterfaceTable class reference for more detail on working with WatchKit tables.

Xcode-Interface Builder for Table View

For displaying the details screen, I also used a very similar pattern.  I added labels for displaying data, and linked them to IBOutlet references in my view controller so I can change their values once the data is loaded.

Xcode-Interface Builder Detail View

Serving Data

Loading data into a WatchKit extension is identical to making a request to the MobileFirst server adapter from a native iOS app.  I did use my helper class so I can use code blocks instead of the delegate pattern, but the implementation is exactly the same.

So, here’s how we can create an adapter using the MobileFirst Command Line Interface.  Use the “mfp add adapter” command and follow the prompts:

[bash]$ mfp add adapter
[?] What do you want to name your MobileFirst Adapter? StocksAdapter
[?] What type of adapter would you like?
Cast Iron
HTTP
Java
JMS
SAP JCo
SAP Netweaver Gateway
❯ SQL
[?] Create procedures for offline JSONStore? No
A new sql Adapter was added at /Users/andrewtrice/Documents/dev/MobileFirst-Stocks/server/MFStocks/adapters/StocksAdapter[/bash]

Adapters can be used to easily connect back end systems to mobile clients.  You can quickly and easily expose data from a relational database, or even consume data from HTTP endpoints and serialize it into a more compact, mobile-friendly format.  You should definitely read more about MobileFirst adapters in the platform documentation.

What’s also great about the MobileFirst platform is that you get operational analytics for all adapters out of the box, with no additional configuration.  You can see the number of requests, data payload sizes, response times, devices/platforms used to consume them, and much more.  Plus, you can also remotely access client log messages from the mobile devices.  Take a look at the screenshots below for just a sample (these are from my dev instance on my laptop):

All of the data I am displaying is simulated.  I’m not actively pulling from a relational database or live service. However, you could use a very similar method to connect to a live data repository.

I exposed two pretty basic procedures on the MobileFirst server: getList – which returns a stripped down list of data, and getDetail – which returns complete data for a stock symbol:

[js]function getList() {

    simulateData();

    var items = [];
    var trimmedProperties = ["symbol", "price", "change"];

    for (var i = 0; i < data.length; i++) {
        var item = {};
        for (var j in trimmedProperties) {
            var prop = trimmedProperties[j];
            item[prop] = data[i][prop];
        }
        items.push(item);
    }

    return {
        "stocks": items
    };
}

function getDetail(symbol) {

    for (var i = 0; i < data.length; i++) {
        if (data[i].symbol == symbol) {
            return data[i];
        }
    }
    return null;
}[/js]

Once these are deployed to the server using the CLI “mfp bd” command, you can invoke the adapter procedures from a client application, regardless of whether it is native iOS, native Android, or hybrid application.

Consuming the Data

OK, now we’re back to the native iOS project.  In either Objective-C or Swift you can invoke an adapter directly using the WLResourceRequest or invokeProcedure mechanisms.  In my sample I used a helper class to wrap invokeProcedure to support code blocks, so I can define the response/failure handlers directly inline in my code.  So, in my code, I invoke the adapters like so:

[objc]-(void) getList:(void (^)(NSArray*))callback {

    WLProcedureInvocationData *invocationData =
        [[WLProcedureInvocationData alloc]
            initWithAdapterName:@"StockAdapter"
            procedureName:@"getList"];

    [WLClientHelper invokeProcedure:invocationData successCallback:^(WLResponse *successResponse) {

        NSArray *responseData = [[successResponse responseJSON] objectForKey:@"stocks"];
        //do something with the response data

    } errorCallback:^(WLFailResponse *errorResponse) {

        //you should do better error handling than this
    }];
}[/objc]

Once you have the data within the WatchKit extension, you can use it to update the user interface.

For the data table implementation, you simply need to set the number of rows, and then loop over the data to set values for each row based on the WKInterfaceTable specification.

[objc][self.dataTable setNumberOfRows:[self.stocks count] withRowType:@"stockTableRow"];

for (NSInteger i = 0; i < self.dataTable.numberOfRows; i++) {

    StockTableRow* row = [self.dataTable rowControllerAtIndex:i];
    NSDictionary* item = [self.stocks objectAtIndex:i];

    [row.stockLabel setText:[item valueForKey:@"symbol"]];

    NSNumber *price = [item valueForKey:@"price"];
    NSNumber *change = [item valueForKey:@"change"];
    [row.priceLabel setText:[NSString stringWithFormat:@"%-.2f", [price floatValue]]];
    [row.changeLabel setText:[NSString stringWithFormat:@"%-.2f", [change floatValue]]];

    if ([change floatValue] > 0.0) {
        [row.changeLabel setTextColor:[UIColor greenColor]];
        [row.containerGroup setBackgroundColor:[UIColor colorWithRed:0 green:0.2 blue:0 alpha:1]];
    } else if ([change floatValue] < 0.0) {
        [row.changeLabel setTextColor:[UIColor redColor]];
        [row.containerGroup setBackgroundColor:[UIColor colorWithRed:0.2 green:0 blue:0 alpha:1]];
    } else {
        [row.changeLabel setTextColor:[UIColor whiteColor]];
        [row.containerGroup setBackgroundColor:[UIColor colorWithRed:0.15 green:0.15 blue:0.15 alpha:1]];
    }
}[/objc]

The detail screen is even more straightforward.  When the screen is initialized, we request detail data from the server.  Once we receive that data, we simply assign label values based upon the data that was returned.

[objc][self.nameLabel setText:[stockData objectForKey:@"name"]];

NSNumber *change = [stockData objectForKey:@"change"];
NSNumber *price = [stockData objectForKey:@"price"];
NSNumber *high = [stockData objectForKey:@"high"];
NSNumber *low = [stockData objectForKey:@"low"];
NSNumber *high52 = [stockData objectForKey:@"high52"];
NSNumber *low52 = [stockData objectForKey:@"low52"];
NSNumber *open = [stockData objectForKey:@"open"];
NSNumber *eps = [stockData objectForKey:@"eps"];

float percentChange = [change floatValue]/[price floatValue];

[self.priceLabel setText:[NSString stringWithFormat:@"%-.2f", [price floatValue]]];
[self.changeLabel setText:[NSString stringWithFormat:@"%.02f (%.02f%%)", [change floatValue], percentChange]];

if ([change floatValue] > 0.0) {
    [self.changeLabel setTextColor:[UIColor greenColor]];
} else if ([change floatValue] < 0.0) {
    [self.changeLabel setTextColor:[UIColor redColor]];
} else {
    [self.changeLabel setTextColor:[UIColor whiteColor]];
}

//update change with percentage

[self.highLabel setText:[NSString stringWithFormat:@"%-.2f", [high floatValue]]];
[self.lowLabel setText:[NSString stringWithFormat:@"%-.2f", [low floatValue]]];
[self.high52Label setText:[NSString stringWithFormat:@"%-.2f", [high52 floatValue]]];
[self.low52Label setText:[NSString stringWithFormat:@"%-.2f", [low52 floatValue]]];

[self.openLabel setText:[NSString stringWithFormat:@"%-.2f", [open floatValue]]];
[self.epsLabel setText:[NSString stringWithFormat:@"%-.2f", [eps floatValue]]];
[self.volLabel setText:[stockData objectForKey:@"shares"]];[/objc]

What next?

Ready to get started?  Just download the free MobileFirst Platform Server Developer Edition and dive in.

Complete source code for this project is available on my github account at: https://github.com/triceam/MobileFirst-WatchKit/tree/master/Stocks

Series on Apple WatchKit Apps powered by IBM MobileFirst:

 

Enjoy!

 

 

Using Code Blocks Instead of Delegates with IBM MobileFirst Platform in Native iOS Apps

We’ve been able to write native iOS apps leveraging the scaffolding and analytics of the IBM MobileFirst Platform Foundation Server for a while now. This was first introduced way back when MobileFirst still went by the Worklight name, several versions ago.

As I would write apps, one thing I really wanted was to use code blocks instead of having to implement delegate classes every time I needed to call a procedure on the MobileFirst server.  In MobileFirst 7.0, the new WLResourceRequest API allows you to invoke requests using either the completionHandler (code block) or delegate implementations.

But… what if you’re still using an earlier version of the MobileFirst platform, or what if you still want to leverage your existing code that uses WLProcedureInvocationData parameters, but don’t want to have to create a new delegate for every request?  Well, look no further.  I put together a very simple utility class that helps with this task by allowing you to pass code blocks as parameters for the requests to the MobileFirst (or Worklight) server.

You can grab the Objective-C client-side utility class from https://github.com/triceam/MobileFirst-Helper

Right now it only contains two utility methods, but I’ll update it if I come up with anything else useful.

The invokeProcedure method allows you to invoke a procedure and pass code blocks for success/error callbacks inline, without having to define delegates.

[objc]WLProcedureInvocationData *invocationData =
    [[WLProcedureInvocationData alloc]
        initWithAdapterName:@"StockAdapter"
        procedureName:@"getList"];

[WLClientHelper invokeProcedure:invocationData
    successCallback:^(WLResponse *successResponse) {

        //handle the response
    }
    errorCallback:^(WLFailResponse *errorResponse) {

        //handle the error response
    }];[/objc]

I normally prefer code blocks b/c they allow you to encapsulate functionality inside of a single class, instead of having logic spread between a controller and delegate class (and having to worry about communication between the two).

The other getLoggerForInstance utility function is just a shortcut to get an OCLogger instance with the package string matching the class name of the instance passed, with just a single line of code:

[objc]OCLogger *logger = [WLClientHelper getLoggerForInstance:self];[/objc]

Download the utility directly from https://github.com/triceam/MobileFirst-Helper

Enjoy!