Last week I was in good ol’ Las Vegas for IBM InterConnect – IBM’s largest conference of the year. With over 20,000 attendees, it was a fantastic event that covered everything from technical details for developers to forward-looking strategy and trends for C-level executives. IBM also made some big announcements for developers – OpenWhisk serverless computing and bringing the Swift language to the server, to name just two. Both are exciting new initiatives that offer radical changes and simplifications to developer workflows.
It was a busy week to say the least – lots of presentations, a few labs, and even a role in the main stage Swift keynote. You can expect to find more detail on each of these here on the blog in the days/weeks to come.
For starters, here are two “lightning talks” I presented in the InterConnect Dev@ developer zone:
Smarter apps with Cognitive Computing
This session introduces the concept of cognitive computing, and demonstrates how you can use cognitive services in your own mobile apps. If you aren’t familiar with cognitive computing, then I strongly recommend that you check out this post: The Future of Cognitive Computing.
In the presentation below, I show two apps that leverage services on Bluemix, IBM’s cloud computing platform, along with the iOS SDK for Watson.
Actually, I’m using two Watson SDKs: the older Speech SDK for iOS and the new iOS SDK. I use the older Speech SDK in one example because it supports continuous listening for Watson Speech To Text, which is still in development for the new SDK.
You can check out the source code for the translator app here.
Redefining your personal mobile expression with on-body computing
My second presentation highlighted how we can use on-body computing devices to change how we interact with systems and data. For example, we can use a smart watch (such as the Apple Watch) to consume and engage with data in more efficient and more personal ways. Likewise, we can use smart/wearable peripheral devices to access and act on data in ways that were never possible before.
For example, you can determine gestures or biometric status based upon patterns in the raw data transmitted by the on-body devices. For this, I leveraged the new IBM Wearables SDK, which provides a consistent interface/abstraction layer for interacting with wearable sensors. This allows you to focus on building apps that interact with the data, rather than learning the ins and outs of a new device-specific SDK.
The wearables SDK also uses data interpretation algorithms that enable you to define gestures or patterns in the data, and to act upon those events when they happen – without additional user interaction. For example, you can detect when someone falls down or raises their hand, or spot anomalies in heart rate or skin temperature. The system is capable of learning patterns for virtually any type of action or data submitted to it. Sound interesting? Then check it out here.
The wearables SDK is open source on Github, and contains a sample to help you get started.
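To make the anomaly-detection idea concrete, here’s a simplified sketch of the underlying concept – flagging readings that deviate sharply from recent history. This is purely illustrative (written in Python for brevity) and is not the Wearables SDK’s actual API; the function name, window size, and threshold are all my own assumptions.

```python
# Illustration only (not the IBM Wearables SDK API): flag anomalous
# readings in a stream of sensor samples using a rolling mean and a
# standard-deviation threshold over the most recent window of values.
from collections import deque
import math

def detect_anomalies(samples, window=5, threshold=2.0):
    """Return indices of samples that deviate more than `threshold`
    standard deviations from the rolling mean of the prior `window` samples."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mean = sum(history) / window
            variance = sum((x - mean) ** 2 for x in history) / window
            std = math.sqrt(variance)
            # A reading far outside the recent norm counts as an anomaly
            if std > 0 and abs(value - mean) > threshold * std:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Heart-rate samples (bpm) with a sudden spike at index 7
rates = [72, 74, 73, 75, 72, 74, 73, 130, 74, 73]
print(detect_anomalies(rates))  # → [7]
```

A real system like the Wearables SDK learns patterns from training data rather than relying on a fixed statistical threshold, but the core idea – comparing live sensor readings against an expected pattern and firing an event on deviation – is the same.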
I also had some other sessions on integrating drones with cloud services, integrating weather services into your mobile apps, and more. I’ll be sure to post updates on this content as I make it publicly available. I think you’ll find the session on drones + cloud especially interesting – I know I did.