CF Summit 2018: Adobe ColdFusion 2018 and Amazon Alexa Skills
In today’s ever-changing technological landscape, new ideas emerge every day. One of the latest developments is voice command technology, and Amazon leads the way with its flagship platform, Alexa. One of the best things about Alexa is the ability to build custom skills, and Adobe ColdFusion, with its JVM core, can help you do just that. Let’s take a look at what Alexa is, the basics that surround it, and some CF frameworks that can help you develop your very own Alexa skills.
What is Amazon Alexa?
So what is Amazon Alexa? It is Amazon’s cloud-based voice service, available on a wide range of compatible devices. Using just your voice, you can ask Alexa to access and perform an ever-growing array of functions and tasks.
Fun Fact! — Alexa’s roots actually come from Star Trek! The voice-controlled computer and navigational command system aboard the Starship Enterprise in Star Trek: The Original Series and Star Trek: The Next Generation served as the inspiration.
The name “Alexa” was chosen for its hard “X” consonant, which makes the word easier for the service to recognize as a wake word. Alexa launched in November 2014 alongside the Echo, the first Alexa-compatible speaker. Since its release, Alexa has spread beyond the Echo series to a growing range of third-party speakers as well.
Alexa and ColdFusion
Alexa operates through functions called skills, and there are many different types of skills Alexa can perform. The great thing about developing Alexa skills is the close relationship between Alexa and Java: any JVM-based language works well for building new skills, and CFML is no exception. At the 2017 CFCamp in Munich, Evagoras Charalambous gave a presentation on the fundamentals of using ColdFusion to build Alexa skills. He focused first on defining your app in the Amazon Developer Portal, then discussed how to make the skill talk to your ColdFusion code, and finished by providing a sample CF project that attendees could take away and use to develop their own apps.
Every Alexa skill runs on either AWS Lambda or a custom web server, and non-Lambda endpoints must be accessible over HTTPS. Enter ColdFusion: CF and CFML can be used to build that custom web server. One thing to note is the conversion required between platforms, since the data formats differ. Amazon’s raw JSON requests must be converted into ColdFusion structs; once that is done, CF can handle launch requests, intent requests, sample utterances, and session end requests. ColdFusion can process the input however you like, but the output must be serialized back into JSON that Alexa can use. The tricky part is debugging: if a CF error occurs, the response only reports a generic error, not a ColdFusion error. Free third-party tools such as Postman can help you out with this issue.
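To make that JSON round trip concrete, here is a minimal sketch of such an endpoint, assuming a plain .cfm page served over HTTPS; the reply text is illustrative only:

```cfml
<cfscript>
// Read the raw JSON body that the Alexa skill interface POSTs to this page
requestBody = toString( getHttpRequestData().content );

// Convert Amazon's raw JSON into a native ColdFusion struct
alexaRequest = deserializeJSON( requestBody );

// ... inspect alexaRequest.request.type, session data, intent slots, etc. ...

// Build the reply as a CF struct. Quoting the keys preserves their case,
// which matters because Alexa's JSON keys are case-sensitive.
alexaResponse = {
    "version": "1.0",
    "response": {
        "outputSpeech": { "type": "PlainText", "text": "Hello from ColdFusion" },
        "shouldEndSession": true
    }
};

// Serialize the struct back into the JSON Alexa expects
cfcontent( type = "application/json" );
writeOutput( serializeJSON( alexaResponse ) );
</cfscript>
```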
Related: Framework for CF developed Alexa Skills
Developing Skills
There are two major parts to a skill when developing: the “skill service” and the “skill interface.” The skill service is the part that implements the logic associated with a speech request and generates a response to the POST. The second part, the skill interface, handles several different functions:
- Recognizing speech
- Parsing the request into an HTTP POST
- Receiving responses from the skill service
- Synthesizing the response speech
When developing a skill, the activation utterance is defined directly in the skill interface. This is the phrase used to address Alexa, known as the invocation name. The responses behind it can still be developed using ColdFusion, as long as the output is converted into usable JSON. If you are building a greeting skill, for example, you might enter greeter as the invocation name. You can then ask Alexa to invoke “greeter”:
“Alexa, tell greeter to say hello.”
Along with activation utterances, the skill interface defines intent utterances: the phrases that map what a user says to a specific intent. Once again, these intents can be handled using CFML, and the skill service generates the responses that Alexa speaks back.
“Alexa, tell greeter to say hello.” (ACTIVATION)
“Hello.” (RESPONSE)
The route from start to finish for skill activation is as follows:
Alexa sends the received audio to the skill interface. The skill interface resolves the words to a spoken intent, which is then sent to the skill service. The skill service then triggers the intended response.
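Mapped into CFML, that last hop might look like the following sketch. The intent name HelloIntent and the prompt text are hypothetical, standing in for whatever you define in your skill interface:

```cfml
<cfscript>
// The skill interface has already resolved the audio to a request type
// and (for IntentRequest) an intent name; the skill service just dispatches.
alexaRequest = deserializeJSON( toString( getHttpRequestData().content ) );
speech = "";

switch ( alexaRequest.request.type ) {
    case "LaunchRequest":
        // "Alexa, open greeter" -- the skill was invoked with no intent yet
        speech = "Welcome to greeter. Ask me to say hello.";
        break;
    case "IntentRequest":
        // Dispatch on the intent name the skill interface resolved
        if ( alexaRequest.request.intent.name == "HelloIntent" ) {
            speech = "Hello.";
        }
        break;
    case "SessionEndedRequest":
        // Session cleanup only; Alexa does not speak a reply here
        break;
}

// `speech` then gets wrapped in the JSON response structure shown earlier
</cfscript>
```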
Framework Creation for Amazon Alexa Skills
There are three basic steps to creating a framework for Amazon Alexa skills, according to Leor Brenmen on appcelerator.com:
- Define your Skill Interface on the Amazon Developer Portal
- Add a custom API to your API Builder project that will:
  - Handle the request (a POST from the Skill Interface)
  - Construct a JSON object response of a certain structure
- Configure your Skill Interface to point to the URL of your custom API

**These custom APIs are easily developed using CFML, in coordination with a tool such as the Adobe ColdFusion API Manager.**
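As for that “JSON object response of a certain structure,” a small CFML helper can build the envelope. This is a sketch of the minimal required fields; the function name is my own:

```cfml
<cfscript>
// Wrap spoken text in the response envelope the skill interface expects.
// Quoted keys keep their case when serialized, which Alexa requires.
function buildAlexaResponse( required string speechText, boolean endSession = true ) {
    return {
        "version": "1.0",
        "sessionAttributes": {},
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": arguments.speechText
            },
            "shouldEndSession": arguments.endSession
        }
    };
}

// Usage: serialize the struct and send it back as JSON
cfcontent( type = "application/json" );
writeOutput( serializeJSON( buildAlexaResponse( "Hello." ) ) );
</cfscript>
```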
Ok. So, what can Alexa do?
- Ordering
- Home Automation
- Music
- Sports
- Messaging and Call Services
- Business Purposes
Remember to test your skills prior to deployment to eliminate any bugs that may interfere with proper operation. Also, keep in mind that outsiders with malicious intent may try to exploit your skill. Using a modern, well-secured CF web server can help to maximize stability and security.
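One quick way to test from CFML itself, instead of hand-crafting requests in Postman, is to POST a canned request at your endpoint and dump the reply. The URL and intent name below are placeholders for your own:

```cfml
<cfscript>
// Simulate the skill interface by posting a sample IntentRequest
sampleRequest = serializeJSON( {
    "version": "1.0",
    "request": {
        "type": "IntentRequest",
        "requestId": "test-request-1",
        "intent": { "name": "HelloIntent", "slots": {} }
    }
} );

cfhttp(
    url    = "https://example.com/alexa/greeter.cfm",  // your skill endpoint
    method = "post",
    result = "testResult"
) {
    cfhttpparam( type = "header", name = "Content-Type", value = "application/json" );
    cfhttpparam( type = "body", value = sampleRequest );
}

// Inspect what Alexa would receive back
writeDump( deserializeJSON( testResult.fileContent ) );
</cfscript>
```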
Related: 020 Secrets of High-Security ColdFusion Code, With Pete Freitag
Where Can You Obtain Alexa Skills?
You can obtain Alexa skills through a companion app, available on the Apple App Store, Google Play Store, and Amazon Appstore. Once downloaded, this app allows you to pick up a multitude of Alexa skills. Amazon also makes it easy for developers to create their own custom skills using the Alexa Skills Kit.
Now, another opportunity has arisen for learning about ColdFusion’s interaction with Alexa. At the 2018 CF Summit, Mike Callahan will speak on developing Alexa skills. His session will cover everything from consuming utterances and intents to working with slots. Attendees will also walk away with a custom framework and all the information needed to start constructing Alexa skills. Don’t miss out on this exciting chance to learn more about the future of CF and voice tech.
Originally published at teratech.com on August 22, 2018.