Alexa, Is My Infrastructure on Fire?

I recently broke down and purchased an Amazon Echo after hearing enough good things about it, and also seeing how straightforward it looked to develop for it. It's no secret that I'm a big fan of Datadog, so naturally I felt like I needed to mix the two. I've previously covered exposing Datadog metrics through Hubot, so I figured I'd try to do something similar for the Echo.

I decided to create and host the skill through an AWS Lambda function which made it really easy to get started and deploy. There's plenty of documentation around on creating skills in Lambda so I won't really get into that part here. I also went with the Serverless framework to simplify the development and deployment processes, but that's not actually too important to the implementation here. Ultimately it's just a simple Lambda function tied to an Alexa skill.

At present, it exposes the current CPU levels of any hosts in your account. For example, saying:

Alexa, ask Datadog to check the CPU

will result in a response along the lines of:

Here are the current CPU loads. Gregs MacBook Pro is at 7%. Gregs iMac is at 4%

I think that's pretty awesome, so let's take a look at how to implement it.

Defining the Interaction Model

First we need to define the skill's interaction model in Amazon's developer console.

Intent Schema

The intent schema is the primary manifest of what your skill can do, and how users will interact with it. For this skill we'll keep it simple and just expose a single intent for querying:

  "intents": [
      "intent": "QueryIntent",
      "slots": [
          "name": "Query",
          "type": "QUERY_LIST"

Eventually it would be great to build this out further and make the skill more conversational and interesting, but this is a sufficient starting point.

Custom Slot Types

In the intent schema you may have noticed the QUERY_LIST type, so now we need to actually define that. This is a custom slot that defines a list of the types of queries we can do. For now it will just contain a single value:

Type: QUERY_LIST
Values: CPU

This provides a nice place to expose more formal query types as the skill gets extended.

Sample Utterances

Finally, we need to give Amazon a list of sample utterances for the skill in order to teach it how each intent can be interacted with. We'll give it a few different ways to be invoked:

QueryIntent query {Query}  
QueryIntent check {Query}  
QueryIntent to query {Query}  
QueryIntent to check {Query}  
QueryIntent to query the {Query}  
QueryIntent to check the {Query}  

Implementing the Skill

With all that configuration out of the way, let's look at the code involved in implementing the skill. Just like in that Hubot plugin I created, we'll leverage the dogapi package to query the Datadog API. I'll only include the interesting bits in this post, but the full sample can be found on GitHub.

Talking to Datadog

First, let's build out a function to query CPU values from Datadog:

import dogapi from 'dogapi';  
import Promise from 'bluebird';

const queryDatadog = Promise.promisify(dogapi.metric.query);

function queryCPU() {  
  const now = parseInt(new Date().getTime() / 1000);
  const then = now - 300;
  const query = 'system.cpu.user{*}by{host}';

  return queryDatadog(then, now, query)
    .then(res => res.series.map(reading => ({
      name: reading.scope
                   .replace(/^host:/i, '')
                   .replace(/(\..*$)/i, '')
                   .replace(/\W/g, ' '),
      value: reading.pointlist[reading.pointlist.length - 1][1]
    })));
}

Here I'm making use of bluebird, which is a great Promise library that comes with a lot of useful functionality, on top of being very performant. I definitely recommend using this as a replacement for native Promises when working with AWS Lambda functions, as it performs much better and has a significantly lower memory footprint.
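To make the role of Promise.promisify concrete, here's a toy version of what it does under the hood. The fakeQuery function is a hypothetical stand-in for dogapi.metric.query, just for illustration:

```javascript
// A toy version of what Promise.promisify does: wrap a Node-style
// (err, result) callback function so it returns a Promise instead.
function promisify(fn) {
  return (...args) =>
    new Promise((resolve, reject) =>
      fn(...args, (err, result) => (err ? reject(err) : resolve(result))));
}

// Hypothetical stand-in for dogapi.metric.query (same callback shape).
function fakeQuery(from, to, query, callback) {
  callback(null, { series: [], query });
}

const queryP = promisify(fakeQuery);
queryP(0, 1, 'system.cpu.user{*}by{host}')
  .then(res => console.log(res.query));
// → "system.cpu.user{*}by{host}"
```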

There's not too much to the implementation here. It goes out to Datadog, grabs the latest CPU reading for each host, and then does a little processing on the host name to make it more speech-friendly.
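To see that name processing in isolation, here's the same replace chain pulled out into a standalone function (speechFriendlyName is a hypothetical name for this post, not part of the actual skill):

```javascript
// Turn a Datadog scope like "host:gregs-macbook-pro.local" into a
// speech-friendly name, mirroring the replace chain in queryCPU.
function speechFriendlyName(scope) {
  return scope
    .replace(/^host:/i, '')   // drop the "host:" prefix
    .replace(/(\..*$)/i, '')  // drop the domain suffix (".local", etc.)
    .replace(/\W/g, ' ');     // turn remaining punctuation into spaces
}

console.log(speechFriendlyName('host:gregs-macbook-pro.local'));
// → "gregs macbook pro"
```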

Processing the Intent

When a request comes in for the QueryIntent we defined earlier in the schema, we'll need to process that. Here's an example of the type of data that will come in with our intent:

  "session": {
    "sessionId": "SessionId.908e5538-9a5e-4201-b20b-0ed7cc6761bb",
    "application": {
      "applicationId": ""
    "attributes": {},
    "user": {
    "new": true
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId.f01bc54b-6d75-4354-a478-08ec5b3cfed1",
    "timestamp": "2016-06-20T00:11:14Z",
    "intent": {
      "name": "QueryIntent",
      "slots": {
        "Query": {
          "name": "Query",
          "value": "CPU"
    "locale": "en-US"
  "version": "1.0"

Based on that, we can easily implement a function to pull the query value out of the intent and send it over to Datadog:

function processIntent(intentRequest, session) {  
  const intent = intentRequest.intent;

  if (intent.name === 'QueryIntent') {
    const querySlot = intent.slots.Query;

    if (querySlot.value && querySlot.value.toLowerCase() === 'cpu') {
      return queryCPU().then(readings => {
        const hostSpeechFragments = readings.map(reading =>
          `${reading.name} is at ${reading.value}%`).join('. ');
        const speechOutput = `Here are the current CPU loads. ${hostSpeechFragments}`;

        return buildSpeechletResponse(
          'CPU Load',
          speechOutput,
          null,
          true);
      });
    }
  }

  return Promise.resolve(buildSpeechletResponse(
    'Datadog Query',
    'Sorry, I don\'t know that query',
    null,
    true));
}
Most of that code is around validation and parsing. Once it gets a list of CPU readings it turns them into something readable and forms a spoken response based on them. The buildSpeechletResponse function referenced here is a simple helper method that formats things the way the Alexa API expects them. The code for that method can be found in the helpers file. If we get a query value other than CPU we simply respond saying that we don't understand that query.
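For reference, here's a minimal sketch of what that helper might look like, based on the response format used in Amazon's Alexa Skills Kit samples (the actual implementation is in the helpers file linked above):

```javascript
// Sketch of a buildSpeechletResponse helper, assuming the plain-text
// response shape from Amazon's Alexa Skills Kit sample code.
function buildSpeechletResponse(title, output, repromptText, shouldEndSession) {
  return {
    outputSpeech: {
      type: 'PlainText',
      text: output
    },
    card: {
      type: 'Simple',
      title: title,
      content: output
    },
    reprompt: {
      outputSpeech: {
        type: 'PlainText',
        text: repromptText
      }
    },
    shouldEndSession: shouldEndSession
  };
}
```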

The true at the end of the buildSpeechletResponse signature denotes that each response will end the session with the user. In a more interesting implementation you can imagine keeping the session open and making things more conversational, but for now we'll keep things as a single operation.

The Handler

Finally, we need to tie it all together and process the incoming request to our Lambda function:

module.exports.handler = function(event, context, callback) {  
  dogapi.initialize({
    api_key: process.env.DATADOG_API_KEY,
    app_key: process.env.DATADOG_APP_KEY
  });

  if (event.request.type === 'IntentRequest') {
    processIntent(event.request, event.session)
      .then(speechletResponse =>
        context.succeed(buildResponse({}, speechletResponse)));
  }
};

When a request comes in, we initialize dogapi with our API and app keys, and process the intent. You can specify your own keys by adding them as Serverless variables, such as through the _meta/variables/s-variables-dev.json file, in this format:

  "datadogApiKey": "your-api-key-here",
  "datadogAppKey": "your-app-key-here"

That's it! The full source for this sample is available on GitHub. It may look like a lot but really it's very simple to set up Alexa skills, especially when you use AWS Lambda to define them. With just a few lines of code and configuration you can add interactive speech-driven APIs to anything.

Alexa, is that cool or what?
