AWS Cloudwatch to Slack via API Gateway and Lambda

Slack has many integrations with third-party tools and apps; unfortunately, as of this writing there is no direct integration with AWS SNS, which CloudWatch uses to send its alerts.

In order to get CloudWatch alerts sending to Slack it is necessary to use an intermediary service; examples include Zapier, Heroku or, more recently, AWS Lambda.

Using Lambda is the most attractive option as it is very easy to set up and means you don’t need to maintain an account with yet another service. The only drawback with using Lambda is that it’s only available in a few select regions. If you use AWS regions outside of the four that currently support Lambda then you cannot send alerts to Slack directly via Lambda.

Hopefully Lambda will eventually be available in all AWS regions, but until that time there is another way to leverage the power of Lambda to get CloudWatch alerts posting into Slack channels: the AWS API Gateway.

Let’s get started by creating a new incoming web hook within Slack. Once that is done we can create our Lambda function to process the SNS alerts.

Choose one of the available regions for Lambda, skip the blueprint section, and choose a name for your function. Make sure Node.js is selected as the runtime. You can accept the defaults for the rest of the fields.


Paste the following code into the code box, replacing <your_unique_web_hook_url> with the web hook URL you have created in Slack, and save the Lambda function.

Now we can create our API with the API Gateway from within the AWS console.


Set up a POST method and choose the Lambda function we set up earlier, then click Save.


Now you are ready to deploy your API: click Deploy API and create a stage. I have used the default suggestion of prod.


Copy the invoke URL and create a new SNS topic called “Slack”. Create a subscription setting the protocol to HTTPS and then paste in your API URL from above.
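The same topic and subscription can also be created from the AWS CLI. A sketch, with the region, account id and API id as placeholders to substitute:

```shell
aws sns create-topic --name Slack

aws sns subscribe \
  --topic-arn "arn:aws:sns:<region>:<account-id>:Slack" \
  --protocol https \
  --notification-endpoint "https://<api-id>.execute-api.<region>.amazonaws.com/prod"
```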

The final step is to request a confirmation for your new subscription and then check the logs for your Lambda function to get the subscription confirmation link. You need to confirm the subscription with this link.

Now you are done and you should have CloudWatch alerts flowing through to your Slack channel.


Deploying client side applications to S3 with Grunt

If you are developing static client side applications then S3 is the perfect place to host them – it is cheap and massively scalable, especially if you use CloudFront.

Using Grunt and grunt-aws-s3 also makes it incredibly easy to deploy these apps.

I’m going to assume you already have Node.js and Grunt installed and ready to go, so let’s start by adding the grunt-aws-s3 plugin:
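Assuming an npm-based project, the install is the usual npm command, saved as a dev dependency:

```shell
npm install grunt-aws-s3 --save-dev
```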

Once that is installed you can enable it inside your Gruntfile by adding the following:

Now you need to create an S3 bucket to deploy to and an IAM user and policy to give Grunt permission to deploy to it.

When you create the new IAM user you should generate new keys. Place the access key, secret key and your S3 bucket region into a file named deploy-keys.json in the same directory as your Gruntfile, in the following format. (Make sure you add this file to your .gitignore – you should never commit API keys.)
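The exact layout is not shown in this post; one possible shape is below. The key names are illustrative and must match whatever your Gruntfile reads out of the file:

```json
{
  "AWSAccessKeyId": "YOUR_ACCESS_KEY_ID",
  "AWSSecretKey": "YOUR_SECRET_KEY",
  "region": "us-east-1"
}
```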

Attach the following IAM policy to your newly created user, where <your_bucket_name> is the name of the S3 bucket you have created:
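The original policy is not reproduced here; a minimal policy along these lines covers deploys. Note that ListBucket applies to the bucket ARN itself, while object-level actions apply to <your_bucket_name>/*:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<your_bucket_name>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<your_bucket_name>/*"
    }
  ]
}
```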

You can now add the following to your grunt.initConfig to set up the deploy task:
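A sketch of the configuration, using option names from the grunt-aws-s3 README; deploy-keys.json is the file from the previous step and the bucket name is a placeholder:

```javascript
// inside module.exports = function (grunt) { ... }
var deployKeys = grunt.file.readJSON("deploy-keys.json");

grunt.initConfig({
  aws_s3: {
    options: {
      accessKeyId: deployKeys.AWSAccessKeyId,
      secretAccessKey: deployKeys.AWSSecretKey,
      region: deployKeys.region,
      bucket: "<your_bucket_name>"
    },
    deploy: {
      // upload everything in the current directory to the bucket root
      files: [
        { expand: true, cwd: ".", src: ["**"], dest: "/" }
      ]
    }
  }
});
```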

Refer to the grunt-aws-s3 documentation for further configuration options. You may also like to change the files cwd to a sub-folder such as /dist or /www if you have Grunt running a build step into a sub-directory.

You can now deploy your application to S3 by running the following:
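That is just the task name:

```shell
grunt aws_s3
```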

Most likely you will want to run a few other grunt tasks before deploying, like linting or building, so you should register a task like so:

Now you have a deploy task which will run jshint and build your project before deploying it to S3 (obviously you will need to have registered tasks for jshint and build for this to work). Run it with grunt deploy.

This task can now be run whenever you need to push a new version of your application live.

A final tip: if you are using jshint or any other linter with your project and have it set to enforce camel-case variable names, you may find that it does not like the “aws_s3” key used inside the grunt.initConfig block. This is easy to fix by adding the following to your Gruntfile:

You can now go through the rest of your Gruntfile and replace all references to “aws_s3” with “s3Deploy” and you will receive no more linting errors.

Automatically installing Vagrant plugin dependencies

Following on from my post on Customising Vagrant behaviour, another handy thing you can do with your Vagrantfile and some simple Ruby is install your plugin dependencies automatically.

Say you are distributing a Vagrant config to your team. You might require them to have vagrant-omnibus and vagrant-aws installed so that they can provision servers on AWS with Chef. Ordinarily you would need to provide them with instructions on installing these plugins, but wouldn’t it be great if that step just took care of itself?

With a bit of Ruby you can install these plugins automatically for them, so all they have to do is run vagrant up and all systems are go.

Add the following to the top of your Vagrantfile, changing the first line to list the plugins you require:
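The snippet is not reproduced in this post; a common pattern for it looks like the sketch below (the plugin list on the first line is what you would change):

```ruby
# Top of the Vagrantfile, before the Vagrant.configure block.
required_plugins = %w(vagrant-omnibus vagrant-aws)

plugins_to_install = required_plugins.reject { |plugin| Vagrant.has_plugin?(plugin) }
unless plugins_to_install.empty?
  puts "Installing plugins: #{plugins_to_install.join(' ')}"
  if system "vagrant plugin install #{plugins_to_install.join(' ')}"
    # Re-run vagrant with the original arguments now that the plugins exist.
    exec "vagrant #{ARGV.join(' ')}"
  else
    abort "Installation of one or more plugins has failed. Aborting."
  end
end
```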

Now every time Vagrant is run it will quickly check that all of the plugins you require are installed. If a plugin is missing it will be installed, and then Vagrant will be re-run with the same arguments you originally provided.

Creating your own SSH key pair for use with AWS

Creating key pairs with AWS is rather easy, but for convenience and security reasons generating your own SSH keys and importing them into AWS can be a good option.

From a security standpoint, generating your own key pair means that you can know 100% that the private key has never seen the light of day… or any computer other than the one you generated it on.

If you are using multiple regions in AWS then generating your own key pair and importing it gives you another benefit – you can use the same key globally rather than having to create one per region.

On Ubuntu the process of generating a key pair is as simple as running the following command.
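For example, on a system with OpenSSH installed (the 4096-bit key size is a choice, not a requirement):

```shell
# The interactive form matches the post (it prompts for a file name and
# then a pass-phrase):
#   ssh-keygen -t rsa -b 4096
# The non-interactive equivalent below is used for demonstration, writing
# the pair to a temporary directory with an empty pass-phrase.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -N "" -f "$KEYDIR/aws-key" -q
ls "$KEYDIR"
```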

This will prompt you to enter a name for the key and then a pass-phrase, which can be left blank if you wish. I usually leave it blank because I don’t want to enter a password every time I use the key.

Once you have entered the required details you will have two files which have been generated for you: <keyname> and <keyname>.pub where <keyname> is the name you chose.

You can now import the .pub file into the Key Pairs section of the EC2 console.

You can import this same public key into as many different regions as you wish which enables you to connect to all of your servers with the same private key – much simpler than keeping track of a key for each region.
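This can also be scripted with the AWS CLI’s import-key-pair command. The key name and region list below are examples, and the fileb:// prefix needs a reasonably recent AWS CLI:

```shell
# Import the same public key into several regions.
for region in us-east-1 eu-west-1 ap-southeast-2; do
  aws ec2 import-key-pair \
    --region "$region" \
    --key-name "<keyname>" \
    --public-key-material "fileb://<keyname>.pub"
done
```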

Now you are good to go, you will be able to launch new instances with your created key pair safe in the knowledge that your private key is as secure as can possibly be.

If you want to use this newly created key on your existing instances then check out my post on replacing SSH keys on running instances.

SSH to Auto Scaled EC2 instances

If you use Auto Scaling with AWS, the following script may come in handy.

Sometimes you just want to connect to a random auto-scaled server or servers. Using this script you can simply run it once to get a random server, or run it repeatedly to connect to each of the servers in your Auto Scaling group.

I place the script at ~/bin/appserver and then run chmod +x ~/bin/appserver to make it executable.

It requires PHP and the AWS CLI to be installed, and you will also need permission to run the aws ec2 describe-instances command.

Setup is simple: just set the path to your private key, set your SSH username, and change the Auto Scaling group or groups you wish to connect to.
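The script itself is not included in this post. As a rough illustration of the same idea using only the AWS CLI and shell rather than PHP (the key path, user and group name are placeholders to change):

```shell
#!/bin/bash
# Illustrative sketch: SSH to a random running instance in an Auto Scaling group.
KEY="$HOME/.ssh/<keyname>"
SSH_USER="ubuntu"
GROUP="<your-asg-name>"

# Instances launched by an ASG carry the aws:autoscaling:groupName tag.
IP=$(aws ec2 describe-instances \
  --filters "Name=tag:aws:autoscaling:groupName,Values=$GROUP" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicIpAddress" \
  --output text | tr '\t' '\n' | shuf -n 1)

exec ssh -i "$KEY" "$SSH_USER@$IP"
```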