AWS Cloudwatch to Slack via API Gateway and Lambda

Slack has many integrations with third-party tools and apps, but unfortunately there is currently no direct integration with AWS SNS, which CloudWatch uses to send its alerts.

To get CloudWatch alerts into Slack you need an intermediary service; examples include Zapier, Heroku (Example 1, Example 2) and, more recently, AWS Lambda (Example 1, Example 2).

Using Lambda is the most attractive option as it is very easy to set up and means you don’t need to maintain another account with a different service. The only drawback with using Lambda is that it is only available in a few select regions. If you use AWS regions outside of the four that currently support Lambda then you cannot send alerts to Slack directly via Lambda.

Hopefully Lambda will eventually be available in all AWS regions, but until then there is another way to leverage the power of Lambda to get CloudWatch alerts posting into Slack channels – using the AWS API Gateway.

Let’s get started by creating a new incoming web hook within Slack. Once that is done we can create our Lambda function to process the SNS alerts.

Choose one of the available regions for Lambda, skip the blueprint section and choose a name for your function. Make sure Node.js is selected as the runtime; you can accept the defaults for the rest of the fields.

Paste the following code into the code box, replacing <your_unique_web_hook_url> with the web hook URL you have created in Slack, and save the Lambda function.
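
Something along these lines will do the job – a minimal sketch, assuming API Gateway passes the SNS notification body straight through to the function as a JSON event (depending on your integration settings you may need a mapping template for that). The Message, AlarmName, NewStateValue and NewStateReason fields come from the SNS notification and CloudWatch alarm payloads:

```js
// A minimal sketch of the handler - assumes the event is the SNS notification
// body, so it contains fields such as Type, Message and SubscribeURL.
var https = require('https');
var url = require('url');

// Replace this with your Slack incoming web hook URL
var SLACK_WEBHOOK_URL = '<your_unique_web_hook_url>';

exports.handler = function (event, context) {
    // Log the raw event - for SubscriptionConfirmation messages this is where
    // you will find the SubscribeURL needed to confirm the SNS subscription.
    console.log(JSON.stringify(event));

    var text = event.Message;
    try {
        // CloudWatch alarms arrive as a JSON document inside the Message field
        var alarm = JSON.parse(event.Message);
        if (alarm.AlarmName) {
            text = alarm.AlarmName + ' is now ' + alarm.NewStateValue + ': ' + alarm.NewStateReason;
        }
    } catch (e) {
        // Message was not JSON - post it as-is
    }

    var options = url.parse(SLACK_WEBHOOK_URL);
    options.method = 'POST';
    options.headers = { 'Content-Type': 'application/json' };

    var req = https.request(options, function (res) {
        res.on('data', function () {});
        res.on('end', function () {
            context.succeed('Posted to Slack');
        });
    });

    req.on('error', function (err) {
        context.fail(err);
    });

    req.write(JSON.stringify({ text: text }));
    req.end();
};
```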

Now we can create our API with the API Gateway from within the AWS console.

Set up a POST method and choose the Lambda function we set up earlier, then click Save.

Now you are ready to deploy your API. Click Deploy API and create a stage; I have used the default suggestion of prod.

Copy the invoke URL and create a new SNS topic called “Slack”. Create a subscription, set the protocol to HTTPS and paste in the invoke URL from above.
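
If you prefer the command line, the same can be done with the AWS CLI – a sketch, where the region, account ID and invoke URL are placeholders:

```
aws sns create-topic --name Slack

aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:Slack \
  --protocol https \
  --notification-endpoint https://<api_id>.execute-api.us-east-1.amazonaws.com/prod
```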

The final step is to request a confirmation for your new subscription, then check the logs of your Lambda function for the subscription confirmation link. You need to visit this link to confirm the subscription.

Now you are done and you should have CloudWatch alerts flowing through to your Slack channel.

Deploying client side applications to S3 with Grunt

If you are developing static client side applications then S3 is the perfect place to host them – it is cheap and massively scalable, especially if you use CloudFront.

Using Grunt and grunt-aws-s3 also makes it incredibly easy to deploy these apps.

I’m going to assume you already have Node.js and Grunt installed and ready to go, so let’s start by adding the grunt-aws-s3 plugin:
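
```
npm install grunt-aws-s3 --save-dev
```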

Once that is installed you can enable it inside your Gruntfile by adding the following:
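
```js
grunt.loadNpmTasks('grunt-aws-s3');
```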

Now you need to create an S3 bucket to deploy to and an IAM user and policy to give Grunt permission to deploy to it.

When you create the new IAM user, generate a set of access keys. Place the access key, the secret key and your S3 bucket’s region into a file named deploy-keys.json in the same directory as your Gruntfile (make sure you add this file to your .gitignore – you should never commit API keys), using the following format:
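
The exact key names here are only an example – they just need to match what your Gruntfile reads later:

```json
{
    "AWSAccessKeyId": "<your_access_key>",
    "AWSSecretKey": "<your_secret_key>",
    "AWSRegion": "<your_bucket_region>"
}
```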

Attach the following IAM policy to your newly created user where <s3_bucket_name> is the name of the S3 bucket you have created:
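
A policy along these lines gives grunt-aws-s3 enough access to list the bucket and manage the objects in it (a sketch – tighten the actions further if you like):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::<s3_bucket_name>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::<s3_bucket_name>/*"
        }
    ]
}
```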

You can now add the following to your grunt.initConfig to set up the deploy task:
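
A sketch of the configuration – the deploy-keys.json key names are whatever you chose above, “production” is just an example target name, and the option names are the ones grunt-aws-s3 documents (check its README if anything has changed):

```js
var deployKeys = grunt.file.readJSON('deploy-keys.json');

grunt.initConfig({
    aws_s3: {
        options: {
            accessKeyId: deployKeys.AWSAccessKeyId,
            secretAccessKey: deployKeys.AWSSecretKey,
            region: deployKeys.AWSRegion,
            uploadConcurrency: 5
        },
        production: {
            options: {
                bucket: '<s3_bucket_name>'
            },
            files: [
                // Upload everything from the current directory to the bucket root
                { expand: true, cwd: '.', src: ['**'], dest: '/' }
            ]
        }
    }
});
```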

Refer to the grunt-aws-s3 documentation for further configuration options. You may also like to change the cwd in the files block to a sub folder such as /dist or /www if you have Grunt running a build step into a subdirectory.

You can now deploy your application to S3 by running the following:
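
```
grunt aws_s3
```

Or, to deploy a single target from the sketch above, run grunt aws_s3:production.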

Most likely you will want to run a few other Grunt tasks before deploying, like linting or building, so you should register a task like so:
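
```js
// "jshint" and "build" here are whatever tasks you have already registered
grunt.registerTask('deploy', ['jshint', 'build', 'aws_s3']);
```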

Now you have a deploy task which will run jshint and build your project before deploying it to S3 (obviously you will need to have registered tasks for jshint and build for this to work):
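
```
grunt deploy
```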

This task can now be run whenever you need to push a new version of your application live.

A final tip: if you are using jshint or any other linter with your project and have it set to enforce camel case variable names, you may find that it does not like the “aws_s3” key used inside the grunt.initConfig block. This is easy to fix by adding the following to your Gruntfile:
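
One way to do it is to rename the task the plugin registers, using Grunt’s renameTask – a sketch:

```js
grunt.loadNpmTasks('grunt-aws-s3');

// Rename the registered task so the Gruntfile only ever refers to "s3Deploy"
grunt.renameTask('aws_s3', 's3Deploy');
```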

You can now go through the rest of your Gruntfile and replace all references to “aws_s3” with “s3Deploy” and you will receive no more linting errors.

Automatically installing Vagrant plugin dependencies

Following on from my post on Customising Vagrant behaviour, another handy thing you can do with your Vagrantfile and some simple Ruby is install your plugin dependencies automatically.

Say you are distributing a Vagrant config to your team. You might require them to have vagrant-omnibus and vagrant-aws installed so that they can provision servers on AWS with Chef. Ordinarily you would need to provide them with instructions on installing these plugins, but wouldn’t it be great if they could just:
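
```
vagrant up
```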

With a bit of Ruby you can install these plugins automatically for them, so all they have to do is run vagrant up and all systems are go.

Add the following to the top of your Vagrantfile, changing the first line to list the plugins you require:
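
Something along these lines does the trick (a sketch – the plugin list on the first line is just an example):

```ruby
required_plugins = %w(vagrant-omnibus vagrant-aws)

# Work out which of the required plugins are not yet installed
missing_plugins = required_plugins.reject { |plugin| Vagrant.has_plugin?(plugin) }

unless missing_plugins.empty?
  puts "Installing plugins: #{missing_plugins.join(' ')}"
  if system("vagrant plugin install #{missing_plugins.join(' ')}")
    # Re-run vagrant with the arguments that were originally provided
    exec "vagrant #{ARGV.join(' ')}"
  else
    abort 'Plugin installation failed. Aborting.'
  end
end
```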

Now every time Vagrant is run it will quickly check that all of the plugins you require are installed. If a plugin is missing it will be installed, and then Vagrant will be re-run with the same arguments you originally provided.

Customising Vagrant behaviour

Vagrant is a great piece of software and an integral part of my development toolbox.

One thing that a lot of people don’t realise about Vagrant is that the Vagrantfile which you use to store your configuration in is just Ruby code. This means you can quickly and easily customise the behaviour of Vagrant.

Here are a few quick examples to get you going – each snippet below is a small sketch of one way to do it. Place the following snippets in your Vagrantfile and then execute a Vagrant command like vagrant up to see their output.

Output the folder path of your Vagrantfile
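
```ruby
# The directory containing this Vagrantfile
puts File.expand_path(File.dirname(__FILE__))
```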

Output the Vagrant command (up, halt, destroy etc.)
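
```ruby
# The first command line argument is the sub-command (up, halt, destroy etc.)
puts ARGV[0]
```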

Output all of the Vagrant command line arguments
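
```ruby
# Everything that was passed on the command line
puts ARGV.join(' ')
```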

Output the chosen Vagrant provider
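
There is no single built-in way to get at this from inside the Vagrantfile; one rough approach is to check the command line and fall back to the default:

```ruby
# Look for a --provider=name argument, then the VAGRANT_DEFAULT_PROVIDER
# environment variable, and finally assume the default of virtualbox
provider = ARGV.map { |arg| arg[/--provider=(.+)/, 1] }.compact.first ||
           ENV['VAGRANT_DEFAULT_PROVIDER'] || 'virtualbox'
puts provider
```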

Output the username running Vagrant
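
```ruby
require 'etc'

# The local user running the vagrant command (ENV['USER'] works too)
puts Etc.getlogin
```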

As you can see, adding these snippets to your Vagrantfile is easy and allows powerful custom logic. You could provision certain resources based on the user running Vagrant, or stop people from being able to run commands like vagrant halt – it is very flexible.

Replacing SSH keys on running instances

As part of your regular security practices, or in the event that an employee leaves your company, you should be switching out the SSH keys on your servers.

This is an easy process, but if not done correctly you can unintentionally lock yourself out of your servers.

You should start by generating a new SSH key pair which on Ubuntu is as simple as running the ssh-keygen tool and following the prompts.

Once you have your newly created private and public keys, SSH to the server you wish to update using your old key.

Copy the contents of the new public key you created with ssh-keygen – it will be the .pub file – and put them on a new line in the ~/.ssh/authorized_keys file.

You should now have at least two entries in this file: one will be the public key for your old SSH key and one will be for your new SSH key. Save this file and disconnect from the server.

Now it’s time to test your new SSH key. Connect to the server with your new key like so:
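
```
# The key path, username and host here are placeholders - use your own
ssh -i ~/.ssh/new_key ubuntu@your-server
```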

If you have added the public key correctly you should be able to log on to the server. You can now edit the authorized_keys file once more and remove the entry for the old public key you are discarding. Save the file, then disconnect and reconnect to ensure things are working as expected.

As a final step you should try connecting to the server with your old SSH key. This should no longer work, resulting in a message like the following:
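
```
Permission denied (publickey).
```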

Creating your own SSH key pair for use with AWS

Creating key pairs with AWS is rather easy, but for convenience and security reasons generating your own SSH keys and importing them into AWS can be a good option.

From a security standpoint, generating your own key pair means that you can know 100% that the private key has never seen the light of day… or any computer other than the one you generated it on.

If you are using multiple regions in AWS then generating your own key pair and importing it gives you another benefit – you can use the same key globally rather than having to create one per region.

On Ubuntu the process of generating a key pair is as simple as running the following command.
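
```
# The flags are optional - a plain ssh-keygen works too; this asks for a 4096 bit RSA key
ssh-keygen -t rsa -b 4096
```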

This will prompt you to enter a name for the key and then a pass-phrase – the pass-phrase can be left blank if you wish… I usually leave it blank because I don’t want to enter a password every time I use the key.

Once you have entered the required details you will have two files which have been generated for you: <keyname> and <keyname>.pub where <keyname> is the name you chose.

You can now import the .pub file into the Key Pairs section of the EC2 console, usually located here.
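
Alternatively you can import it with the AWS CLI – a sketch, where the key name, file name and region are placeholders (newer CLI versions may want fileb:// instead of file://):

```
aws ec2 import-key-pair \
  --key-name my-key \
  --public-key-material file://my-key.pub \
  --region us-east-1
```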

You can import this same public key into as many different regions as you wish which enables you to connect to all of your servers with the same private key – much simpler than keeping track of a key for each region.

Now you are good to go, you will be able to launch new instances with your created key pair safe in the knowledge that your private key is as secure as can possibly be.

If you want to use this newly created key on your existing instances then check out my post on replacing SSH keys on running instances.

SSH to Auto Scaled EC2 instances

If you use Auto Scaling with AWS, the following script may come in handy.

Sometimes you just want to connect to a random auto scaled server (or servers). You can run this script once to get a random server, or run it repeatedly to connect to all the servers in your auto scaling group.

I place the script at ~/bin/appserver and then run chmod +x ~/bin/appserver to make it executable.

It requires PHP and the AWS CLI to be installed – you will also need to have permission to run the aws ec2 describe-instances command.

Setup is simple. Just set the path to your private key, your SSH username and change the autoscaling group or groups you wish to connect to.
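
The gist of the script is something like this – a sketch in which the key path, username and auto scaling group name are placeholders:

```php
#!/usr/bin/env php
<?php
// Settings - change these for your environment
$keyPath = '~/.ssh/my-key.pem';      // path to your private key
$sshUser = 'ubuntu';                 // SSH username
$groups  = array('my-app-group');    // auto scaling group name(s)

// Ask the AWS CLI for running instances that belong to the auto scaling group(s)
$filter = 'Name=tag:aws:autoscaling:groupName,Values=' . implode(',', $groups);
$cmd    = 'aws ec2 describe-instances --output json'
        . ' --filters ' . escapeshellarg($filter)
        . ' ' . escapeshellarg('Name=instance-state-name,Values=running');

$result = json_decode(shell_exec($cmd), true);

// Collect the public IP addresses of all matching instances
$ips = array();
foreach ($result['Reservations'] as $reservation) {
    foreach ($reservation['Instances'] as $instance) {
        if (!empty($instance['PublicIpAddress'])) {
            $ips[] = $instance['PublicIpAddress'];
        }
    }
}

if (empty($ips)) {
    exit("No running instances found\n");
}

// Pick a random instance and SSH to it
$ip = $ips[array_rand($ips)];
echo "Connecting to {$ip}\n";
passthru("ssh -i {$keyPath} {$sshUser}@{$ip}");
```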

Laravel Cron Validator

As part of a tool I am currently creating I needed a way to validate user supplied cron patterns.

Laravel has plenty of built in validators, including regex, which I could have used; however, I found that creating the required regex pattern was complicated – and ugly. I wanted to be able to just specify the rule in the model validation rules with something simple like “cron_expression”.

Custom validation rules allow you to do this easily and in a much more tidy fashion.

Instead of writing the regex patterns required to validate cron expressions, I decided to use the brilliant cron expression library to do the validation for me – all of the validation work is already done, and any PHP project dealing with cron expressions should already be using this parser, as it seems to be the best out there by far.

Putting all of this together, I came up with the laravel cron validator, which I wrapped up as a service provider for ease of use.
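
The heart of it is only a few lines. This is a sketch (not the package’s actual code) of a custom rule that defers to the cron-expression library, assuming Laravel’s Validator::extend and the library’s CronExpression::factory method:

```php
use Cron\CronExpression;

// Register a "cron_expression" rule that passes whenever the library can
// parse the supplied value as a cron expression
Validator::extend('cron_expression', function ($attribute, $value, $parameters) {
    try {
        CronExpression::factory($value);
        return true;
    } catch (InvalidArgumentException $e) {
        return false;
    }
});
```

With that in place, “cron_expression” can be used in a rules array just like any built-in rule.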

Been meaning to do this for a while…

It’s now 2015, so what better time to finally get on the blog bandwagon.

I anticipate that this will probably become a barren wasteland before too long, but for now I have finally created a place to write down a few dev related things as I think of them.

Still not much to see here… for now…