NodeJS setup tips for DevOps

These are steps I took to help secure a NodeJS web application and keep socket.io running reliably on Ubuntu 12.04 on an Amazon EC2 server.

Stop running NodeJS as root and use port forwarding

Because if someone manages to hack some aspect of your application, a process running as root lets them do a lot more damage to your server.

The downside of running a NodeJS process as a non-root user is that it can’t bind to privileged ports like 80 or 443, which poses a problem for many web applications. The solution is port forwarding. Here’s an excellent write-up on port forwarding with iptables.

But the next time the server is rebooted, your iptables config will be gone; the rules aren’t saved automatically. To make the configuration permanent, check out Solution #1 in the iptables howto and this stackoverflow Q&A. They both describe the same method, but one page makes it look more complicated than the other.
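Putting the two pieces together, here’s a sketch of the commands, assuming the Node process listens on port 8080 for HTTP and 8443 for HTTPS (those port numbers are assumptions for this sketch; substitute whatever your app actually binds to):

```shell
# Redirect the privileged ports to the unprivileged ports the
# Node process listens on (8080/8443 are assumed here).
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443

# Persist the rules (the Solution #1 approach): dump them to a file...
sudo sh -c "iptables-save > /etc/iptables.rules"

# ...then have the network interface restore them at boot by adding
# this line under the interface stanza in /etc/network/interfaces:
#   pre-up iptables-restore < /etc/iptables.rules
```

These commands need root, so run them with sudo as shown.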

Reboot the server and view your iptables rules to check that the settings are still applied.
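If the persistence step worked, the PREROUTING redirect rules you added should still show up after the reboot:

```shell
# List the nat table (numeric output, no DNS lookups).
sudo iptables -t nat -L -n
```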

Open File Limits

Linux puts limits on the number of files a user can have open at once. You can see the limit with the command ulimit -n. Linux also counts open network connections as open files. Using socket.io for realtime web applications means at least one connection opens (and stays open) as users come to your site, leave it open in a browser tab, and go somewhere else. On a default Ubuntu setup, ~1000 idling users with socket.io connections open may be enough to bring your NodeJS app down with this error:

Error: EMFILE, Too many open files
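You can watch this from the shell: ulimit -n shows the ceiling, and on Linux /proc shows how many descriptors a given process currently holds (I use the shell’s own /proc/self here just so the example is self-contained; substitute your Node process’s PID):

```shell
# The per-process open-file ceiling for the current shell user.
ulimit -n

# Count the open descriptors of a process via /proc/<pid>/fd.
# /proc/self is a stand-in -- use your Node process's PID instead.
ls /proc/self/fd | wc -l
```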

You can temporarily raise the open file limit for the currently logged in Linux user with the command ulimit -n 5000; however, this change will be wiped out as soon as the shell user logs out. There’s a good blog post on updating ulimit numbers on posidev.com which outlines the steps to make the change permanent, even after a reboot.
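The permanent version lives in /etc/security/limits.conf. A minimal sketch, assuming the app runs as a user named node (the username and the numbers are placeholders; pick your own):

```shell
# /etc/security/limits.conf
# soft = the default limit; hard = the ceiling the user may raise it to.
node  soft  nofile  10000
node  hard  nofile  10000
```

On Ubuntu, also check that /etc/pam.d/common-session loads pam_limits.so, and note the new limit only takes effect once the user logs in again.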

I’ve gone live already! Is it too late?

If you can swap your active web application server(s) from one machine to another without losing data, it isn’t too late.

In my case, I had a NodeJS app running on an AWS EC2 server, and a database hosted somewhere else by MongoLab. I was able to:

  1. Make a copy of the EC2 server by creating an AMI and launching a new server from it
  2. Make the iptables and ulimit changes on the new server, then reboot it to test that the changes stuck
  3. Check the website was still accessible on the new server via its Public DNS (a URL like http://ec2-107-20-193-72.compute-1.amazonaws.com)
  4. Point the domain at the new server by re-associating the Elastic IP with it
  5. Shut down the old server
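The steps above can be sketched with today’s AWS CLI like so (every ID and address below is a placeholder, not a value from my setup):

```shell
# 1. Image the running server, then launch a replacement from the AMI.
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "nodeapp-backup"
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type m1.small

# 2./3. Make the iptables and ulimit changes on the new instance,
# reboot it, and test the app via the instance's Public DNS name.

# 4. Re-associate the Elastic IP so the domain points at the new server.
aws ec2 associate-address --instance-id i-0fedcba9876543210 --public-ip 107.20.193.72

# 5. Retire the old server.
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```

These calls need configured AWS credentials, and the Elastic IP swap is near-instant, which is what makes the whole move low-risk.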