Spinup - Are you ready for Hacker News?
This past weekend I went to PennApps with my good friends and teammates Josh Hofing and Mitchell Gouzenko. We went in with no idea what we were going to make, and came out with something awesome, which is one of the things I love about hackathons.
What did we make? #
We set out to solve a problem which hits every web dev now and then. Most of the time when you write a site you don’t expect it to get more than a few concurrent viewers, so you just run it on one server. But occasionally you make something really awesome, and it ends up on the top of Reddit or Hacker News. Suddenly your server is completely overloaded and you’re frantically looking for ways to keep the thousand people at the gate from seeing a slow load time and turning around. You could of course avoid this by running your project on a dozen servers from the start, but that gets expensive.
That’s where [Spinup](https://pennapps2015w.challengepost.com/submissions/31428-spinup) comes in. Log in with your DigitalOcean account, show us which droplet is your backend and which is your load balancer, then run a script on your backend and hit the submit button. We’ll handle the rest. Whenever we see your servers getting overloaded, we’ll automatically spin up a clone of your backend, start it running, and hook it up to your load balancer. Best of all, if all’s quiet on your personal front, we’ll take down the droplets you don’t need, so you don’t have to keep paying for them.
Wait, what’s a load balancer? #
Good question; many people haven’t run into one because they’ve never had to write a website that handles a lot of traffic. A load balancer is a server that acts as an intermediary between the users and the backend servers. It forwards requests from the users to the backends in an effort to distribute the workload evenly between them. A good load balancer will take into account things like how hard each server is working on each request, watch for stalled servers, and so on. But in many cases you can get away with a simple program that just iterates through its list of servers and forwards a request to each in turn. I wrote and debugged just that in about an hour to test Spinup. Check out my code here.
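That iterate-and-forward scheme boils down to a few lines. Here’s a rough Python sketch of just the scheduling part (the addresses are made up, and a real balancer would also have to relay the actual HTTP traffic); it isn’t the code linked above:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin scheduler: hands out backend addresses in turn.

    A real balancer wraps this in an HTTP server that relays each
    incoming request to next_backend(); this sketch shows only the
    rotation logic.
    """

    def __init__(self, backends):
        self._backends = list(backends)
        self._cycle = cycle(self._backends)

    def next_backend(self):
        # Each call returns the next server in the rotation, wrapping
        # around when the list is exhausted.
        return next(self._cycle)

    def add_backend(self, addr):
        # Hooking a new droplet in is just appending it and rebuilding
        # the rotation (which restarts from the front of the list).
        self._backends.append(addr)
        self._cycle = cycle(self._backends)
```

Round-robin ignores how loaded each server actually is, but when all your backends are identical clones, handing out requests in turn is usually close enough.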
How did we do it? #
We use DigitalOcean’s API to log you in and show you a list of your droplets. That shell script we have you run on the backend installs our data-reporting daemon and adds it to your init.d listing so that it starts whenever your server does. When you hit submit we shut down your droplet, save an image of it on your DO account, and then start it back up. Now you’re ready to go.
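For the curious, the shut-down/snapshot/restart step maps onto a short sequence of DigitalOcean v2 droplet actions. This sketch only builds the calls as data instead of firing them (the droplet ID and snapshot name are placeholders, and it’s my reading of the API rather than Spinup’s actual code):

```python
def snapshot_sequence(droplet_id, snapshot_name):
    """Return the DigitalOcean v2 API actions for imaging a droplet:
    power it off, snapshot it, power it back on.

    Each entry is a (method, path, json_body) tuple; a real client
    would POST these to api.digitalocean.com with an auth token.
    """
    base = "/v2/droplets/{}/actions".format(droplet_id)
    return [
        ("POST", base, {"type": "power_off"}),
        ("POST", base, {"type": "snapshot", "name": snapshot_name}),
        ("POST", base, {"type": "power_on"}),
    ]
```

Powering off first matters: snapshotting a live disk can capture it mid-write, and a clone booted from that image may come up in a bad state.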
The daemon running in the background is sleeping most of the time (like us after PennApps) and wakes up periodically to check the CPU and RAM usage on your server. It normally wakes up every ten seconds, but it wakes more frequently when it detects a lot of activity, since those are the times that matter most to you. We call that feature Dynamic Dormancy. The daemon also has an API that lets you report whatever statistics you want.
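The adaptive wake-up logic can be sketched in a few lines; the thresholds and intervals here are made up for illustration, and the CPU-reading and reporting functions are placeholders, not the real daemon’s code:

```python
BASE_INTERVAL = 10.0   # seconds between checks when things are quiet
BUSY_INTERVAL = 2.0    # poll faster under load, when the data matters most
BUSY_THRESHOLD = 75.0  # CPU % at which we consider the server busy

def next_interval(cpu_percent):
    """Pick the next sleep interval from the latest CPU reading."""
    return BUSY_INTERVAL if cpu_percent >= BUSY_THRESHOLD else BASE_INTERVAL

# Daemon loop sketch (read_cpu_percent and report are hypothetical):
# while True:
#     cpu = read_cpu_percent()
#     report(cpu)
#     time.sleep(next_interval(cpu))
```

The payoff is resolution where you need it: fine-grained samples during a traffic spike, and almost no overhead the rest of the time.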
All these statistics get sent to Spinup’s central server(s), where they’re stored for your viewing pleasure as well as our practical use. From there it’s relatively straightforward for Spinup to take an average of the CPU stats over the last few seconds and check whether your servers are being maxed out. If they are, we tell DO to spin up another server using the image we saved earlier. Then we hook it up to the load balancer with a POST request, and you’re in business.
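The spin-up check itself is essentially a rolling average over the most recent samples. A minimal sketch, where the window size and 90% threshold are assumptions rather than Spinup’s real numbers:

```python
from collections import deque

class ScaleDecider:
    """Decide whether to spin up a clone from recent CPU samples."""

    def __init__(self, window=6, threshold=90.0):
        # Keep only the last `window` samples; deque drops old ones.
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def record(self, cpu_percent):
        self.samples.append(cpu_percent)

    def should_spin_up(self):
        # Wait for a full window so one noisy spike can't trigger us.
        if len(self.samples) < self.samples.maxlen:
            return False
        return sum(self.samples) / len(self.samples) >= self.threshold
```

Averaging over a window instead of acting on single readings keeps a momentary spike from spinning up a droplet you’ll only tear down seconds later.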
Where will we go from here? #
We don’t want to open Spinup to the public yet because of a number of security issues we’d want to address first; as one might expect, projects straight out of hackathons are often unsafe. My team and I have talked about continuing work on it, but at the moment we’re focused on other projects and on school starting again.
If we start working on it again, I’d like to add a few features to the stats handling in particular. First, displaying graphs of floats is great, but there are a lot of interesting use-case-specific visualizations one might want. For example, a blogging site might want to see a word cloud of recent tags. I’d really like to set up a modular visualization system that lets the user upload their own rendering code, and maybe specify some libraries to import.
Second, reporting all this data is awesome, but making it actionable would be even better. I think it would be great to let the user choose which statistic we use to decide whether to spin up a new server. A search engine trying to minimize search time, for example, might base the decision on average query latency rather than CPU load.
Final thoughts #
I hope you’ve enjoyed this writeup. You can access all of the code on GitHub and check out our submission on ChallengePost. Lastly I’d like to say that our easter egg this time is great.