0 to 1 Million: Scaling my side project to 1 million requests a day

In the Beginning

In late 2014 I decided that I needed a side project.  There were some technologies that I wanted to learn, and in my experience building an actual project was the best way to do that.  As I sat on my couch trying to figure out what to build, I remembered an idea I had back when I was still a junior dev doing WordPress development.  The idea was that people building commercial plugins and themes should be able to use the automated update system that WordPress provides.  There were a few self-managed solutions out there for this, but I thought building a SaaS product would be a good way to learn some new tech.

Getting Started

My programming history in 2014 looked something like: LAMP (PHP, MySQL, Apache) -> Ruby on Rails -> Django.  In 2014 Node.js was becoming extremely popular and MongoDB had started to mature.  Both of these technologies interested me, so I decided to use them on this new project.  So as not to get too overwhelmed with learning new things, I decided to use Angular for the front end since I was already familiar with it.

A few months after getting started, I finally deployed https://kernl.us for the world to see.  To give you an idea of the expectations I had for this project, I deployed it to a $5/month Digital Ocean droplet.  That means everything (Mongo, Nginx, Node) was on a single $5 machine.  For the next month or two, this sufficed since my traffic was very low.

The First Wave

In December of 2014 things started to get interesting with Kernl.  I had moved Kernl out of a closed alpha and into beta, which led to a rise in sign ups.  Traffic steadily started to climb, but not so high that it couldn’t be handled by a single $5 droplet.

Around December 5th I had a customer with a large install base start to use Kernl, and the scale of my traffic graph completely changed.  Kernl went from ~2500 requests per day to over 2000 requests per hour.  That seems like a lot (or it did at the time), but it was still well within what a single $5 droplet could handle.  After all, that’s less than 1 request per second.

Scaling Up

Through the first 3 months of 2015 Kernl experienced steady growth.  I started charging for it in February, which helped fuel further growth as it made customers feel more comfortable trusting it with something as important as updates.  Starting in March, I noticed that resource consumption on my $5 droplet was getting a bit out of hand.  Wanting to keep costs low (both in my development time and actual money) I opted to scale Kernl vertically to a $20 per month droplet.  It had 2GB of RAM and 2 cores, which seemed like plenty.  I knew that this wasn’t a permanent solution, but it was the lowest friction one at the time.

During the ‘Scaling Up’ period that Kernl went through, I also ran into issues with Apache.  I started out by using Apache as a reverse proxy because I was familiar with it, but it started to fall over on me when I would occasionally receive request rates of about 20/s.  Instead of tweaking Apache, I switched to using Nginx and have yet to run into any issues with it.  I’m sure Apache can handle far more than 20 requests/s, but I simply don’t know enough about tweaking its settings to make that happen.

Scaling Out & Increasing Availability

For the rest of 2015 Kernl saw continued steady growth.  As Kernl grew and customers started to rely on it for more than just updates (Bitbucket / Github push-to-build), I knew that it was time to make things far more reliable and resilient than they currently were.  Over the course of 6 months, I made the following changes:

  • Moved file storage to AWS S3 – One thing that occasionally brought Kernl down or resulted in dropped connections was when a large customer would push an update out.  Lots of connections would stay open while the files were being downloaded, which made it hard for other requests to get through without timing out.  Moving uploaded files to S3 was a no-brainer, as it makes scaling file downloads stupid-simple.
  • Moved Mongo to Compose.io – One thing I learned about Mongo was that managing a cluster is a huge pain in the ass.  I tried to run my own Mongo cluster for a month, but it was just too much work to do correctly.  In the end, paying Compose.io $18/month was the best choice.  They’re also awesome at what they do and I highly recommend them.
  • Moved Nginx to its own server – In the very beginning, Nginx lived on the same box as the Node application.  For better scaling (and separation of concerns) I moved Nginx to its own $5 droplet.  Eventually I would end up with 2 Nginx servers when I implemented a floating IP address.
  • Added more Node servers – With Nginx living on its own server, Mongo living on Compose.io, and files being served off of S3, I was able to finally scale out the Node side of things.  Kernl currently has 3 Node app servers, which handle request rates of up to 170/second.

Final Thoughts

Over the past year I’ve wondered if taking the time to build things right the first time through would have been worth it.  I’ve come to the conclusion that optimizing for simplicity is probably what kept me interested in Kernl long enough to make it profitable.  I deal with enough complication in my day job, so having to deal with it in a “fun” side project feels like a great way to kill passion.

Kernl Goes Beta!


In May of this year I launched the Kernl alpha with hopes that WordPress developers would be interested in it.  And interested they were!  Six months later Kernl has over 65 users from all around the globe and a host of new capabilities to make WordPress plugin and theme development easier.  For instance, since launch we’ve added:

  • Continuous Integration with BitBucket
  • Continuous Integration with GitHub
  • Purchase Code Validation

But new features aren’t all that make a service great.  For people to trust in something it must be reliable, and that’s what the beta phase of Kernl is all about: improving reliability.  We’ve reached a point where we feel Kernl provides enough value to the WordPress community to allow us to take some time to refactor code and add a lot more tests.

What does this mean for you?  Not much.  If we do our job right you won’t notice anything.  The beta is still free and everyone will get a big “heads up” before we start charging for the service.

Thank you to all of the alpha users who have made this possible.  Without you Kernl wouldn’t be where it is today.

 

Continuous Deployment of WordPress Plugins Using Kernl

One of the problems I’ve always had with WordPress plugin development is fitting it into a modern build pipeline. I really wanted to be able to merge a branch into master, build the zip file, and push the update out to my clients. For the longest time I wasn’t able to do this, so I built Kernl to enable a more modern approach to WordPress plugin development.

What is Continuous Deployment?

Continuous Deployment (or Continuous Delivery) is a software development strategy where you ship code frequently. Your pipeline is fully automated, so as soon as a triggering event happens in your version control repository, the deploy process starts. For me, that event is when I merge a pull request into master.

What is Kernl?

Kernl started out as a way to provide private plugin and theme updates for WordPress, which grew out of my frustration at having to update clients manually every time a small bug was patched. Once I had the updates working manually, the next step was automating everything. This is where “push to build” came in.

How Push To Build Works

Getting Started with Kernl

Getting push-to-build updates on your plugin or theme is pretty easy to set up with Kernl.

  1. Go to https://kernl.us and sign up. After you’ve logged in, click “Continuous Integration”.
  2. Now connect BitBucket.  This will authorize Kernl to access your BitBucket account so that it can enable push-to-build functionality.
  3. The next step is adding a WebHook to BitBucket.  This tells BitBucket to send a message to Kernl after every code push.  To do this, go to your repository settings, scroll down to “Integrations” and click “WebHooks”.  Set the new Webhook to point at https://kernl.us/api/v1/repositories/bitbucket/webhook.
  4. In order for Kernl to know when to build a new version of your plugin, it looks for a file named kernl.version in the root directory of your repository.  Go ahead and add this file now and commit it.  The kernl.version file should contain a semantic version that looks like “1.0.1”.
  5. Next, you need to add a plugin.  In Kernl, click “Plugins” on the left and then click “Add Plugin” on the upper-right.  Fill out the name, slug, and description fields, then scroll to the bottom.  You should now be able to select from a list of repositories from your BitBucket account.  You can also choose what branch Kernl should make its builds from.  The default is master, but it can be anything that you want.  Select a repository now and press “Save”.
  6. Next, you need to add the first version to Kernl manually.  Click the “versions” button for the plugin you just created, and then click “Add Version”.  The most important part of the process here is to make sure that the version number in Kernl, kernl.version, and your plugin match.  If you put 1.0.0 in the kernl.version file, make sure that it matches in your plugin’s main file, as well as in Kernl when you upload the first version.  If this still isn’t clear, check out the example plugin on BitBucket, or the sample plugin header sketched after this list.  The kernl.version file should contain one line, and on that line will be your version.  Once you have the versions figured out, zip up the plugin as if you were going to distribute it and upload it to Kernl.
  7. That’s it!  Distribute this copy of the plugin to your clients and they’ll receive private updates whenever you upload a new copy or push a new version to your BitBucket repository.
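
To make the version matching concrete, here is a minimal sketch of how the pieces line up.  The plugin name and file name are hypothetical; the only thing that matters to Kernl is that the version in the plugin header, the kernl.version file, and the version you enter in Kernl all agree.

    <?php
    /**
     * Plugin Name: My Example Plugin   (hypothetical name)
     * Description: Example plugin distributed through Kernl.
     * Version:     1.0.0
     */

    // my-example-plugin.php -- the plugin's main file, with Version: 1.0.0 in its header.
    //
    // kernl.version -- a file in the repository root containing the same value
    // on a single line:
    //
    //   1.0.0
    //
    // When you upload the first zip to Kernl, enter 1.0.0 there as well so that
    // all three version numbers agree.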

Pushing a New Version

With all the boilerplate setup complete, getting a new update out to your clients is super easy.  Follow the steps below and you’ll be good to go.

  1. Make code changes.  Whatever change you want to push out, go ahead and make it.
  2. Update your plugin’s version.  This lives in the comment block at the top of your plugin’s main PHP file.
  3. Update the kernl.version file.  This should match the version from step 2 (see the sketch after this list).
  4. Commit
  5. Push to the branch you specified in your plugin setup on Kernl.  If you didn’t specify a branch, that means you’ll need to push to master.
  6. Done.  If all went well, you’ll receive an email from Kernl that lets you know about the new version that was pushed.  You can also verify that the plugin was built by visiting Kernl and looking in the version list for your plugin.
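
As a quick, hypothetical illustration of steps 1 through 5, shipping version 1.0.1 of the example plugin above only touches two files before you commit and push:

    <?php
    /**
     * Plugin Name: My Example Plugin   (same hypothetical plugin as above)
     * Version:     1.0.1
     */

    // The Version header in the plugin's main file is bumped from 1.0.0 to 1.0.1,
    // and kernl.version is updated to match, still a single line:
    //
    //   1.0.1
    //
    // Commit both changes and push to the branch you configured in Kernl (master
    // by default).  Kernl then builds the new zip and emails you when it's done.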


If you’ve ever wanted to modernize your WordPress development pipeline, I highly suggest you check out Kernl.  Automatic updates triggered by changes in your repository will save you tons of time and get bug fixes and updates out to your clients faster.