Best Practices of Writing Software To Scale

I’ve been reviewing some of the apps I’ve written in the past few years, and one has noticeably stood out. It’s currently averaging around 23k requests/min with pretty good performance. I read an article on Hacker News a few days ago about a start-up that was shutting down while serving roughly the same traffic I was. That 23k requests per minute comes out to 993 million requests per month on average. That blew my mind!

I don’t believe I write apps in any particular way to scale at that level, and I’ve seen the current setup on Heroku (8 standard dynos @ $200/mo) handle double that traffic with ease. In fact, I know exactly where I could optimize the app to be a bit quicker if I needed to, but all those transactions are actually writes to the database, so the bottleneck under load is in fact the database.

Here are a few quick stats about the application:

  • Hosted on Heroku, across 8 standard dynos.
  • 95% of requests get the asset they’re requesting served in less than 500 ms. Median is 150 ms.
  • For the past 24 hrs the peak has been 28.3k req/min and the low 14.8k req/min.
  • Max dyno load has been 4.24, with an average of 1.
  • Max memory consumption has been 15 MB; the mean has been 12 MB.
  • No swap has been used

For the database:

  • Hosted on Linode, on an 8 GB instance
  • No real tweaks to the default MySQL shipped with Ubuntu 14.04 LTS beyond some optimizations to the InnoDB buffer pool and max connections

Some things that contribute to the performance:

The Stack

As you may have guessed, this application was developed in Go. It was originally built on the Martini framework, but with the high amount of traffic it gets (and this is actually only a fraction of it; I’d say at least 30% still hits a CDN) I switched over to the Gorilla Mux router and the Negroni framework, which I have documented on this site.

The Process

I try to keep my code simple. I come from a PHP background, and I hate to admit it, but for a long time my code was procedural, and I still find myself going back to those old habits, because Go doesn’t necessarily penalize you for writing code that way (the team of developers working with you might, though).

I do think, though, that keeping the code straightforward helps. I have a process I stick with across all my apps; I’ve tweaked it in later projects, but haven’t yet back-ported those changes to this one.

This project currently does something very simple:

  • Takes in the request, and checks the cache
  • If the cache is not found, query the database and build the cache
  • If the cache is found, parse the cached data into the correct data struct and continue along
  • Cache expires after X minutes, repeat the process

That’s it. I’m not using Redis in this case (though I easily could; I would still use an in-app cache first, then hit Redis, then hit MySQL).

Don’t wait for things the user doesn’t need

I mentioned previously that probably 30% of the application sits behind a CDN; a lot of the requests I’m actually handling are analytics and event tracking to see performance in real time. Go makes deferring that work very easy, simply by writing:

  go func() {
    fmt.Println("Do your work here")
  }()

Which begins execution of that code, but won’t wait for it. I wasn’t sure at first this would actually work on Heroku, because Google App Engine has its own library for deferring functions like this through its queue system, but it works flawlessly, and I haven’t heard complaints from Heroku’s ops team yet :)

I do run one SQL query before the request finishes, but the rest of the queries can complete after we have returned a result to the user.

Keep Improving

Looking through my code for this project as I write this article, I see places where I could improve performance even further. In later projects I have improved the way I cache and refresh data queried from a data source, as follows:

  • Takes in the requests, and checks the cache
  • If the cache is not found, query the database and build the cache. Add the cache key to a map of cache keys
  • If the cache is found, parse the cached data into the correct data struct and continue along
  • No expiration is set
  • When your application starts, create a function to check the map of cache keys and update them every X minutes automatically.

That small improvement, where the cache never expires and updates happen in another part of the application, is the better way to handle this, and Go makes it easy to build. It cuts database traffic in a consistent way: instead of one request waiting on a database call while other requests pile up waiting on that same call (potentially causing crashes), I know the raw data is only being queried once, and the refreshed cache is there for the rest of the application whenever it needs it.

Again, this is something I wasn’t sure would work on Heroku given how its dynos are managed, but I have found that it works perfectly without issue.


I guess my biggest takeaway from all this is that I didn’t really do anything special to optimize for a huge load of users. These are things I picked up as a developer along the way; there is really no ‘magic’ I applied to the application, and no confusing code I had to implement. Caching, and learning the best caching methods for your language of choice and your stack, will be your greatest allies in your ability to scale for mass amounts of users.

Do you have any tips on how you design your applications to scale? I would love to hear about it in the comments!

Last Updated: 2015-06-05 00:44:53 +0000 UTC


