Writing software that scales quickly

In a previous post, https://www.dougcodes.com/go-lang/best-priactices-of-writing-software-to-scale, I wrote some tips for writing software that scales. This post expands on and updates some of those tips.

These tips are fairly general, so they should translate to most languages pretty easily (except maybe PHP, sorry PHP).

End the request as quickly as possible

I see speed in an application in two layers: how the user experiences it and how the server experiences it.

Since I primarily write software that tracks data and generates client-side pixels, I have the benefit of being able to send the response back to the user as quickly as possible. In Go, and I'm sure in other languages where the application is constantly running, there is some method of concurrency that lets you defer pieces of the work to run in the background.

What I suggest is setting up the data that needs to be processed: grab the data that was sent via POST or GET, get any information you need from the referrer, save it in an object, and tell your code to process that data in parallel so you can send your user back to where they need to be.
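
A minimal sketch of this in Go might look like the following. The trackEvent struct, the /pixel route, and the page_id parameter are hypothetical stand-ins for whatever your requests actually carry:

```go
package main

import (
	"log"
	"net/http"
)

// trackEvent holds just the pieces of the request we need; copy them out
// before the handler returns, since the request isn't safe to use afterwards.
type trackEvent struct {
	PageID   string
	Referrer string
}

// A buffered channel acts as a simple in-process queue for background work.
var events = make(chan trackEvent, 1024)

func pixelHandler(w http.ResponseWriter, r *http.Request) {
	ev := trackEvent{
		PageID:   r.URL.Query().Get("page_id"),
		Referrer: r.Referer(),
	}

	select {
	case events <- ev: // queued for background processing
	default:
		log.Println("event queue full, dropping event")
	}

	// The user gets their response immediately; processing happens later.
	w.WriteHeader(http.StatusNoContent)
}

func processEvent(ev trackEvent) {
	// Hypothetical: update caches, batch writes for MySQL, etc.
	log.Printf("processing page %s (referrer %q)", ev.PageID, ev.Referrer)
}

func main() {
	// One background worker drains the queue; start more if needed.
	go func() {
		for ev := range events {
			processEvent(ev)
		}
	}()

	http.HandleFunc("/pixel", pixelHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```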

A few concerns may be: "What if the user needs visual feedback on the processing? What if there is an error?" One thing you can do is design your app around the fact that things happen behind the scenes, so the user may not see feedback immediately. You can lay this out for the user by telling them upfront, "If you saved something and it doesn't appear right away, it just means it's processing," or by adding some visual cue. Don't leave the user in the dark. If you want to get extra fancy, this is a great use of WebSockets. If you are developing a site with a framework such as AngularJS or something similar, use an alerting system to tell the user that the item is saving, and use WebSockets to communicate back in real time when that item has finished processing. Reiterate that the benefit of not always seeing changes in real time is that the user can keep working and doesn't have to wait.

Another concern is "What if there are errors processing the information?" The same approach can still apply, but you'll definitely want an efficient way to warn the user, possibly via a notification that takes them back to the screen to fix what couldn't be processed, with the errors highlighted. If you can handle validation in JavaScript before the data is sent to the server, that will cut down on the back and forth as well.

Know when to access MySQL and what type of caching should be set up.

MySQL is a valuable database: it's powerful and cheap to run. I used to be in the mindset that MySQL was slowly dying and that MongoDB and NoSQL were the future, but the truth is MySQL has never let me down. I've used Mongo, and it's great, but I don't use it anymore for newer projects. MySQL is used for saving data 100% of the time now, and Redis is used for caching. There is no "unless I'm doing this" or any other scenario where that isn't the case.

Why? Again, because MySQL has proven to work; to me its benefits outweigh its shortcomings. To be fair, I do run Percona MySQL in production because I have seen direct performance benefits.

Here are some rules I use for working with MySQL:

  • Only read from the database on a user request if the latest content truly needs to be read or if you have to build an initial cache (even then, try not to). Chances are it's not the end of the world if data is 5 minutes stale.
  • When you read from the database, place the result in a cache. Even if it only lasts a minute, it's better than nothing.
  • Once the data is in a cache, the user should not be able to trigger a read of that data from MySQL again publicly.
  • A tip I've learned for analytics software is to combine your updates. I used to do an update for every count increment to the database; now I combine them and process them every few minutes.

To expand on this, finding a powerful caching library is important. Your language has one, I guarantee it, and if it doesn’t then you can probably write something in a night if you had to.

Once I pull something from a database, I immediately place it in a cache that is explicitly set to NEVER expire. Why don't I want it to expire? Because I'll be rewriting the cache every few minutes, programmatically, behind the scenes. Say I request the profile for the user Doug and the profile doesn't exist in the cache, so I'm forced to make a MySQL call. When I grab the profile data, I immediately insert it into the cache, and I also add that profile to a cache map to be updated. The cache map could, for example, be called 'profiles', and it could be an array of all the IDs the application has requested for profiles. Every five minutes, a background process runs through those caches and updates them so the content is always fresh.
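
Here's a minimal sketch of that approach in Go. The profile type and the loadProfileFromMySQL helper are hypothetical placeholders for your own models and queries; in production you might back this with Redis or a caching library instead of a plain map:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type profile struct {
	ID   string
	Name string
}

type profileCache struct {
	mu      sync.RWMutex
	data    map[string]profile
	tracked map[string]bool // the "cache map": IDs the background job keeps fresh
}

func newProfileCache() *profileCache {
	c := &profileCache{
		data:    make(map[string]profile),
		tracked: make(map[string]bool),
	}
	go c.refreshLoop(5 * time.Minute)
	return c
}

// Get serves from the cache; only a miss ever touches MySQL.
func (c *profileCache) Get(id string) profile {
	c.mu.RLock()
	p, ok := c.data[id]
	c.mu.RUnlock()
	if ok {
		return p
	}

	p = loadProfileFromMySQL(id) // placeholder for the real SELECT
	c.mu.Lock()
	c.data[id] = p
	c.tracked[id] = true // remember it so the refresher keeps it fresh
	c.mu.Unlock()
	return p
}

// refreshLoop rewrites every tracked profile on a fixed interval, so cached
// entries never need to expire.
func (c *profileCache) refreshLoop(every time.Duration) {
	for range time.Tick(every) {
		c.mu.RLock()
		ids := make([]string, 0, len(c.tracked))
		for id := range c.tracked {
			ids = append(ids, id)
		}
		c.mu.RUnlock()

		for _, id := range ids {
			p := loadProfileFromMySQL(id)
			c.mu.Lock()
			c.data[id] = p
			c.mu.Unlock()
		}
	}
}

func loadProfileFromMySQL(id string) profile {
	// Placeholder for a real query.
	return profile{ID: id, Name: "Doug"}
}

func main() {
	cache := newProfileCache()
	fmt.Println(cache.Get("doug")) // first call hits MySQL, later calls hit the cache
}
```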

What this allows is for only one user to ever hit the DB; all subsequent requests hit the cache, and every 5 or 10 minutes (or whatever duration you've set) your application grabs that record once. One problem I always had with PHP is that while it had caching libraries, it had no way to 'lock' data, so if I had to clear the cache on a server when making changes, I would be flooded with the same queries over and over until the cache had been rebuilt. Every 15 minutes the queries would pile up again, not as badly as the first time, but still more than I'd like.

With this new technique there may still be an initial cache build-up, but there are ways around even that. You could have your admin application building the caches in Redis behind the scenes and your front-end applications checking Redis before they would ever think of touching MySQL. Or, to be stricter, if an object isn't in Redis, just tell the end user it isn't available, or use WebSockets to notify the user when it becomes available and automatically refresh the page.

This is a technique that has helped decrease database server load dramatically while still keeping content fresh. Even if your cache is only set to 1 minute, as long as the backend is doing the heavy lifting you’re going to have a very performant site where the user doesn’t have to be placed in a queue to wait for the DB to respond.

For writing, you want to do something similar. Say you have a table with a view count for a page, and every time a user hits the page you do something like UPDATE page SET count = count + 1. For one, I'd hope you decouple the count from your page table so you can track it by time, but you may also want to take a few things into consideration:

  • Use time slices: unless you need data accurate to the minute (or ten minutes), try storing it in hourly chunks.
  • Cache the 'update' to the database. For example, if you are updating page ID 1, keep a pending count for that page in memory and increment it on each hit. Then, every few minutes, grab that structure, swap it out for a fresh one, and run a command that may look like UPDATE page SET count = count + 12. What this allows you to do is take twelve database update calls and turn them into one. It may not look like much, but you just cut your calls to MySQL by more than 90%. One thing to note: make sure to lock the data when reading and writing to it. This is why it's important to do this behind the scenes; if you lock the data while the user is hitting your site, they may have to wait in a queue. If the queue happens behind the scenes, it's not a big deal, as the server will empty it eventually. A sketch of this follows right after this list.
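
Here's a minimal sketch of that batching in Go. The page table schema and the DSN are assumptions, and the blank driver import shown is just one commonly used MySQL driver:

```go
package main

import (
	"database/sql"
	"log"
	"sync"
	"time"

	_ "github.com/go-sql-driver/mysql" // one common MySQL driver choice
)

type countBatcher struct {
	mu      sync.Mutex
	pending map[int64]int64 // pageID -> increments not yet written to MySQL
	db      *sql.DB
}

func newCountBatcher(db *sql.DB, every time.Duration) *countBatcher {
	b := &countBatcher{pending: make(map[int64]int64), db: db}
	go func() {
		for range time.Tick(every) {
			b.flush()
		}
	}()
	return b
}

// Hit is what the request path calls; it only touches memory.
func (b *countBatcher) Hit(pageID int64) {
	b.mu.Lock()
	b.pending[pageID]++
	b.mu.Unlock()
}

// flush swaps out the pending map under the lock, then writes one UPDATE per
// page, turning dozens of increments into a single statement each.
func (b *countBatcher) flush() {
	b.mu.Lock()
	batch := b.pending
	b.pending = make(map[int64]int64)
	b.mu.Unlock()

	for pageID, n := range batch {
		_, err := b.db.Exec(
			"UPDATE page SET count = count + ? WHERE id = ?", n, pageID)
		if err != nil {
			log.Printf("flush failed for page %d: %v", pageID, err)
		}
	}
}

func main() {
	// Placeholder DSN; point this at your own database.
	db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/analytics")
	if err != nil {
		log.Fatal(err)
	}
	b := newCountBatcher(db, 2*time.Minute)
	b.Hit(1) // the request path would call this on every page view
}
```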

Cache everything

Make caching a priority from day one even if you don't think it'll matter; you'll be thankful you got into the habit. If you use a framework for developing applications, look into how it does caching. Chances are things like templates are cached in production but not in development, but have you tested it? How do your headers look? Have you used an ETag before to check whether the user already has the latest content cached?
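
As a small example of the ETag idea, here's a sketch in Go; the renderPage helper is hypothetical and stands in for whatever your framework produces:

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"net/http"
)

func pageHandler(w http.ResponseWriter, r *http.Request) {
	body := renderPage() // hypothetical: ideally cached template output

	// Derive a cheap validator from the content itself.
	etag := fmt.Sprintf(`"%x"`, sha1.Sum(body))
	w.Header().Set("ETag", etag)

	// If the client already has this exact version, skip sending the body.
	if r.Header.Get("If-None-Match") == etag {
		w.WriteHeader(http.StatusNotModified)
		return
	}

	w.Write(body)
}

func renderPage() []byte {
	// Placeholder for whatever your templates render.
	return []byte("<html><body>hello</body></html>")
}

func main() {
	http.HandleFunc("/", pageHandler)
	http.ListenAndServe(":8080", nil)
}
```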

You could use the same principles explained above for caching MySQL requests to cache template rendering as well, where it applies, or at least the pieces of the template that stay static across a subset of users.

Minify what you can, compress what you can - the bytes add up

This is something I was never concerned about before the work I do now, but the difference between being billed for serving a 5 kB JS file unminified, which might cost $1k/mo, and serving a compressed 2 kB version of it could be $600/mo. There are plenty of tools out there that make this dead simple. Look at the images on your site: do they have to be full resolution? Are you serving a larger file than required? I'm not talking double the resolution for HiDPI screens, I'm talking 5x or 10x the size. Have you seen Facebook's photos? They're compressed to hell and back and they're doing fine, so I'm sure you can compress your photos too without anyone complaining.
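
If you serve assets yourself rather than from a CDN, compressing them on the wire is an easy win. Here's a sketch of a small gzip middleware in Go; the /pixel.js route and its contents are purely illustrative:

```go
package main

import (
	"compress/gzip"
	"io"
	"net/http"
	"strings"
)

// gzipWriter sends the response body through a gzip.Writer.
type gzipWriter struct {
	http.ResponseWriter
	gz io.Writer
}

func (g gzipWriter) Write(b []byte) (int, error) { return g.gz.Write(b) }

// withGzip compresses responses for clients that advertise gzip support.
func withGzip(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r)
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		gz := gzip.NewWriter(w)
		defer gz.Close()
		next.ServeHTTP(gzipWriter{ResponseWriter: w, gz: gz}, r)
	})
}

func main() {
	js := []byte("console.log('tracking pixel loaded');") // stand-in for your minified JS
	asset := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/javascript")
		w.Write(js)
	})
	http.Handle("/pixel.js", withGzip(asset))
	http.ListenAndServe(":8080", nil)
}
```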

Profile

Figure out how to profile your code in a way that makes sense to you. The network console in Chrome is great because it shows a clear breakdown of where a request is taking the longest. Also see if you can profile any 3rd party requests your application makes; a simple start/stop timer is all you need. Take your MySQL query and throw EXPLAIN in front of it to see whether there's anywhere you can add a key. I actually found that upgrading from Percona MySQL 5.5 to 5.6 cut one of my queries from 30 seconds to 10 seconds because of the way I was using joins. After you have done the hard work of setting up a good caching system and optimizing your outputs, profiling is just low-hanging fruit.
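
For the start/stop timer, something as small as this Go sketch does the job; the URL and labels are just placeholders:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// timeIt is the "simple start/stop timer": run fn and log how long it took.
func timeIt(label string, fn func()) {
	start := time.Now()
	fn()
	log.Printf("%s took %s", label, time.Since(start))
}

func main() {
	timeIt("3rd party API", func() {
		// Hypothetical external request you suspect is slow.
		resp, err := http.Get("https://example.com/api")
		if err == nil {
			resp.Body.Close()
		}
	})

	timeIt("profile query", func() {
		// Stand-in for a MySQL call; run EXPLAIN on the query itself to
		// see whether an index (key) would help.
		time.Sleep(25 * time.Millisecond)
	})
}
```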


These are tips I've begun to live by over the past few months, and they've never let me down. It may take a bit more development time to implement these techniques from day one, but it's a lot quicker than refactoring down the line when you're going to need them anyway.

This post was inspired by a comment I was writing on Hacker News, where I realized that an app I wrote last week is currently on pace to serve at least 970 million impressions per month on limited hardware resources. It still blows my mind, because it's so much data passing through at such a high rate, something I never thought I'd be doing when I was making drag-and-drop websites on GeoCities 15 years ago.

Hopefully this article was useful! In the coming months I'm hoping to make videos explaining how to use some of these techniques in Go, to show the steps involved and see how they work in real time.

Let me know if there are any tips you have for keeping your application lean! I’d love to learn if there are things I can improve upon.

Last Updated: 2015-12-08 11:29:10 +0000 UTC


