This post won’t reveal anything that people familiar with server latency don’t already know: latency increases with the distance to a given server. It doesn’t go much beyond that, except that it attaches numbers and data to supplement the point. Hopefully the illustration helps some folks realize that you have to consider distance when evaluating server latency.
The site I used to run this test was my own. At the time I wrote this, my site was deployed on two servers: one in Dallas, the other in Missouri. I originally bought the Missouri server as my first step away from shared hosting. After deploying my site to it, I was surprised to find that pages were served more slowly than from my shared hosting server in California. That prompted me to purchase another server closer to home, and after a brief search I went with Tailor Made Servers in Dallas.
Before I start, a few things should be noted about each server:
- the Dallas server runs an Intel 3.1 GHz dual-core CPU, 4 GB of RAM, and 7200 RPM drives
- the Missouri server runs an AMD 1.8 GHz dual-core CPU, 8 GB of RAM, and 15k RPM drives
- both servers run PHP 5.3.3, APC (configured identically), and CentOS 5.5
- Apache and other configs are identical on both servers; as a sanity check, they score identically in YSlow and PageSpeed.
Overall, I’ve done everything I can to make the two servers as close to identical as possible. I don’t consider the hardware differences significant given the trivial amount of resources it takes to load my home page (the only page I tested against). Additionally, the ping and traceroute tests illustrated below barely tax a server’s hardware, if at all.
The first thing I tested was ping latency, which gives a quick glimpse of the impact distance has on latency. The results show roughly 77 ms for the Missouri server versus roughly 18 ms for the Dallas server:
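If you want to script this kind of check without ICMP privileges, timing a TCP connect gives a similar rough round-trip number. This is just a sketch of the idea, not what I actually ran, and the hostnames below are placeholders:

```python
import socket
import time

def tcp_rtt_ms(host, port=80, samples=5):
    """Average TCP connect time in milliseconds -- a rough RTT proxy,
    since the three-way handshake takes about one round trip."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; close immediately
        times.append((time.perf_counter() - start) * 1000)
    return sum(times) / len(times)

# Placeholder hostnames for the two servers:
# print(tcp_rtt_ms("dallas.example.com"))
# print(tcp_rtt_ms("missouri.example.com"))
```

Unlike ping, this includes TCP handshake overhead, but the relative difference between two servers comes through the same way.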
Next up, I wanted to see how many network hops it takes to reach each server, which traceroute can tell us. As shown, reaching the Missouri server takes 16 hops while the Dallas server takes only 12:
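Counting hops from traceroute output is easy to script if you want to compare several servers at once. A minimal sketch, using invented, abbreviated output rather than the real traces from this test:

```python
import re

# Invented, abbreviated traceroute output for illustration only.
SAMPLE = """\
traceroute to example.net (203.0.113.10), 30 hops max, 60 byte packets
 1  192.168.1.1  0.5 ms
 2  10.0.0.1  8.1 ms
 3  203.0.113.10  17.9 ms
"""

def hop_count(traceroute_output):
    """Count the numbered hop lines in traceroute's output."""
    return sum(1 for line in traceroute_output.splitlines()
               if re.match(r"\s*\d+\s", line))

print(hop_count(SAMPLE))  # → 3
```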
Finally, I pointed siege at both servers to measure the difference in first-byte latency and the number of requests per second each can handle:
- Missouri: 27 requests per second at roughly 160 ms first-byte latency
- Dallas: 100 requests per second at roughly 50 ms first-byte latency
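siege’s two headline numbers can be approximated with a short script if you don’t have it installed. This is a simplified sequential sketch, not siege’s concurrent load model, and the URL is a placeholder:

```python
import time
import urllib.request

def benchmark(url, requests=50):
    """Issue sequential GETs; return (requests/sec, mean ms until the
    first body byte arrives -- a rough first-byte-latency proxy)."""
    first_byte = []
    start = time.perf_counter()
    for _ in range(requests):
        t0 = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read(1)  # first body byte has arrived
            first_byte.append((time.perf_counter() - t0) * 1000)
            resp.read()   # drain the rest of the response
    elapsed = time.perf_counter() - start
    return requests / elapsed, sum(first_byte) / len(first_byte)

# Placeholder usage against each server:
# rps, ttfb = benchmark("http://dallas.example.com/")
```

Because the requests run one at a time, the requests-per-second figure here is a lower bound compared with siege driving many concurrent users.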
So with both servers set up almost identically, save for a CPU difference, a change in location alone resulted in roughly a 4x speedup for users viewing my site from Austin. I’ve since pointed DNS for solutionfactor.net at the Dallas server, so it’s worth noting that users viewing my site from areas near Missouri, or further north, will probably see higher latency. However, given that 90% of my users are based in Austin, I’ll take that hit in favor of speeding things up locally.