A Crazy Idea for Web Servers Serving Large Files

While staring at the output of the Apache server-status page on a server that was getting smashed by hundreds of users simultaneously downloading the latest Ubuntu release, I got thinking about how a web server reads files off the disk.

When multiple clients are all downloading the same file – as often happens, given that we host many popular files – they’re downloading at varying speeds (depending on their Internet connection and server load), so the web server is reading different parts of the file off the disk and sending them to heaps of different users, all at different rates.

This results in a lot of seeking as the hard drives spin furiously trying to keep up with the demand from the HTTP server, which in turn is desperately trying to service users’ requests. That turns into load, which turns into slower downloads.

Our experience is that – under peak load on a typically configured web server – users can suck down files much faster than we can send them out. Even though we have bitchin’ servers on awesome connections, it only takes so many cable users with their 30 Mbit connections sucking down at full capacity before we hit the wall – bear in mind that 30 such users alone will use around 900 Mbit/s of bandwidth!

The obvious solution to this problem is to stuff the file into RAM and serve it from there. There are many ways to do this – RAM drives, mmapping with an Apache module, etc. This is probably the best way, but it requires that you have a buttload of RAM available, especially for large files.
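
If you were hacking this up outside Apache, the dumbest possible version looks something like the sketch below – a toy Python handler (the file name and port are made up) that mmaps the file once at startup and serves every request from that mapping, so repeated downloads hit the page cache rather than the disk:

```python
# Toy sketch of the "serve it from RAM" approach, using Python's http.server
# purely for illustration -- a real deployment would do this inside Apache/nginx.
import mmap
from http.server import BaseHTTPRequestHandler, HTTPServer

FILE_PATH = "ubuntu.iso"  # hypothetical file name

# Map the file into memory once at startup; requests are then served from the
# mapping / page cache instead of issuing fresh disk reads for every client.
with open(FILE_PATH, "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

class RamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(mapped)))
        self.end_headers()
        self.wfile.write(mapped)  # copies from the mapping, not the disk

if __name__ == "__main__":
    HTTPServer(("", 8080), RamHandler).serve_forever()
```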

The Crazy Idea

The crazy idea I was thinking about was this – instead of letting the disks bottleneck the web server because different clients are downloading different parts of the file at different speeds, you come up with a way to synchronise the file reads across user downloads.

The desired outcome of this idea is to reduce the overhead on the disks by sacrificing the download speed of some specific users. Obviously, such a method would result in users with faster downloads getting worse performance – but there exists a point at which the server is loaded so heavily they’re getting slow downloads anyway.

The complicated part (at least, the only complicated part I can think of – this idea might completely suck for other low-level reasons I’m not aware of or haven’t considered) would be keeping track of all the users, their download speeds, and the current “position” of each download (i.e., the number of bytes they’ve already been sent), then slowing the faster connections down to bring them in line with the slower ones – so the web server reads from the disk ONCE, but sends the data to MULTIPLE users.
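
To make that bookkeeping concrete, here’s a rough, purely illustrative sketch (all the names – Client, SharedDownload, pace_for – are made up, and a real implementation would live inside the web server itself): one shared read loop per file, a list of connections with their positions and measured speeds, and a pacing rule that holds the group back to the slowest member.

```python
# Rough, illustrative sketch of the bookkeeping described above.
# All names are hypothetical -- this is not how Apache structures its workers.
import time
from dataclasses import dataclass, field

CHUNK = 64 * 1024  # read the file in 64 KB chunks


@dataclass
class Client:
    sock: object            # the client's connection
    position: int = 0       # bytes already sent to this client
    speed: float = 0.0      # measured download speed, bytes/sec


def pace_for(clients):
    """Delay per chunk so the group advances at the slowest client's speed."""
    slowest = min((c.speed for c in clients if c.speed > 0), default=0)
    return CHUNK / slowest if slowest else 0


@dataclass
class SharedDownload:
    path: str
    clients: list = field(default_factory=list)

    def pump(self):
        """Read each chunk from disk ONCE and fan it out to every client in the
        group -- faster connections are effectively throttled to the shared pace."""
        with open(self.path, "rb") as f:
            offset = 0
            while self.clients:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                for c in list(self.clients):
                    c.sock.sendall(chunk)        # same bytes, many sockets
                    c.position = offset + len(chunk)
                offset += len(chunk)
                time.sleep(pace_for(self.clients))  # hold the group to the slowest rate
```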

With some clever programming it seems feasible that you could come up with rough user groups as well – for example, users downloading at 50–75 kbytes/sec could all be lumped together and set to download at 50 kbytes/sec, users downloading at 200–300 kbytes/sec could be grouped and set to 200 kbytes/sec, and so on.
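
As an illustration, the grouping could be as simple as a bucketing table like this (the bucket boundaries come from the examples above; everything else is made up):

```python
# Hypothetical speed buckets: (lower bound, upper bound, shared group rate),
# all in kbytes/sec.  Real boundaries would need tuning.
BUCKETS = [
    (50, 75, 50),
    (200, 300, 200),
    # ... and so on for the other speed ranges
]

def group_rate(measured_kbps):
    """Map a client's measured speed onto the shared rate for its bucket;
    clients that fall outside every bucket just keep their own speed."""
    for low, high, rate in BUCKETS:
        if low <= measured_kbps < high:
            return rate
    return measured_kbps
```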

Ideally, such a system would be smart enough to turn itself on only when disk activity reaches a certain threshold – so under normal circumstances everyone downloads at full speed, but once processes start getting stuck waiting for I/O, this magical new mode kicks in, multiple connections to the same file in the same speed category start syncing, and disk wait times come back down.
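
On Linux, one crude way to detect that threshold would be to sample the iowait counter in /proc/stat and flip the sync mode on when it climbs past some fraction of total CPU time – the 10% figure below is completely arbitrary:

```python
# Crude iowait check on Linux: compare the iowait column of /proc/stat
# between two samples.  The 10% threshold is an arbitrary, made-up number.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]   # first line: aggregate "cpu" counters
    values = list(map(int, fields))
    return sum(values), values[4]           # (total jiffies, iowait jiffies)

def iowait_fraction(interval=1.0):
    total1, wait1 = cpu_times()
    time.sleep(interval)
    total2, wait2 = cpu_times()
    return (wait2 - wait1) / max(total2 - total1, 1)

SYNC_MODE_THRESHOLD = 0.10  # arbitrary: enable syncing above 10% iowait

def should_sync():
    return iowait_fraction() > SYNC_MODE_THRESHOLD
```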

I would love to know if such a system has already been implemented in an existing web server (I’ve had a quick search and can’t find anything like it), or if there are any reasons why such a system would be impossible to implement – or if indeed it’s just a dumb idea.