A Crazy Idea for Web Servers Serving Large Files

While staring at the output of an Apache server-status page on a server that was getting smashed by hundreds of users simultaneously downloading the latest Ubuntu release, I got to thinking about how a web server reads files off the disk.

When multiple clients are connected and all downloading the same file – as often happens, given that we host many popular files – they’re downloading at varying speeds (depending on their Internet connection and the server load), so the web server ends up reading different parts of the file off the disk and sending them to heaps of different users, all at different rates.

This results in a lot of seeking as the hard drives spin furiously to try and keep up with the demand from the HTTP server, which in turn is trying desperately to service the requests of users. This turns into load, which turns into slower downloads.

Our experience is that – under peak load on a typically configured web server – users can suck down files much faster than we can send them out. Even though we have bitchin’ servers on awesome connections, it only takes so many cable users with their 30 Mbit connections trying to suck down at full capacity before we hit the wall – bear in mind that just 30 such users will chew through around 900 Mbit of bandwidth!

The obvious solution to this problem is to stuff the file into RAM and serve it from there. There are many ways to do this – RAM drives, mmapping with an Apache module, etc. This is probably the best way, but it requires that you have a buttload of RAM available to do it for large files.
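
Just to make the RAM idea concrete, here’s a minimal sketch using Python’s built-in http.server and mmap – the file path and port are made up, and a real setup would use a proper Apache module or a RAM disk rather than this toy server:

```python
# Toy sketch: serve one large file from memory instead of hitting the disk
# on every request. FILE_PATH and PORT are placeholders, not a real config.
import http.server
import mmap

FILE_PATH = "/srv/files/ubuntu.iso"   # hypothetical file
PORT = 8080

# Map the file into memory once; the OS keeps the hot pages resident in RAM.
f = open(FILE_PATH, "rb")
cached = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

class CachedFileHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(cached)))
        self.end_headers()
        # Every client is served from the same in-memory mapping, so no
        # per-request disk reads happen once the pages are resident.
        self.wfile.write(cached)

http.server.ThreadingHTTPServer(("", PORT), CachedFileHandler).serve_forever()
```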

The Crazy Idea

The crazy idea I was thinking about was this: instead of letting the disks bottleneck the web server because of all the different clients downloading different parts of the file at different speeds, you come up with a way to synchronise the file reads across user downloads.

The desired outcome of this idea is to reduce the overhead on the disks by sacrificing the download speed of some specific users. Obviously, such a method would result in users with faster downloads getting worse performance – but there exists a point at which the server is loaded so heavily they’re getting slow downloads anyway.

The complicated part (at least, the only complicated part I can think of – this idea might completely suck for other low-level reasons I’m not aware of or haven’t considered) would be keeping track of all the users, their download speeds, and the current “position” of each download (i.e., the number of bytes already sent to that user), then slowing down the faster connections to bring them in line with the slower ones – so the web server is only reading each part of the file from the disk ONCE, but sending it to MULTIPLE users.
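
Very roughly, the bookkeeping I have in mind looks something like this – all the names and structures here are hypothetical, just to illustrate the “read once, send to many” part:

```python
# Rough sketch: group clients by file offset, read each chunk from disk once,
# and fan it out to every client sitting at that offset. Not a real server.
from collections import defaultdict

CHUNK = 64 * 1024  # 64 kbytes per disk read

class Client:
    def __init__(self, sock):
        self.sock = sock
        self.offset = 0          # bytes already sent to this client

def pump(file_obj, clients):
    """One pass: one disk read per distinct offset, shared by the whole group."""
    by_offset = defaultdict(list)
    for c in clients:
        by_offset[c.offset].append(c)

    for offset, group in by_offset.items():
        file_obj.seek(offset)
        chunk = file_obj.read(CHUNK)   # single read shared by the group
        if not chunk:
            continue
        for c in group:
            c.sock.sendall(chunk)      # faster clients wait for the slowest
            c.offset += len(chunk)
```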

With some clever programming it seems feasible to me that you could come up with rough user groups as well – for example, users downloading at between 50 and 75 kbytes/sec could all be lumped together and set to download at 50 kbytes/sec, users downloading at between 200 and 300 kbytes/sec could be grouped and set at 200 kbytes/sec, and so on.
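
The grouping could be as dumb as a lookup table – something like this sketch, where the bucket boundaries are just the example numbers above:

```python
# Hypothetical rate buckets matching the examples above: each client gets
# clamped to the floor of its bucket so the whole bucket stays in sync.
BUCKETS = [          # (low, high, enforced rate), all in kbytes/sec
    (50, 75, 50),
    (200, 300, 200),
]

def enforced_rate(measured_kbps):
    """Return the capped rate for a client, or None to leave it untouched."""
    for low, high, cap in BUCKETS:
        if low <= measured_kbps <= high:
            return cap
    return None
```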

Optimally, such a system would be smart enough to turn itself on only when it detects disk activity passing a certain threshold – so under normal circumstances everyone downloads at full speed, but once processes start getting stuck waiting for I/O, this magical new mode kicks in, multiple connections for the same file in the same speed categories start syncing, and disk wait times come back down.
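
On Linux, for instance, you could watch the iowait figure in /proc/stat and only flip the sync mode on past some threshold – the 10% number in this sketch is completely arbitrary:

```python
# Sketch: enable the sync mode only when I/O wait climbs past a threshold.
# Field layout follows /proc/stat on Linux; the 10% threshold is made up.
import time

IOWAIT_THRESHOLD = 0.10  # 10% of CPU time stuck waiting on disk

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]   # "cpu user nice system idle iowait ..."
    return [int(x) for x in fields]

def iowait_fraction(interval=1.0):
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [a - b for a, b in zip(after, before)]
    total = sum(deltas)
    return deltas[4] / total if total else 0.0   # field 4 is iowait

def sync_mode_wanted():
    return iowait_fraction() >= IOWAIT_THRESHOLD
```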

I would love to know if such a system has already been implemented in an existing web server (I’ve had a quick search and can’t find anything like it), or if there are any reasons why such a system would be impossible to implement – or if indeed it’s just a dumb idea.

5 thoughts on “A Crazy Idea for Web Servers Serving Large Files”

  1. Well, sort of – I don’t want to start a “broadcast” at a given time and rely on users all starting their download at the same time – I want the server to be smart enough to know when it’s serving the same file to multiple clients at the same time, and to be able to dynamically adjust download rates to move to a ‘multicast’ kind of scenario.

  2. With the latest version of Symantec GhostCast server you can join the session at any time. The server keeps sending the file out continuously, so if you join when the server is 51% of the way through, the client stays connected once it reaches the 100% marker and keeps saving from the 1% marker up until it reaches the point where it first joined the session. I think the session can be either multicast or unicast.

    It’s not HTTP, but it shows the idea can work in practice.

  3. Yeh, that’s sorta similar. The advantage of figuring it all out magically in the web server, though, is that clients can use this magical new system without needing any special software or changes on their end – only the server needs to change.

  4. It’s a good idea; the only question for me is whether the CPU overhead of tracking everything that’s required would be too resource intensive.

    Were I running a high-demand file server I would RAID-1 that baby with as many HDDs as my budget allowed. HDDs are cheap as hell, and adding more disks to a RAID-1 mirror greatly increases read speed – which is all you care about, since you’re the only one writing to the server; everyone else is reading from it.
