I'm running this by you guys to make sure we're not trying something
completely insane. ;)
We already rely on memcached quite heavily to minimize load on our DB
with stunning success, but as a music streaming service, we also serve
up lots and lots of 5-6MB files, and right now we don't have a
I'm not sure how well a reverse proxy would fit our needs, having
never used one before. The way we do streaming is a client sends a one-
time-use key to the stream server. The key is used to determine which
file should be streamed, and then the file is returned. The effect is
that no two
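The one-time-key flow described above can be sketched minimally in Python. The function names and the in-memory dict are illustrative assumptions, not the poster's actual code — a real deployment would presumably keep the key-to-file mapping in memcached or the DB.

```python
# Minimal sketch of a one-time-use streaming key, assuming an
# in-memory store (stand-in for memcached/DB).
import secrets

_keys = {}  # key -> file path (hypothetical store)

def issue_key(file_path):
    """Create a single-use key that maps to one file."""
    key = secrets.token_urlsafe(16)
    _keys[key] = file_path
    return key

def redeem_key(key):
    """Return the file path for a key and invalidate it.

    Returns None if the key is unknown or already used, which is
    what makes the resulting URL non-reusable.
    """
    return _keys.pop(key, None)
```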
You could put something like varnish in between that final step and your
client...
so the key is pulled in, the file is looked up, then the file is fetched
*through* varnish. Of course I don't know offhand how much work it would be
to make your app deal with that fetch-through scenario.
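The fetch-through idea might look something like this as a Varnish VCL fragment. This is a hypothetical sketch, assuming the app requests the *resolved file path* (not the one-time key) from Varnish, so identical files share a single cache entry; the backend host/port and TTL are made up.

```
# Hypothetical VCL sketch for fetch-through caching.
backend storage {
    .host = "127.0.0.1";   # assumed internal file server
    .port = "8080";
}

sub vcl_recv {
    # App asks for /files/<resolved-path>, never the one-time key,
    # so repeat fetches of a popular track hit the cache.
    return (lookup);
}

sub vcl_fetch {
    set beresp.ttl = 1h;   # illustrative TTL for 5-6MB audio files
}
```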
Since these files are
dormando wrote:
You could put something like varnish in between that final step and your
client...
so the key is pulled in, the file is looked up, then the file is fetched
*through* varnish. Of course I don't know offhand how much work it would be
to make your app deal with that fetch-through scenario.
You could also redirect the client to the proxy/cache after computing the
filename, but that exposes the name in a way that might be reusable.
perlbal is great for this... I think nginx might be able to do it too?
Internal reproxy. Server returns headers for where the load balancer is
to
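The internal-reproxy pattern mentioned above is something nginx supports via its `X-Accel-Redirect` header (perlbal does the equivalent with `X-REPROXY-FILE`/`X-REPROXY-URL`). A hypothetical nginx sketch, with made-up paths and upstream names:

```
# App validates the one-time key, then answers with
#   X-Accel-Redirect: /protected/track123.mp3
# and nginx serves the file itself.
location /stream/ {
    proxy_pass http://app_backend;   # assumed app upstream
}

location /protected/ {
    internal;                        # only reachable via X-Accel-Redirect
    alias /var/music/;               # assumed path to the audio files
}
```

The nice property is that the one-time key never turns into a reusable URL: the `/protected/` location is refused for external requests.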
On Nov 2, 2009, at 1:35 PM, Jay Paroline wrote:
What I'd love to do is get those popular files served from memory,
which should alleviate load on the disks considerably. Obviously the
file system cache does some of this already, but since it's not
distributed it uses the space a lot less efficiently.
Perhaps tmpfs may be an option. One benefit of tmpfs is that you can
create a filesystem that is larger than physical memory, so the virtual
memory manager will swap unused items out to disk. You could then export
the filesystem over NFS, or do something else with it.
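Concretely, that setup might look like the following fstab/exports fragments. The mount point, size, and network range are illustrative assumptions; size it against your RAM plus swap.

```
# Hypothetical /etc/fstab entry: a tmpfs that can exceed physical RAM;
# cold files get paged out to swap by the VM.
tmpfs  /export/hotcache  tmpfs  size=32g  0 0

# Hypothetical /etc/exports line to share it with the stream servers.
/export/hotcache  10.0.0.0/24(ro,async,no_subtree_check)
```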