As most of you know, I haven't been active with libsl recently. There are
many reasons for that, most of them having to do with priorities and school
work.

But do not fear! I will be back very soon, and I have lots of ideas. Heck,
I'll share one with all of you (if you make it before I even start, can I
have a lil' credit? :P [EMAIL PROTECTED] is my paypal*)

The global cache program will be a client- and server-side service. The
client will be written with libsl, building on SLProxy. It captures all
packets for image, sound, and animation transfers. The proxy will not impede
any transfers; instead it captures each request, takes the UUID out of it,
and lets the request go on to LL. It then contacts the cache server and asks
whether the server has that UUID in storage. If it does, the client
downloads that asset from the server and injects the data packets very
quickly. If the asset is not on the server, the client flags the UUID and
records that asset from LL as it comes through the pipe. When the download
is done, the client compresses the asset and sends it off to the server in
the background.
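The client-side flow above can be sketched roughly like this. This is a toy illustration only: `CacheClient`, `on_asset_request`, `on_asset_downloaded`, and `FakeServer` are hypothetical names I made up, not real libsl or SLProxy APIs, and the "server" here is an in-memory stand-in for the remote cache.

```python
class CacheClient:
    """Sketch of the proxy's cache-check flow (all names hypothetical)."""

    def __init__(self, server):
        self.server = server   # stands in for the remote cache server
        self.pending = set()   # UUIDs flagged for upload after download

    def on_asset_request(self, uuid):
        """Called when the proxy sees an asset request passing through.

        Returns cached asset data to inject locally, or None to let the
        request continue on to LL untouched.
        """
        data = self.server.fetch(uuid)
        if data is not None:
            return data            # cache hit: inject these packets
        self.pending.add(uuid)     # cache miss: flag UUID for later upload
        return None

    def on_asset_downloaded(self, uuid, data):
        """Called when a flagged asset finishes downloading from LL."""
        if uuid in self.pending:
            self.server.store(uuid, data)  # would compress + upload in background
            self.pending.discard(uuid)


class FakeServer:
    """In-memory stand-in for the cache server, just to exercise the flow."""

    def __init__(self):
        self.assets = {}

    def fetch(self, uuid):
        return self.assets.get(uuid)

    def store(self, uuid, data):
        self.assets[uuid] = data
```

A miss lets the request through and flags the UUID; once the asset arrives from LL it gets pushed to the cache, so the next request for the same UUID is a hit.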

The server side is scalable. In fact, it will be built on Amazon Web
Services such as S3 and EC2. An array of EC2 instances will handle all the
requests. One EC2 (or a regular Linux server) will act as a DNS and manage
the array. It will be efficient in that it tracks the demand on the service
and opens or closes EC2 instances appropriately, minimizing cost while
maximizing stability. Example: let's say it's a Sunday evening and SL just
hit peak concurrency. At the same time, 1000 people are running the client
with SL, and together those clients are sending over 50 requests per second
to the DNS. The array holds only one EC2, and that server's load is at 90%
and climbing. The master server (DNS) will then open up a new EC2, add it to
the array, and distribute requests to the new one through round-robin DNS.
As night comes along, both servers sit at around 0.4 CPU load and a total of
about 40 requests per second. The primary server will then change the DNS to
point to only one of those servers and shut down the other once the DNS
propagates.
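The master server's scaling decision could look something like the sketch below. The 90% and 0.4-load figures come from the example above; treating them as fixed thresholds, and the function name itself, are my assumptions, not part of the original design.

```python
def scaling_action(avg_load, server_count, high=0.90, low=0.40):
    """Decide whether the master (DNS) server should resize the EC2 array.

    avg_load:     average CPU load across the array, 0.0-1.0
    server_count: EC2 instances currently in the round-robin DNS pool
    Thresholds are taken from the example in the post and are illustrative.
    """
    if avg_load >= high:
        return "open"    # spin up a new EC2 and add it to the RR DNS pool
    if avg_load <= low and server_count > 1:
        return "close"   # point DNS at fewer servers, shut one down
                         # once the DNS change propagates
    return "hold"        # demand is in the comfortable range
```

Keeping at least one server in the pool (the `server_count > 1` guard) avoids scaling the array down to nothing during quiet hours.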

Another thing about the server array: since it is using AWS, we can use S3
to store all the assets privately. The reason we need EC2s to handle the
assets is to keep anyone from reverse engineering the proxy code,
downloading a lot of assets, and reuploading them to SL, sparking another
OMG COPYCACHE thingy. The EC2 servers encrypt the assets with a unique salt
and compress them down to save us bandwidth and increase delivery speed. I
know, it may be silly... but tell me what you all think.
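One way the compress-then-obfuscate step might look, as a minimal sketch: compress with zlib, then XOR with a SHA-256 keystream derived from a per-asset salt. The keystream XOR is a toy stand-in for real encryption (a production server would use an actual cipher), and `pack_asset`/`unpack_asset` are hypothetical names.

```python
import hashlib
import os
import zlib


def _keystream(salt, n):
    """Derive n pseudo-random bytes from the salt (SHA-256 in counter mode)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out.extend(hashlib.sha256(salt + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:n])


def pack_asset(data, salt=None):
    """Compress an asset, then obfuscate it with a unique per-asset salt."""
    salt = salt or os.urandom(16)
    compressed = zlib.compress(data)
    cipher = bytes(a ^ b for a, b in zip(compressed, _keystream(salt, len(compressed))))
    return salt, cipher


def unpack_asset(salt, cipher):
    """Reverse the XOR with the same salt-derived keystream, then decompress."""
    plain = bytes(a ^ b for a, b in zip(cipher, _keystream(salt, len(cipher))))
    return zlib.decompress(plain)
```

Compressing before obfuscating matters: ciphertext-like data doesn't compress, so doing it the other way around would forfeit the bandwidth savings.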

I'll be back in a month or so (or maybe earlier!)
Sorry for the bad grammar, had to rush through it.

Alpha / Qode
_______________________________________________
Libsecondlife-dev mailing list
[email protected]
https://lists.berlios.de/mailman/listinfo/libsecondlife-dev
