On Sat, May 05, 2007 at 02:20:35AM -0700, Stewart Stremler wrote:
> So let me see if I have our assumptions correct:
> 
> (1) Writes and reads are atomic, but may not be persistent. That is,
> the data in a file will never be corrupt.
> 
> (2) Data in a file might be silently replaced at any time.
> 
> (3) Files may be silently removed at any time.
> 
> (4) No latency guarantees are made for how long it takes a change to
> become universal.
> 
> (5) No ordering guarantees are made for which server updates which
> of its siblings when, or how.

I think I probably don't completely understand the requirements, but it
sounds like we're all too fascinated with implementing locking on a
service that doesn't support it well.  Would the problem get a lot
simpler if you handled the locking locally, so that each node knew that
it was the winner for the particular task it was taking on?
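To make that concrete, here's a minimal sketch of local "winner" election
(Python, Unix-only, with made-up names -- not anything S3-specific): each
process tries to take an exclusive, non-blocking lock on a per-task lock
file, and whoever gets it owns the task.

```python
# Sketch: claim a task by taking an exclusive, non-blocking flock()
# on a local lock file.  Whichever process gets the lock "wins";
# everyone else moves on.  Unix-only; names are illustrative.
import fcntl
import os

def try_claim(task_id, lock_dir="/tmp/task-locks"):
    os.makedirs(lock_dir, exist_ok=True)
    path = os.path.join(lock_dir, "%s.lock" % task_id)
    fh = open(path, "w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
        fh.close()
        return None        # someone else already owns this task
    return fh              # keep the handle open to hold the lock
```

Note the handle has to stay open for as long as you hold the task;
closing it releases the lock.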

I think trying to get S3 to do something it's not good at is maybe not
such a good idea, and since it's a fairly new service with a lot of
traction, it is likely to support these scenarios better in the future
without you having to build your own mini-infrastructure.
Infrastructures are a dangerous black hole because they're so much fun
to write.

Or perhaps the gang of machines does its local work, and then just
forwards the actual S3 manipulation to one server, so that part of it
is serialized.  But that's just a variation on handling the locking
locally.
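For what it's worth, that "one writer" variation is easy to sketch
(Python, all names made up, with a plain dict standing in for the
remote store): the workers hand their store operations to a single
writer thread, so all mutations happen in one serial stream.

```python
# Sketch: workers do their local work, then forward the resulting
# store operation to a single writer thread.  Only the writer ever
# touches the store, so writes are serialized by construction.
import queue
import threading

ops = queue.Queue()
store = {}                     # stand-in for the remote store (e.g. S3)

def writer():
    while True:
        op = ops.get()
        if op is None:         # sentinel: shut down
            break
        key, value = op
        store[key] = value     # only this thread ever writes
        ops.task_done()

t = threading.Thread(target=writer)
t.start()
for i in range(4):
    ops.put(("item-%d" % i, i * i))   # workers forward, never write
ops.put(None)
t.join()
```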

Alright... I just re-read the initial description.  Sounds to me like this
would be better handled with a local message queue.  The jobs are dumped
to the queue and your gang of machines are consumers of the queue.
Smarter people than us have solved these problems already.
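In-process, the same shape looks like this (Python stdlib only; a gang
of threads standing in for your gang of machines -- for real machines
you'd want an actual queue server, but the pattern is identical):

```python
# Sketch: jobs are dumped onto a local queue and a gang of worker
# threads consume them.  queue.Queue does the locking, so there is
# no coordination code to write.
import queue
import threading

jobs = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        job = jobs.get()
        if job is None:        # sentinel: this worker is done
            break
        results.put(job * 2)   # pretend "processing"
        jobs.task_done()

gang = [threading.Thread(target=worker) for _ in range(3)]
for t in gang:
    t.start()
for n in range(10):
    jobs.put(n)
jobs.join()                    # wait until every job is processed
for t in gang:
    jobs.put(None)             # one sentinel per worker
for t in gang:
    t.join()

total = sum(results.get() for _ in range(10))
print(total)
```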

B

-- 
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
