On Fri, Jun 09, 2006 at 06:51:00PM -0500, [EMAIL PROTECTED] wrote:
> > Or you have a file server that keeps some non-file-server-related state
> > in memory. The inability to serve any more requests is fine as long as
> > it can start serving them again at some point later, when there is more
> > memory. Dying is not acceptable, because the data kept in memory is
> > important.
>  
> I'm skeptical that this is a real-world problem. I've not run out of
> memory without hosing the system to the point where it needed to be
> rebooted.
> 
> Worse, all these failure modes need to be tested if this is production
> code.

  I believe this to be the crucial issue here. True, what Latchesar is
  after is a fine goal; it's just that I have yet to see a production
  system where it can save you from bigger trouble. Maybe I've been
  exceptionally unlucky, but that's the reality -- if you truly run out
  of memory, you're screwed. Your escape strategies don't work; worse
  yet, they are *very* likely to fail in a manner you don't expect, in
  a layer you have no knowledge about.
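
  To make that concrete, here is a minimal C sketch of my own (the names
  are invented for illustration, nothing from Latchesar's actual patch):
  the request path backs off when malloc fails, but the "safe" logging
  helper it escapes into allocates too, so under real memory pressure
  the escape strategy is the first thing to die.

	#include <stdio.h>
	#include <stdlib.h>

	/* hidden allocation inside the "safe" recovery path */
	static char *log_event(const char *what)
	{
		char *msg = malloc(256);

		if (msg == NULL)
			return NULL;	/* the recovery itself just failed */
		snprintf(msg, 256, "server: %s\n", what);
		return msg;
	}

	static int serve_request(size_t need)
	{
		void *buf = malloc(need);

		if (buf == NULL) {
			char *msg = log_event("out of memory, backing off");

			if (msg == NULL) {
				/* the escape hatch died in a layer
				 * serve_request knows nothing about */
				abort();
			}
			fputs(msg, stderr);
			free(msg);
			return -1;	/* caller may retry later */
		}
		/* ... handle the request with buf ... */
		free(buf);
		return 0;
	}

  And even that fputs is not truly safe -- stdio may lazily allocate its
  buffers, which is one more layer you have no knowledge about.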

  Consider this -- what's worse: a fossil server that died and lost
  some of the requests, or a server that tried to recover and committed
  random junk?

Thanks,
Roman.
