begin quoting Todd Walton as of Sun, Mar 23, 2008 at 06:27:47PM -0500:
> On Sun, Mar 23, 2008 at 4:59 PM, Tracy R Reed <[EMAIL PROTECTED]> wrote:
> > Not really a RAM disk: Similar, but different:
> >
> > http://www.pcworld.idg.com.au/index.php/id;1829360654;pp;1
> 
> I can see it now.  Somebody uses this Ramback.  People say real-world
> gains outweigh the risks: "Come on, when's the last time your computer
> actually failed where data write integrity would have saved you?".
> Then one day it does, that one in ten thousand chance, and SJS blesses
> us with an I-told-you-so bit of wisdom, and grumbles about the way
> things should be in computing.

Well, a one-in-ten-thousand chance of failure will result in a problem
about what, every ten minutes, given that we're talking about a TERABYTE
of RAM?
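
(Back of the envelope, purely to show the scaling -- the per-unit rate
below is made up, and the point is just that the expected time to the
first failure shrinks linearly with how much RAM you pile up:)

   # Illustrative only: read "one in ten thousand" as a failure chance
   # per gigabyte per minute (an assumed unit, not a measurement).
   minutes_per_failure_per_gb = 10000.0   # assumed, not measured
   ram_gb = 1024                          # a terabyte of RAM
   print(minutes_per_failure_per_gb / ram_gb, "minutes to first failure")
   # -> roughly 9.8 minutes across the whole terabyte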

On the one hand, I *like* RAM-disks. (Remember, I'm an old Amiga bigot.
RAMDisks were *easy*: "I want a ramdisk" "okay, here you go.")  I think
that if you have a set of programs and data you want really fast access
to (as opposed to the amortized access times you get with caching), and
you're willing to give up some RAM, sure, go ahead.

However, the article says:

   Should an application call fsync(), the expected result (blocking
   until the data is written to physical media) will not happen.

...and that gets me annoyed.

What's wrong with respecting fsync()? Ignoring it betrays the user's
trust in the system, really.
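
(For the record, this is the contract fsync() is supposed to give you --
a minimal sketch using Python's os.fsync() wrapper; the filename is just
an example:)

   import os

   # After fsync() returns, the data is supposed to be sitting on
   # stable storage -- that's the whole promise.
   fd = os.open("important.dat", os.O_WRONLY | os.O_CREAT, 0o644)
   try:
       os.write(fd, b"data the application cannot afford to lose\n")
       os.fsync(fd)   # should block until the write hits physical media;
                      # per the article, Ramback returns without that guarantee
   finally:
       os.close(fd)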

He goes on:

   You just need to believe in your battery, Linux and the hardware it
   runs on. Which of these do you mistrust?

Duh. All of 'em.

I have six UPSes at home. All of them have failed at one time or
another.  Batteries are consumable items, and who goes around replacing
a UPS battery at home once a year "just in case"?

My Linux box at work had a period where OOo would regularly lock up
the system.  An upgrade has fixed that, but it was one of those rather
annoying and disturbing problems -- how can an _application_ screw
with my machine so badly as to require a 120-bounce?

There's a separation failure in there somewhere.

And as for the hardware... it is to laugh.

I trust my hardware because I _have_ to. Not because I actually get a
warm fuzzy thinking about the robustness of a cost-optimized design.

That being said, RAM seems to outlive disks... (probably not in a
per-megabyte sense, but in a per-unit sense, certainly).  So long as
the data on the disk is in a known-good state (LVM snapshot, perhaps),
emulating a disk entirely in RAM might be just peachy.
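
(Something like the following is what I have in mind -- a rough sketch
wrapping the obvious lvcreate/brd/dd invocations in Python; device names
and sizes are made up, and I haven't run this exact sequence:)

   import subprocess

   def run(cmd):
       subprocess.run(cmd, check=True)

   # 1. keep a known-good fallback on the disk
   run(["lvcreate", "--snapshot", "--size", "1G",
        "--name", "known_good", "/dev/vg0/scratch"])
   # 2. a RAM-backed block device (brd's rd_size is in 1 KiB units)
   run(["modprobe", "brd", "rd_size=1048576"])
   # 3. copy the volume into RAM and work from /dev/ram0 from here on
   run(["dd", "if=/dev/vg0/scratch", "of=/dev/ram0", "bs=4M"])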

But by the time a terabyte of RAM is common, I expect there to be
petabyte disks, so we'll be back to fighting the same old disparity.

> On the other hand, if you were virtualizing and using automatic
> failover it might not be a problem.

How do the guys running Blue Gene handle this sort of thing?

-- 
What's the MTBF on a per-byte rate?
Stewart Stremler

