Carl Wilhelm Soderstrom wrote:
>>> I would really like to see hard drives made to be more reliable, rather than
>>> just bigger.
>> I'm not sure that can be improved enough to matter. A failure of an
>> inexpensive part once in five years isn't a big deal other than the side
>> effects it might cause, and unless they can be 100% reliable (probably
>> impossible) you have to be prepared for those side effects anyway.
>
> on a small scale, perhaps.
> when you get to an organization the size of (TBS|Hotmail|Yahoo), though, it may
> make sense to spend a little more money in order to spend less on labor
> swapping parts.
I think before you get to that scale you'd be network-booting
diskless and mostly disposable motherboards. You still have to deal
with the drives somewhere, but using larger ones instead of large
numbers of them.
> the market seems to agree with you, though... which is a pretty good indictment
> of central-planning notions (i.e., me) vs. what really is efficient.
I find complete-system redundancy with commodity boxes to be cheaper and
more efficient than using military-strength parts that cost 10x as much
and fail half as often. In our operations the most common failure is in
software anyway, since we try to push out new capabilities as fast as
possible, so I like lots of redundancy and the ability to run multiple
versions of things concurrently. I could see where that would be
different for operations where everything had to be in a single
database, though.
>> > I want reliable storage.
>> Run mirrored hot-swap drives. If one dies, replace it at some convenient
>> time, sync it up, and keep going. I have one machine, last booted in
>> mid-2003, that has had a couple of its drives replaced that way.
>
> Your point is well-taken. I do keep in mind, though, that I've seen multiple
> drives fail simultaneously or in rapid succession, and that the process of
> replacing drives costs time and effort.
Yes, your building might catch fire or be flooded too. Those are
probably more likely than multiple cheap drives failing simultaneously
from some cause that wouldn't also kill better drives. You need offsite
backups to cover these unlikely events anyway.
> It is not as trivial as you might
> think, once you factor in the time to detect the failure, find the
> replacement (if you can), replace the drive (which may involve removing the
> old drive from the carrier and replacing it with the new one), and make sure
> it's working properly. In an ideal world it's 5 minutes of work, in the real
> world I usually expect to lose an hour or two dealing with a failed drive.
The real killer is the time spent planning out the replacement operation
so that the only time lost is that of the one person who does the work.
In the whole-system scenario, where the systems are load-balanced anyway,
yanking a machine out isn't anything special and you don't even need the
hot-swap drive case.
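For the mirrored-drive replacement being discussed, the swap itself can be
sketched with Linux software RAID. This is a hedged sketch, not anyone's
actual procedure from the thread: the array name /dev/md0 and the device
names /dev/sda1 and /dev/sdb1 are illustrative assumptions, and the
commands are destructive, so check device names against /proc/mdstat first.

```shell
# Sketch: replacing a failed member of a two-disk RAID1 mirror.
# Assumes /dev/md0 is the array, /dev/sdb1 is the failed member,
# and /dev/sda is the surviving disk (all hypothetical names).

# Mark the bad member failed and remove it from the array.
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# After physically hot-swapping the drive, copy the partition
# layout from the surviving disk onto the new one (MBR disks).
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Add the new partition back; the mirror resyncs in the background.
mdadm --manage /dev/md0 --add /dev/sdb1

# Watch resync progress until the array is clean again.
cat /proc/mdstat
```

The point Les makes still applies: even with the commands scripted, the
hour or two goes into detection, planning, and verification, not typing.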
> This can be accounted as several hundred dollars of lost
> productivity in many cases; so it's worthwhile to spend more money
> on a better-quality drive sometimes.
In practice I don't see any relationship between price and reliability.
I'm inclined to think that they all come off the same production line
these days.
--
Les Mikesell
[EMAIL PROTECTED]
_______________________________________________
BackupPC-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/