On Tue, 23 Jan 2001 09:12:30 +0100 (MET), you wrote:
>On Mon, 22 Jan 2001 21:21:20 +0100 "Aley Keprt" <[EMAIL PROTECTED]> wrote:
>> > >However, there is no redundancy in a RAID0 set, hence it's more commonly
>> > >referred to as striping. Actually, introducing RAID0 increases the
>> > >probability of a disk crash (and hence data loss without proper
>> > >backup) by the increase in disks.
>> > >
>> > > -Frode
>>
>> What kind of theory is this?!
>
>No theory - simple math.
>
>> How can it increase the probability of a disk crash? Is it just because of
>> using two disks?
>> If so, it is nonsense.
>
>No. Disks come with an MTBF. If you add disks, this per-disk MTBF remains
>(almost) constant. The MTBF of the entire RAID set then decreases
>as the number of disks increases.
>
>> 450MB means it can possibly fill my memory 5 times per second :-)))
>
>Provided your bus can handle it....
>
>> > Yeah, that's true, but for my home system I'm not too concerned about
>> > redundancy, although this m/board can do RAID 0+1. I back up a few
>> > of the things I want to keep, erm... sometimes <g>. It's not worth the
>> > cost of losing the extra disks; most stuff I can re-install.
>> >
>> > It feels good having it though ;o), Win2k boots nice and quick, and I've
>> > got loads of space to put my SAM images.
>>
>> What data losing are you talking about? I think hard drive failure is not a
>> common problem
>> (compared e.g. to strange problems of M$ Anything <enter any year here>).
>> Or not?
>
>If you believe that, you have not been exposed to any real-life
>disk crashes. If you have one or two disks you will rarely
>experience a disk crash. However, as you add drives, the total
>MTBF quickly decreases. That is why RAID was invented in the first
>place (keeping RAID 0 out of it).
>
>In my work as an administrator I have experienced about 15 disk
>crashes - ~10 of which were cheap PC IDE drives, 3-4 SCSI, and one
>FW disk.
>
> -Frode
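To make Frode's MTBF point concrete, here is a minimal back-of-envelope sketch (assuming independent, roughly exponential failures; the function name and the 500,000-hour figure are illustrative assumptions, not numbers from the thread):

    # A RAID0 stripe set fails as soon as any one member disk fails,
    # so under independent, exponential failures the effective MTBF of
    # the set is roughly the single-disk MTBF divided by the disk count.
    def raid0_mtbf_hours(single_disk_mtbf_hours, n_disks):
        return single_disk_mtbf_hours / n_disks

    # Illustrative only: two 500,000-hour drives striped together give
    # the set an expected MTBF of about 250,000 hours.
    print(raid0_mtbf_hours(500000, 2))

In other words, doubling the spindles roughly halves the expected time to the first failure of the set, which is why the total MTBF "quickly decreases" as drives are added.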
Although I use my system a lot at home, the chance of me having a drive fail is very, very slim. If they fail, they do it straight away, or last for years; even though my system is usually on 24/7, it's still not a big issue.

The problem of data recovery in the event of failure is a lot worse with RAID, but hell, it's only like the time I booted to a command prompt in Windows and wanted to format a floppy, so I did the usual: format c: /u/q/s. Then it asked me if I'm sure... durr.. of course I'm sure, I wouldn't have typed it if I wasn't..... oh shit!!!... my floppy isn't 8,095 MB !!!!!!.... oops ;o). I think I'm a bit _too_ used to trashing systems at work and re-installing. I was going to have a go at recovering the file system but decided it wasn't worth the hassle, especially since I put the system files on there as well.

Any stuff I lose is just tough luck; that's the compromise for the increase in speed. Everything in life's a compromise :o)

We've been fairly lucky at our place. I haven't had too many disks die so far on the servers, mainly a few SCSI ones, but they were in the mail server, and that system wasn't really too high quality on the cooling front. There was one time when one of the drives was getting hammered so much that it keeled over the Linux box with loads of SCSI bus errors, and I had to down the server, hold it with the case off for about 5 minutes in front of the air-conditioning to cool it down, then boot it back up again.. worked a treat. :)

--
Dean Liversidge
[EMAIL PROTECTED]

