Good afternoon, all.  I'm wondering if anyone
has anything good or bad to say about kernel-resident
RAID implementations using the recent large IDE disks.
160 GB for $199, so 640 GB for $800 on a PC, or 480 GB
usable as RAID 5.  And that's without using PCI IDE-bus
expansion cards.
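For what it's worth, the RAID-5 arithmetic above works out like this
(a plain-sh sketch; disk count and nominal sizes are the ones quoted,
one disk's worth of capacity going to parity):

```shell
#!/bin/sh
# RAID-5 capacity: n disks give (n-1) disks' worth of usable space,
# since one disk-equivalent holds parity.
disks=4
size_gb=160
raw=$(( disks * size_gb ))            # total raw capacity
usable=$(( (disks - 1) * size_gb ))   # capacity after parity
echo "raw=${raw}GB usable=${usable}GB"
# -> raw=640GB usable=480GB
```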

        I'm using a RAID 0 set of 3 disks (WDC, Maxtor)
to transfer a few hundred GB of JPGs between two sites.
The PC (Dell GX150) has some hardware problems; the
Ethernet cards have to be reseated each time it's moved.
The RAID set worked fine in testing, but after moving
the PC to the second site, a cpio failed at only 77 GB
with "cpio: write error: No space left on device".  The
RAID set is 390 GB, and was filled to about 270 during
testing with no problems.  Copying files off it after
that returned some I/O errors, shown below.

        So, lots of space, but flaky?  Are hardware
controllers the better choice?
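On the software-RAID side, a quick first check (assuming Linux md here;
device names are hypothetical) is /proc/mdstat, where an underscore in
the "[UUU]" field marks a dead member.  A small sh sketch of that check:

```shell
#!/bin/sh
# Inspect an md status field like "[3/3] [UUU]" from /proc/mdstat.
# Every live member shows as "U"; a failed one shows as "_".
check_mdstat() {
  case "$1" in
    *_*) echo "DEGRADED" ;;   # at least one member is down
    *)   echo "OK" ;;         # all members up
  esac
}

check_mdstat "[3/3] [UUU]"    # healthy 3-disk set
check_mdstat "[3/2] [UU_]"    # one disk dropped out
```

On the box itself you would feed it the real line from /proc/mdstat,
and also look through dmesg for IDE DMA/CRC errors, which would fit a
machine whose cards already need reseating after every move.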


        By the way, the results of that Solaris 2.6 to
2.8 effort were that Oracle ran fine, but the Netscape
web server failed to open its ports.  netstat showed
port 80 in "BOUND" state; and of course there's no
source code.





tar: ./10.gz: Read error at byte 49152, reading 10240 bytes: Input/output
error
tar: ./10.gz: File shrank by 1096222 bytes; padding with zeros


cp: cannot stat `/z/27': Input/output error
cp: reading `/z/73.gif': Input/output error
cp: reading `/z/75.gif': Invalid argument


cpio: ./net: Value too large for defined data type
cpio: ./inc: unknown file type 
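Incidentally, "Value too large for defined data type" is errno
EOVERFLOW, which classically means a tool built without large-file
support hit a size (or inode number) that doesn't fit in a 32-bit
off_t.  The cutoff, for reference:

```shell
#!/bin/sh
# Largest value a signed 32-bit off_t can hold: 2^31 - 1 bytes,
# i.e. just under 2 GB.  Anything bigger makes stat() fail with
# EOVERFLOW in a non-LFS binary.
echo "32-bit off_t limit: $(( (1 << 31) - 1 )) bytes"
```

So it may be worth checking whether that cpio was compiled with
large-file support before blaming the array for those two lines.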




---
Send mail for the `bblisa' mailing list to [EMAIL PROTECTED]'.
Mail administrative requests to [EMAIL PROTECTED]'.
