Hello all,
We are planning on using RAID-1 to mirror two identical drives.
The drives are set up the same in the BIOS: CHS mode with 25228
cylinders, 16 heads, and 63 sectors each. Linux sees them and shows
the same setup as the BIOS. Linux fdisk, on the other hand, shows
different
I've mostly been a lurker, but recent changes in my company have piqued my
interest in the performance of sw vs hw raid.
Does anyone have any statistics online comparing sw raid (1,5) vs hw raid
(1,5) on a Linux system?
Also, is there any way to have a hot-swappable sw raid system (IDE or SCSI)?
We have a DPT midrange SmartRAID-V and we're going to do testing on
I doubt it, unless the controller is hot-swap capable and you can reload
the IDE driver. I don't know of any hot-swap capable IDE
controllers (not to say that I wouldn't be interested if anyone else
on the list does!)
I would also be interested in a hot-swappable IDE controller.
Or am I going to run into trouble because /lib's files will be unavailable
for a bit while I enter these commands? Is there a better way to enlarge
/? In general, how do you recommend changing partition sizes? Is this an
argument for not separating directories into different partitions,
Perhaps I am wrong; I expected that a reboot would make the original /lib
available again at boot time. The data is still there, just hidden by the
mount, right?
Mounting only occurs after fstab is processed.
You can't process fstab with the mount command if there are no libraries
for
I know this has been covered before, but I can't find a searchable archive
of this list anywhere.
We've got DLTs doing backups right now, and we're thinking it might
be cheaper to set up a system with 2 or 3 linear striped or raid 0 34+gig
ide disks and have 2 sets of these disks that we
Price-wise, this seems like a good approach. If it were my system, I would
be concerned about disaster recovery. I have been a believer for a long
time in tape rotation and offsite storage. Also, you are risking losing
4 weeks' worth of data; a full backup at least weekly and incremental
While I'll be the last person to praise IDE, recent drives and controllers
have CRC error checking, which is actually better than parity.
Would you happen to know which drives and controllers?
The Promise UDMA66s? Any WD IDEs or IBM's 36+gig.
-sv
Has anyone on this list used or had any dealings with the Zero-D
UDMA-internal, SCSI-external RAID arrays?
This is the URL (the 400 model specifically):
http://www.zero-d.com/ide2.html
I'm interested in using them with Linux and/or Solaris, and I'd love to hear
any feelings or responses.
They look
Check out www.zero-d.com.
They make an EIDE-internal, UW SCSI-external RAID box that looks pretty
cool.
-sv
Unfortunately, the hardware RAID still doesn't solve the 2GB+ problem. I
also have a hard time with the 'if you want big files, buy a 64-bit machine'
argument. What percentage of Linux users are on 64-bit platforms? How many
other x86 OSes support 64-bit filesystems (NT, FreeBSD, BeOS,
I have an AcceleRAID 250 w/32MB of RAM. I've set up 3 IBM 18LZX drives
(18gig 10krpm LVD drives) on it in a raid 5 configuration.
Everything comes up great and functions just fine - but:
If I soft reboot the system (i.e. ctrl-alt-del or init 6), the DAC960 will
fail to detect the drives. If I
Ah, sorry for the puns and any confusion. I am talking about 2GB+
file sizes, not memory. That also proves my point - we now have 4GB
memory on 32-bit systems - which is only applicable for a VERY small
percentage of Linux users, but not 2GB files on 32-bit systems (once
again - even though
Nope. Bigmem was for 4 GB RAM and such, and has been pretty much replaced
by highmem (all culled from the Linux Memory Management mailing list). All
of the 2GB file stuff is mostly referred to as the Large File Summit (LFS),
not to be confused with the Log File System (LFS - no idea what it does).
Hi folks,
Got a small problem.
I'm running Red Hat 6.1+ (2.2.14-5.0 kernels from rawhide and new
raidtools 0.90-6). I've checked, and the 2.2.14-5.0 kernels are using the B1
patch from mingo's page. I think the raidtools they are using (mentioned
above) are the correct version.
Here is what happens:
I
How about having --force / -f look for $HOME/.md_force_warning_read, and
if it doesn't exist:
- print a huge warning (and beep thousands of times as desired)
- creat()/close() the file
How about an expiration on the timestamp of this file?
I.e. if it is older than 2 weeks, make them read it again.
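Just to make the idea concrete, here is a rough sketch of that check. It's
in Perl for brevity (mkraid itself is C, of course), and the stamp file
name, the warning text, and the two-week window are just the suggestions
from above, not anything an existing tool actually does:

#!/usr/bin/perl
# Sketch only: warn (and beep) unless $HOME/.md_force_warning_read exists
# and is less than two weeks old, then re-create the stamp file.
use strict;

my $stamp     = "$ENV{HOME}/.md_force_warning_read";
my $two_weeks = 14 * 24 * 60 * 60;

my @st = stat($stamp);
if (!@st || time() - $st[9] > $two_weeks) {
    # placeholder warning text; the real one would be much longer
    warn "WARNING: --force will happily overwrite md superblocks!\a\n";
    open(STAMP, ">$stamp") and close(STAMP);   # creat()/close() the file
    exit 1;                                    # make the user run it again
}
# stamp file exists and is fresh: go ahead with the forced operation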
I
If the partition types are set to "fd" and you selected the "autorun"
config option in block devices (it should be turned on on a rawhide-type
kernel), raidstart shouldn't be necessary. (the kernel will have
already started the md arrays itself, and the later initscripts raidstart
call
Hi folks,
I've got a user in my dept who is thinking about using software raid5
(after I explained the advantages to them), but they want "testimonials",
i.e. people who have used software raid5 under Linux and have had it save
their ass or have had it work correctly and keep them from a costly
Well, we've been using assorted versions of the 0.90 raid code for over a
year in a couple of servers. We've had mostly good success with both the
raid1 and raid5 code. I don't have any raid5 disk failure stories (yet
;-), but we are using EIDE drives so I expect one before TOO long ;-)
Notice that it checks every 3 seconds, but emails every 10 minutes
(prevents the inbox from filling up overnight).
What does it look like when a drive dies? I presume something like:
[..UD]
Then, perhaps just doing a (Perl) regexp: if (/\[[^\]]*D[^\]]*\]/)
then report the failure?
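Something along those lines should do it. A minimal sketch, assuming the
failure really does show up as a 'D' inside the bracketed status (as
presumed above); the 3-second/10-minute intervals match the description
earlier, and the mail command and 'root' recipient are just placeholders:

#!/usr/bin/perl
# Poll /proc/mdstat every 3 seconds, mail a report at most every 10 minutes.
use strict;

my $last_mail = 0;

while (1) {
    open(MDSTAT, "</proc/mdstat") or die "can't read /proc/mdstat: $!";
    my $status = join("", <MDSTAT>);
    close(MDSTAT);

    if ($status =~ /\[[^\]]*D[^\]]*\]/ && time() - $last_mail > 600) {
        open(MAIL, "|mail -s 'md drive failure' root") or die "mail: $!";
        print MAIL $status;
        close(MAIL);
        $last_mail = time();
    }
    sleep 3;
}

Once you've seen what a real failure actually looks like in /proc/mdstat,
adjust the pattern to match.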
Hi,
I'm doing a series of bonnie tests along with a fair amount of file
md5summing to determine speed and reliability of a raid5 configuration.
I have 5 drives on a Tekram 390U2W adapter. 3 of the drives are the same
Seagate Barracuda 9.1 gig drive. The other two are the 18 gig Barracudas.
Two
removed the cable from one drive and rebooted for a test.
All seemed to go well; the system ran in degraded mode. When I reconnected
the drive, only 1 of the 3 partitions on the drive is recognized. 2 of my
3 /dev/md- arrays still run in degraded mode.
How can I force a "good" partition so
I dug through the linux-raid archives last night and found the answer,
too. Got everything resynced last night.
I am using RH 6.1 2.2.12-20 with a Promise EIDE-MaxII and 3 Maxtor
51536U3 ide drives; 2 of these drives are on the Promise card, and the 3rd
is on the secondary channel of the motherboard. All seems to be
I've got a four-disk RAID5 setup on one controller. I want to add
another controller, but am unsure of what strategy I should adopt to
maintain the RAID integrity.
Since the order in which the disks are found and identified as sda, sdb,
etc. determines the RAID structure and depends on the disk
What's more, it does ...
While there is evidence of this on normal drives and hw raid drives too.
(I assume the `While' is spurious).
I have first-hand evidence of the first.
I'd like to know if it will work on sw raid drives.
It's independent of the underlying hardware -- ext2
What you *REALLY* want is LVM
URL, please?
Pointers of some type?
-sv
the SCSI bus on one side and emulate one disk, and on the other do
hardware raid5 across 4 - 8 UDMA buses?
I ask because, while not normally something I would do, I need
to rig a large storage array in an evil environment. No way am I mounting
eight $1K-each drives in a mobile
There's "specs" and then there's real life. I have never seen a hard drive
that could do this. I've got brand new IBM 7200rpm ATA66 drives, and I can't
seem to get them to do much better than 6-7MB/sec with either Win98,
Win2000, or Linux. That's with an Abit BH6, an Asus P3C2000, and
Hi folks,
I did some tests comparing a K6-2 500 vs a Celeron 400 on a raid5
system, and found some interesting results:
Raid5 write performance of the Celeron is almost 50% better than the K6-2's.
Is this b/c of MMX (as James Manning suggested) or b/c of the FPU?
I used tiobench in sizes of than
NOT because of MMX, as the K6-2 has MMX instructions. It could be because
of the parity calculations, but you'd need to do a test on a single disk to
make sure that it doesn't have anything to do with the CPU/memory chipset or
disk controller. Can you try with a single drive to determine
A 7200RPM IDE drive is faster than a 5400RPM SCSI drive, and a 10kRPM
SCSI drive is faster than a 7200RPM IDE drive.
If you have two 7200RPM drives, one SCSI and one IDE, each on their own
channel, then they should be about the same speed.
Not entirely true - the DMA capabilities of IDE
Raid5 write performance of the Celeron is almost 50% better than the K6-2's.
Can you report the xor calibration results when booting them?
Sure, I should be able to pull that out of somewhere.
From the K6-2:
raid5: MMX detected, trying high-speed MMX checksum routines
pII_mmx : 1121.664
Early stepping K6-2s did not have an MTRR; later steppings do (I believe
stepping 8 was the first one to have an MTRR, but I can't say for
certain):
my cpu:
processor : 0
vendor_id : AuthenticAMD
cpu family : 5
model : 8
model name : AMD-K6(tm) 3D
Arguably only 500GB per machine will be needed. I'd like to get the fastest
possible access rates from a single machine to the data - ideally 90MB/s+.
Is this largely read-only, or will write speed also be a factor?
Mostly read-only.
-sv
If you can afford it and this is for real work, you may want to
consider something like a Network Appliance Filer. It will be
a lot more robust and quite a bit faster than rolling your own
array. The downside is they are quite expensive. I believe the
folks at Raidzone make a "poor man's"
I have not used Adaptec 160 cards, but I have found most everything else they
make to be very finicky about cabling and termination, and have had hard
drives give trouble on Adaptec cards that worked fine on other cards.
My money stays with an LSI/Symbios/NCR-based card. Tekram is a good vendor,
FWIW, you are going to have trouble pushing anywhere near 90MB/s out of a
gigabit ethernet card, at least under 2.2. I don't have any experience w/
2.4 yet.
I hadn't planned on implementing this under 2.2 - I realize the
constraints on the network performance. I've heard good things about
There are some (pre-)test
versions by Linus and Alan Cox out awaiting feedback from testers, but
nothing solid or consistent yet. Be careful when using these for
serious work. Newer != Better.
This isn't being planned for the next few weeks - it's 2-6 month planning
that I'm doing. So I'm
I'd try an Alpha machine, with a 66MHz/64-bit PCI bus and interleaved
memory access, to improve memory bandwidth. It costs around $1
with 512MB of RAM, see SWT (or STW) or Microway. This cost is
small compared to the disks.
The Alpha comes with other headaches I'd rather not involve myself
So if Linus gets hit by a bus (or a fast-moving Hare Krishna), how
are folks to get things into the kernel then?
Probably Alan.
-sv
Hi,
We've been using the sw raid 5 support in Linux for about 2-3 months now.
We've had good luck with it.
Until this week.
In this one week, we've lost two drives on a 3-drive array, completely
eliminating the array. We have good backups, made every night, so the data
is safe. The problem is
Hey Seth,
Sorry to hear about your drive failures. To me, this is something that
most people ignore about RAID5: Lose more than one drive and everything is
toast. Good reason to have a drive set up as a hot spare, not to mention an
extra drive lying on the shelf. And hold your breath