On Tue, 15 May 2007, Tomasz Chmielewski wrote:
I have a RAID-10 setup of four 400 GB HDDs. As the data grows by several GBs
a day, I want to migrate it somehow to RAID-5 on separate disks in a separate
machine.
Which would be easy, if I didn't have to do it online, without stopping any
On Thu, 5 Apr 2007, Lennert Buytenhek wrote:
[*] probably an entirely defective batch of 14 Samsung Spinpoint
500G disks
Let's hope not... Keep checking those SMART values...
Failed disk #2 still reports a SMART status of PASSED..
Hmmm.. I'd be tempted to double check your hardware +
On Wed, 4 Apr 2007, Lennert Buytenhek wrote:
(please CC on replies, not subscribed to linux-raid@)
Hi!
While my RAID6 array was rebuilding after one disk had failed (which
I replaced), a second disk failed[*], and this caused the rebuild
process to start over from the beginning.
Why would
Are there any plans in the near future to enable growing RAID-6 arrays by
adding more disks into them?
I have a 15x500GB - drive unit and I need to add another 15 drives into
it... Hindsight is telling me that maybe I should have put LVM on top of
the RAID-6, however, the usable 6TB it
On Fri, 23 Mar 2007, Mattias Wadenstein wrote:
On Fri, 23 Mar 2007, Gordon Henderson wrote:
Are there any plans in the near future to enable growing RAID-6 arrays by
adding more disks into them?
I have a 15x500GB - drive unit and I need to add another 15 drives into
it... Hindsight
On Mon, 15 Jan 2007, dean gaudet wrote:
you can also run monthly checks...
echo check > /sys/block/mdX/md/sync_action
it'll read the entire array (parity included) and correct read errors as
they're discovered.
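For reference, a minimal sketch of how such a monthly scrub might be driven
from cron (array name and timing are assumptions; several distributions now
ship an equivalent checkarray cron job):

# /etc/cron.d/raid-check -- illustrative only
# 04:00 on the 1st of each month: read every block of /dev/md0 and let md
# rewrite anything unreadable from the parity/mirror copies.
0 4 1 * * root echo check > /sys/block/md0/md/sync_action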
A-Ha ... I've not been keeping up with the list for a bit - what's the
minimum
Yeechang Lee wrote:
[Also posted to
comp.sys.ibm.pc.hardware.storage,comp.arch.storage,alt.comp.hardware.pc-homebuilt,comp.os.linux.hardware.]
I'm shortly going to be setting up a Linux software RAID 5 array using
16 500GB SATA drives [...]
I'm of the opinion that more drives means more
Here's an oddity - Just built a server with 15 external disks over 2 SAS
channels and I've noticed that the kernel is saying it's RAID5 rather than
RAID6 ...
Hard to explain what I mean in words, but:
bertha:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md9 :
On Tue, 24 Oct 2006, David Greaves wrote:
Gordon Henderson wrote:
1747 ?        S    724:25 [md9_raid5]
It's kernel 2.6.18 and
Wasn't the module merged to raid456 in 2.6.18?
Ah, was it? I might have missed that...
Are your mdx_raid6's earlier kernels? My raid 6 is on 2.6.17 and says
For anyone who cares about my saga so-far ;-) ...
I got physical access to the unit this morning and set up the drives as 15
RAID-0 Logical drives and booted up Linux, and it then attached all the
drives in the usual way.
And I can see all 15 drives. So the down-side is that I can't use any sort
On Tue, 17 Oct 2006, Andrew Moise wrote:
On 10/17/06, Gordon Henderson [EMAIL PROTECTED] wrote:
Anyway, it's currently in a RAID-1 configuration (which I used for some
initial soaktests) and seems to be just fine:
Filesystem            Size  Used Avail Use% Mounted on
/dev/md9
On Tue, 17 Oct 2006, Greg Dickie wrote:
Never lost an XFS filesystem completely. Can't say the same about ext3.
Whereas I have exactly the reverse problem... Never lost an ext2/3,
but had a few XFSs trashed when I played with it a couple of years ago...
My 2 euros,
Gordon
This might not be strictly on-topic here, but you may provide
enlightenment, as a lot of web searching hasn't helped me so-far )-:
A client has bought some Dell hardware - Dell 1950 1U server, 2 on-board
SATA drives connected to a Fusion MPT SAS controller. This works just
fine. The on-board
On Sun, 8 Oct 2006, Ian Brown wrote:
Then I created a RAID1 by running:
mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdb2
I got : mdadm: array /dev/md0 started
cat /proc/mdstat shows:
Personalities : [raid1]
md0 : active raid1 sdb2[1] sdb1[0]
16000
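Note that sdb1 and sdb2 are two partitions on the same physical disk, so
that mirror gives no protection against the disk itself failing. The more
usual layout mirrors a partition on each of two drives, roughly (device
names here are illustrative only):

mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat           # watch the initial resync
mdadm --detail /dev/md0    # confirm both members end up active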
On Sat, 16 Sep 2006, Dexter Filmore wrote:
Am Samstag, 16. September 2006 19:26 schrieb Bill Davidsen:
Dexter Filmore wrote:
Is there anyone here who runs a soft raid on Slackware?
Out of the box there are no raid scripts; the ones I made myself seem a
little rawish, barely more than mdadm
On Tue, 5 Sep 2006, Paul Waldo wrote:
Hi all,
I have a RAID6 array and I am wondering about care and feeding instructions :-)
Here is what I currently do:
- daily incremental and weekly full backups to a separate machine
- run smartd tests (short once a day, long once a week)
-
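The smartd schedule mentioned above can be expressed in a single
/etc/smartd.conf directive; a sketch (device name, mail target and times are
assumptions):

# short self-test daily at 02:00, long self-test Saturdays at 03:00,
# monitor everything (-a) and mail warnings to root
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m root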
On Tue, 5 Sep 2006, Patrik Jonsson wrote:
mtbf seems to have an exponential dependence on temperature, so it pays
off to keep temp down. Exactly what temp you consider safe is
individual, but my drives only occasionally go above 40C.
I had a pair (2 x Hitachi IDE 80GB) that ran in a sealed
On Tue, 5 Sep 2006, Steve Cousins wrote:
Would people be willing to list their setup? Including such things as
mdadm.conf file, crontab -l, plus scripts that they use to check the
smart data and the array, mdadm daemon parameters and anything else that
is relevant to checking and maintaining
On Tue, 5 Sep 2006, Paul Waldo wrote:
Gordon Henderson wrote:
On Tue, 5 Sep 2006, Steve Cousins wrote:
[snip]
and my weekly badblocks script looks like:
#!/bin/csh
echo `uname -n`: Badblocks test starting at [`date`]
foreach disk ( a c )
foreach partition ( 1 2 3 5 6
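The script is cut off above; a read-only scan of roughly that shape might
look like the following (plain sh rather than csh, and the disk/partition
lists are illustrative, not the original ones):

#!/bin/sh
# Weekly non-destructive surface scan; badblocks defaults to a read-only
# test, so it is safe to run on partitions that are in use.
echo "`uname -n`: Badblocks test starting at [`date`]"
for disk in a c; do
    for part in 1 2 3; do
        badblocks -s /dev/hd${disk}${part}
    done
done
echo "`uname -n`: Badblocks test finished at [`date`]"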
On Thu, 24 Aug 2006, Richard Scobie wrote:
Gordon Henderson wrote:
While I haven't done this, I have a client who uses Firewire drives
(Lacie) as a backup solution and they seem to just work, and look like
locally attached SCSI drives (Performance is quite good too!) I guess you
won't
On Thu, 24 Aug 2006, Adam Kropelin wrote:
Generally speaking the channels on onboard ATA are independent with any
vaguely modern card.
Ahh, I did not know that. Does this apply to master/slave connections on
the same PATA cable as well? I know zero about PATA, but I assumed from
the
On Tue, 15 Aug 2006, andy liebman wrote:
-- If I were to create disk images of EACH drive (i.e., /dev/sda and
/dev/sdb), could I restore each of those images to NEW drives -- with
all of their respective partitions -- and have a working RAIDED OS? I
ask because my ultimate goal is to put a
A client of mine desperately wants a Dell solution rather than a
self-build. They are looking at an external Dell box with 15 x 500GB SATA
drives in it and a Dell 1U host controller - but the connection between
them is SAS, and they want to use a (Dell) PERC5e card in the host, so
does anyone
On Thu, 13 Jul 2006, Burn Alting wrote:
Last year, there were discussions on this list about the possible
use of a 'co-processor' (Intel's IOP333) to compute raid 5/6's
parity data.
We are about to see low cost, multi core cpu chips with very
high speed memory bandwidth. In light of this,
I've seen a few comments to the effect that some disks have problems when
used in a RAID setup and I'm a bit perplexed as to why this might be..
What's the difference between a drive in a RAID set (either s/w or h/w)
and a drive on its own, assuming the load, etc. is roughly the same in
each
On Wed, 28 Jun 2006, Christian Pernegger wrote:
I also subscribe to the almost commodity hardware philosophy,
however I've not been able to find a case that comfortably takes even
8 drives. (The Stacker is an absolute nightmare ...) Even most
rackable cases stop at six 3.5" drive bays -- either
On Sun, 25 Jun 2006, Chris Allen wrote:
Back to my 12 terabyte fileserver, I have decided to split the storage
into four partitions
each of 3TB. This way I can choose between XFS and EXT3 later on.
So now, my options are between the following:
1. Single 12TB /dev/md0, partitioned into four
On Fri, 23 Jun 2006, Chris Allen wrote:
Strange that whatever the filesystem, you get equal numbers of people saying
that they have never lost a single byte and people who have had horrible
corruption and would never touch it again. We stopped using XFS about a year
ago because we
were getting
On Thu, 22 Jun 2006, Chris Allen wrote:
Dear All,
I have a Linux storage server containing 16x750GB drives - so 12TB raw
space.
Just one thing - Do you want to use RAID-5 or RAID-6 ?
I just ask, as with that many drives (and that much data!) the
possibility of a 2nd drive failure is
On Thu, 15 Jun 2006, Adam Talbot wrote:
What I hope to be an easy fix. Running Gentoo Linux and trying to set up
RAID 1 across the root partition (hda3 and hdc3). Have the fstab set up to
look for /dev/md3 and I have built the OS on /dev/md3. Works fine until
I reboot. System loads and states it
On Mon, 12 Jun 2006, Adam Talbot wrote:
RAID tuning?
Just got my new array setup running RAID 6 on 6 disks. Now I am looking
to tune it. I am still testing and playing with it, so I don't mind
rebuilding the array a few times.
Is chunk size per disk or is it total stripe?
As I understand it,
On Tue, 13 Jun 2006, Justin Piszcz wrote:
mkfs -t xfs -f -d su=128k,sw=14 /dev/md9
Gordon, What speed do you get on your RAID, read and write?
When I made my XFS/RAID-5, I accepted the defaults for the XFS filesystem
but used a 512kb stripe. I get 80-90MB/s reads and ~39MB/s writes.
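For anyone wanting to repeat the alignment trick: su is the md chunk size
and sw the number of data-bearing disks. A sketch for a hypothetical
512k-chunk RAID-5 over 8 drives (7 data + 1 parity):

# keep the md chunk size and the XFS stripe unit/width in step
mdadm --create /dev/md9 --level=5 --chunk=512 --raid-devices=8 /dev/sd[b-i]1
mkfs -t xfs -f -d su=512k,sw=7 /dev/md9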
On Tue, 13 Jun 2006, Adam Talbot wrote:
I still have not figured out if block is per disk or per stripe?
My current array is rebuilding and states a 64k chunk - is this a per-disk
number or is that a functional stripe?
The block-size in the argument to mkfs is the size of the basic data block
on
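To make the distinction concrete (device names assumed): the md chunk size
is per member disk, so a 6-drive RAID-6 with a 64k chunk has a full data
stripe of 64k x 4 data disks = 256k, while the mkfs block-size (typically
4k) is a filesystem property independent of both.

mdadm --create /dev/md0 --level=6 --chunk=64 --raid-devices=6 /dev/sd[b-g]1
# 6 drives - 2 parity = 4 data disks; one complete stripe therefore
# carries 4 x 64k = 256k of data.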
I'm just after confirmation (or not!) of something I've done for a long
time which I think is right - it certainly seems right, but one of those
things I've always wondered about ...
When creating an array I allocate drives from alternate controllers with
the thought that the OS/system/hardware
I know this has come up before, but a few quick googles haven't answered my
questions - I'm after the max. array size that can be created under
bog-standard 32-bit Intel Linux, and any issues re. partitioning.
I'm aiming to create a raid-6 over 12 x 500GB drives - am I going to
have any problems?
On Sat, 13 May 2006, Raúl Gómez Cabrera wrote:
Hi everyone,
I have installed a system (mail server) which had a RAID 1 (software)
with two SCSI disks running on Linux. The sdb disk failed a few months
ago and the system is still working as expected.
Since the failure of the disk I've
On Sat, 13 May 2006, Raúl Gómez Cabrera wrote:
Hi Gordon, thanks for your quick response.
Well my client does not want to spend more money on this particular
server, I think maybe that is because they are planning to replace it...
Ask your client just how valuable their email data is...
How
On Sat, 18 Mar 2006, Ewan Grantham wrote:
OK, managed to use assemble force to get the five remaining drives of
the array up in degraded mode. But running e2fsck (I had an ext3 fs
on the RAID) is revealing a number of bad dtimes and invalid blocks.
Trying to run e2fsck with the -p option
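The sequence being described is roughly the following (a sketch only;
device names and member count are assumptions):

mdadm --assemble --force /dev/md0 /dev/sd[b-f]1   # start degraded with 5 of 6 drives
e2fsck -n /dev/md0   # read-only pass first, to see how bad the damage is
e2fsck -p /dev/md0   # then let it fix the straightforward problems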
On Sun, 5 Mar 2006, Bill Davidsen wrote:
I agree, but it's easier to configure to keep going with a dead drive
than fan in many enclosures. You seem to have more heat tolerance and
monitoring than many installations. And you have done testing on the
heat issues, another unusual thing.
I got
On Sun, 5 Mar 2006, Bill Davidsen wrote:
Still scratching my head, trying to work out if raid-10 can withstand
(any) 2 disks of failure though, although after reading md(4) a few times
now, I'm beginning to think it can't (unless you are lucky!) So maybe I'll
just stick with Raid-6 as I know
On Sat, 18 Feb 2006, PFC wrote:
Anybody tried a Raid1 or Raid5 on USB2.
If so did it crawl or was it usable ?
Why not external SATA ?
After all, the little cute SATA cables are a lot more suited to this
than
the old, ugly flat PATA cables...
Until you break a motherboard or
I'm building a little test server and I wanted ~500GB of storage with
2-drive redundancy, so the best price vs. num. drives vs. the need for 2
drive redundancy came to 4 x 250GB drives. (And I have a mobo with 5 SATA
ports, and taking into account case power requirements, etc. 4 drives has
worked
On Fri, 17 Feb 2006, berk walker wrote:
RAID-6 *will* give you your required 2-drive redundancy.
Hm. I was under the impression (mistakenly?) that RAID10 (as opposed to
RAID1+0) would give me 2 disk redundancy in far mode, however maybe I need
to re-read the stuff on RAID10 again ...
Gordon
On Fri, 17 Feb 2006, Francois Barre wrote:
2006/2/17, Gordon Henderson [EMAIL PROTECTED]:
On Fri, 17 Feb 2006, berk walker wrote:
RAID-6 *will* give you your required 2-drive redundancy.
Anyway, if you wish to resize your setup to 5 drives one day or
another, I guess raid 6 would
On Fri, 17 Feb 2006, Andy Smith wrote:
On Fri, Feb 17, 2006 at 03:14:37PM +, Gordon Henderson wrote:
Still scratching my head, trying to work out if raid-10 can withstand
(any) 2 disks of failure though, although after reading md(4) a few times
now, I'm beginning to think it can't
On Wed, 8 Feb 2006, discman (sent by Nabble.com) wrote:
Hi.
Anyone with some experience on mdadm?
Just about everyone here, I'd hope ;-)
I have a running RAID0-array with mdadm and it's using monitor-mode with an
e-mail address.
Does anyone know how to remove that e-mail address without
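The address normally comes either from the MAILADDR line in /etc/mdadm.conf
or from a --mail option passed to 'mdadm --monitor', so removing it is
usually just a matter of editing the config and restarting the monitor
(paths and init script names vary by distribution):

grep MAILADDR /etc/mdadm.conf   # e.g. "MAILADDR root" - delete or edit this line
/etc/init.d/mdadm restart       # restart the monitor so it re-reads the config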
On Wed, 1 Feb 2006, David Liontooth wrote:
We're wondering if it's possible to run the following --
* define 4 pairs of RAID 1 with an 8-port 3ware 9500S card
* the OS will see these as four normal drives
* use md to configure them into a RAID 6 array
Would this work? Would it be
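In principle it works: md treats the exported RAID-1 pairs as ordinary
block devices. A minimal sketch, assuming the 3ware card presents the four
pairs as /dev/sda through /dev/sdd:

# four hardware RAID-1 pairs as members of a software RAID-6
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[a-d]
# md neither knows nor cares that each "disk" is itself a mirror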
On Thu, 2 Feb 2006, Mattias Wadenstein wrote:
Yes, but then you (probably) lose hotswap. A feature here was to use the
3ware hw raid for the raid1 pairs and use the hw-raid hotswap instead of
having to deal with linux hotswap (unless both drives in a raid1-set
dies).
I'm not familiar with
On Wed, 1 Feb 2006, Enrique Garcia Briones wrote:
I'd be tempted to remove the A1000 and install on the 2 internal
drives, then once that's happy, plug the A1000 back in again. It
might be that the OBP (Open Boot Prom) code is favouring the
external device to boot off, but it's been a
On Tue, 31 Jan 2006, Enrique Garcia Briones wrote:
Hi,
Let me introduce myself.
I'm a newbie in Linux and in RAID over Linux. I have configured a RAID-0 in a
NetBSD 2.0 box. So, now let me explain what I'm trying to do:
Antecedents/Equipment:
I have a Sparc 420 with 4 processors and 4 Gb
The A1000 devices I've used had an on-board RAID controller and some
software that ran under Solaris to configure it, so that might be
something to look into too - to make sure it's doing what you expect it to
be doing.
Gordon
thanks
-- Forwarded Message ---
From: Gordon
On Tue, 24 Jan 2006, Francois Barre wrote:
Is it possible to make the drives turn slower? To make the heads move slower?
That would be my dream. No more heat, a 10mA consumption, no more noise...
Some drives do support quiet vs. performance modes.
hdparm will set this for you, however,
On Mon, 23 Jan 2006, Gilberto Diaz wrote:
The problem is that the following processes are using a lot of CPU
time.
md1_raid1
md1_resync
..
md6_raid1
md6_resync
Here is a sample of the uptime command
17:54:16 up 5:48, 2 users, load average: 5.02,
On Sat, 21 Jan 2006, Gerd Knops wrote:
Hi,
I have a RAID5 setup with 3 250GB SATA disks. Often the RAID is not
accessed for days, so I wonder if I can extend the life of the disks
by spinning them down, eg by setting the spindown timeout for the
drives with hdparm -S nn.
The hdparm man
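For reference, the -S argument is encoded rather than given in seconds; a
sketch (whether a given SATA drive behind libata honours it is another
matter):

hdparm -S 120 /dev/sda   # values 1-240 mean n x 5 s, so 120 = 10 minutes
hdparm -S 242 /dev/sda   # 241-251 mean (n-240) x 30 min, so 242 = 1 hour
hdparm -y /dev/sda       # or force an immediate standby right now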
On Thu, 5 Jan 2006, Francois Barre wrote:
Well, anyway, thanks for the advice. Guess I'll have to stay on ext3
if I don't want to have nightmares...
And you can always mount it as ext2 if you think the journal is corrupt.
Have you considered Raid-6 rather than R5?
The biggest worry I have is
On Fri, 16 Dec 2005, Neil Brown wrote:
- Does RAID6 have disadvantages wrt write speed?
Probably. I haven't done any measurements myself, but from a
theoretical standpoint, you would expect raid6 to impose more CPU load
(though that may not be noticeable), and as raid6 needs to see the whole
On Tue, 2 Aug 2005, Boik Moon wrote:
Hi,
According to RAID theory, the READ performance with RAID0, 1 and 5 should
be faster than with non-RAID. I tested it on Red Hat Linux (ES) on a
Pentium PC, but they are almost the same. I am using a
RocketRAID404 (HPT374) PCI card to connect 4 master IDE
On Thu, 4 Aug 2005, Stefan Majer wrote:
Hi,
I have a server running on only one SCSI disk. I now have one extra SCSI
disk and I want to transform the current installation to use both disks in
a raid1.
Therefore I want to mirror each partition with md.
The Question now is how to do that, if
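The usual approach (a sketch only, not taken from this thread; partition
names are assumptions) is to build a degraded mirror on the new disk, move
the system across, then add the old disk:

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# copy the running system onto /dev/md0, point /etc/fstab and the
# bootloader at it, reboot onto the mirror, then:
mdadm /dev/md0 --add /dev/sda1   # the old disk joins and resyncs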
On Thu, 14 Apr 2005, Laurent CARON wrote:
Hello,
We are in the process of increasing the size of our RAID arrays as our
storage needs increase.
I've got 2 solutions for this:
- Copy the data over a new array and replace the disks
Do this! You know it makes sense. If nothing else, it'll make
On Sat, 2 Apr 2005, Max Waterman wrote:
http://www.sonnettech.com/product/tempo-x_esata8.html
do you think this will work with Linux?
what about Linux on an Intel platform?
Hard to tell without knowing the actual chip-set on-board.
I wonder how it performs - esp. compared to the
On Sat, 2 Apr 2005, Matt Domsch wrote:
On Sat, Apr 02, 2005 at 04:04:48PM +0100, Gordon Henderson wrote:
The cheaper cards that I've used seem to have mostly the SII chipset - and
that appears to be well supported by Linux. The 3112 is a dual-port card,
the 3114, quad. There are RAID
On Sat, 2 Apr 2005, peter pilsl wrote:
The only explanation to me is that I had the wrong entry in my
lilo.conf. I had root=/dev/hda6 there instead of root=/dev/md2
So maybe root was always mounted as /dev/hda6 and never as /dev/md2,
which was started, but never had any data written to it. Is
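For completeness, the lilo.conf stanza would need to name the md device as
root; a sketch (kernel image path assumed):

image = /boot/vmlinuz
    label = linux
    root = /dev/md2        # not /dev/hda6
    read-only
# with root on RAID-1, raid-extra-boot = mbr-only tells lilo to put a
# boot sector on each mirror half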
On Fri, 1 Apr 2005, Alvin Oga wrote:
- ambient temp should be 65F or less
and disk operating temp ( hddtemp ) should be 35 or less
Are we confusing F and C here?
hddtemp typically reports temperatures in C. 35F is bloody cold!
65F is barely room temperature. (18C)
Gordon
On Tue, 22 Mar 2005, Schuett Thomas EXT wrote:
Hello,
I am sorry for having to ask a question you might rate very stupid,
but I really want to know it:
If you think through system crash scenarios, what types of crashes
are you thinking of? Do you only consider harddisk faults, or
do
As part of a (Dell) server purchase, a client was given a free Dell 750
PowerEdge (Celeron) box with 2 x 120GB SATA drives... Opening the lid (as
you do :) revealed that the motherboard has on-board SATA, but Dell had
also plugged in an Adaptec 6-port SATA RAID card, and connected the 2
drives
On Tue, 8 Mar 2005, Tobias Hofmann wrote:
I stuffed a bunch of cheap SATA disks and crappy controllers in an old
system. (And replaced the power supply with one that has enough power
on the 12V rail.)
It's running 2.4, and since it's IDE disks, I just call 'hdparm
-Swhatever' in
When I was building a server recently, I ran into a file-system corruption
issue with ext3, although I didn't have the time to do anything about it
(or even verify that it wasn't me doing something stupid)
However, I only saw it when I used the stride=8 parameter to mke2fs, and
the -j (ext3)
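For reference, stride is the md chunk size expressed in filesystem blocks,
so stride=8 corresponds to a 32k chunk over 4k blocks; a sketch (figures
and device name are assumptions, not necessarily the setup above):

# 32k chunk / 4k filesystem block = stride of 8 blocks
# (older mke2fs releases spelled this -R stride=8)
mke2fs -j -b 4096 -E stride=8 /dev/md0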
On Sat, 19 Feb 2005, berk walker wrote:
[I usually do not spend bandwidth in quoting big stuff, but yours might
be worth it]
Properly chastised. One CAN do net raid, 4,000 [where's my pound key?]
is still a lot to me, [don't forget my name IS berk :)]
That's for 2 servers, remember. Worry
On Sat, 19 Feb 2005, berk walker wrote:
Do you want a glass or some cheese?
Not really... I just thought I'd pass on my experiences and thank those
who gave me support recently. By posting my configurations and thoughts
and issues I've encountered during the way, I'm essentially opening myself
On Thu, 17 Feb 2005, Phantazm wrote:
I use master/slave. The problem is that I can't break the raid set because
if I do I will lose over 1TB of data :/
Going to see if I can get more controller cards though.
Do it. Use 4 2-port cards for your 8 drives and only one drive per cable.
It is possible, and
On Sun, 13 Feb 2005, Tim Moore wrote:
Gordon Henderson wrote:
What I wanted was an 8-way RAID-1 for the boot partition (all of /, in
reality) and I've done this many times in the past on other 2-5 way
systems without issue. So I do the stuff I've done in the past, and there's
nothing
On Sun, 13 Feb 2005, Mark Hahn wrote:
Interesting - the private mail was from me, and I've got two dual
Opterons in service. The one with significantly more PCI activity has
significantly more problems than the one with less PCI activity.
that's pretty odd, since the most intense IO
On Fri, 4 Feb 2005, Andrew Walrond wrote:
Hi Gordon,
Anyone using Tyan Thunder K8W motherboards???
I'm using K8W's here with a combo of raid0/1 on the on-board SATA, and it's been
rock solid for months (2.6.10). Looks like your problems are all with the PCI
cards, but I can't help there. Since
On Thu, 3 Feb 2005, H. Peter Anvin wrote:
Guy wrote:
Would you say that the 2.6 Kernel is suitable for storing mission-critical
data, then?
Sure. I'd trust 2.6 over 2.4 at this point.
This is interesting to hear.
I ask because I have read about a lot of problems with data corruption
On Sat, 29 Jan 2005, T. Ermlich wrote:
Hello there,
I just got here from http://cgi.cse.unsw.edu.au/~neilb/Contact ...
Hopefully I'm more/less right here.
Several months ago I set up a raid1 using mdadm.
Two drives (/dev/sda and /dev/sdb, each one a 160GB Samsung SATA
disk) are used,
On Sat, 29 Jan 2005, T. Ermlich wrote:
That's right: each harddisk is partitioned absolutely identically, like:
0 - 19456 - /dev/sda1 - extended partition
1 - 6528 - /dev/sda5 - /dev/md0
6529 - 9138 - /dev/sda6 - /dev/md1
9139 - 16970 - /dev/sda7 - /dev/md2
16971 - 19456 -
On Thu, 20 Jan 2005, David Dougall wrote:
Perhaps I was asking a stupid question or an obvious one, but I have
received no response.
Maybe if I simplify the question...
If I am running software raid1 and a disk device starts throwing I/O
errors, is the filesystem supposed to see any
On Thu, 20 Jan 2005, Mark Bellon wrote:
I've seen this too. The worst case can actually last for over 2 minutes.
We've been running with a patch to the RAID 1 driver that handles this
so critical applications do not hang for too long. Basically it uses
timers in the RAID 1 driver to force
On Mon, 17 Jan 2005, Janusz Zamecki wrote:
Hello!
After days of googling I've given up and decided to ask for help.
The story is very simple: I have a /dev/md6 raid1 array made of hdg and
hde disks. The resulting array is as fast as 1 disk only.
Why would you expect it to be any faster?
On Sun, 16 Jan 2005, Mitchell Laks wrote:
3) Also, I have a module driver question.
I use an ASUS K8V-X motherboard. It has SATA and parallel IDE channels. I use
the SATA for my system and use the parallel for data storage on IDE raid.
I am combining the 2 motherboard IDE cable
On Thu, 13 Jan 2005, Neil Brown wrote:
There is no current support for raid6 in any 2.4 kernel and I am not
aware of anyone planning such support. Assume it is 2.6 only.
How real-life tested is RAID-6 so-far? Anyone using it in anger on a
production server?
I've spent the past day or 2