board/controller recommendations?

2008-02-25 Thread Dexter Filmore
Currently my array consists of four Samsung SpinPoint SATA drives; I'm about 
to expand to six drives.
As of now they sit on a Sil3114 controller on the PCI bus, hence there's a 
bottleneck: I can't squeeze more than 15-30 MB/s write speed out of it (more 
like 15 these days, as the XFS partitions on it are brim full and have started 
fragmenting).

Now, I'd like to go for an AMD board with six SATA channels connected via PCIe - 
can someone recommend a board here? Preferably AMD 690 based, so I won't need 
a video card or the like.

Dex

-- 
-BEGIN GEEK CODE BLOCK-
Version: 3.12
GCS d--(+)@ s-:+ a- C UL++ P+++ L+++ E-- W++ N o? K-
w--(---) !O M+ V- PS+ PE Y++ PGP t++(---)@ 5 X+(++) R+(++) tv--(+)@ 
b++(+++) DI+++ D- G++ e* h++ r* y?
--END GEEK CODE BLOCK--

http://www.vorratsdatenspeicherung.de
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: board/controller recommendations?

2008-02-25 Thread Dexter Filmore
On Monday 25 February 2008 15:02:31 Justin Piszcz wrote:
 On Mon, 25 Feb 2008, Dexter Filmore wrote:
  Currently my array consists of four Samsung Spinpoint sATA drives, I'm
  about to enlarge to 6 drive.
  As of now they sit on an Sil3114 controller via PCI, hence there's a
  bottleneck, can't squeeze more than 15-30 megs write speed (rather 15
  today as the xfs partitions on it are brim full and started fragmenting).
 
  Now, I'd like to go for a AMD board with 6 sATA channels connected via
  PCIe - can someone recomend a board here? Preferrably AMD 690 based so I
  won't need a video card or similar.
 
  Dex
 

 That's always the question, which mobo?  I went Intel as many of their
 chipsets (965, p35, x38) have 6 SATA, I am sure AMD have some as well
 though, what I bought awhile back was a 6 port sata w/ 3 pci-e x1 and 1
 pci-e x16.  Then you buy the 2 port sata cards (x1) and plugin your
 drives.

Intel means big bucks since I'd need an Intel CPU, too. The cheapest LGA775 CPU 
would be around 90 euros, whereas I can get a midrange AMD X2 for 50-60.


 Promise also came out with a 4 port PCI-e x1 card but I have not tried it,
 seen any reviews for it and do not know if it is even supported in linux.

Now *that's* Promis-ing (huh huh) - happen to know the model name?


 Also, I'd recommend you run a check/resync on your array before removing
 it from your current box, and then make sure the two new drives do not
 have any problems, and (to be safe?) expand by adding 1 drive at a time?

Neil Brown told me to expand by two drives at once, but I'll back up the array 
anyway to be safe and simply recreate it. I guess selling the 750 GB drive on 
eBay at 5 bucks off should do :)
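For the record, the grow-by-two path itself is only a couple of mdadm calls. A 
minimal sketch, assuming the array is /dev/md0 and the two new drives show up 
as /dev/sde and /dev/sdf (device, VG, and mount names here are hypothetical):

```shell
# Add both new drives as spares first, so they join the reshape together.
mdadm --add /dev/md0 /dev/sde1 /dev/sdf1

# Reshape the RAID5 from 4 to 6 members; md restripes in the background.
mdadm --grow /dev/md0 --raid-devices=6

# Watch the reshape progress.
cat /proc/mdstat

# Once the reshape finishes, grow the LVM PV and the filesystems on top,
# e.g. for an XFS volume (xfs_growfs works on the mounted filesystem):
pvresize /dev/md0
lvextend -l +100%FREE /dev/vg0/data
xfs_growfs /mnt/data
```

Backing up first, as planned, is still the sane thing to do either way.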





Re: board/controller recommendations?

2008-02-25 Thread Dexter Filmore
On Monday 25 February 2008 19:50:52 Justin Piszcz wrote:
 On Mon, 25 Feb 2008, Dexter Filmore wrote:
  On Monday 25 February 2008 15:02:31 Justin Piszcz wrote:
  On Mon, 25 Feb 2008, Dexter Filmore wrote:
  Currently my array consists of four Samsung Spinpoint sATA drives, I'm
  about to enlarge to 6 drive.
  As of now they sit on an Sil3114 controller via PCI, hence there's a
  bottleneck, can't squeeze more than 15-30 megs write speed (rather 15
  today as the xfs partitions on it are brim full and started
  fragmenting).
 
  Now, I'd like to go for a AMD board with 6 sATA channels connected via
  PCIe - can someone recomend a board here? Preferrably AMD 690 based so
  I won't need a video card or similar.
 
  Dex
 
 
  That's always the question, which mobo?  I went Intel as many of their
  chipsets (965, p35, x38) have 6 SATA, I am sure AMD have some as well
  though, what I bought awhile back was a 6 port sata w/ 3 pci-e x1 and 1
  pci-e x16.  Then you buy the 2 port sata cards (x1) and plugin your
  drives.
 
  Intel means big bucks since I'd need an intel cpu, too. Cheapest lga775
  would be around 90 euros where I get a midrange amd x2 at 50-60.
 
  Promise also came out with a 4 port PCI-e x1 card but I have not tried
  it, seen any reviews for it and do not know if it is even supported in
  linux.
 
  Now *that's* Promis-ing (huh huh) - happen to know the model name?

 http://www.newegg.com/Product/Product.aspx?Item=N82E16816102117
 Type   SATA / SAS

A full-blown RAID 50 controller. A tad overkill for software RAID.
I just came across this one:

http://geizhals.at/deutschland/a254413.html

One would need a board featuring a PCIe x4 slot, or an x1 slot that's 
mechanically open at the end.
Then again, there's this board:

http://geizhals.at/deutschland/a244789.html

If that controller works under Linux, those two would make a nice combo. I just 
saw that Adaptec provides open-source drivers for Linux, so chances are support 
is included in the kernel or at least scheduled.





Re: when is a disk non-fresh?

2008-02-08 Thread Dexter Filmore
On Friday 08 February 2008 00:22:36 Neil Brown wrote:
 On Thursday February 7, [EMAIL PROTECTED] wrote:
  On Tuesday 05 February 2008 03:02:00 Neil Brown wrote:
   On Monday February 4, [EMAIL PROTECTED] wrote:
Seems the other topic wasn't quite clear...
  
   not necessarily.  sometimes it helps to repeat your question.  there
   is a lot of noise on the internet and somethings important things get
   missed... :-)
  
Occasionally a disk is kicked for being non-fresh - what does this
mean and what causes it?
  
   The 'event' count is too small.
   Every event that happens on an array causes the event count to be
   incremented.
 
  An 'event' here is any atomic action? Like write byte there or calc
  XOR?

 An 'event' is
- switch from clean to dirty
- switch from dirty to clean
- a device fails
- a spare finishes recovery
 things like that.

Is there a glossary that explains 'dirty' and the like in detail?


   If the event counts on different devices differ by more than 1, then
   the smaller number is 'non-fresh'.
  
   You need to look to the kernel logs of when the array was previously
   shut down to figure out why it is now non-fresh.
 
  The kernel logs show absolutely nothing. Log's fine, next time I boot up,
  one disk is kicked, I got no clue why, badblocks is fine, smartctl is
  fine, selft test fine, dmesg and /var/log/messages show nothing apart
  from that news that the disk was kicked and mdadm -E doesn't say anything
  suspicious either.

 Can you get mdadm -E on all devices *before* attempting to assemble
 the array?


Yes, I can do that. But the array is in sync again now - I guess you want an -E 
scan while it's degraded?
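For anyone wanting to capture that state: comparing the event counters across 
the members before assembly is a one-liner. A sketch, assuming the four members 
are sda1 through sdd1:

```shell
# Print the Events line from each member's superblock; the non-fresh
# member will show a smaller count than the other three.
for d in /dev/sd[abcd]1; do
    echo -n "$d: "
    mdadm --examine "$d" | grep -i events
done
```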


  Question: what events occured on the 3 other disks that didn't occur on
  the last? It only happens after reboots, not while the machine is up so
  the closest assumption is that the array is not properly shut down
  somehow during system shutdown - only I wouldn't know why.

 Yes, most likely is that the array didn't shut down properly.

I noticed that *after* stopping the array I get some message on the console 
about SCSI caches, but it disappears too quickly to read and doesn't turn up 
in the logs. I'll try to catch it on video, though I issue sync anyway before 
stopping the array.


  Box is Slackware 11.0, 11 doesn't come with raid script of its own so I
  hacked them into the boot scripts myself and carefully watched that
  everything accessing the array is down before mdadm --stop --scan is
  issued. No NFS, no Samba, no other funny daemons, disks are synced and so
  on.
 
  I could write some failsafe inot it by checking if the event count is the
  same on all disks before --stop, but even if it wasn't, I really wouldn't
  know what to do about it.
 
  (btw mdadm -E gives me: Events : 0.1149316 - what's with the 0. ?)

 The events count is a 64bit number and for historical reasons it is
 printed as 2 32bit numbers.  I agree this is ugly.

 NeilBrown





Re: when is a disk non-fresh?

2008-02-07 Thread Dexter Filmore
On Tuesday 05 February 2008 03:02:00 Neil Brown wrote:
 On Monday February 4, [EMAIL PROTECTED] wrote:
  Seems the other topic wasn't quite clear...

 not necessarily.  sometimes it helps to repeat your question.  there
 is a lot of noise on the internet and somethings important things get
 missed... :-)

  Occasionally a disk is kicked for being non-fresh - what does this mean
  and what causes it?

 The 'event' count is too small.
 Every event that happens on an array causes the event count to be
 incremented.

Is an 'event' here any atomic action? Like writing a byte or calculating XOR?


 If the event counts on different devices differ by more than 1, then
 the smaller number is 'non-fresh'.

 You need to look to the kernel logs of when the array was previously
 shut down to figure out why it is now non-fresh.

The kernel logs show absolutely nothing. The log is fine; next time I boot up, 
one disk is kicked and I have no clue why. badblocks is fine, smartctl is fine, 
the self-test is fine, and dmesg and /var/log/messages show nothing apart from 
the news that the disk was kicked. mdadm -E doesn't say anything suspicious 
either.

Question: what events occurred on the three other disks that didn't occur on 
the last one? It only happens after reboots, not while the machine is up, so 
the closest assumption is that the array is somehow not properly shut down 
during system shutdown - only I wouldn't know why.
The box is Slackware 11.0, which doesn't come with RAID scripts of its own, so 
I hacked them into the boot scripts myself and carefully made sure that 
everything accessing the array is down before mdadm --stop --scan is issued.
No NFS, no Samba, no other funny daemons, disks are synced and so on.

I could write some failsafe into it by checking whether the event count is the 
same on all disks before --stop, but even if it weren't, I really wouldn't know 
what to do about it.

(btw mdadm -E gives me: Events : 0.1149316 - what's with the 0. ?)
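(Per Neil's explanation below, the two halves can be recombined by hand: the 
high 32 bits come before the dot and the low 32 bits after it. A sketch in 
shell arithmetic for the value above:)

```shell
# Events : 0.1149316  ->  high half = 0, low half = 1149316
high=0
low=1149316
echo $(( (high << 32) | low ))   # prints 1149316
```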

Dex





when is a disk non-fresh?

2008-02-04 Thread Dexter Filmore
Seems the other topic wasn't quite clear...
Occasionally a disk is kicked for being non-fresh - what does this mean and 
what causes it?

Dex





non-fresh: what?

2008-02-02 Thread Dexter Filmore
[   40.671910] md: md0 stopped.
[   40.676923] md: bind<sdd1>
[   40.677136] md: bind<sda1>
[   40.677370] md: bind<sdb1>
[   40.677572] md: bind<sdc1>
[   40.677618] md: kicking non-fresh sdd1 from array!

When is a disk non-fresh and what might lead to this? 
Happened about 15 times now since I built the array.

Dex




Re: Software based SATA RAID-5 expandable arrays?

2007-06-19 Thread Dexter Filmore
On Tuesday 19 June 2007 10:35:47 David Greaves wrote:
 Dexter Filmore wrote:
  Why dontcha just cut all the look how big my ePenis is chatter and tell
  us what you wanna do?
  Nobody gives a rat if your ultra1337 sound cards needs a 10 megawatt
  power supply.

 Chill Dexter.


 How many faults have you seen on this list attributed to poor PSUs?
 How many people whinge about the performance of their controllers/setups
 'cos they find out _after_ they bought them just how naff they are?

A 750W supply doesn't increase server stability - that's what redundant PSUs 
are for.
Plus: there are sh!tty 750W PSUs out there as well - numbers mean jack.


 Sure he went a bit OTT in the description - but if you'd rather see Hey
 dudez, what do I need for a really quick server then #linux is good :)

 He's clearly new to linux, (and granted, maybe a bit over-excited by
 hardware!) but give the guy a break.

 He very clearly told us what he wanted to do in the QUESTIONs bit.

Could have done so right from the start instead of digressing into Vista, X-Fi, 
8800 and Radeon.

 And don't think too badly of Dexter, he's usually OK.

Guess I had a newbie overdose since migrating the desktop box to Kubuntu.

 David

 PS, Dex, I wonder who posted these noobie sounding question in April last
 year:

 I'm currently planning my first raid array.
 I intend to go for softraid (budget's the limiting factor), not sure about
 5 or 6 yet.

 Plan so far: build a raid5 from 3 disks, later add a disk and reconf to
 raid6. Question: is that possible at all? Can a raid5 be reconfed to a
 raid6 with raidreconf?
 Next Question: how stable is it? Will I likely get away without making
 backups or is there like a 10% chance of failure?
 Other precautions advised?

Yes? What about it? Those are all tech questions regarding file servers and 
RAID setups on Linux. I don't see myself going on about how my Radeon breaks 
the 10k barrier in 3DMark.

Nuff said.




resync to last 27h - usually 3. what's this?

2007-06-18 Thread Dexter Filmore
Booted today, got this in dmesg:

[   44.884915] md: bind<sdd1>
[   44.885150] md: bind<sda1>
[   44.885352] md: bind<sdb1>
[   44.885552] md: bind<sdc1>
[   44.885601] md: kicking non-fresh sdd1 from array!
[   44.885637] md: unbind<sdd1>
[   44.885671] md: export_rdev(sdd1)
[   44.900824] raid5: device sdc1 operational as raid disk 1
[   44.900860] raid5: device sdb1 operational as raid disk 3
[   44.900895] raid5: device sda1 operational as raid disk 2
[   44.901207] raid5: allocated 4203kB for md0
[   44.901241] raid5: raid level 5 set md0 active with 3 out of 4 devices, 
algorithm 2
[   44.901284] RAID5 conf printout:
[   44.901317]  --- rd:4 wd:3
[   44.901349]  disk 1, o:1, dev:sdc1
[   44.901381]  disk 2, o:1, dev:sda1
[   44.901414]  disk 3, o:1, dev:sdb1

Checked the disk, seemed fine (not the first time Linux kicked a disk for no 
apparent reason), re-added it with mdadm, which triggered a resync.
Now, having a look at it, I get:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[4] sdc1[1] sdb1[3] sda1[2]
  732563712 blocks level 5, 32k chunk, algorithm 2 [4/3] [_UUU]
  [=>...................]  recovery =  8.1% (19867520/244187904) 
finish=1661.6min speed=2248K/sec

1661 minutes is *way* too long. It's a 4x250 GiB SATA array and usually takes 
3 hours to resync, or check for that matter.

So, what's this? 
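(The finish estimate is just arithmetic on the numbers mdstat prints: remaining 
kibibytes divided by the current rate. A quick sanity check on the figures 
above:)

```shell
done_k=19867520        # resync position, KiB
total_k=244187904      # per-device size, KiB
rate_k=2248            # current speed, KiB/s

# (total - done) / rate gives seconds; /60 gives minutes.
echo $(( (total_k - done_k) / rate_k / 60 ))   # ~1663, matching finish=1661.6min
```

So the estimate is consistent; the real question is why the rate is stuck at 
~2.2 MB/s. The usual knob when a resync crawls is 
/proc/sys/dev/raid/speed_limit_min (KiB/s), though a rate that low can also 
point at bus contention or a struggling disk.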




Re: Software based SATA RAID-5 expandable arrays?

2007-06-18 Thread Dexter Filmore
Why dontcha just cut all the "look how big my ePenis is" chatter and tell us 
what you wanna do?
Nobody gives a rat if your ultra1337 sound card needs a 10 megawatt power 
supply.




below 10MB/s write on raid5

2007-06-11 Thread Dexter Filmore
I recently upgraded my file server, yet I'm still unsatisfied with the write 
speed.
The machine now is an Athlon64 3400+ (Socket 754) equipped with 1 GB of RAM.
The four RAID disks are attached to the board's onboard SATA controller 
(a Sil3114 attached via PCI).
The kernel is 2.6.21.1, custom-built on Slackware 11.0.
The RAID is on four Samsung SpinPoint disks and carries LVM, with three 
volumes, each with XFS on top.

The machine does some other work too, but I still would have expected to get 
into the 20-30 MB/s range. Too much to ask?
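(For comparison, a crude sequential write test is dd with fdatasync, which 
forces the data to disk before dd reports the rate; the target path here is 
just an example, point it at a filesystem on the array:)

```shell
# Write 64 MiB and fsync it before dd prints its throughput summary,
# so the number isn't inflated by the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -1

# Sanity check: the file really is 64 MiB.
stat -c %s /tmp/ddtest   # 67108864
```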

Dex



Re: below 10MB/s write on raid5

2007-06-11 Thread Dexter Filmore
On Monday 11 June 2007 14:47:50 Justin Piszcz wrote:
 On Mon, 11 Jun 2007, Dexter Filmore wrote:
  I recently upgraded my file server, yet I'm still unsatisfied with the
  write speed.
  Machine now is a Athlon64 3400+ (Socket 754) equipped with 1GB of RAM.
  The four RAID disks are attached to the board's onbaord sATA controller
  (Sil3114 attached via PCI)
  Kernel is 2.6.21.1, custom on Slackware 11.0.
  RAID is on four Samsung SpinPoint disks, has LVM, 3 volumes atop of each
  XFS.
 
  The machine does some other work, too, but still I would have suspected
  to get into the 20-30MB/s area. Too much asked for?
 
  Dex

 What do you get without LVM?

Hard to tell: the PV hogs all of the disk space, so I can't really run non-LVM 
tests.





Re: below 10MB/s write on raid5

2007-06-11 Thread Dexter Filmore
 10gb read test:

 dd if=/dev/md0 bs=1M count=10240 of=/dev/null

 What is the result?

71.7 MB/s - but that's reading to /dev/null. *Writing* real data, however, 
looks quite different.


 I've read that LVM can incur a 30-50% slowdown.

Even then, the 8-10 MB/s I get would be a little low. 





no journaling and loops on softraid?

2007-03-05 Thread Dexter Filmore
http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_Software_RAID#Data_Scrubbing

Warning: Be aware that the combination of RAID5 and loop-devices will most 
likely cause severe filesystem damage, especially when using ext3 and 
ReiserFS. Some users suggest that XFS is not affected by this, but this has 
not been entirely confirmed. See kernel bug 6242 for updates on this bug.

Note: There are also reports that Journaled Filesystems are problematic on all 
Software-RAID levels. No kernel bugs have been submitted regarding this bug 
as of yet. A number of the reports involve using cryptoloops which are known 
to cause corruption. See http://forums.gentoo.org/viewtopic-t-412467.html for 
more information. This thread should be taken with a grain of salt.

Can someone shed some light on this? I have XFS on my RAID and *had* problems 
when unpacking large archives with many small files.

Dex



Too much ECC?

2006-11-09 Thread Dexter Filmore
I just ran smartctl -d ata on my SATA disks (Samsung) and got these raw 
values:

195 Hardware_ECC_Recovered  3344107
195 Hardware_ECC_Recovered  2786896
195 Hardware_ECC_Recovered  617712
195 Hardware_ECC_Recovered  773986

Looking at a 5-year-old 40 GB Maxtor that hasn't been cooled too well, I see 3 
as the raw value.
Should I be worried, or am I just not reading this properly?
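(For what it's worth, pulling just that raw value out of the smartctl table is 
easy to script. A sketch that parses a captured attribute row; the middle 
columns in this sample line are made up for illustration, only the ID, name, 
and raw value are from the post above:)

```shell
# smartctl -A prints one row per attribute; the raw value is the last field.
line="195 Hardware_ECC_Recovered 0x001a 100 100 000 Old_age Always - 3344107"
echo "$line" | awk '{ print $NF }'   # 3344107
```

On a live disk the equivalent would be something like
`smartctl -A -d ata /dev/sda | awk '$1 == 195 { print $NF }'`.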

Dex




Re: resync starts over after each reboot (2.6.18.1)?

2006-10-24 Thread Dexter Filmore
On Monday, 23 October 2006 at 18:43, you wrote:

Sounds familiar... Two things: what exact LVM2 version are you using there?
And could you try shutting the machine down completely to power-off, 
cold-booting it a couple of times, and seeing whether the issue persists?

Dex



LEDs for array?

2006-10-23 Thread Dexter Filmore
I noticed the new kernels have an API for driving LEDs - has anyone used it to 
display array status?



how to set stride / raid-howto still up to date?

2006-09-24 Thread Dexter Filmore
I want to change a partition from XFS to ext3, but I can't tell what to put for 
the stride.

man page says:

stride=stripe-size
  Configure  the  filesystem  for  a  RAID  array with
  stripe-size filesystem blocks per stripe.

So, what is the stripe size anyway? The same as the chunk size? 

Is the example in the RAID howto, section 5.11, still valid? (It still talks 
about -R instead of -E. Seems a bit old.)

So going with -b 4096 for the ext3 on a 32k chunk size still comes down to a 
stride of 8, correct?
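(For the record, that's the arithmetic: stride = chunk size / filesystem block 
size, i.e. how many filesystem blocks fit in one chunk. With a 32 KiB chunk 
and 4 KiB blocks:)

```shell
chunk=$((32 * 1024))   # md chunk size in bytes
block=4096             # ext3 block size in bytes
echo $(( chunk / block ))   # 8

# which would make the mke2fs invocation something like (sketch):
#   mke2fs -j -b 4096 -E stride=8 /dev/md0
```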

Dex



scrub was Re: RAID5 Problem - $1000 reward for help

2006-09-17 Thread Dexter Filmore
On Sunday, 17 September 2006 at 13:36, you wrote:
 On 9/17/06, Ask Bjørn Hansen [EMAIL PROTECTED] wrote:
   It's recommended to use a script to scrub the raid device regularly,
   to detect sleeping bad blocks early.
 
  What's the best way to do that?  dd the full md device to /dev/null?

 echo check > /sys/block/md?/md/sync_action

 Distros may have cron scripts to do this right.

 And you need a fairly recent kernel.

Does this test stress the disks a lot, like a resync? 
How long does it take? 
Can I use it on a mounted array?
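(To partly answer from the md sysfs interface: 'check' reads every stripe much 
like a resync but normally writes nothing, so it can run on a mounted array. A 
sketch, assuming the array is md0:)

```shell
# Kick off a read-only scrub of md0 (needs root).
echo check > /sys/block/md0/md/sync_action

# Progress shows up in /proc/mdstat like a resync; when it finishes,
# the count of stripes whose parity didn't match is here:
cat /sys/block/md0/md/mismatch_cnt
```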



Re: access array from knoppix

2006-09-16 Thread Dexter Filmore
 His advice 
 was valid.

Maybe valid, but not helping with my problem, since the problem is/was 
that /dev/md0 didn't exist at all. mdadm -C won't create device nodes.

But I've figured out the workaround in the meantime, so it doesn't matter 
anymore.
(In case someone wants to know: mknod in /lib/udev/devices does it on a 
hard-disk install; I guess it could work in /dev on Knoppix too, haven't tried 
yet.)




Re: Slackware and RAID

2006-09-16 Thread Dexter Filmore
On Saturday, 16 September 2006 at 19:26, Bill Davidsen wrote:
 Dexter Filmore wrote:
 Is anyone here who runs a soft raid on Slackware?
 Out of the box there are no raid scripts, the ones I made myself seem a
  little rawish, barely more than mdadm --assemble/--stop.

 I'm pretty much off Slack now, but I have run, the scripts you describe
 are about 2/3 of what you need, see the thread(s) here about monitoring.
 mdadm doesn't need a lot of direction...

What's the remaining third?
I fumbled the scripts into rc.S and rc.6. The reason I ask is that the array 
has degraded about six times in the few months I've run it, and I can't figure 
out why. The only thing I know is that it degrades somewhere in the reboot 
process, so I suspect it might not shut down properly.

Dex




Re: Slackware and RAID

2006-09-16 Thread Dexter Filmore
  What's the remaining third?
  I fumbled it into rc.S and rc.6, reason why I ask is that array degraded
  about 6 times in the few months I run it and I can't figure why. Only
  thing I know is that it degrades somewhere in the reboot process, so I
  suspect it might not properly shutdown.

 Have you tried simply setting the partition types to 0xFD and relying on
 the kernel auto-detect?

 I read here that there seems to be some resistance to this method though,
 but I've been using it for many years without any issues - however I'm not
 doing any thing clever like LVM or RAID on RAID (RAID1+0, etc.)

That's what I actually do: one big 0xFD partition per disk, RAID5, LVM2, 3x 
XFS.
I start the array with
mdadm --assemble /dev/md0
and stop it with 
mdadm --stop --scan
after the filesystems are unmounted and LVM is deactivated.
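(For completeness, the teardown ordering matters: nothing may hold the array 
when it's stopped. A sketch of the rc.6 sequence just described; the mount 
points and volume group name are hypothetical:)

```shell
# Teardown sketch: unmount, deactivate LVM, then stop the array.
umount /data1 /data2 /data3     # whatever is mounted from the VG
vgchange -a n vg0               # deactivate the volume group on md0
sync
mdadm --stop /dev/md0           # what --stop --scan does for a single array
```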




Re: access *exisiting* array from knoppix

2006-09-14 Thread Dexter Filmore
On Thursday, 14 September 2006 at 17:58, Tuomas Leikola wrote:
   mdadm --assemble /dev/md0 /dev/hda1 /dev/hdb1 # i think, man mdadm
 
  Not what I meant: there already exists an array on a file server that was
  created from the server os, I want to boot that server from knoppix
  instead and access the array.

 exactly what --assemble does. looks at disks, finds raid components,
 assembles an array out of them (meaning, tells the kernel where to
 find the pieces) and starts it.

 no? did you try? read the manual?

How about you read the rest of the thread, wisecracker?




Re: access *existing* array from knoppix

2006-09-13 Thread Dexter Filmore
On Wednesday, 13 September 2006 15:48, Rob Bray wrote:
  On Tuesday, 12 September 2006 16:08, Justin Piszcz wrote:
  /dev/MAKEDEV /dev/md0
 
  also make sure the SW raid modules etc are loaded if necessary.
 
  Won't work, MAKEDEV doesn't know how to create [/dev/]md0.

 mknod /dev/md0 b 9 0
 perhaps?

Uh huh, go and try. Next boot it's gone again.
Running that command in /lib/udev/devices, however, made it permanent.

Took way too long to figure out, though. If the kernel devs feel like they
need to throw out a working system, they'd better release proper docs for
the replacement. And stop screwing with the syntax.
(Yes, wrong list, I know...)




access array from knoppix

2006-09-12 Thread Dexter Filmore
When running Knoppix on my file server, I can't mount /dev/md0 simply because 
it isn't there. 
Am I guessing right that I need to recreate the array?
How do I gather the necessary parameters?



Re: access *existing* array from knoppix

2006-09-12 Thread Dexter Filmore
On Tuesday, 12 September 2006 15:29, Justin Piszcz wrote:
 fdisk -l

 then you have to assemble the array

 mdadm --assemble /dev/md0 /dev/hda1 /dev/hdb1 # i think, man mdadm

Not what I meant: there already exists an array on a file server that was 
created from the server os, I want to boot that server from knoppix instead 
and access the array.



Re: access *existing* array from knoppix

2006-09-12 Thread Dexter Filmore
On Tuesday, 12 September 2006 16:08, Justin Piszcz wrote:
 /dev/MAKEDEV /dev/md0

 also make sure the SW raid modules etc are loaded if necessary.

Won't work, MAKEDEV doesn't know how to create [/dev/]md0.




Re: serious trouble - raid5 won't assemble properly, vgscan sees no volumes - solved

2006-09-03 Thread Dexter Filmore
Array is online, degraded for the moment but I can access the file systems for 
backups.

I passed -A --force to mdadm, seems that did the trick. 

What puzzles me still is that I had a degraded array for the third time now 
and never could tell why it happened in the first place.
This time the machine crashed for no apparent reason, though I suspect heat 
issues. I'll investigate and possibly move the disks to another machine.



serious trouble - raid5 won't assemble properly, vgscan sees no volumes

2006-09-02 Thread Dexter Filmore
My file server crashed *again* (I think some hardware is faulty).
Now at boot time the array is not assembled, I get an I/O error.

/proc/mdstat looks like this:

Personalities : [raid5]
md0 : inactive sda1[0] sdd1[3] sdc1[2]
  732563712 blocks

unused devices: <none>

I am missing something like [] or so - what's going on here?

Before the server crash I had [U_UU] and vgscan didn't see any volumes.

Dex





Re: RAID6 fallen apart

2006-08-28 Thread Dexter Filmore
On Monday, 28 August 2006 04:03, you wrote:
 The easiest thing to do is simply recreate the array, making sure to
 have the drives in the correct order, and any options (like chunk
 size) the same.  This will not hurt the data (if done correctly).

First time I've heard this. Good to know.
I thought recreate implied a resync.



Re: invalid superblock - *again*

2006-08-22 Thread Dexter Filmore
On Tuesday, 22 August 2006 03:18, Neil Brown wrote:
 
  Most notable: [   38.536733] md: kicking non-fresh sdd1 from array!
  What does this mean?

 It means that the 'event' count on sdd1 is old compared to that on
 the other partitions.  The most likely explanation is that when the
 array was last running, sdd1 was not part of it.

Event count - so: a certain command or set of instructions was sent to all 
disks, but one didn't get it, hence the raid module can't ensure that the 
data on that disk is consistent with the rest of the array?

  What's happening here? What can I do? Do I have to readd sdd and resync?
  Or is there an easier way out? What causes these issues?

 Yes, you need to add sdd1 back to the array and it will resync.

Ok, if that's what it takes.

 I would need some precise recent history of the array to know why this
 happened.  That might not be easy to come by.

Depends on what exactly you mean. Disk age? smart data? Hardware types? Logs? 
OS?

I don't have more than a few vague guesses about what might have happened. 
First of all it might be possible that the file systems on the array were not 
unmounted properly during shutdown because a remote NFS mount was hogging 
them. If that were the case, LVM couldn't have shut down properly, then the md 
device wouldn't have stopped and the machine just powered down.
That would explain it.
Slackware has no raid runlevel scripts out of the box, I wrote them myself.
Maybe such conditions are not handled properly.

What speaks against that theory is that nfsd is stopped before lvm and raid 
are handled.
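For reference, the re-add step Neil describes amounts to a single mdadm call; a dry-run sketch using the device names from this thread (run() only echoes, so nothing is executed):

```shell
# Dry-run sketch: re-add the kicked member so the resync starts.
run() { echo "+ $*"; }

run mdadm /dev/md0 --add /dev/sdd1   # kicks off the resync
run cat /proc/mdstat                 # then watch the recovery progress
```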



Re: invalid superblock - *again*

2006-08-21 Thread Dexter Filmore
On Monday, 21 August 2006 13:04, Dexter Filmore wrote:
I seriously don't know what's going on here.
I upgraded packages and rebooted the machine to find that now disk 4 of 4 is 
not assembled.

Here's dmesg and mdadm -E 

* dmesg **
[   38.439644] md: md0 stopped.
[   38.536089] md: bind<sdb1>
[   38.536301] md: bind<sdc1>
[   38.536501] md: bind<sdd1>
[   38.536702] md: bind<sda1>
[   38.536733] md: kicking non-fresh sdd1 from array!
[   38.536751] md: unbind<sdd1>
[   38.536765] md: export_rdev(sdd1)
[   38.536794] raid5: device sda1 operational as raid disk 0
[   38.536812] raid5: device sdc1 operational as raid disk 2
[   38.536831] raid5: device sdb1 operational as raid disk 1
[   38.537453] raid5: allocated 4195kB for md0
[   38.537471] raid5: raid level 5 set md0 active with 3 out of 4 devices, algorithm 2
[   38.537499] RAID5 conf printout:
[   38.537513]  --- rd:4 wd:3 fd:1
[   38.537528]  disk 0, o:1, dev:sda1
[   38.537543]  disk 1, o:1, dev:sdb1
[   38.537558]  disk 2, o:1, dev:sdc1
*

Most notable: [   38.536733] md: kicking non-fresh sdd1 from array!
What does this mean?

* mdadm -E /dev/sdd1 
/dev/sdd1:
  Magic : a92b4efc
Version : 00.90.02
   UUID : 7f103422:7be2c2ce:e67a70be:112a2914
  Creation Time : Tue May  9 01:11:41 2006
 Raid Level : raid5
Device Size : 244187904 (232.88 GiB 250.05 GB)
 Array Size : 732563712 (698.63 GiB 750.15 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

Update Time : Tue Aug 22 01:42:36 2006
  State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
   Checksum : 33b2d59b - correct
 Events : 0.765488

 Layout : left-symmetric
 Chunk Size : 32K

      Number   Major   Minor   RaidDevice State
this     3       8       49        3      active sync   /dev/sdd1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8       49        3      active sync   /dev/sdd1

*

What's happening here? What can I do? Do I have to readd sdd and resync? Or is 
there an easier way out? What causes these issues?




invalid superblock - why?

2006-08-20 Thread Dexter Filmore
raid5, 4 sata disks, slackware with 2.6.14.6.
Yesterday the machine hung, so I used MagicKey to sync, remount read only and 
reboot.
After that the third disk was not assembled into the array.
dmesg came up with:

[   34.652268] md: md0 stopped.
[   34.742673] md: bind<sdb1>
[   34.742900] md: invalid superblock checksum on sdc1
[   34.742920] md: sdc1 has invalid sb, not importing!
[   34.742941] md: md_import_device returned -22
[   34.743257] md: bind<sdd1>
[   34.743464] md: bind<sda1>
[   34.743506] raid5: device sda1 operational as raid disk 0
[   34.743524] raid5: device sdd1 operational as raid disk 3
[   34.743542] raid5: device sdb1 operational as raid disk 1
[   34.744128] raid5: allocated 4195kB for md0
[   34.744146] raid5: raid level 5 set md0 active with 3 out of 4 devices, algorithm 2
[   34.744173] RAID5 conf printout:
[   34.744187]  --- rd:4 wd:3 fd:1
[   34.744202]  disk 0, o:1, dev:sda1
[   34.744216]  disk 1, o:1, dev:sdb1
[   34.744231]  disk 3, o:1, dev:sdd1

So after throwing a minor fit (my first degraded array) I started 
investigating options. When I was sure the disk was alright I re-added it and 
resyncing started. 
It's currently still at it.

Question: what happened to that superblock? Just because the machine didn't 
shut down properly? The disk seems to be ok.



Re: second controller: what will my discs be called, and does it matter?

2006-07-17 Thread Dexter Filmore
On Monday, 17 July 2006 20:28, Bill Davidsen wrote:
 Next question: assembling by UUID, does that matter at all?

 No. There's the beauty of it.

That's what I needed to hear.


 (And while talking UUID - can I safely migrate to a udev-kernel? Someone
  on this list recently ran into trouble because of such an issue.)

 You shouldn't lose data unless you panic at the first learning
 experience and do something without thinking of the results. I would
 convert to UUID first, obviously.

Already happened. Biggest problem so far is coming up with a sane backup 
solution for 500gigs of data :P



Re: PLEASE HELP ... raid5 array degraded - how fix it?

2006-07-15 Thread Dexter Filmore
On Wednesday, 12 July 2006 06:10, you wrote:

Care to enlighten the rest of us what did the trick?

Dex

 please disregard this email .. after doing more google research i have
 re-assembled the array and once again am a true believer of software
 raid

 BEHOLD THE POWER OF MD



second controller: what will my discs be called, and does it matter?

2006-07-06 Thread Dexter Filmore
Currently I have 4 discs on a 4 channel sata controller which does its job 
quite well for 20 bucks. 
Now, if I wanted to grow the array I'd probably go for another one of these.

How can I tell if the discs on the new controller will become sd[e-h] or if 
they'll be the new a-d and push the existing ones back?

Next question: assembling by UUID, does that matter at all?
(And while talking UUID - can I safely migrate to a udev-kernel? Someone on 
this list recently ran into trouble because of such an issue.)

Dex



Re: Large single raid... - XFS over NFS woes

2006-06-27 Thread Dexter Filmore
On Friday, 23 June 2006 14:50, you wrote:
 Strange that whatever the filesystem you get equal numbers of people
 saying that
 they have never lost a single byte to those who have had horrible
 corruption and
 would never touch it again. We stopped using XFS about a year ago because
 we were getting kernel stack space panics under heavy load over NFS. It
 looks like
 the time has come to give it another try.

I'd tread on XFS land cautiously - while I always favored XFS over Reiser 
(which had way too many issues in its stable releases for my liking), it has 
some drawbacks. First, you cannot shrink it, so LVM becomes kinda pointless.
But especially with NFS I ran into trouble myself. 
Copying large amounts of data sometimes stalls and has eventually locked up 
the machine. 
Plus, recently I had some weird filesystem corruption like /root getting lost 
or similar. Running 2.6.14.1 and NFS3.

If performance is not top priority, stick to ext3 and create 2 partitions or 
volume groups.

My 0.02$ 

Dex

P.S.: How about JFS..? I don't know if it can resize or how stable it is, but 
I can't remember hearing notably more or fewer complaints about it than about 
any other journaling fs.




which CPU for XOR?

2006-06-09 Thread Dexter Filmore
What type of operation is XOR anyway? Should be ALU, right?
So - what CPU is best for software raid? One with high integer processing 
power? 

Dex
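Right - RAID5 parity is just a bytewise XOR across the data blocks, and reconstruction is XOR again, so it is all plain integer ALU work. A toy sketch with three single-byte "blocks" (the byte values are arbitrary):

```shell
# Toy RAID5 parity: parity = d0 XOR d1 XOR d2 (pure integer ALU work).
d0=$((0xA5)); d1=$((0x3C)); d2=$((0xF0))
p=$(( d0 ^ d1 ^ d2 ))

# Lose d1, rebuild it from the parity block and the survivors.
r1=$(( p ^ d0 ^ d2 ))
echo "parity=$p recovered=$r1"
```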




Re: 4 disks in raid 5: 33MB/s read performance?

2006-05-25 Thread Dexter Filmore
 On Monday May 22, [EMAIL PROTECTED] wrote:
  I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s.
  Isn't that a little slow?
  System is a sil3114 4 port sata 1 controller with 4 samsung spinpoint
  250GB, 8MB cache in raid 5 on a Athlon XP 2000+/512MB.

 Yes, read on raid5 isn't as fast as we might like at the moment.

 It looks like you are getting about 11MB/s of each disk which is
 probably quite a bit slower than they can manage (what is the
 single-drive read speed you get dding from /dev/sda or whatever).

 You could try playing with the readahead number (blockdev --setra/--getra).
 I'm beginning to think that the default setting is a little low.

Changed from 384 to 1024, no improvement.


 You could also try increasing the stripe-cache size by writing numbers
 to
/sys/block/mdX/md/stripe_cache_size

Actually, there's no directory /sys/block/md0/md/ here. Can I find that in 
proc somewhere? And what are sane numbers for this setting?
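On the sizing question: each stripe_cache_size entry holds one page per member device, so the memory cost is roughly stripe_cache_size × nr_disks × page size. A quick sanity check for a 4-disk array (the 4096 figure is only an example value, not a recommendation):

```shell
stripe_cache_size=4096   # example setting, in cache entries
nr_disks=4
page_kb=4                # 4 KB pages on x86
cache_mb=$(( stripe_cache_size * nr_disks * page_kb / 1024 ))
echo "${cache_mb} MB"    # memory pinned by the stripe cache
```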

 I wonder if your SATA  controller is causing you grief.
 Could you try
dd if=/dev/SOMEDISK of=/dev/null bs=1024k count=1024
 and then do the same again on all devices in parallel
 e.g.
dd if=/dev/SOMEDISK of=/dev/null bs=1024k count=1024 
dd if=/dev/SOMEOTHERDISK of=/dev/null bs=1024k count=1024 
...

 4112 pts/0R  0:00 dd if /dev/sda of /dev/null bs 1024k count 1024
 4113 pts/0R  0:00 dd if /dev/sdb of /dev/null bs 1024k count 1024
 4114 pts/0R  0:00 dd if /dev/sdc of /dev/null bs 1024k count 1024
 4115 pts/0R  0:00 dd if /dev/sdd of /dev/null bs 1024k count 1024
 4116 pts/0R+ 0:00 ps ax

1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 34.5576 seconds, 31.1 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 36.073 seconds, 29.8 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 40.5109 seconds, 26.5 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 40.5054 seconds, 26.5 MB/s

(dd output translated from German.)

A single disk pumps out 65-70MB/s. Since they are on a 32-bit PCI controller, 
the combined speed when reading from all four disks at once pretty much maxes 
out the 133MB/s PCI limit. (I'm surprised it comes so close. That controller 
works pretty well for 18 bucks.)

Dex
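The back-of-the-envelope numbers support that: 32-bit/33MHz PCI peaks at roughly 132 MB/s, and the four parallel dd runs above sum to about 113 MB/s (rounding each rate down to whole MB/s), so the shared bus, not the drives, is the ceiling:

```shell
bus_mbs=$(( 33 * 4 ))       # 33 MHz x 4 bytes/transfer ~ 132 MB/s theoretical peak
disk_mbs="31 30 26 26"      # per-disk MB/s from the parallel dd runs, rounded
total=0
for d in $disk_mbs; do total=$(( total + d )); done
echo "aggregate ${total} MB/s vs bus ceiling ${bus_mbs} MB/s"
```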




Re: raid5 resize in 2.6.17 - how will it be different from raidreconf?

2006-05-22 Thread Dexter Filmore
  Will it be less risky to grow an array that way?

 It should be.  In particular it will survive an unexpected reboot (as
 long as you don't lose any drives at the same time), which I don't
 think raidreconf would.
 Testing results so far are quite positive.

Write cache comes to mind - did you test power fail scenarios?

  (And while talking of that: can I add for example two disks and grow
  *and* migrate to raid6 in one sweep or will I have to go raid6 and then
  add more disks?)

 Adding two disks would be the preferred way to do it.
 Add only one disk and going to raid6 is problematic because the
 reshape process will be over-writing live data the whole time, making
 crash protection quite expensive.
 By contrast, when you are expanding the size of the array, after the
 first few stripes you are writing to an area of the drives where there
 is no live data.

Let me see if I got this right: if I add *two* disks and go from raid 5 to 6 
with raidreconf, no live data needs to be overwritten and in case something 
fails I will still be able to assemble the old array..?




Re: 4 disks in raid 5: 33MB/s read performance?

2006-05-22 Thread Dexter Filmore
On Monday, 22 May 2006 22:31, Brendan Conoboy wrote:
 Dexter Filmore wrote:
  I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s.
  Isn't that a little slow?
  System is a sil3114 4 port sata 1 controller with 4 samsung spinpoint
  250GB, 8MB cache in raid 5 on a Athlon XP 2000+/512MB.

 Which SATA driver is being used?  The ata_piix driver, for instance, has
 the same multi-disk performance penalties as any ATA controller would
 have.

 -Brendan ([EMAIL PROTECTED])

lsmod lists sata_sil.




raid5 resize in 2.6.17 - how will it be different from raidreconf?

2006-05-21 Thread Dexter Filmore
How will the raid5 resize in 2.6.17 be different from raidreconf? 
Will it be less risky to grow an array that way?
Will it be possible to migrate raid5 to raid6?

(And while talking of that: can I add for example two disks and grow *and* 
migrate to raid6 in one sweep or will I have to go raid6 and then add more 
disks?)

Dex




Re: softraid and multiple distros

2006-05-16 Thread Dexter Filmore
 An alternative is to use the --size option of mdadm to make the array
 slightly smaller than the smallest drive.  

timtowtdi, as usual

   However as you should be listing the uuids in /etc/mdadm.conf, any
  Umm... yeah, should I?
 What else would you use to uniquely identify the arrays?  Not device
 names I hope.

Right now all that's in mdadm.conf is:

DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1
MAILADDR [EMAIL PROTECTED]

Something important missing..?

  Should get me some sedatives for the day when this all explodes :P

 Just make sure it happens on your day off, then someone else will
 need the sedatives :-)

Yeah, the landlord.
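The one thing arguably missing is UUID-based ARRAY lines, as discussed elsewhere in these threads; `mdadm --detail --scan` prints them ready to paste. A sketch of what the conf could look like (the uuid shown is the one mdadm -E reports later in this archive; yours will differ):

```
DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 uuid=7f103422:7be2c2ce:e67a70be:112a2914
MAILADDR [EMAIL PROTECTED]
```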



Re: softraid and multiple distros

2006-05-16 Thread Dexter Filmore
 So in the very simple one-array situation, this is probably safe.
 But it doesn't generalise.  If you have two array of distinct devices,
 then something like

  ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1
  ARRAY /dev/md1 devices=/dev/sde1,/dev/sdf1,/dev/sdg1,/dev/sdh1

 is not safe.  If /dev/sda1 fails, you pull it out and reboot, then md1
 won't be assembled properly as everything will get renamed.
 In general it is safer to use UUIDs

   DEVICE /dev/sd[a-z][0-9]
   ARRAY /dev/md0 uuid=whatever
   ARRAY /dev/md1 uuid=whatever:else

 Hope that makes it reasonably clear.

Absolutely, thanks.




Re: softraid and multiple distros

2006-05-15 Thread Dexter Filmore
On Sunday, 14 May 2006 20:42, you wrote:
  Now the devices have all two superblocks, the one left from the first try
  which are now kinda orphaned and those now active.
  Can I trust mdadm to handle this properly on its own?

 I'm not sure what properly means.  you should not leave around 0xfd

Well, properly means properly - as in the opposite of f!ck up.
Here: choose the superblocks from the partitions instead of from the entire 
hdd.
Everything is fine with those fs partitions and the array, but there seem to 
be some old superblocks lying around behind those partitions.

 or zero the superblock or both...

More questions: is the raid superblock the same as an ordinary file system 
superblock? 
Zero the superblock - the orphaned one, I assume? This is not a case of zero 
it and Linux makes a new one, is it?

Dex
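For the orphaned whole-disk superblocks, mdadm has a dedicated knob: --zero-superblock erases the md superblock on the named device (here the bare disks, not the partitions the live array uses). A dry-run sketch, since pointing it at the wrong device would destroy a real superblock (run() only echoes):

```shell
run() { echo "+ $*"; }   # echo only - do not run this blind

# Erase the leftover superblocks on the whole-disk devices;
# the active array lives on /dev/sd[abcd]1, which is left alone.
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    run mdadm --zero-superblock "$disk"
done
```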




Re: softraid and multiple distros

2006-05-15 Thread Dexter Filmore
 I always use entire disks if I want the entire disks raided (sounds
 obvious, doesn't it...)  I only use partitions when I want to vary the
 raid layout for different parts of the disk (e.g. mirrored root, mirrored
 swap, raid6 for the rest).   But that certainly doesn't mean it is
 wrong to use partitions for the whole disk.

The idea behind this is: let's say a disk fails and you get a replacement, 
but it has a different geometry or a few blocks less - it won't work. 
Even the same disk model might vary after a while.
So I made 0xfd partitions of size (whole disk minus a few megs).


  Now the devices have all two superblocks, the one left from the first try
  which are now kinda orphaned and those now active.
  Can I trust mdadm to handle this properly on its own?

 You can tell mdadm where to look.  If you want to be sure that it
 won't look at entire drives, only partitions, then a line like
DEVICES /dev/[hs]d*[0-1]
 in /etc/mdadm.conf might be what you want.
 However as you should be listing the uuids in /etc/mdadm.conf, any

Umm... yeah, should I?

 superblock with an unknown uuid will easily be ignored.

 If you are relying on 0xfd autodetect to assemble your arrays, then
 obviously the entire-disk superblock will be ignored (because it
 won't be in the right place in any partition).

So mdadm --assemble --scan is fine for my scenario even with those orphaned 
superblocks.

Should get me some sedatives for the day when this all explodes :P



Re: softraid and multiple distros

2006-05-14 Thread Dexter Filmore
On Sunday, 14 May 2006 16:50, you wrote:
  What do I need to do when I want to install a different distro on the
  machine with a raid5 array?
  Which files do I need? /etc/mdadm.conf? /etc/raittab? both?

 MD doesn't need any files to function, since it can auto-assemble
 arrays based on their superblocks (for partition-type 0xfd).

I see. Now an issue arises that someone else here mentioned: 
My first attempt was to use the entire disks, then I was hinted that this 
approach wasn't too hot so I made partitions. 
Now the devices have all two superblocks, the one left from the first try 
which are now kinda orphaned and those now active. 
Can I trust mdadm to handle this properly on its own?

Dex




xfs or ext3?

2006-05-10 Thread Dexter Filmore
Since xfs is not shrinkable (if this information is no longer correct, let 
me know), I'm considering ext3. 
Do I need to know anything? Will there be a noticeable performance impact 
(softraid5 on four sata disks, athlon xp2000+/512mb)?
Do I have to provide a stride parameter like for ext2?
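For what it's worth, stride is just the md chunk size divided by the filesystem block size; a small sketch assuming a 32 KiB chunk and 4 KiB blocks (older mke2fs spells the option -R stride=, newer ones -E stride=):

```shell
# Assumed values: 32 KiB md chunk, 4 KiB ext3 block size.
CHUNK_KIB=32
BLOCK_KIB=4
STRIDE=$((CHUNK_KIB / BLOCK_KIB))
echo "$STRIDE"
# Then, roughly (not run here): mke2fs -j -E stride=$STRIDE /dev/md0
```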

Dex



Re: slackware -current softraid5 boot problem - additional info

2006-05-09 Thread Dexter Filmore
Am Dienstag, 9. Mai 2006 07:50 schrieb Luca Berra:
 you don't give a lot of information about your setup,

You're right there; I was a bit off track yesterday from tinkering till late 
at night - info below.

 in any case it could be something like udev and the /dev/sdd device node
 not being available at boot?

Ok: 
Slackware-current with kernel 2.6.14.6, *no* udev, plain old hotplug
I had to put the raid start script in a reasonable place myself (not preconfed 
in Slack), so I have yet to figure out whether it sees /etc/mdadm.conf when the script is 
called. (If presence of mdadm.conf is totally uninteresting, let me know, I 
just started on raid.)
The other disks are seen fine, and since they are all the same type on the 
same controller, there's no reason why this one shouldn't be seen as well.
(Unless for some reason mdadm talks to the *last* disk first and then stops - 
otherwise it should complain about sda instead.)


* mdadm -E info *


# mdadm -E /dev/sdd
/dev/sdd:
  Magic : a92b4efc
Version : 00.90.02
   UUID : db7e5b65:e35c69dc:7c267a5a:e676c929
  Creation Time : Mon May  8 00:05:16 2006
 Raid Level : raid5
Device Size : 244198464 (232.89 GiB 250.06 GB)
 Array Size : 732595392 (698.66 GiB 750.18 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

Update Time : Tue May  9 00:43:46 2006
  State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
   Checksum : 61f0ffd6 - correct
 Events : 0.24796

 Layout : left-symmetric
 Chunk Size : 32K

  Number   Major   Minor   RaidDevice State
this 3   8   483  active sync   /dev/sdd

   0 0   800  active sync   /dev/sda
   1 1   8   161  active sync   /dev/sdb
   2 2   8   322  active sync   /dev/sdc
   3 3   8   483  active sync   /dev/sdd
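As a quick sanity check of the numbers above, RAID5 usable capacity is (n-1) times the per-device size, which reproduces the reported Array Size:

```shell
# Values copied from the mdadm -E output above (sizes are in 1 KiB blocks).
DEVICE_SIZE=244198464
RAID_DEVICES=4
echo $(( DEVICE_SIZE * (RAID_DEVICES - 1) ))   # prints 732595392
```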

* mdstat *

Once I started the array manually (which works fine), mdstat looks like:

# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sda1[0] sdd1[3] sdc1[2] sdb1[1]
  732563712 blocks level 5, 32k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>



Re: softraid5 boot problem - partly my fault, solved

2006-05-09 Thread Dexter Filmore
Mystery solved: had to probe another module. 
Wait, wait, I can defend myself :)

What led me to believe the controller was autoprobed during boot is that mdadm 
complained about *sdd*, but not about sd[abc], hence I assumed [abc] were all 
fine.
Plus, I didn't have to probe the module manually after boot was completed 
(it appears that at that point some other module pulled it in as a dependency).

So - is that how mdadm (or the kernel?) handles raid: is the last disk checked 
first by design?

Dex



replace disk in raid5 without linux noticing?

2006-04-19 Thread Dexter Filmore
Let's say a disk in an array starts yielding smart errors but is still 
functional.
So instead of waiting for it to fail completely and start a sync and stress 
the other disks, could I clone that disk to a fresh one, put the array 
offline and replace the disk? 
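The clone itself is just a bit-for-bit copy taken with the array stopped. Here is a small demonstration of the idea using ordinary files in place of the disks, so it runs without root (the real device names and mdadm steps are assumptions about the setup):

```shell
# With real disks the sequence would be roughly:
#   mdadm --stop /dev/md0
#   dd if=/dev/<failing> of=/dev/<fresh> bs=1M conv=noerror,sync
# Below, two temp files stand in for the disks so this runs without root.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1M count=4 2>/dev/null      # fake "old disk"
dd if="$src" of="$dst" bs=1M conv=noerror,sync 2>/dev/null  # the clone
if cmp -s "$src" "$dst"; then clone_ok=yes; else clone_ok=no; fi
rm -f "$src" "$dst"
echo "$clone_ok"
```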

 


softraid: controller? 5-6? grow?

2006-04-04 Thread Dexter Filmore
Hi,

I'm currently planning my first raid array.
I intend to go for softraid (budget's the limiting factor), not sure about 5 
or 6 yet.

Plan so far: build a raid5 from 3 disks, later add a disk and reconf to raid6.
Question: is that possible at all? Can a raid5 be reconfed to a raid6 with 
raidreconf?
Next Question: how stable is it? Will I likely get away without making backups 
or is there like a 10% chance of failure?
Other precautions advised?

Next: Let's say I stick to raid5 for the moment and a disk dies. I heard that 
some people had another disk killed during resync because of the extra 
stress, hence losing the entire raid. 
This was attributed to the circumstance that all disks were from the same 
production series. True? Should I rather go for disks from different months? 
Or even brands? Are there vendors who ship sets fitting that purpose?

Another thing: I need a recommendation for a sATA controller.
Requirements: cheap, PCI, 4 ports sATA II. I had a look at a Sil3114 
controller, then noticed it was sATA-1. I don't even aim for the performance, 
but simply for the connectors. sATA-1 connectors are said to be troublesome 
because they're fragile. If that can be neglected, let me know.

System is an nF2/AthlonXP2000+, PCI32bit, right now Slackware, 2.6.14.
Might become debian.

Thanks in advance,

Dex
