through RS-232 or USB, and if a power-down
event is detected, issue hibernate or shutdown. Currently I issue
hibernate in this case; it works pretty well on 2.6.22 and up.
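A rough sketch of that watchdog idea. The status file and the "ONBATT" marker are entirely hypothetical (in practice a monitor daemon such as nut or apcupsd handles the RS-232/USB protocol side); writing "disk" to /sys/power/state is the kernel's suspend-to-disk interface.

```shell
#!/bin/sh
# Hypothetical sketch: poll a status file written by a UPS monitor
# and hibernate when the UPS reports it is running on battery.
# /var/run/ups.status and ONBATT are made-up names for illustration.
while true; do
    if grep -q ONBATT /var/run/ups.status 2>/dev/null; then
        echo disk > /sys/power/state   # suspend-to-disk (hibernate)
    fi
    sleep 10
done
```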
--
Janek Kozicki |
Beolach said: (by the date of Mon, 18 Feb 2008 05:38:15 -0700)
On Feb 17, 2008 10:26 PM, Janek Kozicki [EMAIL PROTECTED] wrote:
Conway S. Smith said: (by the date of Sun, 17 Feb 2008 07:45:26 -0700)
Well, I was reading that LVM2 had a 20%-50% performance penalty,
http://gentoo
(XFS, JFS, whatever) had that much
testing than ext* filesystems had.
Question to other people here - what is the maximum partition size
that ext3 can handle? Am I correct that it is 4 TB?
And to go above 4 TB we need to use ext4dev, right?
best regards
--
Janek Kozicki
or such.
--
Janek Kozicki |
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
redundancy means that either
yeah, sorry. I went too far.
I haven't had an IO controller failure so far. But I've read about
one on this list, and all data was lost.
You're right, it's better to duplicate the server with a backup copy,
so it is independent of the original one.
--
Janek Kozicki
what is the penalty, but I'm totally sure I didn't
notice it.
--
Janek Kozicki |
this benchmark?
Do you use anything else for benchmarks?
E.g. 'zcav /dev/sda result'?
I'm asking because I want to make some local benchmarks to determine
the best chunk size for my HDD setup.
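One rough, destructive approach to that: rebuild a scratch array at several chunk sizes and time a streaming write on each. The device names and sizes below are placeholders, and this wipes the listed partitions - a sketch only, not a polished benchmark.

```shell
#!/bin/sh
# DESTRUCTIVE sketch: wipes /dev/sdb1 and /dev/sdc1 (placeholders).
# Creates a raid10 at several chunk sizes and times a 1 GiB write.
for chunk in 64 128 256 512; do
    mdadm --create /dev/md9 --level=10 --chunk=$chunk \
          --raid-devices=2 /dev/sdb1 /dev/sdc1 --run
    echo "chunk=${chunk}K:"
    dd if=/dev/zero of=/dev/md9 bs=1M count=1024 oflag=direct
    mdadm --stop /dev/md9
done
```

zcav would then give the sequential-read profile for each layout; random I/O needs a separate tool.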
thanks in advance
--
Janek Kozicki
Bill Davidsen said: (by the date of Wed, 06 Feb 2008 13:16:14 -0500)
Janek Kozicki wrote:
Justin Piszcz said: (by the date of Tue, 5 Feb 2008 17:28:27 -0500
(EST))
writing on raid10 is supposed to be half the speed of reading. That's
because it must write to both mirrors
MDADM.
what is the update?
- you installed a new version of mdadm?
- you installed new kernel?
- something else?
- what was the version before, and what version is now?
- can you downgrade to previous version?
best regards
--
Janek Kozicki
don't erase it - the info about the raid array will still be found
automatically.
--
Janek Kozicki |
Michael Tokarev said: (by the date of Tue, 05 Feb 2008 16:52:18 +0300)
Janek Kozicki wrote:
I'm not using mdadm.conf at all.
That's wrong, as you need at least something to identify the array
components.
I was afraid of that ;-) So, is that a correct way to automatically
generate the array information in the config file?
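For the record, the usual way to regenerate those ARRAY lines from the on-disk superblocks is (Debian config path shown; other distributions may use /etc/mdadm.conf):

```shell
# Scan all devices for md superblocks and append ARRAY lines
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
```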
whew, that was a long read. Thanks for the detailed analysis. I hope
that your conclusion is correct, since I have no way to decide this
myself. My knowledge is not enough here :)
best regards
--
Janek Kozicki
for this?
but input was closer to RAID5 speeds/did not seem affected (~550MiB/s).
reading in raid5 and raid10 is supposed to be close to raid-0 speed.
--
Janek Kozicki |
performance in raid10 on three discs ?
--
Janek Kozicki |
errors and
add updates.
--
Janek Kozicki |
me some real figures.
yes... that would be great if someone could spend some time
benchmarking all possible configurations :-)
thanks for your help!
--
Janek Kozicki
vger.kernel.org. Or even the
kernel.org itself. Mailing list admins - can you do it?
best regards.
--
Janek Kozicki |
benchmark results.
How does overall performance change with the number of available drives?
Perhaps Raid-0 is best for 2 drives, while Raid-10 is best for 3, 4
and more drives?
best regards
--
Janek Kozicki |
and random read/writes.
But I would like to have real test numbers.
Me too. Thanks. Are there any other raid levels that may count here?
Raid-10 with some other options?
--
Janek Kozicki |
mdadm'). Does it exist only in debian packages, or what?
With 'man 4 md' I found a little sparse info about raid10, but I
still don't get it.
--
Janek Kozicki |
Michael Tokarev said: (by the date of Fri, 21 Dec 2007 23:56:09 +0300)
Janek Kozicki wrote:
what's your kernel version? I recall that recently there has been
some work regarding load balancing.
It was in my original email:
The kernel is 2.6.23
Strange that I missed the new raid10
really use a way to fix that corruption. :(
ouch. To be honest, I subscribed here just a month ago, so I'm not
sure. But I haven't seen other bug reports here so far.
I was expecting that there would be some bugzilla?
--
Janek Kozicki
this - the xfs or raid? Alternatively, cross-report to both places
and write in the bug report that you are not sure on which side the
bug is.
best regards
--
Janek Kozicki |
conclude that the server is seriously misconfigured.
Apologies for my stance. Can anyone comment on this?
--
Janek Kozicki |
Justin Piszcz wrote:
Naturally, when it is reset, the device is disconnected and then
re-appears; when MD sees this it rebuilds the array.
The least you can do is add an internal bitmap to your raid; this
will make rebuilds faster :-/
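Adding the write-intent bitmap is a one-liner on a running array (/dev/md0 is a placeholder for your array):

```shell
# Add an internal write-intent bitmap; after a brief disconnect,
# only the dirty regions get resynced instead of the whole array
mdadm --grow --bitmap=internal /dev/md0
```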
--
Janek Kozicki
my data on it.
better to use badblocks. It writes data, then reads it back:
In this example the data is semi-random (quicker than /dev/urandom ;)
badblocks -c 10240 -s -w -t random -v /dev/sdc
--
Janek Kozicki |
) then it will be included
in the array without any resync happening.
But I have here:
# mdadm --version
mdadm - v2.5.6 - 9 November 2006
maybe I stumbled on another bug?
--
Janek Kozicki |
was with RAID 5.
But also I have RAID 1 there, and after --add the drives
automatically resynced.
--
Janek Kozicki |
, it seems that this command
mdadm --assemble --update=resync /dev/md1 /dev/hda3 /dev/sda3 /dev/hdc3
worked, because `mdadm -D /dev/md1` says that the array is in
State : active (not degraded).
best regards
--
Janek Kozicki
Hello,
I did read 'man mdadm' from top to bottom, but I totally forgot to
look into /usr/share/doc/mdadm !
And there is much more - FAQs, recipes, etc!
Can you please add to the manual under 'SEE ALSO' a reference
to /usr/share/doc/mdadm ?
thanks :-)
--
Janek Kozicki
Janek Kozicki said: (by the date of Mon, 5 Nov 2007 11:58:15 +0100)
I did read 'man mdadm' from top to bottom, but I totally forgot to
look into /usr/share/doc/mdadm !
PS: this is why I asked so many questions on this list ;-)
--
Janek Kozicki
bitmap: 8/8 pages [32KB], 32768KB chunk
Was there a better way to do this, or is it OK?
--
Janek Kozicki |
?
This came to my mind when I saw this:
# mdadm --query --detail /dev/md1 | grep Prefer
Preferred Minor : 1
And also in the manual:
-W, --write-mostly [...] can be useful if mirroring over a slow link.
many thanks for all your help!
--
Janek Kozicki
option is to make sure that I can grow this fs
in the future.
PPS: I looked in the archive but didn't find this question asked
before. I'm sorry if it really was asked.
--
Janek Kozicki |
will choose it will
have 0x80 number?
--
Janek Kozicki |
should use
RAIDdevicecount:3 but it gives the following error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): St9bad_alloc
Aborted
Is anybody else here using xosview?
--
Janek Kozicki |
will add hda1 to the array, and all three partitions
should become a raid1.
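i.e., something along these lines (device names as in the thread; a sketch, not a full recipe):

```shell
# Hot-add the old partition; md then syncs it into the mirror
mdadm /dev/md0 --add /dev/hda1
```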
--
Janek Kozicki |
Janek Kozicki said: (by the date of Tue, 30 Oct 2007 21:07:21 +0100)
then I did 'dd if=/dev/hda1 of=/dev/md0'. I carefully checked that
the partition sizes match exactly. So now md0 contains the same thing
as hda1.
in fact, to check the size I was using 'fdisk -l' because it gives
size
problem?
--
Janek Kozicki |
somehow first, or can I just create an array
again (overwriting the current one)?
--
Janek Kozicki |
md8
crw-rw 1 root root 10, 63 Oct 15 10:29 mdesc
brw-rw 1 root disk 9, 127 Oct 16 10:03 mdp0
... crazy. Much better to create just /dev/md0 and use LVM
http://tldp.org/HOWTO/Software-RAID-HOWTO-11.html
--
Janek Kozicki
Michael Tokarev said: (by the date of Tue, 09 Oct 2007 02:52:06 +0400)
Janek Kozicki wrote:
Hello,
Recently I started to use mdadm and I'm very impressed by its
capabilities.
I have raid0 (250+250 GB) on my workstation. And I want to have
raid5 (4*500 = 1500 GB) on my backup
devices on files and find out?
But yes: you can grow to a degraded array providing you specify a
--backup-file.
Thanks! I'll test this on loopback devices :)
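Such a loopback rehearsal could look roughly like this (sizes, paths and the md9 name are arbitrary; needs root):

```shell
#!/bin/sh
# Rehearse a raid5 grow on loop devices before touching real discs
for i in 0 1 2 3; do
    dd if=/dev/zero of=/tmp/d$i.img bs=1M count=64
    losetup /dev/loop$i /tmp/d$i.img
done
mdadm --create /dev/md9 --level=5 --raid-devices=3 \
      /dev/loop0 /dev/loop1 /dev/loop2
mdadm --add /dev/md9 /dev/loop3
mdadm --grow /dev/md9 --raid-devices=4 --backup-file=/tmp/md9.backup
```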
--
Janek Kozicki |
: yes it's simple to make a degraded array of 3 drives, but I
cannot afford two discs at once...
--
Janek Kozicki |
been blind that I missed this.
This completely solves my problem.
--
Janek Kozicki |
Janek Kozicki said: (by the date of Tue, 9 Oct 2007 00:25:50 +0200)
Richard Scobie said: (by the date of Tue, 09 Oct 2007 08:26:35 +1300)
No, but you can make a degraded 3 drive array, containing 2 drives and
then add the next drive to complete it.
The array can then be grown