David On 26 Oct 2007, Neil Brown wrote:
On Thursday October 25, [EMAIL PROTECTED] wrote:
I also suspect that a *lot* of people will assume that the highest
superblock version is the best and should be used for new installs
etc.
Grumble... why can't people expect what I want them to
Bill == Bill Davidsen [EMAIL PROTECTED] writes:
Bill John Stoffel wrote:
Why do we have three different positions for storing the superblock?
Bill Why do you suggest changing anything until you get the answer to
Bill this question? If you don't understand why there are three
Bill locations
[ I was going to reply to this earlier, but the Red Sox and good
weather got in the way this weekend. ;-]
Michael == Michael Tokarev [EMAIL PROTECTED] writes:
Michael I've been doing sysadmin work for about 15 or 20 years.
Welcome to the club! It's a fun career, always something new to
learn.
Michael == Michael Tokarev [EMAIL PROTECTED] writes:
Michael Doug Ledford wrote:
Michael []
1.0, 1.1, and 1.2 are the same format, just in different positions on
the disk. Of the three, the 1.1 format is the safest to use since it
won't allow you to accidentally have some sort of metadata
So,
Is it time to start thinking about deprecating the old 0.9, 1.0 and
1.1 formats to just standardize on the 1.2 format? What are the
issues surrounding this?
It's certainly easy enough to change mdadm to default to the 1.2
format and to require a --force switch to allow use of the older
formats.
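(For illustration, the metadata format can already be pinned
explicitly at creation time; a minimal sketch, with hypothetical
device names:)

  # create a RAID1 with an explicit version-1.1 superblock
  mdadm --create /dev/md0 --metadata=1.1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1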
Justin == Justin Piszcz [EMAIL PROTECTED] writes:
Justin Is a bitmap created by default with 1.x? I remember seeing
Justin reports of 15-30% performance degradation using a bitmap on a
Justin RAID5 with 1.x.
Not according to the mdadm man page. I'd probably give up that
performance if it
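(For the record, a write-intent bitmap is opt-in; a sketch of adding
and removing one, assuming an array named /dev/md0:)

  mdadm --grow /dev/md0 --bitmap=internal   # add an internal write-intent bitmap
  mdadm --grow /dev/md0 --bitmap=none       # drop it again if the performance hit hurts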
Justin == Justin Piszcz [EMAIL PROTECTED] writes:
Justin On Fri, 19 Oct 2007, John Stoffel wrote:
So,
Is it time to start thinking about deprecating the old 0.9, 1.0 and
1.1 formats to just standardize on the 1.2 format? What are the
issues surrounding this?
It's certainly easy
Doug == Doug Ledford [EMAIL PROTECTED] writes:
Doug On Fri, 2007-10-19 at 12:38 -0400, John Stoffel wrote:
1, 1.0, 1.1, 1.2
Use the new version-1 format superblock. This has few restrictions.
The different sub-versions store the superblock at different locations
on the device, either at the end (for 1.0), at the start (for 1.1) or
4K from the start (for 1.2).
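(To check which format an existing array actually uses, assuming
/dev/md0 and a member /dev/sda1:)

  mdadm --detail /dev/md0 | grep -i version   # metadata version of the running array
  mdadm --examine /dev/sda1                   # dump the superblock on a member device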
Doug == Doug Ledford [EMAIL PROTECTED] writes:
Doug On Fri, 2007-10-19 at 11:46 -0400, John Stoffel wrote:
Justin == Justin Piszcz [EMAIL PROTECTED] writes:
Justin On Fri, 19 Oct 2007, John Stoffel wrote:
So,
Is it time to start thinking about deprecating the old 0.9, 1.0
Andrew == Andrew Clayton [EMAIL PROTECTED] writes:
Andrew On Thu, 4 Oct 2007 10:10:02 -0400 (EDT), Justin Piszcz wrote:
Also, did performance just go to crap one day or was it gradual?
Andrew IIRC I just noticed one day that firefox and vim were
Andrew stalling. That was back in
Andrew On Fri, 5 Oct 2007 15:02:22 -0400, John Stoffel wrote:
How much memory does this system have? Have you checked the output of
Andrew 2GB
/proc/mtrr at all? There have been reports of systems with a bad
Andrew $ cat /proc/mtrr
Andrew reg00: base=0x00000000 (   0MB), size=2048MB
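(A sketch of checking and, if a region is missing, adding one via the
documented /proc/mtrr interface; the base/size values here are
assumptions for a 2GB box:)

  cat /proc/mtrr
  # cover 0-2GB with write-back caching (values illustrative)
  echo 'base=0x00000000 size=0x80000000 type=write-back' > /proc/mtrr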
John == John Stoffel [EMAIL PROTECTED] writes:
This is an update email, my system is now up and running properly,
though with some caveats.
John I've just installed a new SATA controller and a pair of 320Gb
John disks into my system. Went great. I'm running 2.6.21-rc6, with
John the ATA
Bill == Bill Davidsen [EMAIL PROTECTED] writes:
Is there any way I can interrupt the command I used:
mdadm --grow /dev/md0 --size=#
I now know I should have used the --size=max parameter instead,
but it wasn't in the man page or the online help. Oh well...
I tried removing
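(For reference, the form that grows each component to all available
space is:)

  mdadm --grow /dev/md0 --size=max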
I got the following on 2.6.18-rc5-mm1 when trying to lvextend a test
logical volume that I had just created. This came about because I
have been trying to expand some LVs on my system, which are based on
a VG on top of an MD mirror pair. It's an SMP box too, if that means
anything.
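(A sketch of the operation being attempted, with hypothetical VG/LV
names:)

  lvcreate -L 1G -n test vg0        # create a small test LV
  lvextend -L +1G /dev/vg0/test     # then try to extend it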
Tomasz Lately I installed Debian on a Thecus n4100 machine. It's a
Tomasz 600 MHz ARM storage device, and has 4 x 400 GB drives.
Interesting box... how quiet is it? I'm thinking of one of these for
home use, but I'll probably go with an EPIA box so I can actually
set up a
For 2.6.18-rc5-mm1, just edit drivers/md/Kconfig and change the 'if
CONFIG_BLOCK' to 'if BLOCK'. You can then run 'make oldconfig' and
you'll be all set.
It's a simple change, and easier than hunting down and applying a
patch at this point. :]
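(The same edit as a one-liner, for anyone scripting it:)

  sed -i 's/if CONFIG_BLOCK/if BLOCK/' drivers/md/Kconfig
  make oldconfig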
Mark == Mark Hahn [EMAIL PROTECTED] writes:
RAID is no excuse for backups.
Mark I wish people would quit saying this: not only is it not helpful,
Mark but it's also wrong.
You've got to be kidding, right? A backup is one form of data
protection; RAID is another. Both have their
Dexter Currently I have 4 discs on a 4 channel sata controller which
Dexter does its job quite well for 20 bucks. Now, if I wanted to
Dexter grow the array I'd probably go for another one of these.
So, which SATA controller are you using? I'm thinking my next box
will go SATA, but I'm still
Neil,
First off, thanks for all your hard work on this software, it's really
a great thing to have.
But I've got some interesting issues here, though nothing urgent. As
I've said in other messages, I've got a pair of 120gb HDs mirrored.
I'm using MD across partitions, /dev/hde1 and /dev/hdg1.
Bryce == Bryce [EMAIL PROTECTED] writes:
Bryce Ah,.. yes,, udev has helpfully remapped where all the drives I
Bryce have were,.. and of course I've misread the log because my
Bryce brain is so fixated on expecting drives to be where they
Bryce should be,.. curse you UDEV!!
The problem is more
andy Here's a concrete example. I have two 3ware RAID-5 arrays, each
andy made up of 12 500 GB drives. When presented to Linux, these are
andy /dev/sda and /dev/sdb -- each 5.5 TB in size.
andy I want to stripe the two arrays together, so that 24 drives are
andy all operating as one unit.
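(A minimal sketch of that striping step with md, using the device
names andy gives; the chunk size is an assumption:)

  # RAID0 across the two hardware arrays
  mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 /dev/sda /dev/sdb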
Paul I am currently trying to build a RAID5 array across 12
Paul disks. The problem is there is a bug in the linux firewire
Paul drivers that causes problems when multiple disks are accessed
Paul randomly at the same time.
With that number of disks, and that transport, why aren't you
Neil == Neil Brown [EMAIL PROTECTED] writes:
Neil On Tuesday January 17, [EMAIL PROTECTED] wrote:
NeilBrown == NeilBrown [EMAIL PROTECTED] writes:
NeilBrown Previously the array of disk information was included in
NeilBrown the raid5 'conf' structure which was allocated to an
NeilBrown appropriate size. This makes it awkward to change the size
NeilBrown of that array. So we split it off
Francois Well, I think everything is in the subject... I am looking
Francois at this solution for a 6*250GB raid5 data server, evolving
Francois in a 12*250 raid5 in the months to come... Performance is
Francois absolutely not a big issue for me, but I would not
Francois appreciate any data loss.
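(The eventual 6-to-12 growth would look roughly like this per added
disk, assuming a kernel with raid5 reshape support and an array named
/dev/md0:)

  mdadm --add /dev/md0 /dev/sdg1           # add the new disk as a spare
  mdadm --grow /dev/md0 --raid-devices=7   # reshape to use it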
Francois == Francois Barre [EMAIL PROTECTED] writes:
Francois 2006/1/5, John Stoffel [EMAIL PROTECTED]:
So what are you doing for backups, and can you allow the downtime
needed to restore all your data if there is a problem? Remember, it's
not the cost of doing backups which drives things
Lorac First I can't start the array because it complains about a bad
Lorac superblock.
What's the exact error you get here? And the version of mdadm that
you're using? What's the output of 'cat /proc/mdstat' and 'mdadm
--detail /dev/md?' where ? is the number of your raid 5 array?
Hi all,
I've been working on getting my heavily upgraded Debian distro to have
mirrored /, /boot, /usr, /var and swap partitions. My /home and
/local are already built on LVM2 volumes on top of a pair of mirrored
120gb disks (md0). I just fixed the RAID autodetection for that MD
volume by
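(One common way to arrange kernel RAID autodetection is marking each
member partition as type 0xfd, e.g. with sfdisk; the partition
numbers here are assumptions:)

  sfdisk --change-id /dev/hde 1 fd
  sfdisk --change-id /dev/hdg 1 fd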
Andrew == Andrew Burgess [EMAIL PROTECTED] writes:
3) Performance issues: I'm currently thinking of
using either RAID 10 or LVM2 with RAID 5 to serve as a
RAID server.
Andrew I think you always want LVM2 between raid and the
Andrew filesystem. Not only can you expand things but you can
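(A minimal sketch of that stacking, with hypothetical names:)

  pvcreate /dev/md0                 # put LVM on top of the raid
  vgcreate vg0 /dev/md0
  lvcreate -L 200G -n data vg0
  mkfs.ext3 /dev/vg0/data           # filesystem goes on the LV, not on md directly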
Guys,
It's been at least a couple of years now that I've had the following
setup on my main machine, a Debian machine which is heavily upgraded
all the time, and runs fairly bleeding edge kernels.
Basically, I have a pair of 120gb disks. They're partitioned, each
with just one large partition.
Norman What you should be able to do with software raid1 is the
Norman following: Stop the raid, mount both underlying devices
Norman instead of the raid device, but of course READ ONLY. Both
Norman contain the complete data and filesystem, and in addition to
Norman that the md superblock at the end of the device.
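(Norman's trick as commands, with hypothetical member names; it
relies on the superblock living at the end, so the filesystem starts
at offset 0 of each member:)

  mdadm --stop /dev/md0
  mount -o ro /dev/hde1 /mnt/a
  mount -o ro /dev/hdg1 /mnt/b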
Jeff == Jeff Breidenbach jeff@jab.org writes:
Jeff # mount | grep md0
Jeff /dev/md0 on /data1 type reiserfs (rw,noatime,nodiratime)
Ah, you're using reiserfs on here. It may or may not be having
problems with all those files per directory that you have. Is there
any way you can split them up