Re: vinum raid5: newfs throws an error

2004-12-06 Thread Markus Hoenicka
Greg 'groggy' Lehey writes:
  There was once an error in the stripe size calculations that meant
  that there were holes in the plexes.  Maybe it's still there (old
  Vinum is not being maintained).  But you should have seen that in the
  console messages at create time.
  
   Vinum reports the disk sizes as 17500MB (da1) and 17359MB (da2,
   da3). The raid5 volume and plex have a size of 33GB.
  
  This looks like the kind of scenario where that could happen.  Try
  this:
  
  1.  First, find a better stripe size.  It shouldn't be a power of 2,
  but it should be a multiple of 16 kB.  I'd recommend 496 kB.  This
  won't fix the problem, but it's something you should do anyway.
  
  2.  Calculate the length of an exact number of stripes, and create the
  subdisks in that length.  Try again and see what happens.
  
  3.  Use gvinum instead of vinum and try both ways.
  

OK, I decreased the stripe size to 496 kB, regardless of whether it has
anything to do with my problem. Next I set the subdisk length to
17359m on all disks, and things started to work: no more newfs
errors here.
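
For the record, here is the full config as it works now, reconstructed
from the changes described above (the drive definitions are unchanged):

drive ibma device /dev/da1s1e
drive ibmb device /dev/da2s1e
drive ibmc device /dev/da3s1e
volume raid5 setupstate
  plex org raid5 496k
    sd length 17359m drive ibma
    sd length 17359m drive ibmb
    sd length 17359m drive ibmc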

Before doing this I also had a brief encounter with gvinum. There is
no manpage in 5.3-BETA7, so I assumed it groks the same config files as
vinum. However, that did not do me any good, as it simply rebooted the
box. I guess gvinum works better in RELEASE.

Thanks a lot for your help.

Markus

-- 
Markus Hoenicka
[EMAIL PROTECTED]
(Spam-protected email: replace the quadrupeds with mhoenicka)
http://www.mhoenicka.de



Re: vinum raid5: newfs throws an error

2004-12-06 Thread Greg 'groggy' Lehey
On Monday,  6 December 2004 at 23:44:59 +0100, Markus Hoenicka wrote:
 Greg 'groggy' Lehey writes:
 There was once an error in the stripe size calculations that meant
 that there were holes in the plexes.  Maybe it's still there (old
 Vinum is not being maintained).  But you should have seen that in the
 console messages at create time.

 Vinum reports the disk sizes as 17500MB (da1) and 17359MB (da2,
 da3). The raid5 volume and plex have a size of 33GB.

 This looks like the kind of scenario where that could happen.  Try
 this:

 1.  First, find a better stripe size.  It shouldn't be a power of 2,
 but it should be a multiple of 16 kB.  I'd recommend 496 kB.  This
 won't fix the problem, but it's something you should do anyway.

 2.  Calculate the length of an exact number of stripes, and create the
 subdisks in that length.  Try again and see what happens.

 3.  Use gvinum instead of vinum and try both ways.


 OK, I decreased the stripe size to 496 kB, regardless of whether it has
 anything to do with my problem. Next I set the subdisk length to
 17359m on all disks, and things started to work: no more newfs
 errors here.

OK, looks like this was the hole-in-the-plex issue.  I thought that
was gone.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




vinum raid5: newfs throws an error

2004-12-05 Thread Markus Hoenicka
Hi all,

now that I can use the full capacity of my disks, I'm stuck again. I'm
trying to set up a raid5 from three SCSI disks (I know that a serious
raid5 should use five disks or more, but I have to make do with three at
the moment). The configuration is as follows:

drive ibma device /dev/da1s1e
drive ibmb device /dev/da2s1e
drive ibmc device /dev/da3s1e
volume raid5 setupstate
  plex org raid5 512k
    sd length 0m drive ibma
    sd length 0m drive ibmb
    sd length 0m drive ibmc
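
(For reference, a config like this is fed to vinum along these lines;
the file name here is made up:)

  vinum create /etc/vinum.conf   # reads the config above and creates the objects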

This works ok. Then I run vinum init to initialize the drives. Trying
to create a filesystem on this construct results in the error message:

newfs: wtfs: 65536 bytes at sector 71130688: Input/output error

Is that trying to tell me that my calculation of the group size is
incorrect? Does it have anything to do with the fact that the three
disks have slightly different capacities?

Vinum reports the disk sizes as 17500MB (da1) and 17359MB (da2,
da3). The raid5 volume and plex have a size of 33GB.

BTW creating a concatenated volume on the same disks works ok, newfs
does not throw an error here.
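
In case it is useful for diagnosis, vinum's own idea of the layout can
be listed like this (a sketch; "raid5" is the volume defined above):

  vinum lv -v raid5   # volume and plex sizes
  vinum ls -v         # subdisk lengths and offsets on each drive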

Any help is appreciated.

Markus
-- 
Markus Hoenicka
[EMAIL PROTECTED]
(Spam-protected email: replace the quadrupeds with mhoenicka)
http://www.mhoenicka.de



Re: vinum raid5: newfs throws an error

2004-12-05 Thread Greg 'groggy' Lehey
On Monday,  6 December 2004 at  3:05:31 +0100, Markus Hoenicka wrote:
 Hi all,

 now that I can use the full capacity of my disks, I'm stuck again. I'm
 trying to set up a raid5 from three SCSI disks (I know that a serious
 raid5 should use five disks or more, but I have to make do with three at
 the moment). The configuration is as follows:

 drive ibma device /dev/da1s1e
 drive ibmb device /dev/da2s1e
 drive ibmc device /dev/da3s1e
 volume raid5 setupstate
   plex org raid5 512k
     sd length 0m drive ibma
     sd length 0m drive ibmb
     sd length 0m drive ibmc

 This works ok. Then I run vinum init to initialize the drives. Trying
 to create a filesystem on this construct results in the error message:

 newfs: wtfs: 65536 bytes at sector 71130688: Input/output error

 Is that trying to tell me that my calculation of the group size is
 incorrect? Does it have anything to do with the fact that the three
 disks have slightly different capacities?

There was once an error in the stripe size calculations that meant
that there were holes in the plexes.  Maybe it's still there (old
Vinum is not being maintained).  But you should have seen that in the
console messages at create time.

 Vinum reports the disk sizes as 17500MB (da1) and 17359MB (da2,
 da3). The raid5 volume and plex have a size of 33GB.

This looks like the kind of scenario where that could happen.  Try
this:

1.  First, find a better stripe size.  It shouldn't be a power of 2,
but it should be a multiple of 16 kB.  I'd recommend 496 kB.  This
won't fix the problem, but it's something you should do anyway.

2.  Calculate the length of an exact number of stripes, and create the
subdisks in that length.  Try again and see what happens.

3.  Use gvinum instead of vinum and try both ways.
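
As a worked example for step 2 (the arithmetic here is mine, using the
sizes reported above): the smaller disks hold 17359 MB = 17775616 kB,
which is 17775616 / 496 = 35837 whole 496 kB stripes, so an exact
subdisk length would be 35837 * 496 = 17775152 kB:

  volume raid5 setupstate
    plex org raid5 496k
      sd length 17775152k drive ibma
      sd length 17775152k drive ibmb
      sd length 17775152k drive ibmc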

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




vinum and newfs

2003-07-17 Thread Mike Maltese
What impact do disk block and fragment sizes have on a vinum volume?  I've
been benchmarking an array of three drives in striped and raid5
configurations with various stripe sizes.  I've noticed that I get better
results in just about every instance by passing -b 16384 -f 2048 to newfs.
This doesn't make sense to me as those are the defaults for newfs if they
are not specified, but looking at the disklabel after a newfs, it shows
8192/1024. Should these options really make a performance difference, and if
so, how?
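
For reference, the invocation described above, together with one way to
check what the filesystem actually got (the volume name is made up):

  newfs -b 16384 -f 2048 /dev/vinum/test0
  dumpfs /dev/vinum/test0 | head   # the superblock dump includes bsize/fsize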

Thanks, Mike



Re: vinum and newfs

2003-07-17 Thread Malcolm Kay
On Fri, 18 Jul 2003 10:26, Mike Maltese wrote:
 What impact do disk block and fragment sizes have on a vinum volume?  I've
 been benchmarking an array of three drives in striped and raid5
 configurations with various stripe sizes.  I've noticed that I get better
 results in just about every instance by passing -b 16384 -f 2048 to newfs.
 This doesn't make sense to me as those are the defaults for newfs if they
 are not specified, but looking at the disklabel after a newfs, it shows
 8192/1024. Should these options really make a performance difference, and
 if so, how?

 Thanks, Mike

I have had a similar experience, getting 8192/1024 when using newfs on a
vinum volume. Obviously 16384/2048 is not the default in this case, in
spite of the newfs man page.

In a classical file system I believe these numbers are taken from the
disklabel, and it is really the disklabel that supplies these defaults
for the partitions. For vinum, the individual volumes do not have a
corresponding disklabel partition. -- All guesswork, so don't take it
too seriously.
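
If that guess is right, the values newfs inherits should be visible
directly in the label (a sketch; the volume name is made up):

  disklabel /dev/vinum/test0   # the fsize/bsize columns are what newfs picks up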

Malcolm



HELP: vinum and newfs on 5.0-RC1

2002-12-09 Thread Aaron D. Gifford
Okay,

I had a vinum RAID5 set working on 4.7.  Since I didn't have any data on 
it yet and saw today's announcement of 5.0-RC1, I thought I'd give 
5.0-RC1 a try on the box in question.  Vinum sees the RAID5 set I 
created just fine, so I decided to use newfs to create a UFS2 filesystem 
on the volume.  Call me an idiot, but I can't seem to figure out how to 
do this despite searching various FreeBSD mailing lists.  It appears 5.0
no longer supports the -v option, so I assume vinum support is now
built in.  Here's what I'm trying:

# newfs -O 2 -U /dev/vinum/raid5vol
newfs: /dev/vinum/raid5vol: 'e' partition is unavailable

That's funny.  After reading around a bit, I wondered if perhaps some
sort of disk label is now required on the vinum volume.  However, no
matter how I invoke disklabel, it complains about my attempts.

Here's my vinum setup:

  drive d0 device /dev/ad0s1f
  drive d1 device /dev/ad1s1f
  drive d2 device /dev/ad2s1f
  drive d3 device /dev/ad3s1f
  volume raid5vol
    plex name p5 org raid5 512s vol raid5vol
      sd name sd0 drive d0 plex p5 len 232798208s driveoffset 265s plexoffset 0s
      sd name sd1 drive d1 plex p5 len 232798208s driveoffset 265s plexoffset 512s
      sd name sd2 drive d2 plex p5 len 232798208s driveoffset 265s plexoffset 1024s
      sd name sd3 drive d3 plex p5 len 232798208s driveoffset 265s plexoffset 1536s

DEVFS shows:

  /dev/vinum:
control
controld
plex:
  p5
sd:
  sd0
  sd1
  sd2
  sd3
raid5vol

Disklabel /dev/vinum/raid5vol shows me:
  type: Vinum
  disk: vinum
  label: radi
  flags:
  bytes/sector: 512
  sectors/track: 698394624
  tracks/cylinder: 1
  sectors/cylinder: 698394624
  cylinders: 1
  sectors/unit: 689394624
  rpm: 14400
  interleave: 1
  trackskew: 0
  cylinderskew: 0
  headswitch: 0  # milliseconds
  track-to-track seek: 0 # milliseconds
  drivedata: 0

  3 Partitions:
  #          size   offset    fstype   [fsize bsize bps/cpg]
  a:    698394624        0    4.2BSD     1024  8192     0   # (Cyl. 0 - 0)
  b:    698394624        0      swap                        # (Cyl. 0 - 0)
  c:    698394624        0    unused        0     0         # (Cyl. 0 - 0)

I tried editing it, setting a: to unused, size 0, removing b:, and
creating e: just like a: as above.  Of course disklabel complained about
the zero size, so I completely removed a:; then disklabel complained about
e: being out of range a-c, so I renamed e: to a: and it seemed happy.
At least until I double-checked the data: a subsequent disklabel call
showed that all my changes had been silently discarded and the above
showed up anew.  The vinum label command also appears useless, happily
executing but changing nothing.

Now for the questions:

How does one create a new filesystem (UFS2 in particular) on a vinum 
volume in 5.0?  Is some sort of label required?

Help!

Thanks!

Aaron out.




Re: HELP: vinum and newfs on 5.0-RC1

2002-12-09 Thread David Rhodus
I have also been having a few problems with vinum on 5.0-RC1.
I've been having a hard time making a striped or raid5 vinum with more
than 4 disks. With the fifth and subsequent disks, vinum sees the disk
but says the device is not configured.




Re: HELP: vinum and newfs on 5.0-RC1

2002-12-09 Thread Greg 'groggy' Lehey
On Monday,  9 December 2002 at 20:23:44 -0500, David Rhodus wrote:
 I have also been having a few problems with vinum on 5.0-RC1.
 I've been having a hard time making a striped or raid5 vinum with more
 than 4 disks. With the fifth and subsequent disks, vinum sees the disk
 but says the device is not configured.

That's a very different issue.  I'm still investigating the one to
which you replied, but in your case I'd like to see the config file
and the contents of /dev.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.
