Re: Vinum configuration syntax

2007-08-14 Thread CyberLeo Kitsana
Modulok wrote:
 Take the following example vinum config file:
 
 drive a device /dev/da2a
 drive b device /dev/da3a
 
 volume rambo
 plex org concat
 sd length 512m drive a
 plex org concat
 sd length 512m drive b
   
--8<-- cut here --8<--
drive disk1 device /dev/ad4s1h
drive disk2 device /dev/ad5s1h
drive disk3 device /dev/ad6s1h

volume raid5
plex org raid5 512k
sd length 190782M drive disk1
sd length 190782M drive disk2
sd length 190782M drive disk3
--8<-- cut here --8<--

This syntax still worked for me as of gvinum in 6.2. However, the new
SoC patches for geom_vinum functionality may change some things when
included.
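
For completeness, a minimal usage sketch of feeding such a file to gvinum
(the file name and the newfs step are illustrative, not from the original
post):

gvinum create raid5.conf
newfs /dev/gvinum/raid5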

-- 
Fuzzy love,
-CyberLeo
Technical Administrator
CyberLeo.Net Webhosting
http://www.CyberLeo.Net
[EMAIL PROTECTED]

Furry Peace! - http://.fur.com/peace/


Vinum configuration syntax

2007-08-13 Thread Modulok
Take the following example vinum config file:

drive a device /dev/da2a
drive b device /dev/da3a

volume rambo
plex org concat
sd length 512m drive a
plex org concat
sd length 512m drive b

The keyword concat specifies the relationship between the plexes and
the subdisks. All writes are always written to all plexes of a given
volume, so the example above is a mirror with two plexes, each
comprised of one very small subdisk. I understand this. What I don't
understand is how to implement a RAID-5 volume.

The only two vinum plex organizations listed in the handbook were
striped and concat. How do I implement striping with distributed
parity (RAID-5)? This was not covered (or I missed it) in the
handbook, the vinum(4) manual page, the gvinum(8) manual page,
or The Complete FreeBSD. There is a lot of great material on how
vinum is implemented and how great it will make your life, but
painfully little on the actual configuration syntax.

The vinum(4) man page describes a number of mappings between
subdisks and plexes, including Concatenated, Striped, and RAID-5;
however, these are section headings, and in the example config files
the keywords used were striped and concat, not Striped and
Concatenated. There has to be at least one other subdisk-to-plex
mapping:

Vinum implements the RAID-0, RAID-1 and RAID-5 models, both
individually and in combination.

RAID-5 is mentioned several times, but no examples were ever given.
What is the plex organization keyword: raid5, raid-5, RAID-5,
5, parity, disparity? I could use trial and error, but there has
to be a document with this information somewhere. Other than rummaging
through source code, is there any additional documentation on vinum
configuration syntax (a strict specification would be great!)? I
found a FreeBSD Diary article using vinum, but it wasn't for RAID-5,
so no luck there.

FreeBSD 6.1-RELEASE
-Modulok-


Vinum configuration lost at vinum stop / start

2004-11-11 Thread Kim Helenius
Greetings. I posted earlier about problems with vinum raid5 but it 
appears it's not restricted to that:

Let's make a fresh start with vinum resetconfig. Then vinum create 
kala.txt which contains:

drive d1 device /dev/ad4s1d
drive d2 device /dev/ad5s1d
volume vinum0
plex org concat
sd length 586096s drive d1
sd length 586096s drive d2
Output:
2 drives:
D d1                    State: up       /dev/ad4s1d     A: 285894/286181 MB (99%)
D d2                    State: up       /dev/ad5s1d     A: 285894/286181 MB (99%)

1 volumes:
V vinum0                State: up       Plexes:       1 Size:        572 MB
1 plexes:
P vinum0.p0           C State: up       Subdisks:     2 Size:        572 MB
2 subdisks:
S vinum0.p0.s0          State: up       D: d1           Size:        286 MB
S vinum0.p0.s1          State: up       D: d2           Size:        286 MB
Now I can newfs /dev/vinum/vinum0, mount it, use it, etc. But when I do 
vinum stop, vinum start, vinum stop, and vinum start something amazing 
happens. Vinum l after this is as follows:

2 drives:
D d2                    State: up       /dev/ad5s1d     A: 286181/286181 MB (100%)
D d1                    State: up       /dev/ad4s1d     A: 286181/286181 MB (100%)

0 volumes:
0 plexes:
0 subdisks:
Where did my configuration go?! I can of course recreate it, with no 
data lost, but imagine this on a raid5 where the plex goes into init 
mode after creation. Not a pleasant scenario. Also recreating the 
configuration from a config file after every reboot doesn't sound 
interesting.

--
Kim Helenius
[EMAIL PROTECTED]


Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Artem Kazakov
Kim Helenius wrote:
Now I can newfs /dev/vinum/vinum0, mount it, use it, etc. But when I do 
vinum stop, vinum start, vinum stop, and vinum start something amazing 
happens. Vinum l after this is as follows:

2 drives:
D d2                    State: up       /dev/ad5s1d     A: 286181/286181 MB (100%)
D d1                    State: up       /dev/ad4s1d     A: 286181/286181 MB (100%)

0 volumes:
0 plexes:
0 subdisks:
Where did my configuration go?! I can of course recreate it, with no 
data lost, but imagine this on a raid5 where the plex goes into init 
mode after creation. Not a pleasant scenario. Also recreating the 
configuration from a config file after every reboot doesn't sound 
interesting.
You should issue a read command to vinum to make it read the configuration.
In your case:
vinum read /dev/ad5 /dev/ad4

--
Kazakov Artem
 OOO CompTek
 tel: +7(095) 785-2525, ext.1802
 fax: +7(095) 785-2526
 WWW:http://www.comptek.ru


Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Kim Helenius
On Thu, 11 Nov 2004, Artem Kazakov wrote:

 Kim Helenius wrote:
 
  Now I can newfs /dev/vinum/vinum0, mount it, use it, etc. But when I do 
  vinum stop, vinum start, vinum stop, and vinum start something amazing 
  happens. Vinum l after this is as follows:
  
  2 drives:
  D d2                  State: up       /dev/ad5s1d     A: 286181/286181 MB (100%)
  D d1                  State: up       /dev/ad4s1d     A: 286181/286181 MB (100%)

  0 volumes:
  0 plexes:
  0 subdisks:
  
  Where did my configuration go?! I can of course recreate it, with no 
  data lost, but imagine this on a raid5 where the plex goes into init 
  mode after creation. Not a pleasant scenario. Also recreating the 
  configuration from a config file after every reboot doesn't sound 
  interesting.
 You should issue a read command to vinum to make it read the configuration.
 In your case:
 vinum read /dev/ad5 /dev/ad4

Thank you for your answer. However, when I mentioned the configuration is 
lost, it is lost and no longer stored on the drives. Thus the 'vinum read' 
command cannot read it from the drives. In addition, 'vinum start' scans the 
drives for vinum information, so in fact 'vinum read' should not be needed.

 -- 
 Kazakov Artem
 
   OOO CompTek
   tel: +7(095) 785-2525, ext.1802
   fax: +7(095) 785-2526
   WWW:http://www.comptek.ru
 



Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Stijn Hoop
On Thu, Nov 11, 2004 at 12:00:52PM +0200, Kim Helenius wrote:
 Greetings. I posted earlier about problems with vinum raid5 but it 
 appears it's not restricted to that.

Are you running regular vinum on 5.x? It is known broken. Please use
'gvinum' instead.

There is one caveat: the gvinum that shipped with 5.3-RELEASE contains an
error in RAID-5 initialization. If you really need RAID-5 you either need
to wait for the first patch level release of 5.3, or you can build
RELENG_5 from source yourself. The fix went in on 2004-11-07.

--Stijn

-- 
Fairy tales do not tell children that dragons exist. Children already
know dragons exist. Fairy tales tell children the dragons can be
killed.
-- G.K. Chesterton




Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Kim Helenius
On Thu, 11 Nov 2004, Stijn Hoop wrote:

 On Thu, Nov 11, 2004 at 12:00:52PM +0200, Kim Helenius wrote:
  Greetings. I posted earlier about problems with vinum raid5 but it 
  appears it's not restricted to that.
 
 Are you running regular vinum on 5.x? It is known broken. Please use
 'gvinum' instead.
 
 There is one caveat: the gvinum that shipped with 5.3-RELEASE contains an
 error in RAID-5 initialization. If you really need RAID-5 you either need
 to wait for the first patch level release of 5.3, or you can build
 RELENG_5 from source yourself. The fix went in on 2004-11-07.

Thank you for your answer. I tested a normal concat with both 5.2.1-RELEASE
and 5.3-RELEASE, with similar results. Plenty of people (at least I get this
impression after browsing several mailing lists and websites) have working
vinum setups with 5.2.1 (where gvinum doesn't exist), so there's definitely
something I'm doing wrong here. So my problem is not limited to raid5.

I'm aware of gvinum and the bug and actually tried to cvsup && make world
last night, but it didn't succeed due to some missing files in the netgraph
dirs. I will try again tonight.

 
 --Stijn
 
 -- 
 Fairy tales do not tell children that dragons exist. Children already
 know dragons exist. Fairy tales tell children the dragons can be
 killed.
   -- G.K. Chesterton
 



Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Stijn Hoop
On Thu, Nov 11, 2004 at 03:32:58PM +0200, Kim Helenius wrote:
 On Thu, 11 Nov 2004, Stijn Hoop wrote:
  On Thu, Nov 11, 2004 at 12:00:52PM +0200, Kim Helenius wrote:
   Greetings. I posted earlier about problems with vinum raid5 but it 
   appears it's not restricted to that.
  
  Are you running regular vinum on 5.x? It is known broken. Please use
  'gvinum' instead.
  
  There is one caveat: the gvinum that shipped with 5.3-RELEASE contains an
  error in RAID-5 initialization. If you really need RAID-5 you either need
  to wait for the first patch level release of 5.3, or you can build
  RELENG_5 from source yourself. The fix went in on 2004-11-07.
 
 Thank you for your answer. I tested a normal concat with both 5.2.1-RELEASE
 and 5.3-RELEASE, with similar results. Plenty of people (at least I get this
 impression after browsing several mailing lists and websites) have working
 vinum setups with 5.2.1 (where gvinum doesn't exist), so there's definitely
 something I'm doing wrong here. So my problem is not limited to raid5.

I don't know the state of affairs for 5.2.1-RELEASE, but in 5.3-RELEASE gvinum
is the way forward.

 I'm aware of gvinum and the bug and actually tried to cvsup && make world
 last night, but it didn't succeed due to some missing files in the netgraph
 dirs. I will try again tonight.

OK, I think that will help you out. But the strange thing is, RELENG_5 should
be buildable. Are you sure you are getting that?

Have you read

http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/current-stable.html

Particularly the 19.2.2 section, 'Staying stable with FreeBSD'?

HTH,

--Stijn

-- 
I have great faith in fools -- self confidence my friends call it.
-- Edgar Allan Poe




Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Kim Helenius
Stijn Hoop wrote:
Greetings. I posted earlier about problems with vinum raid5 but it 
appears it's not restricted to that.
Are you running regular vinum on 5.x? It is known broken. Please use
'gvinum' instead.
There is one caveat: the gvinum that shipped with 5.3-RELEASE contains an
error in RAID-5 initialization. If you really need RAID-5 you either need
to wait for the first patch level release of 5.3, or you can build
RELENG_5 from source yourself. The fix went in on 2004-11-07.
Thank you for your answer. I tested a normal concat with both 5.2.1-RELEASE
and 5.3-RELEASE, with similar results. Plenty of people (at least I get this
impression after browsing several mailing lists and websites) have working
vinum setups with 5.2.1 (where gvinum doesn't exist), so there's definitely
something I'm doing wrong here. So my problem is not limited to raid5.

I don't know the state of affairs for 5.2.1-RELEASE, but in 5.3-RELEASE gvinum
is the way forward.
Thanks again for answering. Agreed, but there still seems to be a long 
way to go. A lot of 'classic' vinum functionality is still missing, and 
at least for me it still doesn't do the job in a way I would find 
trustworthy. See below.

I'm aware of gvinum and the bug and actually tried to cvsup && make world
last night, but it didn't succeed due to some missing files in the netgraph
dirs. I will try again tonight.
I tested gvinum with some interesting results. First, the whole system
froze after creating a concatenated drive and trying to gvinum -rf -r the
objects (a resetconfig command doesn't exist). Next, I created the volume,
newfs'ed it, and copied some data onto it. Then I rebooted and issued
gvinum start. This is what follows:

2 drives:
D d1                    State: up       /dev/ad4s1d     A: 285894/286181 MB (99%)
D d2                    State: up       /dev/ad5s1d     A: 285894/286181 MB (99%)

1 volume:
V vinum0                State: down     Plexes:       1 Size:        572 MB
1 plex:
P vinum0.p0           C State: down     Subdisks:     2 Size:        572 MB
2 subdisks:
S vinum0.p0.s0          State: stale    D: d1           Size:        286 MB
S vinum0.p0.s1          State: stale    D: d2           Size:        286 MB
I'm getting a bit confused. Issuing 'gvinum start vinum0' separately does
seem to fix it (all states go 'up'), but surely it should come up fine with
just 'gvinum start'? That is how I would start it at boot via loader.conf.
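
(For reference, the usual /boot/loader.conf entry for loading gvinum at
boot -- assuming the 5.x module name -- is:

geom_vinum_load="YES"

after which gvinum tastes the drives and should bring the configuration up
on its own.)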

OK, I think that will help you out. But the strange thing is, RELENG_5 should
be buildable. Are you sure you are getting that?
Have you read
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/current-stable.html
Particularly the 19.2.2 section, 'Staying stable with FreeBSD'?
I have read it and used -stable in 4.x, and if I read it really
carefully I figure out that -stable does not equal stable, which is why
I stopped tracking -stable in the first place. And knowing I would only
need it to fix the raid5 init, I'm a bit reluctant to do it, as I found
out I can't even create a concat volume correctly.

--
Kim Helenius
[EMAIL PROTECTED]


Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Stijn Hoop
Hi,

On Thu, Nov 11, 2004 at 04:53:39PM +0200, Kim Helenius wrote:
 Stijn Hoop wrote:
  I don't know the state of affairs for 5.2.1-RELEASE, but in 5.3-RELEASE 
  gvinum is the way forward.
 
 Thanks again for answering. Agreed, but there still seems to be a long 
 way to go. A lot of 'classic' vinum functionality is still missing, and 
 at least for me it still doesn't do the job in a way I would find 
 trustworthy. See below.

That's absolutely true. While 5.3 is IMHO pretty stable, gvinum is quite new
and therefore a bit less well tested than the rest of the system.  Fortunately
Lukas Ertl, the maintainer of gvinum, is pretty active and responsive to
problems.

So if you need a critically stable vinum environment you would be better off
with 4.x.

 I tested gvinum with some interesting results. First, the whole system 
 froze after creating a concatenated drive and trying to gvinum -rf -r the 
 objects (a resetconfig command doesn't exist).

That's not good. Nothing in dmesg? If you can consistently get this to happen
you should send in a problem report.

 Next, I created the volume, newfs'ed it, and copied some data onto it.
 Then I rebooted and issued gvinum start.

 This is what follows:
 
 2 drives:
 D d1                   State: up       /dev/ad4s1d     A: 285894/286181 MB (99%)
 D d2                   State: up       /dev/ad5s1d     A: 285894/286181 MB (99%)

 1 volume:
 V vinum0               State: down     Plexes:       1 Size:        572 MB

 1 plex:
 P vinum0.p0          C State: down     Subdisks:     2 Size:        572 MB

 2 subdisks:
 S vinum0.p0.s0         State: stale    D: d1           Size:        286 MB
 S vinum0.p0.s1         State: stale    D: d2           Size:        286 MB
 
 I'm getting a bit confused. Issuing 'gvinum start vinum0' separately does
 seem to fix it (all states go 'up'), but surely it should come up fine with
 just 'gvinum start'? That is how I would start it at boot via loader.conf.

I think I've seen this too, but while testing another unrelated problem.  At
the time I attributed it to other factors. Can you confirm that when you
restart again, it stays up? Or maybe try an explicit 'saveconfig' when it is
in the 'up' state, and then reboot.
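
(A sketch of that test, assuming the gvinum subcommands behave as named:

gvinum start vinum0   # bring all objects to 'up'
gvinum saveconfig     # force the on-disk configuration to be rewritten
reboot

and then check 'gvinum l' after the reboot.)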

  Have you read
 
 http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/current-stable.html
 
  Particularly the 19.2.2 section, 'Staying stable with FreeBSD'?
 
 
 I have read it and used -stable in 4.x, and if I read it really 
 carefully I figure out that -stable does not equal stable, which is why 
 I stopped tracking -stable in the first place. And knowing I would only 
 need it to fix the raid5 init, I'm a bit reluctant to do it, as I found 
 out I can't even create a concat volume correctly.

That I can understand. If I may make a polite suggestion, it sounds like you
value stability above all else. In this case where vinum is involved, I would
recommend you to stay with 4.x until 5.4 is released. That should take another
6-8 months and probably most of the gvinum issues will have been tackled by
then. Although I know that there are a lot of users, myself included, that run
gvinum on 5.x, it is pretty new technology and therefore unfortunately
includes pretty new bugs.

The other option is to bite the bullet now and fiddle with gvinum for a few
days. Since other users are using it, it is certainly possible.  This will
take you some time, however. It will save you time when the upgrade to 5.4
arrives, though.

It is your decision which part of the tradeoff you like the most.

HTH,

--Stijn

-- 
Apparently, 1 in 5 people in the world are Chinese. And there are 5 people
in my family, so it must be one of them. It's either my mum or my dad..
or maybe my older brother John. Or my younger brother Ho-Cha-Chu. But I'm
pretty sure it's John.




Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Kim Helenius
Stijn Hoop wrote:
Hi,
On Thu, Nov 11, 2004 at 04:53:39PM +0200, Kim Helenius wrote:
Stijn Hoop wrote:
I don't know the state of affairs for 5.2.1-RELEASE, but in 5.3-RELEASE 
gvinum is the way forward.
Thanks again for answering. Agreed, but there still seems to be a long 
way to go. A lot of 'classic' vinum functionality is still missing, and 
at least for me it still doesn't do the job in a way I would find 
trustworthy. See below.

That's absolutely true. While 5.3 is IMHO pretty stable, gvinum is quite new
and therefore a bit less well tested than the rest of the system.  Fortunately
Lukas Ertl, the maintainer of gvinum, is pretty active and responsive to
problems.
So if you need a critically stable vinum environment you would be better off
with 4.x.

I tested gvinum with some interesting results. First, the whole system 
froze after creating a concatenated drive and trying to gvinum -rf -r the 
objects (a resetconfig command doesn't exist).

That's not good. Nothing in dmesg? If you can consistently get this to happen
you should send in a problem report.

Next, I created the volume, newfs'ed it, and copied some data onto it.
Then I rebooted and issued gvinum start.

This is what follows:
2 drives:
D d1                    State: up       /dev/ad4s1d     A: 285894/286181 MB (99%)
D d2                    State: up       /dev/ad5s1d     A: 285894/286181 MB (99%)

1 volume:
V vinum0                State: down     Plexes:       1 Size:        572 MB
1 plex:
P vinum0.p0           C State: down     Subdisks:     2 Size:        572 MB
2 subdisks:
S vinum0.p0.s0          State: stale    D: d1           Size:        286 MB
S vinum0.p0.s1          State: stale    D: d2           Size:        286 MB
I'm getting a bit confused. Issuing 'gvinum start vinum0' separately does
seem to fix it (all states go 'up'), but surely it should come up fine with
just 'gvinum start'? That is how I would start it at boot via loader.conf.

I think I've seen this too, but while testing another unrelated problem.  At
the time I attributed it to other factors. Can you confirm that when you
restart again, it stays up? Or maybe try an explicit 'saveconfig' when it is
in the 'up' state, and then reboot.

Have you read
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/current-stable.html
Particularly the 19.2.2 section, 'Staying stable with FreeBSD'?
I have read it and used -stable in 4.x, and if I read it really 
carefully I figure out that -stable does not equal stable, which is why 
I stopped tracking -stable in the first place. And knowing I would only 
need it to fix the raid5 init, I'm a bit reluctant to do it, as I found 
out I can't even create a concat volume correctly.

That I can understand. If I may make a polite suggestion, it sounds like you
value stability above all else. In this case where vinum is involved, I would
recommend you to stay with 4.x until 5.4 is released. That should take another
6-8 months and probably most of the gvinum issues will have been tackled by
then. Although I know that there are a lot of users, myself included, that run
gvinum on 5.x, it is pretty new technology and therefore unfortunately
includes pretty new bugs.
The other option is to bite the bullet now and fiddle with gvinum for a few
days. Since other users are using it, it is certainly possible.  This will
take you some time, however. It will save you time when the upgrade to 5.4
arrives, though.
It is your decision which part of the tradeoff you like the most.
HTH,
--Stijn
Stability is exactly what I'm looking for. However, I'm beginning to 
suspect there's something strange going on with my setup. I mentioned 
gvinum freezing - there's indeed a fatal kernel trap message (page fault) 
on the console. Now, then, thinking of good old FreeBSD 4.x, I decided to 
spend some more time on this issue.

Ok... so I tested vinum with FreeBSD 4.10 and amazing things just keep 
happening. Like with 5.x, I create a small test concat volume with 
vinum. Newfs, mount, etc, everything works. Now, then, I issue the 
following commands: vinum stop, then vinum start. Fatal kernel trap - 
automatic reboot. So, the root of the problem must lie deeper than 
(g)vinum in 5.x.

More info on my 5.3 setup:
Copyright (c) 1992-2004 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD 5.3-RELEASE #1: Mon Nov  8 21:43:07 EET 2004
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/KIUKKU
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: AMD Athlon(TM) XP 1600+ (1400.06-MHz 686-class CPU)
  Origin = "AuthenticAMD"  Id = 0x662  Stepping = 2
  Features=0x383f9ff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE>
  AMD Features=0xc048<MP,AMIE,DSP,3DNow!>
real memory  = 536788992 (511 MB)
avail memory = 519794688 (495 MB)
npx0: [FAST]
npx0: <math processor> on motherboard
npx0: INT 16 interface
acpi0: <ASUS A7M266> on motherboard
acpi0: Power Button (fixed)
Timecounter 

Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Greg 'groggy' Lehey
[Format recovered--see http://www.lemis.com/email/email-format.html]

Text wrapped.

On Thursday, 11 November 2004 at 12:00:52 +0200, Kim Helenius wrote:
 Greetings. I posted earlier about problems with vinum raid5 but it
 appears it's not restricted to that:

 Let's make a fresh start with vinum resetconfig. Then vinum create
 kala.txt which contains:

 ...

 Now I can newfs /dev/vinum/vinum0, mount it, use it, etc. But when I do
 vinum stop, vinum start, vinum stop, and vinum start something amazing
 happens. Vinum l after this is as follows:

 ...
 0 volumes:
 0 plexes:
 0 subdisks:

 Where did my configuration go?! I can of course recreate it, with no
 data lost, but imagine this on a raid5 where the plex goes into init
 mode after creation. Not a pleasant scenario. Also recreating the
 configuration from a config file after every reboot doesn't sound
 interesting.

There have been a lot of replies to this thread, but nobody asked you
the obvious: where is the information requested at
http://www.vinumvm.org/vinum/how-to-debug.html?

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Vinum configuration question.

2004-02-27 Thread Jason Schadel
Here's my situation.  I have two machines.  One running OpenBSD on a 
60gig drive and one running FreeBSD on a 10gig drive with a second 60gig 
for storage.  I want to take the 60 gig drive from the OpenBSD box and 
use it as a mirror to the 60 gig in the FreeBSD box.  The catch is I 
want to create the mirror on FreeBSD before I move the drive so I can 
copy the data on the OpenBSD box to the FreeBSD box.

Is there a way I can configure the drive on FreeBSD with vinum so I can 
copy the data from the OpenBSD drive then insert the 60 gig drive in the 
OpenBSD box into the array on the FreeBSD box?

TIA
Jason Schadel


Re: Vinum configuration question.

2004-02-27 Thread Greg 'groggy' Lehey
On Friday, 27 February 2004 at 14:06:48 -0500, Jason Schadel wrote:
 Here's my situation.  I have two machines.  One running OpenBSD on a
 60gig drive and one running FreeBSD on a 10gig drive with a second 60gig
 for storage.  I want to take the 60 gig drive from the OpenBSD box and
 use it as a mirror to the 60 gig in the FreeBSD box.  The catch is I
 want to create the mirror on FreeBSD before I move the drive so I can
 copy the data on the OpenBSD box to the FreeBSD box.

 Is there a way I can configure the drive on FreeBSD with vinum so I can
 copy the data from the OpenBSD drive then insert the 60 gig drive in the
 OpenBSD box into the array on the FreeBSD box?

There are several ways to achieve what you want to do.  The most
obvious would be to create the Vinum volume with only one plex and
copy the data across via the network.  You could then move the disk on
the OpenBSD box and create another subdisk/plex on it, then start it.
There's no way to keep the original data on the OpenBSD disk, since
Vinum requires a different format from native disks, and OpenBSD has a
(slightly) different on-disk format from FreeBSD.
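
A rough config sketch of that sequence, per the syntax used elsewhere in
this digest (drive and partition names are illustrative, and 'length 0'
means 'use all free space on the drive'). First, the volume with a single
plex on the disk already in the FreeBSD box:

drive a device /dev/ad1s1h
volume mirror
  plex org concat
    sd length 0 drive a

Then, after copying the data across and relabelling the moved disk with a
vinum partition, a second create run attaches the mirror plex, which a
'start' then revives by copying the data:

drive b device /dev/ad2s1h
plex org concat volume mirror
  sd length 0 drive b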

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
Note: I discard all HTML mail unseen.
Finger [EMAIL PROTECTED] for PGP public key.
See complete headers for address and phone numbers.




vinum configuration

2003-11-23 Thread dave
Hello,
Trying to get vinum going on a 5.1 machine with two IDE 40 GB hard
drives at the moment; two more will be added later once I know my setup is
working. Below are my disklabels for ad0s1 and ad1s1 as well as the vinum
configuration. I need to know if all of this is right, and if not, what is
wrong. Also, how do I get the data from one drive to the other? As of now
drive 2 is empty.
Thanks.
Dave.

#
# bsdlabel ad0s1 |more
# /dev/ad0s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:   245760  1048576    4.2BSD     2048 16384 15368
  b:  1048295      281      swap
  c: 78156162        0    unused        0     0         # raw part, don't edit
  d:   245760  1294336    4.2BSD     2048 16384 15368
  e:   204800  1540096    4.2BSD     2048 16384 12808
  f:  6291456  1744896    4.2BSD     2048 16384 28552
  g: 70119810  8036352    4.2BSD     2048 16384 28552
  h: 78156146       16     vinum


# bsdlabel ad1s1 |more
# /dev/ad1s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  b:  1048576        0      swap
  c: 78156162        0    unused        0     0         # raw part, don't edit
  d:   245760  1048576    4.2BSD     2048 16384 15368
  e:   204800  1294336    4.2BSD     2048 16384 12808
  f:   204800  1499136    4.2BSD     2048 16384 12808
  g:  6291456  1703936    4.2BSD     2048 16384 28552
  h: 70160770  7995392    4.2BSD     2048 16384 28552

# more /etc/vinum.conf
drive Vinum1 device /dev/ad0s1h
volume root setupstate
plex org concat
sd len 245760s driveoffset 1048576s
volume home setupstate
plex org concat
sd len 70119810s driveoffset 8036352s
volume swap setupstate
plex org concat
sd len 1048295s driveoffset 281s
volume tmp setupstate
plex org concat
sd len 204800s driveoffset 1540096s
volume var setupstate
plex org concat
sd len 245760s driveoffset 1294336s
volume usr setupstate
plex org concat
 sd len 6291456s driveoffset 1744896s



# vinum
vinum -> list
1 drives:
D Vinum1                State: up       /dev/ad0s1h     A: 38162/38162 MB (100%)

6 volumes:
V root                  State: up       Plexes:       1 Size:          0  B
V home                  State: up       Plexes:       1 Size:          0  B
V swap                  State: up       Plexes:       1 Size:          0  B
V tmp                   State: up       Plexes:       1 Size:          0  B
V var                   State: up       Plexes:       1 Size:          0  B
V usr                   State: up       Plexes:       1 Size:          0  B

6 plexes:
P root.p0             C State: up       Subdisks:     0 Size:          0  B
P home.p0             C State: up       Subdisks:     0 Size:          0  B
P swap.p0             C State: up       Subdisks:     0 Size:          0  B
P tmp.p0              C State: up       Subdisks:     0 Size:          0  B
P var.p0              C State: up       Subdisks:     0 Size:          0  B
P usr.p0              C State: up       Subdisks:     0 Size:          0  B

0 subdisks:
vinum ->
#




Re: vinum configuration

2003-11-23 Thread Greg 'groggy' Lehey
On Sunday, 23 November 2003 at 19:15:30 -0500, dave wrote:
 Hello,
 Trying to get vinum going on a 5.1 machine with two IDE 40 GB hard
 drives at the moment; two more will be added later once I know my setup is
 working. Below are my disklabels for ad0s1 and ad1s1 as well as the vinum
 configuration. I need to know if all of this is right, and if not, what is
 wrong. Also, how do I get the data from one drive to the other? As of now
 drive 2 is empty.

So is drive Vinum1, from Vinum's point of view.

 # more /etc/vinum.conf
 drive Vinum1 device /dev/ad0s1h
 volume root setupstate
 plex org concat
 sd len 245760s driveoffset 1048576s

You haven't told Vinum where to put the subdisk.  You must have
received an error message telling you about that.  The result is
correct:

 0 subdisks:
 vinum ->

That's not your question, but I'm surprised it isn't.  

To your question: if you add a second plex, it'll come up in corrupt
or some such state.  With 'start plex.p1' (for example) you can bring
it up, which involves copying the data.
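
For reference, a corrected sketch of the first stanza of the config above
(the drive name at the end of each sd line is the missing piece; lengths
and offsets as in the original):

drive Vinum1 device /dev/ad0s1h
volume root setupstate
  plex org concat
    sd len 245760s driveoffset 1048576s drive Vinum1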

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: Vinum configuration problem (RAID-1)

2003-11-21 Thread Jani Reinikainen
On Thu, 20 Nov 2003 10:56:40 +
Lewis Thompson [EMAIL PROTECTED] wrote:

 On Thu, Nov 20, 2003 at 11:53:52AM +0200, Jani Reinikainen wrote:
  Created a new partition 'h':
  - size = 12715857 ('c' partition) - 265 = 12715592
  - offset 16
 
 Why isn't that:
 
 - size = 12715857 ('c' partition) - 16 = 12715841
 - offset 16?

Doh! Of course :-) How silly of me not to notice that. Works fine now,
thanks!

   I am curious though -- vinum for just one disk?

I added another spindle for this setup, I just thought debugging one
spindle's setup at a time would be easier. Now my RAID-1 is complete and
working. I documented my setup here:

http://devel.reinikainen.net/docs/how-to/Vinum/

Comments are very welcome. English is not my native tongue, so
grammatical errors probably exist :-)


Cheers,
 JR.


Re: Vinum configuration problem (RAID-1)

2003-11-21 Thread Lewis Thompson
On Fri, Nov 21, 2003 at 04:43:53PM +0200, Jani Reinikainen wrote:
 I added another spindle for this setup, I just thought debugging one
 spindle's setup at a time would be easier. Now my RAID-1 is complete
 and working.

I guessed as much but my reply wouldn't have been complete without the
obligatory ``stoopid'' response ;)

 Comments are very welcome. English is not my native tongue, so
 grammatical errors probably exist :-)

I honestly couldn't tell you weren't English from these posts, so I'm
sure it's perfect.

  Best wishes,

-lewiz.

-- 
I was so much older then, I'm younger than that now.  --Bob Dylan, 1964.

-| msn:[EMAIL PROTECTED] | jabber:[EMAIL PROTECTED] | url:www.lewiz.org |-




Vinum configuration problem (RAID-1)

2003-11-20 Thread Jani Reinikainen
Greetings,

I'm trying to set up Vinum on a FreeBSD 5.1-RELEASE box, according to the
instructions at http://www.vinumvm.org/cfbsd/vinum.txt, pages 236-240,
where the idea is to create one big Vinum partition which overlaps all
other partitions.

Basically, what I've done so far:

In disklabel, created swap first, so it gets offset 0, in the following
fashion:

Part     Mount   Size
ad0s1b   swap    512m
ad0s1a   /       1024m
ad0s1d   /home   1024m
ad0s1e   /tmp    512m
ad0s1f   /var    the rest (6424401 blocks)

Booted into single user mode:

# mount -u /

# bsdlabel -e ad0s1

Shortened swap ('b' partition) by 281s:
- 1048576 (original size) - 281 = 1048295
- Changed offset 0 to 281

Created a new partition 'h':
- size = 12715857 ('c' partition) - 265 = 12715592
- offset 16
- type vinum

Thus, resulting in a bsdlabel output such as this:

# bsdlabel ad0s1
# /dev/ad0s1:
8 partitions:
#       size   offset  fstype
  a:  2097152  1048576  4.2BSD
  b:  1048295      281    swap
  c: 12715857        0  unused
  d:  2097152  3145728  4.2BSD
  e:  1048576  5242880  4.2BSD
  f:  6424401  6291456  4.2BSD
  h: 12715592       16   vinum

Then, I created a config file for vinum like this:

drive YouCrazy device /dev/ad0s1h
volume root
  plex org concat
    sd len 2097152s driveoffset 1048560s drive YouCrazy
volume swap
  plex org concat
    sd len 1048295s driveoffset 265s drive YouCrazy
volume home
  plex org concat
    sd len 2097152s driveoffset 3145712s drive YouCrazy
volume tmp
  plex org concat
    sd len 1048576s driveoffset 5242864s drive YouCrazy
volume var
  plex org concat
    sd len 6424401s driveoffset 6291440s drive YouCrazy

However, when I run:
vinum -> create -f /etc/vinum.conf

I get the following output:

vinum: drive YouCrazy is up
vinum: root.p0.s0 is up
vinum: root.p0 is up
vinum: root is up
vinum: swap.p0.s0 is up
vinum: swap.p0 is up
vinum: swap is up
vinum: home.p0.s0 is up
vinum: home.p0 is up
vinum: home is up
vinum: tmp.p0.s0 is up
vinum: tmp.p0 is up
vinum: tmp is up
vinum: var.p0.s0 is up
vinum: var.p0 is up
   16: sd len 6424401s driveoffset 6291440s drive YouCrazy
** 16 No space for  on YouCrazy: No space left on device

I've tried this a few times, making sure my calculations are correct,
but I always get the same error. I suck at math though, so that might be
causing my problem :-P

I also tried using 4.9-RELEASE and UFS1, but that didn't help either.
Must be something obvious I'm missing. I'd appreciate it if someone
could please point me in the right direction.

Thanks!


Re: Vinum configuration problem (RAID-1)

2003-11-20 Thread Lewis Thompson
On Thu, Nov 20, 2003 at 11:53:52AM +0200, Jani Reinikainen wrote:
 Created a new partition 'h':
 - size = 12715857 ('c' partition) - 265 = 12715592
 - offset 16

Why isn't that:

- size = 12715857 ('c' partition) - 16 = 12715841
- offset 16?

  I'm no Vinum guru, but afaik the 265 at the beginning of the disk is
for the vinum config.  If you have swap at the beginning, then you take
the 265 into account in the vinum config itself (taken from your config):

volume swap
   plex org concat
 sd len 1048295s driveoffset 265s drive YouCrazy

  That might not solve your problem, but that's how I have my vinum
setup...

  I am curious though -- vinum for just one disk?

-lewiz.

-- 
I was so much older then, I'm younger than that now.  --Bob Dylan, 1964.

-| msn:[EMAIL PROTECTED] | jabber:[EMAIL PROTECTED] | url:www.lewiz.org |-




Vinum Configuration File

2003-08-20 Thread lukek
Hello,
I am having a bit of trouble getting the conf file correct. I have listed
the devices and slice names but get the following error

vinum -> create -f vinum.conf
   1: drive d1 /dev/ad1s1e
** 1 Drive d1, invalid keyword: /dev/ad1s1e: Invalid argument
   2: drive d2 /dev/ad2s1e
** 2 Drive d2, invalid keyword: /dev/ad2s1e: Invalid argument
   3: drive d3 /dev/ad3s1e
** 3 Drive d3, invalid keyword: /dev/ad3s1e: Invalid argument
0 drives:
1 volumes:
V export                State: down     Plexes:       1 Size:        149 GB

1 plexes:
P export.p0          R5 State: faulty   Subdisks:     3 Size:        149 GB

3 subdisks:
S export.p0.s0          State: crashed  PO:        0  B Size:         74 GB
S export.p0.s1          State: crashed  PO:       32 kB Size:         74 GB
S export.p0.s2          State: crashed  PO:       64 kB Size:         74 GB

The offending conf file looks like this

drive d2 /dev/ad2s1e
drive d3 /dev/ad3s1e
volume export
plex org raid5 32k
sd length 76319MB drive d1
sd length 76319MB drive d2
sd length 76319MB drive d3
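
Judging from the error output and the other configs in this digest, the
drive lines are missing the device keyword; a corrected sketch (same
drives and sizes, purely illustrative) would be:

drive d1 device /dev/ad1s1e
drive d2 device /dev/ad2s1e
drive d3 device /dev/ad3s1e
volume export
  plex org raid5 32k
    sd length 76319m drive d1
    sd length 76319m drive d2
    sd length 76319m drive d3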

disklabel for the disks in question looks like this

tamachi# disklabel -r ad1
# /dev/ad1c:
type: ESDI
disk: ad1s1
<snip>
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 156296322        0    unused        0     0       # (Cyl. 0 - 9728*)
  e: 156296322        0    4.2BSD     2048 16384    89 # (Cyl. 0 - 9728*)

tamachi# disklabel -r ad2
# /dev/ad2c:
type: ESDI
disk: ad2s1
<snip>
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 156296322        0    unused        0     0       # (Cyl. 0 - 9728*)
  e: 156296322        0    4.2BSD     2048 16384    89 # (Cyl. 0 - 9728*)

tamachi# disklabel -r ad3
# /dev/ad3c:
type: ESDI
disk: ad3s1
<snip>
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 156296322        0    unused        0     0       # (Cyl. 0 - 9728*)
  e: 156296322        0    4.2BSD     2048 16384    89 # (Cyl. 0 - 9728*)

Can someone throw me a life ring here? The fileserver has been down for a
day now due to hardware failures on the raid device, and I am trying to use
Vinum to create a raid device without much luck.

Also, if I decide to do away with a certain configuration, I have been using
resetconfig; will this allow me to start from scratch again?

Appreciate any assistance someone can offer on this.

Thanks

LukeK


Re: Vinum Configuration File

2003-08-20 Thread lukek
Things got only marginally better; I now have errors about disks being full.
Can someone tell me where that is coming from? The disks are, for all
intents and purposes, empty.

   6: sd length 76g drive d1
** 6 No space for  on d1: No space left on device
   7: sd length 76g drive d2
** 7 Unnamed sd is not associated with a plex: Invalid argument
   8: sd length 76g drive d3
** 8 Unnamed sd is not associated with a plex: Invalid argument
3 drives:
D d1                    State: up       Device /dev/ad1s1e  Avail: 154140/76316 MB (201%)
D d2                    State: up       Device /dev/ad2s1e  Avail: 76316/76316 MB (100%)
D d3                    State: up       Device /dev/ad3s1e  Avail: 76316/76316 MB (100%)

1 volumes:
V export                State: up       Plexes:       1 Size:          0  B

1 plexes:
P export.p0           S State: up       Subdisks:     0 Size:          0  B

-1 subdisks:
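
For what it's worth: 76g (77824 MB) is more than the 76316 MB vinum reports
available on each drive after reserving space for its own metadata, and the
first failure presumably cascades into the 'not associated with a plex'
errors. vinum accepts a subdisk length of 0 to mean 'use all remaining
space on the drive', so a sketch like the following sidesteps the
arithmetic:

volume export
  plex org raid5 32k
    sd length 0 drive d1
    sd length 0 drive d2
    sd length 0 drive d3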

- Original Message -
From: "lukek" [EMAIL PROTECTED]
To: "FreeBSD" [EMAIL PROTECTED]
Sent: 21 August 2003 9:34
Subject: Vinum Configuration File


 Hello,
 I am having a bit of trouble getting the conf file correct. I have listed
 the devices and slice names but get the following error

 vinum -> create -f vinum.conf
1: drive d1 /dev/ad1s1e
 ** 1 Drive d1, invalid keyword: /dev/ad1s1e: Invalid argument
2: drive d2 /dev/ad2s1e
 ** 2 Drive d2, invalid keyword: /dev/ad2s1e: Invalid argument
3: drive d3 /dev/ad3s1e
 ** 3 Drive d3, invalid keyword: /dev/ad3s1e: Invalid argument
 0 drives:
 1 volumes:
 V export               State: down     Plexes:       1 Size:        149 GB

 1 plexes:
 P export.p0         R5 State: faulty   Subdisks:     3 Size:        149 GB

 3 subdisks:
 S export.p0.s0         State: crashed  PO:        0  B Size:         74 GB
 S export.p0.s1         State: crashed  PO:       32 kB Size:         74 GB
 S export.p0.s2         State: crashed  PO:       64 kB Size:         74 GB

 The offending conf file looks like this

 drive d2 /dev/ad2s1e
 drive d3 /dev/ad3s1e
 volume export
 plex org raid5 32k
 sd length 76319MB drive d1
 sd length 76319MB drive d2
 sd length 76319MB drive d3

 disklabel for the disks in question looks like this

 tamachi# disklabel -r ad1
 # /dev/ad1c:
 type: ESDI
 disk: ad1s1
 <snip>
 8 partitions:
 #        size   offset    fstype   [fsize bsize bps/cpg]
   c: 156296322        0    unused        0     0       # (Cyl. 0 - 9728*)
   e: 156296322        0    4.2BSD     2048 16384    89 # (Cyl. 0 - 9728*)

 tamachi# disklabel -r ad2
 # /dev/ad2c:
 type: ESDI
 disk: ad2s1
 <snip>
 8 partitions:
 #        size   offset    fstype   [fsize bsize bps/cpg]
   c: 156296322        0    unused        0     0       # (Cyl. 0 - 9728*)
   e: 156296322        0    4.2BSD     2048 16384    89 # (Cyl. 0 - 9728*)

 tamachi# disklabel -r ad3
 # /dev/ad3c:
 type: ESDI
 disk: ad3s1
 <snip>
 8 partitions:
 #        size   offset    fstype   [fsize bsize bps/cpg]
   c: 156296322        0    unused        0     0       # (Cyl. 0 - 9728*)
   e: 156296322        0    4.2BSD     2048 16384    89 # (Cyl. 0 - 9728*)

 Can someone throw me a life ring here? The fileserver has been down for a
 day now due to hardware failures on the raid device, and I am trying to use
 Vinum to create a raid device without much luck.

 Also, if I decide to do away with a certain configuration, I have been using
 resetconfig; will this allow me to start from scratch again?

 Appreciate any assistance someone can offer on this.

 Thanks

 LukeK




Help with vinum configuration

2003-06-12 Thread Francis Vidal
Hi,

I'm using FreeBSD 4.8-STABLE on a Pentium III machine. I'm trying to
configure RAID-5 storage consisting of 5 80 GB IDE drives connected to
two (2) Promise Ultra-133 TX2 controllers (1 disk on each channel, and one
on channel 2 of the system board). The disks were configured as
``dangerously dedicated'' (using /stand/sysinstall).

Here's the output of `fdisk ad2':

*** Working on device /dev/ad2 ***
parameters extracted from in-core disklabel are:
cylinders=10767 heads=236 sectors/track=63 (14868 blks/cyl)

Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=10767 heads=236 sectors/track=63 (14868 blks/cyl)

Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 165,(FreeBSD/NetBSD/386BSD)
start 0, size 160086528 (78167 Meg), flag 80 (active)
beg: cyl 0/ head 0/ sector 1;
end: cyl 1023/ head 235/ sector 63
The data for partition 2 is:
UNUSED
The data for partition 3 is:
UNUSED
The data for partition 4 is:
UNUSED

Here's the output of one of the disks (ad2) `disklabel ad2':

# /dev/ad2c:
type: ESDI
disk: ad2s1
label:
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 9964
sectors/unit: 160086528
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0   # milliseconds
track-to-track seek: 0  # milliseconds
drivedata: 0

8 partitions:
#size   offsetfstype   [fsize bsize bps/cpg]
  c: 160086528        0    unused        0     0       # (Cyl. 0 - 9964*)
  e: 160086528        0     vinum                      # (Cyl. 0 - 9964*)


Here's my vinum configuration file (/etc/vinum.conf):

drive d1 device /dev/ad2s1e
drive d2 device /dev/ad4s1e
drive d3 device /dev/ad6s1e
drive d4 device /dev/ad8s1e
drive d5 device /dev/ad10s1e
volume raid
 plex org raid5 512k
  sd length 76345m drive d1
  sd length 76345m drive d2
  sd length 76345m drive d3
  sd length 76345m drive d4
  sd length 76345m drive d5

Here's the output of `vinum list' after doing `vinum create -v
/etc/vinum.conf':

5 drives:
D d1  State: up   Device /dev/ad2s1e  Avail: 1822/78167 MB (2%)
D d2  State: up   Device /dev/ad4s1e  Avail: 1822/78167 MB (2%)
D d3  State: up   Device /dev/ad6s1e  Avail: 0/76345 MB (0%)
D d4  State: up   Device /dev/ad8s1e  Avail: 1822/78167 MB (2%)
D d5  State: up   Device /dev/ad10s1e Avail: 1822/78167 MB (2%)

1 volumes:
V raid                  State: down     Plexes:       1 Size:        298 GB

1 plexes:
P raid.p0            R5 State: init     Subdisks:     5 Size:        298 GB

5 subdisks:
S raid.p0.s0            State: empty    PO:        0  B Size:         74 GB
S raid.p0.s1            State: empty    PO:      512 kB Size:         74 GB
S raid.p0.s2            State: empty    PO:     1024 kB Size:         74 GB
S raid.p0.s3            State: empty    PO:     1536 kB Size:         74 GB
S raid.p0.s4            State: empty    PO:     2048 kB Size:         74 GB

I can't figure out what's wrong with my configuration.

---
 francis a. vidal [bitstop network services] | http://www.bnshosting.net
 streaming media + web hosting   | http://www.bitstop.ph
 v(02)330-2871,(02)330-2872; f(02)330-2873   | http://www.kuro.ph


Re: Help with vinum configuration

2003-06-12 Thread Bill Moran
Francis Vidal wrote:
Hi,

I'm using FreeBSD 4.8-STABLE on a Pentium III machine. I'm trying to
configure RAID-5 storage consisting of 5 80 GB IDE drives connected to
two (2) Promise Ultra-133 TX2 controllers (1 disk on each channel, and one
on channel 2 of the system board). The disks were configured as
``dangerously dedicated'' (using /stand/sysinstall).
Here's the output of `fdisk ad2':

snip
Here's my vinum configuration file (/etc/vinum.conf):

drive d1 device /dev/ad2s1e
drive d2 device /dev/ad4s1e
drive d3 device /dev/ad6s1e
drive d4 device /dev/ad8s1e
drive d5 device /dev/ad10s1e
volume raid
 plex org raid5 512k
  sd length 76345m drive d1
  sd length 76345m drive d2
  sd length 76345m drive d3
  sd length 76345m drive d4
  sd length 76345m drive d5
Here's the output of `vinum list' after doing `vinum create -v
/etc/vinum.conf':
5 drives:
D d1  State: up   Device /dev/ad2s1e  Avail: 1822/78167 MB (2%)
D d2  State: up   Device /dev/ad4s1e  Avail: 1822/78167 MB (2%)
D d3  State: up   Device /dev/ad6s1e  Avail: 0/76345 MB (0%)
D d4  State: up   Device /dev/ad8s1e  Avail: 1822/78167 MB (2%)
D d5  State: up   Device /dev/ad10s1e Avail: 1822/78167 MB (2%)
1 volumes:
V raid                  State: down     Plexes:       1 Size:        298 GB
1 plexes:
P raid.p0            R5 State: init     Subdisks:     5 Size:        298 GB
5 subdisks:
S raid.p0.s0            State: empty    PO:        0  B Size:         74 GB
S raid.p0.s1            State: empty    PO:      512 kB Size:         74 GB
S raid.p0.s2            State: empty    PO:     1024 kB Size:         74 GB
S raid.p0.s3            State: empty    PO:     1536 kB Size:         74 GB
S raid.p0.s4            State: empty    PO:     2048 kB Size:         74 GB
I can't figure out what's wrong with my configuration.
I don't actually see anything wrong.  What makes you think something is wrong?

Wait for the plex to finish the init state and then do 'vinum start raid'.
There may be another step in there somewhere, such as 'vinum start raid.p0',
but I'm not 100% sure.
It can take a bit of time to init 298 GB of RAID-5.
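
A minimal sketch of that sequence, assuming the plex initializes cleanly:

vinum -> list         # repeat until raid.p0 has left the 'init' state
vinum -> start raid   # then bring the volume up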

--
Bill Moran
Potential Technologies
http://www.potentialtech.com