Re: Vinum configuration syntax

2007-08-14 Thread CyberLeo Kitsana
Modulok wrote:
 Take the following example vinum config file:
 
 drive a device /dev/da2a
 drive b device /dev/da3a
 
 volume rambo
 plex org concat
 sd length 512m drive a
 plex org concat
 sd length 512m drive b
   
8<cut here>8
drive disk1 device /dev/ad4s1h
drive disk2 device /dev/ad5s1h
drive disk3 device /dev/ad6s1h

volume raid5
plex org raid5 512k
sd length 190782M drive disk1
sd length 190782M drive disk2
sd length 190782M drive disk3
8<cut here>8

This syntax still worked for me as of gvinum in 6.2. However, the new
SoC patches for geom_vinum functionality may change some things when
included.
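A minimal sketch of loading a config like the one above with gvinum,
assuming it is saved as /etc/gvinum.conf and the partitions already have
fstype vinum (file path and mount point are only examples):

gvinum create /etc/gvinum.conf   # read the config and create the objects
gvinum list                      # check drives, plexes and subdisks are up
newfs /dev/gvinum/raid5          # the volume appears under /dev/gvinum
mount /dev/gvinum/raid5 /mnt     # mount it somewhere to test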

-- 
Fuzzy love,
-CyberLeo
Technical Administrator
CyberLeo.Net Webhosting
http://www.CyberLeo.Net
[EMAIL PROTECTED]

Furry Peace! - http://.fur.com/peace/


Re: vinum stability?

2006-07-07 Thread Ian Jefferson

One thing you might consider is that gvinum is quite flexible.

The subdisks in vinum that make up a raid 5 plex are partitions.
This means you can create raid 5 sets without using each entire disk,
and the disks don't need to be the same model or size.  It's also
handy for spares.  If you start having media errors, a new partition
on the offending disk might be one option, but any other disk that
supports a partition size equal to the ones used as subdisks in the
raid 5 plex will also do.
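For example, a sketch of a config along those lines, with hypothetical
device names; each subdisk is a vinum-type partition of the same length,
even though the underlying disks differ:

drive d1 device /dev/ad4s1h
drive d2 device /dev/ad6s1h
drive d3 device /dev/da0s1h
volume data
  plex org raid5 512k
    sd length 102400m drive d1
    sd length 102400m drive d2
    sd length 102400m drive d3

A fourth drive with a spare 102400m partition could later stand in for
any failed subdisk.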


Having said that I'm finding it tricky to understand and use gvinum.   
It seems to be on the mend though, the documentation is improving and  
the raid 5 set I had running seemed pretty stable for a 40 minute  
iozone benchmark.  That's all I've done with it to date.


IJ

On Jul 6, 2006, at 8:56 AM, Jeremy Ehrhardt wrote:

I have a quad-core Opteron nForce4 box running 6.1-RELEASE/amd64  
with a gvinum RAID 5 setup comprising six identical SATA drives on  
three controllers (the onboard nForce4 SATA, which is apparently  
two devices, and one Promise FastTrak TX2300 PCI SATA RAID  
controller in IDE mode), combined into one volume named drugs.  
We've been testing this box as a file server, and it usually works  
fine, but smartd reported a few bad sectors on one of the drives,  
then a few days later it crashed while I was running chmod -R on a  
directory on drugs and had to be manually rebooted. I can't  
figure out exactly what happened, especially given that RAID 5 is  
supposed to be robust against single drive failures and that  
despite the bad blocks smartctl claims the drive is healthy.


I have three questions:
1: what's up with gvinum RAID 5? Does it crash randomly? Is it  
considered stable? Will it lose data?
2: am I using a SATA controller that has serious problems or  
something like that? In other words, is this actually gvinum's fault?
3: would I be better off using a different RAID 5 system on another  
OS?


Jeremy Ehrhardt
[EMAIL PROTECTED]




Re: vinum stability?

2006-07-06 Thread Chuck Swiger

Jeremy Ehrhardt wrote:
We've been testing this box as a 
file server, and it usually works fine, but smartd reported a few bad 
sectors on one of the drives, then a few days later it crashed while I 
was running chmod -R on a directory on drugs and had to be manually 
rebooted. I can't figure out exactly what happened, especially given 
that RAID 5 is supposed to be robust against single drive failures and 
that despite the bad blocks smartctl claims the drive is healthy.


As soon as you notice bad sectors appearing on a modern drive, it's time to 
replace it.  This is because modern drives already use spare sectors to 
replace failing data areas transparently, and when that no longer can be done 
because all of the spares have been used, the drive is likely to die shortly 
thereafter.
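If you want to watch for that yourself, something like the following
shows the remapping counters smartd alarms on (the device name is just
an example):

smartctl -a /dev/ad4 | grep -i -e reallocated -e pending
# a climbing Reallocated_Sector_Ct, or any Current_Pending_Sector count,
# usually means it is time to swap the drive before the spares run out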


RAID-5 provides protection against a single-drive failure, but once errors are 
seen, the RAID-volume is operating in degraded mode which involves a 
significant performance penalty and you no longer have any protection against 
data loss-- if you have a problem with another disk in the meantime before the 
failing drive gets replaced, you're probably going to lose the entire RAID 
volume and all data on it.



I have three questions:
1: what's up with gvinum RAID 5? Does it crash randomly? Is it 
considered stable? Will it lose data?


Gvinum isn't supposed to crash randomly, and it is reasonably stable, but it
doesn't seem to be as reliable as either a hardware RAID setup or the older
vinum from FreeBSD-4 and earlier.


As for losing data, see above.

2: am I using a SATA controller that has serious problems or something 
like that? In other words, is this actually gvinum's fault?


If you had a failing drive, that's not gvinum's fault.  gvinum is supposed to 
handle a single-drive failure, but it's not clear what actually went 
wrong...log messages or dmesg output might be useful.



3: would I be better off using a different RAID 5 system on another OS?


Changing OSes won't make much difference; using hardware to implement the RAID 
might be an improvement, rather than using gvinum's software RAID.  Of course, 
you'd have to adjust your config to fit within your hardware controller's 
capabilities.


--
-Chuck


Re: vinum stability?

2006-07-05 Thread Peter A. Giessel


On 7/5/2006 15:56, Jeremy Ehrhardt seems to have typed:
 3: would I be better off using a different RAID 5 system on another OS?

You would be best off with a 3ware card (www.3ware.com) running RAID 5
(hardware raid > software raid).

It works great in FreeBSD and is *very* stable and fault tolerant.



Re: vinum stability?

2006-07-05 Thread Jonathan Horne
On Wednesday 05 July 2006 19:05, Peter A. Giessel wrote:
 On 7/5/2006 15:56, Jeremy Ehrhardt seems to have typed:
  3: would I be better off using a different RAID 5 system on another OS?

 You would be best off with a 3ware card (www.3ware.com) running RAID 5
 (hardware raid > software raid).

 It works great in FreeBSD and is *very* stable and fault tolerant.


i have a 3ware card in my production server running a RAID5, and it's never
skipped a beat.

if you don't buy a raid card (with 6 or more channels), try breaking the usage
up.  put the system partitions on one controller, and build a 4 disk raid5 on
the other, and see if it behaves differently (i.e., remove the
cross-controller-raid from the equation and see what happens).

cheers,
jonathan


Re: vinum concat

2006-05-17 Thread Emil Thelin

On Wed, 17 May 2006, Joe Auty wrote:

Are there any tutorials explaining how to do so? So far, based on the lack of
info I've been able to find, it seems to me that this is a rarely used 
configuration... I'm wondering what the reasons for this might be?


http://devel.reinikainen.net/docs/how-to/Vinum/ might be helpful.

/e

--
http://hostname.nu/~emil


Re: vinum concat

2006-05-17 Thread Joe Auty
There might be some helpful nuggets in there, but I'm looking to  
basically combine the storage of multiple disks, like RAID-0, except  
I want my second drive written to only when my first drive has been  
filled. I understand this can be done via vinum concatenation. I'm  
looking for general feedback on whether anybody has tried this setup,  
how it worked, and what was useful to know to get started.
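For what it's worth, the config for that is about the simplest thing
vinum does; a sketch with made-up device names:

drive d1 device /dev/ad1s1h
drive d2 device /dev/ad2s1h
volume storage
  plex org concat
    sd length 0 drive d1
    sd length 0 drive d2

length 0 means use the rest of the drive, and the volume's address space
is the first subdisk followed by the second (though the filesystem may
still scatter its allocations across that space). Note there is only one
copy of the data, so losing either disk loses the volume.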




On May 17, 2006, at 12:06 PM, Emil Thelin wrote:


On Wed, 17 May 2006, Joe Auty wrote:

 Are there any tutorials explaining how to do so? So far, based on
the lack of info I've been able to find, it seems to me that this  
is a rarely used configuration... I'm wondering what the reasons  
for this might be?


http://devel.reinikainen.net/docs/how-to/Vinum/ might be helpful.

/e

--
http://hostname.nu/~emil




Re: Vinum and upgrading from 5.1-RELEASE

2006-04-13 Thread Emil Thelin

On Thu, 13 Apr 2006, Chris Hastie wrote:


I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I really wanted was 5.4-RELEASE.


Since 5.3 you should probably use the geom-aware gvinum instead of vinum.

I think the manual says that both vinum and gvinum can be used for > 5.3
but I've never actually got vinum to work on > 5.3.


/e


Re: Vinum and upgrading from 5.1-RELEASE

2006-04-13 Thread Chris Hastie

On Thu, 13 Apr 2006, Emil Thelin [EMAIL PROTECTED] wrote:


On Thu, 13 Apr 2006, Chris Hastie wrote:


I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I really wanted was 5.4-RELEASE.


Since 5.3 you should probably use the geom-aware gvinum instead of vinum.

 I think the manual says that both vinum and gvinum can be used for > 5.3
 but I've never actually got vinum to work on > 5.3.


Any hints as to how I make this migration? Is it as simple as putting
geom_vinum_load=YES in /boot/loader.conf, or is there further configuration
to do? Is gvinum happy with the same configuration data as vinum? Presumably
device names are different so I'll have to change /etc/fstab accordingly?


--
Chris Hastie


Re: Vinum and upgrading from 5.1-RELEASE

2006-04-13 Thread Emil Thelin

On Thu, 13 Apr 2006, Chris Hastie wrote:


On Thu, 13 Apr 2006, Emil Thelin [EMAIL PROTECTED] wrote:


On Thu, 13 Apr 2006, Chris Hastie wrote:


I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I really wanted was 5.4-RELEASE.


Since 5.3 you should probably use the geom-aware gvinum instead of vinum.

 I think the manual says that both vinum and gvinum can be used for > 5.3
 but I've never actually got vinum to work on > 5.3.


Any hints as to how I make this migration? Is it as simple as putting
geom_vinum_load=YES in /boot/loader.conf, or is there further configuration
to do? Is gvinum happy with the same configuration data as vinum? Presumably
device names are different so I'll have to change /etc/fstab accordingly?


I've never tried it so I'm not sure how to migrate from vinum to gvinum, 
check the handbook and ask google.


But my gut feeling about it is that it will probably not be that easy, 
g(vinum) has a way of causing headaches..


/e

--
http://blogg.hostname.nu || http://photoblog.hostname.nu


Re: Vinum FreeBSD 4.11-STABLE

2006-01-29 Thread Greg 'groggy' Lehey
On Monday, 30 January 2006 at 10:31:13 +0300, Forth wrote:
 Hi,
 I am trying to set up a vinum mirrored plex with two disks:
 ad2: 38204MB SAMSUNG SP0411N [77622/16/63] at ata1-master UDMA33
 ad3: 38204MB SAMSUNG SP0411N [77622/16/63] at ata1-slave UDMA33
 Disks are new and fully functional, but when i do:
 #vinum start
 #vinum
 vinum -> mirror -v -n mirror /dev/ad2 /dev/ad3
 i get this:
 drive vinumdrive0 device /dev/ad2
 Can't create drive vinumdrive0, device /dev/ad2: Can't initialize drive
 vinumdrive0

You should be using partitions, not disk drives.  Create a partition
of type vinum, for example /dev/ad2s1h, and specify that.  The man
page explains in more detail.
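Roughly, for each disk, something like the following (a sketch, assuming
the disk already carries a FreeBSD slice ad2s1; on 5.x and later the tool
is bsdlabel):

disklabel -w ad2s1 auto    # write a default label to the slice
disklabel -e ad2s1         # edit it: add an 'h' partition of the size
                           # you want, with its fstype set to vinum
# then use /dev/ad2s1h (not /dev/ad2) in the vinum config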

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum stuck in 'initializing' state ...

2005-11-28 Thread Marc G. Fournier


Ignore, Google'd a bit longer and found the answer ...

On Mon, 28 Nov 2005, Marc G. Fournier wrote:



but, from what I can tell, is doing absolutely nothing, both via iostat and 
vinum list ... the server has been 'up' for 15 minutes ... when I 
accidentally typed 'vinum init vm.p0.s0' instead of 'vinum start vm.p0.s0', 
the machine hung up and required a cold boot ... when it came back up, the
initialize was running:


Subdisk vm.p0.s0:
   Size:  72865336320 bytes (69489 MB)
   State: initializing
   Plex vm.p0 at offset 0 (0  B)
   Initialize pointer:  0  B (0%)
   Initialize blocksize:0  B
   Initialize interval: 0 seconds
   Drive d0 (/dev/da1s1a) at offset 135680 (132 kB)

But, nothing is happening on da1:

neptune# iostat 5 5
 tty da0  da1  da2 cpu
tin tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
  0  144  2.00 1513  2.95   0.00   0  0.00   0.00   0  0.00   2  0  4  0 93
  0   64 14.24 191  2.66   0.00   0  0.00   0.00   0  0.00  12  0 14  2 71
  0  138  7.92 172  1.33   0.00   0  0.00   0.00   0  0.00   7  0  6  0 87
  0   19 11.27 159  1.75   0.00   0  0.00   0.00   0  0.00  16  0  5  1 78
  0   69  8.95 157  1.37   0.00   0  0.00   0.00   0  0.00  23  0  4  1 72

Not sure what to do/try to give it a kick :(  Or did I just lose that whole 
file system?


This is with FreeBSD 4.x still ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664




Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: Vinum migration 4.x-5.4

2005-08-19 Thread Stijn Hoop
On Fri, Aug 19, 2005 at 11:01:55AM -0500, Robin Smith wrote:
 There seems to be a consensus in the references I've found that vinum
 is completely broken on 5.4

That is true. IMHO it should be removed from RELENG_5 and _6 if it isn't
already.

 and that gvinum/geom_vinum is not ready for production use.

Well the only reason it might not be is that it hasn't seen widespread
testing, as far as I can tell it should all just work. I do use gvinum
on a 5-STABLE host and it has worked well for me in the past [1].

 As it seems to me, this means that anyone using
 4.11 (say) and vinum will have to abandon vinum (i.e. quit doing software
 RAID) in order to upgrade to 5.4.

5.4 does have alternatives to vinum (which is another reason why gvinum
hasn't received as much testing): gmirror, graid3, gstripe, gconcat.

 That can be both laborious and slow
 (e.g. if you have /usr on, say, a four-drive vinum volume in 4.11, you're
 going to have to replace those drives with something else in order to go
 to 5.4.

I'd say building a new test box is about the only sane way to do it.

 Is that false, and is there a relatively simple way to get 
 geom_vinum in 5.4 to read a vinum configuration produced under 4.11 and
 start the vinum volume as it is?

As far as I can tell, it should just work. To pick up the latest round
of vinum fixes it might be best to run 5-STABLE (ie. RELENG_5) but it
should not be necessary unless you run into difficulties.

But the only way to know for sure if things work, is to test...

--Stijn

[1] for some reason I discovered a configuration problem earlier this
week, but the other part of the mirror is holding up and it seems
that I can reconstruct the broken part this weekend. If anything,
it seems that a gvinum mirrored plex is robust.

-- 
Coca-Cola is solely responsible for ensuring that people - too stupid to know
not to tip half-ton machines on themselves - are safe. Forget parenting - the
blame is entirely on the corporation for designing machines that look so
innocent and yet are so deadly.
-- http://www.kuro5hin.org/?op=displaystory;sid=2001/10/28/212418/42




Re: Vinum migration 4.x-5.4

2005-08-19 Thread Paul Mather
On Fri, 19 Aug 2005 11:01:55 -0500, Robin Smith
[EMAIL PROTECTED] wrote:

 There seems to be a consensus in the references I've found that vinum
 is completely broken on 5.4 and that gvinum/geom_vinum is not ready
 for production use.  As it seems to me, this means that anyone using
 4.11 (say) and vinum will have to abandon vinum (i.e. quit doing
 software
 RAID) in order to upgrade to 5.4.  That can be both laborious and slow
 (e.g. if you have /usr on, say, a four-drive vinum volume in 4.11,
 you're
 going to have to replace those drives with something else in order to
 go
 to 5.4.  Is that false, and is there a relatively simple way to get 
 geom_vinum in 5.4 to read a vinum configuration produced under 4.11
 and
 start the vinum volume as it is?

I am using geom_vinum on RELENG_5 without problems.  However, I use only
mirrored and concat plexes, and most of the problems I've heard people
experiencing involve RAID 5 plexes.

Geom_vinum uses the same on-disk metadata format as Vinum, so it will
read a configuration produced under 4.x---in fact, this was one of its
design goals.  BTW, Vinum is not the only software RAID option under
5.x: you can use geom_concat (gconcat) or geom_stripe (gstripe) for RAID
0; geom_mirror (gmirror) for RAID 1; and geom_raid3 (graid3) for RAID 3.
I successfully replaced my all-mirrored geom_vinum setup in-place on one
system with a geom_mirror setup.

Finally, if you are migrating from 4.x to 5.x, you might consider a
binary installation with restore rather than a source upgrade.  That
way, you can newfs your filesystems as UFS2 and get support for, e.g.,
snapshots, background fsck, etc.

Cheers,

Paul.
-- 
e-mail: [EMAIL PROTECTED]

Without music to decorate it, time is just a bunch of boring production
 deadlines or dates by which bills must be paid.
--- Frank Vincent Zappa


Re: Vinum Bootstrap Help

2005-07-22 Thread David Kelly


On Jul 22, 2005, at 5:51 PM, Ben Craig wrote:

However, it appears that the vinum config isn't being saved, as rebooting
the machine can't find the vinum root partition, and after manually booting
to the pre-vinum root (ufs:ad0s1a) running vinum list shows no volume
information.


Wasn't trying to boot root but had the same problem. Changed
start_cmd="gvinum start" (note the g) in /etc/rc.d/vinum (note the
filename was not renamed), suitably edited /etc/fstab, and have had
no problem since.


Or, apparently, rather than using start_vinum="YES" in /etc/rc.conf and
changing /etc/rc.d/vinum, placing geom_vinum_load="YES" in
/boot/loader.conf does the trick.


gvinum is said not to have all the features of vinum, but if vinum
won't start on boot then it isn't much good. gvinum has enough
features to work for me.


--
David Kelly N4HHE, [EMAIL PROTECTED]

Whom computers would destroy, they must first drive mad.



Re: Vinum subdisk crash

2005-06-30 Thread Gareth Bailey
Hello everyone,

Just to add, I had to type out the above message manually since I can't get 
access to anything with the crashed subdisk on /usr.
With regard to Greg's requests for information when reporting vinum problems 
as stated on the vinumvm.org website, I can provide the
following info:

What problems are you having?
My usr.p0.s0 subdisk reports a 'crashed' status

Which version of FreeBSD are you running?
4.10 Stable

Have you made any changes to the system sources, including Vinum?
No

Supply the output of the *vinum list* command. If you can't start Vinum, 
supply the on-disk configuration, as described below. If you can't start 
Vinum,
then (and only then) send a copy of the configuration file.
I can't get anything off the system, and it's too long to type out! (I have
the same layout as in the Van Valzah article.)

Supply an *extract* of the Vinum history file. Unless you have explicitly 
renamed it, it will be */var/log/vinum_history*. This file can get very big; 
please limit it to the time around when you have the problems. Each line 
contains a timestamp at the beginning, so you will have no difficulty in 
establishing which data is of relevance.
I will summarise the tail of vinum_history (doesn't seem to provide any 
interesting info):
30 Jun 2005 ***vinum started***
30 Jun 2005 list
30 Jun 2005 ***vinum started***
30 Jun 2005 dumpconfig

Supply an *extract* of the file */var/log/messages*. Restrict the extract to 
the same time frame as the history file. Again, each line contains a 
timestamp at the beginning, so you will have no difficulty in establishing 
which data is of relevance.
Again, I will summarise the tail contents of messages:
30 Jun server /kernel: ad0s1h: hard error reading fsbn 59814344 of 
29126985-29127080 (ad0s1 bn 59814344; cn 3723 tn 69 sn 2) trying PIO mode
30 Jun server /kernel: ad0s1h: hard error reading fsbn 59814344 of 
29126985-29127080 (ad0s1 bn 59814344; cn 3723 tn 69 sn 2) status=59 error=40
30 Jun server /kernel: vinum: usr.p0.s0 is crashed by force
30 Jun server /kernel: vinum: usr.p0 is faulty
30 Jun server /kernel: vinum: usr is down
30 Jun server /kernel: fatal:usr.p0.s0 read error, block 29126985 for 49152 
bytes
30 Jun server /kernel: usr.p0.s0: user buffer block 28102720 for 49152 bytes

If you have a crash, please supply a backtrace from the dump analysis as 
discussed below under Kernel Panics. Please don't delete the crash dump; it 
may be needed for further analysis.
I'm not sure if a kernel panic occurred?

I hope this information helps, and that someone can give me some advice! 

Cheers,
Gareth

On 6/30/05, Gareth Bailey [EMAIL PROTECTED] wrote:
 
 Hello,
 
 It appears that one of the vinum subdisks on my server has crashed. On 
 rebooting I get the following message:
 
 
 -- start message --
 
 Warning: defective objects
 
 V usr State:down Plexes:2 Size:37GB
 P usr.p0 C State:faulty Subdisks:1 Size:37GB
 P usr.p1 C State:faulty Subdisks:1 Size:37GB
 S usr.p0.s0 State:crashed P0:0 B Size:37GB
 S usr.p1.s0 State:stale P0:0 B Size:37GB
 
 [some fsck messages]
 Can't open /dev/vinum/usr: Input/output error
 [some more fsck messages]
 THE FOLLOWING FILE SYSTEM HAD AN UNEXPECTED INCONSISTENCY:
 /dev/vinum/usr (/usr)
 Automatic file system check failed . . . help!
 Enter full pathname of shell
 
 -- end message --
 
 I have a straightforward configuration based on the Bootstrapping Vinum:
 A Foundation for Reliable Servers article by Robert Van Valzah.
 What could have caused this? The disks are pretty new. Please advise on 
 the quickest route to getting our server back online.
 
 Much appreciated,
 
 Gareth Bailey


Re: Vinum subdisk crash

2005-06-30 Thread Greg 'groggy' Lehey
[Format recovered--see http://www.lemis.com/email/email-format.html]

Broken wrapping, unclear attribution, incorrect quotation levels.  It
took five minutes of my time fixing this message to a point where I
could reply to it.

On Thursday, 30 June 2005 at 15:37:56 +0200, Gareth Bailey wrote:
 Hello everyone,

 Just to add, I had to type out the above message manually since I can't get
 access to anything with the crashed subdisk on /usr.
 With regard to Greg's requests for information when reporting vinum problems
 as stated on the vinumvm.org website, I can provide the
 following info:

 What problems are you having?

 My usr.p0.s0 subdisk reports a 'crashed' status

 Supply the output of the *vinum list* command. If you can't start
 Vinum, supply the on-disk configuration, as described below. If you
 can't start Vinum, then (and only then) send a copy of the
 configuration file.

 I can't get anything off the system, and it's too long to type out!
 (I have the same layout as in the Van Valzah article.)

Would you like a reply like "I have an answer, but it's too long to
type out"?  Do you really expect me to go and re-read Bob's article?
This, along with your hard-to-read message, is a good way to be
ignored.

 Supply an *extract* of the file */var/log/messages*. Restrict the
 extract to the same time frame as the history file. Again, each
 line contains a timestamp at the beginning, so you will have no
 difficulty in establishing which data is of relevance.

 30 Jun server /kernel: ad0s1h: hard error reading fsbn 59814344 of 
 29126985-29127080 (ad0s1 bn 59814344; cn 3723 tn 69 sn 2) trying PIO mode
 30 Jun server /kernel: ad0s1h: hard error reading fsbn 59814344 of 
 29126985-29127080 (ad0s1 bn 59814344; cn 3723 tn 69 sn 2) status=59 error=40

You have a hardware problem with /dev/ad0.

 30 Jun server /kernel: vinum: usr.p0.s0 is crashed by force
 30 Jun server /kernel: vinum: usr.p0 is faulty
 30 Jun server /kernel: vinum: usr is down

And this suggests that your configuration is  non-resilient.

 I hope this information helps, and that someone can give me some
 advice!

Yes.  Replace the disk.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
The virus contained in this message was detected by LEMIS anti-virus.

Finger [EMAIL PROTECTED] for PGP public key.
See complete headers for address and phone numbers.




Re: Vinum and Volumes Larger Than 2TB

2005-06-29 Thread Nikolas Britton
On 6/27/05, Bri [EMAIL PROTECTED] wrote:
  Howdy,
 
  I'm attempting to use Vinum to concat multiple plexes together to make a 
 single 4.5TB volume. I've noticed that once I hit the 2TB mark it seems to 
 fail, it looks like once it hits 2TB the size gets reset to 0. Example below,
 
 [EMAIL PROTECTED]:~# vinum create /etc/vinum0.conf
 2 drives:
 D partition0   State: up   /dev/da0   A: 0/0 MB
 D partition1   State: up   /dev/da1   A: 0/0 MB
 
 1 volumes:
 V vinum0       State: up   Plexes:   1 Size:   1858 GB
 
 1 plexes:
 P vinum0.p0   C State: up   Subdisks: 2 Size:   3906 GB
 
 2 subdisks:
 S vinum0.p0.s0  State: up   D: partition0   Size:   1953 GB
 S vinum0.p0.s1  State: up   D: partition1   Size:   1953 GB
 
 
  Now I've seen mentions of people using Vinum on larger partitions and it 
 seems to work ok. Also when I use gvinum it succeeds, however given the state 
 of the gvinum implementation I'd like to stick with vinum.
 
 
  Suggestions/comments anyone?
 

What version of FreeBSD are you using? Also, I seem to remember from my
readings somewhere that there is still a soft limit of 2TB, depending
on what you do and how you do it, and that vinum had this problem.


Re: Vinum GVinum Recommendations

2005-06-27 Thread Mac Mason
On Mon, Jun 27, 2005 at 11:57:15AM -0700, Drew Tomlinson wrote:
 I'd appreciate hearing of your experiences with vinum, gvinum, and ccd, 
 especially as they relate to firewire devices.

In my experience, vinum doesn't play well with GEOM, and gvinum isn't
anywhere near feature-complete. (I haven't looked at gvinum in a while;
it has probably improved greatly)

On the other hand, using GEOM itself has worked quite well; I use
gmirror to mirror /usr, and gconcat to string a collection of drives
together. Both have worked flawlessly since I set them up.

--Mac




Re: Vinum GVinum Recommendations

2005-06-27 Thread Drew Tomlinson

On 6/27/2005 12:04 PM Mac Mason wrote:


On Mon, Jun 27, 2005 at 11:57:15AM -0700, Drew Tomlinson wrote:
 

I'd appreciate hearing of your experiences with vinum, gvinum, and ccd, 
especially as they relate to firewire devices.
   



In my experience, vinum doesn't play well with GEOM, and gvinum isn't
anywhere near feature-complete. (I haven't looked at gvinum in a while;
it has probably improved greatly)

On the other hand, using GEOM itself has worked quite well; I use
gmirror to mirror /usr, and gconcat to string a collection of drives
together. Both have worked flawlessly since I set them up.

   --Mac
 

Is GEOM something that is built in to 5, or is it a port?  I don't see it
in ports, so I assume it's built in.  If gconcat works well with
firewire, that would suit my needs just fine.


Thanks for your reply,

Drew

--
Visit The Alchemist's Warehouse
Magic Tricks, DVDs, Videos, Books, & More!

http://www.alchemistswarehouse.com



Re: vinum question

2005-06-05 Thread TAOKA Fumiyoshi
On Sun, 5 Jun 2005 14:14:43 +0200 (CEST)
Wojciech Puchar [EMAIL PROTECTED] wrote:

 i'm reading about vinum disk manager, did some test, works fine.
 
 but the question - could root partition be vinum volume? (i have compiled 
 kernel with vinum built in, not as module).

FreeBSD Handbook
17.9 Using Vinum for the Root Filesystem
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-root.html

-- 
TAOKA Fumiyoshi


Re: Vinum problems on FreeBSD 5.4-STABLE

2005-05-28 Thread FreeBSD questions mailing list


On 28 May 2005, at 02:21, Kris Kirby wrote:



Trying to make a mirror of two slices, but I seem to be running into some
issues here. I'm not on questions, so please CC me on all replies.

Maybe it's a good idea to subscribe, because then you would have been able
to read what we wrote about this for the last couple of days.



# /etc/vinum.conf
volume var
plex name var.p0 org concat
drive va device /dev/ad0s1g
sd name var.p0.s0 drive va length 256m
plex name var.p1 org concat
drive vb device /dev/ad1s1d
sd name var.p1.s0 drive vb length 256m

When I create it, using -f, I get:

vinum -> l
2 drives:
D va            State: up       /dev/ad0s1g     A: 1791/2048 MB (87%) - note triple allocation
D vb            State: up       /dev/ad1s1d     A: 255/512 MB (49%)

1 volumes:
V var           State: up       Plexes:       2 Size:        256 MB

2 plexes:
P var.p0      C State: up       Subdisks:     1 Size:        256 MB
P var.p1      C State: faulty   Subdisks:     1 Size:        256 MB

2 subdisks:
S var.p0.s0     State: up       D: va           Size:        256 MB
S var.p1.s0     State: empty    D: vb           Size:        256 MB


vinum does not like FBSD 5.4
or vice versa
it is not supported

you could manually 'setstate up var.p1.s0' and 'setstate up var.p1'
that should work
i tested it with a raid 1 mirror, removed the HD that came up ok and  
checked if the mirror would still come up, which it did

i definitely do not recommend it though
while you aren't relying on the raid yet, go use something other than vinum


Doesn't seem like this is right. I also run into a problem when  
trying to

do a resetconfig:

vinum -> resetconfig
 WARNING!  This command will completely wipe out your vinum  
configuration.

 All data will be lost.  If you really want to do this, enter the text

 NO FUTURE
 Enter text -> NO FUTURE
Can't find vinum config: Inappropriate ioctl for device

you'll be getting this all the time if you continue to use vinum in
FBSD 5.4
and the configuration is still there after I try to resetconfig. When I
reboot and do a vinum start, I get an error that there are no drives in
the vinum config.


do 'vinum read' but expect to have kernel panics...



FreeBSD ginsu.catonic.net 5.4-STABLE FreeBSD 5.4-STABLE #1: Fri May 27
01:28:15 PDT 2005 [EMAIL PROTECTED]:/usr/src/sys/i386/ 
compile/SMP

i386

dmesg availible on request. No securelevel in place, at -1. Thanks in
advance.


again, i'd stay away from vinum/gvinum in FBSD 5.x  if i were you...
if you don't, come join me in the mental institute for stubborn vinum
users :)

Arno


Re: vinum: Inappropriate ioctl for device

2005-05-28 Thread FreeBSD questions mailing list


On 28 May 2005, at 09:01, [EMAIL PROTECTED] wrote:





[...]


you're welcome

maybe the complete vinum chapter should be removed from the handbook?

Arno



Perhaps the Vinum chapter should say up front that Vinum works with  
FreeBSD 4.x but not with 5.x


jd



yeah better idea
Arno


Re: vinum: Inappropriate ioctl for device

2005-05-28 Thread web



[...]

you're welcome

maybe the complete vinum chapter should be removed from the handbook?

Arno


Perhaps the Vinum chapter should say up front that Vinum works with FreeBSD 
4.x but not with 5.x


jd


Janos Dohanics
[EMAIL PROTECTED]
http://www.3dresearch.com/ 




Re: vinum: Inappropriate ioctl for device

2005-05-28 Thread FreeBSD questions mailing list


On 27 May 2005, at 16:38, [EMAIL PROTECTED] wrote:





[...]

not very encouraging... Is it a RAID 5 you were able to make work
under 5.4?

jd



hey
yeah a 3x160 GB RAID5 and 2x80 GB RAID 1 in FreeBSD 5.4-p1
 i get the same error message every time i start vinum or whenever i
execute a command in vinum
and really loads of kernel panics whenever i try to get it to work
after a crash
i have an unorthodox way of recovering from this because it's nearly
impossible to do stuff in vinum without causing kernel panics
yesterday everything was ruined again (after upgrading to p1)
i wiped the complete config (resetconfig - NO FUTURE)
read in the config file (create RAID5)
set every state up manually (setstate up ...)
and rebuild parity
took about 10 hours to rebuild but then everything is back up and
running

i tried Gvinum too but that doesn't have the setstate nor the rebuild
parity command and still you can't stop gvinum (gvinum stop doesn't
work, nor does kldunload geom_vinum.ko)

i have no way of changing to a different soft raid due to lack of
space to back up, so i'm stuck with this for as long as it takes :)

so, one piece of advice: don't do it
hehehe
Arno



Many thanx... I guess I better stick with 4.11

jd



you're welcome

maybe the complete vinum chapter should be removed from the handbook?

Arno



Re: vinum: Inappropriate ioctl for device

2005-05-27 Thread FreeBSD questions mailing list


On 26 May 2005, at 23:10, jd wrote:



I am trying to set up Vinum on a new system, and I get the error  
message:

vinum: Inappropriate ioctl for device.

Here are the details:

...

Vinum used to work beautifully under 4.11; I'm wondering what I need to
change to make it work under 5.4?



Go back to 4.11!
vinum is a nightmare in 5.4
and gvinum is not nearly mature enough...

I do have it running but every update it takes me 2 days to get the  
RAIDs back up


Arno




Re: vinum: Inappropriate ioctl for device

2005-05-27 Thread FreeBSD questions mailing list


On 27 May 2005, at 16:13, [EMAIL PROTECTED] wrote:





[...]





Go back to 4.11!
vinum is a nightmare in 5.4
and gvinum is not nearly mature enough...

I do have it running but every update it takes me 2 days to get the
RAIDs back up

Arno



Arno,

not very encouraging... Is it a RAID 5 you were able to make work  
under 5.4?


jd



hey
yeah a 3x160 GB RAID5 and 2x80 GB RAID 1 in FreeBSD 5.4-p1
 i get the same error message every time i start vinum or whenever i
execute a command in vinum
and really loads of kernel panics whenever i try to get it to work  
after a crash
i have an unorthodox way of recovering from this because it's nearly  
impossible to do stuff in vinum without causing kernel panics

yesterday everything was ruined again (after upgrading to p1)
i wiped the complete config (resetconfig - NO FUTURE)
read in the config file (create RAID5)
set every state up manually (setstate up ...)
and rebuild parity
took about 10 hours to rebuild but then everything is back up and  
running


i tried Gvinum too but that doesn't have the setstate nor the rebuild
parity command and still you can't stop gvinum (gvinum stop doesn't  
work, nor does kldunload geom_vinum.ko)


i have no way of changing to a different soft raid due to lack of  
space to back up, so i'm stuck with this for as long as it takes :)


so, one piece of advice: don't do it
hehehe
Arno


Re: vinum: Inappropriate ioctl for device

2005-05-27 Thread Mark Bucciarelli

FreeBSD questions mailing list wrote:

i tried Gvinum too but that doesn't have the setstate nor the ruibld  
parity command and still you can't stop gvinum (gvinum stop doesn't  
work, nor does kldunload geom_vinum.ko)


try gmirror for raid 1.  it worked great for me.

could gmirror and gstripe be used to get raid5?  i think i read a geom 
provider can be used as a consumer ...
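Stacking does work (one class's provider can be another class's consumer),
but gmirror plus gstripe gives you RAID 1+0 rather than RAID 5; for parity
RAID you would still need graid3 or gvinum. A sketch of the stacked layout,
with made-up devices:

gmirror label -v m0 /dev/ad4 /dev/ad6
gmirror label -v m1 /dev/ad5 /dev/ad7
gstripe label -v st0 /dev/mirror/m0 /dev/mirror/m1
newfs /dev/stripe/st0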




Re: vinum: Inappropriate ioctl for device

2005-05-26 Thread Andrea Venturoli

jd wrote:

I am trying to set up Vinum on a new system, and I get the error message:
vinum: Inappropriate ioctl for device.

Here are the details:

- FreeBSD 5.4-RELEASE


I'd be glad if someone steps in and says I'm wrong, but AFAIK vinum is 
not supported anymore on new releases.
At least in my experience it stopped working when I upgraded from 5.2.1 
to 5.3, producing continuous crashes until I switched to gmirror.
You might want to look at gvinum, but last I checked it wasn't quite 
finished.


 bye
av.

P.S. Please, let me know if you manage to make it.


Re: Vinum (Again)

2005-04-26 Thread Greg 'groggy' Lehey
On Friday, 22 April 2005 at 16:11:55 -0400, Timothy Radigan wrote:
 Hi all,

 Ok, I'm still having trouble with vinum, I got it to load at start, but the
 vinum.autostart=YES in /boot/loader.conf returns a vinum: no drives
 found message.

 I had the mirrored set up and running before the reboot and the file system
 was mounted and everything, I even made sure to issue a saveconfig in
 vinum to make sure the configuration was written to the drives.

 There is no mention of anything else in the Handbook or in the Complete
 FreeBSD chapter on Vinum that describes how to get the configured drives
 loaded at boot up.  Am I missing something?

 Here is my /etc/vinum.conf:

 drive a device /dev/ad1
 drive b device /dev/ad2

These should be partitions, not disks.
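That is, after labelling each disk with a vinum-type partition, the config
would look something like this (a sketch; the 'h' partition names are only
an example):

drive a device /dev/ad1s1h
drive b device /dev/ad2s1h
volume mirror
  plex org concat
    sd length 38146m drive a
  plex org concat
    sd length 38146m drive b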

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: Vinum

2005-04-22 Thread Kevin Kinsey
Timothy Radigan wrote:
I know this topic has come up before, but how in the world do you get vinum
to load AND start itself at boot time so I don't have to repair my mirrored
volumes every reboot?
I have tried to add start_vinum=YES to /etc/rc.conf but that ends in a
'panic dangling node' error.  I also tried to add vinum_load=YES to
/boot/loader.conf and that will load vinum at boot without panicing on me,
but it doesn't start vinum and bring the mirrored devices up.  I have to
physically log into the server and type vinum start to bring up vinum.
What I want, is that on a reboot, vinum is loaded at boot and is started so
that I can mount my vinum drive through /etc/fstab
Any ideas/suggestions?
--Tim
 

If you're on 5.X, I believe you want gvinum instead of vinum; vinum
has been trying hard to catch up with the addition of the GEOM layer
in 5.X (Kudos to Lukas E!)
Kevin Kinsey


Re: Vinum

2005-04-22 Thread Peter Giessel
On Friday, April 22, 2005, at 08:37AM, Timothy Radigan [EMAIL PROTECTED] 
wrote:


I also tried to add vinum_load=YES to
/boot/loader.conf


See page 239 of The Complete FreeBSD (page 19 of the PDF)
http://www.vinumvm.org/cfbsd/vinum.pdf

also add vinum.autostart=YES to /boot/loader.conf



Re: Vinum (Again)

2005-04-22 Thread Robert Slade
On Fri, 2005-04-22 at 21:11, Timothy Radigan wrote:
 Hi all,
 
 Ok, I'm still having trouble with vinum, I got it to load at start, but the
 vinum.autostart=YES in /boot/loader.conf returns a vinum: no drives
 found message.
 
 I had the mirrored set up and running before the reboot and the file system
 was mounted and everything, I even made sure to issue a saveconfig in
 vinum to make sure the configuration was written to the drives.  
 
 There is no mention of anything else in the Handbook or in the Complete
 FreeBSD chapter on Vinum that describes how to get the configured drives
 loaded at boot up.  Am I missing something?  
 
 Here is my /etc/vinum.conf:
 
 drive a device /dev/ad1
 drive b device /dev/ad2
 volume mirror
   plex org concat
 sd length 38146m drive a
   plex org concat
 sd length 38146m drive b

Timothy,

Not sure if this helps but there is a note in the handbook errata:

(31 Oct 2004, updated on 12 Nov 2004) The vinum(4) subsystem works on
5.3, but it can cause a system panic at boot time. As a workaround you
can add vinum_load=YES to /boot/loader.conf.

As an alternative you can also use the new geom(4)-based vinum(4)
subsystem. To activate the geom(4)-aware vinum at boot time, add
geom_vinum_load=YES to /boot/loader.conf and remove start_vinum=YES
in /etc/rc.conf if it exists.

While some uncommon configurations, such as multiple vinum drives on a
disk, are not supported, it is generally backward compatible. Note that
for the geom(4)-aware vinum, its new userland control program, gvinum,
should be used, and it is not yet feature-complete.

Mind you, I had a similar problem to you and failed to get it to work.

Rob






Re: vinum setup

2005-04-13 Thread Chuck Robey
Lowell Gilbert wrote:
Chuck Robey [EMAIL PROTECTED] writes:

Sorry, I'm having a miserable time trying to get vinum working on my
amd64 system.  Vinum tells me that it can't load the kernel (vinum:
Kernel module not available: No such file or directory).  Gvinum
simply refuses to take any commands at all.  I tried looking at
/boot/kernel, naturally didn't find any such module, so I wanted to
see about building one, but I can't get device vinum to pass
config's purview.
Does vinum work on amd64's?

Shouldn't that be gvinum?
when I wrote that, I didn't understand the difference between vinum and 
gvinum.  I do now, but I tell you, gvinum sure as heck needs 
documentation.  Fixing the resetconfig command would be a really good 
thing, too.


Re: vinum trouble on 5.3-Stable

2005-04-05 Thread Ean Kingston

 I upgraded to 5.3 on one system a while ago. And
 when it boots up vinum panics the system on startup
 with this message:
 panic: unmount: dangling vnode

 I found that if I boot in single user mode and
 mount / to make it rw, then start vinum, everything
 is fine.

 I just patched the kernel for the sendfile bug so
 this has come up again.

 Is this an order of execution problem? Do I change when
 vinum is started? What the solution.

AFAIK the only current solution is to switch to gvinum. There are more
details about it in the archive.

 I'm happy (apart from this) with 5.x and plan to upgrade
 my main server to 5.x, now that I've got a good handle (I
 think) on bind 9.

-- 
Ean Kingston
E-Mail: ean_AT_hedron_DOT_org
 PGP KeyID: 1024D/CBC5D6BB
   URL: http://www.hedron.org/




Re: Vinum Problem

2005-03-28 Thread Ean Kingston

 On Sun, 2005-03-27 at 16:59, Ean Kingston wrote:
 On March 27, 2005 10:35 am, Robert Slade wrote:
  Hi,
 
  I have managed to setup a vinum volume using 2 striped disks, the
 volume
  is created and I can do newfs on it and mount it.
 
  However, when I set start_vinum=YES in rc.conf, vinum loads then I
 get
  panic, followed by hanging vnode.
 
  I'm using 5.3.
 
  Any pointers please.

 In 5.3, you need to use gvinum instead of vinum. To do this set
 start_vinum=NO in /etc/rc.conf and set geom_vinum_load=YES
 in /boot/loader.conf.

 gvinum will read your vinum configuration just fine so you only need to
 make
 the changes I suggested to get it to work.

 Although this is documented, it is not what I would call 'well
 documented' yet.

 Ean,

 Thank you, that got me further. It appears to have created a new
 /dev/gvinum/test, which seems to be the right size, but when I mount it as
 /test, I get 'not a directory' when I try to ls it.

The mount point needs to exist prior to mounting a filesystem so, try
something like this (as root):

mkdir /test
mount /dev/gvinum/test /test
mount | grep test

That last one should produce the following output,

/dev/gvinum/test on /test (ufs, local, soft-updates)

which indicates that you have a mounted filesystem on /test.

 I have tried to find documentation on geom, but that seems to be related
 to mirroring.

Ya, documentation is still being worked on. For basic stuff (like creating
concatenated volumes) you can use the vinum documentation and replace
'vinum' with 'gvinum' when you try things. Using your 'test' filesystem is
a very good idea. Some aspects of vinum still aren't fully implemented in
gvinum.

Remember, if you just created your /test volume, it should be empty. You
did run 'newfs /dev/gvinum/test' after creating it and before mounting it,
right?

-- 
Ean Kingston
E-Mail: ean_AT_hedron_DOT_org
 PGP KeyID: 1024D/CBC5D6BB
   URL: http://www.hedron.org/




Re: Vinum Problem

2005-03-27 Thread Ean Kingston
On March 27, 2005 10:35 am, Robert Slade wrote:
 Hi,

 I have managed to setup a vinum volume using 2 striped disks, the volume
 is created and I can do newfs on it and mount it.

 However, when I set start_vinum=YES in rc.conf, vinum loads then I get
 panic, followed by hanging vnode.

 I'm using 5.3.

 Any pointers please.

In 5.3, you need to use gvinum instead of vinum. To do this set 
start_vinum=NO in /etc/rc.conf and set geom_vinum_load=YES 
in /boot/loader.conf.

gvinum will read your vinum configuration just fine so you only need to make 
the changes I suggested to get it to work.

Although this is documented, it is not what I would call 'well documented'
yet.
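Concretely, that amounts to something like the following (a sketch; the
device path under /dev/gvinum depends on your volume names):

# /etc/rc.conf
start_vinum="NO"          # or remove the line entirely

# /boot/loader.conf
geom_vinum_load="YES"

# /etc/fstab entries then point at /dev/gvinum/<volume>, e.g.
# /dev/gvinum/test  /test  ufs  rw  2  2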

-- 
Ean Kingston

E-Mail: ean AT hedron DOT org
URL: http://www.hedron.org/


Re: Vinum Problem

2005-03-27 Thread Robert Slade
On Sun, 2005-03-27 at 16:59, Ean Kingston wrote:
 On March 27, 2005 10:35 am, Robert Slade wrote:
  Hi,
 
  I have managed to setup a vinum volume using 2 striped disks, the volume
  is created and I can do newfs on it and mount it.
 
  However, when I set start_vinum=YES in rc.conf, vinum loads then I get
  panic, followed by hanging vnode.
 
  I'm using 5.3.
 
  Any pointers please.
 
 In 5.3, you need to use gvinum instead of vinum. To do this set 
 start_vinum=NO in /etc/rc.conf and set geom_vinum_load=YES 
 in /boot/loader.conf.
 
 gvinum will read your vinum configuration just fine so you only need to make 
 the changes I suggested to get it to work.
 
 Although this is documented, it is not what I would call 'well documented'
 yet.

Ean,

Thank you, that got me further. It appears to have created a new
/dev/gvinum/test, which seems to be the right size, but when I mount it as
/test, I get 'not a directory' when I try to ls it.

I have tried to find documentation on geom, but that seems to be related
to mirroring.

Rob



Re: Vinum, newfs: could not open special device

2005-03-09 Thread FreeBSD questions mailing list
On 09 Mar 2005, at 22:11, Benjamin Keating wrote:
Hey all. Running FreeBSD 5.3 with GENERIC kernel. This isn't high
priority, but I've never had problems with Vinum before and this one's
got me stumped.

 /etc/vinum.conf ###

bigbang# cat /etc/vinum.conf
drive a device /dev/ad4e
drive b device /dev/ad5e
drive c device /dev/ad6e
volume backup
  plex org raid5 384k
sd length 0 drive a
sd length 0 drive b
sd length 0 drive c

### MY STEPS ###
1). bsdlabel -w /dev/ad{4,5,6}
2). bsdlabel -e /dev/ad{4,5,6}
2.1). removed partition a
  copied partition c and changed fstype to vinum
  :wq
3). vinum config -f /etc/vinum.conf
try
3) vinum create -f /etc/vinum.conf
Arno
4). newfs /dev/vinum/backup
### PROBLEM  ###
newfs: /dev/vinum/backup: could not open special device
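Putting Arno's correction together with the original steps, the working
sequence would look roughly like this, assuming the same /etc/vinum.conf,
the labels from steps 1-2, and a /backup mount point:

vinum create -f /etc/vinum.conf   # 'create', not 'config', as noted above
vinum list                        # check the volume and raid5 plex are up
newfs /dev/vinum/backup
mount /dev/vinum/backup /backup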


Re: Vinum raid5 problems......

2005-03-04 Thread matt virus
Greg 'groggy' Lehey wrote:
On Thursday,  3 March 2005 at 15:35:31 -0600, matt virus wrote:
Hi all:
I have a FBSD 5.2.1 box running vinum.  7 *160gb drives in a raid5 array.
I can post specific errors and logs and such later, i'm away from the
box right now --- anybody have any thoughts ?

How about http://www.vinumvm.org/vinum/how-to-debug.html?
Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.
freebsd 5.2.1
specific problems:
1) a post-mount FSCK causes a kernel panic
2) an FSCK from single user mode errors with cannot allocate  
bytes for inphead
3) in single user mode, the array will mount and do a small fsck - 
recalculate the superblock, and then allow me to traverse the array. 
When i try to access files on the array, i get an error :
null rqg

zero source changes
Vinum List:
vinum -> list
7 drives:
D d7   State: up   /dev/ad11s1d A: 0/156327 MB (0%)
D d6   State: up   /dev/ad10s1d A: 0/156327 MB (0%)
D d5   State: up   /dev/ad8s1d A: 0/156327 MB (0%)
D d4   State: up   /dev/ad7s1d A: 0/156327 MB (0%)
D d3   State: up   /dev/ad6s1d A: 0/156327 MB (0%)
D d2   State: up   /dev/ad5s1d A: 0/156327 MB (0%)
D d1   State: up   /dev/ad4s1d A: 0/156327 MB (0%)
1 volumes:
V raid5State: up   Plexes:   1 Size:915 GB
1 plexes:
P raid5.p0  R5 State: up   Subdisks: 7 Size:915 GB
7 subdisks:
S raid5.p0.s0  State: up   D: d1   Size:152 GB
S raid5.p0.s1  State: up   D: d2   Size:152 GB
S raid5.p0.s2  State: up   D: d3   Size:152 GB
S raid5.p0.s3  State: up   D: d4   Size:152 GB
S raid5.p0.s4  State: up   D: d5   Size:152 GB
S raid5.p0.s5  State: up   D: d6   Size:152 GB
S raid5.p0.s6  State: up   D: d7   Size:152 GB
Vinum log extract:
A few days ago, the array was mucked -- i saw one of the subdisks was
down...here's the log from that incident

2 Mar 2005 21:46:22.721677 *** vinum started ***
 2 Mar 2005 21:46:25.871262 list
 2 Mar 2005 21:46:28.935388 start
 2 Mar 2005 21:46:31.153046 list
 2 Mar 2005 21:46:46.616922 start raid5.p0.s6
 2 Mar 2005 21:46:49.949753 quit
Other than that - there is nothing of particular note in the vinum 
history - the entire file can be supplied if need be.

Kernel dumps will be supplied later today or tomorrow - i'm not going to 
load up and dump the raid5 array for fear of further corruption.  I'm 
gathering hardware to make an image of it onto a hardware raid5 
controller and see if i can salvage it from there.

If you need more, say the word.  The kernel dumps will come soon

--
Matt Virus (veer-iss)
http://www.mattvirus.net


Re: Vinum raid5 problems......

2005-03-03 Thread Greg 'groggy' Lehey
On Thursday,  3 March 2005 at 15:35:31 -0600, matt virus wrote:
 Hi all:

 I have a FBSD 5.2.1 box running vinum.  7 *160gb drives in a raid5 array.

 I can post specific errors and logs and such later, i'm away from the
 box right now --- anybody have any thoughts ?

How about http://www.vinumvm.org/vinum/how-to-debug.html?

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum vs. DPT smartcacheIV raid

2005-02-16 Thread Ean Kingston

 I have a box with DPT PM2044 SmartCacheIV UW-SCSI PCI cards which can do
 RAID-5 in hardware, but I'd have to use the DOS volume manager to set up
 the array. I have heard reports that vinum would be faster than using the
 native card. Is this true?

Doubtful, though I have heard that there are some rare special
circumstances where software raid can be faster. Given your hardware, you
will probably not experience those conditions.

 Should I not bother with doing the hardware
 raid
 and just go with vinum?

Use the hardware RAID, especially if you are going to use a simple RAID
configuration (like one big RAID-5 virtual disk). Just make sure you have
some way of figuring out if one of the disks goes bad. Worst case you
could boot off a DOS floppy once in a while to make sure all the disks are
still good.

 The rest of the system is a k6-2 400mhz with 256mb ram (amount might
 change).
 I will also have moderate network i/o on the pci bus (obviously).

-- 
Ean Kingston

E-Mail: ean_AT_hedron_DOT_org
URL: http://www.hedron.org/



Re: vinum vs. DPT smartcacheIV raid

2005-02-16 Thread Peter C. Lai
On Wed, Feb 16, 2005 at 09:58:17AM -0500, Ean Kingston wrote:
 
  I have a box with DPT PM2044 SmartCacheIV UW-SCSI PCI cards which can do
  RAID-5 in hardware, but I'd have to use the DOS volume manager to set up
  the array. I have heard reports that vinum would be faster than using the
  native card. Is this true?
 
 Doubtful, though I have heard that there are some rare special
 circumstances where software raid can be faster. Given your hardware, you
 will probably not experience those conditions.

The reason I asked is because
http://www.shub-internet.org/brad/FreeBSD/vinum.html
suggests vinum can be marginally better than the hardware raid on the
smartraid range of cards (which have an even faster processor onboard
than the smartcache range). The CPU platform is more or less comparable.
Then again it is with old Fbsd, so I don't know how accurate that is.

 
  Should I not bother with doing the hardware
  raid
  and just go with vinum?
 
 Use the hardware RAID, especially if you are going to use a simple RAID
 configuration (like one big RAID-5 virtual disk). Just make sure you have
 some way of figuring out if one of the disks goes bad. Worst case you
 could boot off a DOS floppy once in a while to make sure all the disks are
 still good.
 
  The rest of the system is a k6-2 400mhz with 256mb ram (amount might
  change).
  I will also have moderate network i/o on the pci bus (obviously).
 
 -- 
 Ean Kingston
 
 E-Mail: ean_AT_hedron_DOT_org
 URL: http://www.hedron.org/
 

-- 
Peter C. Lai
University of Connecticut
Dept. of Molecular and Cell Biology
Yale University School of Medicine
SenseLab | Research Assistant
http://cowbert.2y.net/



Re: vinum vs. DPT smartcacheIV raid

2005-02-16 Thread Ean Kingston

 On Wed, Feb 16, 2005 at 09:58:17AM -0500, Ean Kingston wrote:

  I have a box with DPT PM2044 SmartCacheIV UW-SCSI PCI cards which can
 do
  RAID-5 in hardware, but I'd have to use the DOS volume manager to set
 up
   the array. I have heard reports that vinum would be faster than using
 the
  native card. Is this true?

 Doubtful, though I have heard that there are some rare special
 circumstances where software raid can be faster. Given your hardware,
 you
 will probably not experience those conditions.

 The reason I asked is because
 http://www.shub-internet.org/brad/FreeBSD/vinum.html

I did not know that. Interesting read.

 suggests vinum can be marginally better than the hardware raid on the
 smartraid range of cards (which have an even faster processor onboard
 than the smartcache range). The CPU platform is more or less comparable.
 Then again it is with old Fbsd, so I don't know how accurate that is.

You may have noticed that there were comments about not trusting vinum's
RAID5 support in that article. If you are using FreeBSD 5.3, the default
is now gvinum (sort of second generation of vinum). The gvinum tools don't
give you the ability to create RAID5 virtual disks, so if that is what you
want, you may not want to go with vinum or gvinum.

Another thing to consider is if you use software RAID and your application
gets CPU bound, you are going to take a double performance hit (both disk
and cpu).

I don't know your situation so it is your call.


  Should I not bother with doing the hardware
  raid
  and just go with vinum?

 Use the hardware RAID, especially if you are going to use a simple RAID
 configuration (like one big RAID-5 virtual disk). Just make sure you
 have
 some way of figuring out if one of the disks goes bad. Worst case you
 could boot off a DOS floppy once in a while to make sure all the disks
 are
 still good.

  The rest of the system is a k6-2 400mhz with 256mb ram (amount might
  change).
  I will also have moderate network i/o on the pci bus (obviously).

 --
 Ean Kingston

 E-Mail: ean_AT_hedron_DOT_org
 URL: http://www.hedron.org/


 --
 Peter C. Lai
 University of Connecticut
 Dept. of Molecular and Cell Biology
 Yale University School of Medicine
 SenseLab | Research Assistant
 http://cowbert.2y.net/




-- 
Ean Kingston

E-Mail: ean_AT_hedron_DOT_org
URL: http://www.hedron.org/



Re: vinum vs. DPT smartcacheIV raid

2005-02-16 Thread Willem Jan Withagen
Peter C. Lai wrote:
I have a box with DPT PM2044 SmartCacheIV UW-SCSI PCI cards which can do 
RAID-5 in hardware, but I'd have to use the DOS volume manager to set up 
 the array. I have heard reports that vinum would be faster than using the 
native card. Is this true? Should I not bother with doing the hardware raid 
and just go with vinum?

The rest of the system is a k6-2 400mhz with 256mb ram (amount might change).
I will also have moderate network i/o on the pci bus (obviously).
I still have one here lingering around somewhere on a shelf. It ran a
4 x 1 GB disk array in RAID-5 back when 1 GB disks were still sort of big.
So that is how old this card is.

With that card I did have some unpleasant experiences:
- First and most major, it seems that you need to have the right firmware
version in it. Otherwise things might get seriously hosed at unexpected
times -- just buffers timing out in the middle of the night.
- The other issue was that my disks were in an external cabinet and once
the cable came loose. It killed the raid as expected, but it took me a
long time to find some tools to force the disks up by brute force, just to
see if I could recover some of the data.
And like you say: all these tools are DOS based.

Currently I'm running a 4 x 60 GB ATA RAID5 with old vinum on a 233 MHz P2
with 256 MB RAM and FreeBSD 5.1 -- ATA simply because ATA disks are so much
cheaper per MB, and I do not need the utmost dependability for my 6-PC office.
I've ordered 4 x 250 GB ATA disks this week to build a new RAID5, and I'll
go with vinum again.

--WjW


Re: vinum vs. DPT smartcacheIV raid

2005-02-16 Thread Greg 'groggy' Lehey
[Format recovered--see http://www.lemis.com/email/email-format.html]

 X-Mailer: SquirrelMail/1.4.3a

This seems to have difficulty wrapping quotes.

On Wednesday, 16 February 2005 at 10:52:24 -0500, Ean Kingston wrote:

 On Wed, Feb 16, 2005 at 09:58:17AM -0500, Ean Kingston wrote:

 I have a box with DPT PM2044 SmartCacheIV UW-SCSI PCI cards which
 can do RAID-5 in hardware, but I'd have to use the DOS volume
 manager to set up the array. I have heard reports that vinum
  would be faster than using the native card. Is this true?

 Doubtful, though I have heard that there are some rare special
 circumstances where software raid can be faster.

Recall that there are no real hardware RAID controllers on the
market.  The difference is whether you have a special processor on the
controller card or not.  To determine which is faster, you need to
compare the hardware on the card and the hardware in the system.
 The reason I asked is because
 http://www.shub-internet.org/brad/FreeBSD/vinum.html

 I did not know that. Interesting read.

 suggests vinum can be marginally better than the hardware raid on
 the smartraid range of cards (which have an even faster processor
 onboard than the smartcache range). The CPU platform is more or
 less comparable.  Then again it is with old Fbsd, so I don't know
 how accurate that is.

I'd guess that the version of FreeBSD is of no particular relevance.

 You may have noticed that there were comments about not trusting
 vinum's RAID5 support in that article.

You'll also note that these claims are in no way substantiated.  It's
word of mouth:

 However, I still don't trust RAID-5 under vinum (it has had a long
 and colorful history of surprisingly negative interactions with
 software that it should not, such as softupdates),

There is in fact no substantiation whatsoever for this claim.  Problems
have been reported whose reporters suspected either Vinum or soft
updates, and which were frequently related to neither.  We have no
reason to believe that there was ever the kind of problem he's talking
about here.

 and I have not yet had a chance to test vinum under failure mode
 conditions (where at least one disk of a RAID set has failed).

I have.  It works.  There appears to be a bug in reintegrating disks
with a live file system, so it should be unmounted first.

(end of quotation).
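In concrete terms that advice looks roughly like this (a sketch only; the
volume and subdisk names are hypothetical, following the naming style used
elsewhere in this thread, and the mount point is whatever you normally use):

# umount /dev/vinum/raid5          # take the volume out of service first
# vinum start raid5.p0.s6          # revive the failed subdisk onto its drive
# vinum list                       # repeat until the subdisk is no longer reviving
# fsck -t ufs /dev/vinum/raid5     # sanity-check the file system
# mount /dev/vinum/raid5 /raid5    # then return it to service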

 If you are using FreeBSD 5.3, the default is now gvinum (sort of
 second generation of vinum). The gvinum tools don't give you the
 ability to create RAID5 virutal disks so if that is what you want,
 you may not want to go with vinum or gvinum.

I'm not sure what you mean here.  I haven't tried RAID-5 on gvinum,
but it's the first time I've heard that it's not supported.

 Another thing to consider is if you use software RAID and your
 application gets CPU bound, you are going to take a double
 performance hit (both disk and cpu).

One or the other.  It's a tradeoff.

[54 lines extraneous text in original removed]

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum vs. DPT smartcacheIV raid

2005-02-16 Thread Peter C. Lai
On Thu, Feb 17, 2005 at 09:44:51AM +1030, Greg 'groggy' Lehey wrote:
<snip>
 Recall that there are no real hardware RAID controllers on the
 market.  The difference is whether you have a special processor on the
 controller card or not.  To determine which is faster, you need to
 compare the hardware on the card and the hardware in the system.
</snip>

If I understand the DPT manual correctly:
My cards have a Motorola 68000-based CPU. The faster smartraid cards have
Motorola 68020-based CPUs as well as a much larger cache. My card has a max
transaction rate of 20 MHz. It sends 2 bytes down the wire per clock cycle
(SCSI DDR? LOL), so it has a max throughput of 40 MB/s.

-- 
Peter C. Lai
University of Connecticut
Dept. of Molecular and Cell Biology
Yale University School of Medicine
SenseLab | Research Assistant
http://cowbert.2y.net/





Re: vinum in 4.x poor performer?

2005-02-12 Thread Michael L. Squires

On Wed, 9 Feb 2005, Marc G. Fournier wrote:
I read that somewhere, but then every example shows 256k as being the stripe
size :(  Now, with a 5-drive RAID5 array (which I'll be moving that server
to over the next couple of weeks), 256k isn't an issue?  Or is there
something better I should set it to?

The 4.10 vinum(8) man page shows 512K in the example, but later on it says
that if cylinder groups are 32MB in size you should avoid a power of 2,
which would place all superblocks and inodes on the same subdisk, and that
an odd number (479kB is its example) should be chosen.
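For illustration only, here is the four-drive config that appears elsewhere in
these threads with nothing but the stripe size changed to a non-power-of-two
value (496k, the figure recommended elsewhere in these threads as being both
not a power of two and still a multiple of 16 kB; the drive names are
placeholders):

drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
 plex org raid5 496k
  sd length 0 drive d0
  sd length 0 drive d1
  sd length 0 drive d2
  sd length 0 drive d3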

Mike Squires


Re: vinum in 4.x poor performer?

2005-02-09 Thread Loren M. Lang
On Wed, Feb 09, 2005 at 02:32:30AM -0400, Marc G. Fournier wrote:
 
 Is there a command that I can run that provides me the syscall/sec value,
 that I could use in a script?  I know vmstat reports it, but is there an
 easier way than having to parse the output? A perl module, maybe, that
 already does it?

vmstat shouldn't be too hard to parse, try the following:

vmstat|tail -1|awk '{print $15;}'

To print out the 15th field of vmstat.  Now if you want vmstat to keep
running every five seconds or something, it's a little more complicated:

vmstat 5|grep -v 'procs\|avm'|awk '{print $15;}'
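If you want this in a script with a threshold rather than a raw stream, a
small sketch along the same lines (it assumes the same vmstat layout as above,
i.e. two disk columns so that the 'sy' column is field 15):

#!/bin/sh
# warn whenever the syscall rate goes above a threshold (default 50000/sec)
threshold=${1:-50000}
vmstat 5 | grep -v 'procs\|avm' | \
    awk -v t="$threshold" '$15 > t { print "high syscall rate:", $15 }'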

 
 Thanks ...
 
 On Wed, 9 Feb 2005, Marc G. Fournier wrote:
 
 On Tue, 8 Feb 2005, Dan Nelson wrote:
 
 Details on the array's performance, I think.  Software RAID5 will
 definitely have poor write performance (logging disks solve that
 problem but vinum doesn't do that), but should have excellent read
 rates.  From this output, however:
 
 systat -v output help:
 4 usersLoad  4.64  5.58  5.77
 
 Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
 24 9282   949 8414*  678  349 8198
 
 54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl
 
 Disks   da0   da1   da2   da3   da4 pass0 pass1
 KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00
 tps  23 2 4 3 1 0 0
 MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00
 % busy3 1 1 1 0 0 0
 
 , it looks like your disks aren't being touched at all.  You are doing
 over 9 syscalls/second, though, which is mighty high.  The 50% Sys
 doesn't look good either.  You may have a runaway process doing some
 syscall over and over.  If this is not an MPSAFE syscall (see
 /sys/kern/syscalls.master ), it will also prevent other processes from
 making non-MPSAFE syscalls, and in 4.x that's most of them.
 
 Wow, that actually pointed me in the right direction, I think ... I just
 killed an http process that was using a lot of CPU, and syscalls dropped
 down to a numeric value again ... I'm still curious as to why this only
 seems to affect my Dual-Xeon box though :(
 
 Thanks ...
 
 
 Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
 Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
 
 
 
 Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
 Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

-- 
I sense much NT in you.
NT leads to Bluescreen.
Bluescreen leads to downtime.
Downtime leads to suffering.
NT is the path to the darkside.
Powerful Unix is.

Public Key: ftp://ftp.tallye.com/pub/lorenl_pubkey.asc
Fingerprint: B3B9 D669 69C9 09EC 1BCD  835A FAF3 7A46 E4A3 280C
 


Re: vinum in 4.x poor performer?

2005-02-09 Thread Mark A. Garcia
Marc G. Fournier wrote:
Self-followup ... the server config is as follows ... did I maybe
mis-configure the array?

# Vinum configuration of neptune.hub.org, saved at Wed Feb  9 00:13:52 
2005
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
plex name vm.p0 org raid5 1024s vol vm
sd name vm.p0.s0 drive d0 plex vm.p0 len 142314496s driveoffset 265s plexoffset 0s
sd name vm.p0.s1 drive d1 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 1024s
sd name vm.p0.s2 drive d2 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 2048s
sd name vm.p0.s3 drive d3 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 3072s

based on an initial config file that looks like:
neptune# cat /root/raid5
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
 plex org raid5 512k
 sd length 0 drive d0
 sd length 0 drive d1
 sd length 0 drive d2
 sd length 0 drive d3
It's worth pointing out that your performance on the raid-5 can change
for the better if you avoid having the stripe size be a power of 2.
This is especially true if the number of disks is also a power of 2.

Cheers,
-.mag


Re: vinum in 4.x poor performer?

2005-02-09 Thread Mark A. Garcia
Olivier Nicole wrote:
All servers run RAID5 .. only one other is using vinum, the other 3 are 
using hardware RAID controllers ...
   


Come on, of course a software solution will be slower than a hardware
solution. What would you expect? :))
(Given it is same disk type/speed/controler...)
 

Usually this is the case, but it's also very dependent on the hardware
raid controller.  There are situations where a software raid (vinum in
this case) can outperform some hardware controllers under specific
circumstances, e.g. sequential reads with a very large stripe size.  An
example is an image server where the average image might be 3MB.  A
stripe size of 434kB would cause ~7 transfers of data.  A larger stripe
size of 5MB would greatly improve performance: only 1 transfer of data
would occur, even though there is a 2MB difference between the stripe and
the average file size that doesn't hold any usable data.  Vinum optimizes
the data transferred to the exact 3MB of the file, whereas some hardware
controllers would transfer the whole 5MB stripe, adding some bandwidth
latency and transfer time.  Again, it's a matter of specific cases, and
judging 'performance' based on differing conduits for data transfer can
just skirt the real issue, if there is any.
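A quick way to run that kind of arithmetic for your own numbers (average file
size and stripe size, both in kB, are the only inputs; the 3 MB / 434 kB case
from the paragraph above is used here):

# awk 'BEGIN { file_kb = 3072; stripe_kb = 434; printf "%.1f stripes touched per average file\n", file_kb / stripe_kb }'
7.1 stripes touched per average file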

Cheers,
-.mag


Re: vinum in 4.x poor performer?

2005-02-09 Thread Marc G. Fournier
On Wed, 9 Feb 2005, Mark A. Garcia wrote:
Marc G. Fournier wrote:
Self-followup ... the server config is as follows ... did I maybe
mis-configure the array?

# Vinum configuration of neptune.hub.org, saved at Wed Feb  9 00:13:52 2005
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
plex name vm.p0 org raid5 1024s vol vm
sd name vm.p0.s0 drive d0 plex vm.p0 len 142314496s driveoffset 265s plexoffset 0s
sd name vm.p0.s1 drive d1 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 1024s
sd name vm.p0.s2 drive d2 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 2048s
sd name vm.p0.s3 drive d3 plex vm.p0 len 142314496s driveoffset 265s 
plexoffset 3072s

based on an initial config file that looks like:
neptune# cat /root/raid5
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
 plex org raid5 512k
 sd length 0 drive d0
 sd length 0 drive d1
 sd length 0 drive d2
 sd length 0 drive d3
It's worth pointing out that your performance on the raid-5 can change for
the better if you avoid having the stripe size be a power of 2.  This is
especially true if the number of disks is also a power of 2.
I read that somewhere, but then every example shows 256k as being the
stripe size :(  Now, with a 5-drive RAID5 array (which I'll be moving that
server to over the next couple of weeks), 256k isn't an issue?  Or is
there something better I should set it to?


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


99% CPU usage in System (Was: Re: vinum in 4.x poor performer?)

2005-02-09 Thread Marc G. Fournier
still getting this:
# vmstat 5
 procs  memory  pagedisks faults  cpu
 r b w avmfre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy id
11 2 0 3020036 267944  505   2   1   1 680  62   0   0  515 4005 918  7 38 55
19 2 0 3004568 268672  242   0   0   0 277   0   0   3  338 2767 690  1 99  0
21 2 0 2999152 271240  135   0   0   0 306   0   6   9  363 1749 525  1 99  0
13 2 0 3001508 269692   87   0   0   0  24   0   3   3  302 1524 285  1 99  0
17 2 0 3025892 268612   98   0   1   0  66   0   5   6  312 1523 479  3 97  0
Is there a way of determining what is sucking up so much Sys time?  Stuff
like perl scripts running and such would use 'user time', no?  I've got
some high-CPU processes running, but would expect them to be shooting up
the 'user time' ...

USER PID %CPU %MEM   VSZ  RSS  TT  STAT STARTED  TIME COMMAND
setiathome 21338 16.3  0.2  7888 7408  ??  RJ9:05PM   0:11.35 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_queuerun -v 0
setiathome 21380 15.1  0.1  2988 2484  ??  RsJ   9:06PM   0:02.42 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l pgsql-sql -P10 
-p10
setiathome 21384 15.5  0.1  2988 2484  ??  RsJ   9:06PM   0:02.31 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l pgsql-docs -P10 
-p10
setiathome 21389 15.0  0.1  2720 2216  ??  RsJ   9:06PM   0:02.06 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l pgsql-hackers 
-P10 -p10
setiathome 21386 13.7  0.1  2720 2216  ??  RsJ   9:06PM   0:02.03 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l pgsql-ports 
-P10 -p10
setiathome 21387 13.2  0.1  2724 2220  ??  RsJ   9:06PM   0:01.92 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org -l 
pgsql-interfaces -P10 -p10
setiathome 21390 14.6  0.1  2724 2216  ??  RsJ   9:06PM   0:01.93 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_enqueue -o -d postgresql.org -l 
pgsql-performance -P10 -p10
setiathome 21330 12.0  0.2  8492 7852  ??  RJ9:05PM   0:15.55 /usr/bin/perl 
-wT /dev/fd/3//usr/local/www/mj/mj_wwwusr (perl5.8.5)
setiathome  7864  8.9  0.2  8912 8452  ??  RJ7:20PM  29:54.88 /usr/bin/perl 
-wT /usr/local/majordomo/bin/mj_trigger -t hourly
Is there some way of finding out where all the Sys time is being used?
Something more fine-grained than what vmstat/top shows?

On Wed, 9 Feb 2005, Loren M. Lang wrote:
On Wed, Feb 09, 2005 at 02:32:30AM -0400, Marc G. Fournier wrote:
Is there a command that I can run that provide me the syscall/sec value,
that I could use in a script?  I know vmstat reports it, but is there an
easier way the having to parse the output? a perl module maybe, that
already does it?
vmstat shouldn't be too hard to parse, try the following:
vmstat|tail -1|awk '{print $15;}'
To print out the 15th field of vmstat.  Now if you want vmstat to keep
running every five seconds or something, it's a little more complicated:
vmstat 5|grep -v 'procs\|avm'|awk '{print $15;}'
Thanks ...
On Wed, 9 Feb 2005, Marc G. Fournier wrote:
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
systat -v output help:
   4 usersLoad  4.64  5.58  5.77

Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
   24 9282   949 8414*  678  349 8198

54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl

Disks   da0   da1   da2   da3   da4 pass0 pass1
KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00
tps  23 2 4 3 1 0 0
MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00
% busy3 1 1 1 0 0 0
, it looks like your disks aren't being touched at all.  You are doing
over 9 syscalls/second, though, which is mighty high.  The 50% Sys
doesn't look good either.  You may have a runaway process doing some
syscall over and over.  If this is not an MPSAFE syscall (see
/sys/kern/syscalls.master ), it will also prevent other processes from
making non-MPSAFE syscalls, and in 4.x that's most of them.
Wow, that actually pointed me in the right direction, I think ... I just
killed an http process that was using a lot of CPU, and syscalls dropped down
to a numeric value again ... I'm still curious as to why this only seems to
affect my Dual-Xeon box though :(
Thanks ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: 

Re: 99% CPU usage in System (Was: Re: vinum in 4.x poor performer?)

2005-02-09 Thread Chad Leigh -- Shire . Net LLC
On Feb 9, 2005, at 6:34 PM, Marc G. Fournier wrote:
Most odd, there definitely has to be a problem with the Dual-Xeon
system ... doing the same vmstat on my other vinum-based system,
which is running more but on a Dual-PIII, shows major idle time:

# vmstat 5
 procs  memory  pagedisks faults  
cpu
 r b w avmfre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs 
us sy id
20 1 0 4088636 219556 1664   1   2   1 3058 217   0   0  856 7937 2186 
51 15 34
20 1 0 4115372 224220  472   0   0   0 2066   0   0  35  496 2915 745  
7  7 86
10 1 0 4125252 221788  916   0   0   0 2513   0   2  71  798 4821 1538 
 6 11 83
 9 1 0   36508 228452  534   0   0   2 2187   0   0  46  554 3384 1027 
 3  8 89
11 1 0   27672 218828  623   0   6   0 2337   0   0  61  583 2607 679  
3  9 88
16 1 05776 220540  989   0   0   0 2393   0   9  32  514 3247 1115 
 3  8 90

Which leads me further to believe this is a Dual-Xeon problem, and 
much further away from believing it has anything to do with software 
RAID :(
I only use AMD, so I cannot provide specifics, but look in the BIOS at 
boot time and see if there is anything strange looking in the settings.

Chad

On Wed, 9 Feb 2005, Marc G. Fournier wrote:
still getting this:
# vmstat 5
procs  memory  pagedisks faults  
cpu
r b w avmfre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs 
us sy id
11 2 0 3020036 267944  505   2   1   1 680  62   0   0  515 4005 918  
7 38 55
19 2 0 3004568 268672  242   0   0   0 277   0   0   3  338 2767 690  
1 99  0
21 2 0 2999152 271240  135   0   0   0 306   0   6   9  363 1749 525  
1 99  0
13 2 0 3001508 269692   87   0   0   0  24   0   3   3  302 1524 285  
1 99  0
17 2 0 3025892 268612   98   0   1   0  66   0   5   6  312 1523 479  
3 97  0

Is there a way of determining what is sucking up so much Sys time?  
stuff like pperl scripts running and such would use 'user time', no?  
I've got some high CPU processes running, but would expect them to be 
shooting up the 'user time' ...

USER PID %CPU %MEM   VSZ  RSS  TT  STAT STARTED  TIME 
COMMAND
setiathome 21338 16.3  0.2  7888 7408  ??  RJ9:05PM   0:11.35 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_queuerun -v 0
setiathome 21380 15.1  0.1  2988 2484  ??  RsJ   9:06PM   0:02.42 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d 
postgresql.org -l pgsql-sql -P10 -p10
setiathome 21384 15.5  0.1  2988 2484  ??  RsJ   9:06PM   0:02.31 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d 
postgresql.org -l pgsql-docs -P10 -p10
setiathome 21389 15.0  0.1  2720 2216  ??  RsJ   9:06PM   0:02.06 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d 
postgresql.org -l pgsql-hackers -P10 -p10
setiathome 21386 13.7  0.1  2720 2216  ??  RsJ   9:06PM   0:02.03 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d 
postgresql.org -l pgsql-ports -P10 -p10
setiathome 21387 13.2  0.1  2724 2220  ??  RsJ   9:06PM   0:01.92 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d 
postgresql.org -l pgsql-interfaces -P10 -p10
setiathome 21390 14.6  0.1  2724 2216  ??  RsJ   9:06PM   0:01.93 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -o -d 
postgresql.org -l pgsql-performance -P10 -p10
setiathome 21330 12.0  0.2  8492 7852  ??  RJ9:05PM   0:15.55 
/usr/bin/perl -wT /dev/fd/3//usr/local/www/mj/mj_wwwusr (perl5.8.5)
setiathome  7864  8.9  0.2  8912 8452  ??  RJ7:20PM  29:54.88 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_trigger -t hourly

Is there some way of finding out where all the Sys Time is being 
used? Something more fine grained them what vmstat/top shows?

On Wed, 9 Feb 2005, Loren M. Lang wrote:
On Wed, Feb 09, 2005 at 02:32:30AM -0400, Marc G. Fournier wrote:
Is there a command that I can run that provide me the syscall/sec 
value,
that I could use in a script?  I know vmstat reports it, but is 
there an
easier way the having to parse the output? a perl module maybe, that
already does it?
vmstat shouldn't be too hard to parse, try the following:
vmstat|tail -1|awk '{print $15;}'
To print out the 15th field of vmstat.  Now if you want vmstat to 
keep
running every five seconds or something, it's a little more 
complicated:
vmstat 5|grep -v 'procs\|avm'|awk '{print $15;}'
Thanks ...
On Wed, 9 Feb 2005, Marc G. Fournier wrote:
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
systat -v output help:
   4 usersLoad  4.64  5.58  5.77
Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
   24 9282   949 8414*  678  349 8198
54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl
Disks   da0   da1   da2   da3   da4 pass0 pass1
KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00
tps  23 2 4 3 1 0 0
MB/s   0.12  0.01 

Re: 99% CPU usage in System (Was: Re: vinum in 4.x poor performer?)

2005-02-09 Thread Marc G. Fournier
On Wed, 9 Feb 2005, Chad Leigh -- Shire.Net LLC wrote:
On Feb 9, 2005, at 6:34 PM, Marc G. Fournier wrote:
Most odd, there definitely has to be a problem with the Dual-Xeon system
... doing the same vmstat on my other vinum-based system, which is running
more but on a Dual-PIII, shows major idle time:

# vmstat 5
 procs  memory  pagedisks faults  cpu
 r b w avmfre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy 
id
20 1 0 4088636 219556 1664   1   2   1 3058 217   0   0  856 7937 2186 51 
15 34
20 1 0 4115372 224220  472   0   0   0 2066   0   0  35  496 2915 745  7  7 
86
10 1 0 4125252 221788  916   0   0   0 2513   0   2  71  798 4821 1538  6 
11 83
 9 1 0   36508 228452  534   0   0   2 2187   0   0  46  554 3384 1027  3 
8 89
11 1 0   27672 218828  623   0   6   0 2337   0   0  61  583 2607 679  3  9 
88
16 1 05776 220540  989   0   0   0 2393   0   9  32  514 3247 1115  3 
8 90

Which leads me further to believe this is a Dual-Xeon problem, and much 
further away from believing it has anything to do with software RAID :(
I only use AMD, so I cannot provide specifics, but look in the BIOS at boot 
time and see if there is anything strange looking in the settings.
Unfortunately, I'm dealing with remote servers, so without something
specific to get a remote tech to check, BIOS-related stuff will have to
wait until I can visit the servers personally :(

Chad

On Wed, 9 Feb 2005, Marc G. Fournier wrote:
still getting this:
# vmstat 5
procs  memory  pagedisks faults  cpu
r b w avmfre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy 
id
11 2 0 3020036 267944  505   2   1   1 680  62   0   0  515 4005 918  7 38 
55
19 2 0 3004568 268672  242   0   0   0 277   0   0   3  338 2767 690  1 99 
0
21 2 0 2999152 271240  135   0   0   0 306   0   6   9  363 1749 525  1 99 
0
13 2 0 3001508 269692   87   0   0   0  24   0   3   3  302 1524 285  1 99 
0
17 2 0 3025892 268612   98   0   1   0  66   0   5   6  312 1523 479  3 97 
0

Is there a way of determining what is sucking up so much Sys time?  stuff 
like pperl scripts running and such would use 'user time', no?  I've got 
some high CPU processes running, but would expect them to be shooting up 
the 'user time' ...

USER PID %CPU %MEM   VSZ  RSS  TT  STAT STARTED  TIME COMMAND
setiathome 21338 16.3  0.2  7888 7408  ??  RJ9:05PM   0:11.35 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_queuerun -v 0
setiathome 21380 15.1  0.1  2988 2484  ??  RsJ   9:06PM   0:02.42 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org 
-l pgsql-sql -P10 -p10
setiathome 21384 15.5  0.1  2988 2484  ??  RsJ   9:06PM   0:02.31 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org 
-l pgsql-docs -P10 -p10
setiathome 21389 15.0  0.1  2720 2216  ??  RsJ   9:06PM   0:02.06 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org 
-l pgsql-hackers -P10 -p10
setiathome 21386 13.7  0.1  2720 2216  ??  RsJ   9:06PM   0:02.03 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org 
-l pgsql-ports -P10 -p10
setiathome 21387 13.2  0.1  2724 2220  ??  RsJ   9:06PM   0:01.92 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -r -d postgresql.org 
-l pgsql-interfaces -P10 -p10
setiathome 21390 14.6  0.1  2724 2216  ??  RsJ   9:06PM   0:01.93 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_enqueue -o -d postgresql.org 
-l pgsql-performance -P10 -p10
setiathome 21330 12.0  0.2  8492 7852  ??  RJ9:05PM   0:15.55 
/usr/bin/perl -wT /dev/fd/3//usr/local/www/mj/mj_wwwusr (perl5.8.5)
setiathome  7864  8.9  0.2  8912 8452  ??  RJ7:20PM  29:54.88 
/usr/bin/perl -wT /usr/local/majordomo/bin/mj_trigger -t hourly

Is there some way of finding out where all the Sys Time is being used? 
Something more fine grained them what vmstat/top shows?

On Wed, 9 Feb 2005, Loren M. Lang wrote:
On Wed, Feb 09, 2005 at 02:32:30AM -0400, Marc G. Fournier wrote:
Is there a command that I can run that provide me the syscall/sec value,
that I could use in a script?  I know vmstat reports it, but is there an
easier way the having to parse the output? a perl module maybe, that
already does it?
vmstat shouldn't be too hard to parse, try the following:
vmstat|tail -1|awk '{print $15;}'
To print out the 15th field of vmstat.  Now if you want vmstat to keep
running every five seconds or something, it's a little more complicated:
vmstat 5|grep -v 'procs\|avm'|awk '{print $15;}'
Thanks ...
On Wed, 9 Feb 2005, Marc G. Fournier wrote:
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
systat -v output help:
   4 usersLoad  4.64  5.58  5.77
Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
   24 9282   949 

Re: vinum in 4.x poor performer?

2005-02-08 Thread Olivier Nicole
 and it performs worse than any of 
 my other servers, and I have less running on it than the other servers ...

What are you other servers? What RAID system/level?

Of course a software RAID5 is slower than a plain file system on a
disk.

Olivier


Re: vinum in 4.x poor performer?

2005-02-08 Thread Greg 'groggy' Lehey
On Tuesday,  8 February 2005 at 23:21:54 -0400, Marc G. Fournier wrote:

 I have a Dual-Xeon server with 4G of RAM, with its primary file system
 consisting of 4x73G SCSI drives running RAID5 using vinum ... the
 operating system is currently FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55
 ADT 2004 ... swap usage is 0% (6149) ... and it performs worse than any of
 my other servers, and I have less running on it than the other servers ...

 I also have HTT disabled on this server ... and softupdates enabled on the
 file system ...

 That said ... am I hitting limits of software raid or is there something I
 should be looking at as far as performance is concerned?  Maybe something
 I have misconfigured?

Based on what you've said, it's impossible to tell.  Details would be
handy.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum in 4.x poor performer?

2005-02-08 Thread Marc G. Fournier
On Wed, 9 Feb 2005, Olivier Nicole wrote:
and it performs worse than any of
my other servers, and I have less running on it than the other servers ...
What are you other servers? What RAID system/level?
All servers run RAID5 .. only one other is using vinum, the other 3 are 
using hardware RAID controllers ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: vinum in 4.x poor performer?

2005-02-08 Thread Olivier Nicole
 All servers run RAID5 .. only one other is using vinum, the other 3 are 
 using hardware RAID controllers ...


Come on, of course a software solution will be slower than a hardware
solution. What would you expect? :))

(Given it is same disk type/speed/controler...)

Olivier


Re: vinum in 4.x poor performer?

2005-02-08 Thread Marc G. Fournier
Self-followup ... the server config is as follows ... did I maybe
mis-configure the array?

# Vinum configuration of neptune.hub.org, saved at Wed Feb  9 00:13:52 2005
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
plex name vm.p0 org raid5 1024s vol vm 
sd name vm.p0.s0 drive d0 plex vm.p0 len 142314496s driveoffset 265s plexoffset 0s
sd name vm.p0.s1 drive d1 plex vm.p0 len 142314496s driveoffset 265s plexoffset 1024s
sd name vm.p0.s2 drive d2 plex vm.p0 len 142314496s driveoffset 265s plexoffset 2048s
sd name vm.p0.s3 drive d3 plex vm.p0 len 142314496s driveoffset 265s plexoffset 3072s

based on an initial config file that looks like:
neptune# cat /root/raid5
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
 plex org raid5 512k
 sd length 0 drive d0
 sd length 0 drive d1
 sd length 0 drive d2
 sd length 0 drive d3
On Tue, 8 Feb 2005, Marc G. Fournier wrote:
I have a Dual-Xeon server with 4G of RAM, with its primary file system 
consisting of 4x73G SCSI drives running RAID5 using vinum ... the operating 
system is currently FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55 ADT 2004 ... 
swap usage is 0% (6149) ... and it performs worse than any of my other
servers, and I have less running on it than the other servers ...

I also have HTT disabled on this server ... and softupdates enabled on the 
file system ...

That said ... am I hitting limits of software raid or is there something I 
should be looking at as far as performance is concerned?  Maybe something I 
have misconfigured?



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
On Tue, 8 Feb 2005, Marc G. Fournier wrote:
On Wed, 9 Feb 2005, Olivier Nicole wrote:
and it performs worse than any of
my other servers, and I have less running on it than the other servers ...
What are you other servers? What RAID system/level?
All servers run RAID5 .. only one other is using vinum, the other 3 are using 
hardware RAID controllers ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: vinum in 4.x poor performer?

2005-02-08 Thread Marc G. Fournier
On Wed, 9 Feb 2005, Greg 'groggy' Lehey wrote:
On Tuesday,  8 February 2005 at 23:21:54 -0400, Marc G. Fournier wrote:
I have a Dual-Xeon server with 4G of RAM, with its primary file system
consisting of 4x73G SCSI drives running RAID5 using vinum ... the
operating system is currently FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55
ADT 2004 ... swap usage is 0% (6149) ... and it performs worse than any of
my other servers, and I have less running on it than the other servers ...
I also have HTT disabled on this server ... and softupdates enabled on the
file system ...
That said ... am I hitting limits of software raid or is there something I
should be looking at as far as performance is concerned?  Maybe something
I have misconfigured?
Based on what you've said, it's impossible to tell.  Details would be
handy.
Like?  I'm not sure what would be useful for this one ... I just sent in 
my current drive config ... something else useful?

systat -v output help:
4 usersLoad  4.64  5.58  5.77  Feb  9 00:24
Mem:KBREALVIRTUAL VN PAGER  SWAP PAGER
Tot   Share  TotShareFree in  out in  out
Act 1904768  137288  3091620   381128  159276 count
All 3850780  221996  1078752   605460 pages
 7921 zfod   Interrupts
Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt242 cow 681 total
24 9282   949 8414*  678  349 8198 566916 wireahd0 irq16
  2527420 act  67 ahd1 irq17
54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl   608208 inact   157 em0 irq18
|||||||||| 146620 cache   200 clk irq0
===12656 free257 rtc irq8
  daefr
Namei Name-cacheDir-cache7363 prcfr
Calls hits% hits% react
4610646005  100   130 pdwake
  pdpgs
Disks   da0   da1   da2   da3   da4 pass0 pass1   intrn
KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00204096 buf
tps  23 2 4 3 1 0 0  1610 dirtybuf
MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00512000 desiredvnodes
% busy3 1 1 1 0 0 0397436 numvnodes
   166179 freevnodes
Drives da1 - da4 are used in the vinum array; da0 is just the system drive
...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Dual-Xeon vs Dual-PIII (Was: Re: vinum in 4.x poor performer?)

2005-02-08 Thread Marc G. Fournier
The more I'm looking at this, the less I can believe my 'issue' is with 
vinum ... based on one of my other machines, it just doesn't *feel* right 


I have two servers that are fairly similar in config ... both running 
vinum RAID5 over 4 disks ... one is the Dual-Xeon that I'm finding 
problematic with 73G Seagate drives, and the other is the Dual-PIII with 
36G Seagate drives ...

The reason that I'm finding it hard to believe that my problem is with 
vinum is that the Dual-PIII is twice as loaded as the Dual-Xeon, but 
hardly seems to break a sweat ...

In fact, out of all my servers (3xDual-PIII, 1xDual-Athlon and 
1xDual-Xeon), only the Dual-Xeon doesn't seem to be able to perform ...

Now, out of all of the servers, only the Dual-Xeon, of course, supports 
HTT, which I *believe* is disabled, but from dmesg:

Copyright (c) 1992-2004 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55 ADT 2004
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/kernel
Timecounter i8254  frequency 1193182 Hz
CPU: Intel(R) Xeon(TM) CPU 2.40GHz (2392.95-MHz 686-class CPU)
  Origin = GenuineIntel  Id = 0xf27  Stepping = 7
  
Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Hyperthreading: 2 logical CPUs
real memory  = 4026466304 (3932096K bytes)
avail memory = 3922362368 (3830432K bytes)
Programming 24 pins in IOAPIC #0
IOAPIC #0 intpin 2 - irq 0
Programming 24 pins in IOAPIC #1
Programming 24 pins in IOAPIC #2
FreeBSD/SMP: Multiprocessor motherboard: 4 CPUs
 cpu0 (BSP): apic id:  0, version: 0x00050014, at 0xfee0
 cpu1 (AP):  apic id:  1, version: 0x00050014, at 0xfee0
 cpu2 (AP):  apic id:  6, version: 0x00050014, at 0xfee0
 cpu3 (AP):  apic id:  7, version: 0x00050014, at 0xfee0
 io0 (APIC): apic id:  8, version: 0x00178020, at 0xfec0
 io1 (APIC): apic id:  9, version: 0x00178020, at 0xfec81000
 io2 (APIC): apic id: 10, version: 0x00178020, at 0xfec81400
Preloaded elf kernel kernel at 0x80339000.
Warning: Pentium 4 CPU: PSE disabled
Pentium Pro MTRR support enabled
Using $PIR table, 19 entries at 0x800f2f30
It's showing 4 CPUs ... but:
machdep.hlt_logical_cpus: 1
which, from /usr/src/UPDATING indicates that the HTT cpus aren't enabled:
20031022:
Support for HyperThread logical CPUs has now been enabled by
default.  As a result, the HTT kernel option no longer exists.
Instead, the logical CPUs are always started so that they can
handle interrupts.  However, the extra logical CPUs are prevented
from executing user processes by default.  To enable the logical
CPUs, change the value of the machdep.hlt_logical_cpus from 1 to
0.  This value can also be set from the loader as a tunable of
the same name.
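For what it's worth, that value can be checked and flipped at run time with
sysctl, or set at boot via the loader, along these lines (a sketch; 1 keeps
the logical CPUs halted, 0 lets them run user processes):

# sysctl machdep.hlt_logical_cpus
machdep.hlt_logical_cpus: 1
# sysctl machdep.hlt_logical_cpus=0                          # until the next reboot
# echo 'machdep.hlt_logical_cpus="0"' >> /boot/loader.conf   # as a loader tunable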
Finally ... top shows:
last pid: 73871;  load averages:  9.76,  9.23,  8.16
   up 9+02:02:26  00:57:06
422 processes: 8 running, 413 sleeping, 1 zombie
CPU states: 19.0% user,  0.0% nice, 81.0% system,  0.0% interrupt,  0.0% idle
Mem: 2445M Active, 497M Inact, 595M Wired, 160M Cache, 199M Buf, 75M Free
Swap: 2048M Total, 6388K Used, 2041M Free
  PID USERNAME   PRI NICE  SIZERES STATE  C   TIME   WCPUCPU COMMAND
28298 www 64   0 28136K 12404K CPU2   2  80:59 24.51% 24.51% httpd
69232 excalibur   64   0 80128K 76624K RUN2   2:55 16.50% 16.50% lisp.run
72879 www 64   0 22664K  9444K RUN0   0:12 12.94% 12.94% httpd
14154 www 64   0 36992K 22880K RUN0  55:07 12.70% 12.70% httpd
69758 www 63   0 15400K  8756K RUN0   0:18 11.87% 11.87% httpd
 7553 nobody   2   0   158M   131M poll   0  33:19  8.98%  8.98% nsd
70752 setiathome   2   0 14644K 14084K select 2   0:47  8.98%  8.98% perl
71191 setiathome   2   0 13220K 12804K select 0   0:29  8.40%  8.40% perl
70903 setiathome   2   0 14224K 13676K select 0   0:42  7.37%  7.37% perl
33932 setiathome   2   0 21712K 21144K select 0   2:23  4.59%  4.59% perl
In this case ... 0% idle, 81% in system?
As a comparison the Dual-PIII/vinum server looks like:
last pid: 90614;  load averages:  3.64,  2.41,  2.69
  up 3+08:45:17  
00:59:27
955 processes: 12 running, 942 sleeping, 1 zombie
CPU states: 63.9% user,  0.0% nice, 32.6% system,  3.5% interrupt,  0.0% idle
Mem: 2432M Active, 687M Inact, 563M Wired, 147M Cache, 199M Buf, 5700K Free
Swap: 8192M Total, 12M Used, 8180M Free, 12K In
  PID USERNAME   PRI NICE  SIZERES STATE  C   TIME   WCPUCPU COMMAND
90506 scrappy 56   0 19384K 14428K RUN0   0:06 22.98% 16.41% postgres
90579 root57   0  3028K  2156K CPU1   1   0:04 26.23% 14.45% top
90554 pgsql   -6   0 12784K  7408K RUN1   0:04 

Re: vinum in 4.x poor performer?

2005-02-08 Thread Dan Nelson
In the last episode (Feb 09), Marc G. Fournier said:
 On Wed, 9 Feb 2005, Greg 'groggy' Lehey wrote:
 On Tuesday,  8 February 2005 at 23:21:54 -0400, Marc G. Fournier wrote:
 I have a Dual-Xeon server with 4G of RAM, with its primary file
 system consisting of 4x73G SCSI drives running RAID5 using vinum
 ... the operating system is currently FreeBSD 4.10-STABLE #1: Fri
 Oct 22 15:06:55 ADT 2004 ... swap usage is 0% (6149) ... and it
 performs worse then any of my other servers, and I have less
 running on it then the other servers ...
 
 I also have HTT disabled on this server ... and softupdates enabled
 on the file system ...
 
 That said ... am I hitting limits of software raid or is there
 something I should be looking at as far as performance is
 concerned?  Maybe something I have misconfigured?
 
 Based on what you've said, it's impossible to tell.  Details would
 be handy.
 
 Like?  I'm not sure what would be useful for this one ... I just sent
 in my current drive config ... something else useful?

Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
 
 systat -v output help:
 4 usersLoad  4.64  5.58  5.77 

 Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
 24 9282   949 8414*  678  349 8198

 54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl  

 Disks   da0   da1   da2   da3   da4 pass0 pass1   
 KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00   
 tps  23 2 4 3 1 0 0   
 MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00   
 % busy3 1 1 1 0 0 0   

, it looks like your disks aren't being touched at all.  You are doing
over 9 syscalls/second, though, which is mighty high.  The 50% Sys
doesn't look good either.  You may have a runaway process doing some
syscall over and over.  If this is not an MPSAFE syscall (see
/sys/kern/syscalls.master ), it will also prevent other processes from
making non-MPSAFE syscalls, and in 4.x that's most of them.
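One way to see exactly which syscall a suspect process is spinning on is the
base system's ktrace/kdump (a sketch; 28298 is the busy httpd PID from the top
output elsewhere in this thread, so substitute whatever top points at):

# ktrace -p 28298              # start tracing the suspect process
# sleep 10; ktrace -C          # let it run a few seconds, then stop all tracing
# kdump | awk '$3 == "CALL" { sub(/\(.*/, "", $4); print $4 }' | sort | uniq -c | sort -rn | head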

-- 
Dan Nelson
[EMAIL PROTECTED]


Re: vinum in 4.x poor performer?

2005-02-08 Thread Marc G. Fournier
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
systat -v output help:
4 usersLoad  4.64  5.58  5.77

Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
24 9282   949 8414*  678  349 8198

54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl

Disks   da0   da1   da2   da3   da4 pass0 pass1
KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00
tps  23 2 4 3 1 0 0
MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00
% busy3 1 1 1 0 0 0
, it looks like your disks aren't being touched at all.  You are doing
over 9 syscalls/second, though, which is mighty high.  The 50% Sys
doesn't look good either.  You may have a runaway process doing some
syscall over and over.  If this is not an MPSAFE syscall (see
/sys/kern/syscalls.master ), it will also prevent other processes from
making non-MPSAFE syscalls, and in 4.x that's most of them.
Wow, that actually pointed me in the right direction, I think ... I just
killed an http process that was using a lot of CPU, and syscalls dropped
down to a numeric value again ... I'm still curious as to why this only
seems to affect my Dual-Xeon box though :(

Thanks ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: vinum in 4.x poor performer?

2005-02-08 Thread Marc G. Fournier
Is there a command that I can run that provides me the syscall/sec value,
that I could use in a script?  I know vmstat reports it, but is there an
easier way than having to parse the output? A perl module, maybe, that
already does it?

Thanks ...
On Wed, 9 Feb 2005, Marc G. Fournier wrote:
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think.  Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates.  From this output, however:
systat -v output help:
4 usersLoad  4.64  5.58  5.77

Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Flt
24 9282   949 8414*  678  349 8198

54.6%Sys   0.2%Intr 45.2%User  0.0%Nice  0.0%Idl

Disks   da0   da1   da2   da3   da4 pass0 pass1
KB/t   5.32  9.50 12.52 16.00  9.00  0.00  0.00
tps  23 2 4 3 1 0 0
MB/s   0.12  0.01  0.05  0.04  0.01  0.00  0.00
% busy3 1 1 1 0 0 0
, it looks like your disks aren't being touched at all.  You are doing
over 9 syscalls/second, though, which is mighty high.  The 50% Sys
doesn't look good either.  You may have a runaway process doing some
syscall over and over.  If this is not an MPSAFE syscall (see
/sys/kern/syscalls.master ), it will also prevent other processes from
making non-MPSAFE syscalls, and in 4.x that's most of them.
Wow, that actually pointed me in the right direction, I think ... I just
killed an http process that was using a lot of CPU, and syscalls dropped down
to a numeric value again ... I'm still curious as to why this only seems to
affect my Dual-Xeon box though :(

Thanks ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: Vinum bootable RAID-1 setup help

2004-12-25 Thread Matthias F. Brandstetter
-- quoting Faisal Ali --
 I really tried my best to follow the FreeBSD handbook documentation to
 set up a bootable RAID-1 volume, but I just can't seem to understand Section
 17.9.2. I am working with the 5.3 i386 Release.

Since I had problems with vinum under 5.3 as well, I successfully tried
gmirror (man gmirror) and have had no problems with it so far. You can create
a bootable RAID1 array with it ...
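For the archives, the usual gmirror procedure looks roughly like this (a
sketch under the assumption of a single IDE disk pair ad0/ad1 on FreeBSD 5.3
or later; adjust device names and fstab entries to your layout):

# kldload geom_mirror
# gmirror label -v -b round-robin gm0 /dev/ad0
# echo 'geom_mirror_load="YES"' >> /boot/loader.conf
  (point /etc/fstab at /dev/mirror/gm0s1a and friends, then reboot)
# gmirror insert gm0 /dev/ad1      # attach the second disk and let it sync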

HTH! Greetings, Matthias

-- 
That's weird.  It's like something out of that twilighty show about
that zone.

  -- Homer Simpson
 Treehouse of Horror VI


Re: vinum raid5: newfs throws an error

2004-12-06 Thread Markus Hoenicka
Greg 'groggy' Lehey writes:
  There was once an error in the stripe size calculations that meant
  that there were holes in the plexes.  Maybe it's still there (old
  Vinum is not being maintained).  But you should have seen that in the
  console messages at create time.
  
   Vinum reports the disk sizes as 17500MB (da1) and 17359MB (da2,
   da3). The raid5 volume and plex have a size of 33GB.
  
  This looks like the kind of scenario where that could happen.  Try
  this:
  
  1.  First, find a better stripe size.  It shouldn't be a power of 2,
  but it should be a multiple of 16 kB.  I'd recommend 496 kB.  This
  won't fix the problem, but it's something you should do anyway
  
  2.  Calculate the length of an exact number of stripes, and create the
  subdisks in that length.  Try again and see what happens.
  
  3.  Use gvinum instead of vinum and try both ways.
  

Ok, I decreased the stripe size to 496 kB, regardless of whether it has
anything to do with my problem. Next I set the subdisk length to
17359m on all disks, and things started to work ok. No more newfs
errors here.
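For anyone repeating this, the length from step 2 of Greg's recipe (a whole
number of stripes) can be computed directly; the figures here are the ones
from this thread, 17359 MB for the smallest disk and a 496 kB stripe, with MB
taken as 2^20 bytes:

# awk 'BEGIN { s = 496 * 2; printf "sd length %ds\n", int(17359 * 2048 / s) * s }'
sd length 35550304s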

Before doing this I also had a brief encounter with gvinum. There is
no manpage in 5.3BETA7, so I assumed it groks the same config files as
vinum. However, this did not do me any good as it simply rebooted the
box. I guess gvinum works better in RELEASE.

Thanks a lot for your help.

Markus

-- 
Markus Hoenicka
[EMAIL PROTECTED]
(Spam-protected email: replace the quadrupeds with mhoenicka)
http://www.mhoenicka.de



Re: vinum raid5: newfs throws an error

2004-12-06 Thread Greg 'groggy' Lehey
On Monday,  6 December 2004 at 23:44:59 +0100, Markus Hoenicka wrote:
 Greg 'groggy' Lehey writes:
 There was once an error in the stripe size calculations that meant
 that there were holes in the plexes.  Maybe it's still there (old
 Vinum is not being maintained).  But you should have seen that in the
 console messages at create time.

 Vinum reports the disk sizes as 17500MB (da1) and 17359MB (da2,
 da3). The raid5 volume and plex have a size of 33GB.

 This looks like the kind of scenario where that could happen.  Try
 this:

 1.  First, find a better stripe size.  It shouldn't be a power of 2,
 but it should be a multiple of 16 kB.  I'd recommend 496 kB.  This
 won't fix the problem, but it's something you should do anyway

 2.  Calculate the length of an exact number of stripes, and create the
 subdisks in that length.  Try again and see what happens.

 3.  Use gvinum instead of vinum and try both ways.


 Ok, I decreased the stripe size to 496, regardless of whether it has
 anything to do with my problem. Next I set the subdisk length to
 17359m on all disks, and things started to work ok. No more newfs
 errors here.

OK, looks like this was the hole in the plex issue.  I thought that
was gone.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum limits disk size to 255MB

2004-12-05 Thread Greg 'groggy' Lehey
On Monday,  6 December 2004 at  0:28:01 +0100, Markus Hoenicka wrote:
 Hi all,

 I'm trying to set up vinum on a freshly installed FreeBSD 5.3-BETA7
 box. The system is installed on da0. I want to use three 18G SCSI
 drives to create a vinum volume.

 For some reason vinum believes the disks hold a mere 255MB. This is
 what vinum sets the subdisk length to if I specify 0m as the length
 (meaning use all available space, according to the manual). If I
 specify any larger size manually, vinum complains that there is No
 space left on device, which strikes me as odd. I *believe* that the fdisk
 and bsdlabel outputs reprinted below show that the disks are indeed
 18G. According to my math, the appropriate size to specify in the
 vinum configuration would be 35566215s, that is, the number of sectors
 of the smaller disks minus 265.

 I see only one minor problem: The disks are not identical, with
 da1 having a slightly larger capacity than da2 and da3.

 Can anyone throw me a ring here?

You don't say whether you're using vinum or gvinum.  I've never seen
this problem before, but if you're getting incorrect subdisk sizes,
try specifying them explicitly:

 sd length 35840952s drive ibma 

I wonder whether the problem is related to specifying the size as 0m
instead of 0.  It shouldn't be.
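
As a concrete sketch (the sector count here is the one Markus calculated
above for the smaller disks, so treat it as an assumption rather than a
prescription), that would mean something like:

 sd length 35566215s drive ibma
 sd length 35566215s drive ibmb
 sd length 35566215s drive ibmc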

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum limits disk size to 255MB

2004-12-05 Thread Markus Hoenicka
Greg 'groggy' Lehey writes:
  You don't say whether you're using vinum or gvinum.  I've never seen
  this problem before, but if you're getting incorrect subdisk sizes,
  try specifying them explicitly:
  
   sd length 35840952s drive ibma 
  
  I wonder whether the problem is related to specifying the size as 0m
  instead of 0.  It shouldn't be.
  

Actually I've never heard of gvinum, so I'm pretty sure we're looking
at vinum here.

Looks like this is an odd pilot error. If I label disks from within
sysinstall and on the command line, one tool apparently doesn't know
what the other is doing. This must be a gross misunderstanding on my
side of how these tools work. Using bsdlabel to set the partition
sizes apparently is not sufficient. To make a long story short, I went
back to sysinstall, created a partition using the full disk, then went
to bsdlabel -e to turn the partition into type vinum. Now I get the
full capacity.
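
The resulting label entry then looks roughly like this (the size and
offset values below are only illustrative, not the real ones; the point
is that the e partition now carries the fstype vinum):

 8 partitions:
 #        size   offset    fstype   [fsize bsize bps/cpg]
   c: 35566480        0    unused        0     0   # raw part, don't edit
   e: 35566480        0     vinum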

I still can't make a raid5 out of these drives. I'm still fiddling and
will get back to the list if I can't figure this out myself.

Thanks anyway for your prompt answer.

Markus


-- 
Markus Hoenicka
[EMAIL PROTECTED]
(Spam-protected email: replace the quadrupeds with mhoenicka)
http://www.mhoenicka.de

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum raid5: newfs throws an error

2004-12-05 Thread Greg 'groggy' Lehey
On Monday,  6 December 2004 at  3:05:31 +0100, Markus Hoenicka wrote:
 Hi all,

 now that I can use the full capacity of my disks, I'm stuck again. I'm
 trying to set up a raid5 from three SCSI disks (I know that a serious
 raid5 should use five disks or more, but I have to make do with three at
 the moment). The configuration is as follows:

 drive ibma device /dev/da1s1e
 drive ibmb device /dev/da2s1e
 drive ibmc device /dev/da3s1e
 volume raid5 setupstate
   plex org raid5 512k
 sd length 0m drive ibma
 sd length 0m drive ibmb
 sd length 0m drive ibmc

 This works ok. Then I run vinum init to initialize the drives. Trying
 to create a filesystem on this construct results in the error message:

 newfs: wtfs: 65536 bytes at sector 71130688: Input/output error

 Is that trying to tell me that my calculation of the group size is
 incorrect? Does it have anything to do with the fact that the three
 disks have slightly different capacities?

There was once an error in the stripe size calculations that meant
that there were holes in the plexes.  Maybe it's still there (old
Vinum is not being maintained).  But you should have seen that in the
console messages at create time.

 Vinum reports the disk sizes as 17500MB (da1) and 17359MB (da2,
 da3). The raid5 volume and plex have a size of 33GB.

This looks like the kind of scenario where that could happen.  Try
this:

1.  First, find a better stripe size.  It shouldn't be a power of 2,
but it should be a multiple of 16 kB.  I'd recommend 496 kB.  This
won't fix the problem, but it's something you should do anyway.

2.  Calculate the length of an exact number of stripes, and create the
subdisks in that length (a worked example of this calculation follows
below).  Try again and see what happens.

3.  Use gvinum instead of vinum and try both ways.
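
A worked example of the calculation in step 2 (a sketch only; it assumes
512-byte sectors, the 496 kB stripe from step 1, and the smaller-disk
sector count quoted earlier in this thread):

 STRIPE=992                               # 496 kB / 512 bytes per sector
 DISK=35566215                            # usable sectors on the smallest disk
 STRIPES=$((DISK / STRIPE))               # whole stripes that fit
 echo "sd length $((STRIPES * STRIPE))s"  # use this length for each subdisk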

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum problems

2004-11-21 Thread
Chris Smith wrote:
Hi,
I've just built a machine with a vinum root successfully.  All vinum
sets show that they are up and working.  There are two ATA disks in a
RAID1 root formation.
Some questions?
1. The set has just failed completely (sorry it isn't up and working
now) on the first reboot.  It is possible to bring the machine up from
the primary disk with no problems but any attempt to start the mirror
drive causes a panic with a hardware error although checking it with
bsdlabel shows the partition table is intact.  Any ideas?  the drive is
fine - i've pulled it and tested it and it's fine.  It's booting using
BootMgr.
I've killed it completely now by reloading the vinum config twice.  so
it's out of action permanently. 

I did make sure that there was enough space at the start of the disk for
the vinum info to survive.
2. Speed.  The vinum set is a concat disk.  The read performance was
really slow (visibly so).  Can you boot off a striped volume and will it
benefit me at all making it a striped volume at all rather than a
concat?  

hardware:
SIL3112 SATA RAID controller.
2x 80Gb Seagate barracuda ATA disks (ad4, ad6)
FreeBSD-5.3-STABLE
config:
cant provide this - it won't boot any more.
Any ideas?  Or any good resources on setting up a RAID1 root disk???
Cheers,
- Chris.
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]
 

From my point of view, a vinum root partition is really useless. You will 
never have availability comparable to hardware RAID (I mean, downtime), 
and you will have a lot of trouble booting from it.
From my own experience, the same result as a vinum root partition can be 
achieved easily:
1. Make a small partition on every drive in the system. 128 MB on 
every drive will be enough.
2. Make each partition bootable, and every day (by a cron job at the hour 
of least activity) do a dump/restore from the main root partition to all 
the others (a rough sketch of such a cron job follows below).
3. usr, var and all the other partitions must reside on the vinum partition, 
using the rest of the space on each drive.
4. Swap partitions must also be made on every drive.

You can easily boot up from every disk in the system, and it will 
always be configured properly (assuming you don't change much in the root 
filesystem every day).
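
A rough sketch of the nightly copy in step 2 (the device name, mount
point, script path and schedule are assumptions, not part of the original
description; the spare root partition is assumed to have been newfs'ed
once beforehand):

 # crontab entry, e.g. at 04:00:
 # 0 4 * * * /root/sync-root.sh
 mount /dev/ad6s1a /altroot
 cd /altroot && dump -0af - / | restore -rf -
 cd / && umount /altroot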

The described configuration has been in use for at least a year at my 
workplace, on the SCM server.

The only problem is that you can't hot-unplug the disk from which 
you booted.
But really, very few systems built on average PC hardware cannot afford 
to be down for 5-10 minutes. In any case, RAID (even hardware RAID) is 
not enough by itself to build a system with less than 1 
hour/year of downtime.

Best Regards,
Alexander Derevianko
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum problems

2004-11-21 Thread Christian Hiris

On Sunday 21 November 2004 14:36,   wrote:
[...]
  From my point of view, vinum root partition is really useless. You will
 never have the availability comparable to hardware RAID (i mean,
 downtime), and will have a lot of troubles with booting from it.
  From my own expirience, the same results as root vinum partition can be
 made easily:
 1. Make a small partition on every drive in the system. 128 Megs from
 every drive will be enought.
 2. Make each partition bootable, and every day (by cron job in least
 activity hour) make a dump/restore from the main root partition to all
 other.
 3. usr, var and all other partitions must reside on vinum partition,
 using rest of the space on the partition.
 4. Swap partitions must also be made on every drive.

Points 1., 2. (and 4.) could also be solved by setting up a gmirror(8) raid1 
array on the partitions in question. This works w/o the dump/restore/cron and 
the setup process is very simple. There are also gstripe(8) and gconcat(8) 
geom classes available in 5.3.
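
A minimal gmirror(8) sketch (the partition names are placeholders, not
taken from the original posting):

 kldload geom_mirror                              # or geom_mirror_load="YES" in loader.conf
 gmirror label -v -b round-robin gm0 /dev/ad4s1a  # create the mirror on the first partition
 gmirror insert gm0 /dev/ad6s1a                   # add the second one; it gets synchronised
 newfs /dev/mirror/gm0                            # then newfs and mount as usual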

-- 
Christian Hiris [EMAIL PROTECTED] | OpenPGP KeyID 0x3BCA53BE 
OpenPGP-Key at hkp://wwwkeys.eu.pgp.net and http://pgp.mit.edu
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum problems

2004-11-17 Thread Gary Dunn
On Tue, 2004-11-16 at 08:51, Chris Smith wrote:
[snip]
 ... Can you boot off a striped volume and will it
 benefit me at all making it a striped volume at all rather than a
 concat?  

I don't think you can boot off a vinum partition, because you have to
load vinum *after* the kernel is running. It usually loads early during
the run through /etc/rc as the system goes multiuser, before visiting
/etc/fstab with mount. Perhaps you have your root partition on another
disk and just didn't mention it? Or is there a tricky way to do this
that I am ignorant of?

-- 

Gary Dunn
[EMAIL PROTECTED]
Honolulu

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum problems

2004-11-17 Thread Greg 'groggy' Lehey
On Tuesday, 16 November 2004 at 23:10:22 -1000, Gary Dunn wrote:
 On Tue, 2004-11-16 at 08:51, Chris Smith wrote:
 [snip]
 ... Can you boot off a striped volume and will it
 benefit me at all making it a striped volume at all rather than a
 concat?

 I don't think you can boot off a vinum partition, because you have to
 load vinum *after* the kernel is running. It usually loads early during
 the run through /etc/rc as the system goes multiuser, before visiting
 /etc/fstab with mount. Perhaps you have your root partition on another
 disk and just didn't mention it? Or is there a tricky way to do this
 that I am ignorant of?

There's a tricky way to do this; apparently you're ignorant of it :-)
It's described in the man page and at
http://www.vinumvm.org/cfbsd/vinum.pdf .

Basically, it involves overlaying the Vinum subdisk with a BSD
partition for booting.  This means that the plex must be concat.
Striped plexes have a layout that is incompatible with BSD partitions.
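
Roughly, each disk's label then contains something like the following
(the sizes and offsets here are purely illustrative; the essential point
is that the a partition starts exactly 265 sectors into the vinum
partition, where the data of the root subdisk begins, and is no larger
than that subdisk):

 #        size   offset    fstype
   a:  1048576      281    4.2BSD    # bootable overlay of the root subdisk
   c: 80043264        0    unused    # whole disk
   h: 80043248       16     vinum    # the Vinum drive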

To answer Chris's other question: no, I can't see any particular
advantage.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum problems

2004-11-16 Thread Greg 'groggy' Lehey
On Tuesday, 16 November 2004 at 18:51:38 +, Chris Smith wrote:
 Hi,

 I've just built a machine with a vinum root successfully.  All vinum
 sets show that they are up and working.  There are two ATA disks in a
 RAID1 root formation.

 Some questions?

How about some details first?  There's not much to go on in the information
you supply.

<worn-out-record-mode>
Read http://www.vinumvm.org/vinum/how-to-debug.html and send the
information asked for there.
</worn-out-record-mode>

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Artem Kazakov
Kim Helenius wrote:
Now I can newfs /dev/vinum/vinum0, mount it, use it, etc. But when I do 
vinum stop, vinum start, vinum stop, and vinum start something amazing 
happens. Vinum l after this is as follows:

2 drives:
D d2            State: up   /dev/ad5s1d  A: 286181/286181 MB (100%)
D d1            State: up   /dev/ad4s1d  A: 286181/286181 MB (100%)

0 volumes:
0 plexes:
0 subdisks:
Where did my configuration go?! I can of course recreate it, with no 
data lost, but imagine this on a raid5 where the plex goes into init 
mode after creation. Not a pleasant scenario. Also recreating the 
configuration from a config file after every reboot doesn't sound 
interesting.
You should issue a read command to vinum to make it read the configuration.
In your case:
vinum read /dev/ad5 /dev/ad4

--
Kazakov Artem
 OOO CompTek
 tel: +7(095) 785-2525, ext.1802
 fax: +7(095) 785-2526
 WWW:http://www.comptek.ru
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Kim Helenius
On Thu, 11 Nov 2004, Artem Kazakov wrote:

 Kim Helenius wrote:
 
  Now I can newfs /dev/vinum/vinum0, mount it, use it, etc. But when I do 
  vinum stop, vinum start, vinum stop, and vinum start something amazing 
  happens. Vinum l after this is as follows:
  
  2 drives:
  D d2State: up   /dev/ad5s1d A: 286181/286181 
  MB (100%)
  D d1State: up   /dev/ad4s1d A: 286181/286181 
  MB (100%)
  
  0 volumes:
  0 plexes:
  0 subdisks:
  
  Where did my configuration go?! I can of course recreate it, with no 
  data lost, but imagine this on a raid5 where the plex goes into init 
  mode after creation. Not a pleasant scenario. Also recreating the 
  configuration from a config file after every reboot doesn't sound 
  interesting.
 you shoud issue a read command to vinum to make it read configureation.
 in your case:
 vinum read /dev/ad5 /dev/ad4

Thank you for your answer. However, when I mentioned the configuration is 
lost, it is lost and not stored on the drives anymore. Thus 'vinum read' 
command cannot read it from the drives. In addition, 'vinum start' scans 
drives for vinum information so in fact 'vinum read' should not be needed.

 -- 
 Kazakov Artem
 
   OOO CompTek
   tel: +7(095) 785-2525, ext.1802
   fax: +7(095) 785-2526
   WWW:http://www.comptek.ru
 

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Stijn Hoop
On Thu, Nov 11, 2004 at 12:00:52PM +0200, Kim Helenius wrote:
 Greetings. I posted earlier about problems with vinum raid5 but it 
 appears it's not restricted to that.

Are you running regular vinum on 5.x? It is known broken. Please use
'gvinum' instead.

There is one caveat: the gvinum that shipped with 5.3-RELEASE contains an
error in RAID-5 initialization. If you really need RAID-5 you either need
to wait for the first patch level release of 5.3, or you can build
RELENG_5 from source yourself. The fix went in on 2004-11-07.

--Stijn

-- 
Fairy tales do not tell children that dragons exist. Children already
know dragons exist. Fairy tales tell children the dragons can be
killed.
-- G.K. Chesterton


pgppv4V8ikVEN.pgp
Description: PGP signature


Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Kim Helenius
On Thu, 11 Nov 2004, Stijn Hoop wrote:

 On Thu, Nov 11, 2004 at 12:00:52PM +0200, Kim Helenius wrote:
  Greetings. I posted earlier about problems with vinum raid5 but it 
  appears it's not restricted to that.
 
 Are you running regular vinum on 5.x? It is known broken. Please use
 'gvinum' instead.
 
 There is one caveat: the gvinum that shipped with 5.3-RELEASE contains an
 error in RAID-5 initialization. If you really need RAID-5 you either need
 to wait for the first patch level release of 5.3, or you can build
 RELENG_5 from source yourself. The fix went in on 2004-11-07.

Thank you for your answer. I tested normal concat with both 5.2.1-RELEASE and
5.3-RELEASE with similar results. Plenty of people (at least I get this
impression after browsing several mailing lists and websites) have working
vinum setups with 5.2.1 (where gvinum doesn't exist) so there's definitely 
something I'm doing wrong here. So my problem is not limited to raid5.

I'm aware of gvinum and the bug and actually tried to cvsup && make world 
last night but it didn't succeed due to some missing files in netgraph 
dirs. I will try again tonight.

 
 --Stijn
 
 -- 
 Fairy tales do not tell children that dragons exist. Children already
 know dragons exist. Fairy tales tell children the dragons can be
 killed.
   -- G.K. Chesterton
 

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Stijn Hoop
On Thu, Nov 11, 2004 at 03:32:58PM +0200, Kim Helenius wrote:
 On Thu, 11 Nov 2004, Stijn Hoop wrote:
  On Thu, Nov 11, 2004 at 12:00:52PM +0200, Kim Helenius wrote:
   Greetings. I posted earlier about problems with vinum raid5 but it 
   appears it's not restricted to that.
  
  Are you running regular vinum on 5.x? It is known broken. Please use
  'gvinum' instead.
  
  There is one caveat: the gvinum that shipped with 5.3-RELEASE contains an
  error in RAID-5 initialization. If you really need RAID-5 you either need
  to wait for the first patch level release of 5.3, or you can build
  RELENG_5 from source yourself. The fix went in on 2004-11-07.
 
 Thank you for your answer. I tested normal concat with both 5.2.1-RELEASE and
 5.3-RELEASE with similar results. Plenty of people (at least I get this
 impression after browsing several mailing lists and websites) have working
 vinum setups with 5.2.1 (where gvinum doesn't exist) so there's definately 
 something I'm doing wrong here. So my problem is not limited to raid5.

I don't know the state of affairs for 5.2.1-RELEASE, but in 5.3-RELEASE gvinum
is the way forward.

 I'm aware of gvinum and the bug and actually tried to cvsup  make world 
 last night but it didn't succeed due to some missing files in netgraph 
 dirs. I will try again tonight.

OK, I think that will help you out. But the strange thing is, RELENG_5 should
be buildable. Are you sure you are getting that?

Have you read

http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/current-stable.html

Particularly the 19.2.2 section, 'Staying stable with FreeBSD'?

HTH,

--Stijn

-- 
I have great faith in fools -- self confidence my friends call it.
-- Edgar Allan Poe




Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Kim Helenius
Stijn Hoop wrote:
Greetings. I posted earlier about problems with vinum raid5 but it 
appears it's not restricted to that.
Are you running regular vinum on 5.x? It is known broken. Please use
'gvinum' instead.
There is one caveat: the gvinum that shipped with 5.3-RELEASE contains an
error in RAID-5 initialization. If you really need RAID-5 you either need
to wait for the first patch level release of 5.3, or you can build
RELENG_5 from source yourself. The fix went in on 2004-11-07.
Thank you for your answer. I tested normal concat with both 5.2.1-RELEASE and
5.3-RELEASE with similar results. Plenty of people (at least I get this
impression after browsing several mailing lists and websites) have working
vinum setups with 5.2.1 (where gvinum doesn't exist) so there's definately 
something I'm doing wrong here. So my problem is not limited to raid5.

I don't know the state of affairs for 5.2.1-RELEASE, but in 5.3-RELEASE gvinum
is the way forward.
Thanks again for answering. Agreed, but there still seems to be a long 
way to go. A lot of 'classic' vinum functionality is still missing and 
at least for me it still doesn't do the job the way I would find 
trustworthy. See below.

I'm aware of gvinum and the bug and actually tried to cvsup  make world 
last night but it didn't succeed due to some missing files in netgraph 
dirs. I will try again tonight.
I tested gvinum with some interesting results. First the whole system 
froze after creating a concatenated drive and trying to gvinum -rf -r 
objects (resetconfig command doesn't exist). Next, I created the volume, 
newfs, copied some data on it. Then I rebooted and issued gvinum start. 
This is what follows:

2 drives:
D d1            State: up    /dev/ad4s1d  A: 285894/286181 MB (99%)
D d2            State: up    /dev/ad5s1d  A: 285894/286181 MB (99%)

1 volume:
V vinum0        State: down  Plexes:   1  Size:    572 MB
1 plex:
P vinum0.p0   C State: down  Subdisks: 2  Size:    572 MB
2 subdisks:
S vinum0.p0.s0  State: stale D: d1        Size:    286 MB
S vinum0.p0.s1  State: stale D: d2        Size:    286 MB
I'm getting a bit confused. Issuing separately 'gvinum start vinum0' 
does seem to fix it (all states go 'up') but surely it should come up 
fine with just 'gvinum start'? This is how I would start it in loader.conf.

OK, I think that will help you out. But the strange thing is, RELENG_5 should
be buildable. Are you sure you are getting that?
Have you read
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/current-stable.html
Particularly the 19.2.2 section, 'Staying stable with FreeBSD'?
I have read it and used -stable in 4.x, and if I read it really 
carefully I figure out that -stable does not equal stable, which is why 
I stopped tracking -stable in the first place. And knowing I would 
only need it to fix raid5 init, I'm a bit reluctant to do it as I found 
out I can't even create a concat volume correctly.

--
Kim Helenius
[EMAIL PROTECTED]
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Stijn Hoop
Hi,

On Thu, Nov 11, 2004 at 04:53:39PM +0200, Kim Helenius wrote:
 Stijn Hoop wrote:
  I don't know the state of affairs for 5.2.1-RELEASE, but in 5.3-RELEASE 
  gvinum is the way forward.
 
 Thanks again for answering. Agreed, but there still seems to be a long 
 way to go. A lot of 'classic' vinum functionality is still missing and 
 at least for me it still doesn't do the job the way I would find 
 trustworthy. See below.

That's absolutely true. While 5.3 is IMHO pretty stable, gvinum is quite new
and therefore a bit less well tested than the rest of the system.  Fortunately
Lukas Ertl, the maintainer of gvinum, is pretty active and responsive to
problems.

So if you need a critically stable vinum environment you would be better off
with 4.x.

 I tested gvinum with some interesting results. First the whole system 
 froze after creating a concatenated drive and trying to gvinum -rf -r 
 objects (resetconfig command doesn't exist).

That's not good. Nothing in dmesg? If you can consistently get this to happen
you should send in a problem report.

 Next, I created the volume, 
 newfs, copied some data on it. The rebooted, and issued gvinum start. 

 This is what follows:
 
 2 drives:
 D d1State: up   /dev/ad4s1d A: 285894/286181 
 MB (99%)
 D d2State: up   /dev/ad5s1d A: 285894/286181 
 MB (99%)
 
 1 volume:
 V vinum0State: down Plexes:   1 Size:572 MB
 
 1 plex:
 P vinum0.p0   C State: down Subdisks: 2 Size:572 MB
 
 2 subdisks:
 S vinum0.p0.s0  State: staleD: d1   Size:286 MB
 S vinum0.p0.s1  State: staleD: d2   Size:286 MB
 
 I'm getting a bit confused. Issuing separately 'gvinum start vinum0' 
 does seem to fix it (all states go 'up') but surely it should come up 
 fine with just 'gvinum start'? This is how I would start it in loader.conf.

I think I've seen this too, but while testing another unrelated problem.  At
the time I attributed it to other factors. Can you confirm that when you
restart again, it stays up? Or maybe try an explicit 'saveconfig' when it is
in the 'up' state, and then reboot.
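
In other words, something along these lines (the volume name is the one
from the listing above):

 gvinum start vinum0      # brings the stale subdisks back up
 gvinum saveconfig        # write the configuration back to the drives
 reboot                   # then check with 'gvinum list' afterwards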

  Have you read
 
 http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/current-stable.html
 
  Particularly the 19.2.2 section, 'Staying stable with FreeBSD'?
 
 
 I have read it and used -stable in 4.x, and if I read it really 
 carefully I figure out that -stable does not equal stable which is way 
 I stopped tracking -stable in the first place. And when knowing I would 
 only need it to fix raid5 init I'm a bit reluctant to do it as I found 
 out I can't even create a concat volume correctly.

That I can understand. If I may make a polite suggestion, it sounds like you
value stability above all else. In this case where vinum is involved, I would
recommend you to stay with 4.x until 5.4 is released. That should take another
6-8 months and probably most of the gvinum issues will have been tackled by
then. Although I know that there are a lot of users, myself included, that run
gvinum on 5.x, it is pretty new technology and therefore unfortunately
includes pretty new bugs.

The other option is to bite the bullet now, and fiddle with gvinum for a few
days. Since other users are using it, it is certainly possible.  This will
take you some time, however. It will save you time when the upgrade to 5.4
comes, though.

It is your decision what part of the tradeoff you like the most.

HTH,

--Stijn

-- 
Apparently, 1 in 5 people in the world are Chinese. And there are 5 people
in my family, so it must be one of them. It's either my mum or my dad..
or maybe my older brother John. Or my younger brother Ho-Cha-Chu. But I'm
pretty sure it's John.




Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Kim Helenius
Stijn Hoop wrote:
Hi,
On Thu, Nov 11, 2004 at 04:53:39PM +0200, Kim Helenius wrote:
Stijn Hoop wrote:
I don't know the state of affairs for 5.2.1-RELEASE, but in 5.3-RELEASE 
gvinum is the way forward.
Thanks again for answering. Agreed, but there still seems to be a long 
way to go. A lot of 'classic' vinum functionality is still missing and 
at least for me it still doesn't do the job the way I would find 
trustworthy. See below.

That's absolutely true. While 5.3 is IMHO pretty stable, gvinum is quite new
and therefore a bit less well tested than the rest of the system.  Fortunately
Lukas Ertl, the maintainer of gvinum, is pretty active and responsive to
problems.
So if you need a critically stable vinum environment you would be better off
with 4.x.

I tested gvinum with some interesting results. First the whole system 
froze after creating a concatenated drive and trying to gvinum -rf -r 
objects (resetconfig command doesn't exist).

That's not good. Nothing in dmesg? If you can consistently get this to happen
you should send in a problem report.

Next, I created the volume, 
newfs, copied some data on it. The rebooted, and issued gvinum start. 

This is what follows:
2 drives:
D d1State: up   /dev/ad4s1d A: 285894/286181 
MB (99%)
D d2State: up   /dev/ad5s1d A: 285894/286181 
MB (99%)

1 volume:
V vinum0State: down Plexes:   1 Size:572 MB
1 plex:
P vinum0.p0   C State: down Subdisks: 2 Size:572 MB
2 subdisks:
S vinum0.p0.s0  State: staleD: d1   Size:286 MB
S vinum0.p0.s1  State: staleD: d2   Size:286 MB
I'm getting a bit confused. Issuing separately 'gvinum start vinum0' 
does seem to fix it (all states go 'up') but surely it should come up 
fine with just 'gvinum start'? This is how I would start it in loader.conf.

I think I've seen this too, but while testing an other unrelated problem.  At
the time I attributed it to other factors. Can you confirm that when you
restart again, it stays up? Or maybe try an explicit 'saveconfig' when it is
in the 'up' state, and then reboot.

Have you read
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/current-stable.html
Particularly the 19.2.2 section, 'Staying stable with FreeBSD'?
I have read it and used -stable in 4.x, and if I read it really 
carefully I figure out that -stable does not equal stable which is way 
I stopped tracking -stable in the first place. And when knowing I would 
only need it to fix raid5 init I'm a bit reluctant to do it as I found 
out I can't even create a concat volume correctly.

That I can understand. If I may make a polite suggestion, it sounds like you
value stability above all else. In this case where vinum is involved, I would
recommend you to stay with 4.x until 5.4 is released. That should take another
6-8 months and probably most of the gvinum issues will have been tackled by
then. Although I know that there are a lot of users, myself included, that run
gvinum on 5.x, it is pretty new technology and therefore unfortunately
includes pretty new bugs.
The other option is to bite the bullet now, and fiddle with gvinum for a few
days. Since other users are using it, it is certainly possible.  This will
take you some time however. It will save you time when the upgrade to 5.4 will
be though.
It is your decision what part of the tradeoff you like the most.
HTH,
--Stijn
Stability is exactly what I'm looking for. However, I begin to suspect 
there's something strange going on with my setup. I mentioned gvinum 
freezing - there's indeed a fatal kernel trap message (page fault) on 
the console. Now, then, thinking of good old FreeBSD 4.x I decided to 
spend some more time on this issue.

Ok... so I tested vinum with FreeBSD 4.10 and amazing things just keep 
happening. Like with 5.x, I create a small test concat volume with 
vinum. Newfs, mount, etc, everything works. Now, then, I issue the 
following commands: vinum stop, then vinum start. Fatal kernel trap - 
automatic reboot. So, the root of the problem must lie deeper than 
(g)vinum in 5.x.

More info on my 5.3 setup:
Copyright (c) 1992-2004 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD 5.3-RELEASE #1: Mon Nov  8 21:43:07 EET 2004
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/KIUKKU
Timecounter i8254 frequency 1193182 Hz quality 0
CPU: AMD Athlon(TM) XP 1600+ (1400.06-MHz 686-class CPU)
  Origin = AuthenticAMD  Id = 0x662  Stepping = 2
Features=0x383f9ff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE>
  AMD Features=0xc048<MP,AMIE,DSP,3DNow!>
real memory  = 536788992 (511 MB)
avail memory = 519794688 (495 MB)
npx0: [FAST]
npx0: math processor on motherboard
npx0: INT 16 interface
acpi0: ASUS A7M266 on motherboard
acpi0: Power Button (fixed)
Timecounter 

Re: Vinum configuration lost at vinum stop / start

2004-11-11 Thread Greg 'groggy' Lehey
[Format recovered--see http://www.lemis.com/email/email-format.html]

Text wrapped.

On Thursday, 11 November 2004 at 12:00:52 +0200, Kim Helenius wrote:
 Greetings. I posted earlier about problems with vinum raid5 but it
 appears it's not restricted to that:

 Let's make a fresh start with vinum resetconfig. Then vinum create
 kala.txt which contains:

 ...

 Now I can newfs /dev/vinum/vinum0, mount it, use it, etc. But when I do
 vinum stop, vinum start, vinum stop, and vinum start something amazing
 happens. Vinum l after this is as follows:

 ...
 0 volumes:
 0 plexes:
 0 subdisks:

 Where did my configuration go?! I can of course recreate it, with no
 data lost, but imagine this on a raid5 where the plex goes into init
 mode after creation. Not a pleasant scenario. Also recreating the
 configuration from a config file after every reboot doesn't sound
 interesting.

There have been a lot of replies to this thread, but nobody asked you
the obvious: where is the information requested at
http://www.vinumvm.org/vinum/how-to-debug.html?

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: Vinum 1TB filesystem limit questions

2004-11-08 Thread matt virus
same exact error.  Tried without, tried -O 2, no dice! :-)
-matt
Martin Hepworth wrote:
Matt
What happens if you drop the -O flag? Newfs will default to UFS2 in the 
5.x versions. Or even do '-O 2'?

--
Martin Hepworth
Snr Systems Administrator
Solid State Logic
Tel: +44 (0)1865 842300
matt virus wrote:
Hi All -
with some help from people on this list, i managed to get vinum and 
raid5 all figured out!

I had 4 * 160gb raid5 array running perfectly.  When i ventured home 
this past weekend, i found another ATA controller and figured I'd 
change my raid5 array to have 8 drives.

I cleaned the drives, reformatted and labeled to have a nice clean 
start, rewrote my config file and I get this:

--
2day# newfs -U -O2 /dev/vinum/raid5
/dev/vinum/raid5: 1094291.2MB (2241108324 sectors) block size 16384, 
fragment size 2048
using 5955 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
with soft updates

newfs: can't read old UFS1 superblock: read error from block device: 
Invalid argument
---
.
After some reading, found out this is a pesky problem a lot of people 
are having.  Is there a solution for FBSD 5.2.1 running vinum or do I 
need to upgrade to 5.3 or some other release using geom-vinum?  Does 
anybody know (for sure) if geom-vinum works with 1TB filesystems?

WORST case - i'll remove a drive and bump it down to under 1TB, but it 
seems like a waste.

-matt


--
Matt Virus (veer-iss)
http://www.mattvirus.net
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum disklabel FBSD 5.2.1....

2004-11-08 Thread FreeBSD questions mailing list
On 07 nov 2004, at 00:19, Greg 'groggy' Lehey wrote:
On Sunday, 31 October 2004 at 14:03:18 +0100, FreeBSD questions 
mailing list wrote:
On 31 okt 2004, at 07:41, matt virus wrote:
matt virus wrote:
Hi all!
I have (8) maxtor 160gb drives I plan on constructing a vinum raid5
array with.
the devices are:
ad4ad11
All drives have been fdisk'd and such,
ad4s1d.ad11s1d
The first step of setting up vinum is changing the disklabel
disklabel -e /dev/ad4
The disk label says it has 8 partitions, but only the A and C
partitions are shown...
**MY DISKLABEL
# /dev/ad4:
8 partitions:
#size   offsetfstype   [fsize bsize bps/cpg]
 a: 320173040   16unused0  0
 c: 3201730560unused0  0 # raw part, don't 
edit
**


c: is not a valid disk label. You need to create one first. See the
example below first: there's an e label.  You can do this in
sysinstall: Configure / Label / ad4 and then C to create one.  Once
that's done it'll show up in disklabel as you write below.  Then in
disklabel you can change the 4.2BSD to vinum.
You should also not use 'c' for Vinum.
Greg
--
Bit of confusion on my side: I meant C as the key that should be 
pressed to create a new slice, not as a name for a disklabel.
Arno

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum 1TB filesystem limit questions

2004-11-08 Thread Greg 'groggy' Lehey
On Monday,  8 November 2004 at  1:01:04 -0600, matt virus wrote:
 Hi All -

 with some help from people on this list, i managed to get vinum and
 raid5 all figured out!

 I had 4 * 160gb raid5 array running perfectly.  When i ventured home
 this past weekend, i found another ATA controller and figured I'd change
 my raid5 array to have 8 drives.

 I cleaned the drives, reformatted and labeled to have a nice clean
 start, rewrote my config file and I get this:

 --
 2day# newfs -U -O2 /dev/vinum/raid5
 /dev/vinum/raid5: 1094291.2MB (2241108324 sectors) block size 16384,
 fragment size 2048
 using 5955 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
 with soft updates

 newfs: can't read old UFS1 superblock: read error from block device:
 Invalid argument

Hmm.  Interesting.  UFS 1 is limited to 1 TB, so the message is
understandable.  But why is it looking for a UFS 1 superblock?  What
happens if you first create a smaller UFS 2 file system (use the -s
option to set the size explicitly to, say, 500 GB), and then repeat
making it for the full size?
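
In other words, something like this (the first sector count is only an
approximation of 500 GB; newfs -s takes the size in sectors):

 newfs -U -O2 -s 1048576000 /dev/vinum/raid5   # smaller UFS2 file system first
 newfs -U -O2 /dev/vinum/raid5                 # then again at the full size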

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum disklabel FBSD 5.2.1....

2004-11-06 Thread Greg 'groggy' Lehey
On Sunday, 31 October 2004 at 14:03:18 +0100, FreeBSD questions mailing list 
wrote:

 On 31 okt 2004, at 07:41, matt virus wrote:

 matt virus wrote:
 Hi all!
 I have (8) maxtor 160gb drives I plan on constructing a vinum raid5
 array with.
 the devices are:
 ad4ad11
 All drives have been fdisk'd and such,
 ad4s1d.ad11s1d
 The first step of setting up vinum is changing the disklabel
 disklabel -e /dev/ad4
 The disk label says it has 8 partitions, but only the A and C
 partitions are shown...
 **MY DISKLABEL
 # /dev/ad4:
 8 partitions:
 #size   offsetfstype   [fsize bsize bps/cpg]
  a: 320173040   16unused0  0
  c: 3201730560unused0  0 # raw part, don't edit
 **


 c: is not a valid disk label. You need to create one first. See the
 example below first: there's an e label.  You can do this in
 sysinstall: Configure / Label / ad4 and then C to create one.  Once
 that's done it'll show up in disklabel as you write below.  Then in
 disklabel you can change the 4.2BSD to vinum.

You should also not use 'c' for Vinum.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum disklabel FBSD 5.2.1....

2004-10-31 Thread matt virus
nobody ???


matt virus wrote:
Hi all!
I have (8) maxtor 160gb drives I plan on constructing a vinum raid5 
array with.

the devices are:
ad4ad11
All drives have been fdisk'd and such,
ad4s1d.ad11s1d
The first step of setting up vinum is changing the disklabel
disklabel -e /dev/ad4
The disk label says it has 8 partitions, but only the A and C partitions 
are shown...

**MY DISKLABEL
# /dev/ad4:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a: 320173040       16    unused        0     0
  c: 320173056        0    unused        0     0   # raw part, don't edit
**
Now, i know i have to change *something* to   vinum  but i'm unsure 
which one, or if I need to actually add a line or ???

This is my first time playing with vinum, i've read a handful of howtos 
and all the documentation I find shows the disklabel looking like this:

*HOWTO's Disklabel
# disklabel da0
[snip]
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:  1024000  1024000    4.2BSD     2048 16384    90
  b:  1024000        0      swap
  c: 17912412        0    unused        0     0
  e: 15864412  2048000     vinum

(source: http://org.netbase.org/vinum-mirrored.html)
Any direction is appreciated :-)
-matt
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to 
[EMAIL PROTECTED]


--
Matt Virus (veer-iss)
www.mattvirus.net
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum disklabel FBSD 5.2.1....

2004-10-31 Thread FreeBSD questions mailing list
On 31 okt 2004, at 07:41, matt virus wrote:
nobody ???
OK I'll give it a try. I have a vinum RAID 1 running though, but the 
way to get it running isn't very different.

matt virus wrote:
Hi all!
I have (8) maxtor 160gb drives I plan on constructing a vinum raid5 
array with.
the devices are:
ad4ad11
All drives have been fdisk'd and such,
ad4s1d.ad11s1d
The first step of setting up vinum is changing the disklabel
disklabel -e /dev/ad4
The disk label says it has 8 partitions, but only the A and C 
partitions are shown...
**MY DISKLABEL
# /dev/ad4:
8 partitions:
#size   offsetfstype   [fsize bsize bps/cpg]
  a: 320173040   16unused0  0
  c: 3201730560unused0  0 # raw part, don't edit
**

c: is not a valid disk label. You need to create one first. See the 
example below first: there's an e label.
You can do this in sysinstall: Configure / Label / ad4 and then C to 
create one.
Once that's done it'll show up in disklabel as you write below.
Then in disklabel you can change the 4.2BSD to vinum.

Of course you can also add the whole label-line in disklabel itself but 
I find sysinstall easier.

Arno
Now, i know i have to change *something* to   vinum  but i'm unsure 
which one, or if I need to actually add a line or ???
This is my first time playing with vinum, i've read a handful of 
howtos and all the documentation I find shows the disklabel looking 
like this:
*HOWTO's Disklabel
# disklabel da0
[snip]
#size   offsetfstype   [fsize bsize bps/cpg]
  a:  1024000  10240004.2BSD 2048 1638490
  b:  10240000  swap
  c: 179124120unused0 0
  e: 15864412  2048000 vinum

(source: http://org.netbase.org/vinum-mirrored.html)
Any direction is appreciated :-)
-matt
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum: df -h size and vinum list size not nearly the same

2004-10-28 Thread Mark Frasa
h0444lp6 [EMAIL PROTECTED] wrote:

 Dear list,
 
 I do wonder a little about the difference in the size for my
 /dev/vinum/usr reported by "vinum list" and "df -h".
 I concatenated three 1303MB partitions. vinum list shows as expected a
 size of 3909MB for volume usr, but df -h shows me only the size 1303MB.
 Why?
 
 TIA
 
 zheyu
 
 # vinum list
 3 drives:
 D drive_1       State: up   Device /dev/ad0s1f  Avail: 0/1303 MB (0%)
 D drive_2       State: up   Device /dev/ad1s1g  Avail: 0/1303 MB (0%)
 D drive_3       State: up   Device /dev/ad2s1g  Avail: 0/1303 MB (0%)
 
 1 volumes:
 V usr           State: up   Plexes:   1  Size: 3909 MB
 
 1 plexes:
 P usr.p0      C State: up   Subdisks: 3  Size: 3909 MB
 
 3 subdisks:
 S usr.p0.s0     State: up   PO:    0  B  Size: 1303 MB
 S usr.p0.s1     State: up   PO: 1303 MB  Size: 1303 MB
 S usr.p0.s2     State: up   PO: 2606 MB  Size: 1303 MB
 
 # df -h
 Filesystem   Size   Used  Avail Capacity  Mounted on
 /dev/ad0s1a  252M35M   197M15%/
 /dev/vinum/usr   1.3G   260M   921M22%/usr
 /dev/ad1s1e  252M   218K   232M 0%/var
 /dev/ad2s1e  252M   4.0K   232M 0%/var/tmp
 procfs   4.0K   4.0K 0B   100%/proc
 
 

Can you show us your vinum config file?

Mark.

 
 ___
 [EMAIL PROTECTED] mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-questions
 To unsubscribe, send any mail to [EMAIL PROTECTED]
 

 


___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum, and poweroutage... HELP!

2004-10-23 Thread Erik Udo
Why didn't you answer yes to fsck? I don't think it would have removed
anything...
A few keywords you might want to check:
scan_ffs
fsck_ffs
Thomas Rasmussen wrote:
Have a vinum setup... two disks...
Vinum.conf
drive d1 device /dev/ad2s1e
drive d2 device /dev/ad3s1e
   volume mirror
 plex org concat
   sd length 19m drive d1
 plex org concat
   sd length 19m drive d2

when I try to run fsck /dev/vinum/mirror
it outputs..
BAD SUPER BLOCK: VALUES IN SUPER BLOCK DISAGREE WITH THOSE IN FIRST
ALTERNATE
CANNOT FIGURE OUT FILE SYSTEM PARTITION
So I try to run 

fsck -b 32 -n /dev/vinum/mirror
and get a lot of these errors... any ideas?
Should I run fsck -b 32 /dev/vinum/mirror and say yes? 

PARTIALLY ALLOCATED INODE I=1838571
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838572
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838573
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838574
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838575
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838576
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838577
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838578
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838579
CLEAR? no
UNKNOWN FILE TYPE I=1838580
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838581
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838582
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838583
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838584
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838585
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838586
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838587
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838588
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838589
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838590
CLEAR? no
PARTIALLY ALLOCATED INODE I=1838591
CLEAR? no
PARTIALLY ALLOCATED INODE I=1839040
CLEAR? no
PARTIALLY ALLOCATED INODE I=1839041
CLEAR? no
PARTIALLY ALLOCATED INODE I=1839042

Med venlig hilsen/Best regards
Thomas Rasmussuen
Network manager, NM Net ApS
Email: [EMAIL PROTECTED]
Tlf./Phone: +45 8677 0606 Fax: +45 8677 0615
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]
 

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Vinum, and poweroutage...

2004-10-22 Thread Chris Howells
On Friday 22 October 2004 15:23, Thomas Rasmussen wrote:
 Have a vinum setup... two disk...
 when I try to run fsck /dev/vinum/mirror

 it outputs..

 BAD SUPER BLOCK: VALUES IN SUPER BLOCK DISAGREE WITH THOSE IN FIRST
 ALTERNATE
 CANNOT FIGURE OUT FILE SYSTEM PARTITION

Was it created under 4.x with UFS 1 and you are now using 5.x? If so, try 
fsck_ufs instead. Well, that's the same error message I got in that 
situation.

Also if you are using 5.x you should try gvinum. (geom aware vinum)

-- 
Cheers, Chris Howells -- [EMAIL PROTECTED], [EMAIL PROTECTED]
Web: http://chrishowells.co.uk, PGP ID: 0x33795A2C
KDE/Qt/C++/PHP Developer: http://www.kde.org




Re: Vinum problems/questions

2004-10-20 Thread Dave Swegen
[ Stuff deleted ]

Thanks for the pointer - the setupstate keyword did the trick. And my
apologies for not RTFM :) *goes off with burning cheeks*

If you're still interested in the panic output I'll try and find some time
in the near future to try and get hold of it.

Cheers 
Dave

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vinum swap no longer working.

2004-10-11 Thread Mark Frasa
On 2004.10.11 10:43:02 +0930, Greg 'groggy' Lehey wrote:
 [Format recovered--see http://www.lemis.com/email/email-format.html]
 
 Overlong lines.
 
 On Sunday, 10 October 2004 at 19:23:24 +0200, Mark Frasa wrote:
  Hello,
 
  After installing FreeBSD 5.2.1, because 4.10 and even 5.1 did not
 recognize my SATA controller, I CVS-upped and upgraded to 5.2.1-p11
  RELEASE
 
  After that I configured Vinum to mirror (RAID 1) 2 80G Maxtor SATA
  disks.
 
  The error i am getting is:
 
  swapon /dev/vinum/swap  swapon: /dev/vinum/swap: Operation not
  supported by device
 
  I have taken notice of this message:
 
  -
  [missing attribution to Greg Lehey]
  On Sunday, 28 December 2003 at 20:00:04 -0800, Micheas Herman wrote:
  This may belong on current, I upgraded to 5.2 from 5.1 and my
  kernel (GENERIC) now refuses to use /dev/vinum/swap as my swap
  device. # swapon /dev/vinum/swap swapon: /dev/vinum/swap:
  Operation not supported by device # Is this a 5.2 bug or do I have
  vinum incorrectly configured?
 
  This is a 5.2 bug.  It was last mentioned here a day or two ago, and
  I'm currently chasing it.
 
  Since this is a message from the 28th of December 2003 , can anyone
  tell me when this issue will be solved?  Otherwise i have to
  consider buying PATA disks which allows me to run 4.10 again.
 
 Vinum is being rewritten; the new one is called gvinum or geom_vinum.
 It handles swap, and it should be in 5.3.
 
 Greg
 --
 When replying to this message, please copy the original recipients.
 If you don't, I may ignore the reply or reply to the original recipients.
 For more information, see http://www.lemis.com/questions.html
 See complete headers for address and phone numbers.


Does the vinum in FreeBSD 4.10 have the same problem? 
If not I might consider buying PATA disks and running software RAID, because 
I'd rather use 4.10 than 5.3.

Mark.




Re: vinum swap no longer working.

2004-10-11 Thread Greg 'groggy' Lehey
On Monday, 11 October 2004 at 11:26:13 +0200, Mark Frasa wrote:
 On 2004.10.11 10:43:02 +0930, Greg 'groggy' Lehey wrote:
 [missing attribution to Greg Lehey]
 On Sunday, 28 December 2003 at 20:00:04 -0800, Micheas Herman wrote:
 This may belong on current, I upgraded to 5.2 from 5.1 and my
 kernel (GENERIC) now refuses to use /dev/vinum/swap as my swap
 device. # swapon /dev/vinum/swap swapon: /dev/vinum/swap:
 Operation not supported by device # Is this a 5.2 bug or do I have
 vinum incorrectly configured?

 This is a 5.2 bug.  It was last mentioned here a day or two ago, and
 I'm currently chasing it.

 Since this is a message from the 28th of December 2003 , can anyone
 tell me when this issue will be solved?  Otherwise i have to
 consider buying PATA disks which allows me to run 4.10 again.

 Vinum is being rewritten; the new one is called gvinum or geom_vinum.
 It handles swap, and it should be in 5.3.

 Does the vinum in FreeBSD 4.10 has the same problem?

No.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: vinum swap no longer working.

2004-10-10 Thread Greg 'groggy' Lehey
[Format recovered--see http://www.lemis.com/email/email-format.html]

Overlong lines.

On Sunday, 10 October 2004 at 19:23:24 +0200, Mark Frasa wrote:
 Hello,

 After installing FreeBSD 5.2.1, because 4.10 and even 5.1 did not
 recognize my SATA controller, I CVS-upped and upgraded to 5.2.1-p11
 RELEASE

 After that I configured Vinum to mirror (RAID 1) 2 80G Maxtor SATA
 disks.

 The error i am getting is:

 swapon /dev/vinum/swap  swapon: /dev/vinum/swap: Operation not
 supported by device

 I have taken notice of this message:

 -
 [missing attribution to Greg Lehey]
 On Sunday, 28 December 2003 at 20:00:04 -0800, Micheas Herman wrote:
 This may belong on current, I upgraded to 5.2 from 5.1 and my
 kernel (GENERIC) now refuses to use /dev/vinum/swap as my swap
 device. # swapon /dev/vinum/swap swapon: /dev/vinum/swap:
 Operation not supported by device # Is this a 5.2 bug or do I have
 vinum incorrectly configured?

 This is a 5.2 bug.  It was last mentioned here a day or two ago, and
 I'm currently chasing it.

 Since this is a message from the 28th of December 2003 , can anyone
 tell me when this issue will be solved?  Otherwise i have to
 consider buying PATA disks which allows me to run 4.10 again.

Vinum is being rewritten; the new one is called gvinum or geom_vinum.
It handles swap, and it should be in 5.3.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: VINUM: Disk crash with striped raid

2004-10-08 Thread Paul Everlund
Greg 'groggy' Lehey wrote:
On Thursday,  7 October 2004 at 18:11:52 +0200, Paul Everlund wrote:
 Hi Greg and list!
Thank you for your reply!
 I did have two 120 GB disk drives in vinum as a striped raid.
Can you be more specific?
My vinum.conf looks like this, if this is to be more specific:
   drive ad5 device /dev/ad5s1e
   drive ad6 device /dev/ad6s1e
   volume raid0
   plex org striped 127k
   sd length 0 drive ad5
   sd length 0 drive ad6
 I did contact a data recovery company and they say they need both
 disks to restore the raid, because of that the raid initializing
 might be corrupted.

 My questions is:

 Do they need both disks?
That depends on your configuration.
That is as above.
 Isn't it enough if they make a disk image of the failed drive, and I
 will then be able to restore the raid data initialization in vinum
 by a vinum create, or something similar?
The command will be 'vinum start'.
Thank you!
 Will they be able to recreate the raid data without using vinum
 anyway?
Who knows?
You at least know from this mail that I don't... :-)
The real issue is the configuration of your volume (not raid).
Sorry.
 If it only has a single plex, you're in trouble.
Well, then I'm in trouble.
 In that case, you need your recovery company to get an image of
 the failed disk. Then you should put it on a similar disk, create
 a configuration entry and perform some other incantations, and you
 should be up and running again.
Can you please be more specific? Is the configuration entry the one
above, vinum.conf? Perform other incantations?
If you have two or more plexes, you shouldn't need to do any of this.
As I don't have two or more plexes it seems I have to do all of it. :-)
Thank you for a reply in advance!
Best regards,
Paul
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: VINUM: Disk crash with striped raid

2004-10-08 Thread Greg 'groggy' Lehey
On Friday,  8 October 2004 at 14:52:48 +0200, Paul Everlund wrote:
 Greg 'groggy' Lehey wrote:
 On Thursday,  7 October 2004 at 18:11:52 +0200, Paul Everlund wrote:
 Can you be more specific?

 My vinum.conf looks like this, if this is to be more specific:

drive ad5 device /dev/ad5s1e
drive ad6 device /dev/ad6s1e
volume raid0
plex org striped 127k
sd length 0 drive ad5
sd length 0 drive ad6

It's a *very* bad idea to name Vinum drives after their current
location.  You can take the (physical) drives out and put them
elsewhere, and Vinum will still find them.  The naming is then very
confusing.
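
In other words, the same layout with names that are independent of the
device paths, e.g.:

 drive disk1 device /dev/ad5s1e
 drive disk2 device /dev/ad6s1e
 volume raid0
   plex org striped 127k
     sd length 0 drive disk1
     sd length 0 drive disk2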

 If it only has a single plex, you're in trouble.

 Well, then I'm in trouble.

 In that case, you need your recovery company to get an image of
 the failed disk. Then you should put it on a similar disk, create
 a configuration entry and perform some other incantations, and you
 should be up and running again.

 Can you please be more specific? Is the configuration entry the one
 above, vinum.conf? Perform other incantations?

The easiest way is to recover the exact Vinum partition (drive) and
copy it as it is onto a new Vinum drive with the same name.  Then just
do a 'setstate up' on the plex and subdisks.
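
A rough sketch of those steps (the image file name is hypothetical, and
the object names follow the config quoted above; adjust them to whichever
drive actually failed):

 dd if=recovered-drive.img of=/dev/ad5s1e bs=1m   # put the recovered image back
 vinum start                                      # re-read the on-disk configuration
 vinum setstate up raid0.p0.s0                    # the subdisk on the recovered drive
 vinum setstate up raid0.p0                       # and the plex above it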

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.



