Modulok wrote:
Take the following example vinum config file:
drive a device /dev/da2a
drive b device /dev/da3a
volume rambo
  plex org concat
    sd length 512m drive a
  plex org concat
    sd length 512m drive b
8<--- cut here ---8<
drive disk1 device
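(A minimal sketch of how a config like the one above is normally put to use; the
path /etc/vinum.conf and the mount point are illustrative, not part of the
original message. Under gvinum the volume node appears as /dev/gvinum/rambo
rather than /dev/vinum/rambo.)
  # vinum create /etc/vinum.conf     load the description file
  # vinum list                       check that the drives, plexes and volume are up
  # newfs /dev/vinum/rambo           put a file system on the new volume
  # mount /dev/vinum/rambo /mnt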
One thing you might consider is that gvinum is quite flexible.
The subdisks in vinum that make up a raid 5 plex are partitions.
This means you can create raid 5 sets without using each entire disk
and the disks don't need to be the same model or size. It's also
handy for spares. If you
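(A hedged illustration of that point; the drive and partition names below are
made up. Each vinum drive is an ordinary partition, so the underlying disks can
differ in model and size as long as the raid5 subdisks themselves are the same
length.)
  drive r1 device /dev/ad4s1e
  drive r2 device /dev/ad6s2e
  drive r3 device /dev/da0s1e
  volume r5
    plex org raid5 512k
      sd length 30g drive r1
      sd length 30g drive r2
      sd length 30g drive r3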
Jeremy Ehrhardt wrote:
We've been testing this box as a
file server, and it usually works fine, but smartd reported a few bad
sectors on one of the drives, then a few days later it crashed while I
was running chmod -R on a directory on drugs and had to be manually
rebooted. I can't figure out
On 7/5/2006 15:56, Jeremy Ehrhardt seems to have typed:
3: would I be better off using a different RAID 5 system on another OS?
You would be best off with a 3ware card (www.3ware.com) running RAID 5
(hardware RAID rather than software RAID).
It works great in FreeBSD and is *very* stable and fault
On Wednesday 05 July 2006 19:05, Peter A. Giessel wrote:
On 7/5/2006 15:56, Jeremy Ehrhardt seems to have typed:
3: would I be better off using a different RAID 5 system on another OS?
You would be best off with a 3ware card (www.3ware.com) running RAID 5
(hardware RAID rather than software RAID).
It
On Wed, 17 May 2006, Joe Auty wrote:
Are there any tutorials explaining how to do so? So far, based on the lack of
info I've been able to find, it seems to me that this is a rarely used
configuration... I'm wondering what the reasons for this might be?
There might be some helpful nuggets in there, but I'm looking to
basically combine the storage of multiple disks, like RAID-0, except
I want my second drive written to only when my first drive has been
filled. I understand this can be done via vinum concatenation. I'm
looking for general
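(For reference, a concatenated volume of the kind being asked about is usually
written along the lines below; the device names are illustrative only. A concat
plex lays out its address space subdisk by subdisk, so the subdisk on the second
drive holds the upper part of the volume's address space.)
  drive one device /dev/ad1s1e
  drive two device /dev/ad2s1e
  volume pool
    plex org concat
      sd length 0 drive one
      sd length 0 drive two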
On Thu, 13 Apr 2006, Chris Hastie wrote:
I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I really wanted was 5.4-RELEASE.
Since 5.3 you should probably use the geom-aware gvinum instead of
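(A small sketch of the gvinum route on 5.3 and later; the config-file path is
illustrative. gvinum understands the same description-file syntax, and its
volumes show up under /dev/gvinum/ rather than /dev/vinum/.)
  geom_vinum_load="YES"              in /boot/loader.conf, to load it at boot
  # gvinum create /etc/gvinum.conf
  # gvinum list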
On Thu, 13 Apr 2006, Emil Thelin [EMAIL PROTECTED] wrote:
On Thu, 13 Apr 2006, Chris Hastie wrote:
I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I really wanted was 5.4-RELEASE.
Since 5.3
On Thu, 13 Apr 2006, Chris Hastie wrote:
On Thu, 13 Apr 2006, Emil Thelin [EMAIL PROTECTED] wrote:
On Thu, 13 Apr 2006, Chris Hastie wrote:
I tried to upgrade from 5.1-RELEASE to 5_RELENG last night and hit big
problems with vinum. 5_RELENG in retrospect was an error I suspect, as
what I
On Monday, 30 January 2006 at 10:31:13 +0300, Forth wrote:
Hi,
I am trying setup a vinum mirrored plex with two disks:
ad2: 38204MB SAMSUNG SP0411N [77622/16/63] at ata1-master UDMA33
ad3: 38204MB SAMSUNG SP0411N [77622/16/63] at ata1-slave UDMA33
Disks are new and fully functional, but when
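(For comparison, a two-disk mirror is usually described along these lines; the
device names below are placeholders, not the poster's actual layout. The
setupstate keyword marks both plexes up at creation time so the second plex does
not have to be revived by hand afterwards.)
  drive m1 device /dev/ad2s1e
  drive m2 device /dev/ad3s1e
  volume mirror setupstate
    plex org concat
      sd length 0 drive m1
    plex org concat
      sd length 0 drive m2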
Ignore, Google'd a bit longer and found the answer ...
On Mon, 28 Nov 2005, Marc G. Fournier wrote:
but, from what I can tell, is doing absolutely nothing, both via iostat and
vinum list ... the server has been 'up' for 15 minutes ... when I
accidentally typed 'vinum init vm.p0.s0' instead
On Fri, Aug 19, 2005 at 11:01:55AM -0500, Robin Smith wrote:
There seems to be a consensus in the references I've found that vinum
is completely broken on 5.4
That is true. IMHO it should be removed from RELENG_5 and _6 if it isn't
already.
and that gvinum/geom_vinum is not ready for
On Fri, 19 Aug 2005 11:01:55 -0500, Robin Smith
[EMAIL PROTECTED] wrote:
There seems to be a consensus in the references I've found that vinum
is completely broken on 5.4 and that gvinum/geom_vinum is not ready
for production use. As it seems to me, this means that anyone using
4.11 (say)
On Jul 22, 2005, at 5:51 PM, Ben Craig wrote:
However, it appears that the vinum config isn't being saved, as
rebooting
the machine can't find the vinum root partition, and after manually
booting
to the pre-vinum root (ufs:ad0s1a) running vinum list shows no
volume
information.
Wasn't
Hello everyone,
Just to add, I had to type out the above message manually since I can't get
access to anything with the crashed subdisk on /usr.
With regard to Greg's requests for information when reporting vinum problems
as stated on vinumvm.org's website, I can provide the
[Format recovered--see http://www.lemis.com/email/email-format.html]
Broken wrapping, unclear attribution, incorrect quotation levels. It
took five minutes of my time fixing this message to a point where I
could reply to it.
On Thursday, 30 June 2005 at 15:37:56 +0200, Gareth Bailey wrote:
On 6/27/05, Bri [EMAIL PROTECTED] wrote:
Howdy,
I'm attempting to use Vinum to concat multiple plexes together to make a
single 4.5TB volume. I've noticed that once I hit the 2TB mark it seems to
fail, it looks like once it hits 2TB the size gets reset to 0. Example below,
[EMAIL
On Mon, Jun 27, 2005 at 11:57:15AM -0700, Drew Tomlinson wrote:
I'd appreciate hearing of your experiences with vinum, gvinum, and ccd,
especially as they relate to firewire devices.
In my experience, vinum doesn't play well with GEOM, and gvinum isn't
anywhere near feature-complete. (I
On 6/27/2005 12:04 PM Mac Mason wrote:
On Mon, Jun 27, 2005 at 11:57:15AM -0700, Drew Tomlinson wrote:
I'd appreciate hearing of your experiences with vinum, gvinum, and ccd,
especially as they relate to firewire devices.
In my experience, vinum doesn't play well with GEOM, and
On Sun, 5 Jun 2005 14:14:43 +0200 (CEST)
Wojciech Puchar [EMAIL PROTECTED] wrote:
I'm reading about the vinum disk manager, did some tests, works fine.
But the question - could the root partition be a vinum volume? (I have compiled
the kernel with vinum built in, not as a module).
FreeBSD Handbook, section 17.9
On 28 mei 2005, at 02:21, Kris Kirby wrote:
Trying to make a mirror of two slices, but I seem to be running into some
issues here. I'm not subscribed to -questions, so please CC me on all replies.
Maybe it's a good idea to subscribe cuz then you would have been able
to read what we wrote about this
On 28 mei 2005, at 09:01, [EMAIL PROTECTED] wrote:
[...]
you're welcome
maybe the complete vinum chapter should be removed from the handbook?
Arno
Perhaps the Vinum chapter should say up front that Vinum works with
FreeBSD 4.x but not with 5.x
jd
yeah better idea
Arno
[...]
you're welcome
maybe the complete vinum chapter should be removed from the handbook?
Arno
Perhaps the Vinum chapter should say up front that Vinum works with FreeBSD
4.x but not with 5.x
jd
Janos Dohanics
[EMAIL PROTECTED]
http://www.3dresearch.com/
On 27 mei 2005, at 16:38, [EMAIL PROTECTED] wrote:
[...]
not very encouraging... Is it a RAID 5 you were able to make work
under 5.4?
jd
hey
yeah a 3x160 GB RAID5 and 2x80 GB RAID 1 in FreeBSD 5.4-p1
I get the same error message every time I start vinum or whenever I
execute a command
On 26 mei 2005, at 23:10, jd wrote:
I am trying to set up Vinum on a new system, and I get the error
message:
vinum: Inappropriate ioctl for device.
Here are the details:
...
Vinum used to work beautifully under 4.11; I'm wondering what I need to
change to make it work under 5.4?
On 27 mei 2005, at 16:13, [EMAIL PROTECTED] wrote:
[...]
Go back to 4.11!
vinum is a nightmare in 5.4
and gvinum is not nearly mature enough...
I do have it running but every update it takes me 2 days to get the
RAIDs back up
Arno
Arno,
not very encouraging... Is it a RAID 5 you
FreeBSD questions mailing list wrote:
I tried gvinum too but that doesn't have the setstate nor the rebuildparity
command, and still you can't stop gvinum (gvinum stop doesn't
work, nor does kldunload geom_vinum.ko)
try gmirror for raid 1. it worked great for me.
could gmirror and gstripe
jd wrote:
I am trying to set up Vinum on a new system, and I get the error message:
vinum: Inappropriate ioctl for device.
Here are the details:
- FreeBSD 5.4-RELEASE
I'd be glad if someone steps in and says I'm wrong, but AFAIK vinum is
not supported anymore on new releases.
At least in my
On Friday, 22 April 2005 at 16:11:55 -0400, Timothy Radigan wrote:
Hi all,
Ok, I'm still having trouble with vinum, I got it to load at start, but the
vinum.autostart=YES in /boot/loader.conf returns a "vinum: no drives
found" message.
I had the mirrored set up and running before the reboot
Timothy Radigan wrote:
I know this topic has come up before, but how in the world do you get vinum
to load AND start itself at boot time so I don't have to repair my mirrored
volumes every reboot?
I have tried to add start_vinum=YES to /etc/rc.conf but that ends in a
'panic: dangling vnode' error.
On Friday, April 22, 2005, at 08:37AM, Timothy Radigan [EMAIL PROTECTED]
wrote:
I also tried to add vinum_load=YES to
/boot/loader.conf
See page 239 of The Complete FreeBSD (page 19 of the PDF)
http://www.vinumvm.org/cfbsd/vinum.pdf
also add vinum.autostart=YES to /boot/loader.conf
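(Putting the pieces from this thread together, the relevant /boot/loader.conf
lines for old vinum are the two below; loader.conf values are conventionally
quoted. start_vinum="YES" in /etc/rc.conf is the alternative that the earlier
message reported panicking.)
  vinum_load="YES"
  vinum.autostart="YES"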
On Fri, 2005-04-22 at 21:11, Timothy Radigan wrote:
Hi all,
Ok, I'm still having trouble with vinum, I got it to load at start, but the
vinum.autostart=YES in /boot/loader.conf returns a "vinum: no drives
found" message.
I had the mirrored set up and running before the reboot and the file
Lowell Gilbert wrote:
Chuck Robey [EMAIL PROTECTED] writes:
Sorry, I'm having a miserable time trying to get vinum working on my
amd64 system. Vinum tells me that it can't load the kernel (vinum:
Kernel module not available: No such file or directory). Gvinum
simply refuses to take any commands
I upgraded to 5.3 on one system a while ago. And
when it boots up vinum panics the system on startup
with this message:
panic: unmount: dangling vnode
I found that if I boot in single user mode and
mount / to make it rw, then start vinum, everything
is fine.
I just patched the kernel
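(The single-user recovery sequence described above, spelled out; this assumes
vinum is compiled in or loadable and that fstab entries for the vinum volumes
already exist.)
  boot -s               at the loader prompt, to come up in single-user mode
  # fsck -p
  # mount -u /          remount the root file system read-write
  # vinum start
  # mount -a
  # exit                continue to multi-user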
On Sun, 2005-03-27 at 16:59, Ean Kingston wrote:
On March 27, 2005 10:35 am, Robert Slade wrote:
Hi,
I have managed to setup a vinum volume using 2 striped disks, the
volume
is created and I can do newfs on it and mount it.
However, when I set start_vinum=YES in rc.conf, vinum
On March 27, 2005 10:35 am, Robert Slade wrote:
Hi,
I have managed to setup a vinum volume using 2 striped disks, the volume
is created and I can do newfs on it and mount it.
However, when I set start_vinum=YES in rc.conf, vinum loads then I get
panic, followed by hanging vnode.
I'm using
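(For reference, a two-disk striped volume of the sort being set up here is
usually written along these lines; the device names and sizes are placeholders,
not the poster's layout. Striped plexes want subdisks of equal length, normally
a multiple of the stripe size.)
  drive s1 device /dev/ad4s1e
  drive s2 device /dev/ad6s1e
  volume stripe
    plex org striped 512k
      sd length 30g drive s1
      sd length 30g drive s2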
On Sun, 2005-03-27 at 16:59, Ean Kingston wrote:
On March 27, 2005 10:35 am, Robert Slade wrote:
Hi,
I have managed to setup a vinum volume using 2 striped disks, the volume
is created and I can do newfs on it and mount it.
However, when I set start_vinum=YES in rc.conf, vinum loads
On 09 mrt 2005, at 22:11, Benjamin Keating wrote:
Hey all. Running FreeBSD 5.3 with GENERIC kernel. This isn't high
priority, but I've never had problems with Vinum before and this one's
got me stumped.
### /etc/vinum.conf ###
bigbang# cat
Greg 'groggy' Lehey wrote:
On Thursday, 3 March 2005 at 15:35:31 -0600, matt virus wrote:
Hi all:
I have a FBSD 5.2.1 box running vinum. 7 *160gb drives in a raid5 array.
I can post specific errors and logs and such later, i'm away from the
box right now --- anybody have any thoughts ?
How
On Thursday, 3 March 2005 at 15:35:31 -0600, matt virus wrote:
Hi all:
I have a FBSD 5.2.1 box running vinum. 7 *160gb drives in a raid5 array.
I can post specific errors and logs and such later, i'm away from the
box right now --- anybody have any thoughts ?
How about
I have a box with DPT PM2044 SmartCacheIV UW-SCSI PCI cards which can do
RAID-5 in hardware, but I'd have to use the DOS volume manager to set up
the array. I have heard reports that vinum would be faster than using the
native card. Is this true?
Doubtful, though I have heard that there are
On Wed, Feb 16, 2005 at 09:58:17AM -0500, Ean Kingston wrote:
I have a box with DPT PM2044 SmartCacheIV UW-SCSI PCI cards which can do
RAID-5 in hardware, but I'd have to use the DOS volume manager to set up
the array. I have heard reports that vinum would be faster than using the
native
On Wed, Feb 16, 2005 at 09:58:17AM -0500, Ean Kingston wrote:
I have a box with DPT PM2044 SmartCacheIV UW-SCSI PCI cards which can do
RAID-5 in hardware, but I'd have to use the DOS volume manager to set up
the array. I have heard reports that vinum would be faster than using the
Peter C. Lai wrote:
I have a box with DPT PM2044 SmartCacheIV UW-SCSI PCI cards which can do
RAID-5 in hardware, but I'd have to use the DOS volume manager to set up
the array. I have heard reports that vinum would be faster than using the
native card. Is this true? Should I not bother with
[Format recovered--see http://www.lemis.com/email/email-format.html]
X-Mailer: SquirrelMail/1.4.3a
This seems to have difficulty wrapping quotes.
On Wednesday, 16 February 2005 at 10:52:24 -0500, Ean Kingston wrote:
On Wed, Feb 16, 2005 at 09:58:17AM -0500, Ean Kingston wrote:
I have a box
On Thu, Feb 17, 2005 at 09:44:51AM +1030, Greg 'groggy' Lehey wrote:
snip
Recall that there are no real hardware RAID controllers on the
market. The difference is whether you have a special processor on the
controller card or not. To determine which is faster, you need to
compare the
On Wed, 9 Feb 2005, Marc G. Fournier wrote:
I read that somewhere, but then every example shows 256k as being the stripe
size :( Now, with a 5-drive RAID5 array (which I'll be moving that server
to over the next couple of weeks), 256k isn't an issue? Or is there
something better I should set
On Wed, Feb 09, 2005 at 02:32:30AM -0400, Marc G. Fournier wrote:
Is there a command that I can run that provides me the syscall/sec value,
that I could use in a script? I know vmstat reports it, but is there an
easier way than having to parse the output? A Perl module maybe, that
already
Marc G. Fournier wrote:
Self-followup .. the server config is as follows ... did I do maybe
mis-configure the array?
# Vinum configuration of neptune.hub.org, saved at Wed Feb 9 00:13:52 2005
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device
Olivier Nicole wrote:
All servers run RAID5 .. only one other is using vinum, the other 3 are
using hardware RAID controllers ...
Come on, of course a software solution will be slower than a hardware
solution. What would you expect? :))
(Given the same disk type/speed/controller...)
On Wed, 9 Feb 2005, Mark A. Garcia wrote:
Marc G. Fournier wrote:
Self-followup .. the server config is as follows ... did I do maybe
mis-configure the array?
# Vinum configuration of neptune.hub.org, saved at Wed Feb 9 00:13:52 2005
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive
still getting this:
# vmstat 5
 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre   flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy id
11 2 0 3020036 267944  505   2   1   1 680  62   0   0  515 4005 918  7 38 55
19 2 0 3004568 268672  242   0   0
On Feb 9, 2005, at 6:34 PM, Marc G. Fournier wrote:
Most odd, there definitely has to be a problem with the Dual-Xeon
system ... doing the same vmstat on my other vinum based system,
running more, but on a Dual-PIII shows major idle time:
# vmstat 5
procs memory page
On Wed, 9 Feb 2005, Chad Leigh -- Shire.Net LLC wrote:
On Feb 9, 2005, at 6:34 PM, Marc G. Fournier wrote:
Most odd, there definitely has to be a problem with the Dual-Xeon system
... doing the same vmstat on my other vinum based system, running more, but
on a Dual-PIII shows major idle time:
and it performs worse than any of
my other servers, and I have less running on it than the other servers ...
What are you other servers? What RAID system/level?
Of course a software RAID5 is slower than a plain file system on a
disk.
Olivier
___
On Tuesday, 8 February 2005 at 23:21:54 -0400, Marc G. Fournier wrote:
I have a Dual-Xeon server with 4G of RAM, with its primary file system
consisting of 4x73G SCSI drives running RAID5 using vinum ... the
operating system is currently FreeBSD 4.10-STABLE #1: Fri Oct 22 15:06:55
ADT 2004
On Wed, 9 Feb 2005, Olivier Nicole wrote:
and it performs worse than any of
my other servers, and I have less running on it than the other servers ...
What are you other servers? What RAID system/level?
All servers run RAID5 .. only one other is using vinum, the other 3 are
using hardware RAID
All servers run RAID5 .. only one other is using vinum, the other 3 are
using hardware RAID controllers ...
Come on, of course a software solution will be slower than a hardware
solution. What would you expect? :))
(Given the same disk type/speed/controller...)
Olivier
Self-followup .. the server config is as follows ... did I do maybe
mis-configure the array?
# Vinum configuration of neptune.hub.org, saved at Wed Feb 9 00:13:52 2005
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
volume vm
plex
On Wed, 9 Feb 2005, Greg 'groggy' Lehey wrote:
On Tuesday, 8 February 2005 at 23:21:54 -0400, Marc G. Fournier wrote:
I have a Dual-Xeon server with 4G of RAM, with its primary file system
consisting of 4x73G SCSI drives running RAID5 using vinum ... the
operating system is currently FreeBSD
The more I'm looking at this, the less I can believe my 'issue' is with
vinum ... based on one of my other machines, it just doesn't *feel* right
I have two servers that are fairly similar in config ... both running
vinum RAID5 over 4 disks ... one is the Dual-Xeon that I'm finding
In the last episode (Feb 09), Marc G. Fournier said:
On Wed, 9 Feb 2005, Greg 'groggy' Lehey wrote:
On Tuesday, 8 February 2005 at 23:21:54 -0400, Marc G. Fournier wrote:
I have a Dual-Xeon server with 4G of RAM, with its primary file
system consisting of 4x73G SCSI drives running RAID5 using
On Tue, 8 Feb 2005, Dan Nelson wrote:
Details on the array's performance, I think. Software RAID5 will
definitely have poor write performance (logging disks solve that
problem but vinum doesn't do that), but should have excellent read
rates. From this output, however:
systat -v output help:
Is there a command that I can run that provides me the syscall/sec value,
that I could use in a script? I know vmstat reports it, but is there an
easier way than having to parse the output? A Perl module maybe, that
already does it?
Thanks ...
On Wed, 9 Feb 2005, Marc G. Fournier wrote:
On Tue,
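(One way to get at that number without a Perl module, as a rough sketch: take
the second sample from vmstat and pull out the "sy" column under "faults", which
is system calls per second. Counting fields from the end keeps it independent of
how many disk columns appear.)
  #!/bin/sh
  # syscalls/sec averaged over a 5-second interval
  vmstat 5 2 | tail -1 | awk '{ print $(NF-4) }'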
-- quoting Faisal Ali --
I really tried my best to follow the FreeBSD Handbook documentation to
set up a bootable RAID-1 volume, I just can't seem to understand Section
17.9.2. I am working with 5.3 i386 Release.
Since I had problems with vinum under 5.3 as well I successfully
Greg 'groggy' Lehey writes:
There was once an error in the stripe size calculations that meant
that there were holes in the plexes. Maybe it's still there (old
Vinum is not being maintained). But you should have seen that in the
console messages at create time.
Vinum reports the
On Monday, 6 December 2004 at 23:44:59 +0100, Markus Hoenicka wrote:
Greg 'groggy' Lehey writes:
There was once an error in the stripe size calculations that meant
that there were holes in the plexes. Maybe it's still there (old
Vinum is not being maintained). But you should have seen that
On Monday, 6 December 2004 at 0:28:01 +0100, Markus Hoenicka wrote:
Hi all,
I'm trying to set up vinum on a freshly installed FreeBSD 5.3-BETA7
box. The system is installed on da0. I want to use three 18G SCSI
drives to create a vinum volume.
For some reason vinum believes the disks hold
Greg 'groggy' Lehey writes:
You don't say whether you're using vinum or gvinum. I've never seen
this problem before, but if you're getting incorrect subdisk sizes,
try specifying them explicitly:
sd length 35840952s drive ibma
I wonder whether the problem is related to
On Monday, 6 December 2004 at 3:05:31 +0100, Markus Hoenicka wrote:
Hi all,
now that I can use the full capacity of my disks, I'm stuck again. I'm
trying to set up a raid5 from three SCSI disks (I know that a serious
raid5 should use five disks or more, but I have to make do with three at
Chris Smith wrote:
Hi,
I've just built a machine with a vinum root successfully. All vinum
sets show that they are up and working. There are two ATA disks in a
RAID1 root formation.
Some questions?
1. The set has just failed completely (sorry it isn't up and working
now) on the first reboot. It
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Sunday 21 November 2004 14:36, wrote:
[...]
From my point of view, a vinum root partition is really useless. You will
never have availability comparable to hardware RAID (I mean,
downtime), and will have a lot of trouble with booting from
On Tue, 2004-11-16 at 08:51, Chris Smith wrote:
[snip]
... Can you boot off a striped volume, and will it
benefit me at all to make it a striped volume rather than a
concat?
I don't think you can boot off a vinum partition, because you have to
load vinum *after* the kernel is running.
On Tuesday, 16 November 2004 at 23:10:22 -1000, Gary Dunn wrote:
On Tue, 2004-11-16 at 08:51, Chris Smith wrote:
[snip]
... Can you boot off a striped volume, and will it
benefit me at all to make it a striped volume rather than a
concat?
I don't think you can boot off a vinum
On Tuesday, 16 November 2004 at 18:51:38 +, Chris Smith wrote:
Hi,
I've just built a machine with a vinum root successfully. All vinum
sets show that they are up and working. There are two ATA disks in a
RAID1 root formation.
Some questions?
How about some details first? There's not
Kim Helenius wrote:
Now I can newfs /dev/vinum/vinum0, mount it, use it, etc. But when I do
vinum stop, vinum start, vinum stop, and vinum start something amazing
happens. Vinum l after this is as follows:
2 drives:
D d2           State: up       /dev/ad5s1d     A: 286181/286181 MB
On Thu, 11 Nov 2004, Artem Kazakov wrote:
Kim Helenius wrote:
Now I can newfs /dev/vinum/vinum0, mount it, use it, etc. But when I do
vinum stop, vinum start, vinum stop, and vinum start something amazing
happens. Vinum l after this is as follows:
2 drives:
D d2
On Thu, Nov 11, 2004 at 12:00:52PM +0200, Kim Helenius wrote:
Greetings. I posted earlier about problems with vinum raid5 but it
appears it's not restricted to that.
Are you running regular vinum on 5.x? It is known broken. Please use
'gvinum' instead.
There is one caveat: the gvinum that
On Thu, 11 Nov 2004, Stijn Hoop wrote:
On Thu, Nov 11, 2004 at 12:00:52PM +0200, Kim Helenius wrote:
Greetings. I posted earlier about problems with vinum raid5 but it
appears it's not restricted to that.
Are you running regular vinum on 5.x? It is known broken. Please use
'gvinum'
On Thu, Nov 11, 2004 at 03:32:58PM +0200, Kim Helenius wrote:
On Thu, 11 Nov 2004, Stijn Hoop wrote:
On Thu, Nov 11, 2004 at 12:00:52PM +0200, Kim Helenius wrote:
Greetings. I posted earlier about problems with vinum raid5 but it
appears it's not restricted to that.
Are you running
Stijn Hoop wrote:
Greetings. I posted earlier about problems with vinum raid5 but it
appears it's not restricted to that.
Are you running regular vinum on 5.x? It is known broken. Please use
'gvinum' instead.
There is one caveat: the gvinum that shipped with 5.3-RELEASE contains an
error in
Hi,
On Thu, Nov 11, 2004 at 04:53:39PM +0200, Kim Helenius wrote:
Stijn Hoop wrote:
I don't know the state of affairs for 5.2.1-RELEASE, but in 5.3-RELEASE
gvinum is the way forward.
Thanks again for answering. Agreed, but there still seems to be a long
way to go. A lot of 'classic'
Stijn Hoop wrote:
Hi,
On Thu, Nov 11, 2004 at 04:53:39PM +0200, Kim Helenius wrote:
Stijn Hoop wrote:
I don't know the state of affairs for 5.2.1-RELEASE, but in 5.3-RELEASE
gvinum is the way forward.
Thanks again for answering. Agreed, but there still seems to be a long
way to go. A lot of
[Format recovered--see http://www.lemis.com/email/email-format.html]
Text wrapped.
On Thursday, 11 November 2004 at 12:00:52 +0200, Kim Helenius wrote:
Greetings. I posted earlier about problems with vinum raid5 but it
appears it's not restricted to that:
Let's make a fresh start with vinum
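("Fresh start" here usually means wiping the on-disk vinum configuration and
re-creating it from the description file; a hedged sketch of that sequence, with
the config path illustrative. resetconfig destroys every vinum object on the
system and asks for explicit confirmation first, so it is very much a last
resort.)
  # vinum resetconfig
  # vinum create /etc/vinum.conf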
same exact error. Tried without, tried -O 2, no dice! :-)
-matt
Martin Hepworth wrote:
Matt
what happens if you drop the -O flag. Newfs will default to ufs2 in the
5.x versions. or even do '-O 2'???
--
Martin Hepworth
Snr Systems Administrator
Solid State Logic
Tel: +44 (0)1865 842300
matt
On 07 nov 2004, at 00:19, Greg 'groggy' Lehey wrote:
On Sunday, 31 October 2004 at 14:03:18 +0100, FreeBSD questions
mailing list wrote:
On 31 okt 2004, at 07:41, matt virus wrote:
matt virus wrote:
Hi all!
I have (8) maxtor 160gb drives I plan on constructing a vinum raid5
array with.
the
On Monday, 8 November 2004 at 1:01:04 -0600, matt virus wrote:
Hi All -
with some help from people on this list, I managed to get vinum and
raid5 all figured out!
I had a 4 * 160gb raid5 array running perfectly. When I ventured home
this past weekend, I found another ATA controller and
On Sunday, 31 October 2004 at 14:03:18 +0100, FreeBSD questions mailing list
wrote:
On 31 okt 2004, at 07:41, matt virus wrote:
matt virus wrote:
Hi all!
I have (8) maxtor 160gb drives I plan on constructing a vinum raid5
array with.
the devices are:
ad4 ... ad11
All drives have been
nobody ???
matt virus wrote:
Hi all!
I have (8) maxtor 160gb drives I plan on constructing a vinum raid5
array with.
the devices are:
ad4 ... ad11
All drives have been fdisk'd and such,
ad4s1d ... ad11s1d
The first step of setting up vinum is changing the disklabel
disklabel -e /dev/ad4
The
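(To flesh out that step: the partition vinum will use must have its fstype set
to "vinum" in the label being edited. The size and offset below are placeholders;
the last field is the part that matters.)
    e:  312576642        0    vinum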
On 31 okt 2004, at 07:41, matt virus wrote:
nobody ???
OK I'll give it a try. I have a vinum RAID 1 running though, but the
way to get it running isn't very different.
matt virus wrote:
Hi all!
I have (8) maxtor 160gb drives I plan on constructing a vinum raid5
array with.
the devices are:
h0444lp6 [EMAIL PROTECTED] wrote:
Dear list,
I do wonder a little about the difference in the size for my
/dev/vinum/usr reported by "vinum list" and "df -h".
I concatenated three 1303MB partitions. vinum list shows as expected a
size of 3909MB for volume usr, but df -h shows me only
Why didn't you answer yes to fsck? I don't think it would have removed
anything...
Few keywords you might want to check:
scan_ffs
fsck_ffs
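(For the BAD SUPER BLOCK case specifically, a hedged example of where those
keywords lead: fsck_ffs can be pointed at an alternate superblock with -b. 32 is
the customary first alternate for UFS1; UFS2 keeps its backups at different
offsets.)
  # fsck_ffs -b 32 /dev/vinum/mirror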
Thomas Rasmussen wrote:
Have a vinum setup... two disks...
Vinum.conf
drive d1 device /dev/ad2s1e
drive d2 device /dev/ad3s1e
volume mirror
plex org
On Friday 22 October 2004 15:23, Thomas Rasmussen wrote:
Have a vinum setup... two disks...
when I try to run fsck /dev/vinum/mirror
it outputs..
BAD SUPER BLOCK: VALUES IN SUPER BLOCK DISAGREE WITH THOSE IN FIRST
ALTERNATE
CANNOT FIGURE OUT FILE SYSTEM PARTITION
Was it created under 4.x
[ Stuff deleted ]
Thanks for the pointer - the setupstate keyword did the trick. And my
apologies for not RTFM :) *goes off with burning cheeks*
If you're still interested in the panic output I'll try and find some time
in the near future to try and get hold of it.
Cheers
Dave
On 2004.10.11 10:43:02 +0930, Greg 'groggy' Lehey wrote:
[Format recovered--see http://www.lemis.com/email/email-format.html]
Overlong lines.
On Sunday, 10 October 2004 at 19:23:24 +0200, Mark Frasa wrote:
Hello,
After installing FreeBSD 5.2.1, because 4.10 and even 5.1 did not
On Monday, 11 October 2004 at 11:26:13 +0200, Mark Frasa wrote:
On 2004.10.11 10:43:02 +0930, Greg 'groggy' Lehey wrote:
[missing attribution to Greg Lehey]
On Sunday, 28 December 2003 at 20:00:04 -0800, Micheas Herman wrote:
This may belong on current, I upgraded to 5.2 from 5.1 and my
[Format recovered--see http://www.lemis.com/email/email-format.html]
Overlong lines.
On Sunday, 10 October 2004 at 19:23:24 +0200, Mark Frasa wrote:
Hello,
After installing FreeBSD 5.2.1, because 4.10 and even 5.1 did not
recognize my SATA controller, I CVS-upped and upgraded to 5.2.1-p11
Greg 'groggy' Lehey wrote:
On Thursday, 7 October 2004 at 18:11:52 +0200, Paul Everlund wrote:
Hi Greg and list!
Thank you for your reply!
I did have two 120 GB disk drives in vinum as a striped raid.
Can you be more specific?
My vinum.conf looks like this, if this is to be more specific:
On Friday, 8 October 2004 at 14:52:48 +0200, Paul Everlund wrote:
Greg 'groggy' Lehey wrote:
On Thursday, 7 October 2004 at 18:11:52 +0200, Paul Everlund wrote:
Can you be more specific?
My vinum.conf looks like this, if this is to be more specific:
drive ad5 device /dev/ad5s1e