> * What problems are you having?
fsck reports dozens and dozens of errors and the filesystem as corrupt
whenever I start Plex 1.
> * Which version of FreeBSD are you running?
5.2.1-RELEASE
> * Have you made any changes to the system sources, including Vinum?
No, I recompiled the
Stijn Hoop <[EMAIL PROTECTED]> writes:
> On Wed, Aug 25, 2004 at 12:08:53PM +0200, Christian Laursen wrote:
>
> > When reviving a disk the data on that disk is calculated from the data and the
> > parity on the other disks.
>
> Yes, but the parity should be recalculated at the same time, right?
Hi,
thanks for your response, I didn't notice it at first because it only
went to the mailing list :)
On Wed, Aug 25, 2004 at 12:08:53PM +0200, Christian Laursen wrote:
> Stijn Hoop <[EMAIL PROTECTED]> writes:
> > I was wondering about the vinum 'rebuildparity' com
Stijn Hoop <[EMAIL PROTECTED]> writes:
> I was wondering about the vinum 'rebuildparity' command, especially the
> times when one needs to use this.
I run rebuildparity if checkparity finds any errors after unclean shutdowns.
> The problem is that I can't find an
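The check-then-rebuild routine described above can be sketched as a short transcript; the plex name is hypothetical, and both subcommands operate on RAID-5 plexes:

```
# Verify RAID-5 parity after an unclean shutdown (plex name hypothetical)
vinum checkparity raid5.p0
# If checkparity reports errors, rewrite the parity blocks from the data
vinum rebuildparity raid5.p0
```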
Hi,
I was wondering about the vinum 'rebuildparity' command, especially the
times when one needs to use this.
I just recently found out, based on reading the RAIDframe documentation,
that you're supposed to recheck/rebuild the parity after every disk crash.
As I hadn't been d
ive eight(p1.s2)
> drive nine (p1.s3)
> drive ten (p1.s4)
> drive twelve (p1.s5)
That's impossible. You can't put plexes on drives; they've got to be
on subdisks, which you don't mention. This is why I ask
(http://www.vinumvm.org/vinum/how-to-d
, so i went to
replace it, created a configfile:
drive twelve device /dev/ad11s1h
# vinum create configfile
# vinum start array.p1.s5
when p1.s5 finished reviving, I got all kinds of fsck errors such
as "INCORRECT BLOCK COUNT", "EXCESSIVE BAD BLOCKS", etc.
Fortunately, I had
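For reference, a fuller sketch of the replacement procedure quoted above, assuming the rest of the array is up; the verification steps at the end are an addition, not part of the original post:

```
# configfile contains just the replacement drive definition:
#   drive twelve device /dev/ad11s1h
vinum create configfile
# Rebuild the subdisk on the new drive from data + parity on the others
vinum start array.p1.s5
# Once the revive completes, verify before mounting read-write
vinum checkparity array.p1
fsck -n /dev/vinum/array
```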
Greg 'groggy' Lehey said:
> The current status of Vinum in -CURRENT is that it is being
> rewritten. The introduction of the GEOM layer has badly broken Vinum,
> and it has been decided better to rewrite it than to fix it. It'll be
> a while before it's smooth a
On Thursday, 8 July 2004 at 2:21:30 -0500, Mario Doria wrote:
> Hi,
>
> Another vinum question. I have a machine running 5-CURRENT sources from
> yesterday (yes I know the dangers of running CURRENT and I did read the
> mailing list archives).
Well, the issue was discussed ther
On Tuesday, 10 August 2004 at 2:45:12 -0700, Darren Pilgrim wrote:
> On a machine running RELENG_4_8, I have two partitions, ad0s1d and
> ad4s1e, configured as a mirror using vinum. I need to move one of the
> drives to another controller, resulting in ad4 changing to ad2. I read
>
> I am wanting to set up a vinum configuration so that I have a spanned
> volume containing a large partition on one drive, and a second entire
> disk. I am a little confused whether I need to build a striped or
> concat volume.
>
> The usable size on one disk is different from
I am wanting to set up a vinum configuration so that I have a spanned
volume containing a large partition on one drive, and a second entire
disk. I am a little confused whether I need to build a striped or
concat volume.
The usable size on one disk is different from the usable size on the
other
On a machine running RELENG_4_8, I have two partitions, ad0s1d and
ad4s1e, configured as a mirror using vinum. I need to move one of the
drives to another controller, resulting in ad4 changing to ad2. I read
through the vinum man page, saw the move command, then read elsewhere
that vinum
Mentioned in the past week or so that I have two 160G SATA drives with
one slice each, each slice has 1G reserved (not currently used) for
swap, the remainder s1d is for vinum. The two are striped with "vinum
stripe -v /dev/ad4s1d /dev/ad6s1d".
At boot vinum often does not reme
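When vinum forgets its configuration at boot, one of the first things to check is that it is being started by rc at all; a minimal sketch (this addresses startup ordering only, not necessarily the poster's specific symptom):

```
# /etc/rc.conf -- have rc load and start vinum before fstab mounts
start_vinum="YES"
```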
On Aug 5, 2004, at 12:54 AM, Stijn Hoop wrote:
# vinum list
2 drives:
D vinumdrive1 State: up /dev/ad6s1d A: 156041/156041 MB (100%)
D vinumdrive0 State: up /dev/ad4s1d A: 156041/156041 MB (100%)
0 volumes:
0 plexes:
0 subdisks:
#
How early in the boot is this? Have you done '
On Aug 5, 2004, at 9:28 AM, David Kelly wrote:
Am tempted to rerun "stripe -v /dev/ad4s1d /dev/ad6s1d" again but
would rather not lose the data on the volume.
Update: did exactly that described above and my old data survived!
--
David Kelly N4HHE, [EMAIL PROTECTED]
On Wed, Aug 04, 2004 at 10:40:10PM -0500, David Kelly wrote:
> I'm not so much moving the vinum drives as replacing the
> system drive FreeBSD 5.2.1 was installed upon. The same system which
> created my striped vinum volume.
>
> System drive was a parallel ATA 40G
I'm not so much moving the vinum drives as replacing the
system drive FreeBSD 5.2.1 was installed upon. The same system which
created my striped vinum volume.
System drive was a parallel ATA 40G. Two SATA 160G drives were used to
create a striped vinum volume with the simple vinum
All,
I have two 160GB disks I want to mirror (ad0 and ad2). I loaded a fresh
install of FreeBSD 4.10-RELEASE on the boot disk, and am following the
mirroring instructions at:
http://devel.reinikainen.net/docs/how-to/Vinum/
I have successfully added one half of the mirror and now am
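A mirror configuration for two whole disks like ad0/ad2 typically looks something like the following; the partition letter and drive names are assumptions here, and the linked how-to uses its own naming:

```
# Hypothetical vinum config mirroring ad0s1e onto ad2s1e
drive boot0 device /dev/ad0s1e
drive boot1 device /dev/ad2s1e
volume mirror
  plex org concat
    sd length 0 drive boot0
  plex org concat
    sd length 0 drive boot1
```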
> drive e device /dev/da5s1e
> drive f device /dev/da6s1e
> volume raid10
> plex org striped 512k
> sd length 0 drive a
> sd length 0 drive b
> sd length 0 drive c
> plex org striped 512k
> sd length 0 drive d
> sd length 0 drive e
> sd length 0 d
On Saturday, 24 July 2004 at 17:03:41 +1000, Chris Keladis wrote:
> Hi all,
>
> I was wondering what Vinum configurations others are using and what is
> a good balance between performance/redundancy?
>
> I've gone with RAID10 (2 striped plexes with 3 subdisks each. The
&
Hi all,
I was wondering what Vinum configurations others are using and what is
a good balance between performance/redundancy?
I've gone with RAID10 (2 striped plexes with 3 subdisks each. The
plexes are mirrored to each other).
vinum.conf:
drive a device /dev/da1s1e
drive b device /dev/d
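Combining this with the drive list quoted earlier in the thread (drives a-f, two striped plexes of three subdisks each), the full file presumably continues along these lines; the device names from drive b onward are reconstructed, not copied from the original:

```
drive a device /dev/da1s1e
drive b device /dev/da2s1e   # device names from here on are assumed
drive c device /dev/da3s1e
drive d device /dev/da4s1e
drive e device /dev/da5s1e
drive f device /dev/da6s1e
volume raid10
  plex org striped 512k
    sd length 0 drive a
    sd length 0 drive b
    sd length 0 drive c
  plex org striped 512k
    sd length 0 drive d
    sd length 0 drive e
    sd length 0 drive f
```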
Hi,
I'm playing a bit with vinum. I'm trying to get RAID-0 (striping)
working on 5.2.1-RELEASE. The two hard disks I'm using are slightly
different (38162MB and 39079MB). If I set vinum to use the same size for
both it works nicely, but if I
Steve Shorter wrote:
On Mon, Jul 12, 2004 at 06:40:01AM +0930, Greg 'groggy' Lehey wrote:
I see no drives.
Ideas?
I have concluded that this is the result of some kind
of vinum/hardware incompatibility. The problem in question
occurred during the upgrade to faster disks, sp
On Mon, Jul 12, 2004 at 06:40:01AM +0930, Greg 'groggy' Lehey wrote:
> I see no drives.
>
> > Ideas?
>
I have concluded that this is the result of some kind
of vinum/hardware incompatibility. The problem in question
occurred during the upgrade to faster dis
> I see no drives.
Not sure what you mean; see below.
>
> > Ideas?
>
> http://www.vinumvm.org/vinum/how-to-debug.html
I recreated a simpler situation with just 2 mirrored
drives.
After reboot the vinum lv -r -v reports (*missing *)
drives. But before re
Howdy!
I have 4 identical disks, labels etc are also identical.
It looks like vinum after a reboot does not recognize the drives
properly, as it did immediately after initial configuration.
One drive/subdisk in each plex isn't recognized, and the other one
is duplicated,
start.
vinum: loaded
panic: unmount: dangling vnode
syncing disks, buffers remaining... 220 220 220 220 220 220 220 220 220
220 220 220 220 220 220 220 220 220 220 220 220
Giving up on 183 buffers
uptime: 18s
And that's it
Mario
Hi,
Another vinum question. I have a machine running 5-CURRENT sources from
yesterday (yes I know the dangers of running CURRENT and I did read the
mailing list archives). I think I found a bug, if I add
start_vinum="YES" to /etc/rc.conf, at boot I get a panic with a message
sayi
On Thursday, 8 July 2004 at 1:57:35 -0500, Mario Doria wrote:
> Hi,
>
> I'm running 5-CURRENT on a machine with vinum enabled, it works perfectly
> but the command "vinum ld" gives only this output:
>
> vinum -> ld
> D tecdigital2 State: up /dev
Hi,
I'm running 5-CURRENT on a machine with vinum enabled, it works perfectly
but the command "vinum ld" gives only this output:
vinum -> ld
D tecdigital2 State: up /dev/da3s1e A: 0/35016 MB (0%)
D tecdigital1 State: up /dev/da2s1e A: 0/35
In the last episode (Jul 04), Stephan van Beerschoten said:
> Is there an implementation of an LVM in FreeBSD ? Vinum is not
> sufficient. I am looking for a true LVM where creating and resizing
> volumes is a must. Something like Solaris Disksuite with
> softpartitioning is something
On Sunday, 4 July 2004 at 21:06:20 +0200, Stephan van Beerschoten wrote:
> Is there an implementation of an LVM in FreeBSD ?
> Vinum is not sufficient. I am looking for a true LVM where creating and
> resizing volumes is a must.
And what's missing in Vinum?
> Something like
Is there an implementation of an LVM in FreeBSD ?
Vinum is not sufficient. I am looking for a true LVM where creating and
resizing volumes is a must. Something like Solaris Disksuite with
softpartitioning is something I could live with already, but it just
does not seem to exist.
Am I correct
it.
Do I need to look towards vinum, or is there something else I am missing ?
The server won't be built until Q4 this year, so I'm hoping to utilize
any 5-RELEASE that may come up then since I'm already a fan of the
current -CURRENT which I actively use on my
Everything is fine until "newfs -U -O2 /dev/vinum/..." gives the "can't read
old UFS1 superblock" error. I'm using FreeBSD 5.2.1 cvsup'd to RELENG_5_2
in mid June 2004.
Thanks,
Richard Holmes
Lambda Software Development
Rocklin, CA
(916) 390-2057 (cell)
[E
l the discs. If this happens to be a disc that is
part of a RAID-0 array, then when vinum starts up it detects that one of the
discs have disappeared and (correctly) marks the array as crashed. There is
no "proper" way to recover from a crashed RAID-0 array - your data is
normally lost fore
Benjamin P. Keating wrote:
> I've found on the net that I can switch the state by doing:
>
> $ vinum setstate up backup.p0 backup.p0.s3
Ouch, this is a bad move. You just told vinum to start using the stale (=out
of date data) disc as if it was up to date and nothing was
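The usual alternative to forcing the state, implied by the reply above, is to let vinum revive the stale subdisk so its contents are copied up to date first (names taken from the quoted command):

```
# Copy current data onto the stale subdisk instead of forcing it up
vinum start backup.p0.s3
# The plex comes up by itself once the revive finishes
```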
This is a good one.
$ uname -a
FreeBSD 4.7-RELEASE FreeBSD 4.7-RELEASE #0: Wed Oct 9 15:08:34 GMT
2002 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/GENERIC
i386
First I'll explain the stale/degraded issue. Then ask about fsck on a
raid5 vinum. My goal is to have the vinum volume check o
Hi,
I just had a crash on two of the drives on one vinum RAID-5 volume;
fortunately I could recover the data and build a new one. However, while doing
this I did something 'stupid', which I managed to fix, but I wanted to know
what the right way would have been.
Like I said one RAID-5
synrat wrote:
thanx Greg, you're right as always.
I booted into single user from start up and sure enough everything was
editable then.
I got another problem though.
I set up 2 disks for vinum and want to allocate existing partitions to
vinum mirrored volumes. I left 16 blocks at the beginning
thanx Greg, you're right as always.
I booted into single user from start up and sure enough everything was
editable then.
I got another problem though.
I set up 2 disks for vinum and want to allocate existing partitions to
vinum mirrored volumes. I left 16 blocks at the beginning, moved
[Format recovered--see http://www.lemis.com/email/email-format.html]
Output wrapped.
On Thursday, 17 June 2004 at 12:04:01 -0400, synrat wrote:
> 4.8
>
> I'm trying to setup partitions for vinum.
> I am in single user mode, but every time I try
> to modify swap size to
4.8
I'm trying to set up partitions for vinum.
I am in single user mode, but every time I try
to modify the swap size to accommodate the vinum configuration
I get
disklabel: ioctl DIOCWDINFO: open partition would move or shrink
re-edit the label? [y]
#        size   offset    fstype   [fsize bsiz
continues and
> then all of the sudden the computer reboots.
>
> The volume which I try to extract files on is a Vinum volume containing
> 2 Western Digital 36gb SATA raptor disks (striped) connected via a
> Tekram TR822 SATA-controller. Upon reboot it complains about / not being
tract files on is a Vinum volume containing
2 Western Digital 36gb SATA raptor disks (striped) connected via a
Tekram TR822 SATA-controller. Upon reboot it complains about / not being
properly unmounted, and fsck says something about SUPERBLK, or similar.
-
ws01-omi# vinum list
2 driv
On Thursday, 27 May 2004 at 13:12:34 -0400, dave wrote:
> Hello,
> I've got vinum running on 5.2.1 doing raid1 on two ide drives. I was doing a
> installworld in single user mode when it crashed. I was unable to use any
> commands because of missing libraries. I unp
Hello,
I've got vinum running on 5.2.1 doing raid1 on two ide drives. I was doing a
installworld in single user mode when it crashed. I was unable to use any
commands because of missing libraries. I unplugged the master
drive, threw in a spare drive as master, then did a mi
--- Greg 'groggy' Lehey <[EMAIL PROTECTED]> wrote:
> > The error messages I'm seeing are...
> >
> > vinum: dataraid.p0.s1 is stale by force
> > vinum: dataraid.p0 is corrupt
> >
>
>
> OK, here your second subdisk has gone stale, meani
On Friday, 14 May 2004 at 20:22:37 -0500, Doug Poland wrote:
> Hello,
>
> I've been running a striped vinum configuration on 7 2.1GB SCSI drives
> for almost three years. I recently returned from a vacation to find my
> vinum fileserver had crashed. The logs indicated some ty
Hello,
I've been running a striped vinum configuration on 7 2.1GB SCSI drives
for almost three years. I recently returned from a vacation to find my
vinum fileserver had crashed. The logs indicated some type of problem
with my Adaptec 2940AU host adapter. The 2940 seems be working now bu
[Format recovered--see http://www.lemis.com/email/email-format.html]
Quoted text unevenly wrapped.
On Thursday, 13 May 2004 at 15:00:22 -0400, Lee Dilkie wrote:
> On Thursday, May 13, 2004 2:11 PM, Chris Collins wrote:
>>
>> I am trying to setup vinum but I am having a config
On Thursday, 13 May 2004 at 13:10:35 -0500, Chris Collins wrote:
> Hello
>
> I am trying to set up vinum but I am having a config problem I hope somebody
> can answer... I cannot get my /usr/ and /var and for some reason my swap is
> showing at 18G... What am I doing wrong?
>
>
I don't think you want to vinum a swap partition. You can just create the
swap partitions and add them to fstab (or if you run /stand/sysinstall to do
the disklabel, it'll populate fstab for you); the OS will nicely share
amongst all your swap partitions.
and if you only have one driv
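The advice above amounts to an fstab fragment like the following (device names are hypothetical); the kernel interleaves swap across all listed partitions on its own:

```
# /etc/fstab -- plain swap on both disks, no vinum involved
/dev/ad0s1b   none   swap   sw   0   0
/dev/ad2s1b   none   swap   sw   0   0
```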
Hello
I am trying to set up vinum but I am having a config problem I hope somebody
can answer... I cannot get my /usr/ and /var and for some reason my swap is
showing at 18G... What am I doing wrong?
OUTPUT
vinum -> create vinum.conf
1 drives:
D YouCrazy State:
that the extra drive I added
taxed my power supply just a bit too much. When the 5 Barracuda drives were
operational under vinum (and striping made them *all* active at once) the
resultant strain on the PS led to signal errors on the SCSI bus (fast and
wide but not differential). Powering off th
restore a drive and re-try the vinum...
don't know if that a dumb idea or not but it seemed logical).
anyway, i have to figure out a way to get around this read error. or find
out what file(s) it affects so i can avoid trying to copy them. Any pointers
would be welcome (as I fire up google.
On Saturday, 8 May 2004 at 13:37:42 -0400, Lee Dilkie wrote:
> Hi there,
>
> I've been running a 5 disk vinum array (striped, no redundancy) for a few
> months now. It's composed of 5 scsi drives of 4G each. I bought a new 120G
> ide drive, with the intention of copyin
Hi there,
I've been running a 5 disk vinum array (striped, no redundancy) for a few
months now. It's composed of 5 scsi drives of 4G each. I bought a new 120G
ide drive, with the intention of copying over all the files from the vinum
array and retiring the array (the scsi drives are r
On Friday, 23 April 2004 at 14:47:30 -0400, synrat wrote:
> On Fri, 23 Apr 2004, Greg 'groggy' Lehey wrote:
>
>> On Thursday, 22 April 2004 at 11:32:22 -0400, synrat wrote:
>>>
>>> does vinum configuration need to be located
>>> in the beginning o
thanx Greg. like your book.
hope there will be another edition.
Does all this mean that if I don't have ~133kb available
for Vinum in the beginning of the disk before my first ( root )
partition, I can't use Vinum on that disk ? or would it write half
of the configuration in that firs
On Thursday, 22 April 2004 at 11:32:22 -0400, synrat wrote:
>
> does vinum configuration need to be located
> in the beginning of the drive after bootstrap or
> is it possible to store at the end of the drive ?
Currently it must be at the beginning of a drive.
> the reason I ask
does vinum configuration need to be located
in the beginning of the drive after bootstrap or
is it possible to store at the end of the drive ?
the reason I ask is I only have 60kb available in
the beginning and the first partition is /.
I should be able to shrink swap, which is at the end
of the
On Wednesday, 21 April 2004 at 20:34:38 -0400, Shaun T. Erickson wrote:
>> http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-root.html
>
>> http://www.vinumvm.org/cfbsd/vinum.pdf
>
> Ok. I've read both documents, which were quite educational. Thanks. :)
>
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-root.html
http://www.vinumvm.org/cfbsd/vinum.pdf
Ok. I've read both documents, which were quite educational. Thanks. :)
It seems that what I want to do is install to the first system disk, as
normally, and then convert that
Greg 'groggy' Lehey wrote:
On Wednesday, 21 April 2004 at 18:28:47 -0400, Bill Moran wrote:
I believe this is still valid:
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-root.html
Thanks. I just read that chapter, and, while it makes some sense, it
didn't tell me
wo ide disks, during install, and for all
>> partitions? This is just to mirror the system disk, so that we can avoid
>> downtime, and going to backups in case of a disk failure. If it can be
>> done, how do I do it? I've never used vinum before, and only know what
>> it
to backups in case of a disk failure. If it can be
done, how do I do it? I've never used vinum before, and only know what
it is, but nothing about it.
I believe this is still valid:
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-root.html
--
Bill Moran
Potential Te
sk failure. If it can be
done, how do I do it? I've never used vinum before, and only know what
it is, but nothing about it.
I wish I had more than one night to figure this out, but I don't. If it
isn't FreeBSD, he's likely going to want me to install Fedora Core 2
Linux, in
On Sunday, 18 April 2004 at 16:59:59 -0800, Peter A. Giessel wrote:
> I have a rather large (ok, I'm insane, its that large) Vinum array,
> which works fine in 4.9, but crashes in 5.2.1. I don't think its
> vinum's fault, but I could be wrong.
>
> My question is:
I have a rather large (ok, I'm insane, its that large) Vinum array,
which works fine in 4.9, but crashes in 5.2.1. I don't think its
vinum's fault, but I could be wrong.
My question is: any ideas as to why the drives crash when accessed and
can't be labeled (other than my
[please honour Mail-Followup-To:, can't read the list until I solve this]
Hi,
For unknown reasons, vinum has broken itself again (the only weird thing
I did was booting from a different root partition). The only commands I
have given are start/stop whatever (pretty
about the situation that means -f is needed?
In any case, it seems -f does not help. I did a stop on the test/test3 volumes
followed by an 'rm -r -f test'. I received no error message, but 'vinum l'
still lists the test volume - except now it is still in state 'down
On Thursday, 15 April 2004 at 1:21:02 +0200, Peter Schuller wrote:
> I tried:
>
>scode-whitestar# vinum stop test
>scode-whitestar# vinum stop test3
>scode-whitestar# vinum rm -r test
>Can't remove test: Device busy (16)
>
> As I interpret the manpag
Hello,
so I had set up a rudimentary vinum configuration. I eventually rebooted the
machine to make sure everything came up again correctly. It did, but it also
picked up a couple of old drives and volumes that I had previously used for
testing. "vinum l" now yields:
3 drive
Just a follow up on my recent vinum disk replacement question...
I have a two disk mirror and one of the disks died; I'd asked if
I could use dd to copy the good disk to a replacement disk and
was being persistent about whether that would work or not. %-)
I didn't do that (dd).
Instead
b Ellis wrote:
> >>> We have a machine with a vinum mirror, all the partitions except
> >>> the root partition mirrored between two disks. The second
> >>> disk has died and I want to replace it. I can't find a disk exactly
> >>> the same, so I
On Monday, 5 April 2004 at 19:09:46 -0400, Rob Ellis wrote:
> On Tue, Apr 06, 2004 at 08:12:49AM +0930, Greg 'groggy' Lehey wrote:
>> On Monday, 5 April 2004 at 12:42:12 -0400, Rob Ellis wrote:
>>> We have a machine with a vinum mirror, all the partitions except
>
On Tue, Apr 06, 2004 at 08:12:49AM +0930, Greg 'groggy' Lehey wrote:
> On Monday, 5 April 2004 at 12:42:12 -0400, Rob Ellis wrote:
> > We have a machine with a vinum mirror, all the partitions except
> > the root partition mirrored between two disks. The second
> &
On Monday, 5 April 2004 at 12:42:12 -0400, Rob Ellis wrote:
> We have a machine with a vinum mirror, all the partitions except
> the root partition mirrored between two disks. The second
> disk has died and I want to replace it. I can't find a disk exactly
> the same, so I h
We have a machine with a vinum mirror, all the partitions except
the root partition mirrored between two disks. The second
disk has died and I want to replace it. I can't find a disk exactly
the same, so I have a disk that's bigger (80GB, old one was 60GB).
Can I...
- shutdown, r
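One commonly suggested shape for this kind of replacement, sketched here with hypothetical device and volume names (the replies in the thread should be read before trusting data to it):

```
# After shutting down and fitting the 80GB disk in place of the dead 60GB one:
fdisk -BI ad2              # slice the new disk
disklabel -e ad2s1         # make vinum partition(s) at least as large as the old ones
vinum create newdrive.conf # newdrive.conf names the new drive, e.g.
                           #   drive mirror2 device /dev/ad2s1e
vinum start home.p1.s0     # revive each subdisk on the new drive from the good plex
```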
m Yahoo!.
Thank you very much for your reply. I’ve tried to
clean this up so it remains readable when it reaches
you.
Heh. I suppose that's a good enough reason.
> I have a 4.9-RELEASE-p4 system. I have not made
> any changes to any source code. I have 3 vinum
> volumes configured.
[Format recovered--see http://www.lemis.com/email/email-format.html]
Single line paragraph
On Thursday, 1 April 2004 at 0:00:29 +0100, Mike Woods wrote:
> Ok, the query is this: what's the procedure, if any, for upgrading a
> drive used in a vinum volume, i mean could i have a vinum volum
Ok, the query is this: what's the procedure, if any, for upgrading a drive used in a
vinum volume. I mean, could I have a vinum volume with a 20GB disk in it, change it for a
40GB, alter the vinum configuration accordingly, and have it work (after mirroring the
data over of course)?
--
Mike Woods
IT
>>> It has 'historical' reason. I started with vinum, when it was
>>> not possible to mirror root partition (at least I found just
>>> document 'Bootstrapping Vinum: A Foundation for Reliable
>>> Servers' by R.
>>> What you do now depends on the state of the file system.
>>> Hopefully you still have the original contents. In this case,
>> Yes, I have original contents. The volume size is 15GB just as
>> before.
> OK. You've seen le's message.
No. Probably, I missed it.
lk
At 2004-03-29T21:04:19Z, "Greg 'groggy' Lehey" <[EMAIL PROTECTED]> writes:
> On Monday, 29 March 2004 at 18:38:20 +0200, Ludo Koren wrote:
>> It has 'historical' reason. I started with vinum, when it was not
>> possible to mirror root partitio
On Monday, 29 March 2004 at 18:38:20 +0200, Ludo Koren wrote:
>
>
>>>
>>> vinum -> l ...
>>> D d1    State: up    /dev/da1s1e    A: 0/15452 MB (0%)
>>> D rd1   State: up    /dev/da1s1h    A: 0/1023 MB (0%)
&g
>>
>> vinum -> l ...
>> D d1    State: up    /dev/da1s1e    A: 0/15452 MB (0%)
>> D rd1   State: up    /dev/da1s1h    A: 0/1023 MB (0%)
> You shouldn't have more than one drive per spindle.
It
On Mon, 29 Mar 2004, Greg 'groggy' Lehey wrote:
> On Monday, 29 March 2004 at 9:37:39 +0200, Ludo Koren wrote:
> > It finished with the following error:
> >
> > growfs: bad inode number 1 to ginode
> >
> > I have searched the archives, but did not find any answer. Please,
> > could you point to m
On Monday, 29 March 2004 at 9:37:39 +0200, Ludo Koren wrote:
>
>
> on 5.2.1-RELEASE-p3, I did:
>
> ...
> vinum create vinumgrow
>
> vinumgrow:
> drive d3 device /dev/da2s1h
> drive d4 device /dev/da3s1h
> sd name mirror.p0.s1 drive d3 plex mirror.p0 size 0
>
0   # "raw" part, don't edit
  h:   143363997        0     vinum
# /dev/da3s1:
8 partitions:
#           size   offset    fstype   [fsize bsize bps/cpg]
  a:   143363981       16    unused        0     0
  c:   143363997        0    unused        0     0
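After a vinum volume has been extended this way, the file system itself still has to be grown; on 4.x/5.x that was done with growfs on the unmounted volume, which is the step where the error quoted in this thread occurred (volume name hypothetical):

```
# Grow the file system to fill the enlarged vinum volume (run unmounted)
umount /dev/vinum/mirror
growfs /dev/vinum/mirror
fsck -y /dev/vinum/mirror
mount /dev/vinum/mirror /mnt
```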
very much for your reply. I've tried to
clean this up so it remains readable when it reaches
you.
> Heh. I suppose that's a good enough reason.
> > I have a 4.9-RELEASE-p4 system. I have not made
> > any changes to any source code. I have 3 vinum
> > volumes configured.
changes to any
> source code. I have 3 vinum volumes configured. While attempting
> to diagnose problems with one of the volumes that uses a firewire
> drive, my system crashed with a trap 12 error. I have my /usr
> configured as a striped vinum volume with one plex and two subdis
Please excuse whatever format in which this email
arrives. My system is unusable so I am posting from
Yahoo!.
I have a 4.9-RELEASE-p4 system. I have not made any
changes to any source code. I have 3 vinum volumes
configured. While attempting to diagnose problems
with one of the volumes that
B Size: 4100 MB
> > S spanned_log.p0.s1 State: crashed PO: 4100 MB Size: 4100 MB
>
> Take a look at http://www.vinumvm.org/vinum/how-to-debug.html if you
> haven't already done so.
Ok, great.
--
Sean
On Friday, 26 March 2004 at 16:10:13 -0800, Sean Ellis wrote:
> On Sat, Mar 27, 2004 at 10:00:18AM +1030, Greg 'groggy' Lehey wrote:
>> On Friday, 26 March 2004 at 11:12:45 -0800, Sean Ellis wrote:
>
>>> after adding two drives as a concatenated vinum volume I se
On Sat, Mar 27, 2004 at 10:00:18AM +1030, Greg 'groggy' Lehey wrote:
> On Friday, 26 March 2004 at 11:12:45 -0800, Sean Ellis wrote:
> > after adding two drives as a concatenated vinum volume I see a line in
> > the `vinum list` output which doesn't look right to
On Friday, 26 March 2004 at 11:12:45 -0800, Sean Ellis wrote:
> Hello,
>
> after adding two drives as a concatenated vinum volume I see a line in
> the `vinum list` output which doesn't look right to me. Specifically, the
> line that refers to spanned_log.p0.s0. Is there a typ
Hello,
after adding two drives as a concatenated vinum volume I see a line in
the `vinum list` output which doesn't look right to me. Specifically, the
line that refers to spanned_log.p0.s0. Is there a typical explanation for
something like this?
2 drives:
D a Stat