gvinum + growfs on 5.4-STABLE

2005-12-14 Thread Ludo Koren

Hi,

is it possible (tested?) to use safely growfs on gvinum mirror volume
on FreeBSD 5.4-STABLE ? (I know there was problem on earlier 5.x
version with vinum).


Thanks,

lk

PS: I did not search yet, but has anybody step by step docs on
replacing a dead drive in gvinum mirror?

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


growfs on file-backed fs

2004-04-19 Thread Richard P. Williamson
Hello all,

according to the handbook, I can create a file-backed fs
(Example 12-5. Creating a New File-Backed Disk with vnconfig)

# dd if=/dev/zero of=newimage bs=1k count=5k
..
# vnconfig -s labels -c vn0 newimage
# disklabel -r -w vn0 auto
# newfs vn0c
...
# mount /dev/vn0c /mnt


I've been trying to do that, and end up with a file that I
can turn into an mfsroot.gz (or mdroot.gz for that matter).
But I can't get my device to boot from a flash device using
a 'home-built' mfsroot.gz.  I've tried various combinations
of switches for vnconfig, disklabel and newfs, but nothing
wants to allow it to play the way I need.

When I replace the mfsroot.gz with mfsroot.flp from the floppy
startup images, then it boots.

I have, in the past done a growfs on the mfsroot.flp, in order
to make it big enough to hold the files I need.  

But I'd rather just do the commands as described in example 12-5
above.  What do I need to add to that to make it a bootable mfsroot?

Regards,
rip 

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: growfs on vinum volume

2004-03-31 Thread Ludo Koren


>>> It has 'historical' reason. I started with vinum, when it was
>>> not possible to mirror root partition (at least I found just
>>> document 'Bootstrapping Vinum: A Foundation for Reliable
>>> Servers' by R. A. Van Valzah in 2001 or 2002 on
>>> www.freebsd.org)

>> There has never been any reason to have more than one drive per
>> module.  You're confusing drives and subdisks.

 > I think he meant that he original had a root partition and a
 > vinum partition, and later converted the root partition to
 > another vinum partition.  

Exactly. That I meant by 'historical' reason.


 > =2D-=20 Kirk Strauser

lk
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: growfs on vinum volume

2004-03-31 Thread Ludo Koren


>>> What you do now depends on the state of the file system.
>>> Hopefully you still have the original contents.  In this case,
>>  Yes, I have original contents. The volume size is 15GB just as
>> before.

 > OK.  You've seen le's message.

No. Probably, I missed it.

lk

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: growfs on vinum volume

2004-03-29 Thread Kirk Strauser
At 2004-03-29T21:04:19Z, "Greg 'groggy' Lehey" <[EMAIL PROTECTED]> writes:

> On Monday, 29 March 2004 at 18:38:20 +0200, Ludo Koren wrote:

>> It has 'historical' reason. I started with vinum, when it was not
>> possible to mirror root partition (at least I found just document
>> 'Bootstrapping Vinum: A Foundation for Reliable Servers' by R. A. Van
>> Valzah in 2001 or 2002 on www.freebsd.org)

> There has never been any reason to have more than one drive per module.
> You're confusing drives and subdisks.

I think he meant that he original had a root partition and a vinum
partition, and later converted the root partition to another vinum
partition.
-- 
Kirk Strauser

"94 outdated ports on the box,
 94 outdated ports.
 Portupgrade one, an hour 'til done,
 82 outdated ports on the box."


pgp0.pgp
Description: PGP signature


Re: growfs on vinum volume

2004-03-29 Thread Greg 'groggy' Lehey
On Monday, 29 March 2004 at 18:38:20 +0200, Ludo Koren wrote:
>
>
>>>
>>> vinum -> l ...
>>> D d1State: up   /dev/da1s1e A: 0/15452 MB (0%)
>>> D rd1   State: up   /dev/da1s1h A: 0/1023 MB (0%)
>
>> You shouldn't have more than one drive per spindle.
>
> It has 'historical' reason. I started with vinum, when it was not
> possible to mirror root partition (at least I found just document
> 'Bootstrapping Vinum: A Foundation for Reliable Servers' by R. A. Van
> Valzah in 2001 or 2002 on www.freebsd.org)

There has never been any reason to have more than one drive per
module.  You're confusing drives and subdisks.

>> What you do now depends on the state of the file system.
>> Hopefully you still have the original contents.  In this case,
>
> Yes, I have original contents. The volume size is 15GB just as
> before.

OK.  You've seen le's message.

>> you could get hold of the version from -CURRENT, which should
>> compile with no problems, and try again.  It wouldn't do any
>> harm to take down one of the plexes so that you can recover if
>> something goes wrong.
>
> Thank you very much for your advice. I'll try it on weekend, because
> there is no possible downtime during the working days.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
Note: I discard all HTML mail unseen.
Finger [EMAIL PROTECTED] for PGP public key.
See complete headers for address and phone numbers.


pgp0.pgp
Description: PGP signature


Re: growfs on vinum volume

2004-03-29 Thread Ludo Koren


>> 
>> vinum -> l ... 
>> D d1State: up   /dev/da1s1e A: 0/15452 MB (0%)
>> D rd1   State: up   /dev/da1s1h A: 0/1023 MB (0%)

 > You shouldn't have more than one drive per spindle.

It has 'historical' reason. I started with vinum, when it was not
possible to mirror root partition (at least I found just document
'Bootstrapping Vinum: A Foundation for Reliable Servers' by R. A. Van
Valzah in 2001 or 2002 on www.freebsd.org)


>> ...
>> growfs /dev/vinum/mirror

 > You've missed out some information.  May I assume that you had
 > a valid file system on the 15 GB volume mirror before you
 > started this?

Yes

>> It finished with the following error:
>> 
>> growfs: bad inode number 1 to ginode
>> 
>> I have searched the archives, but did not find any
>> answer. Please, could you point to me what I did wrong?

 > You trusted growfs on 5.2.1 :-)

I did it successfully before on 5.x (I don't remember exactly, about 6
months ago) but not on vinum volume...

 > growfs is suffering from lack of love, and presumably you had a
 > UFS 2 file system on the drive.  It's only recently been fixed
 > for that, in 5-CURRENT.

There is UFS 1 file system on the drive.

 > What you do now depends on the state of the file system.
 > Hopefully you still have the original contents.  In this case,

Yes, I have original contents. The volume size is 15GB just as before.

 > you could get hold of the version from -CURRENT, which should
 > compile with no problems, and try again.  It wouldn't do any
 > harm to take down one of the plexes so that you can recover if
 > something goes wrong.

Thank you very much for your advice. I'll try it on weekend, because
there is no possible downtime during the working days.

 > Greg

lk
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: growfs on vinum volume

2004-03-29 Thread Lukas Ertl
On Mon, 29 Mar 2004, Greg 'groggy' Lehey wrote:

> On Monday, 29 March 2004 at  9:37:39 +0200, Ludo Koren wrote:
> > It finished with the following error:
> >
> > growfs: bad inode number 1 to ginode
> >
> > I have searched the archives, but did not find any answer. Please,
> > could you point to me what I did wrong?
>
> You trusted growfs on 5.2.1 :-)
>
> growfs is suffering from lack of love, and presumably you had a UFS 2
> file system on the drive.  It's only recently been fixed for that, in
> 5-CURRENT.

Well, this particular bug is not fixed right now, I fixed a different one.

The 'bad inode number 1' problem happens when you grow your filesystem so
large that the cylinder group summary needs to allocate a new block.

I haven't found a fix for that yet, but interestingly enough, you should
now run fsck on that filesystem, and *after* that, you should be able to
grow that filesystem successfully (yeah, it's some kind of voodoo).

cheers,
le

-- 
Lukas Ertl   http://mailbox.univie.ac.at/~le/
[EMAIL PROTECTED]   http://people.freebsd.org/~le/
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: growfs on vinum volume

2004-03-29 Thread Greg 'groggy' Lehey
On Monday, 29 March 2004 at  9:37:39 +0200, Ludo Koren wrote:
>
>
> on 5.2.1-RELEASE-p3, I did:
>
> ...
> vinum create vinumgrow
>
> vinumgrow:
> drive d3 device /dev/da2s1h
> drive d4 device /dev/da3s1h
> sd name mirror.p0.s1 drive d3 plex mirror.p0 size 0
> sd name mirror.p1.s1 drive d4 plex mirror.p1 size 0
>
> after
>
> vinum -> start mirror.p0.s1
>
> and
>
> vinum -> start mirror.p1.s1
>
> I have
>
> vinum -> l
> ...
> D d1State: up   /dev/da1s1e A: 0/15452 MB (0%)
> D rd1   State: up   /dev/da1s1h A: 0/1023 MB (0%)

You shouldn't have more than one drive per spindle.

> D d2State: up   /dev/da0s1f A: 0/15452 MB (0%)
> D rd2   State: up   /dev/da0s1h A: 0/1023 MB (0%)

Ditto.

> D d3State: up   /dev/da2s1h A: 0/70001 MB (0%)
> D d4State: up   /dev/da3s1h A: 0/70001 MB (0%)
>
> 2 volumes:
> V mirrorState: up   Plexes:   2 Size: 83 GB
>
> 4 plexes:
> P mirror.p0   C State: up   Subdisks: 2 Size: 83 GB
> P mirror.p1   C State: up   Subdisks: 2 Size: 83 GB
>
> 6 subdisks:
> S mirror.p0.s0  State: up   D: d1   Size: 15 GB
> S mirror.p1.s0  State: up   D: d2   Size: 15 GB
> S mirror.p0.s1  State: up   D: d3   Size: 68 GB
> S mirror.p1.s1  State: up   D: d4   Size: 68 GB
> vinum ->
>
> which seems to be correct...
>
> Than I did
>
> growfs /dev/vinum/mirror

You've missed out some information.  May I assume that you had a valid
file system on the 15 GB volume mirror before you started this?

> It finished with the following error:
>
> growfs: bad inode number 1 to ginode
>
> I have searched the archives, but did not find any answer. Please,
> could you point to me what I did wrong?

You trusted growfs on 5.2.1 :-)

growfs is suffering from lack of love, and presumably you had a UFS 2
file system on the drive.  It's only recently been fixed for that, in
5-CURRENT.

What you do now depends on the state of the file system.  Hopefully
you still have the original contents.  In this case, you could get
hold of the version from -CURRENT, which should compile with no
problems, and try again.  It wouldn't do any harm to take down one of
the plexes so that you can recover if something goes wrong.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
Note: I discard all HTML mail unseen.
Finger [EMAIL PROTECTED] for PGP public key.
See complete headers for address and phone numbers.


pgp0.pgp
Description: PGP signature


growfs on vinum volume

2004-03-28 Thread Ludo Koren

Hi list


on 5.2.1-RELEASE-p3, I did:

fdisk -BI da2
fdisk -BI da3

and 
disklabel -e da2s1
disklabel -e da3s1

# /dev/da2s1:
8 partitions:
#size   offsetfstype   [fsize bsize bps/cpg]
  a: 143363981   16unused0 0   
  c: 1433639970unused0 0 # "raw" part, don't edit
  h: 1433639970 vinum



# /dev/da3s1:
8 partitions:
#size   offsetfstype   [fsize bsize bps/cpg]
  a: 143363981   16unused0 0   
  c: 1433639970unused0 0 # "raw" part, don't edit
  h: 1433639970 vinum

vinum create vinumgrow

vinumgrow:
drive d3 device /dev/da2s1h
drive d4 device /dev/da3s1h
sd name mirror.p0.s1 drive d3 plex mirror.p0 size 0
sd name mirror.p1.s1 drive d4 plex mirror.p1 size 0

after 

vinum -> start mirror.p0.s1

and 

vinum -> start mirror.p1.s1

I have

vinum -> l
6 drives:
D d1State: up   /dev/da1s1e A: 0/15452 MB (0%)
D rd1   State: up   /dev/da1s1h A: 0/1023 MB (0%)
D d2State: up   /dev/da0s1f A: 0/15452 MB (0%)
D rd2   State: up   /dev/da0s1h A: 0/1023 MB (0%)
D d3State: up   /dev/da2s1h A: 0/70001 MB (0%)
D d4State: up   /dev/da3s1h A: 0/70001 MB (0%)

2 volumes:
V mirrorState: up   Plexes:   2 Size: 83 GB
V root  State: up   Plexes:   2 Size:   1023 MB

4 plexes:
P mirror.p0   C State: up   Subdisks: 2 Size: 83 GB
P mirror.p1   C State: up   Subdisks: 2 Size: 83 GB
P root.p1 C State: up   Subdisks: 1 Size:   1023 MB
P root.p0 C State: up   Subdisks: 1 Size:   1023 MB

6 subdisks:
S mirror.p0.s0  State: up   D: d1   Size: 15 GB
S mirror.p1.s0  State: up   D: d2   Size: 15 GB
S root.p1.s0State: up   D: rd2  Size:   1023 MB
S root.p0.s0State: up   D: rd1  Size:   1023 MB
S mirror.p0.s1  State: up   D: d3   Size: 68 GB
S mirror.p1.s1  State: up   D: d4   Size: 68 GB
vinum -> 

which seems to be correct...

Than I did 

growfs /dev/vinum/mirror


It finished with the following error:

growfs: bad inode number 1 to ginode

I have searched the archives, but did not find any answer. Please,
could you point to me what I did wrong?

Thank you very much.

lk
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: growfs on /

2003-12-13 Thread Jerry McAllister
> 
> Hey all...
> 
> Have FreeBSD 4.8-STABLE, and I've run out of space on the slice /
> 
> I really want to avoid having to backup and reformat, or doing anything 
> that's super-time-intensive - from reading various posts and blogs 
> related to FreeBSD, it appears to me that I can resolve my issue by 
> using growfs - the next slice after / is /tmp which has plenty of room 
> free, and can afford to be reduced by a little. It doesn't seem to be 
> affecting system use except that I can't add new users.
> 
> Here's what I look like now:
> 
> Filesystem  1K-blocksUsedAvail Capacity  Mounted on
> /dev/ad0s1a128990  127682-9010   108%/
> /dev/ad0s1f2579981254   236106 1%/tmp
> /dev/ad0s1g  18359694 4955608 1193531229%/usr
> /dev/ad0s1e2579988400   228960 4%/var
> procfs  4   40   100%/proc
> 
> I understand the process in general, but am a little afraid of hosing 
> the box in the process; most of the stuff I've seen assumes a greater 
> familiarity with tools like disklabel than I have. Does anyone know of 
> a step-by-step tutorial or article on doing this? If not, would anyone 
> be so kind as to give me a high-level breakdown?
> 
> Alternatively, if anyone knows how I can free up some space in / 
> perhaps by moving something to another slice, I'd be open to that 
> possibility.

That is what your first choice should be.
Us du(1) to find out which directories are taking up the space.
  cd /
  du -sk *will give you a useful list.

It may be possible to move one or more of them to another slice
and make a soft link to it.But, most of the things in root
have to be there instead of somewhere else.   You already have
the biggies somewhere else (/tmp, /usr, /var).   Some of the
remaining directories may have extra junk left over from some
stuff that could well be deleted.   A frequent culprit is home
directories for root accounts.   If you are doing a bunch of
development, they can accumulate a lot of junk.

I wouldn't even consider growfs until after cleaning stuff up and
would not be confidant of using it on root anyway.

jerry

> 
> Thanks in advance
> Jeff LaMarche
> 
> ___
> [EMAIL PROTECTED] mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "[EMAIL PROTECTED]"
> 

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: growfs on /

2003-12-13 Thread Sergey 'DoubleF' Zaharchenko
On Sat, 13 Dec 2003 14:02:59 +1030
Malcolm Kay <[EMAIL PROTECTED]> probably wrote:

> On Sat, 13 Dec 2003 13:08, Jeff LaMarche wrote:
> > Hey all...
> >
> > Have FreeBSD 4.8-STABLE, and I've run out of space on the slice /
> >
> > I really want to avoid having to backup and reformat, or doing anything
> > that's super-time-intensive - from reading various posts and blogs
> > related to FreeBSD, it appears to me that I can resolve my issue by
> > using growfs - the next slice after / is /tmp which has plenty of room
> > free, and can afford to be reduced by a little. It doesn't seem to be
> > affecting system use except that I can't add new users.
> >
> > Here's what I look like now:
> >
> > Filesystem  1K-blocksUsedAvail Capacity  Mounted on
> > /dev/ad0s1a128990  127682-9010   108%/
> 
> I should have thought that 125Mb or so should have been ample for / when
> /tmp, /var and /usr have there own partitions.
> 
> Your mail prompted me to look at what I have under / and was somewhat
> surprised to find about 90Mb. But when I examined this I found about 40Mb 
> was pure junk -- things like temproot, modules.old and etc.old1 left over from 
> a system update and a core file or two.
> 
> Have you been running X applications (especially browsers) as root  a 
> practice frowned upon, mostly I guess, because it can swallow large gulps of 
> space on /.

Not because of this, but because of security reasons. I believe something like

# du -hx -d 1 /

will help you find out what things use up the space.

> I would certainly look at getting the total file size down in / rather than 
> trying to grow it.
> 
> Malcolm
> 
> ___
> [EMAIL PROTECTED] mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "[EMAIL PROTECTED]"
> 


-- 
DoubleF
Help a swallow land at Capistrano.


pgp0.pgp
Description: PGP signature


Re: growfs on /

2003-12-12 Thread Malcolm Kay
On Sat, 13 Dec 2003 13:08, Jeff LaMarche wrote:
> Hey all...
>
> Have FreeBSD 4.8-STABLE, and I've run out of space on the slice /
>
> I really want to avoid having to backup and reformat, or doing anything
> that's super-time-intensive - from reading various posts and blogs
> related to FreeBSD, it appears to me that I can resolve my issue by
> using growfs - the next slice after / is /tmp which has plenty of room
> free, and can afford to be reduced by a little. It doesn't seem to be
> affecting system use except that I can't add new users.
>
> Here's what I look like now:
>
> Filesystem  1K-blocksUsedAvail Capacity  Mounted on
> /dev/ad0s1a128990  127682-9010   108%/

I should have thought that 125Mb or so should have been ample for / when
/tmp, /var and /usr have there own partitions.

Your mail prompted me to look at what I have under / and was somewhat
surprised to find about 90Mb. But when I examined this I found about 40Mb 
was pure junk -- things like temproot, modules.old and etc.old1 left over from 
a system update and a core file or two.

Have you been running X applications (especially browsers) as root  a 
practice frowned upon, mostly I guess, because it can swallow large gulps of 
space on /.

I would certainly look at getting the total file size down in / rather than 
trying to grow it.

Malcolm

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


growfs on /

2003-12-12 Thread Jeff LaMarche
Hey all...

Have FreeBSD 4.8-STABLE, and I've run out of space on the slice /

I really want to avoid having to backup and reformat, or doing anything 
that's super-time-intensive - from reading various posts and blogs 
related to FreeBSD, it appears to me that I can resolve my issue by 
using growfs - the next slice after / is /tmp which has plenty of room 
free, and can afford to be reduced by a little. It doesn't seem to be 
affecting system use except that I can't add new users.

Here's what I look like now:

Filesystem  1K-blocksUsedAvail Capacity  Mounted on
/dev/ad0s1a128990  127682-9010   108%/
/dev/ad0s1f2579981254   236106 1%/tmp
/dev/ad0s1g  18359694 4955608 1193531229%/usr
/dev/ad0s1e2579988400   228960 4%/var
procfs  4   40   100%/proc
I understand the process in general, but am a little afraid of hosing 
the box in the process; most of the stuff I've seen assumes a greater 
familiarity with tools like disklabel than I have. Does anyone know of 
a step-by-step tutorial or article on doing this? If not, would anyone 
be so kind as to give me a high-level breakdown?

Alternatively, if anyone knows how I can free up some space in / 
perhaps by moving something to another slice, I'd be open to that 
possibility.

Thanks in advance
Jeff LaMarche
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


vinum and growfs on 5.1R

2003-09-10 Thread Mark McKinstry
I'm trying to concatenate two disk using vinum on FreeBSD 5.1-RELEASE and
want to make sure I am doing everything correctly.

I have two drives, a 30 gig and a 40 gig. I have vinum setup and running
on the 40 gig right now. I would like to copy information from the 30 gig
to the 40 gig, Then concatenate the 30 gig to the 40 gig and use growfs to
have one 70 gig "drive."

I know that on 5.0R growfs would not work with a vinum volume. I found a
bug report (1) saying it was fixed but am unsure if this means it was
included in 5.1R.

Assuming growfs will work all I should have to do is change the drive's
label using bsdlabel, make a new config file for the new drive, read it
in, set the state as up, start the plex, unmount the filesystem, use
growfs, the remount. Correct?


1. http://www.freebsd.org/cgi/query-pr.cgi?pr=bin/51138



Thanks,
Mark





___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"