Could anyone give me some input on this?
Issue: trying to grow an existing mirrored filesystem with growfs fails.
Given:
FreeBSD 7.2-RELEASE (GENERIC) #0: Fri May 1 07:18:07 UTC 2009
2 drives:
D data1 State: up /dev/da1s1 A: 432325/1430506 MB (30%)
On 5/29/09, Vikash Badal vikash.ba...@is.co.za wrote:
Can someone please advise why growfs would return:
growfs: we are not growing (8388607-4194303) ?
I have a FreeBSD 7.2 server in a VM.
I initially had 5 x 4G disks
Created a raid
graid3 label datavol da2 da3 da4 da5 da6
I upgraded them to 5 x 8G disks
swapped out the virtual disks one
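The two numbers in the error are sector counts: growfs compares the size the grown filesystem would have against the size the provider currently reports, and refuses when it would not actually grow. Swapping in larger virtual disks does not by itself enlarge a graid3 volume's provider, so it is worth checking what the kernel sees before rerunning growfs. A sketch, using the device name implied by the graid3 label command above:

```shell
# inspect the sizes growfs will work from; /dev/raid3/datavol is assumed
# from the "graid3 label datavol ..." command in the post
diskinfo -v /dev/raid3/datavol          # provider size as the kernel reports it
dumpfs /dev/raid3/datavol | head -n 5   # size fields in the current UFS superblock
```

If diskinfo still shows the old size, the array itself has to be rebuilt or resized before growfs can do anything.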
I want to ensure that I am correctly applying the concept of the growfs
command.
I want to remove /dev/ad2s1h and expand /dev/ad2s1g to occupy all of the
space left behind by the deletion of /dev/ad2s1h.
# bsdlabel -e /dev/ad2s1
# /dev/ad2s1:
8 partitions:
#        size
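The intent described above — deleting h and letting g absorb its space — can be sketched as follows. Partition names come from the post, the edit itself is manual, and a full backup beforehand is assumed:

```shell
# save the current label, edit it, write it back, then grow the filesystem
bsdlabel /dev/ad2s1 > label.txt   # dump the existing label to a file
# edit label.txt by hand: delete the h: line and add h's size to g's size
bsdlabel -R /dev/ad2s1 label.txt  # install the modified label
growfs /dev/ad2s1g                # grow the UFS to fill the enlarged g partition
```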
All,
I am running 7.0-Release and am trying to add an additional disk to a
gconcat and expand the ufs onto it. The concat works fine but when I run
growfs I get an error We are not growing.
A couple of things about the setup:
1) 16-hotswap SAS drive bays only 8 contain drives. So figuring
Teemu Korhonen wrote:
I'm merging my ext2-partition with my /usr by shrinking ext2 and doing
growfs on the free space. I did it with soft updates enabled and it
seemed to work until I tried to use /usr which resulted in kernel
panic (ffs_valloc: dup alloc). After plenty of 'fsck -y' I got it fixed while
losing a few random files. :( The errors were mostly related to soft updates.
Should soft updates be disabled before using growfs?
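Soft updates can be turned off (and back on) with tunefs on an unmounted filesystem, so a cautious sequence — device name illustrative, backup assumed — might look like:

```shell
umount /usr
tunefs -n disable /dev/ad0s1f   # disable soft updates before growing
growfs /dev/ad0s1f              # grow the filesystem
fsck -y /dev/ad0s1f             # check the result before re-enabling anything
tunefs -n enable /dev/ad0s1f    # restore soft updates
mount /usr
```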
False alarm. It's all the same with soft updates disabled. I guess growfs
needs some work.
On Wed, 16 Jan 2008 17:48:44 +0100 (CET)
Wojciech Puchar [EMAIL PROTECTED] wrote:
growfs is for people that like challenges ;)
i have to use it growing 800GB filesystem to 1400GB, finally (after
patching it a bit) i did it, but root directory was destroyed (no
idea why). all subdirs
do is create a new
partition and symlink things into it.
it was network users shared directory for movies music etc.
i already told them that i will probably remove it unless they will back
it up.
so i don't have to worry in case growfs would screw it up completely.
anyway where to send
anyone can fix it completely?
i patched it so it DOES work when sectorsize != 512 bytes.
but when growing to 1.3TB it prints negative values due to overflow, but it
works.
but would it work on larger partitions?
From: Jerry McAllister [EMAIL PROTECTED]
To: Kristopher Yates [EMAIL PROTECTED]
CC: freebsd-questions@freebsd.org
Subject: Re: growfs HELP
Date: Thu, 28 Sep 2006 15:17
Hi everyone,
First of all, glad to still be running FBSD after all these years.
I tried following some docs I found online in order to make my /usr
partition bigger and made it all the way to growfs -s, which is where I got
stuck.
First of all, here was my original dilemma, what I did
On Thu, Sep 28, 2006 at 03:26:27AM -0500, Kristopher Yates wrote:
Hi everyone,
First of all, glad to still be running FBSD after all these years.
I tried following some docs I found online in order to make my /usr
partition bigger and made it all the way to growfs -s, which is where I
Anyone else have any suggestions?
I was thinking I could rewrite my partition table to what it was originally,
then growfs using the empty partitionable space.. but I don't exactly know how.
I just had some docs I found online (URL is below). Before I did fdisk
-s, the 2.888GB was an empty
Hi,
is it possible (tested?) to safely use growfs on a gvinum mirror volume
on FreeBSD 5.4-STABLE? (I know there was a problem with vinum on earlier
5.x versions.)
Thanks,
lk
PS: I did not search yet, but does anybody have step-by-step docs on
replacing a dead drive in a gvinum mirror
Wojciech Puchar wrote:
i asked the question recently, no answers, but finally did it this way
and all worked fine. i shifted my partition left with dd and resized
with growfs.
Thanksgiving break may have taken at least some of the reading list population
out of regular contact, at least
but can bsdlabel be forced to write label with overlapping slices? for
temporary operations it will be useful if i know what i'm doing
4) bsdlabel and remove d
5) FINALLY - growfs /dev/ad0a
6) boot0cfg to make it all bootable.
can i do 5) without fear? i want to do full dump of my data, but don't
like to do it twice (before for sure, and then after repartitioning).
for that.)
Extending file systems is something different. AIX can do it, yes,
and can even shrink them (which is much more work), albeit it might
take forever, depending on the load. Solaris' growfs isn't that much
more capable than FreeBSD's growfs is, except Solaris has got the
lockfs(2) syscall
Anyone successful with extending existing gstripe using growfs instead
of newfs or via using a similar hack?
Example of result desired:
# gstripe stop data
# gstripe label -v -s 65535 data /dev/ad3 /dev/ad4 /dev/ad5 /dev/ad6
# growfs -s [size of total volume i have] /dev/stripe/data
# /sbin
a script
to growfs /usr/local to the end of the disk.
But, I don't know for growfs, and I'm concerned that you'd have to do
some magic to the partition table first. Maybe someone is already doing
something of similar cleverosity? (And would care to comment.)
Thanks,
-danny
-danny
Hello,
I've added another disk to a gconcat system by remaking the device with
another drive on the list. After doing this I can remount the filesystem and it
comes up without any problems, however when I try to grow the UFS filesystem it
fails.
[EMAIL PROTECTED]:~# growfs /dev/concat/data
After which, I went to expand the fs, but to my dismay, growfs is having
issues. I've already set the new sizes in fdisk and disklabel.
When I do: growfs -s 3396841102 /dev/da0s1h
I get: growfs: we are not growing (727144389-536870911)
Now I obviously don't know enough about growfs
Hi,
Please excuse the re-post. I'm hoping that my question just got lost in
the numerous conversations over the weekend and that I'm not suffering
from bleeding-edge technology that nobody else has tried in production
yet.
How do I make growfs actually grow a gvinum disk on FreeBSD 5.3? I've
Hi,
How do I make growfs actually grow a gvinum disk on FreeBSD 5.3? I've
read the man pages, the Handbook, and done some searching with no luck.
To help understand what I'm trying to accomplish here, I've created a
filesystem that mounts to /export on a gvinum volume. The volume is
configured
Hi, all. Running FreeBSD 5.3 using hardware RAID (Dell Perc/4 Di).
I added a disk to the Logical Volume, rebuilt the RAID array, resized
the slice (fdisk), and changed the label (bsdlabel). All of that
worked just fine. However, when I tried to grow the filesystem,
growfs failed, saying growfs
Any idea why a growfs to this size works
growfs: 493962.0MB (1011634176 sectors) block size 16384, fragment size 2048
        using 2688 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
        with soft updates
super-block backups (for fsck -b #) at:
 1010881632, 1011257984
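As a sanity check on output like this, the MB figure and the sector count growfs prints are related by a factor of 2048, since growfs counts 512-byte sectors:

```shell
# 1 MB = 2048 sectors of 512 bytes; cross-check the two numbers growfs printed
mb=493962
sectors=$((mb * 2048))
echo "$sectors"   # prints 1011634176, matching the sector count above
```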
but a growfs
On FreeBSD 5.3 I added disks to a disk array. The array contained two
250 GB disks striped (actually four mirrored and striped but it's all
done in hardware). I added two more pairs to the virtual disk, rebooted
the machine, rewrote the disklabel for the additional capacity and ran
growfs
Sorry for the double post but I found a copy of the actual error...
growfs: rdfs: seek error: 237231962044550260: Unknown error: 0
On Feb 20, 2005, at 2:40 AM, Michael Conlen wrote:
On FreeBSD 5.3 I added disks to a disk array. The array contained two
250 GB disks striped (actually four
FreeBSD 4.10-stable. I want to grow a filesystem that is at the end of
the partition. Reading the man pages for disklabel(8) and fdisk(8) I am
still not sure how to make the slice larger before using growfs. Any tips
appreciated!
--Karl
did u get fbsd working with large disks
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]
On Mon, 2003-10-20 at 00:12, dag vilmar tveit wrote:
did u get the correct year set in fbsd?
did u get fbsd working with large disks
Keep an eye on: http://www.freebsd.org/projects/bigdisk/ for more
information on that work.
--
Jeremy Faulkner [EMAIL PROTECTED]
#growfs -y /dev/vinum/bigger
...
bad inode number 1 to ginode
Where is the problem? At start the partition was 432 GB; using step-by-step
growing (increasing the -s option in growfs) I made it 534GB and then I have
the same:
#growfs -y -s 1156529856 /dev/vinum/bigger
...
bad inode number 1 to ginode
On Sep 29, 2004, at 4:56 AM, Kostya Falkov wrote:
#growfs -y /dev/vinum/bigger
...
bad inode number 1 to ginode
Where is the problem? At start the partition was 432 GB; using step-by-step
growing (increasing the -s option in growfs) I made it 534GB and then I have
the same:
#growfs -y -s 1156529856 /dev/vinum
it to play the way I need.
When I replace the mfsroot.gz with mfsroot.flp from the floppy
startup images, then it boots.
I have, in the past, done a growfs on the mfsroot.flp in order
to make it big enough to hold the files I need.
But I'd rather just do the commands as described in example 12
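Enlarging an mfsroot image that way can be sketched as follows; the extended size and md unit number are illustrative, and the image must be uncompressed before attaching it:

```shell
gunzip mfsroot.gz                     # work on the uncompressed image
truncate -s 8M mfsroot                # extend the backing file (example size)
mdconfig -a -t vnode -f mfsroot -u 0  # attach it as the vnode-backed /dev/md0
growfs /dev/md0                       # grow the UFS inside to fill the file
mdconfig -d -u 0                      # detach the memory disk
gzip mfsroot                          # recompress for the loader
```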
What you do now depends on the state of the file system.
Hopefully you still have the original contents. In this case,
Yes, I have original contents. The volume size is 15GB just as
before.
OK. You've seen le's message.
No. Probably, I missed it.
lk
It has 'historical' reason. I started with vinum, when it was
not possible to mirror root partition (at least I found just
document 'Bootstrapping Vinum: A Foundation for Reliable
Servers' by R. A. Van Valzah in 2001 or 2002 on
www.freebsd.org)
There has never been
vinum -
which seems to be correct...
Then I did
growfs /dev/vinum/mirror
You've missed out some information. May I assume that you had a valid
file system on the 15 GB volume mirror before you started this?
It finished with the following error:
growfs: bad inode number 1 to ginode
I have
On Mon, 29 Mar 2004, Greg 'groggy' Lehey wrote:
On Monday, 29 March 2004 at 9:37:39 +0200, Ludo Koren wrote:
It finished with the following error:
growfs: bad inode number 1 to ginode
I have searched the archives, but did not find any answer. Please,
could you point to me what I did
, when it was not
possible to mirror root partition (at least I found just document
'Bootstrapping Vinum: A Foundation for Reliable Servers' by R. A. Van
Valzah in 2001 or 2002 on www.freebsd.org)
...
growfs /dev/vinum/mirror
You've missed out some information. May I assume that you
On Monday, 29 March 2004 at 18:38:20 +0200, Ludo Koren wrote:
vinum - l ...
D d1  State: up /dev/da1s1e A: 0/15452 MB (0%)
D rd1 State: up /dev/da1s1h A: 0/1023 MB (0%)
You shouldn't have more than one drive per spindle.
It has
At 2004-03-29T21:04:19Z, Greg 'groggy' Lehey [EMAIL PROTECTED] writes:
On Monday, 29 March 2004 at 18:38:20 +0200, Ludo Koren wrote:
It has 'historical' reason. I started with vinum, when it was not
possible to mirror root partition (at least I found just document
'Bootstrapping Vinum: A
...
Then I did
growfs /dev/vinum/mirror
It finished with the following error:
growfs: bad inode number 1 to ginode
I have searched the archives, but did not find any answer. Please,
could you point to me what I did wrong?
Thank you very much.
lk
to the
end of the slice.
So far all seems OK. So I boot into single user and try to run growfs. But
all growfs gives me is:
growfs: rdfs: read error -1940904252 Input/output error
After lots of trying, I decided to instead of resizing /usr, to just add a
new g-partition, and split /usr in /usr
system using growfs ... and then added the third subdisk to the
plex, and tried to growfs again, and got the following message:
# growfs -N /dev/vinum/data
new file systemsize is: 195366077 frags
Warning: 157556 sector(s) cannot be allocated.
growfs: 381497.4MB (781306752 sectors) block size
Greg 'groggy' Lehey wrote:
What file system? UFS 1 or UFS 2? growfs is suffering a bit from
lack of love at the moment. It might be worth putting in a PR.
Thanks for the response,
File system UFS2 (with softupdates, if that matters?).
I've sort of worked around the issue at the moment (backed
On Monday, 23 February 2004 at 13:51:52 +1030, Michael Ritchie wrote:
Greg 'groggy' Lehey wrote:
What file system? UFS 1 or UFS 2? growfs is suffering a bit from
lack of love at the moment. It might be worth putting in a PR.
Thanks for the response,
File system UFS2 (with softupdates
I have followed the method advocated by Drew Tomlinson on this list
(October 2002) to create a Vinum volume without losing data. Everything
worked ok for the first drive... and then I added the second, and grew
the file system using growfs ... and then added the third subdisk to the
plex
to the vinum configuration, went to growfs (which has saved
me more time than you can imagine, thanks freebsd) and no luck, newfs,
no luck, mount the old one without modifying? no luck...it just breaks
after making it larger than 1 TB. I've tried this on two boxes, various
drive configs. I can
data.p0
|
| That worked well, I had a 115GB volume, I now have a 301GB one, I'm
| happy :)
|
| Now, growfs, so, I launch it :
|
| ...
| new file systemsize is: 78997019 frags
| growfs: wtfs: write error: 631976157: Inappropriate ioctl for device
|
| Can you check the state of the volume and its
Ok, I could not wait, so I did :
a create with :
drive vinumdrive1 device /dev/ad3e
sd name data.p0.s1 drive vinumdrive1 len 0
then :
attach data.p0.s1 data.p0
That worked well, I had a 115GB volume, I now have a 301GB one, I'm happy :)
Now, growfs, so, I launch it :
# growfs -N /dev/vinum
one, I'm happy
| :)
|
| Now, growfs, so, I launch it :
|
|# growfs -N /dev/vinum/data
| new file systemsize is: 78997019 frags
| Warning: 4312 sector(s) cannot be allocated.
| growfs: 308580.0MB (631971840 sectors) block size 32768, fragment size
| 4096 using 417 cylinder groups of 740.00MB
|# growfs /dev/vinum/data
| We strongly recommend you to make a backup before growing the Filesystem
|
| Did you backup your data (Yes/No) ? Yes
| new file systemsize is: 78997019 frags
| growfs: wtfs: write error: 631976157: Inappropriate ioctl for device
|
| And well, it does
one, I'm happy :)
Now, growfs, so, I launch it :
...
new file systemsize is: 78997019 frags
growfs: wtfs: write error: 631976157: Inappropriate ioctl for device
Can you check the state of the volume and its plexes and subdisks? If
they're all OK, can you run this with ktrace? I'd
super-time-intensive - from reading various posts and blogs
related to FreeBSD, it appears to me that I can resolve my issue by
using growfs - the next slice after / is /tmp which has plenty of room
free, and can afford to be reduced by a little. It doesn't seem to be
affecting system use
except that I can't add new users.
Here's what I look like now:
Filesystem   1K-blocks   Used   Avail   Capacity   Mounted on
/dev/ad0s1a
Problems have been resolved, thanks again for the help =)
-Rishi
Sergey 'DoubleF' Zaharchenko wrote:
On Thu, 04 Dec 2003 14:23:13 -0800
Rishi Chopra [EMAIL PROTECTED] probably wrote:
I get:
growfs: we are not growing (1048576 - 0)
So I took a look at your email, and you said:
You've grown
37% /usr
But when I try:
# umount /usr
# growfs /dev/da0s1e
I get:
growfs: we are not growing (1048576 - 0)
What am I missing? The /stand/sysinstall program had some problems with
a large /usr partition as well; when I tried installing the system
giving /usr all the remaining space on my
37% /usr
But when I try:
# umount /usr
# growfs /dev/da0s1e
You need to extend your e slice (with disklabel -e da0s1) first.
You've got three container objects: Fdisk slices, BSD partitions, and
filesystem. You've grown the slice, but you need to also expand the
partition before
 _________________________________
|______________DISK_______________|
|MBR|___Slice 1____|___Slice 2____|
|_a_|___b___|__d___|______e_______|
|_/_|_______|_/var_|____/usr______|
Run growfs on da0s2e and grow the filesystem.
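The three container objects map onto three resize steps, run outermost first. A sketch using the device names from the example above, assuming a fresh backup and an unmounted target:

```shell
fdisk -u da0         # 1. grow slice 2 at the MBR layer (interactive)
bsdlabel -e da0s2    # 2. grow the e partition inside the slice
umount /usr
growfs /dev/da0s2e   # 3. grow the UFS filesystem to fill the partition
fsck -y /dev/da0s2e  # sanity-check before remounting
mount /usr
```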
Hello,
I was wondering if there was a way to increase the size of
the / partition using growfs? i was thinking of trying
this by booting to a FreeBSD live disk and giving it a
shot. Is this the proper way to perform this type of
growth?
Scott
hi scott,
i believe you can only use growfs with contiguous disk space.
but, with unix you don't need to increase the size of the freebsd partition
( actually called slice in this context ). you do not need to use growfs to
use more space. you have at least three other options that will let you
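One of those options — leaving the slice alone, relocating a large tree onto a new filesystem, and symlinking it back — can be sketched like this. The newfs/mount lines need a real spare device (names are illustrative), so the demonstration rehearses the move-and-symlink part on plain directories:

```shell
# on a real system: newfs /dev/ad1s1e && mount /dev/ad1s1e /newdisk
base=$(mktemp -d)                 # stand-in roots so the sketch is self-contained
mkdir -p "$base/usr/local/bin" "$base/newdisk"
echo demo > "$base/usr/local/bin/tool"
cp -Rp "$base/usr/local" "$base/newdisk/"     # copy the tree, preserving modes
mv "$base/usr/local" "$base/usr/local.old"    # keep the original until verified
ln -s "$base/newdisk/local" "$base/usr/local" # old path now points at new home
cat "$base/usr/local/bin/tool"                # prints: demo
```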
Hello all,
So I've been reading the man page for growfs and I'm ready to use it,
however, I'm concerned about not utilizing it properly and destroying
things. Here's what I got:
I have my primary drive(ad0) split up like this. The first partition is
a 6GB space which holds all of the slices
Hello all,
So I was reading this growfs howto
http://www.daemonnews.org/200111/growfs.html
I'm ready to give it a shot..however, i'm VERY concerned
about messing things up. Here's the deal. The example
here shows a fsize of 1024 while mine is 2048. I'm
confused by the calculations as I'm
, Then concatenate the 30 gig to the 40 gig and use growfs to
have one 70 gig drive.
I know that on 5.0R growfs would not work with a vinum volume. I found a
bug report (1) saying it was fixed but am unsure if this means it was
included in 5.1R.
Assuming growfs will work all I should have to do is change
, FreeBSD
still thinks I've only got a 20G drive and I have 20 Gigs of unused
space. I was thinking of using growfs in order to grow my /usr
partition to fill the rest of the drive. I've read the man pages for
growfs, bsdlabel and fdisk but I'm a bit confused about how I should go
about
Hi all,
I am looking at the promise ultratrak RM 15000
(http://www.promise.com/product/product_detail_eng.asp?productId=109&familyId=6)
Raid appliance with a 3TB disk configuration. This box connects
to the host with a SCSI 160 interface which is no problem, and as I
understand it UFS2 is
I finally succeeded in adding a new drive to my concat volume (by attaching
it as a subdisk) but when I try to use growfs it says:
growfs: wtfs: write error: 160809993: Undefined error: 0
'growfs -N xxx' gives no errors.
What is preventing me from growing my file system? Any help would be very