Re: NFSv2 Wrong FS Size

2009-02-05 Thread perryh
   you could rebuild df to print its numbers as unsigned
   instead of signed.  Just watch out if your local filesystems
   start eating into their 8% reserve, since they'll start
   reporting huge values.
  
  Or patch df to print local filesystem sizes as signed -- so
  that the reserve reporting still works -- and NFS as unsigned
  to match the spec.

 That works as long as you don't NFS-mount other FreeBSD systems
 with overfull drives :)

Looking at this a little more closely, it appears that the struct
statfs returned by statfs(2) and friends tells the whole story,
using 64-bit values, most of which are defined as unsigned.  (Only
f_bavail and f_ffree -- the numbers of blocks and inodes available to
non-superusers -- are defined as signed.)  The code that converts
the 32-bit NFSv2 values to the 64-bit struct statfs fields seems
more likely to live somewhere in NFS than in df(1).
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: NFSv2 Wrong FS Size

2009-02-04 Thread perryh
 1755708928*1024/512 = 3511417856 blocks.  This number is larger
 than 2^31, which technically isn't a problem because the NFSv2
 spec says that the filesystem size is unsigned.  FreeBSD treats it
 as signed, though, so it can display negative free space when
 root starts using its 8% reserve; your unsigned 3511417856 then
 gets printed as a signed -783549440, which messes everything up.
...
 you could rebuild df to print its numbers as unsigned instead of
 signed.  Just watch out if your local filesystems start eating
 into their 8% reserve, since they'll start reporting huge values.

Or patch df to print local filesystem sizes as signed -- so that
the reserve reporting still works -- and NFS as unsigned to match
the spec.


Re: NFSv2 Wrong FS Size

2009-02-04 Thread Dan Nelson
In the last episode (Feb 04), per...@pluto.rain.com said:
  1755708928*1024/512 = 3511417856 blocks.  This number is larger than
  2^31, which technically isn't a problem because the NFSv2 spec says
  that the filesystem size is unsigned.  FreeBSD treats it as signed,
  though, so it can display negative free space when root starts using
  its 8% reserve; your unsigned 3511417856 then gets printed as a
  signed -783549440, which messes everything up.
 ...
  you could rebuild df to print its numbers as unsigned instead of
  signed.  Just watch out if your local filesystems start eating into
  their 8% reserve, since they'll start reporting huge values.
 
 Or patch df to print local filesystem sizes as signed -- so that the
 reserve reporting still works -- and NFS as unsigned to match the spec.

That works as long as you don't NFS-mount other FreeBSD systems with
overfull drives :)

-- 
Dan Nelson
dnel...@allantgroup.com


NFSv2 Wrong FS Size

2009-02-03 Thread John Morgan Salomon

Hi there,

I'm facing an odd problem with an NFSv2 mount.  I'm using userland  
nfsd from a Buffalo TeraStation Pro v1 NAS, running PPC Linux 2.4.20.


r...@leviathan:~# uname -a
Linux LEVIATHAN 2.4.20_mvl31-ppc_terastation #3 Tue Jul 18 09:29:11  
JST 2006 ppc GNU/Linux


I am sharing the following filesystem:

r...@leviathan:~# df -k
Filesystem   1K-blocks  Used Available Use% Mounted on
local filesystems
/dev/md1 1755708928 979032844 776676084  56% /mnt/array1

/etc/exports looks as follows:
/mnt/array1/data 192.168.2.0/255.255.255.0(rw,sync,insecure)

Mounting this on my MacBook Pro:

Fluffy:~ root# mount_nfs 192.168.2.11:/mnt/array1/data /mnt

Fluffy:~ root# df -k
Filesystem                     1024-blocks      Used Available Capacity  Mounted on
local filesystems
192.168.2.11:/mnt/array1/data   1755708928 979032844 776676084      56%  /mnt


So far, so good...

Mounting this on a FreeBSD 7.1 client:

behemoth# mount /data
behemoth# df -k
Filesystem                     1024-blocks        Used     Avail Capacity  Mounted on
local filesystems
192.168.2.11:/mnt/array1/data   -391774720 -1168450804 776676084     298%  /data


Here is my fstab:

192.168.2.11:/mnt/array1/data   /data   nfs rw  0   0

Woo.  298%!  That's a record, even for me.

I've tried mount_nfs with -2 and -T, and I can't think of anything
else.  There are no telling log messages, either on the NAS or on the
FreeBSD box.


behemoth# uname -a
FreeBSD behemoth 7.1-RELEASE-p2 FreeBSD 7.1-RELEASE-p2 #2: Sat Jan 31  
20:13:15 CET 2009 r...@behemoth:/usr/obj/usr/src/sys/BEHEMOTH  i386


Any ideas?  It's causing various PHP scripts that need an accurate
filesystem size to puke all over the place.  Help!


Thanks much for any thoughts,

-John


Re: NFSv2 Wrong FS Size

2009-02-03 Thread John Morgan Salomon

Hi there,

I may have found a clue on this in case anyone's interested:

the FreeBSD box runs on an Intel Atom 230 64-bit CPU

I did more digging and found this:

http://www.freebsd.org/projects/bigdisk/index.html

An audit is needed to make sure that all reported fields are 64-bit  
clean. There are reports with certain fields being incorrect or  
negative with NFS volumes, which could either be an NFS or df problem.


Not sure where to go now, as the last entry in that project is dated  
2005 -- again, any tips welcome.


-John

On 3 Feb 2009, at 19:21, John Morgan Salomon wrote:


...





Re: NFSv2 Wrong FS Size

2009-02-03 Thread Dan Nelson

In the last episode (Feb 03), John Morgan Salomon said:
 On 3 Feb 2009, at 19:21, John Morgan Salomon wrote:
  Hi there,
 
  I'm facing an odd problem with an NFSv2 mount.  I'm using userland  
  nfsd from a Buffalo TeraStation Pro v1 NAS, running PPC Linux 2.4.20.
 
  r...@leviathan:~# uname -a
  Linux LEVIATHAN 2.4.20_mvl31-ppc_terastation #3 Tue Jul 18 09:29:11 JST 
  2006 ppc GNU/Linux
 
  I am sharing the following filesystem:
 
  r...@leviathan:~# df -k
  Filesystem   1K-blocks  Used Available Use% Mounted on
  local filesystems
  /dev/md1 1755708928 979032844 776676084  56% /mnt/array1
 
  Mounting this on a FreeBSD 7.1 client:
 
  behemoth# mount /data
  behemoth# df -k
  Filesystem                    1024-blocks        Used     Avail Capacity  Mounted on
  local filesystems
  192.168.2.11:/mnt/array1/data  -391774720 -1168450804 776676084
 
 I did more digging and found this:
 
 http://www.freebsd.org/projects/bigdisk/index.html
 
 An audit is needed to make sure that all reported fields are 64-bit  
 clean. There are reports with certain fields being incorrect or  
 negative with NFS volumes, which could either be an NFS or df problem.
 
 Not sure where to go now, as the last entry in that project is dated  
 2005 -- again, any tips welcome.

The real problem is that NFSv2 only provides a 32-bit field for
filesystem size, which the client multiplies by the reported block
size.  Most NFS servers claim 512-byte blocks no matter what the
underlying filesystem uses, so in your case the filesystem size gets
reported as 1755708928*1024/512 = 3511417856 blocks.  This number is
larger than 2^31, which technically isn't a problem because the NFSv2
spec says that the filesystem size is unsigned.  FreeBSD treats it as
signed, though, so it can display negative free space when root
starts using its 8% reserve; your unsigned 3511417856 then gets
printed as a signed -783549440, which messes everything up.

NFSv3 uses 64-bit fields for those size values, so just mount with
NFSv3 (which is actually the default on FreeBSD; maybe you have it
disabled on your TeraStation for some reason) and you should get
correct filesystem stats, as well as better performance and the
ability to work with files over 2 GB.
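For reference, forcing the version explicitly on the FreeBSD side
would look something like this (host and paths are from this thread;
-3 and -T are standard mount_nfs(8) flags):

```shell
# Mount with NFSv3 over TCP (FreeBSD's default when the server allows it)
mount_nfs -3 -T 192.168.2.11:/mnt/array1/data /data

# or the /etc/fstab equivalent:
# 192.168.2.11:/mnt/array1/data  /data  nfs  rw,nfsv3,tcp  0  0
```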

Alternatively, you could rebuild df to print its numbers as unsigned
instead of signed.  Just watch out if your local filesystems start eating
into their 8% reserve, since they'll start reporting huge values.

-- 
Dan Nelson
dnel...@allantgroup.com


Re: NFSv2 Wrong FS Size

2009-02-03 Thread John Morgan Salomon
I was starting to suspect it might be something along these lines.
NFSv3 hasn't been possible so far because the hacked TeraStation
firmware on this particular platform (TS Pro v1) doesn't seem to
play nicely with kernel-level NFS (the userland nfsd only has
packages for v2, and I've been too intimidated to try rolling my
own so far.)


This explains a lot -- I thought it might be the result of my
running a normal 32-bit i386 release on a 64-bit CPU.  I will see if
I can get NFSv3 working.


(I guess Mac OS X/Mach/BSDI treats the size value as unsigned...)

-John

On 3 Feb 2009, at 22:53, Dan Nelson wrote:



...


