Re: Optimising NFS for system files

2008-12-31 Thread Peter Boosten



On 31 dec 2008, at 08:53, Bernard Dugas bern...@dugas-family.org  
wrote:



Wojciech Puchar wrote:


nfsserver# time tar -cf - clientusr-amd64 > /dev/null
5.001u 12.147s 1:23.92 20.4%    69+1369k 163345+0io 0pf+0w

client9# time tar -cf - /usr > /dev/null
tar: Removing leading '/' from member names
3.985u 19.779s 4:32.47 8.7% 74+1457k 0+0io 0pf+0w

Note : clientusr-amd64 is around 1.3GB and is the same directory  
exported to client9 /usr with nfs.



it's FAST. what's wrong?


First thing that may be wrong is the understanding of the time  
figures. The documentation is not clear about them, and the -h option  
is not working :


client6# time -h tar -cf - /usr > /dev/null
-h: Command not found.
0.000u 0.000s 0:00.00 0.0%  0+0k 0+0io 0pf+0w

The main thing is that the 3rd figures, 1:23.92 and 4:32.47, seem to  
be the time i wait in front of the computer while it works (ok, i  
know, i should enjoy a beer, or hot coffee with this nice snow ;-) :


client9# date ; time tar -cf - /usr > /dev/null ; date ;
Wed Dec 31 08:23:59 CET 2008
tar: Removing leading '/' from member names
4.103u 19.651s 4:25.80 8.9% 74+1453k 0+0io 2pf+0w
Wed Dec 31 08:28:25 CET 2008

and 08:28:25 - 08:23:59 = 00:04:26 is very close to 4:25.80.

On server, it means : 1440MB / 84s = 17MB/s
On client, that becomes : 1440MB / 266s = 5.4MB/s

I know the disk is not very fast, but i would like the NFS layer not  
to add too much...


I don't want my users to wait 3 to 4 times longer because the  
computer is using NFS.


In my opinion there are more considerations than only nfs: the data is  
pulled twice over the network, and the tar process might initiate  
paging which is done over the network as well. The tar comparison is  
not a good one.


Peter
--
http://www.boosten.org
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: Optimising NFS for system files

2008-12-31 Thread Manolis Kiagias
Bernard Dugas wrote:
 Wojciech Puchar wrote:

 nfsserver# time tar -cf - clientusr-amd64 > /dev/null
 5.001u 12.147s 1:23.92 20.4%   69+1369k 163345+0io 0pf+0w

 client9# time tar -cf - /usr > /dev/null
 tar: Removing leading '/' from member names
 3.985u 19.779s 4:32.47 8.7% 74+1457k 0+0io 0pf+0w

 Note : clientusr-amd64 is around 1.3GB and is the same directory
 exported to client9 /usr with nfs.

 it's FAST. what's wrong?

 First thing that may be wrong is the understanding of the time
 figures. The documentation is not clear about them and the -h option
 is not working :

 client6# time -h tar -cf - /usr > /dev/null
 -h: Command not found.
 0.000u 0.000s 0:00.00 0.0%  0+0k 0+0io 0pf+0w

Just a sidenote, you are probably getting a version of time integrated
to your shell. The -h option works fine in /usr/bin/time, so run like this:

client6# /usr/bin/time -h tar -cf - /usr > /dev/null


Re: Optimising NFS for system files

2008-12-31 Thread Bernard Dugas
I am trying a memory disk on server to see the effect of hard drive 
performances, and also discovering the function :-)


The conclusion is that the memory disk is faster than this drive ;-)
 45MB/s vs 10MB/s

But the NFS access to the memory drive is still 5MB/s :-(

As there is no more hard drive involved, we know that there is a 
bottleneck at 5MB/s in the NFS layer on this system... Where ?


Thanks a lot for any help on the method to find/diagnose this !

Details are below :

nfsserver# uname -a
FreeBSD nfsserver 7.1-RC1 FreeBSD 7.1-RC1 #0: Sun Dec  7 00:38:13 UTC 
2008 r...@driscoll.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64

nfsserver# mdconfig -a -t swap -s 600m -o reserve -u 7
nfsserver# ls /dev/md*
/dev/md7    /dev/mdctl
nfsserver# newfs -i 2048 /dev/md7
Reduced frags per cylinder group from 60864 to 59928 to enlarge last cyl 
group

/dev/md7: 600.0MB (1228800 sectors) block size 16384, fragment size 2048
using 6 cylinder groups of 117.05MB, 7491 blks, 59968 inodes.
super-block backups (for fsck -b #) at:
 160, 239872, 479584, 719296, 959008, 1198720
nfsserver# mkdir /tstnfs
nfsserver# mount /dev/md7 /tstnfs

nfsserver# date ; time tar -cf - 
/nfsro/commun/clientusr-amd64-7.2-RC2-20081230/ports > /dev/null ; date ;

Wed Dec 31 09:11:08 CET 2008
tar: Removing leading '/' from member names
3.794u 8.766s 0:46.40 27.0% 71+1406k 123375+0io 0pf+0w
Wed Dec 31 09:11:54 CET 2008

That makes 498MB / 46s = 10.8MB/s for disk drive.

nfsserver# date ; cp -r -p 
/nfsro/commun/clientusr-amd64-7.2-RC2-20081230/ports /tstnfs/ports ; date

Wed Dec 31 09:33:09 CET 2008
Wed Dec 31 09:34:46 CET 2008

df -h
/dev/md7      512M    498M    -27M   106%    /tstnfs

nfsserver# date ; time tar -cf - /tstnfs/ports > /dev/null ; date ;
Wed Dec 31 09:36:59 CET 2008
tar: Removing leading '/' from member names
2.947u 6.218s 0:10.61 86.2% 74+1463k 104885+0io 0pf+0w
Wed Dec 31 09:37:10 CET 2008
nfsserver# date ; time tar -cf - /tstnfs/ports > /dev/null ; date ;
Wed Dec 31 09:37:12 CET 2008
tar: Removing leading '/' from member names
2.895u 6.487s 0:11.01 85.1% 74+1466k 112838+0io 0pf+0w
Wed Dec 31 09:37:23 CET 2008
nfsserver# date ; time tar -cf - /tstnfs/ports > /dev/null ; date ;
Wed Dec 31 09:40:22 CET 2008
tar: Removing leading '/' from member names
2.902u 6.610s 0:11.10 85.6% 75+1483k 113393+0io 0pf+0w
Wed Dec 31 09:40:33 CET 2008

That makes 498MB / 11s = 45MB/s : better than 10MB/s for disk, but not 
exceptional.


Now on the client :

client9# uname -a
FreeBSD client9 7.1-RC2 FreeBSD 7.1-RC2 #0: Tue Dec 23 11:42:13 UTC 2008 
r...@driscoll.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64


client9# mount -o ro nfsserver:/tstnfs /tstnfs
client9# df -h
nfsserver:/tstnfs     512M    498M      0B   100%    /tstnfs


client9# date ; time tar -cf - /tstnfs/ports > /dev/null ; date ;
Wed Dec 31 09:50:39 CET 2008
tar: Removing leading '/' from member names
2.896u 13.020s 1:35.22 16.7%    75+1483k 0+0io 2pf+0w
Wed Dec 31 09:52:14 CET 2008
client9# date ; time tar -cf - /tstnfs/ports > /dev/null ; date ;
Wed Dec 31 09:52:22 CET 2008
tar: Removing leading '/' from member names
2.700u 12.755s 1:27.78 17.6%    76+1498k 0+0io 0pf+0w
Wed Dec 31 09:53:50 CET 2008
client9# date ; time tar -cf - /tstnfs/ports > /dev/null ; date ;
Wed Dec 31 09:55:02 CET 2008
tar: Removing leading '/' from member names
2.681u 12.688s 1:28.15 17.4%    74+1464k 0+0io 0pf+0w
Wed Dec 31 09:56:30 CET 2008

That makes between 95s and 87s, then 498MB / 95s = 5.2MB/s and 
498MB / 87s = 5.7MB/s, like the previous test from the hard drive NFS export.



Top is showing around 100MB of free memory while taring on client9, so i 
don't think tar is paging over the network :
last pid:  3318;  load averages:  0.17,  0.11,  0.04up 0+11:14:27 
10:08:10

30 processes:  1 running, 29 sleeping
CPU:  0.8% user,  0.0% nice,  9.0% system,  0.0% interrupt, 90.2% idle
Mem: 19M Active, 720M Inact, 136M Wired, 240K Cache, 110M Buf, 98M Free



Best regards,
--
Bernard DUGAS Mobile +33 615 333 770


Re: Optimising NFS for system files

2008-12-31 Thread Bernard Dugas

Peter Boosten wrote:

On server, it means : 1440MB / 84s = 17MB/s
On client, that becomes : 1440MB / 266s = 5.4MB/s

I know the disk is not very fast, but i would like the NFS layer not  
to add too much...


I don't want my users to wait between 3 or 4 times more because  
computer is using NFS.


In my opinion there are more considerations than only nfs: the data is  
pulled twice over the network, and the tar process might initiate  
paging which is done over the network as well. The tar comparison is  
not a good one.


I would welcome any way to check that idea on the system.

But :
- tar is directed to /dev/null so that should avoid any physical writing ;
- there is still memory FREE on both server and client while taring ;
- the effect of tar is the same on server and client, so the induced 
error should be the same on both.


Thanks a lot,
Best regards,
--
Bernard DUGAS Mobile +33 615 333 770


Re: Optimising NFS for system files

2008-12-31 Thread Bernard Dugas

Manolis Kiagias wrote:

First thing that may be wrong is the understanding of the time
figures. The documentation is not clear about them and the -h option
is not working :

client6# time -h tar -cf - /usr > /dev/null
-h: Command not found.
0.000u 0.000s 0:00.00 0.0%  0+0k 0+0io 0pf+0w


Just a sidenote, you are probably getting a version of time integrated
to your shell. The -h option works fine in /usr/bin/time, so run like this:

client6# /usr/bin/time -h tar -cf - /usr > /dev/null


Very true, this is it :-) Thanks a lot for your help !

Best regards,
--
Bernard DUGAS Mobile +33 615 333 770


Re: Optimising NFS for system files

2008-12-31 Thread Bernard Dugas

usleep wrote:
 - Second installation
  - FreeNAS, RAID0
  - Tested throughput ( to local RAID0 ):
  - ftp: 82MB/s
  - nfs: 75MB/s
  - cifs/samba: 42MB/s

Thanks a lot for these clear references !

 Test issues ( things that get you confused )
   - if you expect to be able to copy a file at Gigabit speeds, you
 need to be able to write as fast to  your local disk as well. So to
 reliably test SAN/NAS performance at Gigabit speeds you need RAID at
  the server and at the client. Or write to /dev/null

This is what i did with the tar. What function do you use for testing ?

   - if you repeatedly test with the same file, it will get cached into
 memory of the NAS. so you won't be testing throughput
 disk->network->disk anymore: you are testing
 NAS->memory->network->disk. I was testing with
 ubuntu-iso's, but with 1GB of memory, ISO's get cached as well.

i was fearing that, but it seems that does not happen in my testing, as 
repeating the same test gives the same result. I don't know why it does 
not happen...


  - if you repeatedly test with the same file, and you have enough
 local memory, and you test with nfs or cifs/samba, the file will get
  cached locally as well. this results into transfer-speeds to
 /dev/null exceeding 100MB/s ( Gigabit speeds ). i have observed
 transfer speeds to /dev/null of 400MB/s!

I would love to see that behaviour. But it didn't happen in my 
testing either :-(


 We now have a NAS that performs faster than local disk.
 
Is it the FreeNAS you describe in your testing or a new one ?

 We plan to use it to run development-virtual-machines on.

This is also my target :-) I will have some high density servers (6 
independent servers in 1U) and am trying to master the freebsd diskless 
process first...


Best regards,
--
Bernard DUGAS Mobile +33 615 333 770


Re: Optimising NFS for system files

2008-12-30 Thread Bernard Dugas

Wojciech Puchar wrote:
i can see a reading speed difference 4 times slower on client than on 
server (time tar -cf - /usr > /dev/null).


I will play with jumbo MTU for network performance, but would anybody 
know if i can ask system files NFS exports to stay in server memory ? 
I have less than 2GB to share and 2GB DDR2 is affordable.


you don't have to.


So you don't think that if all files are already in RAM on server, i 
will save the drive access time ?


Or do you think the NFS network access is so much slow that the disk 
access time is just marginal ?


Do you think i should use something more efficient than NFS ?

Best regards,
--
Bernard DUGAS Mobile +33 615 333 770


Re: Optimising NFS for system files

2008-12-30 Thread Michel Talon
Bernard Dugas wrote:

 So you don't think that if all files are already in RAM on server, i 
 will save the drive access time ?
 
 Or do you think the NFS network access is so much slow that the disk 
 access time is just marginal ?
 
 Do you think i should use something more efficient than NFS ?

The VM system in principle does a good job of keeping in memory files
which are frequently accessed, so you should not have to do anything
special, and moreover i don't think there exists something convenient
to force some files in memory (and this would be detrimental to the
global throughput of the server).

As to NFS speed, you should experiment with NFS on TCP and run a large
number of nfsd on the server (see nfs_server_flags in rc.conf). For
example -n 6 or -n 8. Maybe also experiment with the readsize and
writesize. Anyways, i don't think you can expect the same throughput
via NFS (say 10 MB/s, or more on Gig ethernet) as on a local disk
(40 MB/s or more).  
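As a hedged sketch of that suggestion (the flag values below are examples to
experiment with, not tested recommendations), the server side could carry
something like this in /etc/rc.conf:

```shell
# /etc/rc.conf fragment on the NFS server -- example values only
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 8"   # serve UDP and TCP, run 8 nfsd threads
```

On the client, TCP and larger read/write sizes can then be requested per
mount, e.g. mount -t nfs -o tcp,rsize=32768,wsize=32768 nfsserver:/tstnfs /tstnfs.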

-- 

Michel TALON



Re: Optimising NFS for system files

2008-12-30 Thread Wojciech Puchar

less than 2GB to share and 2GB DDR2 is affordable.


you don't have to.


So you don't think that if all files are already in RAM on server, i will 
save the drive access time ?


FreeBSD automatically uses all free memory as cache.



Re: Optimising NFS for system files

2008-12-30 Thread Wojciech Puchar

As to NFS speed, you should experiment with NFS on TCP and run a large
number of nfsd on the server (see nfs_server_flags in rc.conf). For
example -n 6 or -n 8. Maybe also experiment with the readsize and
writesize. Anyways, i don't think you can expect the same throughput
via NFS (say 10 MB/s, or more on Gig ethernet) as on a local disk


anyway even a relatively slow computer (like a pentium 200) can easily 
saturate fast ethernet. mostly network speed is the limit.


there is a slowdown because the network introduces a slight delay, but a 
few ms at most if the network is made properly



Re: Optimising NFS for system files

2008-12-30 Thread Bernard Dugas

Wojciech Puchar wrote:
So you don't think that if all files are already in RAM on server, i 
will save the drive access time ?


FreeBSD automatically uses all free memory as cache.


OK

 there is slowdown because network introduces slight delay,
 but few ms at most if network is made properly

This is a Gbps network with only 1 switch between the nfs server and client, 
with less than 0.2ms ping. So bandwidth should not be a problem; it seems 
that NFSv3 is the limitation...


Trying to change mtu, but it doesn't look easy; where can i find the 
possible range for ports ?


Best regards,
--
Bernard DUGAS Mobile +33 615 333 770


Re: Optimising NFS for system files

2008-12-30 Thread Vincent Hoffman
Bernard Dugas wrote:
 Wojciech Puchar wrote:
 So you don't think that if all files are already in RAM on server, i
 will save the drive access time ?

 FreeBSD automatically use all free memory as cache.

 OK

  there is slowdown because network introduces slight delay,
  but few ms at most if network is made properly

 This is a Gbps network with only 1 switch between nfs server and
 client, with less than 0.2ms ping. So bandwidth should not be a
 problem, seems that NFSV3 is the limitation...

 Trying to change mtu, but don't look easy, where can i find the
 possible range for ports ?

MTU can be a pain, check what your switch supports, and the manpage for
your network driver should say what MTU the nic supports.
mtu is set using ifconfig (or the ifconfig_$nic line in rc.conf) :
from man ifconfig
mtu n   Set the maximum transmission unit of the interface to n, default
        is interface specific.  The MTU is used to limit the size of
        packets that are transmitted on an interface.  Not all interfaces
        support setting the MTU, and some interfaces have range
        restrictions.
from man em  (for example)
     Support for Jumbo Frames is provided via the interface MTU setting.
     Selecting an MTU larger than 1500 bytes with the ifconfig(8) utility
     configures the adapter to receive and transmit Jumbo Frames.  The
     maximum MTU size for Jumbo Frames is 16114.
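As an illustrative sketch only (em0, the address and the 9000-byte value are
assumptions; both NIC and switch must support jumbo frames, and client and
server MTUs should match), the setting can be made persistent in /etc/rc.conf:

```shell
# /etc/rc.conf fragment -- em0, the address and mtu 9000 are example values
ifconfig_em0="inet 192.168.1.10 netmask 255.255.255.0 mtu 9000"
```

or applied on the fly with: ifconfig em0 mtu 9000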



Vince
 Best regards,



Re: Optimising NFS for system files

2008-12-30 Thread Wojciech Puchar



there is slowdown because network introduces slight delay,
but few ms at most if network is made properly


This is a Gbps network with only 1 switch between nfs server and
client, with less than 0.2ms ping. So bandwidth should not be a


it should work with near-wire speed on 100Mbit clients.


Re: Optimising NFS for system files

2008-12-30 Thread Bernard Dugas

Vince wrote:

Trying to change mtu, but don't look easy, where can i find the
possible range for ports ?



MTU can be a pain, check what your switch supports, and the manpage for
your network driver should say what MTU the nic supports.


Thank you for the method !

It seems that em and re are not behaving like they should :

re(4) says :
The 8169, 8169S and 8110S also support jumbo frames, which can be 
configured via the interface MTU setting.  The MTU is limited to 7422, 
since the chip cannot transmit larger frames. 


But :
nfsserver# ifconfig re0 -mtu 7422
ifconfig: -mtu: bad value
nfsserver# ifconfig re0 -mtu 7421
ifconfig: -mtu: bad value

It should be a Realtek RTL 8111c but i don't know where to find the 
relationship to what pciconf -l gives me :
r...@pci0:2:0:0: class=0x02 card=0xe0001458 chip=0x816810ec rev=0x02 
hdr=0x00


em(4) says :
The maximum MTU size for Jumbo Frames is 16114.

But :
client9# ifconfig em1 -mtu 8192
ifconfig: -mtu: bad value

with :
Dec 30 16:02:36 client9 kernel: em1: Intel(R) PRO/1000 Network 
Connection 6.9.6 port 0x7f00-0x7f1f mem 0xfd4e-0xfd4f irq 17 at 
device 0.0 on pci7


client1# ifconfig em0 -mtu 8192
ifconfig: -mtu: bad value

with :
Dec 30 18:10:38 client1 kernel: em0: Intel(R) PRO/1000 Network 
Connection 6.9.6 port 0xfe00-0xfe1f mem 
0xfdfc-0xfdfd,0xfdffe000-0xfdffefff irq 20 at device 25.0 on pci0


Now i understand better MTU can be a pain ;-)

Best regards,
--
Bernard DUGAS Mobile +33 615 333 770


Re: Optimising NFS for system files

2008-12-30 Thread Matthew Seaman

Bernard Dugas wrote:


But :
nfsserver# ifconfig re0 -mtu 7422
ifconfig: -mtu: bad value
nfsserver# ifconfig re0 -mtu 7421
ifconfig: -mtu: bad value


Syntax error on the ifconfig command line:

% ifconfig de0 
de0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500

[...]
% sudo ifconfig de0 mtu 1460 
% ifconfig de0

de0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1460
   [...]
% sudo ifconfig de0 mtu 1500
% sudo ifconfig de0 -mtu 1460
ifconfig: -mtu: bad value

It's 'mtu ' not '-mtu '

Cheers,

Matthew


--
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard
 Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
 Kent, CT11 9PW





Re: Optimising NFS for system files

2008-12-30 Thread Bernard Dugas

Wojciech Puchar wrote:

This is a Gbps network with only 1 switch between nfs server and
client, with less than 0.2ms ping. So bandwidth should not be a


it should work with near-wire speed on 100Mbit clients.


Server and clients are 1Gbps.

But i have a factor of 4 in performance for reading only ...

nfsserver# time tar -cf - clientusr-amd64 > /dev/null
5.001u 12.147s 1:23.92 20.4%    69+1369k 163345+0io 0pf+0w

client9# time tar -cf - /usr > /dev/null
tar: Removing leading '/' from member names
3.985u 19.779s 4:32.47 8.7% 74+1457k 0+0io 0pf+0w

Note : clientusr-amd64 is around 1.3GB and is the same directory 
exported to client9 /usr with nfs.


I have tried on 7.1-RC1 and 7.1-RC2, with amd64 architecture.

CPU doesn't seem to be the limiting factor, more than 80% idle on server, 
whether on nfsserver :
Dec 23 04:52:18 nfsserver kernel: CPU: Intel(R) Celeron(R) CPU 
E1200  @ 1.60GHz (1600.01-MHz K8-class CPU)
Dec 23 04:52:18 nfsserver kernel: Origin = GenuineIntel  Id = 0x6fd 
Stepping = 13


or on client9 :
/var/log/messages.3:Dec 29 12:21:20 client9 kernel: CPU: Intel(R) 
Core(TM)2 Duo CPU E4500  @ 2.20GHz (2200.01-MHz K8-class CPU)


If anybody can help me to look at the right places... ? How may i divide 
the problem ?


Or is my simple test wrong ? I use a tar directed to /dev/null to avoid 
any writing.


Best regards,
--
Bernard DUGAS Mobile +33 615 333 770


Re: Optimising NFS for system files

2008-12-30 Thread Bernard Dugas

Matthew Seaman wrote:

It's 'mtu ' not '-mtu '


I'm confused, thanks so much !

There were no options without - in my old unix time ;-)

Thanks to you, it seems that my max mtu is 9216 on em :

client9# ifconfig em1 mtu 9216
client9# ifconfig em1 mtu 9217
ifconfig: ioctl (set mtu): Invalid argument

Max mtu is changing on re :
nfsserver# ifconfig re0 mtu 1504
nfsserver# ifconfig re0 mtu 1505
ifconfig: ioctl (set mtu): Invalid argument

But another re accept 7422 :
client6# ifconfig re0 mtu 7422
client6# ifconfig re0 mtu 7423
ifconfig: ioctl (set mtu): Invalid argument

It seems that only testing can give the limit, this is not documented.

Best regards,
--
Bernard DUGAS Mobile +33 615 333 770


Re: Optimising NFS for system files

2008-12-30 Thread Wojciech Puchar

nfsserver# time tar -cf - clientusr-amd64 > /dev/null
5.001u 12.147s 1:23.92 20.4%    69+1369k 163345+0io 0pf+0w

client9# time tar -cf - /usr > /dev/null
tar: Removing leading '/' from member names
3.985u 19.779s 4:32.47 8.7% 74+1457k 0+0io 0pf+0w

Note : clientusr-amd64 is around 1.3GB and is the same directory exported to 
client9 /usr with nfs.



it's FAST. what's wrong?



Re: Optimising NFS for system files

2008-12-30 Thread usleepless
On 12/30/08, Michel Talon ta...@lpthe.jussieu.fr wrote:

 Bernard Dugas wrote:

  So you don't think that if all files are already in RAM on server, i
  will save the drive access time ?
 
  Or do you think the NFS network access is so much slow that the disk
  access time is just marginal ?
 
  Do you think i should use something more efficient than NFS ?


 The VM system in principle does a good job of keeping in memory files
 which are frequently accessed, so you should not have to do anything
 special, and moreover i don't think there exists something convenient
 to force some files in memory (and this would be detrimental to the
 global throughput of the server).

 As to NFS speed, you should experiment with NFS on TCP and run a large
 number of nfsd on the server (see nfs_server_flags in rc.conf). For
 example -n 6 or -n 8. Maybe also experiment with the readsize and
 writesize.

Anyways, i don't think you can expect the same throughput
 via NFS (say 10 MB/s, or more on Gig ethernet) as on a local disk
 (40 MB/s or more).


i disagree. i have recently installed a NAS by slapping FreeNAS on a
relatively old server ( P4 2.8Ghz ) and experimented with lots of stuff
because i was disappointed with the throughput. spoiler: 1st try 30MB/s, last
try 82MB/s.

hardware server:
 - intel server p4 3ghz, 1GB memory
 - onboard intel 1Gb fxp nic
 - 2 x barracuda 750GB disks
 - hp procurve 3500zl( ? )
 - OS: Freebsd 6.2 ( FreeNAS )

hardware linux workstation:
 - 2 x dual core, 2GB memory workstation
 - onboard intel 1Gb nic
 - 3 250GB disks
 - OS: Ubuntu 8.10

hardware windows workstation:
 - same
 - OS: Windows Server 2003

First installation
 -  FreeNAS, ignorant as i was: chose JBOD as disk-configuration. This is
Just a Bunch Of Disks, it just concats all the ( 2 pcs ) drives into 1 big
volume.
 - Tested throughput (cifs/samba), got about 40MB/s on my linux box. Tested
on the windows box: about 33MB/s.
 - Above measurements were achieved only after jumbo-frames and
send-receive-buffer optimisations ( won about 10% )

I was heavily disappointed with the results: i had installed a couple of
NAS-systems, which could easily reach 80MB/s or 140MB/s with two nic's
trunked.

To make a long story short: with Gigabit networking it is not the network
which is the bottle-neck: it is the local access to disks. So you need to
use lots of disks. So instead of JBOD you need to configure RAID0, RAID1 on
the file server etc to maximize disk throughput. That's why the NAS-systems
performed so well: they had 4 disks each.

- Second installation
 - FreeNAS, RAID0
 - Tested throughput ( to local RAID0 ):
 - ftp: 82MB/s
 - nfs: 75MB/s
 - cifs/samba: 42MB/s

Confused by the performance of cifs, i configured jumboframes, large
send/receive buffers for cifs/samba, freenas-tuning-options, polling etc. To
no avail, there seems to be another limit to cifs/samba performance (
FreeNAS has an optimized smb.conf btw).

Test issues ( things that get you confused )
  - if you expect to be able to copy a file at Gigabit speeds, you need to
be able to write as fast to your local disk as well. So to reliably test
SAN/NAS performance at Gigabit speeds you need RAID at the server and at the
client. Or write to /dev/null
  - if you repeatedly test with the same file, it will get cached into
memory of the NAS. so you won't be testing throughput disk->network->disk
anymore: you are testing NAS->memory->network->disk. I was testing with
ubuntu-iso's, but with 1GB of memory, ISO's get cached as well.
 - if you repeatedly test with the same file, and you have enough local
memory, and you test with nfs or cifs/samba, the file will get cached
locally as well. this results into transfer-speeds to /dev/null exceeding
100MB/s ( Gigabit speeds ). i have observed transfer speeds to /dev/null of
400MB/s!

The funny thing is i started this DIY-NAS with FreeNAS because we had a
cheap commercial NAS with 4 disks ( raid 5 ). We have had performance
troubles at 100Mbit, repeated authentication trouble ( integration with MSAD
), and when we upgraded our network to Gigabit, it only performed at 11MB/s!

We now have a NAS that performs faster than local disk. We plan to use it to
run development-virtual-machines on.

With Gigabit ethernet the network isn't the problem anymore: it's disks. You
need as much as you can get your hands on.

About your question about memory management: it is not needed and you don't
want it. tune nics, filesystems, memory, nfs-options and disks.
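For what it's worth, the network-buffer side of that tuning might start from
something like the following /etc/sysctl.conf fragment (the values are
illustrative starting points to experiment with, not recommendations):

```
# /etc/sysctl.conf fragment -- example starting points only
kern.ipc.maxsockbuf=2097152     # allow larger socket buffers
net.inet.tcp.sendspace=131072   # default TCP send buffer
net.inet.tcp.recvspace=131072   # default TCP receive buffer
```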

regards,

usleep


--


 Michel TALON





Re: Optimising NFS for system files

2008-12-30 Thread Bernard Dugas

Wojciech Puchar wrote:


nfsserver# time tar -cf - clientusr-amd64 > /dev/null
5.001u 12.147s 1:23.92 20.4%    69+1369k 163345+0io 0pf+0w

client9# time tar -cf - /usr > /dev/null
tar: Removing leading '/' from member names
3.985u 19.779s 4:32.47 8.7% 74+1457k 0+0io 0pf+0w

Note : clientusr-amd64 is around 1.3GB and is the same directory 
exported to client9 /usr with nfs.



it's FAST. what's wrong?


First thing that may be wrong is the understanding of the time figures. 
The documentation is not clear about them, and the -h option is not working :


client6# time -h tar -cf - /usr > /dev/null
-h: Command not found.
0.000u 0.000s 0:00.00 0.0%  0+0k 0+0io 0pf+0w

The main thing is that the 3rd figures, 1:23.92 and 4:32.47, seem to be 
the time i wait in front of the computer while it works (ok, i know, i 
should enjoy a beer, or hot coffee with this nice snow ;-) :


client9# date ; time tar -cf - /usr > /dev/null ; date ;
Wed Dec 31 08:23:59 CET 2008
tar: Removing leading '/' from member names
4.103u 19.651s 4:25.80 8.9% 74+1453k 0+0io 2pf+0w
Wed Dec 31 08:28:25 CET 2008

and 08:28:25 - 08:23:59 = 00:04:26 is very close to 4:25.80.

On server, it means : 1440MB / 84s = 17MB/s
On client, that becomes : 1440MB / 266s = 5.4MB/s

I know the disk is not very fast, but i would like the NFS layer not to 
add too much...


I don't want my users to wait 3 to 4 times longer because the computer 
is using NFS.


I have plenty of cpu and bandwidth available : something is slowing the 
process that should not... But what ? How to diagnose NFS ? Where should 
i look in a logical diagnosis process ?
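One hedged way to divide the problem is to take tar (and its per-file
overhead) out of the picture and time a raw sequential read with dd, first
against a file on the server's local or md-backed filesystem, then through
the NFS mount; the file names below are placeholders:

```shell
# Sketch: measure sequential read throughput with dd (paths are examples).
# bs=1048576 (1MB) is spelled out so it works with both BSD and GNU dd.
dd if=/dev/zero of=/tmp/nfs_test.bin bs=1048576 count=64 2>/dev/null
# Read it back; dd reports elapsed time and bytes/sec on stderr:
dd if=/tmp/nfs_test.bin of=/dev/null bs=1048576
# On client9, read the same file through the NFS mount instead, e.g.:
#   dd if=/tstnfs/nfs_test.bin of=/dev/null bs=1048576
rm -f /tmp/nfs_test.bin
```

If dd over NFS shows the same ~5MB/s while the local read is much faster,
the bottleneck is in the NFS/network path rather than in tar or the disk.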


Best regards
--
Bernard DUGAS Mobile +33 615 333 770


Re: Optimising NFS for system files

2008-12-29 Thread Wojciech Puchar


i can see a reading speed difference 4 times slower on client than on server 
(time tar -cf - /usr > /dev/null).


I will play with jumbo MTU for network performance, but would anybody know if 
i can ask system files NFS exports to stay in server memory ? I have less 
than 2GB to share and 2GB DDR2 is affordable.

you don't have to.