Re: Samba read speed performance tuning

2010-03-20 Thread Dan Naumov
On Sat, Mar 20, 2010 at 3:49 AM, Gary Gatten <ggat...@waddell.com> wrote:
> It MAY make a big diff, but make sure during your tests you use unique files
> or flush the cache, or you'll be testing cache speed and not disk speed.

Yeah, I did make sure to use unique files for testing the effects of
prefetch. This is an Atom D510 / Supermicro X7SPA-H / 4GB RAM system with
2 x slow 2TB WD Green (WD20EADS) disks (32MB cache) in a ZFS mirror, after
enabling prefetch:
Code:

bonnie -s 8192

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 29065 68.9 52027 39.8 39636 33.3 54057 95.4 105335 34.6 174.1 7.9

DD read:
dd if=/dev/urandom of=test2 bs=1M count=8192
dd if=test2 of=/dev/zero bs=1M
8589934592 bytes transferred in 76.031399 secs (112978779 bytes/sec)
(107.74 MB/s)
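
(For anyone repeating these runs: one rough way to keep the ZFS ARC from
skewing the numbers, per Gary's point, is a fresh test file per run that is
larger than RAM, or an export/import cycle between runs. A minimal sketch;
the pool name "tank" and the file path are just placeholders:)

# write a new, unique test file each run, bigger than the 4GB of RAM,
# so reads cannot be served entirely from the ARC
dd if=/dev/urandom of=/tank/test.$(date +%s) bs=1M count=8192

# or evict the pool's cached data entirely between runs
zpool export tank
zpool import tank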


Read speed of an individual disk: 75MB/s
Reading off a mirror of 2 disks with prefetch disabled: 60MB/s
Reading off a mirror of 2 disks with prefetch enabled: 107MB/s


- Sincerely,
Dan Naumov


Samba read speed performance tuning

2010-03-19 Thread Dan Naumov
On a FreeBSD 8.0-RELEASE/amd64 system with a Supermicro X7SPA-H board
using an Intel gigabit NIC with the em driver, running on top of a ZFS
mirror, I was seeing a strange issue. Local reads and writes to the
pool easily saturate the disks with roughly 75MB/s throughput, which
is roughly the best these drives can do. However, when working with Samba,
writes to a share could easily pull off 75MB/s and saturate the disks,
but reads off a share were resulting in a rather pathetic 18MB/s
throughput.

I found a thread on the FreeBSD forums
(http://forums.freebsd.org/showthread.php?t=9187) and followed the
suggested advice. I rebuilt Samba with AIO support, kldloaded the aio
module and made the following changes to my smb.conf:

From:
socket options=TCP_NODELAY

To:
socket options=SO_RCVBUF=131072 SO_SNDBUF=131072 TCP_NODELAY
min receivefile size=16384
use sendfile=true
aio read size = 16384
aio write size = 16384
aio write behind = true
dns proxy = no
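
(For completeness, loading the aio module by hand and making it persist
across reboots should be just the following standard FreeBSD commands:)

# load the POSIX AIO kernel module right away
kldload aio

# and have the loader pull it in automatically at boot
echo 'aio_load="YES"' >> /boot/loader.conf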

This showed a very welcome improvement in read speed: I went from
18MB/s to 48MB/s. The write speed remained unchanged and was still
saturating the disks. Next, I tried the suggested sysctl tunables:

atombsd# sysctl net.inet.tcp.delayed_ack=0
net.inet.tcp.delayed_ack: 1 -> 0

atombsd# sysctl net.inet.tcp.path_mtu_discovery=0
net.inet.tcp.path_mtu_discovery: 1 -> 0

atombsd# sysctl net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.recvbuf_inc: 16384 -> 524288

atombsd# sysctl net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.recvbuf_max: 262144 -> 16777216

atombsd# sysctl net.inet.tcp.sendbuf_inc=524288
net.inet.tcp.sendbuf_inc: 8192 -> 524288

atombsd# sysctl net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendbuf_max: 262144 -> 16777216

atombsd# sysctl net.inet.tcp.sendspace=65536
net.inet.tcp.sendspace: 32768 -> 65536

atombsd# sysctl net.inet.udp.maxdgram=57344
net.inet.udp.maxdgram: 9216 -> 57344

atombsd# sysctl net.inet.udp.recvspace=65536
net.inet.udp.recvspace: 42080 -> 65536

atombsd# sysctl net.local.stream.recvspace=65536
net.local.stream.recvspace: 8192 -> 65536

atombsd# sysctl net.local.stream.sendspace=65536
net.local.stream.sendspace: 8192 -> 65536
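
(To make these tunables survive a reboot, they can go into /etc/sysctl.conf,
same names, one per line; a minimal sketch:)

# /etc/sysctl.conf -- applied automatically at boot
net.inet.tcp.delayed_ack=0
net.inet.tcp.path_mtu_discovery=0
net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=524288
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendspace=65536
net.inet.udp.maxdgram=57344
net.inet.udp.recvspace=65536
net.local.stream.recvspace=65536
net.local.stream.sendspace=65536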

This improved the read speed a further tiny bit: I went from
48MB/s to 54MB/s. That's it, however; I can't figure out how to
increase Samba read speed any further. Any ideas?


Re: Samba read speed performance tuning

2010-03-19 Thread Dan Naumov
On Fri, Mar 19, 2010 at 11:14 PM, Dan Naumov <dan.nau...@gmail.com> wrote:
> [original message quoted in full; snipped, see above]


Oh my god... Why did no one tell me what an enormous performance
boost vfs.zfs.prefetch_disable=0 (i.e. actually enabling prefetch) is?
My local reads off the mirror pool jumped from 75MB/s to 96MB/s (they are
now nearly 25% faster than reading off an individual disk) and reads off
a Samba share skyrocketed from 50MB/s to 90MB/s.

By default, FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386
system and on any amd64 system with less than 4GB of available
memory. My system is amd64 with 4GB of RAM, but the integrated video eats
some of that, so the autotuning disabled prefetch. I had read up on it,
and a fair number of people seemed to have performance issues caused by
having prefetch enabled and got better results with it turned off; in my
case, however, enabling it gave a really solid boost to performance.
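
(For anyone wanting to check what the autotuning picked on their own box
and force prefetch on, something like this should do it; the loader.conf
tunable takes effect on the next boot:)

# 1 means prefetch is disabled, 0 means it is enabled
sysctl vfs.zfs.prefetch_disable

# override the memory-based autotuning at the next boot
echo 'vfs.zfs.prefetch_disable="0"' >> /boot/loader.conf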


- Sincerely
Dan Naumov


Re: Samba read speed performance tuning

2010-03-19 Thread Gary Gatten
It MAY make a big diff, but make sure during your tests you use unique files
or flush the cache, or you'll be testing cache speed and not disk speed.

- Original Message -
From: owner-freebsd-questi...@freebsd.org
To: freebsd-...@freebsd.org; freebsd-questions@freebsd.org;
FreeBSD-STABLE Mailing List freebsd-sta...@freebsd.org;
freebsd-performa...@freebsd.org
Sent: Fri Mar 19 20:28:02 2010
Subject: Re: Samba read speed performance tuning

[Dan Naumov's reply of Fri, Mar 19, 2010 quoted in full; snipped, see above]

Re: Samba read speed performance tuning

2010-03-19 Thread Adam Vande More
On Fri, Mar 19, 2010 at 8:28 PM, Dan Naumov <dan.nau...@gmail.com> wrote:

> [Dan's message about the performance boost from enabling prefetch quoted
> in full; snipped, see above]


My home VBox server has similar specs, and I enabled prefetch from the
start. A few days ago I added an Intel SSD as the zpool cache device, and
the read speed is mind-blowing now. This is from inside a VM running on it,
meaning ad0 is really a VDI. Once the cache is populated, HD latency is
mostly a thing of the past.
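
(Roughly what adding the cache device looks like; the pool and device names
here are placeholders:)

# attach an SSD (here ada1) to the pool "tank" as an L2ARC cache device
zpool add tank cache ada1

# confirm it shows up under a "cache" heading
zpool status tank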

# diskinfo -tv /dev/ad0
/dev/ad0
        512                 # sectorsize
        12884901888         # mediasize in bytes (12G)
        25165824            # mediasize in sectors
        24966               # Cylinders according to firmware.
        16                  # Heads according to firmware.
        63                  # Sectors according to firmware.
        VBf9752473-05343e4e # Disk ident.

Seek times:
        Full stroke:      250 iter in   0.082321 sec =    0.329 msec
        Half stroke:      250 iter in   0.078944 sec =    0.316 msec
        Quarter stroke:   500 iter in   0.161266 sec =    0.323 msec
        Short forward:    400 iter in   0.128624 sec =    0.322 msec
        Short backward:   400 iter in   0.131770 sec =    0.329 msec
        Seq outer:       2048 iter in   0.667510 sec =    0.326 msec
        Seq inner:       2048 iter in   0.691691 sec =    0.338 msec
Transfer rates:
        outside:       102400 kbytes in   0.722864 sec =   141659 kbytes/sec
        middle:        102400 kbytes in   0.813619 sec =   125857 kbytes/sec
        inside:        102400 kbytes in   0.838129 sec =   122177 kbytes/sec



-- 
Adam Vande More
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org