Re: [Samba] Transfer rates faster than 23MBps?

2006-09-29 Thread Aaron Kincer
Thought you might find this worth reading, since it's on topic for this 
thread:


http://techreport.com/reviews/2006q3/maxtor-diamondmax-11/index.x?pg=6

Mark Smith wrote:

Mark Smith wrote:
Actually, setting SNDBUF and RCVBUF to 65536 from the default of 8192 
is what got me _TO_ 22MBps...


...Ya know, I once tried increasing SNDBUF and RCVBUF to 256k but 
didn't see any difference.  I've also tried setting the kernel 
parameters to 256k, but never both at the same time.  Let me try that 
and see if it helps.


Nope.   Just tried.  Same as always, about 22MBps.

-Mark




Conclusion - Re: [Samba] Transfer rates faster than 23MBps?

2006-09-29 Thread Mark Smith

In the interest of closing this loop for the sake of the archives...  :)

My company has officially thrown in the towel on this issue.  Everything 
we're seeing suggests that 22MBps is about as fast as we're going to get 
with SMB.  RedHat, our parent company's IT department (much bigger than 
ours), the (read: this) Samba Mailing List, and all my techie friends 
all agree that we're at a limit.


For the parts of our process where higher speed is critical, we'll look 
into replacing SMB with a more appropriate protocol (e.g. FTP, or SCP 
hacked to allow a cipher of none).


I want to thank everyone who tried to help with suggestions of things to 
try, areas to look at, and whatnot.  Every time, I'd see something, say 
"Huh.  Haven't tried that!", and get hopeful.  Alas, none of it ended up 
helping, but that's not y'all's fault.  :)


Again, thanks everyone for your help.  Cheers.

-Mark

Mark Smith wrote:
We use SMB to transfer large files (between 1GB and 5GB) from RedHat AS4 
Content Storage servers to Windows clients with 6 DVD burners and 
robotic arms and other cool gadgets.  The servers used to be Windows 
based, but we're migrating to RedHat for a host of reasons.


Unfortunately, the RedHat Samba servers are about 2.5 times slower than 
the Windows servers.  Windows will copy a 1GB file in about 30 seconds, 
whereas it takes about 70 to 75 seconds to copy the same file from a 
RedHat Samba server.


I've asked Dr. Google and gotten all kinds of suggestions, most of which 
have already been applied by RedHat to the stock Samba config.  I've 
opened a ticket with RedHat.  They pointed out a couple errors in my 
config, but fixing those didn't have any effect.  Some tweaking, 
however, has gotten the transfer speed to about 50 seconds for that 1GB 
file.


But I seem to have hit a brick wall; my fastest time ever was 44 
seconds, but typically it's around 50.


I know it's not a problem with network or disk; if I use Apache and HTTP 
to transfer the same file from the same server, it transfers in about 15 
to 20 seconds.  Unfortunately, HTTP doesn't meet our other requirements 
for random access to the file.
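
For reference, a timed HTTP download along these lines reproduces that 
comparison (hostname and filename here are placeholders):

# discard the data; the elapsed time gives the MBps for the 1GB file
time wget -O /dev/null http://servername/bigfile.bin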


Do you folks use Samba for large file transfers at all?  Have you had 
any luck speeding it up past about 23MBps (the 44 second transfer 
speed)?  Any help you may have would be fantastic.  Thanks.


-Mark



Re: [Samba] Transfer rates faster than 23MBps?

2006-09-25 Thread Mark Smith

Doug VanLeuven wrote:

Mark Smith wrote:

I also tried your values, with the tcp_window_scaling, with no luck.
It's enabled by default, but I explicitly set the options that other 
options depend on.


Reasonable idea.  :)


I set up my test rig again.
Host server
2.6.12-1.1376_FC3, samba 3.0.23
Broadcom NetXtreme BCM5702X Gigabit, tg3 driver default config
Client
2.6.12-1.1381_FC3, samba 3.0.21pre3-SVN-build-11739
Intel Pro/1000, 82546GB Gigabit, e1000 driver default config
HD Drives on both are 45-50MBps

smbclient 26.7-27.2MBps
ftp 25.4 MBps (small window size)


Yeah, see, that's the difference.  With FTP and HTTP, I'm seeing the 
~60MBps numbers, but SMB is still down at about 22MBps, not even the 
27MBps you're seeing.



FWIW - I'm used to seeing CIFS performance numbers 5-10% slower than ftp.


5-10% wouldn't surprise me, but 70% slower disturbs me.

Using ethereal to capture the start of the transfers, I'm seeing windows 
ftp negotiate a 256960 window size, which is what I have specified in 
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpWindowSize, but 
linux samba establishes a window size of whatever is specified for 
SO_SNDBUF in socket options, or 8K by default.  So I set SO_SNDBUF=256960 
and it gave me the extra-large window and raised the speed up to 
27.3MBps (1MB = 1048576 bytes) - not enough to really address your concerns.  


Actually, setting SNDBUF and RCVBUF to 65536 from the default of 8192 is 
what got me _TO_ 22MBps...


...Ya know, I once tried increasing SNDBUF and RCVBUF to 256k but didn't 
see any difference.  I've also tried setting the kernel parameters to 
256k, but never both at the same time.  Let me try that and see if it helps.


Maybe it would be different on your system.  That's an issue for samba 
because it should allow for autonegotiation of the window size, and I 
don't know how to set that other than net.ipv4.tcp_window_scaling=1 (the 
default).  SO_SNDBUF & SO_RCVBUF are only limited by the /proc/sys 
values net.core.rmem_max and net.core.wmem_max, which you altered after 
the earlier post.


See above.  I've set both, but never at the same time.  Let me try that.

Comparing the linux ftp to linux samba transfer speeds, I don't think 
the answer lies in samba per se other than how the socket gets set up.  


I thought this too, but Ethereal shows that the Windows client is ACKing 
the TCP stream after only two or three packets, nowhere near the 
32k window size that is negotiated.  Even if Windows were delaying the 
ACKs, Linux would still be free to send more packets.  The sent packets 
are roughly evenly spaced in time, and we're getting them ACKed every 
two or three packets.  It really doesn't look like a TCP window size 
problem.  (This was the very first path I went down.)
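
A capture like that can be grabbed with tcpdump as well as Ethereal; a 
sketch, with the interface name as a placeholder:

# full packets (-s 0), SMB ports only, saved for later analysis
tcpdump -i eth0 -s 0 -w smb-start.pcap port 139 or port 445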


And it's not a linux issue either if you're getting those http numbers 
(I never see anything like that here).  Your Redhat is obviously tuned 
for those types of packets.  Maybe you're using the in-kernel optimized 
apache they offer.  If so, try a user space apache for comparison.


Nope.  Stock Apache 2 as distributed with RedHat AS4-U3.

I smacked up against these numbers 2 years ago.  Nothing much seems to 
have changed.  The numbers end up in the low to mid 200Mbps on copper 
Gigabit for user space applications.  If you ever fix it, pop me an 
email please.  I figured the answer would be PCI-X and 64-bit PCI, and 
higher front-side bus speeds.


I will definitely post whatever solution I find here.  Thanks for your 
help, Doug.


-Mark


Re: [Samba] Transfer rates faster than 23MBps?

2006-09-25 Thread Mark Smith

Guenter Kukkukk wrote:

Hi Doug,

have you ever tried netio to check for raw network speed?
http://www.ars.de/ars/ars.nsf/docs/netio
It does not add any overhead caused by file operations - so it
can help to tune raw parameters.
The source is included - so it can be tuned, too.
When sniffing such traffic, also have a look at how TCP/IP ACK
packets are used and whether they are sent immediately
or with some delay.


I've used iPerf and been able to show that it is most definitely not a 
networking card problem in my case.  iPerf shows network transfer rates 
of 120MBps.  :)


http://dast.nlanr.net/Projects/Iperf/
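
For reference, the kind of iPerf run behind those numbers looks roughly 
like this (hostname and window size here are placeholders):

# on the server
iperf -s
# on the client: 30 second TCP test with a large window
iperf -c servername -w 256K -t 30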

-Mark


Re: [Samba] Transfer rates faster than 23MBps?

2006-09-25 Thread Mark Smith

Mark Smith wrote:
Actually, setting SNDBUF and RCVBUF to 65536 from the default of 8192 is 
what got me _TO_ 22MBps...


...Ya know, I once tried increasing SNDBUF and RCVBUF to 256k but didn't 
see any difference.  I've also tried setting the kernel parameters to 
256k, but never both at the same time.  Let me try that and see if it 
helps.


Nope.   Just tried.  Same as always, about 22MBps.

-Mark


Re: [Samba] Transfer rates faster than 23MBps?

2006-09-23 Thread Doug VanLeuven

Mark Smith wrote:

I also tried your values, with the tcp_window_scaling, with no luck.
It's enabled by default, but I explicitly set the options that other 
options depend on.


I set up my test rig again.
Host server
2.6.12-1.1376_FC3, samba 3.0.23
Broadcom NetXtreme BCM5702X Gigabit, tg3 driver default config
Client
2.6.12-1.1381_FC3, samba 3.0.21pre3-SVN-build-11739
Intel Pro/1000, 82546GB Gigabit, e1000 driver default config
HD Drives on both are 45-50MBps

smbclient 26.7-27.2MBps
ftp 25.4 MBps (small window size)

Interestingly enough, downloading in the opposite direction, where the 
Intel card was doing the serving, was slightly faster, so hardware does 
make a difference.

smbclient 28.8MBps

client win2000 sp4, Intel Pro/1000
ftp 31.2-34.4MBps
explorer 26.2-27.0MBps (wall clock on 2Gig transfers)

FWIW - I'm used to seeing CIFS performance numbers 5-10% slower than ftp.

Using ethereal to capture the start of the transfers, I'm seeing windows 
ftp negotiate a 256960 window size, which is what I have specified in 
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpWindowSize, but 
linux samba establishes a window size of whatever is specified for 
SO_SNDBUF in socket options, or 8K by default.  So I set SO_SNDBUF=256960 
and it gave me the extra-large window and raised the speed up to 
27.3MBps (1MB = 1048576 bytes) - not enough to really address your concerns.  
Maybe it would be different on your system.  That's an issue for samba 
because it should allow for autonegotiation of the window size, and I 
don't know how to set that other than net.ipv4.tcp_window_scaling=1 (the 
default).  SO_SNDBUF & SO_RCVBUF are only limited by the /proc/sys 
values net.core.rmem_max and net.core.wmem_max, which you altered after 
the earlier post.
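
For anyone wanting to set that same Windows-side value, a .reg sketch 
matching the path above (256960 decimal is dword 0x3ebc0; a reboot is 
needed for it to take effect):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"TcpWindowSize"=dword:0003ebc0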


Comparing the linux ftp to linux samba transfer speeds, I don't think 
the answer lies in samba per se other than how the socket gets set up.  
And it's not a linux issue either if you're getting those http numbers 
(I never see anything like that here).  Your Redhat is obviously tuned 
for those types of packets.  Maybe you're using the in-kernel optimized 
apache they offer.  If so, try a user space apache for comparison.


I smacked up against these numbers 2 years ago.  Nothing much seems to 
have changed.  The numbers end up in the low to mid 200Mbps on copper 
Gigabit for user space applications.  If you ever fix it, pop me an 
email please.  I figured the answer would be PCI-X and 64-bit PCI, and 
higher front-side bus speeds.


Best of luck, Doug



Re: [Samba] Transfer rates faster than 23MBps?

2006-09-23 Thread Guenter Kukkukk
On Saturday 23 September 2006 17:13, Doug VanLeuven wrote:
 Mark Smith wrote:
  I also tried your values, with the tcp_window_scaling, with no luck.

 It's enabled by default, but I explicitly set the options that other
 options depend on.

 I set up my test rig again.
 Host server
 2.6.12-1.1376_FC3, samba 3.0.23
 Broadcom NetXtreme BCM5702X Gigabit, tg3 driver default config
 Client
 2.6.12-1.1381_FC3, samba 3.0.21pre3-SVN-build-11739
 Intel Pro/1000, 82546GB Gigabit, e1000 driver default config
 HD Drives on both are 45-50MBps
...snip

Hi Doug,

have you ever tried netio to check for raw network speed?
http://www.ars.de/ars/ars.nsf/docs/netio
It does not add any overhead caused by file operations - so it
can help to tune raw parameters.
The source is included - so it can be tuned, too.
When sniffing such traffic, also have a look at how TCP/IP ACK
packets are used and whether they are sent immediately
or with some delay.

Good luck - Guenter Kukkukk


Re: [Samba] Transfer rates faster than 23MBps?

2006-09-22 Thread Doug VanLeuven

OK, I'll top post.
I can't let this stand unanswered.
I ran a LOT of tests with gigabit copper and windows machines.  I never 
did better than 40 seconds per gig.  That was with the Intel cards 
configured for maximum cpu utilization.  80-90% cpu for 40 sec per gig.  
On windows.  Uploads went half as fast.  Asymmetric.  Of course I only 
had 32 bit PCI, 2.5Gig processor motherboards with 45MBps drives.


Which leads me to my point.  One can't rationally compare performance of 
gigabit ethernet without talking about hardware on the platforms.  I 
wouldn't think you'd have overlooked this, but one can bump up against 
the speed of the disk drive.  RAID has overhead.  Have you tried 
something like iostat?  Serial ATA?  I seem to recall the folks at 
Enterasys indicating 300Mbps as a practical upper limit on copper gig.  
Are you using fiber?  64 bit PCI?  Who made which model of the network 
card?  Is it a network card that's well supported in Linux?  Can you 
change the interrupt utilization of the card?  What's the CPU 
utilization on the Redhat machine during transfers?


I don't have specific answers for your questions, but one can't just say 
this software product is slower on gigabit than the other one without 
talking hardware at the same time.


I have lots of memory.  I use these configurations in sysctl.conf to up 
the performance of send/receive windows on my systems.  There are 
articles out there.  I don't have historical references handy.

YMMV.
net.core.wmem_max = 1048576
net.core.rmem_max = 1048576
net.ipv4.tcp_wmem = 4096 65536 1048575
net.ipv4.tcp_rmem = 4096 524288 1048575
net.ipv4.tcp_window_scaling = 1
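
These take effect without a reboot; once they're in /etc/sysctl.conf, 
something like:

# reload all values from /etc/sysctl.conf
sysctl -p
# or set a single value on the fly
sysctl -w net.core.rmem_max=1048576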

Regards, Doug

I wanted to follow up to my email to provide at least a partial answer 
to my problem.


The stock RedHat AS4-U3 Samba config has SO_SNDBUF and SO_RCVBUF set 
to 8k.  With this value, I can transfer a 1GB file in about 70-75 
seconds, about 14MBps.  If I increase those buffers to their max value 
of 64k, that same 1GB file transfers in 45-50 seconds, about 23MBps.


That is the _ONLY_ configuration value I've found that made any 
difference in my setup.  All the other tweaks I'd done, when removed, 
seemed to make no difference at all.  I was playing with oplocks, 
buffers, max xmit sizes, you name it.  But the socket option buffers 
were the only thing that made a difference.


I'm still looking for more speed.  I'll report if I find anything else 
that helps.


In response to Jeremy's suggestion of using smbclient, I ran a test 
from a Linux client using smbclient and it reported a transfer rate of 
21MBps, about the same as a normal smbfs mount.  I haven't tried 
porting smbclient to Windows yet, and probably won't until we get more 
info on what the server is doing.


Thanks everyone.

-Mark

Mark Smith wrote:
We use SMB to transfer large files (between 1GB and 5GB) from RedHat 
AS4 Content Storage servers to Windows clients with 6 DVD burners and 
robotic arms and other cool gadgets.  The servers used to be Windows 
based, but we're migrating to RedHat for a host of reasons.


Unfortunately, the RedHat Samba servers are about 2.5 times slower 
than the Windows servers.  Windows will copy a 1GB file in about 30 
seconds, whereas it takes about 70 to 75 seconds to copy the same 
file from a RedHat Samba server.


I've asked Dr. Google and gotten all kinds of suggestions, most of 
which have already been applied by RedHat to the stock Samba config.  
I've opened a ticket with RedHat.  They pointed out a couple errors 
in my config, but fixing those didn't have any effect.  Some 
tweaking, however, has gotten the transfer speed to about 50 seconds 
for that 1GB file.


But I seem to have hit a brick wall; my fastest time ever was 44 
seconds, but typically it's around 50.


I know it's not a problem with network or disk; if I use Apache and 
HTTP to transfer the same file from the same server, it transfers in 
about 15 to 20 seconds.  Unfortunately, HTTP doesn't meet our other 
requirements for random access to the file.


Do you folks use Samba for large file transfers at all?  Have you had 
any luck speeding it up past about 23MBps (the 44 second transfer 
speed)?  Any help you may have would be fantastic.  Thanks.


-Mark




Re: [Samba] Transfer rates faster than 23MBps?

2006-09-22 Thread Mark Smith

Doug VanLeuven wrote:

OK, I'll top post.
I can't let this stand unanswered.
I ran a LOT of tests with gigabit copper and windows machines.  I never 
did better than 40 seconds per gig.  That was with the Intel cards 
configured for maximum cpu utilization.  80-90% cpu for 40 sec per gig.  
On windows.  Uploads went half as fast.  Asymmetric.  Of course I only 
had 32 bit PCI, 2.5Gig processor motherboards with 45MBps drives.


Which leads me to my point.  One can't rationally compare performance of 
gigabit ethernet without talking about hardware on the platforms.  I 
wouldn't think you'd have overlooked this, but one can bump up against 
the speed of the disk drive.  RAID has overhead.  Have you tried 
something like iostat?  Serial ATA?  I seem to recall the folks at 
Enterasys indicating 300Mbps as a practical upper limit on copper gig.  
Are you using fiber?  64 bit PCI?  Who made which model of the network 
card?  Is it a network card that's well supported in Linux?  Can you 
change the interrupt utilization of the card?  What's the CPU 
utilization on the Redhat machine during transfers?


 I don't have specific answers for your questions, but one can't just say
 this software product is slower on gigabit than the other one without
 talking hardware at the same time.

You have a very good point:  I never indicated what my hardware 
situation was.


Server: Rackable UltraDense.  It's an Opteron 250, 2GB RAM, a 3Ware RAID 
controller and 12x 500GB SATA disks (about 460GB formatted) in 2x 6 disk 
RAID5 arrays (a little space wasted due to a 2TB limit somewhere.) 
Ethernet is a Broadcom BCM85702A20 gigabit (two of them, actually, but 
we're only using one.)


I've used a number of different clients, ranging from a Dell 850 copying 
to /dev/null, to a Dell OptiPlex GX620 copying to a local SATA drive, to 
another Rackable UltraDense.  Both Linux and WinXP.  (Not so 
surprisingly, the Linux client is slower than the WinXP client. 
Although, using smbclient (as Jeremy suggested) was just as fast as the 
WinXP client, our famous 45 second 1GB transfer.)


Reasons I didn't list hardware in my first email:
- iPerf shows that I can saturate the Ethernet interfaces, TCP/IP stack, 
and switching fabric to 120MBps, 960Mbps.
- Copying the same file to/from the same machines using HTTP (Apache2) 
transfers at about 60MBps, 480Mbps.  This uses the same disk and network 
subsystems.
- Copying a 1GB file from a RAM disk on the server to /dev/null on the 
client (eliminating disk performance from the equation entirely) does 
_NOT_ speed things up at all, still stuck at about 45 seconds, about 
23MBps, 182Mbps.
- Copying locally from the disk to /dev/null (using dd, no network at 
all) takes about 17 seconds for a 1GB file, which matches up nicely with 
the 60MBps, 480Mbps seen with HTTP.
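
For reference, those last two tests look roughly like this (paths and 
sizes here are examples):

# raw disk read speed, no network involved
dd if=/data/bigfile.bin of=/dev/null bs=1M
# RAM disk for taking the disk out of the serving side
mount -t tmpfs -o size=1200m tmpfs /mnt/ramdisk
cp /data/bigfile.bin /mnt/ramdisk/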


Given these tests, I would expect to see transfer rates of up to 60MBps 
in the best case.  Admittedly, that is a _BEST_ case.  I know I can't 
avoid that bottleneck, and honestly, that would be totally sufficient 
for our use.


The question is, what bottleneck am I hitting now?  The only thing that 
changes between the HTTP and SMB tests is the transport mechanism (and 
its interactions with other systems, e.g. the kernel), so naturally I 
suspect it.  For the time being, at least, I need to use the SMB 
protocol.  So I'm trying to figure out what I can tweak, if anything, to 
make this go faster.


As a data point, I'm going to try a newer version of Samba.  (RHEL4 uses 
3.0.10-RedHat-Heavily-Modified-Of-Course)  If that makes a difference, 
then I have to decide whether it's worth it to me to keep RedHat support 
or not.  (And when I say "I", I really mean my management.)


I have lots of memory.  I use these configurations in sysctl.conf to up 
the performance of send/receive windows on my systems.  There are 
articles out there.  I don't have historical references handy.

YMMV.
net.core.wmem_max = 1048576
net.core.rmem_max = 1048576
net.ipv4.tcp_wmem = 4096 65536 1048575
net.ipv4.tcp_rmem = 4096 524288 1048575
net.ipv4.tcp_window_scaling = 1


I have not tried tweaking the TCP stack in the OS.  I'll give that a shot.

Thank you very much, Doug.

-Mark


Re: [Samba] Transfer rates faster than 23MBps?

2006-09-22 Thread Mark Smith

Mark Smith wrote:
As a data point, I'm going to try a newer version of Samba.  (RHEL4 uses 
3.0.10-RedHat-Heavily-Modified-Of-Course)  If that makes a difference, 
then I have to decide whether it's worth it to me to keep RedHat support 
or not.  (And when I say "I", I really mean my management.)


I've just tried this.  Samba v3.0.23c, built locally from the Fedora 
Source RPM as distributed at samba.org, makes no noticeable difference: 
 still about 45 seconds.


I have lots of memory.  I use these configurations in sysctl.conf to 
up the performance of send/receive windows on my systems.  There are 
articles out there.  I don't have historical references handy.

YMMV.
net.core.wmem_max = 1048576
net.core.rmem_max = 1048576
net.ipv4.tcp_wmem = 4096 65536 1048575
net.ipv4.tcp_rmem = 4096 524288 1048575
net.ipv4.tcp_window_scaling = 1


I have not tried tweaking the TCP stack in the OS.  I'll give that a shot.


The person at RedHat who's handling my ticket just suggested these very 
changes, without the last one.  They did not help.  The values he gave 
were a little different:


- snip! -
# increase TCP maximum buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# increase Linux autotuning TCP buffer limits
# min, default, and maximum number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
- snip! -

I also tried your values, with the tcp_window_scaling, with no luck.

-Mark


Re: [Samba] Transfer rates faster than 23MBps?

2006-09-22 Thread Mark Smith
I have not.  Unfortunately, that is not a trivial process.  I believe 
everything supports it, but it's a somewhat major change to my 
production systems.  It might be worth trying as a data point, however.


Given the iPerf tests, I really don't think it's a network bottleneck at 
this point.


-Mark

Pitti, Raul wrote:

I am not an expert, but:
do you have jumbo frames enabled on your NIC and switch?
Try using ethtool...

RP

Mark Smith wrote:

Mark Smith wrote:
As a data point, I'm going to try a newer version of Samba.  (RHEL4 
uses 3.0.10-RedHat-Heavily-Modified-Of-Course)  If that makes a 
difference, then I have to decide whether it's worth it to me to keep 
RedHat support or not.  (And when I say "I", I really mean my 
management.)


I've just tried this.  Samba v3.0.23c, built locally from the Fedora 
Source RPM as distributed at samba.org, makes no noticeable 
difference:  still about 45 seconds.


I have lots of memory.  I use these configurations in sysctl.conf to 
up the performance of send/receive windows on my systems.  There are 
articles out there.  I don't have historical references handy.

YMMV.
net.core.wmem_max = 1048576
net.core.rmem_max = 1048576
net.ipv4.tcp_wmem = 4096 65536 1048575
net.ipv4.tcp_rmem = 4096 524288 1048575
net.ipv4.tcp_window_scaling = 1


I have not tried tweaking the TCP stack in the OS.  I'll give that a 
shot.


The person at RedHat who's handling my ticket just suggested these 
very changes, without the last one.  They did not help.  The values he 
gave were a little different:


- snip! -
# increase TCP maximum buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# increase Linux autotuning TCP buffer limits
# min, default, and maximum number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
- snip! -

I also tried your values, with the tcp_window_scaling, with no luck.

-Mark





Re: [Samba] Transfer rates faster than 23MBps?

2006-09-22 Thread Pitti, Raul

I am not an expert, but:
do you have jumbo frames enabled on your NIC and switch?
Try using ethtool...
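
For what it's worth, the MTU itself is set with ip or ifconfig rather 
than ethtool (which handles things like offload settings); a sketch, 
assuming the NIC, driver, and switch all support 9000-byte frames:

# enable jumbo frames on this interface (name is an example)
ip link set eth0 mtu 9000
# show which offloads the NIC currently has enabled
ethtool -k eth0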

RP

Mark Smith wrote:

Mark Smith wrote:
As a data point, I'm going to try a newer version of Samba.  (RHEL4 
uses 3.0.10-RedHat-Heavily-Modified-Of-Course)  If that makes a 
difference, then I have to decide whether it's worth it to me to keep 
RedHat support or not.  (And when I say "I", I really mean my 
management.)


I've just tried this.  Samba v3.0.23c, built locally from the Fedora 
Source RPM as distributed at samba.org, makes no noticeable difference: 
 still about 45 seconds.


I have lots of memory.  I use these configurations in sysctl.conf to 
up the performance of send/receive windows on my systems.  There are 
articles out there.  I don't have historical references handy.

YMMV.
net.core.wmem_max = 1048576
net.core.rmem_max = 1048576
net.ipv4.tcp_wmem = 4096 65536 1048575
net.ipv4.tcp_rmem = 4096 524288 1048575
net.ipv4.tcp_window_scaling = 1


I have not tried tweaking the TCP stack in the OS.  I'll give that a 
shot.


The person at RedHat who's handling my ticket just suggested these very 
changes, without the last one.  They did not help.  The values he gave 
were a little different:


- snip! -
# increase TCP maximum buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# increase Linux autotuning TCP buffer limits
# min, default, and maximum number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
- snip! -

I also tried your values, with the tcp_window_scaling, with no luck.

-Mark


--

Raúl Pittí Palma, Eng.

Global Engineering and Technology S.A.
mobile (507)-6616-0194
office (507)-390-4338
Republic of Panama
www.globaltecsa.com


Re: [Samba] Transfer rates faster than 23MBps?

2006-09-21 Thread Mark Smith
I wanted to follow up to my email to provide at least a partial answer 
to my problem.


The stock RedHat AS4-U3 Samba config has SO_SNDBUF and SO_RCVBUF set to 
8k.  With this value, I can transfer a 1GB file in about 70-75 seconds, 
about 14MBps.  If I increase those buffers to their max value of 64k, 
that same 1GB file transfers in 45-50 seconds, about 23MBps.
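
In smb.conf terms, that change is just the socket options line; a 
minimal sketch (TCP_NODELAY is commonly set alongside, though the 
buffers are the part that mattered here):

[global]
   socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536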


That is the _ONLY_ configuration value I've found that made any 
difference in my setup.  All the other tweaks I'd done, when removed, 
seemed to make no difference at all.  I was playing with oplocks, 
buffers, max xmit sizes, you name it.  But the socket option buffers 
were the only thing that made a difference.


I'm still looking for more speed.  I'll report if I find anything else 
that helps.


In response to Jeremy's suggestion of using smbclient, I ran a test from 
a Linux client using smbclient and it reported a transfer rate of 
21MBps, about the same as a normal smbfs mount.  I haven't tried porting 
smbclient to Windows yet, and probably won't until we get more info on 
what the server is doing.


Thanks everyone.

-Mark

Mark Smith wrote:
We use SMB to transfer large files (between 1GB and 5GB) from RedHat AS4 
Content Storage servers to Windows clients with 6 DVD burners and 
robotic arms and other cool gadgets.  The servers used to be Windows 
based, but we're migrating to RedHat for a host of reasons.


Unfortunately, the RedHat Samba servers are about 2.5 times slower than 
the Windows servers.  Windows will copy a 1GB file in about 30 seconds, 
where as it takes about 70 to 75 seconds to copy the same file from a 
RedHat Samba server.


I've asked Dr. Google and gotten all kinds of suggestions, most of which 
have already been applied by RedHat to the stock Samba config.  I've 
opened a ticket with RedHat.  They pointed out a couple errors in my 
config, but fixing those didn't have any effect.  Some tweaking, 
however, has gotten the transfer speed to about 50 seconds for that 1GB 
file.


But I seem to have hit a brick wall; my fastest time ever was 44 
seconds, but typically it's around 50.


I know it's not a problem with network or disk; if I use Apache and HTTP 
to transfer the same file from the same server, it transfers in about 15 
to 20 seconds.  Unfortunately, HTTP doesn't meet our other requirements 
for random access to the file.


Do you folks use Samba for large file transfers at all?  Have you had 
any luck speeding it up past about 23MBps (the 44 second transfer 
speed)?  Any help you may have would be fantastic.  Thanks.


-Mark



Re: [Samba] Transfer rates faster than 23MBps?

2006-09-19 Thread Jeremy Allison
On Tue, Sep 19, 2006 at 06:19:43PM -0700, Mark Smith wrote:
 We use SMB to transfer large files (between 1GB and 5GB) from RedHat AS4 
 Content Storage servers to Windows clients with 6 DVD burners and 
 robotic arms and other cool gadgets.  The servers used to be Windows 
 based, but we're migrating to RedHat for a host of reasons.
 
 Unfortunately, the RedHat Samba servers are about 2.5 times slower than 
 the Windows servers.  Windows will copy a 1GB file in about 30 seconds, 
 whereas it takes about 70 to 75 seconds to copy the same file from a 
 RedHat Samba server.
 
 I've asked Dr. Google and gotten all kinds of suggestions, most of which 
 have already been applied by RedHat to the stock Samba config.  I've 
 opened a ticket with RedHat.  They pointed out a couple errors in my 
 config, but fixing those didn't have any effect.  Some tweaking, 
 however, has gotten the transfer speed to about 50 seconds for that 1GB 
 file.
 
 But I seem to have hit a brick wall; my fastest time ever was 44 
 seconds, but typically it's around 50.
 
 I know it's not a problem with network or disk; if I use Apache and HTTP 
 to transfer the same file from the same server, it transfers in about 15 
 to 20 seconds.  Unfortunately, HTTP doesn't meet our other requirements 
 for random access to the file.
 
 Do you folks use Samba for large file transfers at all?  Have you had 
 any luck speeding it up past about 23MBps (the 44 second transfer 
 speed)?  Any help you may have would be fantastic.  Thanks.

An interesting thing you could do is to use a port of smbclient
on Windows (no I don't know where to get one :-) to copy the
file to the Windows client in userspace. smbclient will use
read pipelining (ie. issue more than one read at a time) whereas
Windows clients issue one read, wait for response, issue the next
read, wait for response etc.

That would tell you if it's a client redirector issue. You could
probably use cygwin to compile smbclient.

Jeremy.


Re: [Samba] Transfer rates faster than 23MBps?

2006-09-19 Thread Mark Smith

Jeremy Allison wrote:

An interesting thing you could do is to use a port of smbclient
on Windows (no I don't know where to get one :-) to copy the
file to the Windows client in userspace. smbclient will use
read pipelining (ie. issue more than one read at a time) whereas
Windows clients issue one read, wait for response, issue the next
read, wait for response etc.


I will try using smbclient from a Linux client and see how that compares 
to using the Linux kernel's SMB implementation.  That's easy to do.  :)
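
That test is a one-liner; a sketch with share name and credentials as 
placeholders (smbclient prints the average transfer rate after the copy):

# pull the file and discard it, noting the reported kb/s
smbclient //servername/share -U user%password -c 'get bigfile.bin /dev/null'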


I'll see what I can do to get smbclient compiled in Windows.  Oy.  This 
should be interesting...


-Mark