> > Operation:
> > dd if=/dev/zero of=test bs=16384 count=512
> >
> > Results:
> > (CIFS)
> > 8388608 bytes (8.4 MB) copied, 1.7191 s, 4.9 MB/s
> > (NFS)
> > 8388608 bytes (8.4 MB) copied, 0.852603 s, 9.8 MB/s
>
> what's the problem? what is the expected result? what caught your
> attention?
> CIFS and NFS are very different protocols, so it's perfectly
> reasonable for them to differ performance-wise. Also, dd is hardly a
> benchmarking tool.
I gotta disagree, Ignacio. In the end, there may be a fundamental difference
in SMB that makes it slower, but until such a fundamental characteristic is
identified, I fully agree with Yannis that SMB should be approx. 2x faster
than measured (approx. matching the NFS result). It's not acceptable to
simply say "so what, it sucks, that's interesting, oh well." It warrants
deeper investigation.
Also, dd is a perfectly good benchmarking tool; I use it for this sort of
operation all the time. It's simple and effective in a lot of situations.
Yannis, I would make the following suggestions:
Your benchmark is only 1-2 seconds long. You may see skewed results due to
system caching. Since you're going across a 100Mb line, you will not max
out your disks; the network is definitely the bottleneck. I would suggest
something like this:
time dd if=/dev/zero of=..... bs=1G count=1
This should take about 2 minutes (maybe 4 minutes on the SMB, if SMB is
performing poorly)
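As a sanity check on the bottleneck claim: a 100 Mb/s link tops out at
roughly 12.5 MB/s, so the NFS run (9.8 MB/s) is already near line rate,
while the CIFS run is only about half of it. To take client-side caching
out of the picture, a longer run with an explicit flush can be used. This
is just a sketch: the path is a placeholder, and conv=fsync is a GNU dd
option, so check your dd's man page first:

```shell
# Write 1 GiB in 1 MiB blocks; conv=fsync makes dd flush the file
# before reporting, so cached-but-unsent data does not inflate the
# measured throughput. /mnt/test/bigfile is a placeholder path.
time dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=1024 conv=fsync
```

Using bs=1M count=1024 rather than bs=1G count=1 also avoids dd trying to
allocate a single 1 GB buffer in memory.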
Also, in your NFS setup, check for "sync" and "async" options. These too
may be skewing your results.
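For reference, if the NFS server happens to be Linux, sync/async is set
per export in /etc/exports; the paths and client name below are
hypothetical, and this is only a sketch of where to look:

```shell
# In /etc/exports, "async" lets the server acknowledge writes before
# they hit disk (faster, less safe); "sync" waits for stable storage:
#   /export  client(rw,async)
#   /export  client(rw,sync)
# After editing, re-export the filesystems:
exportfs -ra
# On the client, the negotiated mount options can be inspected with:
nfsstat -m
```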
I know that one of my colleagues has had great success tuning NFS
performance by switching the server between UDP and TCP and by adjusting
block sizes. Perhaps the same can be done here?
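Those knobs map directly to NFS client mount options. A hypothetical
example (server name, export path, mount point, and sizes are all
illustrative, not taken from the setup above):

```shell
# proto=tcp selects TCP transport (more robust than UDP on lossy or
# congested links); rsize/wsize set the read/write transfer sizes
# requested from the server. 32K here is just an example value.
mount -o proto=tcp,rsize=32768,wsize=32768 server:/export /mnt/nfs
```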
_______________________________________________
opensolaris-discuss mailing list
[email protected]