Whyzzi wrote:
Answers to your questions are inline with the questions

2008/11/18 tico <[EMAIL PROTECTED]>:
1. Why is this sent to the "ports@" mailing list?

I've lurked on the list for a long time - since around 2.8. I've asked
the odd question here and there, and got flamed once when I asked a
question on misc@ about a port and was subsequently "corrected". Since
the main topic is about using Samba, it belongs in ports. See the
"ports" general interest list comment at
http://www.openbsd.org/mail.html
Perhaps I should have phrased my comment better, or possibly I'm wrong altogether, but I don't see where you showed that the problem was with *Samba* throughput (which would obviously be directed to ports@ or even [EMAIL PROTECTED], as you state) rather than with hardware/OS/configuration.

What network throughput can your system handle?
What disk I/O throughput can your system handle?

What network and disk throughput can your clients handle?

2. What performance comparison is showing you that the write throughput that
you *are* achieving is bad?

See the beginning of the email: before I *tweaked* anything (which I
didn't want to do), a 170 MB copy was really slow. So I tossed in the
aforementioned crap based on an internet search that didn't really have
a satisfactory discussion of the subject - just "things to try".
What consistently slow speed did you get, on average, running a specific repeatable test, and what better speed did you get running the exact same test after changing each setting one by one? (One such test is sketched below.)
I have much nicer systems on nicer 100Mbit managed switches that individual
Windows clients are not able to drive higher than 9 MB/sec ~= 72 Mbit/sec.
If you're ranging from 4-12 MB/sec, those are good real-world numbers.
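
For a concrete sketch (the server name, share name, user, and file paths below are placeholders, not anything from your actual setup): pushing a fixed-size file with smbclient gives a repeatable number, since smbclient prints an average transfer rate when the put finishes.

$ dd if=/dev/zero of=/tmp/testfile bs=1m count=170    # a 170 MB test file, same size as your slow copy
$ smbclient //yourserver/yourshare -U youruser -c 'put /tmp/testfile testfile'

Run that once per configuration change, and write down the numbers each time.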

One of these days I should learn how to apply the math I learned. I'm
still not good at math problems like this one, so I still have no idea
what I should be expecting - and I guess that was the real problem. The
last time I had my little home network file storage served by OpenBSD
(on nearly the same hardware, by the way), it just *felt* faster. My
previous install still does feel faster. This post is a result of that.
Find a way to test and measure those things that "feel" faster.

Technical people can sometimes help with technical issues when given the appropriate level of detail, but only therapists and priests will help you with your feelings.

3. If you want to track down your problem, why haven't you tested the
network throughput capability of your system/network separately from
the disk I/O capability? /dev/null and /dev/zero exist. Use them.

Didn't think of it. But then, I didn't suspect it to begin with. See
the rest in the answer to #4. Certainly something to try.

There are gazillions of messages out there describing ways to test the throughput of a network, of a disk subsystem, of various daemons serving data, et cetera.

Only you have access to your hardware, so only you will be able to isolate where a problem, if it exists, is located. And only you will be able to retrieve a sufficient amount of technical detail about that specific problem to make it remotely possible that someone else can or will help you fix it.
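
To give one generic example of separating the network from the disks (nc(1) is assumed to be available on both ends; the port and the server address are placeholders):

$ nc -l 12345 > /dev/null                                    # on the receiving box
$ dd if=/dev/zero bs=1m count=500 | nc <server-ip> 12345     # on the sending box

dd prints a bytes/sec figure when it finishes, and since the data comes from /dev/zero and lands in /dev/null, no disk is touched on either side. Compare that number against what dd says about your disks locally, and against what the full Samba path gives you. Ctrl-C the leftover nc processes once dd has reported.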
4. Why are you trying to tweak the TCP stack of your system, if you don't
know what each piece does, whether or not it's a bottleneck, and what the
trade-offs are from adjusting it? There is no "kern.system.go.faster=true"
option for a reason.

I know. Hence this conversation. I've got no peer among the
technologists I know to learn from at the moment.
Read mailing list archives and documentation. I don't have tons of friends that are more technical than me to learn from either, but I read and practice and test.

I have systems that serve 100's of megs of data where multiple NICs are
sharing the same IRQ. Don't drink the kool-aid that you find on random
forums. Only attempt to optimize what you can establish is *actually* a
problem.
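
If you do end up experimenting anyway, at least record the defaults first so every change can be compared and reverted one at a time. Assuming the usual OpenBSD knobs (adjust the names to whatever you actually intend to touch):

# sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace    # current TCP buffer sizes
# vmstat -i                                               # per-device interrupt counts, to see whether a shared IRQ is even busy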

Do you want to see one of my client's fileserver systems that serves data
faster than the Windows clients can eat it?

You bet, and thanks for including it!

You'll notice, hopefully, that there are no "tweaks" added to any relevant conf file, because the defaults just plain work, and I've not been able to show a bottleneck in real-world performance that is solvable through hackery ... and most importantly, the customer is happy and I don't have to write even more documentation for the next sysadmin that walks through the door.
---------------
from a stock smb.conf:
[global]
      workgroup = BHD
      server string = Samba Server
      passdb backend = tdbsam
      log file = /var/log/smbd.%m
      max log size = 50
      os level = 33
      preferred master = Yes
      domain master = Yes
      dns proxy = No
      wins support = Yes
      hosts allow = 192.168.1., 192.168.0., 127.
------
# uname -m -r -v
4.4 GENERIC#1915 amd64
# grep -v ^# /etc/sysctl.conf
net.inet.ip.forwarding=1        # 1=Permit forwarding (routing) of IPv4 packets
#

em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
      lladdr 00:30:48:9a:17:34
      description: internal net (VLAN100 untagged)
      media: Ethernet autoselect (100baseTX full-duplex)
      status: active
      inet6 fe80::230:48ff:fe9a:1734%em0 prefixlen 64 scopeid 0x1
      inet 192.168.0.252 netmask 0xffffff00 broadcast 192.168.0.255

I'm running a single RAID1 volume (should be slower than your single disk)

Actually, I disagree. You've got hardware RAID (a host processor to
handle and optimize the duplication of data without bothering the rest
of the computer) with a large cache. I've got an add-in 32-bit PCI card
and 32 MB of cache on the drive.

Well, that at least is arguable. While this is not a comprehensive test like Bonnie or IOBench, see below:
# time dd if=/dev/zero of=/home/test bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 11.085 secs (96860568 bytes/sec)
   0m13.14s real     0m0.00s user     0m2.47s system
#
# time dd if=/dev/zero of=/home/test bs=1024k count=5000
5000+0 records in
5000+0 records out
5242880000 bytes transferred in 63.536 secs (82517837 bytes/sec)
   1m3.58s real     0m0.00s user     0m11.32s system
# time dd if=/dev/zero of=/home/test2 bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 5.960 secs (87959934 bytes/sec)
   0m5.97s real     0m0.00s user     0m1.03s system
# time dd if=/dev/zero of=/home/test3 bs=1024k count=50
50+0 records in
50+0 records out
52428800 bytes transferred in 0.153 secs (341718211 bytes/sec)
   0m0.17s real     0m0.00s user     0m0.09s system
#
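
(A read-back pass over the same file, something like

# time dd if=/home/test of=/dev/null bs=1024k

would show the other direction, but the write numbers above are what matter for copying onto the server.)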

But then again, my bottlenecks are on the client access side, not my server.
