> From: Adam Jacob Muller <[EMAIL PROTECTED]>
>
> You need more than to just ifconfig you also need "options
> DEVICE_POLLING" in your kernel.
>
> The evidence suggests that this is not the default:
>
> -=[/usr/src/sys/amd64/conf]=- -=[Wed May 30]=- -=[22:44:15]=-
> [EMAIL PROTECTED] grep POLLING GENERIC
> -=[/usr/src/sys/amd64/conf]=- -=[Wed May 30]=- -=[22:44:17]=-
> [EMAIL PROTECTED] grep POLLING LINT
> options         DEVICE_POLLING
>
> Polling works very well.
>
> -Adam
>
> On May 30, 2007, at 6:22 AM, Florian Klemenz wrote:
>
>> Wednesday, May 30, 2007, 17:14, you wrote:
>>
>>> If I remember correctly, freebsd uses polling for the sis driver
>>> instead of interrupts. Any idea how this would perform compared to
>>> the
>>> new openbsd stuff?
>>
>> FreeBSD doesn't enable polling by default. You can turn it on for  
>> many
>> nics including sis if your kernel supports polling (which isn't
>> default in GENERIC, correct me if I'm wrong):
>>
>> ifconfig sis0 polling
>>
>> Performance depends on your network usage. I've not seen any
>> improvement with polling enabled on a net4801, as it's generally too
>> weak for high throughput.
>>
>> See also:
>> http://www.freebsd.org/cgi/man.cgi?
>> query=polling&apropos=0&sektion=0&format=html

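For anyone following along, here's the whole procedure Adam and Florian describe, collected in one place. This is only a sketch: the custom kernel config name MYKERNEL and the interface sis0 are examples, and the paths assume amd64 as in Adam's output above.

```shell
# Sketch: enabling polling on FreeBSD 6 (kernel name and NIC are examples).
# 1. Add the option to a custom kernel config and rebuild:
echo 'options DEVICE_POLLING' >> /usr/src/sys/amd64/conf/MYKERNEL
cd /usr/src && make buildkernel KERNCONF=MYKERNEL && make installkernel KERNCONF=MYKERNEL
# 2. After rebooting into the new kernel, enable polling per interface:
ifconfig sis0 polling
# 3. Verify -- the interface flags/options should now show POLLING:
ifconfig sis0 | grep -i polling
```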
I found absolutely no advantage to using polling on my net4801, and
in most cases polling offers little benefit. However, while doing some
tuning on a large DNS system (it processes about 15,000 recursive
queries per second) I found polling beneficial on some hosts but not
others. Here's an excerpt I wrote summarizing my findings:

> I've used polling in FreeBSD 6 and my experience has been just the  
> opposite. Enabling polling [on a net4801] created more system load  
> than without it. But that wasn't on a system processing 6,000 UDP  
> packets per second. So I took the load off the Xeon, recompiled the  
> kernel with DEVICE_POLLING, and rebooted. I swung the load back  
> over to the Xeon and let the cache repopulate for an hour. Then I  
> flipped the switch (ifconfig bge0 polling) and the amount of CPU  
> activity dropped by two thirds. Same workload, same machine, but  
> only 1/3 the CPU activity.
>
> So I installed a new kernel on the Opteron, switched the traffic  
> over, and verified my findings. Confident I had finally struck pay  
> dirt, I installed modified kernels on cache01-4. The results speak  
> for themselves:
> [image]
>
> But what's going on with cache02? Its CPU activity didn't drop at  
> all. One fundamental difference between c2 and the other three is  
> that it sports an AMD processor, but I don't think that explains  
> it. To understand why it made no difference, we need two data  
> points. The first is the number of packets per second (same as DNS  
> qps) that we are processing. From the measurements done previously,  
> we can see that value averaged 600, making cache02 the most lightly  
> loaded server.
> [image]
>
> The other data point we need is the system clock rate. When  
> polling is enabled, the NIC is prevented from generating interrupts  
> to the CPU; in return, the CPU polls the device 'clock rate' times  
> per second. Our default clock rate is 1,000 (sysctl kern.clockrate).  
> The net effect is that if we're processing fewer than 1,000 packets  
> per second, polling is actually more expensive than letting the NIC  
> generate interrupts (the default). Only after that threshold is  
> crossed does polling deliver an advantage.
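The break-even argument in that last paragraph reduces to simple arithmetic, which this small sketch spells out (the hz value of 1,000 and the per-packet-interrupt assumption come from the excerpt; real NICs may coalesce interrupts, which shifts the threshold):

```python
# Back-of-the-envelope model of the polling break-even point described above.
# With interrupts, assume the NIC interrupts the CPU once per packet.
# With polling, the CPU visits the NIC hz times per second regardless
# of load (hz = 1000 here, per sysctl kern.clockrate).

HZ = 1000  # system clock rate: polls per second when polling is enabled

def events_per_second(pps: int, polling: bool) -> int:
    """Device-servicing events the CPU handles each second."""
    return HZ if polling else pps

# 600 pps is roughly cache02's load; 6,000 pps is the Xeon's.
for pps in (600, 1000, 6000):
    interrupts = events_per_second(pps, polling=False)
    polls = events_per_second(pps, polling=True)
    winner = "polling" if polls < interrupts else "interrupts"
    print(f"{pps:>5} pps: {interrupts} interrupts vs {polls} polls -> {winner} cheaper")
```

At 600 packets per second the CPU does more work polling than it would taking interrupts, which is consistent with cache02 seeing no improvement.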

Hope that helps!

Matt
_______________________________________________
Soekris-tech mailing list
[email protected]
http://lists.soekris.com/mailman/listinfo/soekris-tech
