On Tuesday,  7 August 2001 at  1:58:21 -0700, Terry Lambert wrote:
> Bosko Milekic wrote:
>>> I keep wondering about the sagacity of running interrupts in
>>> threads... it still seems like an incredibly bad idea to me.
>>>
>>> I guess my major problem with this is that by running in
>>> threads, it's made it nearly impossible to avoid receiver
>>> livelock situations, using any of the classical techniques
>>> (e.g. Mogul's work, etc.).
>>
>>         References to published works?
>
> Just do an NCSTRL search on "receiver livelock"; you will get
> over 90 papers...
>
>       http://ncstrl.mit.edu/
>
> See also the list of participating institutions:
>
>       http://ncstrl.mit.edu/Dienst/UI/2.0/ListPublishers
>
> It won't be that hard to find... Mogul has "only" published 92
> papers.  8-)

So much data, in fact, that you could hide anything behind it.  Would
you like to be more specific?
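
For anyone who doesn't feel like wading through 92 papers: the
classical technique Terry appears to be alluding to (Mogul and
Ramakrishnan's receive livelock work) boils down to masking the
receive interrupt once the system is saturated and draining the
card by polling, so that excess packets are dropped cheaply at the
card instead of after the kernel has already invested cycles in
them.  A rough sketch of the shape of it -- every name below is
invented for illustration, none of it comes from our tree or any
real driver:

    /*
     * Sketch only: the nic_* helpers are hypothetical stand-ins
     * for whatever a real driver would provide.
     */
    struct nic_softc;

    void nic_disable_rx_intr(struct nic_softc *);
    void nic_enable_rx_intr(struct nic_softc *);
    void nic_schedule_poll(struct nic_softc *);
    int  nic_rx_ready(struct nic_softc *);
    void nic_process_packet(struct nic_softc *);

    #define RX_BUDGET 32    /* packets handed to the stack per poll pass */

    /* Interrupt handler: take the hit once, then switch to polling. */
    void
    nic_rx_intr(void *arg)
    {
            struct nic_softc *sc = arg;

            nic_disable_rx_intr(sc);
            nic_schedule_poll(sc);
    }

    /* Poll routine, run later from a timer or low-priority context. */
    void
    nic_poll(struct nic_softc *sc)
    {
            int n;

            for (n = 0; n < RX_BUDGET && nic_rx_ready(sc); n++)
                    nic_process_packet(sc);

            if (n == RX_BUDGET)
                    nic_schedule_poll(sc);  /* still saturated, keep polling */
            else
                    nic_enable_rx_intr(sc); /* load has dropped, re-arm */
    }

The essential property is that, under overload, each excess packet
costs (nearly) nothing, because it never leaves the card.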

>>> It also has the unfortunate property of locking us into virtual
>>> wire mode, when in fact Microsoft demonstrated that wiring down
>>> interrupts to particular CPUs was good practice, in terms of
>>> assuring best performance.  Specifically, running in virtual
>>
>>         Can you point us at any concrete information that shows
>> this?  Specifically, without being Microsoft-biased (as is most
>> "data" published by Microsoft)? -- i.e. preferably third-party
>> performance testing that attributes wiring down of interrupts to
>> particular CPUs as _the_ performance advantage.
>
> FreeBSD was tested, along with Linux and NT, by Ziff Davis
> Labs, in Foster City, with the participation of Jordan
> Hubbard and Mike Smith.  You can ask either of them for the
> results of the test; only the Linux and NT numbers were
> actually released.  This was done to provide a non-biased
> baseline, in reaction to the Mindcraft benchmarks, where
> Linux showed so poorly.  They ran quad ethernet cards, with
> quad CPUs; the NT drivers wired the cards down to separate
> INT A/B/C/D interrupts, one per CPU.

You carefully neglect to point out that this was the old SMP
implementation.  I think this completely invalidates any point you may
have been trying to make.

>>> wire mode means that all your CPUs get hit with the interrupt,
>>> whereas running with the interrupt bound to a particular CPU
>>> reduces the overall overhead.  Even what we have today, with
>>
>>         Obviously.
>
> I mention it because this is the direction FreeBSD appears
> to be moving in.  Right now, Intel is shipping with separate
> PCI busses; there is one motherboard from their ServerWorks
> division that has 16 separate PCI busses -- which means that
> you can do simultaneous gigabit card DMA to and from memory,
> without running into bus contention, so long as the memory is
> logically separate.  NT can use this hardware to its full
> potential; FreeBSD as it exists cannot, and FreeBSD as it
> appears to be heading today (interrupt threads, etc.) seems
> to be in the same boat as Linux et al.  PCI-X will only
> make things worse (8.4 gigabit, burst rate).
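
(For reference, 64-bit PCI-X at 133 MHz works out to 64 bits x 133
MHz, or roughly 8.5 Gbit/s of burst bandwidth -- a little over a
gigabyte per second -- assuming that's the variant you mean.)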

What do interrupt threads have to do with this?
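
Nothing about interrupt threads stops you from wiring an interrupt
source to a particular CPU: the thread that runs the handler can be
given affinity to that CPU just as a directly dispatched handler
could be.  As a sketch -- the names here are invented for the sake
of the example, they're not interfaces we have today:

    /*
     * Hypothetical sketch: per-interrupt threads pinned to CPUs.
     * ithread_lookup() and ithread_set_affinity() are made-up
     * names, not existing kernel interfaces.
     */
    struct ithread;

    struct ithread *ithread_lookup(int irq);
    void            ithread_set_affinity(struct ithread *, int cpu);

    /*
     * Wire four NIC interrupts (Terry's INT A/B/C/D case) to four
     * CPUs, one apiece, so each card's receive processing always
     * runs on the same processor.
     */
    void
    wire_nic_interrupts(const int irqs[4])
    {
            int cpu;

            for (cpu = 0; cpu < 4; cpu++)
                    ithread_set_affinity(ithread_lookup(irqs[cpu]), cpu);
    }

Whether the interrupt itself is delivered to that CPU by the APIC,
or taken anywhere and the thread simply run there, is a separate
question from whether the handler runs in a thread at all.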

Terry, we've done a lot of thinking about performance implications
over the last 2 years, including addressing all of the points that you
appear to raise.  A lot of it is in the archives.

It's quite possible that we've missed something important that you
haven't.  But if that's the case, we'd like you to state it.  All I
see is you coming in, waving your hands and shouting generalities
which don't really help much.  The fact that people are still
listening is very much an indication of the hope that you might come
up with something useful.  But pointing to 92 papers and saying "it's
in there [somewhere]" isn't very helpful.

Greg
--
See complete headers for address and phone numbers
