--- Cy Schubert <[EMAIL PROTECTED]> wrote:
> In message
> <[EMAIL PROTECTED]>,
> Nicole Harrington writes:
> > --- Cy Schubert <[EMAIL PROTECTED]> wrote:
> > > In message
> > > <[EMAIL PROTECTED]>, Mike
> > > Meyer writes:
> > > > Generally, more processors means things will go
> > > > faster until you run out of threads. However, if
> > > > there's some resource that is the bottleneck for
> > > > your load, and the resource doesn't support
> > > > simultaneous access by all the cores, more cores
> > > > can slow things down.
> > > >
> > > > Of course, it's not really that simple. Some
> > > > shared resources can be managed so as to make
> > > > things improve under load, even if they don't
> > > > support simultaneous access.
> > >
> > > Generally speaking, the performance increase is
> > > linear. At some point there is no benefit to adding
> > > more processors. In a former life, when I was an
> > > MVS systems programmer, the limit was seven
> > > processors in a System/370. Today we can use 16,
> > > 32, even 64 processors with a standard operating
> > > system and current hardware, unless one of the
> > > massively parallel architectures is used.
> > >
> > > To answer the original poster's question, there are
> > > architectural differences mentioned here, e.g.
> > > shared cache, channel, etc., but the reason the
> > > chip manufacturers make them is that they're more
> > > cost effective than two CPUs.
> > >
> > > The AMD X2 series of chips (I have one) are truly
> > > dual processor chips. They're analogous in concept
> > > to the single processor System/370 with an AP
> > > (attached processor). What this means is that both
> > > processors can execute all instructions and are as
> > > capable in every way except external interrupts,
> > > e.g. I/O interrupts, which are handled by processor
> > > 0, as only that processor is "wired" to be
> > > interrupted in case of an external interrupt. I
> > > can't comment about other dual core CPUs as I don't
> > > know their architecture, but I'd suspect the same
> > > would be true. In chips in which there are two dual
> > > core CPUs on the same die, I believe one of each of
> > > the dual core CPUs can handle external interrupts.
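The "linear increase up to a point" behaviour Cy describes above is commonly captured by Amdahl's law: the serial fraction of a workload caps the speedup no matter how many processors are added. A minimal sketch (the 95% parallel fraction is an illustrative assumption, not a measurement of any real system):

```python
# Amdahl's law: speedup with n processors when a fraction p of the
# work can run in parallel. Illustrates the scaling plateau described
# above; p = 0.95 is an assumed workload, not a benchmark result.

def amdahl_speedup(p, n):
    """Speedup of a workload with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    # Near-linear at first, then diminishing returns: the serial 5%
    # caps the speedup at 1/0.05 = 20x even with unlimited processors.
    for n in (1, 2, 8, 64):
        print(n, "processors ->", round(amdahl_speedup(0.95, n), 2), "x")
```

With 95% parallel work, 8 processors give roughly a 5.9x speedup but 64 give only about 15.4x, which matches the observation that at some point more processors stop paying off.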
> > Wow, I love asking questions without too many
> > specifics, as I learn so much more. With this, it
> > really seems to be a love-hate relationship with
> > dual core.
> > Based on what you stated above, would that mean that
> > when using a dual core system, using polling might
> > be better, or perhaps monumentally worse?
> No. CPU 0 would be interrupted. It would schedule the
> interrupt in the queue. Either CPU could service the
> interrupt once it was queued. Some devices need to be
> polled as they do not generate interrupts or they
> generate spurious interrupts. Otherwise, allowing a
> device to interrupt the CPU is more efficient, as it
> allows the CPU to do other work rather than spinning
> its wheels polling. This is the Von Neumann
> Cy Schubert <[EMAIL PROTECTED]>
> FreeBSD UNIX: <[EMAIL PROTECTED]> Web:
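The interrupt flow Cy describes above — CPU 0 alone receives the external interrupt and only enqueues it, after which any CPU may service it — can be sketched as a toy model. This is an illustration of the concept only; the names (`cpu0_enqueue`, `service`) are made up and no real kernel API is being modelled:

```python
# Toy model: one function plays CPU 0, which alone is "wired" for
# external interrupts and merely queues them; two worker threads play
# the CPUs that may each service any queued interrupt.
import queue
import threading

irq_queue = queue.Queue()
serviced = []
serviced_lock = threading.Lock()

def cpu0_enqueue(n_interrupts):
    # CPU 0's only special role: schedule each interrupt in the queue.
    for i in range(n_interrupts):
        irq_queue.put(i)
    irq_queue.put(None)  # one shutdown sentinel per servicing CPU
    irq_queue.put(None)

def service(cpu_id):
    # Once an interrupt is queued, either CPU can run the handler.
    while True:
        item = irq_queue.get()
        if item is None:
            break
        with serviced_lock:
            serviced.append((cpu_id, item))

cpus = [threading.Thread(target=service, args=(i,)) for i in (0, 1)]
for t in cpus:
    t.start()
cpu0_enqueue(100)
for t in cpus:
    t.join()
```

Every interrupt is handled exactly once, and which CPU handles each one is arbitrary — the asymmetry exists only at delivery, not at servicing.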
Yes, I have heard that, thanks.
However, how does one know or tell which is the right
mode/model for which devices? I have seen people on
either side (poll vs. interrupt) claim one is better,
or, much like an infomercial, "just do blah and your
system will be so much faster." Although of course that
would be the pro-polling side, since by default
interrupts are used. Is it all just empirical testing?
Take this pill and let me know how you feel?
It seems as though when it's heavy networking, use
polling; otherwise stick with interrupts. I have even
heard "when using X network card, use polling." How
would one know when one card will do better with
polling while another may not?
Thanks for helping me understand the debate better.
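The trade-off behind the debate can be shown in miniature: a busy-polling waiter burns CPU cycles checking a flag, while an interrupt-style waiter sleeps until it is signalled. This is a hedged toy sketch — the "device" is a timer thread, and no real driver or NIC behaviour is modelled:

```python
# Toy contrast of the two notification styles: polling spins and
# wastes work while waiting; event-driven waiting sleeps until
# "interrupted". The device is simulated with a short timer.
import threading
import time

def polled_wait(flag, counter):
    # Polling: spin, checking the simulated device flag on each pass.
    # Every iteration before the flag flips is wasted CPU work.
    while not flag["ready"]:
        counter["spins"] += 1

def interrupt_wait(event):
    # Interrupt-style: block until signalled, doing no work meanwhile.
    event.wait()

flag = {"ready": False}
counter = {"spins": 0}
event = threading.Event()

def device():
    # Simulated device: completes its I/O after a short delay, then
    # "raises an interrupt" by setting the event and the flag.
    time.sleep(0.05)
    flag["ready"] = True
    event.set()

threading.Thread(target=device).start()
polled_wait(flag, counter)   # spins until the device flips the flag
interrupt_wait(event)        # would have slept, consuming no cycles
print("wasted polling iterations:", counter["spins"])
```

Which style wins in practice depends on how often the device has work ready: under heavy sustained traffic the poll rarely comes up empty (and avoids interrupt overhead per packet), while for sporadic traffic the interrupt-driven waiter wastes nothing — which is consistent with the "heavy networking, use polling" rule of thumb.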
Thanks for helping me understand the debate better.
If you make people think they're thinking, they'll
love you; but if you really make them think, they'll
hate you.
-- Don Marquis
firstname.lastname@example.org mailing list
To unsubscribe, send any mail to "[EMAIL PROTECTED]"