Nicole Harrington wrote:
--- Cy Schubert <[EMAIL PROTECTED]> wrote:
In message <[EMAIL PROTECTED]>, Nicole Harrington writes:
--- Cy Schubert <[EMAIL PROTECTED]> wrote:
In message <[EMAIL PROTECTED]>, Mike Meyer writes:
Generally, more processors means things will go faster until you run out of threads. However, if there's some shared resource that is the bottleneck for your load, and the resource doesn't support simultaneous access by all the cores, more cores can slow things down.

Of course, it's not really that simple. Some shared resources can be managed so as to make things improve under most loads, even if they don't support simultaneous access.
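
To make the shared-resource point concrete, here is a minimal C sketch (purely illustrative; the counter, thread counts, and iteration count are invented for this example) in which every thread funnels through one mutex, so extra cores mostly buy contention:

/*
 * Illustrative only: all "work" goes through a single mutex, so the lock
 * is the bottleneck and adding threads mostly adds contention.
 * Build with: cc -O2 -std=c99 -pthread contention.c -o contention
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS_PER_THREAD 1000000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS_PER_THREAD; i++) {
        /* All useful work funnels through this one lock. */
        pthread_mutex_lock(&lock);
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(int argc, char **argv)
{
    int nthreads = (argc > 1) ? atoi(argv[1]) : 2;
    if (nthreads < 1)
        nthreads = 1;

    pthread_t *tids = malloc(sizeof(pthread_t) * nthreads);

    for (int i = 0; i < nthreads; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tids[i], NULL);

    printf("threads=%d counter=%ld\n", nthreads, shared_counter);
    free(tids);
    return 0;
}

Timed with one thread and then four (for example, time ./contention 1 against time ./contention 4), such a program typically shows little gain and often a slowdown, because the single lock serializes the useful work.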
Generally speaking, the performance increase is not linear. At some point there is no benefit to adding more processors. In a former life, when I was an MVS systems programmer, the limit was seven processors in a System/370. Today we can use 16, 32, even 64 processors with a standard operating system and current hardware, unless one of the massively parallel architectures is used.
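
The diminishing return can be illustrated with Amdahl's law, speedup(n) = 1 / (s + (1 - s)/n), where s is the fraction of the work that stays serial. A small C example with an assumed serial fraction of 10% (the 10% is an assumption chosen only to show the shape of the curve):

/*
 * Amdahl's law illustration: with serial fraction s, speedup(n) = 1 / (s + (1 - s)/n).
 * With s = 0.10, 16 CPUs give roughly 6.4x and 64 CPUs only about 8.8x,
 * so adding processors quickly stops paying off.
 */
#include <stdio.h>

int main(void)
{
    const double s = 0.10; /* assumed serial (non-parallelizable) fraction */
    const int cpus[] = { 1, 2, 4, 7, 16, 32, 64 };

    for (unsigned i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++) {
        double n = cpus[i];
        printf("%2d CPUs: speedup %.2f\n", cpus[i], 1.0 / (s + (1.0 - s) / n));
    }
    return 0;
}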

To answer the original poster's question, there are architectural differences mentioned here, e.g. shared cache, I/O channel, etc., but the reason the chip manufacturers make them is that they're more cost-effective than two CPUs.

The AMD X2 series of chips (I have one) is not truly a dual-processor chip. In concept they're analogous to the single-processor System/370 with an AP (attached processor). What this means is that both processors can execute all instructions and are just as capable in every way, except that external interrupts, e.g. I/O interrupts, are handled by processor 0, as only that processor is "wired" to be interrupted in case of an external interrupt. I can't comment about Intel's Dual Core CPUs, as I don't know their architecture, but I'd suspect the same would be true. As for chips in which there are two dual-core CPUs on the same die, I believe one core of each of the dual-core CPUs can handle external interrupts.
 Wow, I love asking questions without too many specifics, as I learn so much more. With this, however, it really seems to be a love-hate relationship with dual core.
Based on what you stated above, would that mean that when using a dual-core system, using polling instead of interrupts might be better, or perhaps monumentally worse?
No. CPU 0 would be interrupted. It would schedule the interrupt in the queue. Either CPU could service the interrupt once the interrupt was queued.

Some devices need to be polled as they do not generate interrupts, or they generate spurious interrupts. Otherwise, allowing a device to interrupt the CPU is more efficient, as it allows the CPU to do other work rather than spinning its wheels polling. This is the Von Neumann model.
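
As a rough illustration of that queueing model, here is a user-space C sketch with invented names (it models the idea described above, not actual FreeBSD kernel code): only the "CPU 0" path takes the interrupt, but either service thread may end up doing the work.

/*
 * Rough model, illustration only: only "CPU 0" takes the device interrupt,
 * but it merely queues the work; any CPU's service thread may pick it up.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;
static int pending = 0;          /* queued interrupt work items */
static int total_handled = 0;

/* Runs only on "CPU 0": acknowledge the device and queue the work. */
static void interrupt_on_cpu0(void)
{
    pthread_mutex_lock(&qlock);
    pending++;
    pthread_cond_signal(&qcond);
    pthread_mutex_unlock(&qlock);
}

/* One of these runs per CPU: service whatever work has been queued. */
static void *service_thread(void *arg)
{
    long cpu = (long)arg;

    for (;;) {
        pthread_mutex_lock(&qlock);
        while (pending == 0)
            pthread_cond_wait(&qcond, &qlock);
        pending--;
        total_handled++;
        printf("work item handled on CPU %ld (total %d)\n", cpu, total_handled);
        pthread_mutex_unlock(&qlock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;

    pthread_create(&t0, NULL, service_thread, (void *)0L);
    pthread_create(&t1, NULL, service_thread, (void *)1L);

    for (int i = 0; i < 8; i++) {   /* pretend the device interrupts 8 times */
        interrupt_on_cpu0();
        usleep(10000);
    }
    sleep(1);                       /* let the service threads drain the queue */
    return 0;
}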


--
Cheers,
Cy Schubert <[EMAIL PROTECTED]>
FreeBSD UNIX: <[EMAIL PROTECTED]> Web: http://www.FreeBSD.org

                        e**(i*pi)+1=0


 Yes, I have heard that, thanks.

  However, how does one know or tell which is the right mode/model for which devices? I have seen people on either side (poll vs. interrupt) claim one is better, or, much like an infomercial, that you just do blah and your system will be so much faster. Although of course that would be the pro-polling side, since interrupts are used by default. Is it all just empirical testing? Take this pill and let me know how you feel? It seems as though when it's heavy networking, use polling; otherwise stick with interrupts. I have even heard that when using X network card, you should use polling. How would one know when one card will do better with polling while another may not?

 Thanks for helping me understand the debate better.

  Nicole

Nicole:
If you're doing something regularly, no matter what the task, polling is the better method. Interrupts are for cases where you do something occasionally, but not all the time over your clock cycle. It's really dependent on the situation and the use of the software as to when interrupts are better than polling.

I'm not sure how AMD does it versus Intel, but different things are done in different ways in either chipmaker's camp, so interrupts may be better with AMD than with Intel (I'm just thinking of pipeline length, because Intel has always had long pipelines in their processors).

Anyhow, best of luck deciding which method is better, although depending on your situation it probably doesn't matter all that much, especially since there are other limiting factors in the system like bus speed, hard drives, chipset speed, etc. Basing the decision on the CPU alone is a bad way to go, as it's not a complete analysis.
Cheers,
-Garrett
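
One way to see the rule of thumb above is a back-of-envelope cost comparison in C; the per-interrupt cost, per-poll cost, and polling rate below are assumptions chosen only to show where the breakeven sits:

/*
 * Back-of-envelope comparison (numbers are assumptions, purely for
 * illustration): interrupt-driven I/O pays a fixed cost per event, while
 * polling pays a fixed cost per tick whether or not anything arrived.
 * Below some event rate interrupts win; above it polling wins.
 */
#include <stdio.h>

int main(void)
{
    const double irq_cost_us  = 5.0;    /* assumed CPU cost per interrupt */
    const double poll_cost_us = 2.0;    /* assumed CPU cost per poll */
    const double hz           = 1000.0; /* assumed polling rate (ticks/s) */

    const double rates[] = { 100, 1000, 10000, 100000 }; /* events per second */

    for (unsigned i = 0; i < sizeof(rates) / sizeof(rates[0]); i++) {
        double irq_us_per_s  = rates[i] * irq_cost_us;
        double poll_us_per_s = hz * poll_cost_us;
        printf("%8.0f events/s: interrupts %8.0f us/s, polling %8.0f us/s -> %s\n",
               rates[i], irq_us_per_s, poll_us_per_s,
               irq_us_per_s < poll_us_per_s ? "interrupts cheaper" : "polling cheaper");
    }
    return 0;
}

Note that this simple model ignores the extra latency polling adds (up to one tick between polls), which is part of why interrupts remain the default for devices that only generate occasional events.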
_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
