* This is the modus mailing list *

Bill, I actually re-read what I wrote and I agree with you. Since I am
now relaxing at home, I will give a much fuller explanation so that you
guys can all understand a bit better.


Intel SMP Architecture
----------------------

The CPU, which executes the applications, is the most important resource
in a system. For a CPU to be effective, both data and instructions must
be moved into the CPU so that it can remain productive and perform the
requested tasks. A variety of disk, network, and memory operations can
lead the CPU into a waiting state.

To enable CPUs to more productively execute program instructions, memory
hierarchies are constructed of a level 1 (L1) cache, a level 2 (L2)
cache, a level 3 (L3) cache (in some cases), and the main memory. In
Dual Independent Bus (DIB) processors, L1 and L2 caches are built on the
same physical chip, which enables the CPU to access data at the speed of
the core CPU without traversing the system bus. This functionality can
help increase the overall scalability of the system.

When data is not available in the cache, the CPU sends the request
across the system bus to obtain the data from memory. This action can
slow down the CPU and prevent it from completing the application's
request. In an SMP-based server with more than one CPU, the extra data
on the system bus can slow down the other CPUs because only one request
at a time can traverse the system bus. In general, increasing the size
of the L2 cache in this architecture can have the following benefits:

1). Keeps the CPU operating at a higher rate and in a more efficient
manner because of the higher likelihood that data resides in cache.

2). Results in less data traversing the system bus, essentially reducing
the bus's congestion levels.
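To make the cache-size point concrete, here is a quick Python sketch. It is not anything Intel-specific, just a toy LRU cache model with made-up numbers, but it shows the first benefit: for the same access pattern, a larger cache gives a higher hit rate, so fewer requests have to go out to "memory" (the system bus).

```python
from collections import OrderedDict
import random

def hit_rate(cache_size, accesses):
    """Simulate an LRU cache and return the fraction of accesses that hit."""
    cache = OrderedDict()
    hits = 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)          # mark as most recently used
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False)    # evict the least recently used line
            cache[addr] = True
    return hits / len(accesses)

# A toy workload: 10,000 accesses spread over 64 distinct "cache lines".
random.seed(1)
accesses = [random.randrange(64) for _ in range(10_000)]

small = hit_rate(16, accesses)   # cache holds a quarter of the working set
large = hit_rate(64, accesses)   # cache holds the whole working set
print(f"small cache hit rate: {small:.2f}, large cache hit rate: {large:.2f}")
```

The exact numbers depend on the workload, but the larger cache always wins here because the whole working set fits, which is exactly why a bigger L2 keeps traffic off the system bus.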


Windows OS Scheduling
---------------------

In Windows operating systems, a process represents an instance of an
executing application, and each process must have at least one thread of
execution, which executes program code for the process. In addition,
Windows 2000 introduced a new entity called the 'job object', which
enables groups of processes to be managed as a single unit.
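As a quick illustration of the process/thread relationship (in Python rather than the Win32 API, purely for readability): one process, several threads, all executing code in the same address space.

```python
import threading

results = []                      # shared state: all threads see the same list
lock = threading.Lock()           # the shared address space is why we need this

def worker(n):
    # Each thread executes program code on behalf of the same process.
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                      # the process stays alive until its threads finish
print(sorted(results))
```

The lock is there because the threads share one address space, which is the same property that makes thread-to-thread context switches cheaper, as described below.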

A context switch is the action that occurs when the processor switches
from one process's executable thread to another executable thread. From
a CPU perspective, a context switch can be a costly operation because of
process housekeeping such as virtual address space tracking, thread
state management, CPU register contents, kernel pointers, and so forth.
Conversely, context switches that occur between threads from the same
process have lower overhead because these threads share the same address
space.

To avoid contention, a resource ideally would not be shared at all;
short of that, keeping a thread close to the resources it has been using
helps. Assigning threads to particular processors increases the chance
of a cache hit, because the thread's data is more likely to still be
resident in that CPU's cache. In general, the
Windows OS tries to run a thread on the CPU that last executed it unless
that CPU is not available. In that case, the Windows scheduler
dispatches the thread to the next available CPU (a concept known as soft
affinity). Conversely, hard affinity binds a particular thread or
process permanently to a specific CPU.
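On Windows, hard affinity is set with calls such as SetProcessAffinityMask and SetThreadAffinityMask; since I cannot easily show those here, below is the Linux analogue (os.sched_setaffinity), which does the same job of pinning a process to a chosen CPU. Treat it as a sketch of the concept, not of the Windows API.

```python
import os

# The set of CPUs this process is currently eligible to run on.
allowed = os.sched_getaffinity(0)
print(f"eligible CPUs: {sorted(allowed)}")

if len(allowed) > 1:
    # "Hard affinity": bind this process to a single CPU...
    one = {min(allowed)}
    os.sched_setaffinity(0, one)
    assert os.sched_getaffinity(0) == one
    # ...then restore the original mask so we don't stay pinned.
    os.sched_setaffinity(0, allowed)
```

Soft affinity, by contrast, is not something you set; it is just the scheduler's preference for the last CPU a thread ran on.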

The Windows OS uses a priority-based, round-robin scheduling algorithm
to distribute threads among the CPUs in an SMP system. Together, these
technologies can help implement application consolidation by enabling
servers to run multiple applications more efficiently.
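Here is a toy model of that priority-based, round-robin behaviour (my own simplification, not the actual Windows scheduler, which also has dynamic priority boosts and per-CPU ready queues): the scheduler always services the highest-priority ready queue, and within a queue each thread runs for one quantum and then rotates to the back.

```python
from collections import deque

def schedule(threads, quantum=1, steps=12):
    """Toy priority-based round-robin scheduler.

    threads: list of (name, priority, units_of_work) tuples.
    Returns the order in which thread names were given a quantum.
    """
    queues = {}
    for name, prio, work in threads:
        queues.setdefault(prio, deque()).append([name, work])
    trace = []
    for _ in range(steps):
        ready = [p for p in sorted(queues, reverse=True) if queues[p]]
        if not ready:
            break                        # every thread has finished
        queue = queues[ready[0]]         # highest-priority non-empty queue
        thread = queue.popleft()
        thread[1] -= quantum             # run the thread for one quantum
        trace.append(thread[0])
        if thread[1] > 0:
            queue.append(thread)         # rotate to the back of its queue
    return trace

# Two priority-2 threads alternate; the priority-1 thread only runs after
# both of them have finished.
trace = schedule([("A", 2, 2), ("B", 2, 2), ("C", 1, 2)])
print(trace)
```

Tracing it by hand: A and B share the high-priority queue in round-robin fashion, and C is starved until they complete.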

Now, we know that Vircom has stated that Modus Mail is multi-threaded,
so in theory new threads started by a Modus Mail process (background
service or console) should utilize all the CPUs. However, although Modus
Mail is multi-threaded, threads are not started unless they are needed,
which is why you would rarely see its processes running at over 50% CPU
usage (at least not on my systems). By the time the service starts a
thread on a second processor, a thread on the original processor has
usually finished executing. Modus Mail is actually very efficient, using
very little processing power in each thread, so you can expect not to
see any service using more than 50% of the system's overall processing
capacity: a single process almost never has threads running flat out on
separate processors simultaneously.
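That "threads only when needed" pattern is easy to picture with a thread pool (again a generic Python sketch, since I obviously don't have Vircom's source; the SMTP-looking messages are just made-up placeholders): the pool may create up to max_workers threads, but short tasks often complete before it ever needs that many, so total CPU use stays low.

```python
from concurrent.futures import ThreadPoolExecutor
import threading
import time

seen = set()   # records which worker threads actually handled requests

def handle(msg):
    seen.add(threading.current_thread().name)  # note the serving thread
    time.sleep(0.01)                           # simulate a short unit of work
    return msg.upper()

# The pool starts worker threads on demand, up to max_workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    replies = list(pool.map(handle, ["helo", "mail", "rcpt", "data"]))
print(replies, f"served by {len(seen)} thread(s)")
```

Depending on timing, anywhere from one to four threads end up doing the work; when each unit of work is tiny, the early threads finish fast enough that extra ones are rarely needed, which matches what I see with Modus Mail.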

Hope this clears up my line of thinking, and provides some insight into
multiprocessing systems.


Regards,

Suneel.





-----Original Message-----
From: Bill Sobel [mailto:[EMAIL PROTECTED] 
Sent: 29 October 2003 19:34
To: [EMAIL PROTECTED]

* This is the modus mailing list *

"Yes, basically the services are not written to run on multiple
processors, instead they execute in memory spawning new threads to
handle individual calls. This in turn means that the OS will distribute
the processes over multiple CPU's, however each service will be
restricted to that percentage off the processing power."

The amount of misinformation in this thread is amazing.  Threads will
distribute across available CPU's, there is no 'restrictions' as to
'that
percentage off the processing power'.  

Bill




**
To unsubscribe, send an Email to: [EMAIL PROTECTED]
with the word "UNSUBSCRIBE" in the body or subject line.


