The Windows Task Manager Performance tab shows two processors.
Will two instructions be executed concurrently?

With Regards,
Prabagaran.


On Wed, May 5, 2010 at 4:56 PM, Varun Nagpal <[email protected]> wrote:

> I guess that with virtual machines, instructions that simulate the
> instructions of a microprocessor are scheduled onto the real processor.
> But a good question is how the scheduling of real microprocessor
> instructions is done inside a virtual machine. The answer is that it is
> done on a virtual processor, which essentially has all the hardware
> components of a real processor modeled in software. All the sub-parts
> of this software, representing the essential hardware components, run
> synchronously (in parallel), either at an instruction-accurate level or
> at a cycle-accurate level.
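A minimal sketch of what an instruction-accurate virtual processor boils down to: every architectural component (PC, register file, memory) is modeled as plain software state, and one loop iteration corresponds to one guest instruction. The toy ISA, opcode names, and register count below are invented for illustration, not taken from any real machine.

```python
def run(program, steps=100):
    """Instruction-accurate interpreter for a toy 3-opcode ISA."""
    pc = 0                  # program counter, modeled as a plain integer
    regs = [0] * 4          # register file, modeled as a list
    while pc < len(program) and steps > 0:
        op, *args = program[pc]              # fetch + decode
        if op == "li":                       # li rd, imm  -> rd = imm
            regs[args[0]] = args[1]
        elif op == "add":                    # add rd, rs1, rs2
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "jmp":                    # jmp target
            pc = args[0]
            steps -= 1
            continue
        pc += 1                              # default: fall through
        steps -= 1
    return regs

# 1 + 2 computed on the virtual processor
print(run([("li", 0, 1), ("li", 1, 2), ("add", 2, 0, 1)]))  # → [1, 2, 3, 0]
```

A cycle-accurate model would instead advance every modeled hardware block (fetch, decode, ALU, ...) once per simulated clock tick; this sketch only preserves behavior at instruction boundaries.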
>
> Most new processors designed today are first verified using simulators
> written in hardware description languages like VHDL, SystemVerilog, or
> SystemC, and then simulated in software or in hardware. For hardware
> simulation, it is in some cases possible to implement them on FPGAs and
> verify them before they are sent to the fab. It is an arduous task.
> For example, you can get the HDL code for Sun's OpenSPARC processor for
> free and flash it onto an FPGA.
>
> So it doesn't really matter whether your processor is real or virtual:
> you need to understand architecture principles and some digital
> electronics to understand it at the hardware (VLSI) level.
>
> Intel x86, and now x64, are the most popular architectures. Other
> popular architectures are ARM, MIPS, SPARC, PowerPC, etc.
> You should probably read the book Computer Architecture: A Quantitative
> Approach by Hennessy and Patterson.
>
>
> On Tue, May 4, 2010 at 10:49 PM, praba garan <[email protected]>
> wrote:
> > I think it is necessary to study the full architecture of an Intel
> > motherboard to get a full picture.
> > How does scheduling happen in the case of virtual machines?
> > And how is a packet destined for the Guest OS delivered: directly to
> > the Guest OS, or through the Host OS?
> > With Regards,
> > Prabagaran.
> >
> >
> > On Mon, May 3, 2010 at 12:25 PM, Varun Nagpal <[email protected]>
> > wrote:
> >>
> >> I think it's a good question and fairly complicated to explain at
> >> the hardware (RTL) level. Anyway, let me give it a try:
> >>
> >> You suggested that only one instruction is executed by one processor,
> >> which is not true (if you have read computer architecture). Briefly,
> >> let's assume the instruction pipeline (assuming only a single
> >> hardware thread) is filled with instructions from the present thread
> >> (or process) of execution. Assume the number of pipeline stages is
> >> 20. Then up to 20 instructions from the current control flow are in
> >> the pipeline at once, each advancing one stage on every clock tick.
> >> Depending on the design of the pipeline, data from registers/memory
> >> is read in different pipeline stages. There may also be several
> >> execution (ALU) stages before the data is written back to a register
> >> or to memory.
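The idea of 20 instructions being in flight at once can be sketched with a toy simulation. The stage count and the simple "retire on leaving the last stage" model are illustrative assumptions; real pipelines stall, forward, and flush.

```python
STAGES = 20   # illustrative pipeline depth, matching the example above

def simulate(n_instructions, stages=STAGES):
    """Push n instructions through an idealized in-order pipeline."""
    pipeline = [None] * stages      # pipeline[s] = instruction in stage s
    retired = []
    tick = 0
    next_instr = 0
    while next_instr < n_instructions or any(s is not None for s in pipeline):
        tick += 1
        # one clock tick: every instruction advances one stage, and the
        # instruction entering the last stage completes there and retires
        for s in range(stages - 1, 0, -1):
            pipeline[s] = pipeline[s - 1]
        pipeline[0] = next_instr if next_instr < n_instructions else None
        if pipeline[0] is not None:
            next_instr += 1
        if pipeline[stages - 1] is not None:
            retired.append(pipeline[stages - 1])
            pipeline[stages - 1] = None
    return tick, retired

# 5 instructions need 5 + (20 - 1) = 24 ticks, not 5 * 20 = 100
print(simulate(5))  # → (24, [0, 1, 2, 3, 4])
```

Once the pipeline is full, one instruction retires per tick even though each individual instruction spends 20 ticks in flight; that is the concurrency the paragraph above describes.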
> >>
> >> The OS kernel keeps track of all the threads/processes that are
> >> currently executing, active, waiting, suspended, etc. in an in-memory
> >> data structure, which is to say that it always knows the next
> >> thread/process it needs to schedule onto the processor. I think the
> >> hardware has a compare register that stores an arbitrary number of
> >> clock ticks (as decided by the kernel) for time-slice expiry, and a
> >> counter register to keep track of time-slice expiration for the
> >> present thread. On every clock tick, the counter register is
> >> incremented and compared with the compare register by the timer
> >> hardware, independently of the instruction stream. The point is that
> >> a timer interrupt is generated whenever the values of the counter and
> >> compare registers match. When this occurs, the next PC value,
> >> registers, etc. (i.e., the thread's context information) are pushed
> >> onto the stack, and a jump is made to an area in memory storing an
> >> interrupt vector table. I also assume that when this jump is made,
> >> the OS kernel supplies some information about the next thread to be
> >> executed; this information may be stored in another dedicated
> >> register. Using this information and the interrupt vector table, the
> >> processor can find the memory address of the next thread (i.e., its
> >> next instruction) to be executed. The PC, along with the other
> >> registers, is then simply loaded with the context information of the
> >> new thread.
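The counter/compare mechanism and the resulting context switch might be sketched roughly like this. The slice length, class and function names, and the way saved context is stored are all illustrative assumptions, not how any particular kernel does it.

```python
TIME_SLICE = 3   # "compare register": ticks per slice, chosen arbitrarily

class Thread:
    """Saved context of one thread: PC plus register file."""
    def __init__(self, name):
        self.name = name
        self.pc = 0
        self.regs = [0] * 4

def schedule(threads, total_ticks):
    """Round-robin over threads, switching on a counter/compare match."""
    counter = 0              # "counter register", reset on every switch
    current = 0              # index of the thread now on the processor
    trace = []
    for _ in range(total_ticks):
        t = threads[current]
        t.pc += 1            # the running thread executes one instruction
        trace.append(t.name)
        counter += 1
        if counter == TIME_SLICE:     # match raises the timer interrupt:
            counter = 0
            # context already lives in the Thread object (standing in for
            # the stack), so switching is just restoring the next thread
            current = (current + 1) % len(threads)
    return trace

print(schedule([Thread("A"), Thread("B")], 8))
# → ['A', 'A', 'A', 'B', 'B', 'B', 'A', 'A']
```

The interrupt vector table is elided here; in the sketch, "vectoring to the handler" collapses into the two lines that reset the counter and pick the next thread.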
> >>
> >> The important thing here is that while all of this is happening, the
> >> pipeline may still be executing instructions from the previous
> >> thread, and in addition it will contain the interrupt-handling
> >> instructions. Only when the PC is updated (in some stage of the
> >> pipeline) does the instruction-fetch stage start fetching from the
> >> instruction-memory area of the new thread. In a 20-stage pipeline, it
> >> is quite likely that the pipeline momentarily contains instructions
> >> from the old thread, followed by the interrupt instructions, followed
> >> by instructions from the new thread.
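That mixed pipeline state can be made visible with a small simulation. The 8-stage depth (shortened from 20 to keep the printout readable) and the "old"/"irq"/"new" labels are illustrative.

```python
STAGES = 8   # shortened pipeline so one snapshot fits on a line

# the fetch stream around a context switch: tail of the old thread,
# a couple of interrupt-entry instructions, then the new thread
stream = (["old"] * STAGES +
          ["irq"] * 2 +
          ["new"] * STAGES)

pipeline = [None] * STAGES
snapshots = []
for instr in stream:
    pipeline = [instr] + pipeline[:-1]   # one clock tick: fetch + advance
    snapshots.append(list(pipeline))

# a snapshot taken just after the switch shows all three kinds at once,
# ordered fetch stage first, oldest stage last
print(snapshots[STAGES + 3])
# → ['new', 'new', 'irq', 'irq', 'old', 'old', 'old', 'old']
```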
> >>
> >> I hope this explanation gives you better clarity.
> >>
> >> On Sun, May 2, 2010 at 7:01 PM, harit agarwal <[email protected]>
> >> wrote:
> >> > Although the CPU is busy executing, it checks its register values
> >> > for pending interrupts.
> >> > If any interrupt is pending at the end of the current CPU cycle, it
> >> > schedules the interrupt handler to execute the interrupt service
> >> > routine.
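A rough sketch of that end-of-cycle check, with the pending-interrupt register modeled as a simple list; the names and the one-check-per-instruction granularity are illustrative.

```python
pending = []                   # stands in for the pending-interrupt bits

def handler(irq):
    """Stand-in for the interrupt service routine."""
    return f"handled {irq}"

def execute(instructions):
    """Run instructions, checking for pending interrupts after each one."""
    log = []
    for instr in instructions:
        log.append(f"exec {instr}")      # the current CPU cycle
        while pending:                   # end-of-cycle interrupt check
            log.append(handler(pending.pop(0)))
    return log

pending.append("timer")                  # an interrupt arrives before i0
log = execute(["i0", "i1", "i2"])
print(log)  # → ['exec i0', 'handled timer', 'exec i1', 'exec i2']
```

Note the handler runs between instructions, never in the middle of one; that boundary is exactly the "end of the current CPU cycle" in the message above.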
> >> >
> >> > --
> >> > You received this message because you are subscribed to the Google
> >> > Groups "Algorithm Geeks" group.
> >> > To post to this group, send email to [email protected].
> >> > To unsubscribe from this group, send email to
> >> > [email protected].
> >> > For more options, visit this group at
> >> > http://groups.google.com/group/algogeeks?hl=en.
> >> >
> >>
> >
>
>
