Jens Nerche <[EMAIL PROTECTED]> wrote:

> I built a GDT and lgdt'ed it. The next instruction breaks the
> vm, and in the vm_debug_exception () output I see the value
> of cs. This means that while emulating lgdt, cs should be set
> to a value different from 0xb, because that GDT slot
> is used and the free GDT slots are at the end of the new GDT.
> cs should be significantly higher...

OK, now I see what you mean.   Note, however, that the value
of %cs shown there is the *guest* cs, *not* the monitor cs!

This is actually a general problem, and is caused by this situation:

  Let's say you have guest code that performs an lgdt and
  subsequently loads the segment registers with new selectors.

  What happens on the real processor is that the lgdt changes
  the address where the processor is looking for the descriptor
  table *in the future*.  It does *not* change any of the 
  segment registers *currently loaded*, neither the visible 
  selector value, nor the invisible descriptor cache.

  Note that this means that e.g. accessing memory via %ds still
  accesses the *old* segment described by the descriptor in the old
  GDT (which is cached in the descriptor cache), *not* the new
  segment that would correspond to the selector in the new GDT.

  In particular, this implies that after performing a 'push %ds;
  pop %ds' in this situation, accesses via %ds will point to
  different memory, although the selector value is the same :-/
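To make this concrete, here is a minimal sketch of such a guest
sequence (32-bit, gcc inline assembly; the struct and function
names are made up for illustration, not taken from any real code):

    /* After the lgdt, %ds still works through the *old* cached
     * descriptor; the push/pop refills the cache from the *new*
     * GDT, so the same selector value can suddenly map different
     * memory. */
    struct gdt_ptr {
        unsigned short limit;
        unsigned long  base;
    } __attribute__((packed));

    static inline void switch_gdt(struct gdt_ptr *new_gdt)
    {
        __asm__ __volatile__(
            "lgdt  %0           \n\t" /* future lookups use new GDT  */
            "movl  %%ds:0, %%eax\n\t" /* still via OLD cached %ds    */
            "pushl %%ds         \n\t"
            "popl  %%ds         \n\t" /* cache refilled from NEW GDT */
            : : "m" (*new_gdt) : "eax", "memory");
    }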

For us, the problem is that any interrupt/exception at this point
is very difficult to handle correctly, because the retf reloads the
%cs descriptor cache with the new descriptor -- which is wrong.

As the lgdt is itself emulated within an exception, the very
'iret' back to guest code performs this (incorrect) reloading of
the descriptor cache, and hence the current lgdt implementation is
only correct if the old and new GDT contain equivalent descriptors
for all selectors currently loaded into segment registers at the
point the lgdt is performed ...  (The guest kernels do it this way.)
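That constraint could be checked explicitly when emulating lgdt.
A minimal sketch, assuming we can read both tables and know which
selectors the guest currently has loaded (all names here are
hypothetical, not from the actual monitor code):

    #include <string.h>

    struct descriptor { unsigned int lo, hi; };  /* raw 8-byte entry */

    /* safe only if every currently loaded selector maps to an
     * identical descriptor in the old and the new GDT
     * (GDT selectors assumed; TI/LDT ignored for brevity) */
    static int lgdt_is_safe(const struct descriptor *old_gdt,
                            const struct descriptor *new_gdt,
                            const unsigned short *loaded_sels, int n)
    {
        int i;
        for (i = 0; i < n; i++) {
            unsigned int idx = loaded_sels[i] >> 3;  /* sel -> index */
            if (memcmp(&old_gdt[idx], &new_gdt[idx],
                       sizeof(struct descriptor)))
                return 0;  /* iret would load a wrong descriptor cache */
        }
        return 1;
    }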

Implementing this correctly would probably require creating
'shadow' GDT entries that contain the old descriptors that are to
be loaded into the descriptor cache, and changing the guest
segment registers to those shadow selectors when emulating lgdt.
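Roughly like this (a sketch only, assuming we have free slots at a
known index in the new GDT and access to the emulated descriptor
caches; all names are hypothetical):

    struct descriptor { unsigned int lo, hi; };

    /* copy each loaded segment's cached descriptor into a free
     * 'shadow' slot of the new GDT and repoint the saved guest
     * selector at it, so the iret back to guest code reloads
     * exactly the old cached contents */
    static void shadow_loaded_segs(struct descriptor *new_gdt,
                                   unsigned int first_free_idx,
                                   unsigned short *saved_sels,
                                   const struct descriptor *cached,
                                   int nsegs)
    {
        int i;
        for (i = 0; i < nsegs; i++) {
            unsigned int idx = first_free_idx + i;
            new_gdt[idx] = cached[i];            /* preserve old cache */
            saved_sels[i] = (unsigned short)(idx << 3)
                          | (saved_sels[i] & 3); /* keep RPL, TI=0     */
        }
    }

One caveat: the guest could then observe the shadow selector
values (e.g. via a 'push %ds'), so this is a sketch of the idea
rather than a complete solution.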


> >Does the ljmp itself cause the GPF?  If so, the cs in
> >the exception frame is still the old one ...
> This is what I said. But after the lgdt it _should not_ be
> the old one, no? That's the point confusing me...

If the problem is what I described above, the GPF is *not*
triggered at the ljmp [MON_JMP_INFO] as you suspected, but
at the 'iret' that switches to guest code.

That's why I was asking; if it were really that ljmp that
triggers the GPF, we'd have a different problem.  But from
what you said above (you looked at the vm_debug_exception 
output), it seems that you don't have a problem with the
*monitor* cs values after all ;-)


Bye,
Ulrich
