Re: [Xen-devel] Re: [XenPPC] Xencomm for xen/ia64

2006-08-22 Thread Tristan Gingold
On Monday 21 August 2006 at 18:24, Hollis Blanchard wrote:
 On Mon, 2006-08-21 at 08:46 +0200, Tristan Gingold wrote:
  On Friday 18 August 2006 at 18:39, Hollis Blanchard wrote:
   On Fri, 2006-08-18 at 11:04 +0200, Tristan Gingold wrote:
On Thursday 17 August 2006 at 20:35, Hollis Blanchard wrote:
  
   I'm not sure how it simplifies hcall.c. You always need to create
   xencomm descriptors, unless you're manually guaranteeing that the
   dom0_op structures do not cross page boundaries (otherwise they are
   not linear in memory). Is that what you're doing?
 
  For hypercalls issued through privcmd, xencomm descriptors are always
  created. For hypercalls directly issued by the kernel, inline xencomm
  is preferred.

 How do you guarantee that kernel-created data structures are not
 crossing page boundaries? The patch you sent does not do this. Without
 that, xencomm_inline() simply cannot work except by luck.
Kernel-created structures are linear in guest physical space, so it doesn't 
matter if they cross page boundaries.

 We need to do one more thing though: we *also* need to fix up
 the size of longs and pointers in our code (since 32-bit
 userland is passing structures to a 64-bit kernel). So perhaps
 these two fixup passes could be split: we could share the xencomm
 conversion in common code, and PPC arch code could contain the size
 munging.
   
Are structure sizes different on 32 and 64 bits?
  
   Yes, in particular longs and pointers.
 
  But are longs and pointers used directly in Xen hypercalls?  I thought
  only sized types (uintNN_t and others) are used.

 I have put a lot of work into converting types to be explicitly sized,
 but there are still missing pieces. I think Jimi got tired of it, and
 started doing the Linux compat32 conversion. For example, see
 drivers/xen/privcmd/compat_privcmd.c.
Ok.

Tristan.

___
Xen-ppc-devel mailing list
Xen-ppc-devel@lists.xensource.com
http://lists.xensource.com/xen-ppc-devel


Re: [Xen-devel] Re: [XenPPC] Xencomm for xen/ia64

2006-08-21 Thread Tristan Gingold
On Friday 18 August 2006 at 18:39, Hollis Blanchard wrote:
 On Fri, 2006-08-18 at 11:04 +0200, Tristan Gingold wrote:
  On Thursday 17 August 2006 at 20:35, Hollis Blanchard wrote:
If we agree on using xencomm we will have to work with xen/ppc people
in order not to duplicate the code.  Hopefully it is rather small.  I
have enhanced the xencomm code so that the kernel may skip the xencomm
area and instead pass the guest physical address with a flag indicating
the space is linear in memory.
   
At this time I can boot dom0 with xencomm.  I will publish the patch
later.
  
   I'll be very interested to see your patch. I guess the flag is a
   reserved bit in the (physical) address passed from kernel to
   hypervisor?
 
  Yes.
 
   Does that really gain much performance?
 
  I don't think performance will be decreased.  But it simplifies hcall.c a
  lot!

 I'm not sure how it simplifies hcall.c. You always need to create
 xencomm descriptors, unless you're manually guaranteeing that the
 dom0_op structures do not cross page boundaries (otherwise they are
 not linear in memory). Is that what you're doing?
For hypercalls issued through privcmd, xencomm descriptors are always created.
For hypercalls directly issued by the kernel, inline xencomm is preferred.

  Using xencomm_create (and __get_free_page) is tricky: it doesn't work all
  the time, and in particular it doesn't work very early during kernel boot.
  Using xencomm_create_mini is possible but rather heavy.

 Heavy meaning what? It adds almost no CPU overhead (just checking for
 crossing page boundaries), and the stack space used is 64 bytes.
It is cumbersome: you have to declare the stack space, make the call, and
check the result.  Using inline xencomm is just a call.

 The only reason it's not the preferred API is that a) it's a little
 cumbersome to use (in that the caller must manually allocate stack
 space), and b) it handles only up to two pages worth of data.

   I guess you will need to do the same thing we need to with privcmd
   ioctl handling, i.e. copy and modify the pointers in the dom0_op data
   structures passed to the kernel. :(
 
  Yes.  hcall.c *has* to be shared between ppc and ia64.
 
   We need to do one more thing though: we *also* need to fix up
   the size of longs and pointers in our code (since 32-bit userland is
   passing structures to a 64-bit kernel). So perhaps these two fixup
   passes could be split: we could share the xencomm conversion in common
   code, and PPC arch code could contain the size munging.
 
  Are structure sizes different on 32 and 64 bits?

 Yes, in particular longs and pointers.
But are longs and pointers used directly in Xen hypercalls?  I thought only
sized types (uintNN_t and others) are used.

Tristan.



Re: [Xen-devel] Re: [XenPPC] Xencomm for xen/ia64

2006-08-18 Thread Hollis Blanchard
On Fri, 2006-08-18 at 11:04 +0200, Tristan Gingold wrote:
 On Thursday 17 August 2006 at 20:35, Hollis Blanchard wrote:
 
   If we agree on using xencomm we will have to work with xen/ppc people in
   order not to duplicate the code.  Hopefully it is rather small.  I have
   enhanced the xencomm code so that the kernel may skip the xencomm area and
   instead pass the guest physical address with a flag indicating the space
   is linear in memory.
  
   At this time I can boot dom0 with xencomm.  I will publish the patch
   later.
 
  I'll be very interested to see your patch. I guess the flag is a
  reserved bit in the (physical) address passed from kernel to hypervisor?
 Yes.
 
  Does that really gain much performance?
 I don't think performance will be decreased.  But it simplifies hcall.c a lot!

I'm not sure how it simplifies hcall.c. You always need to create
xencomm descriptors, unless you're manually guaranteeing that the
dom0_op structures do not cross page boundaries (otherwise they are
not linear in memory). Is that what you're doing?

 Using xencomm_create (and __get_free_page) is tricky: it doesn't work all the
 time, and in particular it doesn't work very early during kernel boot.
 Using xencomm_create_mini is possible but rather heavy.

Heavy meaning what? It adds almost no CPU overhead (just checking for
crossing page boundaries), and the stack space used is 64 bytes.

The only reason it's not the preferred API is that a) it's a little
cumbersome to use (in that the caller must manually allocate stack
space), and b) it handles only up to two pages worth of data.

  I guess you will need to do the same thing we need to with privcmd ioctl
  handling, i.e. copy and modify the pointers in the dom0_op data
  structures passed to the kernel. :(
 Yes.  hcall.c *has* to be shared between ppc and ia64.
 
  We need to do one more thing though: we *also* need to fix up the
  size of longs and pointers in our code (since 32-bit userland is passing
  structures to a 64-bit kernel). So perhaps these two fixup passes could
  be split: we could share the xencomm conversion in common code, and PPC
  arch code could contain the size munging.
 Are structure sizes different on 32 and 64 bits?

Yes, in particular longs and pointers.

-- 
Hollis Blanchard
IBM Linux Technology Center




[XenPPC] Xencomm for xen/ia64

2006-08-16 Thread Tristan Gingold
Hi,

[xen-ppc-devel is on CC just for info]

I am porting xen-ppc's xencomm to xen/ia64.
Currently on xen/ia64, copy_from/to_guest uses guest virtual addresses.  This
works well as long as the virtual addresses are in the TLB.  When they are
not in the TLB (or vTLB), the hypercall can't succeed without help from the
domain.  The possible solutions are to touch the memory areas before making
the hypercall and/or to restart the hypercall.

Touching the memory area is a hack and we can't be sure it works.
Restarting the hypercall is not always possible (some hypercalls are atomic:
DOM0_SHADOW_CONTROL_OP_CLEAN) or can result in a live-lock.

The simplest solution is to use guest physical addresses instead of virtual
addresses.

For hypercalls directly issued by the kernel, the translation is very easy.
For hypercalls (indirectly) issued by dom0 through the ioctl, the kernel has
to do the translation.  Because the data may not be linear in guest physical
memory, the parameter is a pointer to a list of pages (a xencomm descriptor).

The pros of this approach are simplicity and reliability.

The main con is perhaps speed.  Fortunately the most frequent hypercalls
(dom0vp and eoi) either don't use in-memory parameters (dom0vp) or may be
modified so that they pass parameters through registers (eoi).  IMHO speed
won't be affected.

Access to guest memory is also performed during vcpu_translate (to read the
vhpt) or EFI/PAL/SAL calls.  We can either leave that code unchanged (i.e.
the two mechanisms are not exclusive) or change it.  This point will be
postponed.

Comments are welcome (I won't work tomorrow, so you have more time).

If we agree on using xencomm we will have to work with xen/ppc people in order
not to duplicate the code.  Hopefully it is rather small.  I have enhanced
the xencomm code so that the kernel may skip the xencomm area and instead
pass the guest physical address with a flag indicating the space is linear in
memory.

At this time I can boot dom0 with xencomm.  I will publish the patch later.

Tristan.
