Re: [Xen-devel] Re: [PATCH] [xen, xencomm] various xencomm fixes (was Re: [XenPPC] Re: [Xen-ia64-devel] [PATCH 1/2] remove xencomm page size limit(xen side))

2007-08-12 Thread Isaku Yamahata
On Fri, Aug 10, 2007 at 01:54:47PM -0500, Hollis Blanchard wrote:
> I don't see how we'd ever hit this case though, nor why we'd panic the
> domain. How could paddr_to_maddr() ever return an maddr that doesn't
> belong to the current domain? If paddr is invalid, an error should be
> returned from there and propagated up, which is better than killing the
> whole domain IMHO...

I agree with you that we should return an error instead of panicking the domain.

After getting the maddr, another vcpu of the domain can free the page
with the decrease-reservation hypercall at the same time.
Please remember that Xen supports SMP and ballooning.
Then, when the page at maddr is accessed, it might already be in use
for another purpose.
Such domain behaviour doesn't make sense; however, nothing
prevents it.

-- 
yamahata

___
Xen-ppc-devel mailing list
Xen-ppc-devel@lists.xensource.com
http://lists.xensource.com/xen-ppc-devel


Re: [PATCH] [xen, xencomm] various xencomm fixes (was Re: [XenPPC] Re: [Xen-ia64-devel] [PATCH 1/2] remove xencomm page size limit(xen side))

2007-08-10 Thread Hollis Blanchard
On Tue, 2007-08-07 at 17:44 +0900, Isaku Yamahata wrote:
 
> +/* get_page() to prevent another vcpu from freeing the page */
> +static int
> +xencomm_paddr_to_maddr(unsigned long paddr, unsigned long *maddr,
> +                       struct page_info **page)
> +{
> +    *maddr = paddr_to_maddr(paddr);
> +    if (*maddr == 0)
> +        return -EFAULT;
> +
> +    *page = virt_to_page(*maddr);
> +    if (get_page(*page, current->domain) == 0) {
> +        if (page_get_owner(*page) != current->domain)
> +            /* This page might be a page granted by another domain. */
> +            panic_domain(NULL, "copy_from_guest from foreign domain\n");
> +
> +        /* Try again. */
> +        return -EAGAIN;
> +    }
> +
> +    return 0;
> +}

PPC doesn't implement panic_domain(), so this doesn't build for me.

I don't see how we'd ever hit this case though, nor why we'd panic the
domain. How could paddr_to_maddr() ever return an maddr that doesn't
belong to the current domain? If paddr is invalid, an error should be
returned from there and propagated up, which is better than killing the
whole domain IMHO...

> +    *page = virt_to_page(*maddr);
This line doesn't make sense. Is it an maddr or a vaddr? If it's an
maddr, we shouldn't be passing it as virt to virt_to_page().

-- 
Hollis Blanchard
IBM Linux Technology Center




[PATCH] [xen, xencomm] various xencomm fixes (was Re: [XenPPC] Re: [Xen-ia64-devel] [PATCH 1/2] remove xencomm page size limit(xen side))

2007-08-07 Thread Isaku Yamahata
On Fri, Aug 03, 2007 at 09:43:16AM -0500, Hollis Blanchard wrote:
> On Fri, 2007-08-03 at 11:12 +0900, Isaku Yamahata wrote:
> > On Thu, Aug 02, 2007 at 10:00:46AM -0500, Hollis Blanchard wrote:
> > > On Thu, 2007-08-02 at 11:44 +0900, Isaku Yamahata wrote:
> > > >
> > > > > But we can issue sequential p2m hcalls with different offsets, so we do.
> > > > > So what exactly is the new problem?
> > > >
> > > > The limit is about 64GB for xen/ia64 because the xen/ia64 default page
> > > > size is 16KB.
> > > > The issue I'm seeing is the populate physmap hypercall failure
> > > > at domain creation. It's essentially the same issue you described above.
> > > > I want to remove the limitation with the patches.
> > >
> > > Can't you simply issue repeated populate_physmap hypercalls,
> > > incrementing extent_start?
> >
> > Yes, it's possible. However my choice was to generalize xencomm because
> > - The inline xencomm code already supports page boundary crossing.
> > - IMHO it doesn't unnecessarily complicate the existing code.
>
> Well, your patch was largish...

I see your point.
Unfortunately, after I consolidated the xen-side xencomm.c and reviewed the
patch, it became larger.

If you don't like multi-page address array support, I can drop it
from the attached patch and fall back to repeating the populate_physmap
hypercall. But the various argument checks are necessary with or without
multi-page support anyway, so dropping only the multi-page support doesn't
make sense.

# HG changeset patch
# User [EMAIL PROTECTED]
# Date 1186464727 -32400
# Node ID 41fde67aa85ac54b24191eb3db573998b098c41d
# Parent  8b6af0333d531d1fd63b1ce02c2df51b0d4f45f5
Various xencomm fixes.
This patch fixes the following issues.
- Xencomm should check that struct xencomm_desc itself (8 bytes) doesn't
  cross a page boundary. Otherwise a hostile guest kernel can pass a
  pointer that crosses a page boundary, and xencomm then accesses an
  unrelated page.
- Xencomm should check whether the struct xencomm_desc::address[] array
  crosses a page boundary. Otherwise xencomm may access unrelated pages.
- Xencomm should get_page()/put_page() after the address conversion from
  paddr to maddr, because xen supports SMP and the balloon driver.
  Otherwise another vcpu may free the page at the same time.
- The current implementation doesn't allow the struct
  xencomm_desc::address array to span more than a single page. On IA64
  this causes creation of 64GB+ domains to fail. This patch generalizes
  xencomm to allow a multipage xencomm_desc::address array.

PATCHNAME: various_fix_xencomm

Signed-off-by: Isaku Yamahata [EMAIL PROTECTED]

diff -r 8b6af0333d53 -r 41fde67aa85a xen/common/xencomm.c
--- a/xen/common/xencomm.c  Tue Aug 07 17:25:23 2007 +0900
+++ b/xen/common/xencomm.c  Tue Aug 07 14:32:07 2007 +0900
@@ -17,6 +17,7 @@
  *
  * Authors: Hollis Blanchard [EMAIL PROTECTED]
  *  Tristan Gingold [EMAIL PROTECTED]
+ *  Isaku Yamahata [EMAIL PROTECTED]
  */
 
 #include <xen/config.h>
@@ -34,6 +35,82 @@ static int xencomm_debug = 1; /* extreme
 #define xencomm_debug 0
 #endif
 
+/* get_page() to prevent another vcpu from freeing the page */
+static int
+xencomm_paddr_to_maddr(unsigned long paddr, unsigned long *maddr,
+                       struct page_info **page)
+{
+    *maddr = paddr_to_maddr(paddr);
+    if (*maddr == 0)
+        return -EFAULT;
+
+    *page = virt_to_page(*maddr);
+    if (get_page(*page, current->domain) == 0) {
+        if (page_get_owner(*page) != current->domain)
+            /* This page might be a page granted by another domain. */
+            panic_domain(NULL, "copy_from_guest from foreign domain\n");
+
+        /* Try again. */
+        return -EAGAIN;
+    }
+
+    return 0;
+}
+
+/* Check whether struct xencomm_desc itself crosses a page boundary. */
+static int
+xencomm_desc_cross_page_boundary(unsigned long paddr)
+{
+    unsigned long offset = paddr & ~PAGE_MASK;
+    if (offset > PAGE_SIZE - sizeof(struct xencomm_desc))
+        return 1;
+    return 0;
+}
+
+static int
+xencomm_get_address(const void *handle, struct xencomm_desc *desc, int i,
+                    unsigned long **address, struct page_info **page)
+{
+    if (i == 0)
+        *address = &desc->address[0];
+    else
+        (*address)++;
+
+    /* When crossing a page boundary, the machine address must be
+       recalculated. */
+    if (((unsigned long)*address & ~PAGE_MASK) == 0) {
+        unsigned long paddr = (unsigned long)
+            &(((const struct xencomm_desc *)handle)->address[i]);
+        put_page(*page);
+        return xencomm_paddr_to_maddr(paddr, (unsigned long *)address, page);
+    }
+    return 0;
+}
+
+static int
+xencomm_copy_chunk_from(unsigned long to, unsigned long paddr,
+                        unsigned int len)
+{
+    unsigned long maddr;
+    struct page_info *page;
+
+    while (1) {
+        int res;
+        res = xencomm_paddr_to_maddr(paddr, &maddr, &page);
+        if (res != 0) {
+            if (res == -EAGAIN)
+                continue; /* Try again. */
+            return res;
+        }
+        if (xencomm_debug)
+            printk("%lx[%d] -> %lx\n", maddr, len, to);
+
+