[Xen-ia64-devel] Xen panic if not initializing SHARED_INFO_VA
Hi,

After a month I updated to the current cset (12018:11b718eb22c9) for porting mini-os, and Xen panicked. I tracked the problem down to:

- I did not initialize SHARED_INFO_VA (see early_xen_setup() in Linux).
- The next access to the shared info area then leads to an alternate data TLB trap:

    shared_info_t *HYPERVISOR_shared_info = (shared_info_t *)XSI_BASE;
    ...
    pfn = HYPERVISOR_shared_info->arch.start_info_pfn;

My trap handler inserts a TLB entry for all region 7 addresses without any checks (and addresses 0xf... are region 7 addresses too). As a result it installs TLB entries for the shared info pages, and later the hypervisor panics - see below.

Thanks,
Dietmar

(XEN) ### domain f7d9c080: rid=8-c mp_rid=2000
(XEN) arch_domain_create: domain=f7d9c080
(XEN) DomainU EFI build up: ACPI 2.0=0x1000
(XEN) dom mem: type=13, attr=0x8008, range=[0x-0x1000) (4KB)
(XEN) dom mem: type=10, attr=0x8008, range=[0x1000-0x2000) (4KB)
(XEN) dom mem: type= 6, attr=0x8008, range=[0x2000-0x3000) (4KB)
(XEN) dom mem: type= 7, attr=0x0008, range=[0x3000-0x07ff4000) (127MB)
(XEN) dom mem: type=12, attr=0x0001, range=[0x0c00-0x1000) (64MB)
(XEN) lookup_domain_mpa: d 0xf7d9c080 id 6 current 0xf7db8000 id 0
(XEN) lookup_domain_mpa: bad mpa 0x3fff01010 (= 0x800)
(XEN) ia64_fault, vector=0x18, ifa=0xfff01010, iip=0xf40451c0, ipsr=0x121008226018, isr=0x00800030
(XEN) General Exception: IA-64 Reserved Register/Field fault (data access).
(XEN) d 0xf7d9c080 domid 6
(XEN) vcpu 0xf7db8000 vcpu 0
(XEN)
(XEN) CPU 1
(XEN) psr : 121008226018 ifs : 8713 ip : [f40451c1]
(XEN) ip is at printk+0x421/0x530
(XEN) unat: pfs : 0592 rsc : 0003
(XEN) rnat: 0009804c8a70033f bsps: f4122cc9 pr : 000182a9
(XEN) ldrs: ccv : fpsr: 0009804c8a70033f
(XEN) csd : ssd :
(XEN) b0 : f406cd80 b6 : f4063020 b7 :
(XEN) f6 : 0fffaf000 f7 : 0ffde8000
(XEN) f8 : 100028000 f9 : 100038000
(XEN) f10 : 0fffdf000 f11 : 1003e
(XEN) r1 : f4317050 r2 : f100 r3 : f7dbffe8
(XEN) r8 : 001c0561 r9 : r10 :
(XEN) r11 : 0009804c0270033f r12 : f7dbfdc0 r13 : f7db8000
(XEN) r14 : r15 : 001008226018 r16 : f7d9c080
(XEN) r17 : 4000 r18 : f4117f68 r19 : f42a4080
(XEN) r20 : 03c680808002 r21 : 03c68080 r22 : 1fff
(XEN) r23 : r24 : f7dbfe20 r25 : f7dbfe28
(XEN) r26 : r27 : r28 :
(XEN) r29 : r30 : r31 : f4125c70
(XEN)
(XEN) Call Trace:
(XEN) [f4095680] show_stack+0x80/0xa0
(XEN)     sp=f7dbf9f0 bsp=f7db9018
(XEN) [f4065ca0] ia64_fault+0x130/0x4f0
(XEN)     sp=f7dbfbc0 bsp=f7db8fd8
(XEN) [f4092680] ia64_leave_kernel+0x0/0x310
(XEN)     sp=f7dbfbc0 bsp=f7db8fd8
(XEN) [f40451c0] printk+0x420/0x530
(XEN)     sp=f7dbfdc0 bsp=f7db8f40
(XEN) [fff0] ???
(XEN)     sp=f7dbfe00 bsp=f7db8e40
(XEN)
(XEN)
(XEN) Panic on CPU 1:
(XEN) Fault in Xen.
(XEN)
(XEN)
(XEN) Reboot in five seconds...

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel
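The fix Dietmar implies has two parts: initialize SHARED_INFO_VA early (as Linux's early_xen_setup() does), and make the mini-os alternate-DTLB handler refuse to install its blanket region-7 identity mapping for the hypervisor's shared-info window. The helper below is a minimal sketch of the second part in C; the XSI_BASE/XSI_SIZE values and the function names are hypothetical stand-ins, only the region-number arithmetic (top 3 bits of an IA-64 virtual address) is meant literally.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical constants standing in for the real Xen/ia64 values. */
#define XSI_BASE  UINT64_C(0xf100000000000000)
#define XSI_SIZE  UINT64_C(0x0000000000400000)

/* An IA-64 virtual region number is the top 3 bits of the address. */
static unsigned vrn(uint64_t va)
{
    return (unsigned)(va >> 61);
}

/* A mini-os alternate-DTLB handler should only install its blanket
 * identity mapping for "ordinary" region-7 addresses; faults inside
 * the XSI/shared-info window must be left to Xen, which maps that
 * area itself once SHARED_INFO_VA has been initialized. */
static int may_install_region7_identity(uint64_t va)
{
    if (vrn(va) != 7)
        return 0;                          /* not region 7 at all */
    if (va >= XSI_BASE && va < XSI_BASE + XSI_SIZE)
        return 0;                          /* shared-info area: hands off */
    return 1;
}
```

Without the second check, the handler maps the shared-info pages itself and the hypervisor later trips over the stale mapping, which matches the panic log above.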
RE: [Xen-ia64-devel] VTi RHEL4U4 install?
Hi Alex,

I also hit the GPT issue when I didn't use a GPT-partitioned disk image file. But I can install successfully in VTi with an ioemu (QEMU) disk, and on installing again the GPT issue is gone. However, I don't understand how to install VTi Linux on a VBD disk. I only have experience using a VBD disk after the original VTi Linux boots up and the VBD disk drivers are loaded. Could you please give some instructions?

Best Regards,
Yongkang (Kangkang) 永康

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Alex Williamson
Sent: November 4, 2006 6:12
To: xen-ia64-devel
Subject: [Xen-ia64-devel] VTi RHEL4U4 install?

I was setting up a new system and trying to do fun things with physical-device VBDs and LVM-backed VBDs, but I can't seem to get a VTi RHEL4U4 install happy with either of these. Right after the installer tells me it's going to wipe the disk, I get the series of errors shown below. If I use a file-backed VBD, it seems to work fine. I regularly test a pre-installed RHEL4U4 on a physical-device-backed VBD, so I'm wondering if this is only a problem during install. It doesn't seem to matter whether or not I create a disk label on the block device prior to install either. Anyone else seeing such problems?

Thanks,
Alex

--
Alex Williamson  HP Open Source & Linux Org.
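For readers unfamiliar with the disk variants being compared in this thread, here are illustrative xm domain-config fragments. All paths and image names are invented; only the backend prefix syntax (file:, phy:, ioemu) is the point.

```python
# Illustrative xm config fragments -- paths/names are made up.

# File-backed VBD (the case Alex reports working during install):
disk = ['file:/var/images/rhel4u4.img,hda,w']

# Physical-device / LVM-backed VBDs (the cases failing mid-install):
# disk = ['phy:/dev/sdb1,hda,w']
# disk = ['phy:/dev/VolGroup00/rhel4u4,hda,w']

# ioemu (QEMU) emulated disk for a VTi/HVM guest, as in Yongkang's
# working setup:
# disk = ['file:/var/images/rhel4u4.img,ioemu:hda,w']
```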
Re: [Xen-ia64-devel] [Patch] Guest PAL_INIT support for IPI
Hi Zhang,

We were looking forward to your patches. We have a question and a comment.

Question:
How can we inject the INIT interrupt into domVTi? We made an "xm os-init" command and tested your patch, but the INIT handler of domVTi was not executed, we think. We attach two files:
 - merge.patch : the patch that we tested.
 - xenctx.log  : a cpu-context of domVTi after our test.

Comment:
We think that the TODO line is unnecessary.

@@ -404,7 +419,7 @@ static void deliver_ipi (VCPU *vcpu, uin
         break;
     case 5:  // INIT
         // TODO -- inject guest INIT      <-- This!
-        panic_domain (NULL, "Inject guest INIT!\n");
+        vmx_inject_guest_pal_init(vcpu);
         break;
     case 7:  // ExtINT
         vmx_vcpu_pend_interrupt (vcpu, 0);

Best regards,
Kan and Akio

> This patch adds guest PAL_INIT support for IPI.
> Signed-off-by: Zhang Xin [EMAIL PROTECTED]
>
> Good good study, day day up! ^_^
> -Wing (zhang xin)
> OTC, Intel Corporation
RE: [Xen-ia64-devel] [Patch] Guest PAL_INIT support for IPI
I still have a question: How does the Windows OS_INIT handler behave? How can I confirm whether PAL_INIT executed successfully in Windows? (In Linux I can see a lot of dump info on the screen.)

Good good study, day day up! ^_^
-Wing (zhang xin)
OTC, Intel Corporation

-----Original Message-----
From: Masaki Kanno [mailto:[EMAIL PROTECTED]
Sent: November 6, 2006 18:18
To: Zhang, Xing Z; xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel] [Patch] Guest PAL_INIT support for IPI
[Xen-ia64-devel] RE: [Xen-devel] struct xen_domctl_getmemlist's start_pfn member
From: Jan Beulich
Sent: November 6, 2006 21:17
> What is the purpose of this member? Neither does it seem to get set
> (e.g. xc_get_pfn_list), nor does it seem to get read. Thanks, Jan

Xen/ia64 is the only user of this member, with a historical requirement to query a specific frame list starting from a given point. One reason I remember is that xen/ia64 once adopted a policy of allocating machine frames dynamically upon a lookup request via the above getmemlist, which allowed the control panel to query-then-allocate a specific region. But this requirement disappeared a long time ago. I'm not sure whether the latest xen/ia64 still has another usage model; if not, the member had better be removed.

Thanks,
Kevin
[Xen-ia64-devel] [PATCH 1/2] blktap: preliminary clean up
1 / 2

# HG changeset patch
# User [EMAIL PROTECTED]
# Date 1162819834 -32400
# Node ID 3eef92a12f219d048a45a54014d67bab0652734f
# Parent  11b718eb22c996868bed5b18dcc08081ad27d0be
Preliminary clean up of ia64 mm.c for blktap dom0 mount support.

PATCHNAME: clean_up_for_blktap_dom0_mount

Signed-off-by: Isaku Yamahata [EMAIL PROTECTED]

diff -r 11b718eb22c9 -r 3eef92a12f21 xen/arch/ia64/xen/mm.c
--- a/xen/arch/ia64/xen/mm.c    Thu Nov 02 12:43:04 2006 -0700
+++ b/xen/arch/ia64/xen/mm.c    Mon Nov 06 22:30:34 2006 +0900
@@ -36,7 +36,7 @@
  *
  * operations on this structure:
  *  - global tlb purge
- *      vcpu_ptc_g(), vcpu_ptc_ga() and domain_page_flush()
+ *      vcpu_ptc_g(), vcpu_ptc_ga() and domain_page_flush_and_put()
  *      I.e. callers of domain_flush_vtlb_range() and domain_flush_vtlb_all()
  *      These functions invalidate VHPT entry and vcpu->arch.{i, d}tlb
  *
@@ -179,8 +179,9 @@
 #include <asm/page.h>
 #include <public/memory.h>

-static void domain_page_flush(struct domain* d, unsigned long mpaddr,
-                              volatile pte_t* ptep, pte_t old_pte);
+static void domain_page_flush_and_put(struct domain* d, unsigned long mpaddr,
+                                      volatile pte_t* ptep, pte_t old_pte,
+                                      struct page_info* page);

 extern unsigned long ia64_iobase;
@@ -1036,6 +1037,25 @@ assign_domain_mach_page(struct domain *d
     return mpaddr;
 }

+static void
+domain_put_page(struct domain* d, unsigned long mpaddr,
+                volatile pte_t* ptep, pte_t old_pte, int clear_PGC_allocate)
+{
+    unsigned long mfn = pte_pfn(old_pte);
+    struct page_info* page = mfn_to_page(mfn);
+
+    if (page_get_owner(page) == d ||
+        page_get_owner(page) == NULL) {
+        BUG_ON(get_gpfn_from_mfn(mfn) != (mpaddr >> PAGE_SHIFT));
+        set_gpfn_from_mfn(mfn, INVALID_M2P_ENTRY);
+    }
+
+    if (clear_PGC_allocate) {
+        try_to_clear_PGC_allocate(d, page);
+    }
+    domain_page_flush_and_put(d, mpaddr, ptep, old_pte, page);
+}
+
 // caller must get_page(mfn_to_page(mfn)) before call.
 // caller must call set_gpfn_from_mfn() before call if necessary.
 // because set_gpfn_from_mfn() result must be visible before pte xchg
@@ -1066,18 +1086,7 @@ assign_domain_page_replace(struct domain
     //   => create_host_mapping()
     //      => assign_domain_page_replace()
     if (mfn != old_mfn) {
-        struct page_info* old_page = mfn_to_page(old_mfn);
-
-        if (page_get_owner(old_page) == d ||
-            page_get_owner(old_page) == NULL) {
-            BUG_ON(get_gpfn_from_mfn(old_mfn) != (mpaddr >> PAGE_SHIFT));
-            set_gpfn_from_mfn(old_mfn, INVALID_M2P_ENTRY);
-        }
-
-        domain_page_flush(d, mpaddr, pte, old_pte);
-
-        try_to_clear_PGC_allocate(d, old_page);
-        put_page(old_page);
+        domain_put_page(d, mpaddr, pte, old_pte, 1);
     }
 }
 perfc_incrc(assign_domain_page_replace);
@@ -1139,8 +1148,7 @@ assign_domain_page_cmpxchg_rel(struct do
     set_gpfn_from_mfn(old_mfn, INVALID_M2P_ENTRY);

-    domain_page_flush(d, mpaddr, pte, old_pte);
-    put_page(old_page);
+    domain_page_flush_and_put(d, mpaddr, pte, old_pte, old_page);

     perfc_incrc(assign_domain_pge_cmpxchg_rel);
     return 0;
 }
@@ -1197,23 +1205,12 @@ zap_domain_page_one(struct domain *d, un
     page = mfn_to_page(mfn);
     BUG_ON((page->count_info & PGC_count_mask) == 0);

-    if (page_get_owner(page) == d ||
-        page_get_owner(page) == NULL) {
-        // exchange_memory() calls
-        //     steal_page()
-        //         page owner is set to NULL
-        //     guest_physmap_remove_page()
-        //         zap_domain_page_one()
-        BUG_ON(get_gpfn_from_mfn(mfn) != (mpaddr >> PAGE_SHIFT));
-        set_gpfn_from_mfn(mfn, INVALID_M2P_ENTRY);
-    }
-
-    domain_page_flush(d, mpaddr, pte, old_pte);
-
-    if (page_get_owner(page) != NULL) {
-        try_to_clear_PGC_allocate(d, page);
-    }
-    put_page(page);
+    // exchange_memory() calls
+    //     steal_page()
+    //         page owner is set to NULL
+    //     guest_physmap_remove_page()
+    //         zap_domain_page_one()
+    domain_put_page(d, mpaddr, pte, old_pte, (page_get_owner(page) != NULL));
     perfc_incrc(zap_dcomain_page_one);
 }
@@ -1439,12 +1436,13 @@ destroy_grant_host_mapping(unsigned long
                            unsigned long mfn, unsigned int flags)
 {
     struct domain* d = current->domain;
+    unsigned long gpfn = gpaddr >> PAGE_SHIFT;
     volatile pte_t* pte;
     unsigned long cur_arflags;
     pte_t cur_pte;
     pte_t new_pte;
     pte_t old_pte;
-    struct page_info* page;
+    struct page_info* page = mfn_to_page(mfn);

     if (flags & (GNTMAP_application_map | GNTMAP_contains_pte)) {
         DPRINTK("%s: flags 0x%x\n", __func__, flags);
@@ -1460,7 +1458,8 @@ destroy_grant_host_mapping(unsigned long
 again:
     cur_arflags = pte_val(*pte) & ~_PAGE_PPN_MASK;
     cur_pte =
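The core of patch 1/2 is folding three copies of the teardown sequence into one helper, domain_put_page(): undo the M2P entry, optionally clear the allocation flag, then flush-and-put exactly once. The toy model below illustrates that contract; the struct, the refcount fields, and the flush counter are simplified stand-ins for the real Xen structures, not the actual implementation.

```c
#include <assert.h>

/* Toy model of the sequence patch 1/2 centralizes in domain_put_page(). */
struct page { int refcount; int allocated; long gpfn; };

#define INVALID_M2P (-1L)

static int flushed;                    /* counts flush-and-put calls */

static void domain_page_flush_and_put(struct page *pg)
{
    flushed++;                         /* the "flush" half (TLB purge) */
    pg->refcount--;                    /* the "put" half (drop reference) */
}

static void domain_put_page(struct page *pg, long mpfn, int clear_allocated)
{
    assert(pg->gpfn == mpfn);          /* mirrors the BUG_ON in the patch */
    pg->gpfn = INVALID_M2P;            /* invalidate the M2P entry */
    if (clear_allocated)
        pg->allocated = 0;             /* try_to_clear_PGC_allocate() */
    domain_page_flush_and_put(pg);     /* flush and put exactly once */
}
```

Having one helper means the three former call sites (page replace, cmpxchg-release, and zap) can no longer disagree about the order of invalidation, flush, and put.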
[Xen-ia64-devel] [PATCH 2/2] blktap: dom0 mount support
2 / 2

# HG changeset patch
# User [EMAIL PROTECTED]
# Date 1162819837 -32400
# Node ID 2ea5b636612322b6659ecc3481cdbf17e0eb0559
# Parent  3eef92a12f219d048a45a54014d67bab0652734f
Support Xen/IA64 self-grant-table-page mapping.

Before changeset 10677:2937703f0ed0 of xen-unstable.hg, mapping a page granted by the domain itself was prohibited. It is now allowed so that dom0 can mount a blktap image. This patch is necessary to support blktap on Xen/IA64.

PATCHNAME: xen_ia64_self_grant_table_page_mapping

Signed-off-by: Isaku Yamahata [EMAIL PROTECTED]

diff -r 3eef92a12f21 -r 2ea5b6366123 xen/arch/ia64/xen/mm.c
--- a/xen/arch/ia64/xen/mm.c    Mon Nov 06 22:30:34 2006 +0900
+++ b/xen/arch/ia64/xen/mm.c    Mon Nov 06 22:30:37 2006 +0900
@@ -763,7 +763,8 @@ __assign_new_domain_page(struct domain *
     // because set_pte_rel() has release semantics
     set_pte_rel(pte,
                 pfn_pte(maddr >> PAGE_SHIFT,
-                        __pgprot(__DIRTY_BITS | _PAGE_PL_2 | _PAGE_AR_RWX)));
+                        __pgprot(_PAGE_PGC_ALLOCATED | __DIRTY_BITS |
+                                 _PAGE_PL_2 | _PAGE_AR_RWX)));
     smp_mb();

     return p;
@@ -805,6 +806,7 @@ flags_to_prot (unsigned long flags)
 #ifdef CONFIG_XEN_IA64_TLB_TRACK
     res |= flags & ASSIGN_tlb_track ? _PAGE_TLB_TRACKING: 0;
 #endif
+    res |= flags & ASSIGN_pgc_allocated ? _PAGE_PGC_ALLOCATED: 0;

     return res;
 }
@@ -864,7 +866,8 @@ assign_domain_page(struct domain *d,
     set_gpfn_from_mfn(physaddr >> PAGE_SHIFT, mpaddr >> PAGE_SHIFT);
     // because __assign_domain_page() uses set_pte_rel() which has
     // release semantics, smp_mb() isn't needed.
-    (void)__assign_domain_page(d, mpaddr, physaddr, ASSIGN_writable);
+    (void)__assign_domain_page(d, mpaddr, physaddr,
+                               ASSIGN_writable | ASSIGN_pgc_allocated);
 }

 int
@@ -1033,6 +1036,7 @@ assign_domain_mach_page(struct domain *d
     unsigned long mpaddr, unsigned long size,
     unsigned long flags)
 {
+    BUG_ON(flags & ASSIGN_pgc_allocated);
     assign_domain_same_page(d, mpaddr, size, flags);
     return mpaddr;
 }
@@ -1044,14 +1048,16 @@ domain_put_page(struct domain* d, unsign
     unsigned long mfn = pte_pfn(old_pte);
     struct page_info* page = mfn_to_page(mfn);

-    if (page_get_owner(page) == d ||
-        page_get_owner(page) == NULL) {
-        BUG_ON(get_gpfn_from_mfn(mfn) != (mpaddr >> PAGE_SHIFT));
-        set_gpfn_from_mfn(mfn, INVALID_M2P_ENTRY);
-    }
-
-    if (clear_PGC_allocate) {
-        try_to_clear_PGC_allocate(d, page);
+    if (pte_pgc_allocated(old_pte)) {
+        if (page_get_owner(page) == d ||
+            page_get_owner(page) == NULL) {
+            BUG_ON(get_gpfn_from_mfn(mfn) != (mpaddr >> PAGE_SHIFT));
+            set_gpfn_from_mfn(mfn, INVALID_M2P_ENTRY);
+        } else
+            BUG();
+
+        if (clear_PGC_allocate)
+            try_to_clear_PGC_allocate(d, page);
     }
     domain_page_flush_and_put(d, mpaddr, ptep, old_pte, page);
 }
@@ -1142,6 +1148,7 @@ assign_domain_page_cmpxchg_rel(struct do
     }
     BUG_ON(!pte_mem(old_pte));
+    BUG_ON(!pte_pgc_allocated(old_pte));
     BUG_ON(page_get_owner(old_page) != d);
     BUG_ON(get_gpfn_from_mfn(old_mfn) != (mpaddr >> PAGE_SHIFT));
     BUG_ON(old_mfn == new_mfn);
@@ -1236,7 +1243,7 @@ dom0vp_add_physmap(struct domain* d, uns
     struct domain* rd;

     /* Not allowed by a domain. */
-    if (flags & ASSIGN_nocache)
+    if (flags & (ASSIGN_nocache | ASSIGN_pgc_allocated))
         return -EINVAL;

     rd = find_domain_by_id(domid);
@@ -1418,8 +1425,6 @@ create_grant_host_mapping(unsigned long
     page = mfn_to_page(mfn);
     ret = get_page(page, page_get_owner(page));
     BUG_ON(ret == 0);
-    BUG_ON(page_get_owner(mfn_to_page(mfn)) == d &&
-           get_gpfn_from_mfn(mfn) != INVALID_M2P_ENTRY);
     assign_domain_page_replace(d, gpaddr, mfn,
 #ifdef CONFIG_XEN_IA64_TLB_TRACK
                                ASSIGN_tlb_track |
@@ -1541,7 +1546,8 @@ steal_page(struct domain *d, struct page
     // has release semantics.
     ret = assign_domain_page_cmpxchg_rel(d, gpfn << PAGE_SHIFT, page, new,
-                                         ASSIGN_writable);
+                                         ASSIGN_writable |
+                                         ASSIGN_pgc_allocated);
     if (ret < 0) {
         DPRINTK("assign_domain_page_cmpxchg_rel failed %d\n", ret);
         set_gpfn_from_mfn(new_mfn, INVALID_M2P_ENTRY);
@@ -1635,7 +1641,8 @@ guest_physmap_add_page(struct domain *d,
     BUG_ON(ret == 0);
     set_gpfn_from_mfn(mfn, gpfn);
     smp_mb();
-    assign_domain_page_replace(d, gpfn << PAGE_SHIFT, mfn, ASSIGN_writable);
+    assign_domain_page_replace(d, gpfn << PAGE_SHIFT, mfn,
+                               ASSIGN_writable | ASSIGN_pgc_allocated);
     //BUG_ON(mfn != ((lookup_domain_mpa(d, gpfn << PAGE_SHIFT) & _PFN_MASK)
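The recurring pattern in patch 2/2 is the flags_to_prot() change: each ASSIGN_* request bit is translated into a _PAGE_* pte bit, and the new ASSIGN_pgc_allocated bit records in the pte itself whether the page came from the domain heap (so teardown knows whether a put_page() is owed). The sketch below models only that mapping pattern; the bit values are invented for the model and do not match the real Xen/ia64 definitions.

```c
#include <assert.h>

/* Invented bit values -- only the mapping pattern mirrors the patch. */
#define ASSIGN_writable       (1UL << 0)
#define ASSIGN_nocache        (1UL << 1)
#define ASSIGN_pgc_allocated  (1UL << 2)

#define _PAGE_AR_RW           (1UL << 8)
#define _PAGE_MA_UC           (1UL << 9)
#define _PAGE_PGC_ALLOCATED   (1UL << 10)

static unsigned long flags_to_prot(unsigned long flags)
{
    unsigned long res = 0;

    res |= flags & ASSIGN_writable ? _PAGE_AR_RW : 0;
    res |= flags & ASSIGN_nocache  ? _PAGE_MA_UC : 0;
    /* new in patch 2/2: remember "this page is a domheap allocation"
     * in the pte, so domain_put_page() can decide whether to drop the
     * PGC_allocated reference on teardown */
    res |= flags & ASSIGN_pgc_allocated ? _PAGE_PGC_ALLOCATED : 0;

    return res;
}
```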
Re: [Xen-ia64-devel] [PATCH 0/3] blktap: various clean up and ia64 support
Hi Keir,

Is something keeping Isaku's blktap/ia64 patches from xen-unstable? We need these in RHEL5, but I'd like to see them merged into xen-unstable before patching the distro.

Thanks,
Aron
Re: [Xen-ia64-devel] [PATCH 0/3] blktap: various clean up and ia64 support
On 6/11/06 9:21 pm, Aron Griffis [EMAIL PROTECTED] wrote:
> Is something keeping Isaku's blktap/ia64 patches from xen-unstable?

I'm not really the blktap maintainer. Andy or Julian should probably look at the patches.

 -- Keir
Re: [Xen-ia64-devel] Modify to introduce delayed p2m table destruction
On Mon, Nov 06, 2006 at 03:15:22PM +0900, [EMAIL PROTECTED] wrote:
> > However during shadow_teardown_xxx() in your patch another domain
> > might access the p2m table and struct page_info. The page reference
> > convention must be kept right during them.
>
> Yes, it might access them. In the past I thought so too, but after the
> discussion about delayed p2m table destruction of shadow2, I was
> finally satisfied that get_page avoids memory corruption.
>
> > You might understand the x86 shadow code. However you must understand
> > the IA64 code too. It can be effective to understand the IA64 code by
> > analogy with the x86 shadow code, but they're different.
>
> Hmm, I don't understand the difference. Can you give me a suggestion
> about the difference?

The Xen/IA64 p2m table is lockless, while the Xen/x86 shadow p2m table is protected by shadow_lock()/shadow_unlock(). This is a burden on the Xen/IA64 p2m maintenance, so we must be very careful when modifying it. Especially we must be aware of memory ordering; this is the reason why "volatile" is sprinkled around.

In the Xen/IA64 p2m case the page reference count must be increased before you add the new entry, and decreased only after removing the entry. The only exception is relinquish_pte(), because it assumes that the p2m itself is freed. (But this assumption is wrong.) The Xen/x86 shadow p2m, however, doesn't care about the page reference count.

The blktap patches which I sent out last night impose one more rule, related to the PGC_allocated flag. The patch introduces _PAGE_PGC_ALLOCATED. When a p2m entry is removed and the _PAGE_PGC_ALLOCATED bit is set, something like

    if (pte_pgc_allocated(old_pte)) {
        if (test_and_clear(_PGC_allocated, &page->count_info))
            put_page(page);
    }

must be done. domain_put_page() takes care of it.

Thanks.
--
yamahata
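The ordering rule Isaku states for the lockless p2m (take the reference before publishing the entry, drop it only after the entry is gone, and drop the PGC_allocated reference at most once) can be modeled in a few lines. Everything below is a simplified single-threaded stand-in for the real Xen structures, intended only to show the ordering contract, not the implementation.

```c
#include <assert.h>
#include <stddef.h>

/* Toy single-threaded model of the Xen/IA64 p2m reference rules. */
struct page  { int count; int pgc_allocated; };
struct p2m_e { struct page *pg; int pte_pgc_allocated; };

static void p2m_insert(struct p2m_e *e, struct page *pg, int from_domheap)
{
    pg->count++;                       /* get_page() FIRST ... */
    e->pg = pg;                        /* ... then publish the entry */
    e->pte_pgc_allocated = from_domheap;
}

static void p2m_remove(struct p2m_e *e)
{
    struct page *pg = e->pg;
    int was_allocated = e->pte_pgc_allocated;

    e->pg = NULL;                      /* unpublish the entry FIRST ... */
    if (was_allocated && pg->pgc_allocated) {
        pg->pgc_allocated = 0;         /* test-and-clear: at most once */
        pg->count--;                   /* drop the PGC_allocated ref */
    }
    pg->count--;                       /* ... then put_page() */
}
```

In the real lockless table the insert/remove of the entry is an atomic pte exchange and the "first/then" ordering is enforced with release semantics and memory barriers; getting it backwards is exactly how a concurrent reader could use a page whose reference was already dropped.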
Re: [Xen-ia64-devel] Modify to introduce delayed p2m table destruction
Thank you for your suggestion. We'll study it.

Thanks,
- Tsunehisa Doi
[Xen-ia64-devel] [PATCH] Modify vmx fault handler
Hi all,

This patch fixes the vmx fault handler to set the fault vector number in r19. r19 is used to display a fault message in dispatch_to_fault_handler().

Signed-off-by: Akio Takebe [EMAIL PROTECTED]
Signed-off-by: Kazuhiro Suzuki [EMAIL PROTECTED]

Thanks,
KAZ

diff -r 11b718eb22c9 xen/arch/ia64/vmx/vmx_ivt.S
--- a/xen/arch/ia64/vmx/vmx_ivt.S   Thu Nov 02 12:43:04 2006 -0700
+++ b/xen/arch/ia64/vmx/vmx_ivt.S   Mon Nov 06 10:17:53 2006 +0900
@@ -95,6 +95,7 @@
 #define VMX_FAULT(n)    \
 vmx_fault_##n:;         \
+    mov r19=n;;         \
     br.sptk.many dispatch_to_fault_handler; \
     ;;                  \
@@ -106,7 +107,7 @@ vmx_fault_##n:; \
     ;;                  \
     tbit.z p6,p7=r29,IA64_PSR_VM_BIT;       \
(p7) br.sptk.many vmx_dispatch_reflection;  \
-    VMX_FAULT(n);       \
+    br.sptk.many dispatch_to_fault_handler; \
 GLOBAL_ENTRY(vmx_panic)
RE: [Xen-ia64-devel] [Patch] Guest PAL_INIT support for IPI
Hi, Wing

I'm sorry. Our patch modifies code in both the hypervisor and the dom0 kernel. So after installing xen.gz and the dom0 kernel, please reboot the system with them.

Best Regards,
Akio Takebe

> Oh, I forgot to restart xend last time. But this time I get another
> error. See the attachment.
>
> Good good study, day day up! ^_^
> -Wing (zhang xin)
> OTC, Intel Corporation
>
> -----Original Message-----
> From: Akio Takebe [mailto:[EMAIL PROTECTED]
> Sent: November 7, 2006 14:55
> To: Zhang, Xing Z; Masaki Kanno; xen-ia64-devel@lists.xensource.com
> Cc: Akio Takebe
> Subject: RE: [Xen-ia64-devel] [Patch] Guest PAL_INIT support for IPI
>
> Hi, Wing
>
> Could you try it again after "xend stop; xend start"?
>
> Best Regards,
> Akio Takebe
>
>> The new GFW will be released soon, I think it's today. I used your
>> merge.patch but it failed. There seem to be some problems in the
>> Python code. The attachment is a picture showing the issue.
>>
>> -Wing (zhang xin)
>>
>> -----Original Message-----
>> From: Akio Takebe [mailto:[EMAIL PROTECTED]
>> Sent: November 7, 2006 13:39
>> To: Zhang, Xing Z; Masaki Kanno; xen-ia64-devel@lists.xensource.com
>> Subject: RE: [Xen-ia64-devel] [Patch] Guest PAL_INIT support for IPI
>>
>> Hi, Wing
>>
>> I don't know, but if you give me the newer GFW, we'll test "xm os-init"
>> on Windows in domVTi.
>>
>> Best Regards,
>> Akio Takebe
>>
>>> I still have a question: How does the Windows OS_INIT handler behave?
>>> How can I confirm whether PAL_INIT executed successfully in Windows?
>>> (In Linux I can see a lot of dump info on the screen.)
>>>
>>> -Wing (zhang xin)