[Xen-ia64-devel] RE: The vcpus can change from one LP to another LP very frequently
Ok, let me try. Thanks!

Thanks,
Zhangjingke

-----Original Message-----
From: Keir Fraser [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 22, 2006 4:01 PM
To: Zhang, Jingke
Cc: xen-ia64-devel
Subject: Re: The vcpus can change from one LP to another LP very frequently

Can you narrow it down to a specific changeset? If it turns out to be a
credit-scheduler change (which is most likely) then you should email
Emmanuel Ackaouy [EMAIL PROTECTED].

 -- Keir

On 22/11/06 7:57 am, "Zhang, Jingke" [EMAIL PROTECTED] wrote:

> Hi Keir,
>     With #Cset12018, the vcpus of xen0 and the SMP guest can stay on one
> LP for a long time. But from #Cset12425 on (maybe an earlier Cset), the
> vcpus jump from here to there very quickly. This fast changing causes a
> lot of performance degradation in SMP_VTI.

Thanks,
Zhangjingke

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel
[Xen-ia64-devel] RE: [Xen-devel][RFC]degradation on IPF due to hypercall set irq
Keir Fraser wrote on 22 November 2006 15:59:
> On 22/11/06 7:55 am, Keir Fraser [EMAIL PROTECTED] wrote:
>> Have an array of set_level hypercall structures, and an array of
>> multicall structures. Fill them in at the point we currently do the
>> hypercall. Flush when: A) the array is already full; or B) when qemu
>> passes through its event loop. Making the arrays 16 entries large, for
>> example, will be plenty. Use the same mechanism for the notification
>> (i.e., add to the multicall array, to be flushed by qemu's main loop).
>
> To clarify, by event/main loop I mean: flush just before qemu blocks
> (otherwise the multicall can be held for unbounded time, unless we set a
> batching timeout, which hopefully we can avoid needing to do).

Seems good. Is it possible to add some file descriptors for some
interrupts to the qemu blocking select()? Then if there are any
interrupts, qemu will be woken up. In this case, we need only issue the
multicall just after qemu unblocks.

--Anthony
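A minimal sketch of the batching scheme Keir describes: queue set_level
requests in a small fixed array and flush them as one batch (one multicall)
either when the array fills or just before qemu blocks. The names here
(`BATCH_SIZE`, `pending_irq`, `flush_batch`, `queue_set_level`) are
illustrative, not the real qemu/Xen identifiers, and the hypercall itself is
simulated by a counter.

```c
#include <assert.h>

/* Case A flush: array already full.  Case B flush: the event loop calls
 * flush_batch() just before blocking in select(). */
#define BATCH_SIZE 16

struct pending_irq { int line; int level; };

static struct pending_irq batch[BATCH_SIZE];
static int batch_len;
static int flushes;             /* counts simulated multicall hypercalls */

static void flush_batch(void)
{
    if (batch_len == 0)
        return;
    /* a real implementation would issue one multicall carrying all
     * batch_len set_level entries here */
    flushes++;
    batch_len = 0;
}

static void queue_set_level(int line, int level)
{
    if (batch_len == BATCH_SIZE)    /* case A: the array is already full */
        flush_batch();
    batch[batch_len].line = line;
    batch[batch_len].level = level;
    batch_len++;
}
```

With 16-entry arrays, 17 queued irq updates cost a single hypercall in
flight plus one final flush from the event loop, instead of 17 separate
hypercalls.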
[Xen-ia64-devel] Re: [Xen-devel][RFC]degradation on IPF due to hypercall set irq
On 22/11/06 08:16, "Xu, Anthony" [EMAIL PROTECTED] wrote:
>> To clarify, by event/main loop I mean: flush just before qemu blocks
>> (otherwise the multicall can be held for unbounded time, unless we set
>> a batching timeout, which hopefully we can avoid needing to do).
>
> Seems good. Is it possible to add some file descriptors for some
> interrupts to the qemu blocking select()? Then if there are any
> interrupts, qemu will be woken up. In this case, we need only issue the
> multicall just after qemu unblocks.

I'm not sure what you mean.

 -- Keir
[Xen-ia64-devel] RE: [Xen-devel][RFC]degradation on IPF due to hypercall set irq
Keir Fraser wrote on 22 November 2006 17:22:
> On 22/11/06 08:16, "Xu, Anthony" [EMAIL PROTECTED] wrote:
>
> To clarify, by event/main loop I mean: flush just before qemu blocks
> (otherwise the multicall can be held for unbounded time, unless we set a
> batching timeout, which hopefully we can avoid needing to do).

Why can the multicall otherwise be held for unbounded time?

--Anthony
[Xen-ia64-devel] RE: [Xen-devel][RFC]degradation on IPF due to hypercall set irq
Keir Fraser wrote on 22 November 2006 17:26:
> On 22/11/06 09:24, "Xu, Anthony" [EMAIL PROTECTED] wrote:
>> Why can the multicall otherwise be held for unbounded time?
>
> Qemu only wakes up for device-model accesses. We don't know when the
> next of those will be. So we should flush multicalls before the
> potentially blocking select().

There are two threads: the qemu thread and the IDE DMA thread. When the
IDE DMA thread finishes a DMA operation, it sets an irq, but it does not
try to wake up the qemu thread. So if the qemu thread is sleeping at that
moment, the interrupt may not be delivered until the qemu thread wakes up
on its own, which may take 10 msec. So we need a mechanism for the IDE DMA
thread to wake up the qemu thread. What's your opinion?

Thanks,
Anthony
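One common way to give the IDE DMA thread a wakeup mechanism is the
"self-pipe" trick: the main loop adds the read end of a pipe to its
select() set, and the DMA thread writes one byte after raising the irq, so
select() returns immediately instead of sleeping until its timeout. This is
only a hedged sketch of the idea, not actual qemu code; `wake_fds`,
`wakeup_notify`, and `wakeup_wait` are illustrative names.

```c
#include <assert.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/select.h>

static int wake_fds[2];   /* [0] = read end watched by select(), [1] = write end */

/* Called from the IDE DMA thread after it sets the irq. */
static void wakeup_notify(void)
{
    char c = 0;
    ssize_t n = write(wake_fds[1], &c, 1);
    (void)n;   /* a sketch: real code would handle a full pipe */
}

/* Stands in for the blocking select() in qemu's main loop.
 * Returns 1 if it was woken via the pipe, 0 on timeout. */
static int wakeup_wait(struct timeval *tv)
{
    fd_set rd;
    FD_ZERO(&rd);
    FD_SET(wake_fds[0], &rd);
    int n = select(wake_fds[0] + 1, &rd, NULL, NULL, tv);
    if (n > 0 && FD_ISSET(wake_fds[0], &rd)) {
        char c;
        ssize_t r = read(wake_fds[0], &c, 1);   /* drain the notification */
        (void)r;
        return 1;
    }
    return 0;
}
```

The same pattern also answers the earlier question about adding file
descriptors for interrupts to the blocking select(): each interrupt source
just needs a descriptor that becomes readable when it fires.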
[Xen-ia64-devel] Re: [Xen-devel] [PATCH 4/5 TAKE 2] xenoprof: make linux xenoprofcodearch-generic
Accidentally the attached patch was dropped. Please apply this too.

On Wed, Nov 22, 2006 at 04:31:51PM +0900, Isaku Yamahata wrote:
> On Wed, Nov 22, 2006 at 07:01:31AM +0000, Keir Fraser wrote:
>> On 22/11/06 12:50 am, "Santos, Jose Renato G" [EMAIL PROTECTED] wrote:
>>> Good! I am happy with the patches now. I have tested your patches
>>> today on x86 32-bit Xen and found no problem. I did not test on x86_64
>>> but I do not foresee any problems. I am now able to profile the same
>>> domain in different modes (active and passive) during different
>>> profiling sessions, which was not possible before your patches. This
>>> is good! Thanks. Now it is up to Keir to apply the patches or request
>>> any changes.
>>
>> Please re-send the patches as a single tarball. I've lost track of the
>> consistent set.
>
> Attached. Please find it. Thanks.

___
Xen-devel mailing list
[EMAIL PROTECTED]
http://lists.xensource.com/xen-devel

-- 
yamahata

# HG changeset patch
# User [EMAIL PROTECTED]
# Date 1164196036 -32400
# Node ID 521c5ac07bb10335df87413cfb1a543b59f8d640
# Parent  18cd7d8869490c5662056c9c52d617c02d7c2003
[XENOPROFILE] removed unused gmaddr argument.
Signed-off-by: Isaku Yamahata [EMAIL PROTECTED]

diff -r 18cd7d886949 -r 521c5ac07bb1 xen/common/xenoprof.c
--- a/xen/common/xenoprof.c	Wed Nov 22 10:31:50 2006 +0000
+++ b/xen/common/xenoprof.c	Wed Nov 22 20:47:16 2006 +0900
@@ -128,7 +128,7 @@ xenoprof_shared_gmfn_with_guest(
     }
 }
 
-static char *alloc_xenoprof_buf(struct domain *d, int npages, uint64_t gmaddr)
+static char *alloc_xenoprof_buf(struct domain *d, int npages)
 {
     char *rawbuf;
     int order;
@@ -146,7 +146,7 @@ static char *alloc_xenoprof_buf(struct d
 }
 
 static int alloc_xenoprof_struct(
-    struct domain *d, int max_samples, int is_passive, uint64_t gmaddr)
+    struct domain *d, int max_samples, int is_passive)
 {
     struct vcpu *v;
     int nvcpu, npages, bufsize, max_bufsize;
@@ -179,8 +179,7 @@ static int alloc_xenoprof_struct(
         (max_samples - 1) * sizeof(struct event_log);
     npages = (nvcpu * bufsize - 1) / PAGE_SIZE + 1;
 
-    d->xenoprof->rawbuf = alloc_xenoprof_buf(is_passive ? dom0 : d, npages,
-                                             gmaddr);
+    d->xenoprof->rawbuf = alloc_xenoprof_buf(is_passive ? dom0 : d, npages);
 
     if ( d->xenoprof->rawbuf == NULL )
     {
@@ -368,8 +367,7 @@ static int add_passive_list(XEN_GUEST_HA
     if ( d->xenoprof == NULL )
     {
-        ret = alloc_xenoprof_struct(
-            d, passive.max_samples, 1, passive.buf_gmaddr);
+        ret = alloc_xenoprof_struct(d, passive.max_samples, 1);
         if ( ret < 0 )
         {
             put_domain(d);
@@ -509,9 +507,7 @@ static int xenoprof_op_get_buffer(XEN_GU
      */
     if ( d->xenoprof == NULL )
     {
-        ret = alloc_xenoprof_struct(
-            d, xenoprof_get_buffer.max_samples, 0,
-            xenoprof_get_buffer.buf_gmaddr);
+        ret = alloc_xenoprof_struct(d, xenoprof_get_buffer.max_samples, 0);
         if ( ret < 0 )
             return ret;
     }
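The patch leaves the buffer sizing untouched: it rounds `nvcpu * bufsize`
up to a whole number of pages with the ceiling-division idiom
`(nvcpu * bufsize - 1) / PAGE_SIZE + 1`. A standalone illustration of that
arithmetic (the `PAGE_SIZE` value of 16KB is an assumption for ia64; the
helper name is made up for this sketch):

```c
#include <assert.h>

#define PAGE_SIZE 16384   /* assumed ia64 page size for illustration */

/* How many pages the per-domain xenoprof sample buffer needs:
 * nvcpu * bufsize bytes, rounded up to whole pages. */
static int xenoprof_npages(int nvcpu, int bufsize)
{
    return (nvcpu * bufsize - 1) / PAGE_SIZE + 1;
}
```

The `- 1 ... + 1` form rounds up without a branch: an exact multiple of
`PAGE_SIZE` stays at the same page count, and one extra byte adds a page.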
[Xen-ia64-devel] [PATCH] Re: [Xen-devel] Re: [PATCH 2/2] PV framebuffer
Hi Markus,

This is a patch to make the PV framebuffer work on IA64. We also prepared
the patch for FC6 and RHEL5B2, and it is confirmed working on IA64 and
x86. In this patch an #ifdef __ia64__ exists, but it is not strictly
needed; its purpose is to avoid unnecessary error handling on x86.

Signed-off-by: Isaku Yamahata [EMAIL PROTECTED]
Signed-off-by: Masami Watanabe [EMAIL PROTECTED]
Signed-off-by: Atsushi SAKAI [EMAIL PROTECTED]

Following your patch policy, we do the memory address translation on the
Dom0 application side.

Thanks,
Atsushi SAKAI

PV framebuffer backend. Derived from http://hg.codemonkey.ws/vncfb
Extensive changes based on feedback from xen-devel.

Signed-off-by: Markus Armbruster [EMAIL PROTECTED]
---
 tools/Makefile                        |    1
 tools/python/xen/xend/XendDevices.py  |    4
 tools/python/xen/xend/server/vfbif.py |   29 +
 tools/python/xen/xm/create.py         |   17
 tools/xenfb/Makefile                  |   33 +
 tools/xenfb/sdlfb.c                   |  337 ++
 tools/xenfb/vncfb.c                   |  396 +
 tools/xenfb/xenfb.c                   |  619 ++
 tools/xenfb/xenfb.h                   |   34 +
 9 files changed, 1469 insertions(+), 1 deletion(-)

pvfb-ia64-support.patch
Description: Binary data
[Xen-ia64-devel] [RFC][PATCH] Supporting MSB-domU on ia64 Xen
Hi,

at present Xen supports only domU's with the same endianness. The ia64 cpu
is able to run in little or big endianness. Currently ia64-Xen runs little
endian (dom0 linux too) and supports little endian domU's. I want to add
support for big endian domU's on ia64-Xen. My first step is changing the
loader stuff to get big endian elf images loaded into memory. Only
tools/libxc/xc_load_elf.c is concerned. Therefore I added some swap inline
functions and macros. Please have a look and send your comments! Thanks.

Dietmar.

Signed-off-by: Dietmar Hahn [EMAIL PROTECTED]

diff -r bcd2960d6dfd tools/libxc/xc_load_elf.c
--- a/tools/libxc/xc_load_elf.c	Mon Nov 20 21:10:59 2006 -0700
+++ b/tools/libxc/xc_load_elf.c	Wed Nov 22 13:05:27 2006 +0100
@@ -6,6 +6,66 @@
 #include "xc_elf.h"
 #include <stdlib.h>
 #include <inttypes.h>
+
+#if defined(__ia64__)
+
+static int do_swap = 0;
+
+static __inline uint64_t
+bswap64(uint64_t x)
+{
+    uint64_t r;
+    asm __volatile("mux1 %0=%1,@rev" : "=r" (r) : "r" (x));
+    return r;
+}
+
+static __inline uint64_t
+xen_swap64(uint64_t x)
+{
+    if(do_swap)
+        return (bswap64(x));
+    else return x;
+}
+
+static __inline uint32_t
+xen_swap32(uint32_t x)
+{
+    if(do_swap)
+        return (bswap64(x) >> 32);
+    else return x;
+}
+
+static __inline uint16_t
+xen_swap16(uint16_t x)
+{
+    if(do_swap)
+        return (bswap64(x) >> 48);
+    else return x;
+}
+
+#define xenswap(x,sz) ( \
+        ((sz)==1)? (uint8_t)(x): \
+        ((sz)==2)? xen_swap16(x): \
+        ((sz)==4)? xen_swap32(x): \
+        ((sz)==8)? xen_swap64(x): \
+        ~0l )
+
+#define SWAP(x) xenswap((x), sizeof((x)))
+
+#define SET_SWAP   do_swap = 1;
+#define SET_NOSWAP do_swap = 0;
+
+#else /* defined(__ia64__) */
+
+#define SWAP(x) x
+#define SET_SWAP
+#define SET_NOSWAP
+
+#endif /* defined(__ia64__) */
+
 #define round_pgup(_p)    (((_p)+(PAGE_SIZE-1))&PAGE_MASK)
 #define round_pgdown(_p)  ((_p)&PAGE_MASK)
@@ -62,8 +122,8 @@ int probe_elf(const char *image,
 static inline int is_loadable_phdr(Elf_Phdr *phdr)
 {
-    return ((phdr->p_type == PT_LOAD) &&
-            ((phdr->p_flags & (PF_W|PF_X)) != 0));
+    return ((SWAP(phdr->p_type) == PT_LOAD) &&
+            ((SWAP(phdr->p_flags) & (PF_W|PF_X)) != 0));
 }
 
 /*
@@ -72,7 +132,7 @@ static inline int is_loadable_phdr(Elf_P
  */
 static int is_xen_guest_section(Elf_Shdr *shdr, const char *shstrtab)
 {
-    return strcmp(&shstrtab[shdr->sh_name], "__xen_guest") == 0;
+    return strcmp(&shstrtab[SWAP(shdr->sh_name)], "__xen_guest") == 0;
 }
 
 static const char *xen_guest_lookup(struct domain_setup_info *dsi, int type)
@@ -157,11 +217,11 @@ static int is_xen_elfnote_section(const
 {
     Elf_Note *note;
 
-    if ( shdr->sh_type != SHT_NOTE )
-        return 0;
-
-    for ( note = (Elf_Note *)(image + shdr->sh_offset);
-          note < (Elf_Note *)(image + shdr->sh_offset + shdr->sh_size);
+    if ( SWAP(shdr->sh_type) != SHT_NOTE )
+        return 0;
+
+    for ( note = (Elf_Note *)(image + SWAP(shdr->sh_offset));
+          note < (Elf_Note *)(image + SWAP(shdr->sh_offset) + SWAP(shdr->sh_size));
           note = ELFNOTE_NEXT(note) )
     {
         if ( !strncmp(ELFNOTE_NAME(note), "Xen", 4) )
@@ -254,61 +314,75 @@ static int parseelfimage(const char *ima
         return -EINVAL;
     }
 
+    SET_NOSWAP  /* Default is no byte swapping. */
+    if(ehdr->e_ident[EI_DATA] != ELFDATA)
+    {
+#if defined(__ia64__)
+        if(ehdr->e_ident[EI_DATA] != ELFDATA2MSB)
+        {
+            ERROR("Kernel not a Xen-compatible Elf image.");
+            return -EINVAL;
+        }
+        SET_SWAP  /* Switch on byte swapping. */
+#else /* defined(__ia64__) */
+        ERROR("Kernel not a Xen-compatible Elf image.");
+        return -EINVAL;
+#endif /* defined(__ia64__) */
+    }
     if ( (ehdr->e_ident[EI_CLASS] != ELFCLASS) ||
-         (ehdr->e_machine != ELFMACHINE) ||
-         (ehdr->e_ident[EI_DATA] != ELFDATA) ||
-         (ehdr->e_type != ET_EXEC) )
+         (SWAP(ehdr->e_machine) != ELFMACHINE) ||
+         (SWAP(ehdr->e_type) != ET_EXEC) )
     {
         ERROR("Kernel not a Xen-compatible Elf image.");
         return -EINVAL;
     }
 
-    if ( (ehdr->e_phoff + (ehdr->e_phnum*ehdr->e_phentsize)) > image_len )
+    if ( (SWAP(ehdr->e_phoff) + (SWAP(ehdr->e_phnum)*SWAP(ehdr->e_phentsize))) > image_len )
     {
         ERROR("ELF program headers extend beyond end of image.");
         return -EINVAL;
     }
 
-    if ( (ehdr->e_shoff + (ehdr->e_shnum*ehdr->e_shentsize)) > image_len )
+    if ( (SWAP(ehdr->e_shoff) + (SWAP(ehdr->e_shnum)*SWAP(ehdr->e_shentsize))) > image_len )
     {
         ERROR("ELF section headers extend beyond end of image.");
         return -EINVAL;
     }
 
     /* Find the section-header strings table. */
-    if ( ehdr->e_shstrndx == SHN_UNDEF )
+    if ( SWAP(ehdr->e_shstrndx) == SHN_UNDEF )
     {
         ERROR("ELF image has no section-header strings table (shstrtab).");
         return -EINVAL;
     }
-    shdr = (Elf_Shdr *)(image + ehdr->e_shoff +
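The patch derives the 16- and 32-bit swaps from one 64-bit byte reversal
(`bswap64(x) >> 32` and `>> 48`), implemented there with the ia64
`mux1 @rev` instruction. A portable C sketch of the same scheme, using
plain shifts and masks instead of the asm and without the `do_swap` gate
(names match the patch only for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Portable byte reversal; the patch implements this with the ia64
 * "mux1 %0=%1,@rev" instruction instead. */
static uint64_t bswap64(uint64_t x)
{
    x = (x >> 32) | (x << 32);                                   /* swap 32-bit halves */
    x = ((x & 0x0000ffff0000ffffULL) << 16) |
        ((x >> 16) & 0x0000ffff0000ffffULL);                     /* swap 16-bit pairs  */
    x = ((x & 0x00ff00ff00ff00ffULL) << 8) |
        ((x >> 8) & 0x00ff00ff00ff00ffULL);                      /* swap adjacent bytes */
    return x;
}

/* Narrower swaps fall out of the 64-bit one, exactly as in the patch:
 * a zero-extended value's reversed bytes land at the top, so shift
 * them back down. */
static uint32_t xen_swap32(uint32_t x) { return (uint32_t)(bswap64(x) >> 32); }
static uint16_t xen_swap16(uint16_t x) { return (uint16_t)(bswap64(x) >> 48); }
```

Deriving everything from one 64-bit reversal keeps the ia64 version to a
single instruction; the `xenswap(x, sz)` macro then dispatches on the field
width via `sizeof`.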
Re: [Xen-ia64-devel] MCA patches causing Xen to hang on sn2
SUZUKI Kazuhiro wrote:
> Hi Jes,
> Thanks for your information, but I could not find the cause of the error
> even though I checked your boot log. Please build and test in MCA debug
> mode, which is enabled by defining IA64_MCA_DEBUG_INFO in
> xen/arch/ia64/linux-xen/mca.c.

Hi Kaz,

I tried this and attached the output below. I was wondering why we seem to
allocate pages for MCA handlers on 64 processors even if we only boot 8,
but that's a detail.

> I found the sn/sn2 SGI-specific MCA code in native linux. Would you
> please tell me whether this code is related to this problem, if you
> know?

I have to admit that I know nothing about the SN2 MCA related code; I
think that's Keith Owens' speciality.

> And I think that your system will boot up if "nomca" is specified in the
> boot parameters.

This didn't make any difference :(

Cheers,
Jes

ELILO boot: x
Uncompressing Linux... done
Loading file vmlinuz-xen...done
Uncompressing... done

 [Xen ASCII-art banner]
 http://www.cl.cam.ac.uk/netos/xen
 University of Cambridge Computer Laboratory

 Xen version 3.0-unstable ([EMAIL PROTECTED]) (gcc version 4.1.0 (SUSE Linux)) Wed Nov 22 15:08:30 CET 2006
 Latest ChangeSet: Wed Nov 15 12:15:34 2006 -0700 12464:ac5330d4945a

(XEN) Xen command line:
(XEN) xen image pstart: 0x301400, xenheap pend: 0x301800
(XEN) Xen patching physical address access by offset: 0x301000
(XEN) find_memory: efi_memmap_walk returns max_page=c1efff
(XEN) Before xen_heap_start: f03014155d00
(XEN) After xen_heap_start: f0301600
(XEN) Init boot pages: 0x3003000120 -> 0x301400.
(XEN) Setting first_pg to 30040
(XEN) Init boot pages: 0x301800 -> 0x307bffc000.
(XEN) System RAM: 1935MB (1982448kB)
(XEN) size of virtual frame_table: 4880kB
(XEN) virtual machine to physical table: f3fff9f08008 size: 1008kB
(XEN) max_page: 0xc1efff
(XEN) Xen heap: 32MB (32768kB)
(XEN) ACPI: RSDP (v002 SGI) @ 0x003002a09ac0
(XEN) ACPI: XSDT (v001 SGI XSDTSN2 0x00010001 0x0001) @ 0x003002a09b00
(XEN) ACPI: MADT (v001 SGI APICSN2 0x00010001 0x0001) @ 0x003002a09b60
(XEN) ACPI: SRAT (v001 SGI SRATSN2 0x00010001 0x0001) @ 0x003002a09c00
(XEN) ACPI: SLIT (v001 SGI SLITSN2 0x00010001 0x0001) @ 0x003002a09d60
(XEN) ACPI: FADT (v003 SGI FACPSN2 0x00030001 0x0001) @ 0x003002a09e40
(XEN) ACPI: DSDT (v002 SGI DSDTSN2 0x00020001 0x0001) @ 0x
(XEN) Number of logical nodes in system = 4
(XEN) Number of memory chunks in system = 4
(XEN) SAL 2.9: SGI SN2 version 4.50
(XEN) SAL Platform features: ITC_Drift
(XEN) SAL: AP wakeup using external interrupt vector 0x12
(XEN) No logical to physical processor mapping available
(XEN) avail:0x1180c600, status:0x1000600, control:0x1180c000, vm?0x0
(XEN) No VT feature supported.
(XEN) cpu_init: current=f40f8000
(XEN) vhpt_init: vhpt paddr=0x30045f, end=0x30045f
(XEN) ia64_mca_cpu_init: __per_cpu_mca[0]=3017fe(mca_data[0]=f03017fe)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[1]=3017fd(mca_data[1]=f03017fd)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[2]=3017fc(mca_data[2]=f03017fc)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[3]=3017fb(mca_data[3]=f03017fb)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[4]=3017fa(mca_data[4]=f03017fa)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[5]=3017f9(mca_data[5]=f03017f9)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[6]=3017f8(mca_data[6]=f03017f8)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[7]=3017f7(mca_data[7]=f03017f7)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[8]=3017f6(mca_data[8]=f03017f6)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[9]=3017f5(mca_data[9]=f03017f5)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[10]=3017f4(mca_data[10]=f03017f4)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[11]=3017f3(mca_data[11]=f03017f3)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[12]=3017f2(mca_data[12]=f03017f2)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[13]=3017f1(mca_data[13]=f03017f1)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[14]=3017f0(mca_data[14]=f03017f0)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[15]=3017ef(mca_data[15]=f03017ef)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[16]=3017ee(mca_data[16]=f03017ee)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[17]=3017ed(mca_data[17]=f03017ed)
(XEN) ia64_mca_cpu_init: __per_cpu_mca[18]=3017ec(mca_data[18]=f03017ec)
(XEN) ia64_mca_cpu_init:
Re: [Xen-ia64-devel] [PATCH 0/7 TAKE 2] xenoprof for xen/ia64
On Wed, 2006-11-22 at 21:03 +0900, Isaku Yamahata wrote:
> Now the xenoprof/common changes are committed. (At least they are in the
> staging tree.) I attached the updated xenoprof/ia64 patches as a tarball
> for convenience.

Thanks Isaku, I'll start looking at them. The ia64 build on
xen-unstable.hg broke before these patches went in as well. It looks like
this changeset causes hvm_vioapic.c to fail on ia64 (again):

http://xenbits.staging.xensource.com/staging/xen-unstable.hg?cs=f555a90bcc37

Anthony, could you take a look at fixing ia64 for this changeset? These
hvm files linked from the x86 branch seem to be a common source of build
failures lately. Thanks,

Alex

-- 
Alex Williamson    HP Open Source & Linux Org.
RE: [Xen-ia64-devel][Patch] New memory initial interface for VTI
On Tue, 2006-11-21 at 18:14 +0800, Zhang, Xing Z wrote:
> Hi Alex:
> Thanks for your suggestion. I made some changes. GFW_PAGES is kept
> because it is used several times in the code.

Applied, thanks,

Alex

-- 
Alex Williamson    HP Open Source & Linux Org.
Re: [Xen-ia64-devel] [PATCH] fix paravirtualization of clone2() system call.
On Tue, 2006-11-21 at 22:38 +0900, Isaku Yamahata wrote:
> Fix paravirtualization of the clone2() system call. If audit is enabled
> or the child process is ptraced, the non-paravirtualized code path is
> executed. Thus the paravirtualized ifs is left unmodified, so the child
> process crashes after clone2(). Paravirtualize ia64_ret_from_clone() to
> fix it.

Applied, thanks,

Alex

-- 
Alex Williamson    HP Open Source & Linux Org.
Re: [Xen-ia64-devel] [Xen-ia-64-devel][PATCH] small fix a bug in vmx_send_assist_req()
On Wed, 2006-11-22 at 13:12 +0800, Zhang, Xing Z wrote:
> Fix a bug in vmx_send_assist_req(): call do_softirq() explicitly to
> enter the scheduler. If not, booting a Windows guest and a Linux guest
> at the same time will hang the system.

Applied, thanks,

Alex

-- 
Alex Williamson    HP Open Source & Linux Org.
[Xen-ia64-devel] RE: [Xen-devel][RFC]degradation on IPF due to hypercall set irq
Keir Fraser wrote on 22 November 2006 18:28:
> On 22/11/06 10:23, "Xu, Anthony" [EMAIL PROTECTED] wrote:
>> I prefer atomic access; we used it in the shared PIC. If each thread
>> flushes its multicall separately, there are some extra hypercalls.
>
> Since the threads run independently there seems little choice but for
> each to be able to flush. If the IDE DMA support had been properly
> integrated into the qemu select() event loop this would not be an issue.

Agree. We can use a pipe to integrate the IDE DMA support into select(),
but we still need to use atomic access to the shadow line.

--Anthony
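A minimal sketch of what atomic access to a shadow line can buy here: each
thread swaps the new level into a shared shadow word atomically and only
issues a (simulated) hypercall when the level actually changed, so two
threads setting the same line never need a lock and redundant updates cost
no hypercall. `shadow_line` and `set_irq_line` are illustrative names, not
the real qemu identifiers; `__sync_lock_test_and_set` is a GCC atomic
builtin (an atomic exchange on common targets).

```c
#include <assert.h>

static int shadow_line;   /* last level pushed toward the hypervisor */
static int hypercalls;    /* counts simulated set_level hypercalls */

static void set_irq_line(int level)
{
    /* Atomically swap in the new level; the returned old value tells
     * this thread whether the line actually changed. */
    int old = __sync_lock_test_and_set(&shadow_line, level);
    if (old != level)
        hypercalls++;     /* real code would queue a set_level multicall */
}
```

Because the exchange is atomic, the check-then-act race between the qemu
thread and the IDE DMA thread disappears: exactly one of two concurrent
identical updates sees a changed old value.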
[Xen-ia64-devel] Xen/IA64 Healthiness Report -Cset#12523
Xen/IA64 Healthiness Report

Several issues:
  1. Destroying XenU domains makes Xen0 crash.
  2. VTI Linux domains boot slowly if 'serial=pty' is enabled.

Testing Environment:
  Platform: Tiger4
  Processor: Itanium 2 Processor
  Logical processor number: 8 (2 processors with dual core)
  PAL version: 8.15
  Service OS: RHEL4u3 IA64 SMP with 2 VCPUs
  VTI Guest OS: RHEL4u2 & RHEL4u3
  XenU Guest OS: RHEL4u2
  Xen IA64 Unstable tree: 12523:0114b372dfae
  Xen schedule: credit
  VTI Guest Firmware: Flash.fd.2006.11.07
    MD5: 797c2d6c391a6fc6f16d267e01b382f8

Summary Test Report:
  Total cases: 16
  Passed: 15
  Failed: 0
  Crash:  1

  Case Name                Status   Case Description
  Four_SMPVTI_Coexist      pass     4 VTI (mem=256, vcpus=2)
  Two_UP_VTI_Co            pass     2 UP_VTI (mem=256)
  One_UP_VTI               pass     1 UP_VTI (mem=256)
  One_UP_XenU              pass     1 UP_xenU (mem=256)
  SMPVTI_LTP               pass     VTI (vcpus=4, mem=512) run LTP
  SMPVTI_and_SMPXenU       pass     1 VTI + 1 xenU (mem=256, vcpus=2)
  Two_SMPXenU_Coexist      crash    2 xenU (mem=256, vcpus=2)
  One_SMPVTI_4096M         pass     1 VTI (vcpus=2, mem=4096M)
  SMPVTI_Network           pass     1 VTI (mem=256, vcpu=2) and 'ping'
  SMPXenU_Network          pass     1 XenU (vcpus=2) and 'ping'
  One_SMP_XenU             pass     1 SMP xenU (vcpus=2)
  One_SMP_VTI              pass     1 SMP VTI (vcpus=2)
  SMPVTI_Kernel_Build      pass     VTI (vcpus=4) and do kernel build
  Four_SMPVTI_Coexist      pass     4 VTI domains (mem=256, vcpu=2)
  SMPVTI_Windows           pass     SMPVTI Windows (vcpu=2)
  SMPWin_SMPVTI_SMPxenU    pass     SMPVTI Linux/Windows & XenU
  UPVTI_Kernel_Build       pass     1 UP VTI and do kernel build

Notes:
  The last stable changeset:
  12014:9c649ca5c1cc

Thanks,
Zhangjingke