Oh great. I'm forwarding this mail to Ian Jackson, since I suspect
he is not on the ia64 list.

On Thu, Jan 08, 2009 at 10:54:50AM +0800, Zhang, Jingke wrote:
> Zhang, Jingke wrote:
> > Hi all,
> >     The "dom0 crash with 2048M memory" issue has been fixed. One
> > issue was found in Cset#18952 for Qcow image. 
> > 
> >     One issue was fixed:
> >     =====================
> >     [FIXED] With dom0_max_mem=2048M, dom0 would crash while booting
> > (no such issue existed before dom0 Cset#742).
> > 
> >     New issue:
> >     =====================
> >     [NEW] A Linux guest boots very slowly with a qcow image since Cset#18952
> 
> We found that this new issue is related to the remote ioemu tree.
> 
> We used the default tree, qemu-xen-unstable.git, from Config.mk. It is quite
> old (git log shows the latest commit is Dec 16).
> When using staging/qemu-xen-unstable.git (Jan 07) instead, the VTI booting
> speed is normal.
> 
> We hope the staging qemu tree will be merged into the unstable qemu tree. Thanks!
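> For reference, a rough sketch of what switching trees looks like in
> Config.mk. Both the variable name QEMU_REMOTE and the exact staging
> URL below are our assumptions and should be checked against the
> actual Config.mk before use:
>
>     # Hypothetical override: point the ioemu build at the staging
>     # qemu tree instead of the default qemu-xen-unstable.git.
>     QEMU_REMOTE ?= http://xenbits.xensource.com/git-http/staging/qemu-xen-unstable.git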
> 
> > 
> >     Old issues:
> >     =====================
> >     1. [IRQ] With the IRQBALANCE service started, dom0 will crash on RHEL5u2.
> >     2. [Windows Network] Dom0 and Win2k3_VTI with e1000 cannot ping
> > each other.
> >     3. [CPU Scheduler] When all the vcpus are taken by Dom0, SMP_VTI
> > booting is very slow during the NVRAM loading period.
> > 
> > 
> > Detail Xen/IA64 Unstable Cset #18957 Status Report
> > ============================================================
> > Test Result Summary:
> >         # total case:  17
> >         # passed case: 17
> >         # failed case:  0
> > ============================================================
> > Testing Environment:
> >         platform: Tiger4
> >         xen ia64 unstable tree: 18957
> >         dom0 Cset: 767
> >         ioemu commit: b36996b39a4128ad9396bc0cf5ec3a57a7d31aa8
> >         processor: Itanium 2
> >         logic processors number: 8 (2 processors with Dual Core)
> >         pal version: 9.68
> >         service os: RHEL5u2 IA64 SMP with 2 VCPUs
> >         vti guest os: RHEL5u2 & RHEL4u3
> >         xenU guest os: RHEL4u4
> >         xen schedule: credit
> >         gfw: open guest firmware Cset#131
> > ============================================================
> > Detailed Test Results:
> > 
> > Passed case Summary             Description
> >         Two_UP_VTI_Co           2 UP_VTI (mem=256)
> >         One_UP_VTI              1 UP_VTI (mem=256)
> >         One_UP_XenU             1 UP_xenU (mem=256)
> >         One_SMPVTI_4096M        1 VTI (vcpus=2, mem=4096M)
> >         SMPVTI_LTP              VTI (vcpus=4, mem=512) run LTP
> >         Save&Restore            Save&Restore
> >         VTI_Live-migration      Linux VTI live-migration
> >         SMPVTI_and_SMPXenU      1 VTI + 1 xenU (mem=256, vcpus=2)
> >         Two_SMPXenU_Coexist     2 xenU (mem=256, vcpus=2)
> >         SMPVTI_Network          1 VTI (mem=256, vcpus=2) and 'ping'
> >         SMPXenU_Network         1 XenU (vcpus=2) and 'ping'
> >         One_SMP_XenU            1 SMP xenU (vcpus=2)
> >         One_SMP_VTI             1 SMP VTI (vcpus=2)
> >         SMPVTI_Kernel_Build     VTI (vcpus=4) and do kernel build
> >         UPVTI_Kernel_Build      1 UP VTI and do kernel build
> >         SMPVTI_Windows          SMPVTI Windows (vcpu=2)
> >         SMPWin_SMPVTI_SMPxenU   SMPVTI Linux/Windows & XenU
> > 
> > 
> > Thanks,
> > Zhang Jingke
> > 
> > 
> > _______________________________________________
> > Xen-ia64-devel mailing list
> > Xen-ia64-devel@lists.xensource.com
> > http://lists.xensource.com/xen-ia64-devel
> 
> 
> 
> Thanks,
> Zhang Jingke
> 
> 

-- 
yamahata

