Re: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5][IA64][HVM] Windows crashdump support

2007-01-24 Thread Masaki Kanno

> On Wed, Jan 24, 2007 at 02:27:42PM +0900, Masaki Kanno wrote:
> [...]
> > Hi Tristan and Keir and all,
> > 
> > Thanks for your idea and comments.
> > I will remake and resend these patches in the following command syntax.
> > 
> >   xm trigger Domain VCPU init|reset|nmi
> 
> xm trigger Domain init|reset|nmi [VCPU]
> is slightly better.  By default VCPU is 0.

Okay, I will do that.
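
For illustration, the revised syntax would be used like this (the domain
name and VCPU number here are made up):

  xm trigger myDomain nmi      # send an NMI to VCPU 0 (the default)
  xm trigger myDomain init 1   # send INIT to VCPU 1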

 Kan





Re: [Patch][RFC] fix PAL_HALT ( is Re: [Xen-ia64-devel] [RFC] dump core is failed for PAL_HALT)

2007-01-24 Thread tgingold
Quoting Isaku Yamahata [EMAIL PROTECTED]:

> On Wed, Jan 24, 2007 at 11:43:37AM +0900, Akio Takebe wrote:
> [...]
> According to SDM vol2 11.9, PAL_HALT places the cpu in a low power state.
Correct.

> So the current behaviour that xen/ia64 shuts down unconditionally is
> wrong.
Yes, but that's the code in linux/ia64.
Why doesn't linux/ia64 call the shutdown EFI runtime service?  I don't know.
Maybe Alex knows the answer.

> The CPU hot-unplug routine also calls cpu_halt(). In that case,
> only the targeted cpu should be halted. We don't want a domain shutdown.
If the last vcpu calls PAL_HALT, the domain can be safely shut down.

> Probably machine_reboot() and machine_power_off() need
> modification (paravirtualization) to call the shutdown hypercall.
I think all the paravirtualization can be done through EFI+PAL calls.

Tristan.



Re: [Xen-ia64-devel] [PATCH] fix oops message from timer_interrupt on VTI domain

2007-01-24 Thread Atsushi SAKAI
Hi, Alex and Aron,

Thank you for your various comments.
I attach the patch, which reflects this discussion.
Please edit the comment lines in the patch as you like.
I changed the last line of the document from the previous mail.

Thanks
Atsushi SAKAI


==

This patch intends to fix the oops message from timer_interrupt on a VTI
domain.
This problem occurred when we tested the PV-on-HVM driver with ltp-20061121.
A typical message is shown below.

ltp Now Running...( Exception mode )
dom=domVTI
1 times
Unable to find swap-space signature
Oops: timer tick before it's due (itc=ed98bb5849,itm=ed98bb5849)
Oops: timer tick before it's due (itc=f20bca8ca3,itm=f20bca8ca3)
Oops: timer tick before it's due (itc=f4ea4e2b32,itm=f4ea4e2b32)
mmap1(7392): unaligned access to 0x6fffb634, ip=0x2004fad0
mmap1(7392): unaligned access to 0x6fffb634, ip=0x2004fad0
ltp End


These oops messages are generated
because timer_interrupt checks the condition itc > itm.
Currently the Xen hypervisor returns
max(current_itc, vtm->last_itc).
On some occasions, the oops message appeared when the value of vtm->last_itc
was used as the ia64_get_itc() return value, because vtm->last_itc is the
same as itm.
To fix this issue, the return value needs something like a +1.

But ia64_get_itc() is handled at [EMAIL PROTECTED]
and it uses the same logic as now_itc()@vlsapic.c,
and these routines share vtm->last_itc.
So I fix this problem by adding +1 at the caller of update_last_itc.
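
For reference, the check being discussed looks roughly like this (a
paraphrase of the Linux/ia64 timer_interrupt logic, not verbatim kernel
code).  With the +1 above, the guest-visible itc becomes strictly greater
than itm, so the message is no longer printed:

    /* Paraphrased: the tick is treated as valid only when the ITC has
     * strictly passed the match register (itc > itm). */
    unsigned long itc = ia64_get_itc();
    if (!(itc > new_itm))
            printk(KERN_ERR "Oops: timer tick before it's due "
                   "(itc=%lx,itm=%lx)\n", itc, new_itm);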

Signed-off-by: Atsushi SAKAI [EMAIL PROTECTED]

==


Alex Williamson [EMAIL PROTECTED] wrote:

> On Tue, 2007-01-23 at 16:44 -0500, Aron Griffis wrote:
> > Atsushi SAKAI wrote:  [Mon Jan 22 2007, 07:36:55PM EST]
> > > Oops: timer tick before it's due (itc=ed98bb5849,itm=ed98bb5849)
> > > Oops: timer tick before it's due (itc=f20bca8ca3,itm=f20bca8ca3)
> > > Oops: timer tick before it's due (itc=f4ea4e2b32,itm=f4ea4e2b32)
> > ...
> > > 
> > > These oops messages are generated
> > > because timer_interrupt checks the condition itc > itm.
> > 
> > Is that the right comparison though?  itc isn't guaranteed to return
> > different values on subsequent fetches, and the interrupt is generated
> > when itc == itm, right?  So shouldn't the condition be itc >= itm?
> 
> Good point.  With the slower ITC on a Montecito system, I don't know
> if anything would prevent you hitting the interrupt handler when itc ==
> itm.  Perhaps a Montecito fix for Linux-ia64 to use time_after_eq()
> would eliminate this problem.
> 
> 	Alex
> -- 
> Alex Williamson   HP Open Source & Linux Org.
 


fix-vti-oops-take2.patch
Description: Binary data

Re: [Patch][RFC] fix PAL_HALT ( is Re: [Xen-ia64-devel] [RFC] dump core is failed for PAL_HALT)

2007-01-24 Thread Akio Takebe
Hi, Isaku and Tristan,

Thank you for your comments.

> > Probably machine_reboot() and machine_power_off() need
> > modification (paravirtualization) to call the shutdown hypercall.
> I think all the paravirtualization can be done through EFI+PAL calls.
I fixed it by using a notifier call, and I can shut down domU.
But I haven't tried domVTi yet.
Must we care about the shutdown process of OSes on domVTi?
What do you think?

diff -r dc4a69e66104 linux-2.6-xen-sparse/arch/ia64/kernel/setup.c
--- a/linux-2.6-xen-sparse/arch/ia64/kernel/setup.c Fri Jan 19 13:27:52 2007 +0900
+++ b/linux-2.6-xen-sparse/arch/ia64/kernel/setup.c Wed Jan 24 19:27:09 2007 +0900
@@ -64,6 +64,7 @@
 #ifdef CONFIG_XEN
 #include <asm/hypervisor.h>
 #include <asm/xen/xencomm.h>
+#include <asm/kdebug.h>
 #endif
 #include <linux/dma-mapping.h>
 
@@ -95,6 +96,19 @@ xen_panic_event(struct notifier_block *t
 
 static struct notifier_block xen_panic_block = {
         xen_panic_event, NULL, 0 /* try to go last */
+};
+
+static int
+xen_poweroff_event(struct notifier_block *this, unsigned long event, void *ptr)
+{
+        if (event == DIE_MACHINE_HALT)
+                HYPERVISOR_shutdown(SHUTDOWN_poweroff);
+
+        return NOTIFY_DONE;
+}
+
+static struct notifier_block xen_poweroff_block = {
+        xen_poweroff_event, NULL, 0 /* try to go last */
 };
 #endif
 
@@ -448,6 +462,7 @@ setup_arch (char **cmdline_p)
         setup_xen_features();
         /* Register a call for panic conditions. */
         notifier_chain_register(&panic_notifier_list, &xen_panic_block);
+        notifier_chain_register(&ia64die_chain, &xen_poweroff_block);
         }
 #endif


Best Regards,

Akio Takebe




RE: [Xen-ia64-devel] [PATCH] xen might misunderstand a normal page as I/O page

2007-01-24 Thread Kouya SHIMURA
Hi Anthony,

The guest OS can use the ig field {63:53} in the VHPT short/long format.
Windows actually seems to use this field and sometimes sets bit {60}.
Xen/IPF also uses bit {60} of the PTE as VTLB_PTE_IO_BIT,
so a misunderstanding may happen.

When the first TLB miss happens, the pteval in the guest VHPT propagates as
follows:

In vmx_hpw_miss(),
 => } else if (type == DSIDE_TLB) {
   =>   if (!guest_vhpt_lookup(vhpt_adr, &pteval)) {
 =>         thash_purge_and_insert(v, pteval, itir, vadr, DSIDE_TLB);

In thash_purge_and_insert(),
 => if (VMX_DOMAIN(v)) {
   =>   if (ps == mrr.ps) {
 =>         if (!(pte & VTLB_PTE_IO)) {   <== this condition fails
            else {
                vtlb_insert(v, pte, itir, ifa);
                vcpu_quick_region_set(PSCBX(v, tc_regions), ifa);
            }

After all, this TLB miss wastes a vtlb entry and reproduces the TLB miss
again.  When the second TLB miss happens,

In vmx_hpw_miss(),
 => if ((data = vtlb_lookup(v, vadr, type)) != 0) {
   =>   if (v->domain != dom0 && type == DSIDE_TLB) {
 =>         if (__gpfn_is_io(v->domain, gppa >> PAGE_SHIFT)) {   <== fails;
            this resolves the misunderstanding
 =>         thash_vhpt_insert(v, data->page_flags, data->itir, vadr, type);

My patch just masks the ig field. 
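
A minimal sketch of such masking (the macro name is hypothetical; only the
bit range {63:53} comes from the description above):

    /* Hypothetical: clear the guest-owned ig bits {63:53} before Xen
     * interprets the PTE, so a guest-set bit 60 cannot be mistaken for
     * VTLB_PTE_IO_BIT. */
    #define GUEST_IG_MASK  (0x7ffUL << 53)  /* bits 63:53 */

    pteval &= ~GUEST_IG_MASK;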

Thanks
Kouya

Xu, Anthony writes:
> Hi Kouya,
> 
> Can you explain more?
> 
> How does the misunderstanding happen?
> And how does this patch fix it?
> 
> Thanks
> Anthony
> 
> Kouya SHIMURA wrote on 2007-01-24 12:31:
> > Hi,
> > 
> > The hypervisor might misunderstand a normal page as an I/O page
> > if a guest OS uses the ig field in the guest VHPT.
> > 
> > It seems to be harmless but slightly slows things down.
> > 
> > Thanks,
> > Kouya
> > 
> > Signed-off-by: Kouya Shimura [EMAIL PROTECTED]




RE: [Xen-ia64-devel] [PATCH] xen might misunderstand a normal page as I/O page

2007-01-24 Thread Xu, Anthony
Hi Kouya,

I understand now; good catch.
Thanks for your explanation.

- Anthony




Re: [Patch][RFC] fix PAL_HALT ( is Re: [Xen-ia64-devel] [RFC] dump core is failed for PAL_HALT)

2007-01-24 Thread Akio Takebe
Hi,

> > Probably machine_reboot() and machine_power_off() need
> > modification (paravirtualization) to call the shutdown hypercall.
> I think all the paravirtualization can be done through EFI+PAL calls.
Do Linux/Windows-ia64 use ACPI to shut down the system?
If so, we need not care about the VTi domain.

Best Regards,

Akio Takebe




Re: [Patch][RFC] fix PAL_HALT ( is Re: [Xen-ia64-devel] [RFC] dump core is failed for PAL_HALT)

2007-01-24 Thread Tristan Gingold
On Wed, Jan 24, 2007 at 07:29:14PM +0900, Akio Takebe wrote:
> Hi,
> 
> > > Probably machine_reboot() and machine_power_off() need
> > > modification (paravirtualization) to call the shutdown hypercall.
> > I think all the paravirtualization can be done through EFI+PAL calls.
> Do Linux/Windows-ia64 use ACPI to shut down the system?
Yes.
> If so, we need not care about the VTi domain.
Yes.

The issue arises in PV because ACPI is not used to shut down the system.

Tristan.



Re: [Patch][RFC] fix PAL_HALT ( is Re: [Xen-ia64-devel] [RFC] dump core is failed for PAL_HALT)

2007-01-24 Thread Alex Williamson
On Wed, 2007-01-24 at 11:14 +0100, [EMAIL PROTECTED] wrote:
> Quoting Isaku Yamahata [EMAIL PROTECTED]:
> 
> > On Wed, Jan 24, 2007 at 11:43:37AM +0900, Akio Takebe wrote:
> > [...]
> > According to SDM vol2 11.9, PAL_HALT places the cpu in a low power state.
> Correct.
> 
> > So the current behaviour that xen/ia64 shuts down unconditionally is
> > wrong.
> Yes, but that's the code in linux/ia64.
> Why doesn't linux/ia64 call the shutdown EFI runtime service?  I don't know.
> Maybe Alex knows the answer.

   I think we need to be sure we're getting the correct expected user
behavior for domains.  A user expects the following on real hardware:

  * halt: Machine is stopped, not shut down, not rebooted.
Linux/ia64 uses PAL_HALT for this.
  * restart/reboot: Machine is reset.  Linux/ia64 uses
efi.reset_system for this.
  * poweroff: Machine is turned off.  Linux/ia64 uses ACPI S5 power
state if pm_power_off is set, otherwise behaves as if halted.

So, for PV domains, cpu_halt() should just take the vcpu offline.  I
don't think there's any reason to special case the last vcpu going
offline and shutdown the domain.  That's not what real hardware does.
Machine restart/reboot should (and does) happen transparently when Xen
catches the EFI call.  To support poweroff, I think we should set
pm_power_off to a Xen specific hypervisor shutdown routine.  The
abstraction is already in place to do this.
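
A minimal sketch of that idea (the function name is hypothetical; the
HYPERVISOR_shutdown(SHUTDOWN_poweroff) call is the one from Akio's patch
earlier in this thread):

    /* Hypothetical sketch: route poweroff through the existing
     * pm_power_off abstraction instead of a die/halt notifier. */
    static void xen_power_off(void)
    {
            HYPERVISOR_shutdown(SHUTDOWN_poweroff);
    }

    /* in Xen-specific setup code: */
    if (is_running_on_xen())
            pm_power_off = xen_power_off;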

Do VTI domains implement enough ACPI to provide the OS a fake S5 power
state?  If not, a PV-on-HVM driver could set pm_power_off and use a
hypercall, but that means HVM domains would need a Xen driver for some
pretty basic functionality.  Maybe all vcpus being in cpu_halt() should only
be cause for a domain shutdown for VTI domains?

> > The CPU hot-unplug routine also calls cpu_halt(). In that case,
> > only the targeted cpu should be halted. We don't want a domain shutdown.
> If the last vcpu calls PAL_HALT, the domain can be safely shut down.

  It's safe, but I don't agree that it should.  Thanks,

Alex




Re: [Xen-ia64-devel] [PATCH] ptc_ga might not purge vtlb

2007-01-24 Thread Alex Williamson
On Wed, 2007-01-24 at 11:37 +0900, Kouya SHIMURA wrote:
> Hi,
> 
> SMP Windows sometimes failed to boot up with a BSOD.
> After a deep investigation, I found a bug.
> 
> If the VTLB hasn't been used in region 0,
> ptc_ga for other regions doesn't purge VTLBs.

   Applied.  Thanks,

Alex




Re: [Xen-ia64-devel] [PATCH] fix oops message from timer_interrupt on VTI domain

2007-01-24 Thread Alex Williamson
On Wed, 2007-01-24 at 18:30 +0900, Atsushi SAKAI wrote:
> Hi, Alex and Aron,
> 
> Thank you for your various comments.
> I attach the patch, which reflects this discussion.
> Please edit the comment lines in the patch as you like.
> I changed the last line of the document from the previous mail.

   Applied.  We should submit a patch to linux-ia64 to test for
time_after_eq() as well.  Thanks,

Alex




Re: [Patch][RFC] fix PAL_HALT ( is Re: [Xen-ia64-devel] [RFC] dump core is failed for PAL_HALT)

2007-01-24 Thread Akio Takebe
Hi, Tristan

> On Wed, Jan 24, 2007 at 07:29:14PM +0900, Akio Takebe wrote:
> > Hi,
> > 
> > > > Probably machine_reboot() and machine_power_off() need
> > > > modification (paravirtualization) to call the shutdown hypercall.
> > > I think all the paravirtualization can be done through EFI+PAL calls.
> > Do Linux/Windows-ia64 use ACPI to shut down the system?
> Yes.
> > If so, we need not care about the VTi domain.
> Yes.
> 
> The issue arises in PV because ACPI is not used to shut down the system.

Thank you for your answer.
I will remake my patches and post them soon.

Best Regards,

Akio Takebe




Re: [Patch][RFC] fix PAL_HALT ( is Re: [Xen-ia64-devel] [RFC] dump core is failed for PAL_HALT)

2007-01-24 Thread Akio Takebe
Hi, Alex,

Thank you for your elaboration.
I agree with your opinion.

> So, for PV domains, cpu_halt() should just take the vcpu offline.  I
> don't think there's any reason to special case the last vcpu going
> offline and shutdown the domain.  That's not what real hardware does.
Exactly.

> Machine restart/reboot should (and does) happen transparently when Xen
> catches the EFI call.  To support poweroff, I think we should set
> pm_power_off to a Xen specific hypervisor shutdown routine.  The
> abstraction is already in place to do this.
OK, I'll try it.

> Do VTI domains implement enough ACPI to provide the OS a fake S5 power
> state?  If not, a PV-on-HVM driver could set pm_power_off and use a
> hypercall, but that means HVM domains would need a Xen driver for some
> pretty basic functionality.  Maybe all vcpus being in cpu_halt() should
> only be cause for a domain shutdown for VTI domains?
Hmm, some OSes on VTI may call cpu_halt() on all vcpus.
So I'll add a printk such as "call PAL_HALT on all cpus",
and call domain_shutdown() for VTI domains.
Is this OK?

Best Regards,

Akio Takebe





Re: [Patch][RFC] fix PAL_HALT ( is Re: [Xen-ia64-devel] [RFC] dump core is failed for PAL_HALT)

2007-01-24 Thread Alex Williamson
On Thu, 2007-01-25 at 10:02 +0900, Akio Takebe wrote:
> > Do VTI domains implement enough ACPI to provide the OS a fake S5 power
> > state?  If not, a PV-on-HVM driver could set pm_power_off and use a
> > hypercall, but that means HVM domains would need a Xen driver for some
> > pretty basic functionality.  Maybe all vcpus being in cpu_halt() should
> > only be cause for a domain shutdown for VTI domains?
> Hmm, some OSes on VTI may call cpu_halt() on all vcpus.
> So I'll add a printk such as "call PAL_HALT on all cpus",
> and call domain_shutdown() for VTI domains.
> Is this OK?

Hi Akio,

   I would prioritize a poweroff shutting down a domain over halt making
a domain stall.  So for VTI domains, this sounds ok.  I'd skip the
printk though; it seems overly verbose.  Thanks,

Alex

-- 
Alex Williamson   HP Open Source & Linux Org.




Re: [Patch][RFC] fix PAL_HALT ( is Re: [Xen-ia64-devel] [RFC] dump core is failed for PAL_HALT)

2007-01-24 Thread Tristan Gingold
On Wed, Jan 24, 2007 at 09:19:09AM -0700, Alex Williamson wrote:
> On Wed, 2007-01-24 at 11:14 +0100, [EMAIL PROTECTED] wrote:
> > Quoting Isaku Yamahata [EMAIL PROTECTED]:
> > 
> > > On Wed, Jan 24, 2007 at 11:43:37AM +0900, Akio Takebe wrote:
> > > [...]
> > > According to SDM vol2 11.9, PAL_HALT places the cpu in a low power state.
> > Correct.
> > 
> > > So the current behaviour that xen/ia64 shuts down unconditionally is
> > > wrong.
> > Yes, but that's the code in linux/ia64.
> > Why doesn't linux/ia64 call the shutdown EFI runtime service?  I don't
> > know.
> > Maybe Alex knows the answer.
> 
> I think we need to be sure we're getting the correct expected user
> behavior for domains.  A user expects the following on real hardware:
> 
>   * halt: Machine is stopped, not shut down, not rebooted.
> Linux/ia64 uses PAL_HALT for this.
>   * restart/reboot: Machine is reset.  Linux/ia64 uses
> efi.reset_system for this.
>   * poweroff: Machine is turned off.  Linux/ia64 uses ACPI S5 power
> state if pm_power_off is set, otherwise behaves as if halted.
> 
> So, for PV domains, cpu_halt() should just take the vcpu offline.  I
> don't think there's any reason to special case the last vcpu going
> offline and shutdown the domain.  That's not what real hardware does.
Thanks for the details.  So the current Xen/ia64 PAL_HALT behavior is not
correct.

> Machine restart/reboot should (and does) happen transparently when Xen
> catches the EFI call.  To support poweroff, I think we should set
> pm_power_off to a Xen specific hypervisor shutdown routine.  The
> abstraction is already in place to do this.
> 
> Do VTI domains implement enough ACPI to provide the OS a fake S5 power
> state?  If not, a PV-on-HVM driver could set pm_power_off and use a
> hypercall, but that means HVM domains would need a Xen driver for some
> pretty basic functionality.  Maybe all vcpus being in cpu_halt() should
> only be cause for a domain shutdown for VTI domains?
I think VTI supports S5; if not, it should :-)

[...]
Tristan.



RE: [Patch][RFC] fix PAL_HALT ( is Re: [Xen-ia64-devel] [RFC] dump core is failed for PAL_HALT)

2007-01-24 Thread Xu, Anthony
> > Do VTI domains implement enough ACPI to provide the OS a fake S5
> > power state?  If not, a PV-on-HVM driver could set pm_power_off and
> > use a hypercall, but that means HVM domains would need a Xen driver
> > for some pretty basic functionality.  Maybe all vcpus being in
> > cpu_halt() should only be cause for a domain shutdown for VTI domains?
> I think VTI supports S5; if not, it should :-)

On the VTI side, ACPI is emulated by the ACPI module of Qemu.
I think it supports S5.


Anthony.







[Xen-ia64-devel] [PATCH] Fix percpu IRQs, set IRQ_PER_CPU

2007-01-24 Thread Alex Williamson

   I noticed when saving and restoring a 4-way domU that the CMC polling
mechanism was being triggered.  This is actually a generic domU CPU
hotplug problem.  We currently aren't setting the IRQ_PER_CPU status bit
in the IRQ description structure.  This causes migrate_irq() to mark the
IRQ for migration and trigger it before taking the CPU offline.  So
every time a CPU is taken offline, it first receives a CMC interrupt and
a CMC polling interrupt.  If you have 4 or more CPUs, this is enough to
exceed the polling threshold and switch the CMC driver to polling mode.

   I also removed some printks that now seem extraneous and switched to
using slightly more descriptive variables.  Thanks,

Alex

Signed-off-by: Alex Williamson [EMAIL PROTECTED]
--- 

diff -r b4df7de0cbf7 linux-2.6-xen-sparse/arch/ia64/kernel/irq_ia64.c
--- a/linux-2.6-xen-sparse/arch/ia64/kernel/irq_ia64.c  Wed Jan 24 12:28:05 2007 -0700
+++ b/linux-2.6-xen-sparse/arch/ia64/kernel/irq_ia64.c  Wed Jan 24 20:33:37 2007 -0700
@@ -303,81 +303,85 @@ static struct irqaction resched_irqactio
  * required.
  */
 static void
-xen_register_percpu_irq (unsigned int irq, struct irqaction *action, int save)
+xen_register_percpu_irq (unsigned int vec, struct irqaction *action, int save)
 {
         unsigned int cpu = smp_processor_id();
-        int ret = 0;
+        irq_desc_t *desc;
+        int irq = 0;
 
         if (xen_slab_ready) {
-                switch (irq) {
+                switch (vec) {
                 case IA64_TIMER_VECTOR:
                         sprintf(timer_name[cpu], "%s%d", action->name, cpu);
-                        ret = bind_virq_to_irqhandler(VIRQ_ITC, cpu,
+                        irq = bind_virq_to_irqhandler(VIRQ_ITC, cpu,
                                 action->handler, action->flags,
                                 timer_name[cpu], action->dev_id);
-                        per_cpu(timer_irq,cpu) = ret;
-                        printk(KERN_INFO "register VIRQ_ITC (%s) to xen irq (%d)\n", timer_name[cpu], ret);
+                        per_cpu(timer_irq,cpu) = irq;
                         break;
                 case IA64_IPI_RESCHEDULE:
                         sprintf(resched_name[cpu], "%s%d", action->name, cpu);
-                        ret = bind_ipi_to_irqhandler(RESCHEDULE_VECTOR, cpu,
+                        irq = bind_ipi_to_irqhandler(RESCHEDULE_VECTOR, cpu,
                                 action->handler, action->flags,
                                 resched_name[cpu], action->dev_id);
-                        per_cpu(resched_irq,cpu) = ret;
-                        printk(KERN_INFO "register RESCHEDULE_VECTOR (%s) to xen irq (%d)\n", resched_name[cpu], ret);
+                        per_cpu(resched_irq,cpu) = irq;
                         break;
                 case IA64_IPI_VECTOR:
                         sprintf(ipi_name[cpu], "%s%d", action->name, cpu);
-                        ret = bind_ipi_to_irqhandler(IPI_VECTOR, cpu,
+                        irq = bind_ipi_to_irqhandler(IPI_VECTOR, cpu,
                                 action->handler, action->flags,
                                 ipi_name[cpu], action->dev_id);
-                        per_cpu(ipi_irq,cpu) = ret;
-                        printk(KERN_INFO "register IPI_VECTOR (%s) to xen irq (%d)\n", ipi_name[cpu], ret);
-                        break;
-                case IA64_SPURIOUS_INT_VECTOR:
+                        per_cpu(ipi_irq,cpu) = irq;
                         break;
                 case IA64_CMC_VECTOR:
                         sprintf(cmc_name[cpu], "%s%d", action->name, cpu);
-                        ret = bind_virq_to_irqhandler(VIRQ_MCA_CMC, cpu,
+                        irq = bind_virq_to_irqhandler(VIRQ_MCA_CMC, cpu,
                                                       action->handler,
                                                       action->flags,
                                                       cmc_name[cpu],
                                                       action->dev_id);
-                        per_cpu(cmc_irq,cpu) = ret;
-                        printk(KERN_INFO "register VIRQ_MCA_CMC (%s) to xen irq (%d)\n", cmc_name[cpu], ret);
+                        per_cpu(cmc_irq,cpu) = irq;
                         break;
                 case IA64_CMCP_VECTOR:
                         sprintf(cmcp_name[cpu], "%s%d", action->name, cpu);
-                        ret = bind_ipi_to_irqhandler(CMCP_VECTOR, cpu,
+                        irq = bind_ipi_to_irqhandler(CMCP_VECTOR, cpu,
                                                      action->handler,
                                                      action->flags,
                                                      cmcp_name[cpu],
                                                      action->dev_id);
-                        per_cpu(cmcp_irq,cpu) = ret;
-                        printk(KERN_INFO "register CMCP_VECTOR (%s) to xen irq (%d)\n", cmcp_name[cpu], ret);
+
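
(The hunk is cut off at this point in the archive, so the change that
actually sets IRQ_PER_CPU is not visible.  Given the irq_desc_t *desc
declared above, it would presumably resemble the following hypothetical
sketch, not the actual missing hunk:)

    /* Hypothetical: mark the bound IRQ as per-CPU so that migrate_irq()
     * skips it when a CPU is taken offline. */
    desc = irq_descp(irq);
    desc->status |= IRQ_PER_CPU;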

[Xen-ia64-devel] Re: [Xen-devel] [PATCH] Add netfront tx_queue_len

2007-01-24 Thread Tomonari Horikoshi
Hi Herbert-san,

Thank you for your comment.

I agree.
I will examine another way.

It will likely work if something is added to the check
in netfront_tx_slot_available(), as sketched below.
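
(A hypothetical sketch of that idea: the ring-slot test is a paraphrase
of the existing check, while the grant-counting helper is invented here
for illustration.)

    /* Hypothetical: also require enough free grant references for a
     * worst-case skb before declaring a tx slot available. */
    static inline int netfront_tx_slot_available(struct netfront_info *np)
    {
            return ((np->tx.req_prod_pvt - np->tx.rsp_cons) <
                    (TX_MAX_TARGET - MAX_SKB_FRAGS - 2)) &&
                   (netfront_tx_free_grants(np) >= MAX_SKB_FRAGS + 1);
    }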

Best regards.

Tomonari Horikoshi,

Herbert Xu wrote:
Sent: Wed, 24 Jan 2007 13:29:51 +1100
Subject: Re: [Xen-devel] [PATCH] Add netfront tx_queue_len

> On Wed, Jan 24, 2007 at 01:37:55AM +0000, Tomonari Horikoshi wrote:
> > 
> > When I ran netperf with short UDP messages,
> > the PV domain and the PV-on-HVM driver issued a call trace.
> > 
> > I think the grant entries were filled by a lot of message processing.
> > 
> > This problem occurs on IA64 only.
> > I think the cause is probably the following:
> > 
> >   In IA64:
> >     NET_TX_RING_SIZE 1024, NR_GRANT_ENTRIES 2048
> >   In x86:
> >     NET_TX_RING_SIZE  256, NR_GRANT_ENTRIES 2048
> > 
> > I corrected it to check that the number of unprocessed queue entries
> > stays below tx_queue_len before the grant table is filled.
> > 
> > However, my correction affects x86 as well.
> > Please let me know if there is a better improvement.
> 
> Sorry, but this patch looks bogus.  The tx queue is maintained by
> Linux and has nothing to do with the driver.  So limiting its length
> based on internal state of the driver can't be right.
> 
> We need to find out what's really going wrong with the grant table
> entries here.
> 
> Cheers,
> -- 
> Visit Openswan at http://www.openswan.org/
> Email: Herbert Xu [EMAIL PROTECTED]
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt





[Xen-ia64-devel] [Patch] move vmx_vcpu_thash() to assembly

2007-01-24 Thread Zhang, Xing Z
Move vmx_vcpu_thash() to assembly.

It is good for performance.

 

Signed-off-by: Zhang Xin [EMAIL PROTECTED]

Good good study, day day up! ^_^

-Wing (Zhang Xin)

 

OTC, Intel Corporation



asm-thash2.patch
Description: asm-thash2.patch

[Xen-ia64-devel] Xen/IA64 Healthiness Report - Cset#13475

2007-01-24 Thread You, Yongkang
Xen/IA64 Healthiness Report

All testing cases pass. 

Testing Environment:

Platform: Tiger4
Processor: Itanium 2 processor
Logical processor count: 8 (2 processors with dual core)
Service OS: RHEL4u3 IA64 SMP with 2 vcpus & 1G memory
VTI Guest OS: RHEL4u2 & RHEL4u3
XenU Guest OS: RHEL4u2
Xen IA64 unstable tree: 13475:b4df7de0cbf7
Xen schedule: credit
VTI Guest Firmware: Flash.fd.2006.12.01 MD5:
09a224270416036a8b4e6f8496e97854

Summary Test Report:
-
  Total cases: 16
  Passed:      16
  Failed:       0

Case Name              Status  Case Description
Four_SMPVTI_Coexist    pass    4 VTI (mem=256, vcpus=2)
Two_UP_VTI_Co          pass    2 UP_VTI (mem=256)
One_UP_VTI             pass    1 UP_VTI (mem=256)
One_UP_XenU            pass    1 UP_xenU (mem=256)
SMPVTI_LTP             pass    VTI (vcpus=4, mem=512) run LTP
SMPVTI_and_SMPXenU     pass    1 VTI + 1 xenU (mem=256, vcpus=2)
Two_SMPXenU_Coexist    pass    2 xenU (mem=256, vcpus=2)
One_SMPVTI_4096M       pass    1 VTI (vcpus=2, mem=4096M)
SMPVTI_Network         pass    1 VTI (mem=256, vcpu=2) and 'ping'
SMPXenU_Network        pass    1 XenU (vcpus=2) and 'ping'
One_SMP_XenU           pass    1 SMP xenU (vcpus=2)
One_SMP_VTI            pass    1 SMP VTI (vcpus=2)
SMPVTI_Kernel_Build    pass    VTI (vcpus=4) and do kernel build
Four_SMPVTI_Coexist    pass    4 VTI domains (mem=256, vcpu=2)
SMPVTI_Windows         pass    SMP VTI Windows (vcpu=2)
SMPWin_SMPVTI_SMPxenU  pass    SMP VTI Linux/Windows & XenU
UPVTI_Kernel_Build     pass    1 UP VTI and do kernel build

Notes:
-
The last stable changeset:
-
13475:b4df7de0cbf7

Best Regards,
Yongkang (Kangkang) 永康
