RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-06-29 Thread Xu, Anthony
From: Isaku Yamahata [mailto:[EMAIL PROTECTED]
Sent: June 21, 2006 17:25
To: Xu, Anthony
Cc: xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.


On Thu, Jun 01, 2006 at 12:55:08PM +0800, Xu, Anthony wrote:

  Or you mean the protection of the global purge.
  When a vcpu gets an IPI to purge the TLB,
  what it does is invalidate the TLB entry in the VHPT,
  not remove the TLB entry.
  There is no race condition.
 
 Is there any guarantee that the vcpu which receives the IPI isn't touching the VHPT?

 The vcpu which receives the IPI can touch the VHPT at the same time,
 because the purge operation only sets the TLB entry invalid, like entry->ti=1.
 That follows the same philosophy as Tristan's direct purge.

Could you review the two attached patches?
The purge function traverses the collision chain when the IPI is sent,
but there is a window in which the assumption about the collision chain
is broken.
vmx_hpw_miss() has a race. ia64_do_page_fault() had a similar race before.

--

Sorry for the late response.

The second patch is a good cleanup and improvement.

I don't understand the race condition the first patch fixes.

Could you please elaborate on this?


Thanks,
Anthony


yamahata

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-06-29 Thread Isaku Yamahata
Hi Anthony.

On Thu, Jun 29, 2006 at 02:21:48PM +0800, Xu, Anthony wrote:
 From: Isaku Yamahata [mailto:[EMAIL PROTECTED]
 Sent: June 21, 2006 17:25
 To: Xu, Anthony
 Cc: xen-ia64-devel@lists.xensource.com
 Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
 
 
 On Thu, Jun 01, 2006 at 12:55:08PM +0800, Xu, Anthony wrote:
 
   Or you mean the protection of the global purge.
   When a vcpu gets an IPI to purge the TLB,
   what it does is invalidate the TLB entry in the VHPT,
   not remove the TLB entry.
   There is no race condition.
  
  Is there any guarantee that the vcpu which receives the IPI isn't touching the VHPT?
 
  The vcpu which receives the IPI can touch the VHPT at the same time,
  because the purge operation only sets the TLB entry invalid, like entry->ti=1.
  That follows the same philosophy as Tristan's direct purge.
 
 Could you review the two attached patches?
 The purge function traverses the collision chain when the IPI is sent,
 but there is a window in which the assumption about the collision chain
 is broken.
 vmx_hpw_miss() has a race. ia64_do_page_fault() had a similar race before.
 
 --
 
 Sorry for the late response.
 
 The second patch is a good cleanup and improvement.

The second patch is also a bug fix patch.


 I don't understand the race condition the first patch fixes.
 
 Could you please elaborate on this?

The patch fixes two races.
- a race in vmx_process() of vmx_process.c
  The same race was there in ia64_do_page_fault() before.
  The check, (fault == IA64_USE_TLB && !current->arch.dtlb.pte.p), in
  ia64_do_page_fault() avoids this race.

- a race in vtlb.c
  vmx_vcpu_ptc_l() needs a certain condition to hold on the collision chain
  in order to traverse collision chains and purge entries correctly.
  But there are windows when the condition is broken.
  With the patch, the critical areas are surrounded by local_irq_save() and
  local_irq_restore().
  If a vcpu sends an IPI to another vcpu for ptc.ga while that vcpu is
  in the critical area, things go bad.
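The critical-section pattern described above can be sketched as follows. This is an illustrative model, not the actual vtlb.c code: `vtlb_insert`, `chain_length`, and the `ipi_masked` flag (standing in for local_irq_save()/local_irq_restore()) are invented for the example. The point is that the purge IPI handler traverses the collision chain, so it must not be delivered while an insertion has the chain half-linked:

```c
#include <assert.h>
#include <stddef.h>

/* One VHPT/VTLB collision chain, modeled as a singly linked list. */
struct entry {
    struct entry *next;
    int tag;
};

static struct entry *chain;   /* head of the collision chain */
static int ipi_masked;        /* models local_irq_save()/local_irq_restore() */

/* What the ptc.ga IPI handler does: traverse the chain.  It relies on
 * the chain being well-formed, so it must never observe a half-done
 * insertion. */
static int chain_length(void)
{
    int n = 0;
    for (struct entry *e = chain; e != NULL; e = e->next)
        n++;
    return n;
}

/* Chain modification, bracketed so the purge IPI cannot interrupt it. */
static void vtlb_insert(struct entry *e)
{
    ipi_masked = 1;           /* local_irq_save(flags) in the real code */
    e->next = chain;          /* chain is briefly inconsistent here     */
    chain = e;
    ipi_masked = 0;           /* local_irq_restore(flags)               */
}
```

As the thread goes on to point out, masking interrupts only protects against an IPI delivered to the same physical cpu; it does not help once the vcpu has migrated and two physical cpus touch the chain concurrently.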


  Actually the patch assumes that the targeted vcpu is still
  on the physical cpu which received the IPI.
  It might have been reasonable to assume so before the credit scheduler
  was introduced...

  If the targeted vcpu has moved to another physical cpu,
  the collision chain is traversed on the IPI'ed physical cpu while,
  at the same time, the collision chain is modified on the physical cpu
  on which the targeted vcpu now runs.
  The collision chain modification/traversal code doesn't seem to
  be lock-free, so something bad would happen.
  

In fact I suspect that a problem still remains.
I haven't checked the generated assembler code, so I'm not sure, though.
- vmx_vcpu_ptc_ga()
  while (proc != v->processor);

  v->processor's type is int.
  However, v->processor might be changed asynchronously by the vcpu scheduler
  on another physical cpu.
  A compiler barrier and a memory barrier might be needed somewhere.
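The concern can be illustrated with a minimal sketch; the `READ_PROCESSOR` macro and the loop body are illustrative, not the actual vmx_vcpu_ptc_ga() code. Without forcing a fresh load, the compiler may legally cache `v->processor` in a register and spin forever on a stale value; a memory barrier (smp_mb()) may additionally be needed to order the read against the scheduler's store on another cpu:

```c
#include <assert.h>

struct vcpu { int processor; };

/* Force a fresh load from memory on every iteration; without the
 * volatile cast the compiler may hoist the load out of the loop. */
#define READ_PROCESSOR(v) (*(volatile int *)&(v)->processor)

/* Spin until the target vcpu is observed on physical cpu 'proc';
 * returns the cpu finally observed. */
static int wait_on_processor(struct vcpu *v, int proc)
{
    while (READ_PROCESSOR(v) != proc) {
        /* the real code would re-send the IPI to the new cpu here */
    }
    return READ_PROCESSOR(v);
}
```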

-- 
yamahata



RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-06-29 Thread Xu, Anthony
See comments below.

From: Isaku Yamahata [mailto:[EMAIL PROTECTED]
Sent: June 29, 2006 16:03
To: Xu, Anthony
Cc: xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

Hi Anthony.

On Thu, Jun 29, 2006 at 02:21:48PM +0800, Xu, Anthony wrote:
 From: Isaku Yamahata [mailto:[EMAIL PROTECTED]
 Sent: June 21, 2006 17:25
 To: Xu, Anthony
 Cc: xen-ia64-devel@lists.xensource.com
 Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
 
 
 On Thu, Jun 01, 2006 at 12:55:08PM +0800, Xu, Anthony wrote:
 
   Or you mean the protection of the global purge.
   When a vcpu gets an IPI to purge the TLB,
   what it does is invalidate the TLB entry in the VHPT,
   not remove the TLB entry.
   There is no race condition.
  
  Is there any guarantee that the vcpu which receives the IPI isn't touching
  the VHPT?
 
  The vcpu which receives the IPI can touch the VHPT at the same time,
  because the purge operation only sets the TLB entry invalid, like entry->ti=1.
  That follows the same philosophy as Tristan's direct purge.
 
 Could you review the two attached patches?
 The purge function traverses the collision chain when the IPI is sent,
 but there is a window in which the assumption about the collision chain
 is broken.
 vmx_hpw_miss() has a race. ia64_do_page_fault() had a similar race before.
 
 --

 Sorry for the late response.

 The second patch is a good cleanup and improvement.

The second patch is also a bug fix patch.


 I don't understand the race condition the first patch fixes.

 Could you please elaborate on this?

The patch fixes two races.
- a race in vmx_process() of vmx_process.c
  The same race was there in ia64_do_page_fault() before.
  The check, (fault == IA64_USE_TLB && !current->arch.dtlb.pte.p), in
  ia64_do_page_fault() avoids this race.
The VTI domain doesn't have this issue, which is introduced by the 1-entry TLB.


- a race in vtlb.c
  vmx_vcpu_ptc_l() needs a certain condition to hold on the collision chain
  in order to traverse collision chains and purge entries correctly.
  But there are windows when the condition is broken.
  With the patch, the critical areas are surrounded by local_irq_save() and
  local_irq_restore().
  If a vcpu sends an IPI to another vcpu for ptc.ga while that vcpu is
  in the critical area, things go bad.

There seem to be some race conditions. But I used a correct operation sequence
on the VTLB to avoid these race conditions. local_irq_save() and
local_irq_restore() are not needed; some mb may be needed to guarantee the
memory access order.


  Actually the patch assumes that the targeted vcpu is still
  on the physical cpu which received the IPI.
  It might have been reasonable to assume so before the credit scheduler
  was introduced...
I don't assume the targeted vcpu is running on the physical cpu.


  If the targeted vcpu has moved to another physical cpu,
  the collision chain is traversed on the IPI'ed physical cpu while,
  at the same time, the collision chain is modified on the physical cpu
  on which the targeted vcpu now runs.
  The collision chain modification/traversal code doesn't seem to
  be lock-free, so something bad would happen.


In fact I suspect that a problem still remains.
I haven't checked the generated assembler code, so I'm not sure, though.
- vmx_vcpu_ptc_ga()
  while (proc != v->processor);

  v->processor's type is int.
  However, v->processor might be changed asynchronously by the vcpu scheduler
  on another physical cpu.
  A compiler barrier and a memory barrier might be needed somewhere.

I hadn't considered a vcpu being scheduled to another LP so far.

--
yamahata



Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-06-29 Thread Isaku Yamahata
On Thu, Jun 29, 2006 at 05:42:27PM +0800, Xu, Anthony wrote:

 - a race in vmx_process() of vmx_process.c
   The same race was there in ia64_do_page_fault() before.
   The check, (fault == IA64_USE_TLB && !current->arch.dtlb.pte.p), in
   ia64_do_page_fault() avoids this race.
 The VTI domain doesn't have this issue, which is introduced by the 1-entry TLB.

VT-i domain introduced v->arch.vtlb instead of the 1-entry TLB.
A similar race can occur with v->arch.vtlb.

-- 
yamahata



RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-06-29 Thread Xu, Anthony
From: Isaku Yamahata [mailto:[EMAIL PROTECTED]
Sent: June 29, 2006 18:36
To: Xu, Anthony
Cc: xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

On Thu, Jun 29, 2006 at 05:42:27PM +0800, Xu, Anthony wrote:

 - a race in vmx_process() of vmx_process.c
   The same race was there in ia64_do_page_fault() before.
   The check, (fault == IA64_USE_TLB && !current->arch.dtlb.pte.p), in
   ia64_do_page_fault() avoids this race.
 The VTI domain doesn't have this issue, which is introduced by the 1-entry TLB.

VT-i domain introduced v->arch.vtlb instead of the 1-entry TLB.
A similar race can occur with v->arch.vtlb.

The difference is that the 1-entry TLB uses pte.p to indicate whether it's
valid, while the VTLB uses entry->ti to indicate whether it's valid, similar
to the VHPT, so there is no issue.

Thanks,
Anthony


--
yamahata



[Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-06-07 Thread Xu, Anthony
This patch enables SMP on VTI domain.

I can boot a VTI domain with 8 vcpus on my box.
There are only 2 sockets (2 cores, 2 threads each) on my box. :-)

Signed-off-by: Anthony Xu  [EMAIL PROTECTED] 

Thanks,
-Anthony 



vti_smp_0607.patch
Description: vti_smp_0607.patch

Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Tristan Gingold
On Wednesday, May 31, 2006 at 09:32, Xu, Anthony wrote:
 This patch intends to enable SMP on VTI domain.

 This patch depends on previous three patches I sent out.
 1. fixed a bug which causes Oops
 2. fixed a small bug about VTLB
 3. Add sal emulation to VTI domain

 This patch uses IPI to implement global purge.

 If you want to reproduce what I did, you may need to get the
 newest guest FIRMWARE.
Good work!

Two comments:
* could you postpone your patch until I resend my cpu hotplug patch?
I have to modify the start-up IPI, which you are reusing.  (Should it be
common?)

* I may be wrong, but why are you using a cpu/lid table?  I think Xen can
impose lids on the guest firmware.  Things will be a little simpler.

Tristan.




RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Xu, Anthony
From: Tristan Gingold 
Sent: May 31, 2006 16:07
To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

On Wednesday, May 31, 2006 at 09:32, Xu, Anthony wrote:
 This patch intends to enable SMP on VTI domain.

 This patch depends on previous three patches I sent out.
 1. fixed a bug which causes Oops
 2. fixed a small bug about VTLB
 3. Add sal emulation to VTI domain

 This patch uses IPI to implement global purge.

 If you want to reproduce what I did, you may need to get the
 newest guest FIRMWARE.
Good work!

Two comments:
* could you postpone your patch until I resend my cpu hotplug patch?
I have to modify the start-up IPI, which you are reusing.  (Should it be
common?)
It should be OK if it doesn't take long.

* I may be wrong, but why are you using a cpu/lid table?
Maybe I misunderstand;
I'm using the cpu/lid table to send IPIs.


I think Xen can
impose lids on the guest firmware.  Things will be a little simpler.
I don't follow you. Could you explain in more detail?




Tristan.



RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Xu, Anthony
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Xu, Anthony
Sent: May 31, 2006 16:25
To: Tristan Gingold; xen-ia64-devel@lists.xensource.com
Subject: RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

From: Tristan Gingold
Sent: May 31, 2006 16:07
To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

On Wednesday, May 31, 2006 at 09:32, Xu, Anthony wrote:
 This patch intends to enable SMP on VTI domain.

 This patch depends on previous three patches I sent out.
 1. fixed a bug which causes Oops
 2. fixed a small bug about VTLB
 3. Add sal emulation to VTI domain

 This patch uses IPI to implement global purge.

 If you want to reproduce what I did, you may need to get the
 newest guest FIRMWARE.
Good work!

Two comments:
* could you postpone your patch until I resend my cpu hotplug patch ?
I have to modify the start-up ipi, which you are reusing.  (Should it be
common ?)
It should be OK if it doesn't take long.
On further thought,
will this patch block your cpu hotplug patch?
If not, I think this patch should be checked in, and if needed, I will
update the start-up IPI.


* I may be wrong, but why are you using a cpu/lid table?
Maybe I misunderstand;
I'm using the cpu/lid table to send IPIs.


I think Xen can
impose lids on the guest firmware.  Things will be a little simpler.
I don't follow you. Could you explain in more detail?




Tristan.



Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Tristan Gingold
On Wednesday, May 31, 2006 at 10:25, Xu, Anthony wrote:
 From: Tristan Gingold

 Sent: May 31, 2006 16:07
 To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
 Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
 
 On Wednesday, May 31, 2006 at 09:32, Xu, Anthony wrote:
  This patch intends to enable SMP on VTI domain.
 
  This patch depends on previous three patches I sent out.
  1. fixed a bug which causes Oops
  2. fixed a small bug about VTLB
  3. Add sal emulation to VTI domain
 
  This patch uses IPI to implement global purge.
 
  If you want to reproduce what I did, you may need to get the
  newest guest FIRMWARE.
 
 Good work!
 
 Two comments:
 * could you postpone your patch until I resend my cpu hotplug patch ?
 I have to modify the start-up ipi, which you are reusing.  (Should it be
 common ?)

 It should be Ok, if it doesn't take long.
I expect to resend it within a few hours.

 * I may be wrong, but why are you using a cpu/lid table ?

 Maybe I understand mistakenly,
 I'm using cpu/lid table to send IPI

 I think Xen can
 impose lid to the guest firmware.  Things will be a little simpler.

 I don't follow you. Could you explain in more detail?
I think lid_2_vcpu is almost useless.  As in the paravirtualized case, vcpuid
should simply be extracted from the lid.

Tristan.



Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Tristan Gingold
On Wednesday, May 31, 2006 at 10:48, Xu, Anthony wrote:
 From: [EMAIL PROTECTED]

 [mailto:[EMAIL PROTECTED] On Behalf Of Xu,
  Anthony Sent: May 31, 2006 16:25
 To: Tristan Gingold; xen-ia64-devel@lists.xensource.com
 Subject: RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
 
 From: Tristan Gingold
 Sent: May 31, 2006 16:07
 To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
 Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
 
 On Wednesday, May 31, 2006 at 09:32, Xu, Anthony wrote:
  This patch intends to enable SMP on VTI domain.
 
  This patch depends on previous three patches I sent out.
  1. fixed a bug which causes Oops
  2. fixed a small bug about VTLB
  3. Add sal emulation to VTI domain
 
  This patch uses IPI to implement global purge.
 
  If you want to reproduce what I did, you may need to get the
  newest guest FIRMWARE.
 
 Good work!
 
 Two comments:
 * could you postpone your patch until I resend my cpu hotplug patch ?
 I have to modify the start-up ipi, which you are reusing.  (Should it be
 common ?)
 
 It should be Ok, if it doesn't take long.

 Further thinking,
 Will this patch block your cpu hotplug patch?
No, but my patch will break yours.

 If no, I think this patch should be checked in, and if needed, I will
 update start-up ipi.
I still think the start-up IPI should be common code between VT-i and
paravirtualized domains.

Tristan.



RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Xu, Anthony
From: Tristan Gingold 
Sent: May 31, 2006 17:01
 I think Xen can
 impose lid to the guest firmware.  Things will be a little simpler.

 I don't catch you, Could you explain in more detail?
I think lid_2_vcpu is almost useless.  As in the paravirtualized case, vcpuid
should simply be extracted from the lid.

You mean vcpuid is extracted from the machine lid.

It may work for domU, because the LSAPIC table is built by Xen.
But for a VTI domain, the LSAPIC table is built by the guest firmware; how can
the guest firmware get the other LPs' lids?
So in a VTI domain, we should use vcpuids predefined by Xen and the guest firmware.

Another concern is:
if an SMP guest is running on a UP platform, vcpus may have the same vcpuid.


Tristan.



Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Tristan Gingold
On Wednesday, May 31, 2006 at 11:08, Xu, Anthony wrote:
 From: Tristan Gingold

 Sent: May 31, 2006 17:01
 
  I think Xen can
  impose lid to the guest firmware.  Things will be a little simpler.
 
  I don't catch you, Could you explain in more detail?
 
 I think lid_2_vcpu is almost useless.  Like in paravirtualized, vcpuid
  should be simply extracted from lid.

 You mean vcpuid is extracted from machine lid.
Yes.

 It may work for domU, because lsapic table is built by xen.
 But for VTI-domain, lsapic table is built by guest firmware, how can
 guest firmware get other LP's lid.
Same as on a real machine: SAL has to know LP's lid.
I simply think lids must be 0, 1, 2...

 So in VTI-domain, we should use vcpuid predefined by XEN and guest
 firmware.
Why can't they match?

 Another concern is,
 If SMP guest is running on a UP platform, vcpus may have same vcpuid.
Why?  IMHO this can never happen.

Tristan.



RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Xu, Anthony
From: Tristan Gingold [mailto:[EMAIL PROTECTED]
Sent: May 31, 2006 17:43
To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

On Wednesday, May 31, 2006 at 11:08, Xu, Anthony wrote:
 From: Tristan Gingold

 Sent: May 31, 2006 17:01
 
  I think Xen can
  impose lid to the guest firmware.  Things will be a little simpler.
 
  I don't catch you, Could you explain in more detail?
 
 I think lid_2_vcpu is almost useless.  Like in paravirtualized, vcpuid
  should be simply extracted from lid.

 You mean vcpuid is extracted from machine lid.
Yes.

 It may work for domU, because lsapic table is built by xen.
 But for VTI-domain, lsapic table is built by guest firmware, how can
 guest firmware get other LP's lid.
Same as on a real machine: SAL has to know LP's lid.
I simply think lids must be 0, 1, 2...
Yes, you are right.

The current implementation is as follows:
only vcpu0 executes the guest firmware; the other vcpus don't execute the
guest firmware.

The reason is the following scenario:
we configure 8 vcpus for a domain, but the guest OS on this domain only
supports 4 vcpus, so the other 4 vcpus are not used.
The VTI domain now uses a per-vcpu VTLB, which means 4 per-vcpu VTLBs would
be allocated but never used, wasting a lot of memory.
So the current algorithm is: only when the guest OS wants to wake up a vcpu
is the per-vcpu VTLB for that vcpu allocated.
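That lazy-allocation policy can be sketched as follows; the names (`vcpu_wake`, `VTLB_SIZE`) are hypothetical, not the actual Xen code. The per-vcpu VTLB is allocated only the first time the guest wakes the vcpu, so vcpus the guest OS never uses cost no memory:

```c
#include <assert.h>
#include <stdlib.h>

#define VTLB_SIZE (64 * 1024)   /* illustrative size */

struct vcpu_state {
    void *vtlb;                 /* per-vcpu VTLB, NULL until needed */
};

/* Called when the guest OS wakes a vcpu (e.g. via the start-up IPI):
 * allocate the VTLB on first use; later wake-ups reuse it. */
static int vcpu_wake(struct vcpu_state *v)
{
    if (v->vtlb == NULL) {
        v->vtlb = malloc(VTLB_SIZE);
        if (v->vtlb == NULL)
            return -1;          /* out of memory */
    }
    return 0;
}
```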


 So in VTI-domain, we should use vcpuid predefined by XEN and guest
 firmware.
Why can't they match ?

 Another concern is,
 If SMP guest is running on a UP platform, vcpus may have same vcpuid.
Why ?  IMHO this can never happen.
There are 4 LPs on my box; if I want to boot a domU with 6 vcpus,
can this domU boot?



Tristan.



Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Tristan Gingold
On Wednesday, May 31, 2006 at 13:43, Xu, Anthony wrote:
 From: Tristan Gingold [mailto:[EMAIL PROTECTED]
[...]
  Another concern is,
  If SMP guest is running on a UP platform, vcpus may have same vcpuid.
 
 Why ?  IMHO this can never happen.

 There are 4 LP on my box, if I want to boot a domU with 6 vcpus,
 Can this domU boot?
[ I have never tried, but here is my understanding:]
Sure.  You can create more vcpus than existing cpus.  Of course, you can't run 
6 vcpus simultaneously on 4 cpus!

Tristan.



RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Xu, Anthony
From: Tristan Gingold 
Sent: May 31, 2006 20:04
To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

On Wednesday, May 31, 2006 at 13:43, Xu, Anthony wrote:
 From: Tristan Gingold [mailto:[EMAIL PROTECTED]
[...]
  Another concern is,
  If SMP guest is running on a UP platform, vcpus may have same vcpuid.
 
 Why ?  IMHO this can never happen.

 There are 4 LP on my box, if I want to boot a domU with 6 vcpus,
 Can this domU boot?
[ I have never tried, but here is my understanding:]
Sure.  You can create more vcpus than existing cpus.  Of course, you can't run
6 vcpus simultaneously on 4 cpus!
If the vcpus' lids are extracted from the machine lid, there are at least two
vcpus whose lids are the same, which may confuse the guest OS.

Tristan.



Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Tristan Gingold
On Wednesday, May 31, 2006 at 14:04, Xu, Anthony wrote:
 From: Tristan Gingold

 Sent: May 31, 2006 20:04
 To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
 Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
 
 On Wednesday, May 31, 2006 at 13:43, Xu, Anthony wrote:
  From: Tristan Gingold [mailto:[EMAIL PROTECTED]
 
 [...]
 
   Another concern is,
   If SMP guest is running on a UP platform, vcpus may have same vcpuid.
  
  Why ?  IMHO this can never happen.
 
  There are 4 LP on my box, if I want to boot a domU with 6 vcpus,
  Can this domU boot?
 
 [ I have never tried, but here is my understanding:]
 Sure.  You can create more vcpus than existing cpus.  Of course, you can't
  run 6 vcpus simultaneously on 4 cpus!

 If the vcpus' lids are extracted from the machine lid, there are at least two
 vcpus whose lids are the same, which may confuse the guest OS.
lids are paravirtualized except for dom0 (currently; this has to be revisited).

I think there is a misunderstanding somewhere, because some questions sound 
too strange.

My (small) comment was simple:
On Xen/VTI, the GFW sets the lid and Xen has to build a map from lid to vcpuid.
I just think it would be simpler to modify the GFW so that lid = vcpuid.

Tristan.



RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Xu, Anthony
From: Tristan Gingold [mailto:[EMAIL PROTECTED]
Sent: May 31, 2006 20:19
To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

On Wednesday, May 31, 2006 at 14:04, Xu, Anthony wrote:
 From: Tristan Gingold

 Sent: May 31, 2006 20:04
 To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
 Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
 
 On Wednesday, May 31, 2006 at 13:43, Xu, Anthony wrote:
  From: Tristan Gingold [mailto:[EMAIL PROTECTED]
 
 [...]
 
   Another concern is,
   If SMP guest is running on a UP platform, vcpus may have same vcpuid.
  
  Why ?  IMHO this can never happen.
 
  There are 4 LP on my box, if I want to boot a domU with 6 vcpus,
  Can this domU boot?
 
 [ I have never tried, but here is my understanding:]
 Sure.  You can create more vcpus than existing cpus.  Of course, you can't
  run 6 vcpus simultaneously on 4 cpus!

 If the vcpus' lids are extracted from the machine lid, there are at least two
 vcpus whose lids are the same, which may confuse the guest OS.
lids are paravirtualized except for dom0 (currently; this has to be revisited).

I think there is a misunderstanding somewhere, because some questions sound
too strange.

My (small) comment was simple:
On Xen/VTI, the GFW sets the lid and Xen has to build a map from lid to vcpuid.
I just think it would be simpler to modify the GFW so that lid = vcpuid.

That is exactly what I implemented.

Tristan.



Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Tristan Gingold
On Wednesday, May 31, 2006 at 14:25, Xu, Anthony wrote:
 From: Tristan Gingold [mailto:[EMAIL PROTECTED]

 Sent: May 31, 2006 20:19
 To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
 Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
 
 On Wednesday, May 31, 2006 at 14:04, Xu, Anthony wrote:
  From: Tristan Gingold
 
  Sent: May 31, 2006 20:04
  To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
  Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
  
  On Wednesday, May 31, 2006 at 13:43, Xu, Anthony wrote:
   From: Tristan Gingold [mailto:[EMAIL PROTECTED]
  
  [...]
  
Another concern is,
If SMP guest is running on a UP platform, vcpus may have same
vcpuid.
   
   Why ?  IMHO this can never happen.
  
   There are 4 LP on my box, if I want to boot a domU with 6 vcpus,
   Can this domU boot?
  
  [ I have never tried, but here is my understanding:]
  Sure.  You can create more vcpus than existing cpus.  Of course, you
   can't run 6 vcpus simultaneously on 4 cpus!
 
  If the lid of vcpus is extracted from machine lid, there are at least
  two vcpus whose lids are same, which may make guest OS confused.
 
 lid are paravirtualized except for dom0 (currently, this has to be
  revisited).
 
 I think there is a misunderstanding somewhere, because some questions
  sound too strange.
 
 My (small) comment was simple:
 On Xen/VTI, the GFW sets the lid and Xen has to build a map from lid to vcpuid.
 I just think it would be simpler to modify the GFW so that lid = vcpuid.

 That is exactly what I implemented.
I may have misread the patches you recently posted, but in the current
changeset lid_to_vcpu is O(n).  Why isn't it O(1)?

Tristan.



RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Xu, Anthony
From: Tristan Gingold [mailto:[EMAIL PROTECTED]
Sent: May 31, 2006 20:54
To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

On Wednesday, May 31, 2006 at 14:25, Xu, Anthony wrote:
 From: Tristan Gingold [mailto:[EMAIL PROTECTED]

 Sent: May 31, 2006 20:19
 To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
 Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
 
 On Wednesday, May 31, 2006 at 14:04, Xu, Anthony wrote:
  From: Tristan Gingold
 
  Sent: May 31, 2006 20:04
  To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
  Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
  
  On Wednesday, May 31, 2006 at 13:43, Xu, Anthony wrote:
   From: Tristan Gingold [mailto:[EMAIL PROTECTED]
  
  [...]
  
Another concern is,
If SMP guest is running on a UP platform, vcpus may have same
vcpuid.
   
   Why ?  IMHO this can never happen.
  
   There are 4 LP on my box, if I want to boot a domU with 6 vcpus,
   Can this domU boot?
  
  [ I have never tried, but here is my understanding:]
  Sure.  You can create more vcpus than existing cpus.  Of course, you
   can't run 6 vcpus simultaneously on 4 cpus!
 
  If the lid of vcpus is extracted from machine lid, there are at least
  two vcpus whose lids are same, which may make guest OS confused.
 
 lid are paravirtualized except for dom0 (currently, this has to be
  revisited).
 
 I think there is a misunderstanding somewhere, because some questions
  sound too strange.
 
 My (small) comment was simple:
 On Xen/VTI, the GFW sets the lid and Xen has to build a map from lid to vcpuid.
 I just think it would be simpler to modify the GFW so that lid = vcpuid.

 That is exactly what I implemented.
I may have misread the patches you recently posted, but in the current
changeset lid_to_vcpu is O(n).  Why isn't it O(1)?
Sorry, I don't follow you.
In the lid register, id occupies bits 24 to 31 and eid bits 16 to 23.
So lid = vcpuid << 24.
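With that layout the mapping is a pair of shifts; a sketch under the bit positions stated above (the helper names are invented for the example):

```c
#include <assert.h>
#include <stdint.h>

/* IA-64 LID register layout: id in bits 24-31, eid in bits 16-23.
 * If Xen and the guest firmware agree on lid = vcpuid << 24, mapping
 * a lid back to a vcpuid is O(1) instead of a table search. */
static inline uint64_t vcpuid_to_lid(unsigned int vcpuid)
{
    return (uint64_t)vcpuid << 24;              /* id field */
}

static inline unsigned int lid_to_vcpuid(uint64_t lid)
{
    return (unsigned int)((lid >> 24) & 0xff);  /* extract id field */
}
```

This is what makes an O(1) lid_to_vcpu possible once Xen and the guest firmware agree on the encoding.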


Tristan.



Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Tristan Gingold
On Wednesday, May 31, 2006 at 15:02, Xu, Anthony wrote:
 From: Tristan Gingold [mailto:[EMAIL PROTECTED]

 Sent: May 31, 2006 20:54
 To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
 Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
 
 On Wednesday, May 31, 2006 at 14:25, Xu, Anthony wrote:
  From: Tristan Gingold [mailto:[EMAIL PROTECTED]
 
  Sent: May 31, 2006 20:19
  To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
  Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
  
  On Wednesday, May 31, 2006 at 14:04, Xu, Anthony wrote:
   From: Tristan Gingold
  
   Sent: May 31, 2006 20:04
   To: Xu, Anthony; xen-ia64-devel@lists.xensource.com
   Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
   
   On Wednesday, May 31, 2006 at 13:43, Xu, Anthony wrote:
From: Tristan Gingold [mailto:[EMAIL PROTECTED]
   
   [...]
   
 Another concern is,
 If SMP guest is running on a UP platform, vcpus may have same
 vcpuid.

Why ?  IMHO this can never happen.
   
There are 4 LP on my box, if I want to boot a domU with 6 vcpus,
Can this domU boot?
   
   [ I have never tried, but here is my understanding:]
   Sure.  You can create more vcpus than existing cpus.  Of course, you
can't run 6 vcpus simultaneously on 4 cpus!
  
   If the lid of vcpus is extracted from machine lid, there are at least
   two vcpus whose lids are same, which may make guest OS confused.
  
  lid are paravirtualized except for dom0 (currently, this has to be
   revisited).
  
  I think there is a misunderstanding somewhere, because some questions
   sound too strange.
  
  My (small) comment was simple:
  On Xen/VTI, GFW set lid and Xen has to build a map from lid to vcpuid.
  I just think it should be simpler to modify GFW so that lid = vcpuid.
 
  That is exactly what I implemented.
 
 I may have mis-read the patches you have recently posted but on the
  current changeset lid_to_vcpu is O(n).  Why isn't it O(1) ?

 Sorry, I don't follow you.
 In the lid register, id occupies bits 24 to 31 and eid bits 16 to 23.
 So lid = vcpuid << 24.
OK.

Tristan.



RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Xu, Anthony
From: Isaku Yamahata [mailto:[EMAIL PROTECTED]
Sent: June 1, 2006 10:12
To: Xu, Anthony
Cc: xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.


On Wed, May 31, 2006 at 03:32:06PM +0800, Xu, Anthony wrote:
 This patch intends to enable SMP on VTI domain.

 This patch depends on previous three patches I sent out.
 1. fixed a bug which causes Oops
 2. fixed a small bug about VTLB
 3. Add sal emulation to VTI domain

 This patch uses IPI to implement global purge.

I just took a quick look at your patch, though.
There is no protection for the IPI.
Is it OK? Does the use of IPI cause a race?

Do you mean the code below?
#ifdef XEN
spin_lock(&call_lock);
#else

Or you mean the protection of the global purge.
When a vcpu gets an IPI to purge the TLB,
what it does is invalidate the TLB entry in the VHPT,
not remove the TLB entry.
There is no race condition.

--
yamahata



Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Isaku Yamahata
On Thu, Jun 01, 2006 at 11:46:05AM +0800, Xu, Anthony wrote:
 From: Isaku Yamahata [mailto:[EMAIL PROTECTED]
 Sent: June 1, 2006 10:12
 To: Xu, Anthony
 Cc: xen-ia64-devel@lists.xensource.com
 Subject: Re: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.
 
 
 On Wed, May 31, 2006 at 03:32:06PM +0800, Xu, Anthony wrote:
  This patch intends to enable SMP on VTI domain.
 
  This patch depends on previous three patches I sent out.
  1. fixed a bug which causes Oops
  2. fixed a small bug about VTLB
  3. Add sal emulation to VTI domain
 
  This patch uses IPI to implement global purge.
 
 I just took a quick look at your patch, though.
 There is no protection for the IPI.
 Is it OK? Does the use of IPI cause a race?
 
 Do you mean the code below?
 #ifdef XEN
   spin_lock(&call_lock);
 #else

I meant local_irq_save()/local_irq_restore(), i.e. masking the IPI.


 Or you mean the protection of the global purge.
 When a vcpu gets an IPI to purge the TLB,
 what it does is invalidate the TLB entry in the VHPT,
 not remove the TLB entry.
 There is no race condition.

Is there any guarantee that the vcpu which receives the IPI isn't touching the VHPT?

-- 
yamahata



RE: [Xen-ia64-devel][PATCH] Enable SMP on VTI domain.

2006-05-31 Thread Xu, Anthony
From: Isaku Yamahata [mailto:[EMAIL PROTECTED]
Sent: June 1, 2006 12:45
 I just took a quick look at your patch, though.
 There is no protection for the IPI.
 Is it OK? Does the use of IPI cause a race?
 
 Do you mean the code below?
 #ifdef XEN
  spin_lock(&call_lock);
 #else

I meant local_irq_save()/local_irq_restore(), i.e. masking the IPI.

I'm not sure.
But I don't think it is needed; otherwise, how could an LP send an IPI to itself?

 Or you mean the protection of the global purge.
 When a vcpu gets an IPI to purge the TLB,
 what it does is invalidate the TLB entry in the VHPT,
 not remove the TLB entry.
 There is no race condition.

Is there any guarantee that the vcpu which receives the IPI isn't touching the VHPT?

The vcpu which receives the IPI can touch the VHPT at the same time,
because the purge operation only sets the TLB entry invalid, like entry->ti=1.
That follows the same philosophy as Tristan's direct purge.
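The invalidate-in-place idea can be sketched as follows; the struct layout and helper names are illustrative, not the actual Xen VTLB/VHPT entry format. Because the purge only sets the invalid bit and never unlinks the entry, a concurrent traversal on the receiving cpu always sees a structurally intact chain:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified translation entry: ti = "tag invalid" bit. */
struct tlb_entry {
    struct tlb_entry *next;   /* collision-chain link, never unlinked
                               * by the IPI purge                     */
    uint64_t tag;
    unsigned int ti;          /* 1 = entry invalid                    */
};

/* Global purge via IPI: invalidate in place instead of removing, so a
 * concurrent chain traversal needs no lock against the purge. */
static void purge_entry(struct tlb_entry *e)
{
    e->ti = 1;                /* like entry->ti = 1 in the mail */
}

/* Lookup skips invalid entries but still follows the intact chain. */
static struct tlb_entry *lookup(struct tlb_entry *head, uint64_t tag)
{
    for (struct tlb_entry *e = head; e != NULL; e = e->next)
        if (!e->ti && e->tag == tag)
            return e;
    return NULL;
}
```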



--
yamahata
