Re: [Xen-ia64-devel] [PATCH]Fix the logic when deassign the mmio ranges for vti-domain.

2009-03-05 Thread Isaku Yamahata
On Fri, Mar 06, 2009 at 10:11:42AM +0800, Zhang, Xiantao wrote:
> 
> Isaku Yamahata wrote:
> > On Thu, Mar 05, 2009 at 11:55:10AM +0800, Zhang, Xiantao wrote:
> >> Isaku Yamahata wrote:
> >>> On Wed, Mar 04, 2009 at 05:26:41PM +0800, Zhang, Xiantao wrote:
>  So far, we have only found the msi-x case. Maybe we will add msi-x
>  support later, so this fix is also required.
> >>> 
> >>> Okay, makes sense.
> >>> 
> >>> And why GPFN_LOW_MMIO independently of addr? Shouldn't it be
> >>> aware of io_ranges[]?
> >> 
> >> For the low mmio ranges (3G-3.5G), we can use the fixed mfn
> >> GPFN_LOW_MMIO combined with ASSIGN_io to indicate whether the p2m
> >> entries are mmio ranges.  You may refer to io_ranges[]; it also
> >> uses the fixed GPFN_LOW_MMIO to initialize p2m entries for the low
> >> mmio range.
> > 
> > Hmm, there are two cases that call
> > xc_domain_memory_mapping(DPCI_REMOVE_MAPPING).
> > - Just to remove the entry. zap_domain_page_one() is wanted.
>  
>  Why remove the entries?  For an hvm domain, I think the entries
>  should always exist during the life of the guest.
>  You may refer to vmx_build_io_physmap_table; these entries are
>  created at domain initialization time.
>  
> >   the one in pt_iomem_map(), and remove_msix_mapping() except when
> >   called by pt_iomem_map()
>  
> > 
> > - mmio on the area should be routed to qemu-dm
> >   GPFN_LOW_MMIO and ASSIGN_io are wanted.
> > 
> >   remove_msix_mapping() which is called by pt_iomem_map().
> > 
> > Is there a way to distinguish them?
>  
 We don't need to distinguish them; instead we should keep
 these entries in both cases consistent with the values
 initialized by vmx_build_io_physmap_table.
> >>> 
> >>> In the current x86 implementation, mmio which isn't handled by the
> >>> xen VMM is passed to qemu-dm. On the other hand, the current
> >>> xen/ia64 checks _PAGE_IO:
> >>> if _PAGE_IO is set, the access is passed to qemu-dm; otherwise
> >>> panic_domain() is called. So the behaviour should be changed so that
> >>> a load/store on a non-present p2m entry is passed to qemu-dm, like x86.
> >> 
> >> That may change a lot of logic.
> > 
> > I think only vmx_hpw_miss() (and possibly a few related functions)
> > needs modification.
> > 
> > 
> >> For hvm/ia64, we have the assumption that all p2m entries for the
> >> mmio range should exist, and we use _PAGE_IO to identify their type.
> > 
> > But the msi-x emulation code in qemu-dm doesn't assume that.
> > So you created the patch, didn't you?
> > If you want to keep the assumption, the code to change would be
> > the msi-x emulation in qemu-dm, not the vmm.
> 
> I don't think we need to touch the code in qemu-dm; could you give a hint?

I wanted to argue that the _PAGE_IO assumption itself should be changed.
So let's modify vmx_hpw_miss() so that a load/store on an area whose p2m entry
is zero is passed to qemu-dm.
What needs to change is the test: "pte & _PAGE_IO" becomes
"(pte & _PAGE_IO) || (pte == INVALID_MFN)" or something like that.
And the argument pt_pfn(__pte(pte)) passed to emulate_io_inst() needs to be
adjusted for the pte == INVALID_MFN case. Maybe lookup_domain_mpa() needs to
be revised as well.
NOTE: I haven't checked the details, so the above description may need
minor corrections.
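Roughly, the widened test could look like the sketch below; note that the bit
position chosen for _PAGE_IO and the value of INVALID_MFN here are stand-ins
for illustration, not the real xen/ia64 definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in values; the real definitions live in the xen/ia64 headers. */
#define _PAGE_IO    (1UL << 60)
#define INVALID_MFN (~0UL)

/* Sketch of the proposed condition in vmx_hpw_miss(): forward the access
 * to qemu-dm both for explicit I/O entries and for absent p2m entries. */
static int forward_to_qemu(uint64_t pte)
{
    return (pte & _PAGE_IO) != 0 || pte == INVALID_MFN;
}
```

A normal RAM entry (neither bit set nor invalid) would still be handled by the
VMM as before.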


> In addition, maybe we need to do flush_tlb_all after removing some mappings
> for the mmio range?

Oh yes, you're right.
For the pv domain case, zap_domain_page_one() eventually calls
domain_page_flush_put(), which flushes the tlb.

Having reviewed the code, I found that the current ia64 implementation of
XEN_DOMCTL_memory_mapping is broken for hvm domains,
i.e. deassign_domain_mmio_page() is broken for fully virtualized domains.
For the hvm domain case, there is an assumption that the p2m table
of an hvm domain isn't changed while the domain is running; otherwise there
is a race. (Possibly this could be relaxed to append-only.)
zap_domain_page_one() hasn't been used for hvm domains so far, so there are
no users of it there. The balloon driver isn't supported for ia64 hvm domains
for the same reason.

So for an hvm domain, we have to do
 - domain_pause()
 - zap_domain_page_one()
   This function does
   - zero clear the p2m entry
   - flush the tlb (fortunately the current implementation happens to issue
     domain_flush_vtlb_all() for hvm domains)
 - domain_unpause()

I suppose XEN_DOMCTL_memory_mapping isn't a performance-critical path,
so pausing/unpausing the domain is acceptable.
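The pause/zap/unpause sequence above can be sketched with a toy model; the
struct and helper functions below are illustrative stand-ins, not the real
xen interfaces:

```c
#include <assert.h>
#include <stdint.h>

#define P2M_ENTRIES 16

/* Toy model of a domain, standing in for the real struct domain. */
struct toy_domain {
    int      paused;
    int      tlb_flushed;
    uint64_t p2m[P2M_ENTRIES];
};

static void domain_pause(struct toy_domain *d)   { d->paused = 1; }
static void domain_unpause(struct toy_domain *d) { d->paused = 0; }

/* Models zap_domain_page_one(): zero-clear the p2m entry, then flush the
 * tlb (standing in for domain_flush_vtlb_all() on a real hvm domain). */
static void zap_domain_page_one(struct toy_domain *d, unsigned gpfn)
{
    d->p2m[gpfn] = 0;
    d->tlb_flushed = 1;
}

/* The proposed hvm deassign sequence: pause, zap, unpause. */
static void deassign_hvm_mmio_page(struct toy_domain *d, unsigned gpfn)
{
    domain_pause(d);
    zap_domain_page_one(d, gpfn);
    domain_unpause(d);
}
```

Pausing closes the race window: the guest cannot fault on the entry between
the zero-clear and the tlb flush.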

-- 
yamahata

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


RE: [Xen-ia64-devel] [PATCH]Fix the logic when deassign the mmio ranges for vti-domain.

2009-03-05 Thread Zhang, Xiantao

Isaku Yamahata wrote:
> On Thu, Mar 05, 2009 at 11:55:10AM +0800, Zhang, Xiantao wrote:
>> Isaku Yamahata wrote:
>>> On Wed, Mar 04, 2009 at 05:26:41PM +0800, Zhang, Xiantao wrote:
 So far, we have only found the msi-x case. Maybe we will add msi-x
 support later, so this fix is also required.
>>> 
>>> Okay, makes sense.
>>> 
>>> And why GPFN_LOW_MMIO independently of addr? Shouldn't it be
>>> aware of io_ranges[]?
>> 
>> For the low mmio ranges (3G-3.5G), we can use the fixed mfn
>> GPFN_LOW_MMIO combined with ASSIGN_io to indicate whether the p2m
>> entries are mmio ranges.  You may refer to io_ranges[]; it also
>> uses the fixed GPFN_LOW_MMIO to initialize p2m entries for the low
>> mmio range.
> 
> Hmm, there are two cases that call
> xc_domain_memory_mapping(DPCI_REMOVE_MAPPING).
> - Just to remove the entry. zap_domain_page_one() is wanted.
 
 Why remove the entries?  For an hvm domain, I think the entries
 should always exist during the life of the guest.
 You may refer to vmx_build_io_physmap_table; these entries are
 created at domain initialization time.
 
>   the one in pt_iomem_map(), and remove_msix_mapping() except when
>   called by pt_iomem_map()
 
> 
> - mmio on the area should be routed to qemu-dm
>   GPFN_LOW_MMIO and ASSIGN_io are wanted.
> 
>   remove_msix_mapping() which is called by pt_iomem_map().
> 
> Is there a way to distinguish them?
 
 We don't need to distinguish them; instead we should keep
 these entries in both cases consistent with the values
 initialized by vmx_build_io_physmap_table.
>>> 
>>> In the current x86 implementation, mmio which isn't handled by the
>>> xen VMM is passed to qemu-dm. On the other hand, the current
>>> xen/ia64 checks _PAGE_IO:
>>> if _PAGE_IO is set, the access is passed to qemu-dm; otherwise
>>> panic_domain() is called. So the behaviour should be changed so that
>>> a load/store on a non-present p2m entry is passed to qemu-dm, like x86.
>> 
>> That may change a lot of logic.
> 
> I think only vmx_hpw_miss() (and possibly a few related functions)
> needs modification.
> 
> 
>> For hvm/ia64, we have the assumption that all p2m entries for the
>> mmio range should exist, and we use _PAGE_IO to identify their type.
> 
> But the msi-x emulation code in qemu-dm doesn't assume that.
> So you created the patch, didn't you?
> If you want to keep the assumption, the code to change would be
> the msi-x emulation in qemu-dm, not the vmm.

I don't think we need to touch the code in qemu-dm; could you give a hint?

In addition, maybe we need to do flush_tlb_all after removing some mappings
for the mmio range?
Xiantao



Re: [Xen-ia64-devel] [PATCH]Fix the logic when deassign the mmio ranges for vti-domain.

2009-03-04 Thread Isaku Yamahata
On Thu, Mar 05, 2009 at 11:55:10AM +0800, Zhang, Xiantao wrote:
> Isaku Yamahata wrote:
> > On Wed, Mar 04, 2009 at 05:26:41PM +0800, Zhang, Xiantao wrote:
> >> So far, we have only found the msi-x case. Maybe we will add msi-x
> >> support later, so this fix is also required.
> > 
> > Okay, makes sense.
> > 
> > And why GPFN_LOW_MMIO independently of addr? Shouldn't it be aware
> > of io_ranges[]?
>  
>  For the low mmio ranges (3G-3.5G), we can use the fixed mfn
>  GPFN_LOW_MMIO combined with ASSIGN_io to indicate whether the p2m
>  entries are mmio ranges.  You may refer to io_ranges[]; it also
>  uses the fixed GPFN_LOW_MMIO to initialize p2m entries for the low
>  mmio range.
> >>> 
> >>> Hmm, there are two cases that call
> >>> xc_domain_memory_mapping(DPCI_REMOVE_MAPPING).
> >>> - Just to remove the entry. zap_domain_page_one() is wanted.
> >> 
> >> Why remove the entries?  For an hvm domain, I think the entries
> >> should always exist during the life of the guest.
> >> You may refer to vmx_build_io_physmap_table; these entries are
> >> created at domain initialization time.
> >> 
> >>>   the one in pt_iomem_map(), and remove_msix_mapping() except when
> >>>   called by pt_iomem_map()
> >> 
> >>> 
> >>> - mmio on the area should be routed to qemu-dm
> >>>   GPFN_LOW_MMIO and ASSIGN_io are wanted.
> >>> 
> >>>   remove_msix_mapping() which is called by pt_iomem_map().
> >>> 
> >>> Is there a way to distinguish them?
> >> 
> >> We don't need to distinguish them; instead we should keep
> >> these entries in both cases consistent with the values
> >> initialized by vmx_build_io_physmap_table.
> > 
> > In the current x86 implementation, mmio which isn't handled by the
> > xen VMM is passed to qemu-dm.
> > On the other hand, the current xen/ia64 checks _PAGE_IO:
> > if _PAGE_IO is set, the access is passed to qemu-dm; otherwise
> > panic_domain() is called. So the behaviour should be changed so that
> > a load/store on a non-present p2m entry is passed to qemu-dm, like x86.
> 
> That may change a lot of logic.

I think only vmx_hpw_miss() (and possibly a few related functions)
needs modification.


> For hvm/ia64, we have the assumption that all p2m entries for the mmio
> range should exist, and we use _PAGE_IO to identify their type.

But the msi-x emulation code in qemu-dm doesn't assume that.
So you created the patch, didn't you?
If you want to keep the assumption, the code to change would be
the msi-x emulation in qemu-dm, not the vmm.

thanks,
-- 
yamahata



RE: [Xen-ia64-devel] [PATCH]Fix the logic when deassign the mmio ranges for vti-domain.

2009-03-04 Thread Zhang, Xiantao
Isaku Yamahata wrote:
> On Wed, Mar 04, 2009 at 05:26:41PM +0800, Zhang, Xiantao wrote:
>> So far, we have only found the msi-x case. Maybe we will add msi-x
>> support later, so this fix is also required.
> 
> Okay, makes sense.
> 
> And why GPFN_LOW_MMIO independently of addr? Shouldn't it be aware
> of io_ranges[]?
 
 For the low mmio ranges (3G-3.5G), we can use the fixed mfn
 GPFN_LOW_MMIO combined with ASSIGN_io to indicate whether the p2m
 entries are mmio ranges.  You may refer to io_ranges[]; it also
 uses the fixed GPFN_LOW_MMIO to initialize p2m entries for the low
 mmio range.
>>> 
>>> Hmm, there are two cases that call
>>> xc_domain_memory_mapping(DPCI_REMOVE_MAPPING).
>>> - Just to remove the entry. zap_domain_page_one() is wanted.
>> 
>> Why remove the entries?  For an hvm domain, I think the entries
>> should always exist during the life of the guest.
>> You may refer to vmx_build_io_physmap_table; these entries are
>> created at domain initialization time.
>> 
>>>   the one in pt_iomem_map(), and remove_msix_mapping() except when
>>>   called by pt_iomem_map()
>> 
>>> 
>>> - mmio on the area should be routed to qemu-dm
>>>   GPFN_LOW_MMIO and ASSIGN_io are wanted.
>>> 
>>>   remove_msix_mapping() which is called by pt_iomem_map().
>>> 
>>> Is there a way to distinguish them?
>> 
>> We don't need to distinguish them; instead we should keep
>> these entries in both cases consistent with the values
>> initialized by vmx_build_io_physmap_table.
> 
> In the current x86 implementation, mmio which isn't handled by the
> xen VMM is passed to qemu-dm.
> On the other hand, the current xen/ia64 checks _PAGE_IO:
> if _PAGE_IO is set, the access is passed to qemu-dm; otherwise
> panic_domain() is called. So the behaviour should be changed so that
> a load/store on a non-present p2m entry is passed to qemu-dm, like x86.

That may change a lot of logic.  For hvm/ia64, we have the assumption that
all p2m entries for the mmio range should exist, and we use _PAGE_IO to
identify their type.

Xiantao





Re: [Xen-ia64-devel] [PATCH]Fix the logic when deassign the mmio ranges for vti-domain.

2009-03-04 Thread Isaku Yamahata
On Wed, Mar 04, 2009 at 05:26:41PM +0800, Zhang, Xiantao wrote:
> So far, we have only found the msi-x case. Maybe we will add msi-x support
> later, so this fix is also required.

Okay, makes sense.

> >>> And why GPFN_LOW_MMIO independently of addr? Shouldn't it be aware
> >>> of io_ranges[]?
> >> 
> >> For the low mmio ranges (3G-3.5G), we can use the fixed mfn
> >> GPFN_LOW_MMIO combined with ASSIGN_io to indicate whether the p2m
> >> entries are mmio ranges.  You may refer to io_ranges[]; it also
> >> uses the fixed GPFN_LOW_MMIO to initialize p2m entries for the low
> >> mmio range.
> > 
> > Hmm, there are two cases that call
> > xc_domain_memory_mapping(DPCI_REMOVE_MAPPING).
> > - Just to remove the entry. zap_domain_page_one() is wanted.
> 
> Why remove the entries?  For an hvm domain, I think the entries should
> always exist during the life of the guest.
> You may refer to vmx_build_io_physmap_table; these entries are
> created at domain initialization time.
> 
> >   the one in pt_iomem_map(), and remove_msix_mapping() except when
> >   called by pt_iomem_map()
> 
> > 
> > - mmio on the area should be routed to qemu-dm
> >   GPFN_LOW_MMIO and ASSIGN_io are wanted.
> > 
> >   remove_msix_mapping() which is called by pt_iomem_map().
> > 
> > Is there a way to distinguish them?
> 
> We don't need to distinguish them; instead we should keep these
> entries in both cases consistent with the values initialized by
> vmx_build_io_physmap_table.

In the current x86 implementation, mmio which isn't handled by the
xen VMM is passed to qemu-dm.
On the other hand, the current xen/ia64 checks _PAGE_IO:
if _PAGE_IO is set, the access is passed to qemu-dm; otherwise
panic_domain() is called. So the behaviour should be changed so that
a load/store on a non-present p2m entry is passed to qemu-dm, like x86.

-- 
yamahata



RE: [Xen-ia64-devel] [PATCH]Fix the logic when deassign the mmio ranges for vti-domain.

2009-03-04 Thread Zhang, Xiantao
Isaku Yamahata wrote:
> On Tue, Mar 03, 2009 at 05:32:42PM +0800, Zhang, Xiantao wrote:
>> 
>> Isaku Yamahata wrote:
>>> Could you elaborate on the concrete issue you're seeing?
>>> I guess the issue occurs when a passed-through pci device
>>> is unplugged. But in that case, the region was occupied by
>>> the device, so qemu wouldn't have seen io on the area anyway.
>> 
>> When assigning a device to an hvm domain, all mmio regions of the pci
>> device should be mapped in the p2m table, but there is a corner case:
>> accesses to some pages (for example, the vector table in an msi-x bar)
>> in one region may need to be routed to qemu so that the corresponding
>> logic can be emulated there.  In that case, we have to remove the
>> mapping for the specified pages in the p2m, let accesses to these pages
>> be intercepted by the hypervisor, and forward the io requests to qemu.
>> But zap_domain_page_one can't initialize the mmio p2m entries for an
>> hvm domain.  Clear?  :-)
> 
> You mean pt_iomem_map(), which calls remove_msix_mapping() in
> pass-through.c of qemu-dm? Is there any case other than msi-x?
> I couldn't find any other useful case because the current xen/ia64
> doesn't support msi/msi-x.

So far, we have only found the msi-x case. Maybe we will add msi-x support
later, so this fix is also required.

> 
>>> And why GPFN_LOW_MMIO independently of addr? Shouldn't it be aware
>>> of io_ranges[]?
>> 
>> For the low mmio ranges (3G-3.5G), we can use the fixed mfn
>> GPFN_LOW_MMIO combined with ASSIGN_io to indicate whether the p2m
>> entries are mmio ranges.  You may refer to io_ranges[]; it also
>> uses the fixed GPFN_LOW_MMIO to initialize p2m entries for the low
>> mmio range.
> 
> Hmm, there are two cases that call
> xc_domain_memory_mapping(DPCI_REMOVE_MAPPING).
> - Just to remove the entry. zap_domain_page_one() is wanted.

Why remove the entries?  For an hvm domain, I think the entries should
always exist during the life of the guest.
You may refer to vmx_build_io_physmap_table; these entries are
created at domain initialization time.

>   the one in pt_iomem_map(), and remove_msix_mapping() except when
>   called by pt_iomem_map()

> 
> - mmio on the area should be routed to qemu-dm
>   GPFN_LOW_MMIO and ASSIGN_io are wanted.
> 
>   remove_msix_mapping() which is called by pt_iomem_map().
> 
> Is there a way to distinguish them?

We don't need to distinguish them; instead we should keep these entries
in both cases consistent with the values initialized by
vmx_build_io_physmap_table.


> thanks,
> 
>> Xiantao
>> 
>>> 
>>> On Tue, Mar 03, 2009 at 03:14:02PM +0800, Zhang, Xiantao wrote:
 PATCH: Fix the logic when deassign the mmio ranges for vti-domain.
 
 When de-assigning the mmio range, the p2m entry should be restored to
 its original value; otherwise, the mmio range's type may fail to be
 determined.
 
 Signed-off-by: Xiantao Zhang 
 
 diff -r 67f2e14613ef xen/arch/ia64/xen/mm.c
 --- a/xen/arch/ia64/xen/mm.c   Tue Feb 10 13:47:02 2009 +0800
 +++ b/xen/arch/ia64/xen/mm.c   Tue Mar 03 15:04:54 2009 +0800
 @@ -1508,8 +1508,14 @@ deassign_domain_mmio_page(struct domain
          return -EINVAL;
      }
 
 -    for (; addr < end; addr += PAGE_SIZE )
 -        zap_domain_page_one(d, addr, 0, INVALID_MFN);
 +    for (; addr < end; addr += PAGE_SIZE ) {
 +        if (is_hvm_domain(d))
 +            __assign_domain_page(d, addr, GPFN_LOW_MMIO << PAGE_SHIFT,
 +                                 ASSIGN_writable | ASSIGN_io);
 +        else
 +            zap_domain_page_one(d, addr, 0, INVALID_MFN);
 +    }
 +
      return 0;
  }
 
>>> 




Re: [Xen-ia64-devel] [PATCH]Fix the logic when deassign the mmio ranges for vti-domain.

2009-03-03 Thread Isaku Yamahata
On Tue, Mar 03, 2009 at 05:32:42PM +0800, Zhang, Xiantao wrote:
> 
> Isaku Yamahata wrote:
> > Could you elaborate on the concrete issue you're seeing?
> > I guess the issue occurs when a passed-through pci device
> > is unplugged. But in that case, the region was occupied by
> > the device, so qemu wouldn't have seen io on the area anyway.
> 
> When assigning a device to an hvm domain, all mmio regions of the pci
> device should be mapped in the p2m table, but there is a corner case:
> accesses to some pages (for example, the vector table in an msi-x bar)
> in one region may need to be routed to qemu so that the corresponding
> logic can be emulated there.  In that case, we have to remove the mapping
> for the specified pages in the p2m, let accesses to these pages be
> intercepted by the hypervisor, and forward the io requests to qemu.  But
> zap_domain_page_one can't initialize the mmio p2m entries for an hvm
> domain.  Clear?  :-)

You mean pt_iomem_map(), which calls remove_msix_mapping() in
pass-through.c of qemu-dm? Is there any case other than msi-x?
I couldn't find any other useful case because the current xen/ia64
doesn't support msi/msi-x.


> > And why GPFN_LOW_MMIO independently of addr? Shouldn't it be aware
> > of io_ranges[]?
> 
> For the low mmio ranges (3G-3.5G), we can use the fixed mfn GPFN_LOW_MMIO
> combined with ASSIGN_io to indicate whether the p2m entries are mmio
> ranges.  You may refer to io_ranges[]; it also uses the fixed
> GPFN_LOW_MMIO to initialize p2m entries for the low mmio range.

Hmm, there are two cases that call xc_domain_memory_mapping(DPCI_REMOVE_MAPPING).
- Just to remove the entry.
  zap_domain_page_one() is wanted.

  the one in pt_iomem_map(), and remove_msix_mapping() except when called
  by pt_iomem_map()


- mmio on the area should be routed to qemu-dm
  GPFN_LOW_MMIO and ASSIGN_io are wanted.

  remove_msix_mapping() which is called by pt_iomem_map().

Is there a way to distinguish them?

thanks,

> Xiantao
> 
> > 
> > On Tue, Mar 03, 2009 at 03:14:02PM +0800, Zhang, Xiantao wrote:
> >> PATCH: Fix the logic when deassign the mmio ranges for vti-domain.
> >> 
> >> When de-assigning the mmio range, the p2m entry should be restored to
> >> its original value; otherwise, the mmio range's type may fail to be determined.
> >> 
> >> Signed-off-by: Xiantao Zhang 
> >> 
> >> diff -r 67f2e14613ef xen/arch/ia64/xen/mm.c
> >> --- a/xen/arch/ia64/xen/mm.c   Tue Feb 10 13:47:02 2009 +0800
> >> +++ b/xen/arch/ia64/xen/mm.c   Tue Mar 03 15:04:54 2009 +0800
> >> @@ -1508,8 +1508,14 @@ deassign_domain_mmio_page(struct domain
> >>          return -EINVAL;
> >>      }
> >> 
> >> -    for (; addr < end; addr += PAGE_SIZE )
> >> -        zap_domain_page_one(d, addr, 0, INVALID_MFN);
> >> +    for (; addr < end; addr += PAGE_SIZE ) {
> >> +        if (is_hvm_domain(d))
> >> +            __assign_domain_page(d, addr, GPFN_LOW_MMIO << PAGE_SHIFT,
> >> +                                 ASSIGN_writable | ASSIGN_io);
> >> +        else
> >> +            zap_domain_page_one(d, addr, 0, INVALID_MFN);
> >> +    }
> >> +
> >>      return 0;
> >>  }
> >> 
> > 

-- 
yamahata



RE: [Xen-ia64-devel] [PATCH]Fix the logic when deassign the mmio ranges for vti-domain.

2009-03-03 Thread Zhang, Xiantao

Isaku Yamahata wrote:
> Could you elaborate on the concrete issue you're seeing?
> I guess the issue occurs when a passed-through pci device
> is unplugged. But in that case, the region was occupied by
> the device, so qemu wouldn't have seen io on the area anyway.

When assigning a device to an hvm domain, all mmio regions of the pci device
should be mapped in the p2m table, but there is a corner case: accesses to
some pages (for example, the vector table in an msi-x bar) in one region may
need to be routed to qemu so that the corresponding logic can be emulated
there.  In that case, we have to remove the mapping for the specified pages
in the p2m, let accesses to these pages be intercepted by the hypervisor,
and forward the io requests to qemu.  But zap_domain_page_one can't
initialize the mmio p2m entries for an hvm domain.  Clear?  :-)

> And why GPFN_LOW_MMIO independently of addr? Shouldn't it be aware
> of io_ranges[]?

For the low mmio ranges (3G-3.5G), we can use the fixed mfn GPFN_LOW_MMIO
combined with ASSIGN_io to indicate whether the p2m entries are mmio ranges.
You may refer to io_ranges[]; it also uses the fixed GPFN_LOW_MMIO to
initialize p2m entries for the low mmio range.
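As a rough illustration of that encoding (the constant values below are
hypothetical stand-ins, not the real xen/ia64 definitions), an mmio p2m
entry combines the fixed GPFN_LOW_MMIO frame with an io attribute bit, and
the type test just checks that bit:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in values for illustration; real ones live in the xen/ia64 headers. */
#define PAGE_SHIFT     14                            /* 16KB pages, assumed */
#define GPFN_LOW_MMIO  (0xC0000000UL >> PAGE_SHIFT)  /* hypothetical */
#define _PAGE_IO       (1UL << 60)                   /* hypothetical bit */

/* Model of composing a low-mmio p2m entry: the fixed GPFN_LOW_MMIO frame
 * plus the io attribute bit. */
static uint64_t make_low_mmio_pte(void)
{
    return ((uint64_t)GPFN_LOW_MMIO << PAGE_SHIFT) | _PAGE_IO;
}

/* The type test: an entry denotes an mmio range iff the io bit is set. */
static int pte_is_mmio(uint64_t pte)
{
    return (pte & _PAGE_IO) != 0;
}
```

This is why zeroing the entry (as zap_domain_page_one does) loses the type
information the test relies on.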
Xiantao

> 
> On Tue, Mar 03, 2009 at 03:14:02PM +0800, Zhang, Xiantao wrote:
>> PATCH: Fix the logic when deassign the mmio ranges for vti-domain.
>> 
>> When de-assigning the mmio range, the p2m entry should be restored to
>> its original value; otherwise, the mmio range's type may fail to be determined.
>> 
>> Signed-off-by: Xiantao Zhang 
>> 
>> diff -r 67f2e14613ef xen/arch/ia64/xen/mm.c
>> --- a/xen/arch/ia64/xen/mm.c Tue Feb 10 13:47:02 2009 +0800
>> +++ b/xen/arch/ia64/xen/mm.c Tue Mar 03 15:04:54 2009 +0800
>> @@ -1508,8 +1508,14 @@ deassign_domain_mmio_page(struct domain
>>          return -EINVAL;
>>      }
>> 
>> -    for (; addr < end; addr += PAGE_SIZE )
>> -        zap_domain_page_one(d, addr, 0, INVALID_MFN);
>> +    for (; addr < end; addr += PAGE_SIZE ) {
>> +        if (is_hvm_domain(d))
>> +            __assign_domain_page(d, addr, GPFN_LOW_MMIO << PAGE_SHIFT,
>> +                                 ASSIGN_writable | ASSIGN_io);
>> +        else
>> +            zap_domain_page_one(d, addr, 0, INVALID_MFN);
>> +    }
>> +
>>      return 0;
>>  }
>> 
> 




Re: [Xen-ia64-devel] [PATCH]Fix the logic when deassign the mmio ranges for vti-domain.

2009-03-03 Thread Isaku Yamahata
Could you elaborate on the concrete issue you're seeing?
I guess the issue occurs when a passed-through pci device
is unplugged. But in that case, the region was occupied by
the device, so qemu wouldn't have seen io on the area anyway.

And why GPFN_LOW_MMIO independently of addr? Shouldn't it be aware
of io_ranges[]?

thanks,

On Tue, Mar 03, 2009 at 03:14:02PM +0800, Zhang, Xiantao wrote:
> PATCH: Fix the logic when deassign the mmio ranges for vti-domain.
> 
> When de-assigning the mmio range, the p2m entry should be restored to
> its original value; otherwise, the mmio range's type may fail to be determined.
> 
> Signed-off-by: Xiantao Zhang 
> 
> diff -r 67f2e14613ef xen/arch/ia64/xen/mm.c
> --- a/xen/arch/ia64/xen/mm.c  Tue Feb 10 13:47:02 2009 +0800
> +++ b/xen/arch/ia64/xen/mm.c  Tue Mar 03 15:04:54 2009 +0800
> @@ -1508,8 +1508,14 @@ deassign_domain_mmio_page(struct domain 
>          return -EINVAL;
>      }
>  
> -    for (; addr < end; addr += PAGE_SIZE )
> -        zap_domain_page_one(d, addr, 0, INVALID_MFN);
> +    for (; addr < end; addr += PAGE_SIZE ) {
> +        if (is_hvm_domain(d))
> +            __assign_domain_page(d, addr, GPFN_LOW_MMIO << PAGE_SHIFT,
> +                                 ASSIGN_writable | ASSIGN_io);
> +        else
> +            zap_domain_page_one(d, addr, 0, INVALID_MFN);
> +    }
> +
>  return 0;
>  }
>  


-- 
yamahata
