Isaku Yamahata wrote:
> On Wed, Dec 24, 2008 at 01:11:03PM +0800, Cui, Dexuan wrote:
>> Isaku Yamahata wrote:
>>>> diff -r 008b68ff6095 xen/arch/ia64/xen/domain.c
>>>> --- a/xen/arch/ia64/xen/domain.c   Tue Nov 18 10:33:55 2008 +0900
>>>> +++ b/xen/arch/ia64/xen/domain.c   Mon Dec 15 18:41:52 2008 +0800
>>>> @@ -602,10 +602,8 @@ int arch_domain_create(struct domain *d,
>>>>    if ((d->arch.mm.pgd = pgd_alloc(&d->arch.mm)) == NULL)             
>>>> goto fail_nomem; 
>>>> 
>>>> -  if ( iommu_enabled && (is_hvm_domain(d) || need_iommu(d)) ){
>>>> -          if(iommu_domain_init(d) != 0)
>>>> -                  goto fail_iommu;
>>>> -  }
>>>> +  if(iommu_domain_init(d) != 0)
>>>> +          goto fail_iommu;
>>>> 
>>>>    /*
>>>>     * grant_table_create() can't fully initialize grant table for domain
>>> 
>>> Please don't drop is_hvm_domain(d) check.
>>> At this moment ia64 doesn't support iommu for PV domain because
>> Oh, thanks for the reminder. I overlooked this here.
>> 
>> Do you mean this:
>> if ( is_hvm_domain(d) )
>>     if ( iommu_domain_init(d) != 0 )
>>         goto fail_iommu;
>> This is also not OK, since we must ensure iommu_domain_init() is
>> invoked for Dom0 -- we need the function invoked to enable DMA
>> remapping.
>> 
>> So how about changing the logic to:
>> if ( (d->domain_id == 0) || is_hvm_domain(d) )
>>     if ( iommu_domain_init(d) != 0 )
>>         goto fail_iommu;
>> 
>> If you agree with this, I'll post a new patch.
> 
> Do you mean if ( d->domain_id == 0 ) clause in
> the function, intel_iommu_domain_init()?
Yes. 
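For reference, the proposal quoted above would read roughly like the following as a replacement for the hunk in the original patch. This is an untested sketch; whether the iommu_enabled test should also be retained here is an open question:

```diff
-  if ( iommu_enabled && (is_hvm_domain(d) || need_iommu(d)) ){
-          if(iommu_domain_init(d) != 0)
-                  goto fail_iommu;
-  }
+  if ( (d->domain_id == 0) || is_hvm_domain(d) ) {
+          if ( iommu_domain_init(d) != 0 )
+                  goto fail_iommu;
+  }
```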

> Is iommu map/unmap for dom0 is necessary?
>   intel_iommu_domain_init() maps all the pages except the ones xen uses
>   to dom0. I suppose this is what you want.
Yes.
When Dom0 boots up, we assign all the devices to it, so it needs the
1:1 VT-d page-table mapping.

>   However, later pages are mapped/unmapped even for dom0 because
I suppose you mean the balloon driver and the grant table operations. Correct?

>   need_iommu(dom0) returns true due to iommu_domain_init(dom0).
>   Since dom0 is PV, iommu mapping/unmapping causes a race on ia64.
In the cases of the balloon driver and the grant table, would the iommu
mapping/unmapping cause a race on IA64?
Sorry, I know little about the lockless p2m table for now. I'm trying to
understand more.

>   Only setting up iommu tables at the dom0 creation is necessary,
Could you please explain more about this? I can't get the point.

>   all "if ( iommu_enabled && (is_hvm_domain(d) || need_iommu(d)) )"
>   would be "if ( iommu_enabled && is_hvm_domain(d) && need_iommu(d) )"
Am I missing something?
#define need_iommu(d)    ((d)->need_iommu && !(d)->is_hvm)
So,
iommu_enabled && is_hvm_domain(d) && need_iommu(d)
is undoubtedly false. :-)

> 
> intel_iommu_domain_init() and dom0 memory size
>   calc_dom0_size() in xen/arch/ia64/domain.c calculates default dom0
>   memory size. You should take memory for iommu page table
>   into account because the memory size for iommu page table wouldn't
>   be neglectable.
>   probably iommu_pages = (max phys addr) / PTRS_PER_PTE_4K + (some
>   spare) where PTRS_PER_PTE_4K = (1 << (PAGE_SHIFT_4K - 3))
Now, in intel_iommu_domain_init(), with respect to the iommu mapping, Xen
maps all the pages for Dom0 except for the pages used by Xen itself.
Do you mean Xen should map only the pages actually owned by Dom0? For
instance, are you saying Xen should not map the iommu page tables? Since
drivers in Dom0 normally don't touch the iommu page tables at all, isn't
the current code OK?

> intel_iommu_domain_init() and sparse memory.
>   To be honest, I'm not sure how it matters in practice.
>   On ia64 memory might be located sparsely. So iommu page table
>   should also sparse instead of [0, max_page] - (xen page).
>   You want to use efi_memmap_walk() instead of for loop.
Thanks for pointing this out!
So my understanding is: in the current intel_iommu_domain_init(), since Xen
iterates up to 'max_page', some pages at high addresses (possibly in the
middle or at the end of a sparse layout) are not mapped even though they
should be?

Thanks,
-- Dexuan

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel
