Re: [Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Philippe Gerum
Jan Kiszka wrote:
> Philippe Gerum wrote:
>> Jan Kiszka wrote:
>>> Philippe Gerum wrote:
 Jan Kiszka wrote:
> Philippe Gerum wrote:
>> Jan Kiszka wrote:
>>> Philippe Gerum wrote:
 Jan Kiszka wrote:
> Philippe Gerum wrote:
>> Jan Kiszka wrote:
>>> Hi,
>>>
>>> doesn't this patch [1] have some relevance for us as well? As we use
>>> xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
>>> suggest that we apply this unconditionally at xnarch level. Ideas 
>>> welcome.
>>>
>> Yes, I think it makes a lot of sense on powerpc at least, since 
>> doing so will
>> set the PAGE_GUARDED bit as well, and we obviously want to avoid any
>> out-of-order access of I/O memory.
>>
>> (I don't see the reason to force the VM_RESERVED and VM_IO on the 
>> vma though,
>> since remap_pfn_range will do it anyway.)
> No, I was talking about cases where we may pass kmalloc'ed memory to
> xnarch_remap_io_page_range. In that case, caching and out-of-order
> access may be desired for performance reasons.
>
 xnarch_remap_io_page_range is intended for I/O memory only, some 
 assumptions are
 made on this. rtdm_mmap_buffer() should be fixed; it would be much 
 better to
 define another internal interface at xnarch level to specifically 
 perform
 kmalloc mapping.
>>> Yeah, probably. But I think the issue is not just limited to RTDM. The
>>> xnheap can be kmalloc-hosted as well.
>>>
>> This one is used with DMA memory. What I would suggest, is something 
>> like this:
>>
>> --- ksrc/skins/rtdm/drvlib.c (revision 3590)
>> +++ ksrc/skins/rtdm/drvlib.c (working copy)
>> @@ -1738,9 +1738,12 @@
>>  }
>>  return 0;
>>  } else
>> -#endif /* CONFIG_MMU */
>>  return xnarch_remap_io_page_range(vma, maddr, paddr,
>>size, PAGE_SHARED);
>> +#else
>> +return xnarch_remap_kmem_page_range(vma, maddr, paddr,
>> +size, PAGE_SHARED);
>> +#endif /* CONFIG_MMU */
>>  }
>>
>>  static struct file_operations rtdm_mmap_fops = {
>>
>>
>> I.e. split the cases where MMU is absent from the one where MMU is there 
>> but we
>> come from rtdm_iomap_to_user.
> Makes no sense to me yet. With CONFIG_MMU we have 3 cases, not just two,
> no? We have to use mmap_data->src_paddr to tell kmem_page apart from
> io_page.
 That's what the patch I sent right after this one does.
>>> Nope, vaddr can be NULL in both cases, only paddr is differentiating.
>>>
>>> This one should do what we need:
>>>
>>> Index: ksrc/skins/rtdm/drvlib.c
>>> ===
>>> --- ksrc/skins/rtdm/drvlib.c(Revision 3594)
>>> +++ ksrc/skins/rtdm/drvlib.c(Arbeitskopie)
>>> @@ -1739,8 +1739,12 @@ static int rtdm_mmap_buffer(struct file 
>>> return 0;
>>> } else
>>>  #endif /* CONFIG_MMU */
>>> +   if (mmap_data->src_paddr)
>>> return xnarch_remap_io_page_range(vma, maddr, paddr,
>>>   size, PAGE_SHARED);
>>> +   else
>>> +   return xnarch_remap_kmem_page_range(vma, maddr, paddr,
>>> +   size, PAGE_SHARED);
>>>  }
>>>  
>>>  static struct file_operations rtdm_mmap_fops = {
>>>
>>>
>> Ok. So if that's fine with you, I think we should merge that because the 
>> current
>> situation is error-prone.
> 
> Will do, patches for 2.3..trunk in stand-by for now. Do you have a
> xnarch_remap_kmem_page_range definition at hand?
>

Yes, it's just been merged.

>> For the time being, I've defined a remap_kmem wrapper which mirrors the 
>> previous
>> actions of the remap_io one, before I fixed the latter with the noncache
>> protection bits. At least we should get the same behaviour than previously 
>> wrt
>> to rtdm_mmap_buffer. This leaves some time to think about the kmem mapping 
>> modes
>> without breaking the current situation, but they should be correct now.
>>
> 
> OK.
> 
> Jan
> 


-- 
Philippe.



Re: [Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Jan Kiszka
Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> Jan Kiszka wrote:
 Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> Jan Kiszka wrote:
 Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Hi,
>>
>> doesn't this patch [1] have some relevance for us as well? As we use
>> xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
>> suggest that we apply this unconditionally at xnarch level. Ideas 
>> welcome.
>>
> Yes, I think it makes a lot of sense on powerpc at least, since doing 
> so will
> set the PAGE_GUARDED bit as well, and we obviously want to avoid any
> out-of-order access of I/O memory.
>
> (I don't see the reason to force the VM_RESERVED and VM_IO on the vma 
> though,
> since remap_pfn_range will do it anyway.)
 No, I was talking about cases where we may pass kmalloc'ed memory to
 xnarch_remap_io_page_range. In that case, caching and out-of-order
 access may be desired for performance reasons.

>>> xnarch_remap_io_page_range is intended for I/O memory only, some 
>>> assumptions are
>>> made on this. rtdm_mmap_buffer() should be fixed; it would be much 
>>> better to
>>> define another internal interface at xnarch level to specifically 
>>> perform
>>> kmalloc mapping.
>> Yeah, probably. But I think the issue is not just limited to RTDM. The
>> xnheap can be kmalloc-hosted as well.
>>
> This one is used with DMA memory. What I would suggest, is something like 
> this:
>
> --- ksrc/skins/rtdm/drvlib.c  (revision 3590)
> +++ ksrc/skins/rtdm/drvlib.c  (working copy)
> @@ -1738,9 +1738,12 @@
>   }
>   return 0;
>   } else
> -#endif /* CONFIG_MMU */
>   return xnarch_remap_io_page_range(vma, maddr, paddr,
> size, PAGE_SHARED);
> +#else
> + return xnarch_remap_kmem_page_range(vma, maddr, paddr,
> + size, PAGE_SHARED);
> +#endif /* CONFIG_MMU */
>  }
>
>  static struct file_operations rtdm_mmap_fops = {
>
>
> I.e. split the cases where MMU is absent from the one where MMU is there 
> but we
> come from rtdm_iomap_to_user.
 Makes no sense to me yet. With CONFIG_MMU we have 3 cases, not just two,
 no? We have to use mmap_data->src_paddr to tell kmem_page apart from
 io_page.
>>> That's what the patch I sent right after this one does.
>> Nope, vaddr can be NULL in both cases, only paddr is differentiating.
>>
>> This one should do what we need:
>>
>> Index: ksrc/skins/rtdm/drvlib.c
>> ===
>> --- ksrc/skins/rtdm/drvlib.c (Revision 3594)
>> +++ ksrc/skins/rtdm/drvlib.c (Arbeitskopie)
>> @@ -1739,8 +1739,12 @@ static int rtdm_mmap_buffer(struct file 
>>  return 0;
>>  } else
>>  #endif /* CONFIG_MMU */
>> +if (mmap_data->src_paddr)
>>  return xnarch_remap_io_page_range(vma, maddr, paddr,
>>size, PAGE_SHARED);
>> +else
>> +return xnarch_remap_kmem_page_range(vma, maddr, paddr,
>> +size, PAGE_SHARED);
>>  }
>>  
>>  static struct file_operations rtdm_mmap_fops = {
>>
>>
> 
> Ok. So if that's fine with you, I think we should merge that because the 
> current
> situation is error-prone.

Will do, patches for 2.3..trunk are on stand-by for now. Do you have a
xnarch_remap_kmem_page_range definition at hand?

> 
> For the time being, I've defined a remap_kmem wrapper which mirrors the 
> previous
> actions of the remap_io one, before I fixed the latter with the noncache
> protection bits. At least we should get the same behaviour than previously wrt
> to rtdm_mmap_buffer. This leaves some time to think about the kmem mapping 
> modes
> without breaking the current situation, but they should be correct now.
> 

OK.

Jan





Re: [Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Philippe Gerum
Jan Kiszka wrote:
> Philippe Gerum wrote:
>> Jan Kiszka wrote:
>>> Philippe Gerum wrote:
 Jan Kiszka wrote:
> Philippe Gerum wrote:
>> Jan Kiszka wrote:
>>> Philippe Gerum wrote:
 Jan Kiszka wrote:
> Hi,
>
> doesn't this patch [1] have some relevance for us as well? As we use
> xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
> suggest that we apply this unconditionally at xnarch level. Ideas 
> welcome.
>
 Yes, I think it makes a lot of sense on powerpc at least, since doing 
 so will
 set the PAGE_GUARDED bit as well, and we obviously want to avoid any
 out-of-order access of I/O memory.

 (I don't see the reason to force the VM_RESERVED and VM_IO on the vma 
 though,
 since remap_pfn_range will do it anyway.)
>>> No, I was talking about cases where we may pass kmalloc'ed memory to
>>> xnarch_remap_io_page_range. In that case, caching and out-of-order
>>> access may be desired for performance reasons.
>>>
>> xnarch_remap_io_page_range is intended for I/O memory only, some 
>> assumptions are
>> made on this. rtdm_mmap_buffer() should be fixed; it would be much 
>> better to
>> define another internal interface at xnarch level to specifically perform
>> kmalloc mapping.
> Yeah, probably. But I think the issue is not just limited to RTDM. The
> xnheap can be kmalloc-hosted as well.
>
 This one is used with DMA memory. What I would suggest, is something like 
 this:

 --- ksrc/skins/rtdm/drvlib.c   (revision 3590)
 +++ ksrc/skins/rtdm/drvlib.c   (working copy)
 @@ -1738,9 +1738,12 @@
}
return 0;
} else
 -#endif /* CONFIG_MMU */
return xnarch_remap_io_page_range(vma, maddr, paddr,
  size, PAGE_SHARED);
 +#else
 +  return xnarch_remap_kmem_page_range(vma, maddr, paddr,
 +  size, PAGE_SHARED);
 +#endif /* CONFIG_MMU */
  }

  static struct file_operations rtdm_mmap_fops = {


 I.e. split the cases where MMU is absent from the one where MMU is there 
 but we
 come from rtdm_iomap_to_user.
>>> Makes no sense to me yet. With CONFIG_MMU we have 3 cases, not just two,
>>> no? We have to use mmap_data->src_paddr to tell kmem_page apart from
>>> io_page.
>> That's what the patch I sent right after this one does.
> 
> Nope, vaddr can be NULL in both cases, only paddr is differentiating.
> 
> This one should do what we need:
> 
> Index: ksrc/skins/rtdm/drvlib.c
> ===
> --- ksrc/skins/rtdm/drvlib.c  (Revision 3594)
> +++ ksrc/skins/rtdm/drvlib.c  (Arbeitskopie)
> @@ -1739,8 +1739,12 @@ static int rtdm_mmap_buffer(struct file 
>   return 0;
>   } else
>  #endif /* CONFIG_MMU */
> + if (mmap_data->src_paddr)
>   return xnarch_remap_io_page_range(vma, maddr, paddr,
> size, PAGE_SHARED);
> + else
> + return xnarch_remap_kmem_page_range(vma, maddr, paddr,
> + size, PAGE_SHARED);
>  }
>  
>  static struct file_operations rtdm_mmap_fops = {
> 
> 

Ok. So if that's fine with you, I think we should merge that because the current
situation is error-prone.

For the time being, I've defined a remap_kmem wrapper which mirrors the previous
actions of the remap_io one, before I fixed the latter with the noncache
protection bits. At least we should get the same behaviour as before wrt
rtdm_mmap_buffer. This leaves some time to think about the kmem mapping modes
without breaking the current situation, but they should be correct now.
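
(For illustration only: a minimal sketch of what such a kmem wrapper could look
like, assuming it simply reuses remap_pfn_range() with the caller's cacheable
protection bits and skips pgprot_noncached(); hypothetical code, not the
committed xnarch implementation.)

#include <linux/mm.h>	/* remap_pfn_range() */

static inline int xnarch_remap_kmem_page_range(struct vm_area_struct *vma,
					       unsigned long from,
					       unsigned long to,
					       unsigned long size,
					       pgprot_t prot)
{
	/* Cacheable mapping for kmalloc'ed memory: same call as the old
	 * remap_io path, but without forcing the noncached bits. */
	return remap_pfn_range(vma, from, to >> PAGE_SHIFT, size, prot);
}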

-- 
Philippe.



Re: [Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Jan Kiszka
Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> Jan Kiszka wrote:
 Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> Jan Kiszka wrote:
 Hi,

 doesn't this patch [1] have some relevance for us as well? As we use
 xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
 suggest that we apply this unconditionally at xnarch level. Ideas 
 welcome.

>>> Yes, I think it makes a lot of sense on powerpc at least, since doing 
>>> so will
>>> set the PAGE_GUARDED bit as well, and we obviously want to avoid any
>>> out-of-order access of I/O memory.
>>>
>>> (I don't see the reason to force the VM_RESERVED and VM_IO on the vma 
>>> though,
>>> since remap_pfn_range will do it anyway.)
>> No, I was talking about cases where we may pass kmalloc'ed memory to
>> xnarch_remap_io_page_range. In that case, caching and out-of-order
>> access may be desired for performance reasons.
>>
> xnarch_remap_io_page_range is intended for I/O memory only, some 
> assumptions are
> made on this. rtdm_mmap_buffer() should be fixed; it would be much better 
> to
> define another internal interface at xnarch level to specifically perform
> kmalloc mapping.
 Yeah, probably. But I think the issue is not just limited to RTDM. The
 xnheap can be kmalloc-hosted as well.

>>> This one is used with DMA memory. What I would suggest, is something like 
>>> this:
>>>
>>> --- ksrc/skins/rtdm/drvlib.c(revision 3590)
>>> +++ ksrc/skins/rtdm/drvlib.c(working copy)
>>> @@ -1738,9 +1738,12 @@
>>> }
>>> return 0;
>>> } else
>>> -#endif /* CONFIG_MMU */
>>> return xnarch_remap_io_page_range(vma, maddr, paddr,
>>>   size, PAGE_SHARED);
>>> +#else
>>> +   return xnarch_remap_kmem_page_range(vma, maddr, paddr,
>>> +   size, PAGE_SHARED);
>>> +#endif /* CONFIG_MMU */
>>>  }
>>>
>>>  static struct file_operations rtdm_mmap_fops = {
>>>
>>>
>>> I.e. split the cases where MMU is absent from the one where MMU is there 
>>> but we
>>> come from rtdm_iomap_to_user.
>> Makes no sense to me yet. With CONFIG_MMU we have 3 cases, not just two,
>> no? We have to use mmap_data->src_paddr to tell kmem_page apart from
>> io_page.
> 
> That's what the patch I sent right after this one does.

Nope, vaddr can be NULL in both cases; only paddr differentiates them.

This one should do what we need:

Index: ksrc/skins/rtdm/drvlib.c
===================================================================
--- ksrc/skins/rtdm/drvlib.c	(revision 3594)
+++ ksrc/skins/rtdm/drvlib.c	(working copy)
@@ -1739,8 +1739,12 @@ static int rtdm_mmap_buffer(struct file 
return 0;
} else
 #endif /* CONFIG_MMU */
+   if (mmap_data->src_paddr)
return xnarch_remap_io_page_range(vma, maddr, paddr,
  size, PAGE_SHARED);
+   else
+   return xnarch_remap_kmem_page_range(vma, maddr, paddr,
+   size, PAGE_SHARED);
 }
 
 static struct file_operations rtdm_mmap_fops = {






Re: [Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Philippe Gerum
Jan Kiszka wrote:
> Philippe Gerum wrote:
>> Jan Kiszka wrote:
>>> Philippe Gerum wrote:
 Jan Kiszka wrote:
> Philippe Gerum wrote:
>> Jan Kiszka wrote:
>>> Hi,
>>>
>>> doesn't this patch [1] have some relevance for us as well? As we use
>>> xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
>>> suggest that we apply this unconditionally at xnarch level. Ideas 
>>> welcome.
>>>
>> Yes, I think it makes a lot of sense on powerpc at least, since doing so 
>> will
>> set the PAGE_GUARDED bit as well, and we obviously want to avoid any
>> out-of-order access of I/O memory.
>>
>> (I don't see the reason to force the VM_RESERVED and VM_IO on the vma 
>> though,
>> since remap_pfn_range will do it anyway.)
> No, I was talking about cases where we may pass kmalloc'ed memory to
> xnarch_remap_io_page_range. In that case, caching and out-of-order
> access may be desired for performance reasons.
>
 xnarch_remap_io_page_range is intended for I/O memory only, some 
 assumptions are
 made on this. rtdm_mmap_buffer() should be fixed; it would be much better 
 to
 define another internal interface at xnarch level to specifically perform
 kmalloc mapping.
>>> Yeah, probably. But I think the issue is not just limited to RTDM. The
>>> xnheap can be kmalloc-hosted as well.
>>>
>> This one is used with DMA memory. What I would suggest, is something like 
>> this:
>>
>> --- ksrc/skins/rtdm/drvlib.c (revision 3590)
>> +++ ksrc/skins/rtdm/drvlib.c (working copy)
>> @@ -1738,9 +1738,12 @@
>>  }
>>  return 0;
>>  } else
>> -#endif /* CONFIG_MMU */
>>  return xnarch_remap_io_page_range(vma, maddr, paddr,
>>size, PAGE_SHARED);
>> +#else
>> +return xnarch_remap_kmem_page_range(vma, maddr, paddr,
>> +size, PAGE_SHARED);
>> +#endif /* CONFIG_MMU */
>>  }
>>
>>  static struct file_operations rtdm_mmap_fops = {
>>
>>
>> I.e. split the cases where MMU is absent from the one where MMU is there but 
>> we
>> come from rtdm_iomap_to_user.
> 
> Makes no sense to me yet. With CONFIG_MMU we have 3 cases, not just two,
> no? We have to use mmap_data->src_paddr to tell kmem_page apart from
> io_page.

That's what the patch I sent right after this one does.

> 
> And we do not need io_page at all in the !CONFIG_MMU case?
>

MMU-less platforms may still control caching at page range level (e.g. Blackfin
via CPLBs), so we have to be flexible as well.

> Jan
> 


-- 
Philippe.



Re: [Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Jan Kiszka
Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> Jan Kiszka wrote:
 Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Hi,
>>
>> doesn't this patch [1] have some relevance for us as well? As we use
>> xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
>> suggest that we apply this unconditionally at xnarch level. Ideas 
>> welcome.
>>
> Yes, I think it makes a lot of sense on powerpc at least, since doing so 
> will
> set the PAGE_GUARDED bit as well, and we obviously want to avoid any
> out-of-order access of I/O memory.
>
> (I don't see the reason to force the VM_RESERVED and VM_IO on the vma 
> though,
> since remap_pfn_range will do it anyway.)
 No, I was talking about cases where we may pass kmalloc'ed memory to
 xnarch_remap_io_page_range. In that case, caching and out-of-order
 access may be desired for performance reasons.

>>> xnarch_remap_io_page_range is intended for I/O memory only, some 
>>> assumptions are
>>> made on this. rtdm_mmap_buffer() should be fixed; it would be much better to
>>> define another internal interface at xnarch level to specifically perform
>>> kmalloc mapping.
>> Yeah, probably. But I think the issue is not just limited to RTDM. The
>> xnheap can be kmalloc-hosted as well.
>>
> 
> This one is used with DMA memory. What I would suggest, is something like 
> this:
> 
> --- ksrc/skins/rtdm/drvlib.c  (revision 3590)
> +++ ksrc/skins/rtdm/drvlib.c  (working copy)
> @@ -1738,9 +1738,12 @@
>   }
>   return 0;
>   } else
> -#endif /* CONFIG_MMU */
>   return xnarch_remap_io_page_range(vma, maddr, paddr,
> size, PAGE_SHARED);
> +#else
> + return xnarch_remap_kmem_page_range(vma, maddr, paddr,
> + size, PAGE_SHARED);
> +#endif /* CONFIG_MMU */
>  }
> 
>  static struct file_operations rtdm_mmap_fops = {
> 
> 
> I.e. split the cases where MMU is absent from the one where MMU is there but 
> we
> come from rtdm_iomap_to_user.

Makes no sense to me yet. With CONFIG_MMU we have 3 cases, not just two,
no? We have to use mmap_data->src_paddr to tell kmem_page apart from
io_page.

And we do not need io_page at all in the !CONFIG_MMU case?

Jan





Re: [Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Philippe Gerum
Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> Jan Kiszka wrote:
 Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Hi,
>>
>> doesn't this patch [1] have some relevance for us as well? As we use
>> xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
>> suggest that we apply this unconditionally at xnarch level. Ideas 
>> welcome.
>>
> Yes, I think it makes a lot of sense on powerpc at least, since doing so 
> will
> set the PAGE_GUARDED bit as well, and we obviously want to avoid any
> out-of-order access of I/O memory.
>
> (I don't see the reason to force the VM_RESERVED and VM_IO on the vma 
> though,
> since remap_pfn_range will do it anyway.)
 No, I was talking about cases where we may pass kmalloc'ed memory to
 xnarch_remap_io_page_range. In that case, caching and out-of-order
 access may be desired for performance reasons.

>>> xnarch_remap_io_page_range is intended for I/O memory only, some 
>>> assumptions are
>>> made on this. rtdm_mmap_buffer() should be fixed; it would be much better to
>>> define another internal interface at xnarch level to specifically perform
>>> kmalloc mapping.
>> Yeah, probably. But I think the issue is not just limited to RTDM. The
>> xnheap can be kmalloc-hosted as well.
>>
> 
> This one is used with DMA memory. What I would suggest, is something like 
> this:
> 
> --- ksrc/skins/rtdm/drvlib.c  (revision 3590)
> +++ ksrc/skins/rtdm/drvlib.c  (working copy)
> @@ -1738,9 +1738,12 @@
>   }
>   return 0;
>   } else
> -#endif /* CONFIG_MMU */
>   return xnarch_remap_io_page_range(vma, maddr, paddr,
> size, PAGE_SHARED);
> +#else
> + return xnarch_remap_kmem_page_range(vma, maddr, paddr,
> + size, PAGE_SHARED);
> +#endif /* CONFIG_MMU */
>  }
> 
>  static struct file_operations rtdm_mmap_fops = {
> 
> 
> I.e. split the cases where MMU is absent from the one where MMU is there but 
> we
> come from rtdm_iomap_to_user.
> 

This one is probably better:

--- ksrc/skins/rtdm/drvlib.c	(revision 3590)
+++ ksrc/skins/rtdm/drvlib.c	(working copy)
@@ -1738,9 +1738,16 @@
}
return 0;
} else
-#endif /* CONFIG_MMU */
return xnarch_remap_io_page_range(vma, maddr, paddr,
  size, PAGE_SHARED);
+#else
+   if (vaddr)
+   return xnarch_remap_kmem_page_range(vma, maddr, paddr,
+   size, PAGE_SHARED);
+
+   return xnarch_remap_io_page_range(vma, maddr, paddr,
+ size, PAGE_SHARED);
+#endif /* CONFIG_MMU */
 }

 static struct file_operations rtdm_mmap_fops = {

>> Jan
>>
> 
> 


-- 
Philippe.




Re: [Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Philippe Gerum
Jan Kiszka wrote:
> Philippe Gerum wrote:
>> Jan Kiszka wrote:
>>> Philippe Gerum wrote:
 Jan Kiszka wrote:
> Hi,
>
> doesn't this patch [1] have some relevance for us as well? As we use
> xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
> suggest that we apply this unconditionally at xnarch level. Ideas welcome.
>
 Yes, I think it makes a lot of sense on powerpc at least, since doing so 
 will
 set the PAGE_GUARDED bit as well, and we obviously want to avoid any
 out-of-order access of I/O memory.

 (I don't see the reason to force the VM_RESERVED and VM_IO on the vma 
 though,
 since remap_pfn_range will do it anyway.)
>>> No, I was talking about cases where we may pass kmalloc'ed memory to
>>> xnarch_remap_io_page_range. In that case, caching and out-of-order
>>> access may be desired for performance reasons.
>>>
>> xnarch_remap_io_page_range is intended for I/O memory only, some assumptions 
>> are
>> made on this. rtdm_mmap_buffer() should be fixed; it would be much better to
>> define another internal interface at xnarch level to specifically perform
>> kmalloc mapping.
> 
> Yeah, probably. But I think the issue is not just limited to RTDM. The
> xnheap can be kmalloc-hosted as well.
>

This one is used with DMA memory. What I would suggest is something like this:

--- ksrc/skins/rtdm/drvlib.c	(revision 3590)
+++ ksrc/skins/rtdm/drvlib.c	(working copy)
@@ -1738,9 +1738,12 @@
}
return 0;
} else
-#endif /* CONFIG_MMU */
return xnarch_remap_io_page_range(vma, maddr, paddr,
  size, PAGE_SHARED);
+#else
+   return xnarch_remap_kmem_page_range(vma, maddr, paddr,
+   size, PAGE_SHARED);
+#endif /* CONFIG_MMU */
 }

 static struct file_operations rtdm_mmap_fops = {


I.e. split the case where the MMU is absent from the one where the MMU is
present but we come from rtdm_iomap_to_user.

> Jan
> 


-- 
Philippe.



Re: [Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Jan Kiszka
Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> Jan Kiszka wrote:
 Hi,

 doesn't this patch [1] have some relevance for us as well? As we use
 xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
 suggest that we apply this unconditionally at xnarch level. Ideas welcome.

>>> Yes, I think it makes a lot of sense on powerpc at least, since doing so 
>>> will
>>> set the PAGE_GUARDED bit as well, and we obviously want to avoid any
>>> out-of-order access of I/O memory.
>>>
>>> (I don't see the reason to force the VM_RESERVED and VM_IO on the vma 
>>> though,
>>> since remap_pfn_range will do it anyway.)
>> No, I was talking about cases where we may pass kmalloc'ed memory to
>> xnarch_remap_io_page_range. In that case, caching and out-of-order
>> access may be desired for performance reasons.
>>
> 
> xnarch_remap_io_page_range is intended for I/O memory only, some assumptions 
> are
> made on this. rtdm_mmap_buffer() should be fixed; it would be much better to
> define another internal interface at xnarch level to specifically perform
> kmalloc mapping.

Yeah, probably. But I think the issue is not just limited to RTDM. The
xnheap can be kmalloc-hosted as well.

Jan





Re: [Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Jan Kiszka
Philippe Gerum wrote:
> Jan Kiszka wrote:
>> Hi,
>>
>> doesn't this patch [1] have some relevance for us as well? As we use
>> xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
>> suggest that we apply this unconditionally at xnarch level. Ideas welcome.
>>
> 
> Yes, I think it makes a lot of sense on powerpc at least, since doing so will
> set the PAGE_GUARDED bit as well, and we obviously want to avoid any
> out-of-order access of I/O memory.
> 
> (I don't see the reason to force the VM_RESERVED and VM_IO on the vma though,
> since remap_pfn_range will do it anyway.)

No, I was talking about cases where we may pass kmalloc'ed memory to
xnarch_remap_io_page_range. In that case, caching and out-of-order
access may be desired for performance reasons.

Jan





Re: [Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Philippe Gerum
Jan Kiszka wrote:
> Hi,
> 
> doesn't this patch [1] have some relevance for us as well? As we use
> xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
> suggest that we apply this unconditionally at xnarch level. Ideas welcome.
>

Yes, I think it makes a lot of sense on powerpc at least, since doing so will
set the PAGE_GUARDED bit as well, and we obviously want to avoid any
out-of-order access of I/O memory.

(I don't see the reason to force the VM_RESERVED and VM_IO on the vma though,
since remap_pfn_range will do it anyway.)

I'll merge that bit, thanks.
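
(For illustration only: a minimal sketch of how pgprot_noncached() would
typically be folded into such a remap wrapper before delegating to
remap_pfn_range(); hypothetical code under those assumptions, not the actual
merged xnarch change.)

#include <linux/mm.h>		/* remap_pfn_range() */
#include <asm/pgtable.h>	/* pgprot_noncached() */

static inline int xnarch_remap_io_page_range(struct vm_area_struct *vma,
					     unsigned long from,
					     unsigned long to,
					     unsigned long size,
					     pgprot_t prot)
{
	/* Force an uncached (and, on powerpc, guarded) mapping for I/O
	 * memory; remap_pfn_range() marks the vma VM_RESERVED|VM_IO itself. */
	return remap_pfn_range(vma, from, to >> PAGE_SHIFT, size,
			       pgprot_noncached(prot));
}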

> Jan
> 
> [1] http://permalink.gmane.org/gmane.linux.kernel/653921


-- 
Philippe.



[Xenomai-core] pgprot_noncached for io-remapping?

2008-03-16 Thread Jan Kiszka
Hi,

doesn't this patch [1] have some relevance for us as well? As we use
xnarch_remap_io_page_range also for non-IO memory, I'm hesitating to
suggest that we apply this unconditionally at xnarch level. Ideas welcome.

Jan

[1] http://permalink.gmane.org/gmane.linux.kernel/653921



