Re: [Xenomai-core] User space drivers on PPC440

2007-11-14 Thread Steven A. Falco

That works perfectly.  Thanks!

   Steve


Philippe Gerum wrote:

Philippe Gerum wrote:
  

Steven A. Falco wrote:


Solved.  As you pointed out, Xenomai inverts the returned value from
request_region.  So, that was a bug in my application.

However, it turns out that instead of request_region, I have to use
request_mem_region.  This is because the I/O region only goes up to
2^32, but the mem region goes up to 2^64.

So, attached is a patch to add two new syscalls: rt_misc_get_mem_region
and rt_misc_put_mem_region.

  

Thanks. While at it, I've reworked the I/O region support to
introduce descriptors, so that this API conforms to the
one-descriptor-per-object rule commonly followed by other services from
the native skin. I've merged your MMIO support on top of this. The added
bonus is that auto-cleanup upon process exit becomes available with I/O
regions too, since we now have the proper descriptor to hold the cleanup
data.

The calls supporting this scheme are named rt_io_get_region and
rt_io_put_region, taking an RT_IOREGION object descriptor to hold the
internal resource information. The rt_misc_io_get/put_region API is now
deprecated starting with 2.4-rc6, though still available to allow for a
smooth transition; using the old calls merely emits a warning as a
reminder to upgrade to the new ones. E.g.

RT_IOREGION iorn;

rt_io_get_region(&iorn, label, start, len, IORN_IOPORT)
is equivalent to calling:
rt_misc_get_io_region(start, len, label)

In the same way,

rt_io_get_region(&iorn, label, start, len, IORN_MEMORY)



s,IORN_MEMORY,IORN_IOMEM,

(for consistency with Linux's naming scheme)

  

is equivalent to calling:
rt_misc_get_mem_region(start, len, label)

Conversely,

rt_io_put_region(&iorn)
is equivalent to calling:
rt_misc_put_io/mem_region(start, len)






  
___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] User space drivers on PPC440

2007-11-11 Thread Philippe Gerum
Philippe Gerum wrote:
> Steven A. Falco wrote:
>> Solved.  As you pointed out, Xenomai inverts the returned value from
>> request_region.  So, that was a bug in my application.
>>
>> However, it turns out that instead of request_region, I have to use
>> request_mem_region.  This is because the I/O region only goes up to
>> 2^32, but the mem region goes up to 2^64.
>>
>> So, attached is a patch to add two new syscalls: rt_misc_get_mem_region
>> and rt_misc_put_mem_region.
>>
> 
> Thanks. While at it, I've reworked the I/O region support to
> introduce descriptors, so that this API conforms to the
> one-descriptor-per-object rule commonly followed by other services from
> the native skin. I've merged your MMIO support on top of this. The added
> bonus is that auto-cleanup upon process exit becomes available with I/O
> regions too, since we now have the proper descriptor to hold the cleanup
> data.
> 
> The calls supporting this scheme are named rt_io_get_region and
> rt_io_put_region, taking an RT_IOREGION object descriptor to hold the
> internal resource information. The rt_misc_io_get/put_region API is now
> deprecated starting with 2.4-rc6, though still available to allow for a
> smooth transition; using the old calls merely emits a warning as a
> reminder to upgrade to the new ones. E.g.
> 
> RT_IOREGION iorn;
> 
> rt_io_get_region(&iorn, label, start, len, IORN_IOPORT)
> is equivalent to calling:
> rt_misc_get_io_region(start, len, label)
> 
> In the same way,
> 
> rt_io_get_region(&iorn, label, start, len, IORN_MEMORY)

s,IORN_MEMORY,IORN_IOMEM,

(for consistency with Linux's naming scheme)

> is equivalent to calling:
> rt_misc_get_mem_region(start, len, label)
> 
> Conversely,
> 
> rt_io_put_region(&iorn)
> is equivalent to calling:
> rt_misc_put_io/mem_region(start, len)
> 
> 


-- 
Philippe.



Re: [Xenomai-core] User space drivers on PPC440

2007-11-10 Thread Philippe Gerum
Steven A. Falco wrote:
> Solved.  As you pointed out, Xenomai inverts the returned value from
> request_region.  So, that was a bug in my application.
> 
> However, it turns out that instead of request_region, I have to use
> request_mem_region.  This is because the I/O region only goes up to
> 2^32, but the mem region goes up to 2^64.
> 
> So, attached is a patch to add two new syscalls: rt_misc_get_mem_region
> and rt_misc_put_mem_region.
> 

Thanks. While at it, I've reworked the I/O region support to
introduce descriptors, so that this API conforms to the
one-descriptor-per-object rule commonly followed by other services from
the native skin. I've merged your MMIO support on top of this. The added
bonus is that auto-cleanup upon process exit becomes available with I/O
regions too, since we now have the proper descriptor to hold the cleanup
data.

The calls supporting this scheme are named rt_io_get_region and
rt_io_put_region, taking an RT_IOREGION object descriptor to hold the
internal resource information. The rt_misc_io_get/put_region API is now
deprecated starting with 2.4-rc6, though still available to allow for a
smooth transition; using the old calls merely emits a warning as a
reminder to upgrade to the new ones. E.g.

RT_IOREGION iorn;

rt_io_get_region(&iorn, label, start, len, IORN_IOPORT)
is equivalent to calling:
rt_misc_get_io_region(start, len, label)

In the same way,

rt_io_get_region(&iorn, label, start, len, IORN_MEMORY)
is equivalent to calling:
rt_misc_get_mem_region(start, len, label)

Conversely,

rt_io_put_region(&iorn)
is equivalent to calling:
rt_misc_put_io/mem_region(start, len)

-- 
Philippe.
Index: include/native/misc.h
===
--- include/native/misc.h	(revision 3166)
+++ include/native/misc.h	(working copy)
@@ -24,21 +24,109 @@
 
 #include 
 
-#if !defined(__KERNEL__) && !defined(__XENO_SIM__)
+#define IORN_IOPORT	0x1
+#define IORN_MEMORY	0x2
 
+typedef struct rt_ioregion_placeholder {
+	xnhandle_t opaque;
+	/*
+	 * We keep the region start and length in the userland
+	 * placeholder to support deprecated rt_misc_io_*() calls.
+	 */
+	uint64_t start;
+	uint64_t len;
+} RT_IOREGION_PLACEHOLDER;
+
+#if defined(__KERNEL__) || defined(__XENO_SIM__)
+
+#include 
+
+#define XENO_IOREGION_MAGIC 0x0b0b
+
+typedef struct rt_ioregion {
+
+	unsigned magic;		/* !< Magic code - must be first */
+
+	xnhandle_t handle;	/* !< Handle in registry -- must be registered. */
+
+	uint64_t start;		/* !< Start of I/O region. */
+
+	uint64_t len;		/* !< Length of I/O region. */
+
+	char name[XNOBJECT_NAME_LEN]; /* !< Symbolic name. */
+
+	int flags;		/* !< Operation flags. */
+
+	pid_t cpid;		/* !< Creator's pid. */
+
+	xnholder_t rlink;	/* !< Link in resource queue. */
+
+#define rlink2ioregion(ln)	container_of(ln, RT_IOREGION, rlink)
+
+	xnqueue_t *rqueue; /* !< Backpointer to resource queue. */
+
+} RT_IOREGION;
+
+int rt_ioregion_delete(RT_IOREGION *iorn);
+
+static inline void __native_ioregion_flush_rq(xnqueue_t *rq)
+{
+	xeno_flush_rq(RT_IOREGION, rq, ioregion);
+}
+
+static inline int __native_misc_pkg_init(void)
+{
+	return 0;
+}
+
+static inline void __native_misc_pkg_cleanup(void)
+{
+	__native_ioregion_flush_rq(&__native_global_rholder.ioregionq);
+}
+
+#else /* !(__KERNEL__ && __XENO_SIM__) */
+
+typedef RT_IOREGION_PLACEHOLDER RT_IOREGION;
+
 #ifdef __cplusplus
 extern "C" {
 #endif
 
 /* Public interface. */
 
-int rt_misc_get_io_region(uint64_t start,
-			  unsigned long len,
-			  const char *label);
+int rt_io_get_region(RT_IOREGION *iorn,
+		 const char *name,
+		 uint64_t start,
+		 uint64_t len,
+		 int flags);
 
-void rt_misc_put_io_region(uint64_t start,
-			   unsigned long len);
+int rt_io_put_region(RT_IOREGION *iorn);
 
+__deprecated_call__
+static inline int  rt_misc_get_io_region(unsigned long start,
+	 unsigned long len,
+	 const char *label)
+{
+	RT_IOREGION iorn;
+
+	return rt_io_get_region(&iorn, label, (uint64_t)start,
+(uint64_t)len, IORN_IOPORT);
+}
+
+__deprecated_call__
+static inline int rt_misc_put_io_region(unsigned long start,
+	unsigned long len)
+{
+	RT_IOREGION iorn;
+
+	iorn.opaque = XN_NO_HANDLE;
+	iorn.start = (uint64_t)start;
+	iorn.len = (uint64_t)len;
+	rt_io_put_region(&iorn);
+
+	return 0;
+}
+
 #ifdef __cplusplus
 }
 #endif
Index: include/native/syscall.h
===
--- include/native/syscall.h	(revision 3166)
+++ include/native/syscall.h	(working copy)
@@ -113,8 +113,8 @@
 #define __native_pipe_write 87
 #define __native_pipe_stream88
 #define __native_unimp_89   89
-#define __native_misc_get_io_region 90
-#define __native_misc_put_io_region 91
+#define __native_io_get_region  90
+#define __native_io_put_region  91
 #define __native_timer_ns2tsc   92
 #define __native_timer_tsc2ns   93
 #define __native_queue_write94
Index: include/nat

Re: [Xenomai-core] User space drivers on PPC440

2007-11-09 Thread Philippe Gerum
Steven A. Falco wrote:
> Your patch makes sense.
> 
> I have some results, but I'm not sure I understand what they mean.  I've
> attached the test program that I am using.  Here is what it outputs:
> 
> bash-3.00# ./o2
> Trying to free nonexistent resource <-c001>
> get leds: -16 Device or resource busy
> put leds: 0 Success
> 
> Trying to free nonexistent resource <->
> get low_mem: -16 Device or resource busy
> put low_mem: 0 Success

Actually, we should define rt_misc_put_io_region() as a void routine,
since the kernel does not return any status from release_region(). While
at it, I've fixed this; returning 0 even though a failure message is
dumped to the kernel log on error is confusing.

> 
> I am a little unclear on request_resource() - the return code is
> backwards of what I would have expected.  Looking at examples in the
> kernel, it appears that request_resource() returns EBUSY when things go
> well, and it returns 0 when things go badly.  Like I said, that seems
> backwards, but I guess it makes sense - EBUSY apparently means that the
> resource is _now_ busy?
>

request_resource() should return a valid resource descriptor address
upon success, which Xenomai converts to zero. Conversely, -EBUSY is
returned if request_region() sends us back a NULL value, since this is
how check_region() behaves.

> Anyway, following the kernel examples, my program considers a non-zero
> return as success.  At that point I release the region.  If instead, I
> get a zero return, then I treat that as a failure, and don't release the
> region.
> 
> The part I don't understand is why I get the "Trying to free nonexistent
> resource" messages.  Since I am getting an EBUSY, I thought that meant I
> owned the resource, and that I should release it...
> 

I suspect request_region() did actually fail.

> Also, the addresses printed above are a bit strange.  For example, I
> would have thought that instead of
> "<-c001>", it would print
> "<0001c000-0001c013>".  Perhaps that is a clue - maybe
> the start and length are not being passed correctly.
> 
> One more question.  It appears that if my program crashes, that the
> region will never be released.  So, the normal behavior of an exiting
> process freeing all its resources doesn't seem to be guaranteed.
> 

As a matter of fact, we track all common objects like sema4s, mutexes,
queues, and so on, but not I/O regions. Consider this as an illustration
of our absolute laziness, since we do have the proper infrastructure to
handle I/O regions in the auto-cleanup process too.

> Steve
> 
> 
> Philippe Gerum wrote:
>> Philippe Gerum wrote:
>>   
>>> Steven A. Falco wrote:
>>> 
 The rt_misc_get_io_region() has the "start" argument as an unsigned 
 long.  On the PPC440, we have a 36-bit address space, where the I/O 
 registers are generally above the 4GB area.  For example, the UART is at 
 address 0x1ef600300.

 The Linux request_region call has "start" typed as a resource_size_t, 
 which is a u64 on the PPC440 (i.e. CONFIG_RESOURCES_64BIT is set even 
 though this is a 23-bit processor).

 Is this something that should be handled by xeno-config?  It could 
 append a CFLAG indicating the size of a resource.
   
>>> Or use a 64bit long unconditionally, to keep the same kernel-based
>>> implementation, since there is no performance issue for this call. In
>>> any case, we need to fix the API before 2.4 final is out -- which will
>>> also affect the ABI, but it already changed during the 2.4 development
>>> phase anyway.
>>>
>>> 
>>
>> Does this patch work for you?
>>
>>   


-- 
Philippe.



Re: [Xenomai-core] User space drivers on PPC440

2007-11-09 Thread Steven A. Falco
Solved.  As you pointed out, Xenomai inverts the returned value from 
request_region.  So, that was a bug in my application.


However, it turns out that instead of request_region, I have to use 
request_mem_region.  This is because the I/O region only goes up to 
2^32, but the mem region goes up to 2^64.


So, attached is a patch to add two new syscalls: rt_misc_get_mem_region 
and rt_misc_put_mem_region.


   Steve


Philippe Gerum wrote:

Steven A. Falco wrote:
  

Your patch makes sense.

I have some results, but I'm not sure I understand what they mean.  I've
attached the test program that I am using.  Here is what it outputs:

bash-3.00# ./o2
Trying to free nonexistent resource <-c001>
get leds: -16 Device or resource busy
put leds: 0 Success

Trying to free nonexistent resource <->
get low_mem: -16 Device or resource busy
put low_mem: 0 Success



Actually, we should define rt_misc_put_io_region() as a void routine,
since the kernel does not return any status from release_region(). While
at it, I've fixed this; returning 0 even though a failure message is
dumped to the kernel log on error is confusing.

  

I am a little unclear on request_resource() - the return code is
backwards of what I would have expected.  Looking at examples in the
kernel, it appears that request_resource() returns EBUSY when things go
well, and it returns 0 when things go badly.  Like I said, that seems
backwards, but I guess it makes sense - EBUSY apparently means that the
resource is _now_ busy?




request_resource() should return a valid resource descriptor address
upon success, which Xenomai converts to zero. Conversely, -EBUSY is
returned if request_region() sends us back a NULL value, since this is
how check_region() behaves.

  

Anyway, following the kernel examples, my program considers a non-zero
return as success.  At that point I release the region.  If instead, I
get a zero return, then I treat that as a failure, and don't release the
region.

The part I don't understand is why I get the "Trying to free nonexistent
resource" messages.  Since I am getting an EBUSY, I thought that meant I
owned the resource, and that I should release it...




I suspect request_region() did actually fail.

  

Also, the addresses printed above are a bit strange.  For example, I
would have thought that instead of
"<-c001>", it would print
"<0001c000-0001c013>".  Perhaps that is a clue - maybe
the start and length are not being passed correctly.

One more question.  It appears that if my program crashes, that the
region will never be released.  So, the normal behavior of an exiting
process freeing all its resources doesn't seem to be guaranteed.




As a matter of fact, we track all common objects like sema4s, mutexes,
queues, and so on, but not I/O regions. Consider this as an illustration
of our absolute laziness, since we do have the proper infrastructure to
handle I/O regions in the auto-cleanup process too.

  

Steve


Philippe Gerum wrote:


Philippe Gerum wrote:
  
  

Steven A. Falco wrote:


The rt_misc_get_io_region() has the "start" argument as an unsigned 
long.  On the PPC440, we have a 36-bit address space, where the I/O 
registers are generally above the 4GB area.  For example, the UART is at 
address 0x1ef600300.


The Linux request_region call has "start" typed as a resource_size_t, 
which is a u64 on the PPC440 (i.e. CONFIG_RESOURCES_64BIT is set even 
though this is a 23-bit processor).


Is this something that should be handled by xeno-config?  It could 
append a CFLAG indicating the size of a resource.
  
  

Or use a 64bit long unconditionally, to keep the same kernel-based
implementation, since there is no performance issue for this call. In
any case, we need to fix the API before 2.4 final is out -- which will
also affect the ABI, but it already changed during the 2.4 development
phase anyway.




Does this patch work for you?

  
  



  
Index: include/native/misc.h
===
--- include/native/misc.h	(revision 3101)
+++ include/native/misc.h	(working copy)
@@ -39,6 +39,13 @@
 int rt_misc_put_io_region(unsigned long start,
 			  unsigned long len);
 
+int rt_misc_get_mem_region(uint64_t start,
+			  unsigned long len,
+			  const char *label);
+
+int rt_misc_put_mem_region(uint64_t start,
+			  unsigned long len);
+
 #ifdef __cplusplus
 }
 #endif
Index: include/native/syscall.h
===
--- include/native/syscall.h	(revision 3101)
+++ include/native/syscall.h	(working copy)
@@ -119,6 +119,8 @@
 #define __native_timer_tsc2ns   93
 #define __native_queue_write94
 #define __native_queue_read 95
+#define __native_misc_get_mem_region 96
+#define __native_misc_put_mem_region 97
 
 struct rt_arg_bulk {
 

Re: [Xenomai-core] User space drivers on PPC440

2007-11-09 Thread Gilles Chanteperdrix
On Nov 9, 2007 5:03 PM, Steven A. Falco <[EMAIL PROTECTED]> wrote:
>
>  Your patch makes sense.
>
>  I have some results, but I'm not sure I understand what they mean.  I've
> attached the test program that I am using.  Here is what it outputs:
>
>  bash-3.00# ./o2
>  Trying to free nonexistent resource <-c001>
>  get leds: -16 Device or resource busy
>  put leds: 0 Success
>
>  Trying to free nonexistent resource <->
>  get low_mem: -16 Device or resource busy
>  put low_mem: 0 Success
>
>  I am a little unclear on request_resource() - the return code is backwards
> of what I would have expected.  Looking at examples in the kernel, it
> appears that request_resource() returns EBUSY when things go well, and it
> returns 0 when things go badly.  Like I said, that seems backwards, but I
> guess it makes sense - EBUSY apparently means that the resource is _now_
> busy?

request_region returns a pointer to a struct resource, and NULL if the
resource is already reserved.


-- 
   Gilles Chanteperdrix



Re: [Xenomai-core] User space drivers on PPC440

2007-11-09 Thread Steven A. Falco
Many apologies.  I forgot to build the user library.  Now the addresses 
look better:


bash-3.00# ./o2
req: start = 0001c002   len = 000d
rel: start = 0001c002   len = 000d
Trying to free nonexistent resource <0001c002-0001c00e>
get leds: -16 Device or resource busy
put leds: 0 Success

However, I still get the printk that I've tried to free a non-existent 
resource.  Next, I'll try page aligning the addresses to see if that helps.


   Steve


Steven A. Falco wrote:

Your patch makes sense.

I have some results, but I'm not sure I understand what they mean.  
I've attached the test program that I am using.  Here is what it outputs:


bash-3.00# ./o2
Trying to free nonexistent resource <-c001>
get leds: -16 Device or resource busy
put leds: 0 Success

Trying to free nonexistent resource <->
get low_mem: -16 Device or resource busy
put low_mem: 0 Success

I am a little unclear on request_resource() - the return code is 
backwards of what I would have expected.  Looking at examples in the 
kernel, it appears that request_resource() returns EBUSY when things 
go well, and it returns 0 when things go badly.  Like I said, that 
seems backwards, but I guess it makes sense - EBUSY apparently means 
that the resource is _now_ busy?


Anyway, following the kernel examples, my program considers a non-zero 
return as success.  At that point I release the region.  If instead, I 
get a zero return, then I treat that as a failure, and don't release 
the region.


The part I don't understand is why I get the "Trying to free 
nonexistent resource" messages.  Since I am getting an EBUSY, I 
thought that meant I owned the resource, and that I should release it...


Also, the addresses printed above are a bit strange.  For example, I 
would have thought that instead of 
"<-c001>", it would print 
"<0001c000-0001c013>".  Perhaps that is a clue - maybe 
the start and length are not being passed correctly.


One more question.  It appears that if my program crashes, that the 
region will never be released.  So, the normal behavior of an exiting 
process freeing all its resources doesn't seem to be guaranteed.


Steve


Philippe Gerum wrote:

Philippe Gerum wrote:
  

Steven A. Falco wrote:

The rt_misc_get_io_region() has the "start" argument as an unsigned 
long.  On the PPC440, we have a 36-bit address space, where the I/O 
registers are generally above the 4GB area.  For example, the UART is at 
address 0x1ef600300.


The Linux request_region call has "start" typed as a resource_size_t, 
which is a u64 on the PPC440 (i.e. CONFIG_RESOURCES_64BIT is set even 
though this is a 23-bit processor).


Is this something that should be handled by xeno-config?  It could 
append a CFLAG indicating the size of a resource.
  

Or use a 64bit long unconditionally, to keep the same kernel-based
implementation, since there is no performance issue for this call. In
any case, we need to fix the API before 2.4 final is out -- which will
also affect the ABI, but it already changed during the 2.4 development
phase anyway.




Does this patch work for you?

  


Re: [Xenomai-core] User space drivers on PPC440

2007-11-09 Thread Steven A. Falco

Your patch makes sense.

I have some results, but I'm not sure I understand what they mean.  I've 
attached the test program that I am using.  Here is what it outputs:


bash-3.00# ./o2
Trying to free nonexistent resource <-c001>
get leds: -16 Device or resource busy
put leds: 0 Success

Trying to free nonexistent resource <->
get low_mem: -16 Device or resource busy
put low_mem: 0 Success

I am a little unclear on request_resource() - the return code is 
backwards of what I would have expected.  Looking at examples in the 
kernel, it appears that request_resource() returns EBUSY when things go 
well, and it returns 0 when things go badly.  Like I said, that seems 
backwards, but I guess it makes sense - EBUSY apparently means that the 
resource is _now_ busy?


Anyway, following the kernel examples, my program considers a non-zero 
return as success.  At that point I release the region.  If instead, I 
get a zero return, then I treat that as a failure, and don't release the 
region.


The part I don't understand is why I get the "Trying to free nonexistent 
resource" messages.  Since I am getting an EBUSY, I thought that meant I 
owned the resource, and that I should release it...


Also, the addresses printed above are a bit strange.  For example, I 
would have thought that instead of 
"<-c001>", it would print 
"<0001c000-0001c013>".  Perhaps that is a clue - maybe 
the start and length are not being passed correctly.


One more question.  It appears that if my program crashes, that the 
region will never be released.  So, the normal behavior of an exiting 
process freeing all its resources doesn't seem to be guaranteed.


   Steve


Philippe Gerum wrote:

Philippe Gerum wrote:
  

Steven A. Falco wrote:

The rt_misc_get_io_region() has the "start" argument as an unsigned 
long.  On the PPC440, we have a 36-bit address space, where the I/O 
registers are generally above the 4GB area.  For example, the UART is at 
address 0x1ef600300.


The Linux request_region call has "start" typed as a resource_size_t, 
which is a u64 on the PPC440 (i.e. CONFIG_RESOURCES_64BIT is set even 
though this is a 23-bit processor).


Is this something that should be handled by xeno-config?  It could 
append a CFLAG indicating the size of a resource.
  

Or use a 64bit long unconditionally, to keep the same kernel-based
implementation, since there is no performance issue for this call. In
any case, we need to fix the API before 2.4 final is out -- which will
also affect the ABI, but it already changed during the 2.4 development
phase anyway.




Does this patch work for you?

  
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 

#define LED 0x1c002ULL
#define LEN 13
#define BS 128

static void test_it(unsigned long long addr, unsigned long len, char *name)
{
	int rv_get;
	int rv_put = 0;	/* initialized: only set when the region is released */
	char *bp;
	char buf[BS];

	rv_get = rt_misc_get_io_region(addr, len, name);
	if (rv_get) {
		rv_put = rt_misc_put_io_region(addr, len);
	}

	bp = strerror_r(-rv_get, buf, BS);
	fprintf(stderr, "get %s: %d %s\n", name, rv_get, bp);

	bp = strerror_r(-rv_put, buf, BS);
	fprintf(stderr, "put %s: %d %s\n", name, rv_put, bp);
}

int main(void)
{
	test_it(LED, LEN, "leds");
	test_it(0, LEN, "low_mem");
	return 0;
}


Re: [Xenomai-core] User space drivers on PPC440

2007-11-09 Thread Philippe Gerum
Philippe Gerum wrote:
> Steven A. Falco wrote:
>> The rt_misc_get_io_region() has the "start" argument as an unsigned 
>> long.  On the PPC440, we have a 36-bit address space, where the I/O 
>> registers are generally above the 4GB area.  For example, the UART is at 
>> address 0x1ef600300.
>>
>> The Linux request_region call has "start" typed as a resource_size_t, 
>> which is a u64 on the PPC440 (i.e. CONFIG_RESOURCES_64BIT is set even 
>> though this is a 23-bit processor).
>>
>> Is this something that should be handled by xeno-config?  It could 
>> append a CFLAG indicating the size of a resource.
> 
> Or use a 64bit long unconditionally, to keep the same kernel-based
> implementation, since there is no performance issue for this call. In
> any case, we need to fix the API before 2.4 final is out -- which will
> also affect the ABI, but it already changed during the 2.4 development
> phase anyway.
> 

Does this patch work for you?

-- 
Philippe.
Index: include/native/misc.h
===
--- include/native/misc.h	(revision 3162)
+++ include/native/misc.h	(working copy)
@@ -32,11 +32,11 @@
 
 /* Public interface. */
 
-int rt_misc_get_io_region(unsigned long start,
+int rt_misc_get_io_region(uint64_t start,
 			  unsigned long len,
 			  const char *label);
 
-int rt_misc_put_io_region(unsigned long start,
+int rt_misc_put_io_region(uint64_t start,
 			  unsigned long len);
 
 #ifdef __cplusplus
Index: src/skins/native/misc.c
===
--- src/skins/native/misc.c	(revision 3162)
+++ src/skins/native/misc.c	(working copy)
@@ -22,16 +22,16 @@
 
 extern int __native_muxid;
 
-int rt_misc_get_io_region(unsigned long start,
+int rt_misc_get_io_region(uint64_t start,
 			  unsigned long len, const char *label)
 {
 	return XENOMAI_SKINCALL3(__native_muxid,
- __native_misc_get_io_region, start, len,
+ __native_misc_get_io_region, &start, len,
  label);
 }
 
-int rt_misc_put_io_region(unsigned long start, unsigned long len)
+int rt_misc_put_io_region(uint64_t start, unsigned long len)
 {
 	return XENOMAI_SKINCALL2(__native_muxid,
- __native_misc_put_io_region, start, len);
+ __native_misc_put_io_region, &start, len);
 }
Index: ksrc/skins/native/syscall.c
===
--- ksrc/skins/native/syscall.c	(revision 3163)
+++ ksrc/skins/native/syscall.c	(working copy)
@@ -3656,7 +3656,7 @@
 #endif /* CONFIG_XENO_OPT_NATIVE_PIPE */
 
 /*
- * int __rt_misc_get_io_region(unsigned long start,
+ * int __rt_misc_get_io_region(uint64_t *start,
  * unsigned long len,
  * const char *label)
  */
@@ -3664,35 +3664,49 @@
 static int __rt_misc_get_io_region(struct task_struct *curr,
    struct pt_regs *regs)
 {
-	unsigned long start, len;
+	unsigned long len;
+	uint64_t start;
 	char label[64];
 
 	if (!__xn_access_ok
+	(curr, VERIFY_READ, __xn_reg_arg1(regs), sizeof(start)))
+		return -EFAULT;
+
+	if (!__xn_access_ok
 	(curr, VERIFY_READ, __xn_reg_arg3(regs), sizeof(label)))
 		return -EFAULT;
 
+	__xn_copy_from_user(curr, &start, (void __user *)__xn_reg_arg1(regs),
+			sizeof(start));
+
 	__xn_strncpy_from_user(curr, label,
 			   (const char __user *)__xn_reg_arg3(regs),
 			   sizeof(label) - 1);
 	label[sizeof(label) - 1] = '\0';
 
-	start = __xn_reg_arg1(regs);
 	len = __xn_reg_arg2(regs);
 
 	return request_region(start, len, label) ? 0 : -EBUSY;
 }
 
 /*
- * int __rt_misc_put_io_region(unsigned long start,
+ * int __rt_misc_put_io_region(uint64_t *start,
  * unsigned long len)
  */
 
 static int __rt_misc_put_io_region(struct task_struct *curr,
    struct pt_regs *regs)
 {
-	unsigned long start, len;
+	unsigned long len;
+	uint64_t start;
 
-	start = __xn_reg_arg1(regs);
+	if (!__xn_access_ok
+	(curr, VERIFY_READ, __xn_reg_arg1(regs), sizeof(start)))
+		return -EFAULT;
+
+	__xn_copy_from_user(curr, &start, (void __user *)__xn_reg_arg1(regs),
+			sizeof(start));
+
 	len = __xn_reg_arg2(regs);
 	release_region(start, len);
 


Re: [Xenomai-core] User space drivers on PPC440

2007-11-08 Thread Philippe Gerum
Steven A. Falco wrote:
> The rt_misc_get_io_region() has the "start" argument as an unsigned 
> long.  On the PPC440, we have a 36-bit address space, where the I/O 
> registers are generally above the 4GB area.  For example, the UART is at 
> address 0x1ef600300.
> 
> The Linux request_region call has "start" typed as a resource_size_t, 
> which is a u64 on the PPC440 (i.e. CONFIG_RESOURCES_64BIT is set even 
> though this is a 23-bit processor).
> 
> Is this something that should be handled by xeno-config?  It could 
> append a CFLAG indicating the size of a resource.

Or use a 64bit long unconditionally, to keep the same kernel-based
implementation, since there is no performance issue for this call. In
any case, we need to fix the API before 2.4 final is out -- which will
also affect the ABI, but it already changed during the 2.4 development
phase anyway.

> 
> Steve
> 
> 


-- 
Philippe.



Re: [Xenomai-core] User space drivers on PPC440

2007-11-08 Thread Steven A. Falco

>  (i.e. CONFIG_RESOURCES_64BIT is set even though this is a 23-bit 
> processor).
Make that a 32-bit processor. :-[
