[Xenomai-core] v2.2.x broken (r1534)

2006-09-01 Thread Ignacio García Pérez

Hi,

I just did a fresh checkout of the v2.2.x branch and it does not compile:

gcc: pthread_create: No such file or directory
gcc: pthread_detach: No such file or directory
gcc: pthread_setschedparam: No such file or directory
gcc: pthread_getschedparam: No such file or directory
gcc: pthread_yield: No such file or directory
gcc: sched_yield: No such file or directory
gcc: sem_init: No such file or directory
gcc: sem_destroy: No such file or directory
gcc: sem_post: No such file or directory
...


Any idea of what's broken?

Thanks.



[Xenomai-core] rt_task_wait_period returning -ETIMEDOUT

2006-06-06 Thread Ignacio García Pérez

Hi,

rt_task_wait_period() is spuriously returning -ETIMEDOUT.

(spuriously = maybe once every two hours with period = 500us)

I'm using the trunk code and that error code is not mentioned in the 
rt_task_wait_period documentation.
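
For context, here is a minimal sketch of the kind of loop involved, assuming
the 2.x native skin; whether rt_task_wait_period() takes an overruns pointer
depends on the exact revision, so treat the signature below as an assumption:

#define PERIOD_NS 500000ULL  /* 500us, as in the report above */

void sampler(void *cookie)
{
    unsigned long overruns;
    int err;

    /* With the oneshot timer the period is in nanoseconds;
     * with a periodic timer it would be in ticks. */
    rt_task_set_periodic(NULL, TM_NOW, PERIOD_NS);

    for (;;) {
        err = rt_task_wait_period(&overruns);
        if (err == -ETIMEDOUT) {
            /* A release point was missed; 'overruns' says how
             * many. This is the return code being reported. */
            continue;
        }
        /* ... do the periodic work ... */
    }
}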


Any clues?

Nacho.



[Xenomai-core] [BUG] kernel oops on registry duplicate names

2006-05-12 Thread Ignacio García Pérez

The subject pretty much explains it all.

Just try to create a task named "foo" and a queue also named "foo".
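
For reference, a minimal reproduction sketch (kernel-space native skin
assumed; the stack size and priority values are arbitrary):

RT_TASK task;
RT_QUEUE queue;

/* Both objects try to register under the same name "foo";
 * the second call is what triggers the oops. */
rt_task_create(&task, "foo", 8192, 50, 0);
rt_queue_create(&queue, "foo", 4096, Q_UNLIMITED, Q_FIFO);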

Tested in 2.1.1 and svn HEAD.

Nacho.



Re: [Xenomai-core] Coding style

2005-11-30 Thread Ignacio García Pérez
Philippe Gerum wrote:

 Ignacio García Pérez wrote:

 Hi,

 Some time ago someone mentioned the current Xenomai coding style, and
 that maybe it would be a good idea to change it to something more
 standard. A good place to start would be to turn the tabs into spaces to
 eliminate editor configuration dependency. Anyway, my question is: could
 indent be used to do this?


 The discussion was about going for the kernel coding style and rules;
 in such a case, tabs would have to be hard ones.

OK. Never mind about the tabs. What about the coding style? In general,
when I fiddle with someone else's code, I just adapt to their coding
style, but it is a bit hard in this case...

Nacho.



[Xenomai-core] rt_intr_enable() required after rt_intr_create

2005-11-29 Thread Ignacio García Pérez
Hi,

I noticed that when an interrupt object is created using
rt_intr_create(), it is created disabled, and a call to rt_intr_enable()
is necessary for the ISR to be called.

Question is: is this the expected behaviour? If so, I think this should
be mentioned somewhere in the rt_intr_create documentation. In fact,
from reading the docs one could infer the opposite.

On a related issue, I noticed that the rt_intr_enable() documentation says:

Enables the hardware interrupt line associated with an interrupt
object. Over Adeos-based systems which mask and acknowledge IRQs upon
receipt, this operation is necessary to revalidate the interrupt channel
so that more interrupts from the same source can be notified.

Is this correct? I ask because the rt_intr_create() documentation tells
you to just return RT_INTR_ENABLE from the ISR if you want this. It's
confusing.
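
For what it's worth, a rough sketch of the two mechanisms being discussed
(kernel-space native skin assumed; the exact rt_intr_create() argument list
changed across 2.x releases and MY_IRQ is hypothetical, so treat the call
below as an assumption):

static RT_INTR my_intr;

static int my_isr(xnintr_t *cookie)
{
    /* ... acknowledge the device here ... */

    /* Returning RT_INTR_ENABLE revalidates the masked IRQ line,
     * as the rt_intr_enable() documentation describes. */
    return RT_INTR_ENABLE;
}

int init_module(void)
{
    int err = rt_intr_create(&my_intr, "my_intr", MY_IRQ, &my_isr, NULL, 0);

    if (!err)
        /* The object starts out disabled: without this call the
         * ISR is never invoked, which is the behaviour observed. */
        err = rt_intr_enable(&my_intr);

    return err;
}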


Nacho.




[Xenomai-core] Coding style

2005-11-29 Thread Ignacio García Pérez
Hi,

Some time ago someone mentioned the current Xenomai coding style, and
that maybe it would be a good idea to change it to something more
standard. A good place to start would be to turn the tabs into spaces to
eliminate editor configuration dependency. Anyway, my question is: could
indent be used to do this?

Nacho.



[Xenomai-core] Latest snapshot (182) not compiling (kernel)

2005-11-25 Thread Ignacio García Pérez
Hi,

I don't know what's changed, but I updated my Xenomai snapshot to the latest
revision (182) and the kernel no longer compiles (it fails due to some
xnpod_* undefined symbols).

Revision 179 compiled fine.

Nacho.



[Xenomai-core] Re: [Xenomai-help] Blocking reads from pipes

2005-11-18 Thread Ignacio García Pérez

Exactly, I have just found that out and actually posted a long mail just
before getting this mail from you :o)

Yep, and before getting blocked, read() increments the counter as well; that's
why we don't have an xnpipe_release() called as a result of close().
So everything is correct.

Fine. Then my problem is not related to Xenomai. Any suggestions on how to
force read() to return (without writing anything to the other end of the
pipe)?
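
One common way to do that with plain POSIX calls (nothing Xenomai-specific)
is to wait on the pipe device with select() and a timeout, so the reader can
periodically check a shutdown flag; 'running' below is a hypothetical flag:

#include <sys/select.h>
#include <unistd.h>

/* fd refers to the /dev/rtpN device opened by the user space program. */
ssize_t read_with_timeout(int fd, void *buf, size_t len, volatile int *running)
{
    while (*running) {
        fd_set rfds;
        struct timeval tv = { .tv_sec = 0, .tv_usec = 100000 }; /* 100ms */

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0)
            return read(fd, buf, len); /* data is pending, will not block */
    }

    return 0; /* asked to stop */
}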



Re: [Xenomai-core] rt_pipe_* usage

2005-11-15 Thread Ignacio García Pérez
Philippe Gerum wrote:

 Ignacio García Pérez wrote:

 RT_PIPE_MSG *m = rt_pipe_alloc(sizeof(mystruct_t));
 mystruct_t *p = (mystruct_t *)P_MSGPTR(m);
 p->whatever1 = X;
 p->whatever2 = X;
 rt_pipe_send(mypipe, m, sizeof(mystruct_t), P_NORMAL);

 If this is correct, why do I have to specify the size of mystruct_t
 *twice*? Can't it be initialized by rt_pipe_alloc()?


 It's initialized actually (*).


 So, what's the sense of having to specify it again when calling
 rt_pipe_send()?


 Because you may (pre-)allocate more than you really need to send
 afterwards.

I guess this should be explained in the docs. Please consider the small
patch I attach.

Nacho.
Index: skins/native/pipe.c
===
--- skins/native/pipe.c (revision 143)
+++ skins/native/pipe.c (working copy)
@@ -598,7 +598,11 @@
  *
  * @param size The size in bytes of the message (payload data
  * only). Zero is a valid value, in which case the service returns
- * immediately without sending any message.
+ * immediately without sending any message. This parameter allows
+ * you to actually send less data than you reserved using the
+ * rt_pipe_alloc() service, which may be the case if you did not
+ * know how much space you needed at the time of allocation. In all
+ * other cases it may be more convenient to just pass P_MSGSIZE(msg).
  *
  * Additionally, rt_pipe_send() causes any data buffered by
  * rt_pipe_stream() to be flushed prior to sending the message. For
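
To illustrate the point the patch documents, a hedged sketch of the
allocate-more, send-less pattern (MAX_PAYLOAD and build_payload() are
made-up names; rt_pipe_alloc() is used in the single-argument form shown
earlier in this thread):

#define MAX_PAYLOAD 256 /* worst-case message size, arbitrary */

static int send_sample(RT_PIPE *mypipe)
{
    RT_PIPE_MSG *msg = rt_pipe_alloc(MAX_PAYLOAD);
    size_t used;

    if (msg == NULL)
        return -ENOMEM;

    /* Hypothetical helper that fills the payload and returns how many
     * bytes it actually wrote (anything from 0 to MAX_PAYLOAD). */
    used = build_payload(P_MSGPTR(msg), MAX_PAYLOAD);

    /* This is why rt_pipe_send() takes its own size argument. */
    return rt_pipe_send(mypipe, msg, used, P_NORMAL);
}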


Re: [Xenomai-core] rt_pipe_* usage

2005-11-14 Thread Ignacio García Pérez

 
  RT_PIPE_MSG *m = rt_pipe_alloc(sizeof(mystruct_t));
  mystruct_t *p = (mystruct_t *)P_MSGPTR(m);
  p->whatever1 = X;
  p->whatever2 = X;
  rt_pipe_send(mypipe, m, sizeof(mystruct_t), P_NORMAL);
 
  If this is correct, why do I have to specify the size of mystruct_t
  *twice*? Can't it be initialized by rt_pipe_alloc()?

 It's initialized actually (*).

So, what's the sense of having to specify it again when calling
rt_pipe_send()?





[Xenomai-core] [BUG] rt_pipe_flush declaration missing in skins/native/pipe.h

2005-11-14 Thread Ignacio García Pérez
Hi,

The subject says it all.

Nacho.



[Xenomai-core] rt_pipe_* usage

2005-11-14 Thread Ignacio García Pérez
Hi,

I'm now having my first contact with the pipe framework, and have some
comments about it that might be of interest to the core developers:

While the documentation is overall *great*, I found it a bit lacking
regarding the pipes. It would be good to have some examples of general usage.

As far as I can tell, there is no mention of the usage of P_MSGPTR and
P_MSGSIZE. I had to learn about them in the headers.

At first sight, the rt_pipe_send call is confusing: why should I pass
the data size since it is supposed to be embedded in the RT_PIPE_MSG
structure?

This is what I first did:

RT_PIPE_MSG *m = rt_pipe_alloc(sizeof(mystruct));
P_MSGPTR(m) = mystruct;
P_MSGSIZE(m) = sizeof(mystruct);
rt_pipe_send(mypipe, m, sizeof(mystruct), P_NORMAL);

Which is obviously wrong. Please correct me if I'm wrong:

P_MSGPTR and P_MSGSIZE are intended not to be used as lvalues (is
there a way to define these macros to generate a compile error if they
are?).
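
(For what it's worth, one way to make such accessor macros reject lvalue use
is sketched below; this is only an illustration with made-up names, not the
actual Xenomai headers: wrapping the expansion in a cast yields an rvalue,
so an assignment such as MSG_SIZE(m) = 42; no longer compiles.)

struct my_msg { void *ptr; size_t size; };

#define MSG_PTR(m)  ((void *)(m)->ptr)    /* rvalue: cannot be assigned to */
#define MSG_SIZE(m) ((size_t)(m)->size)   /* rvalue: cannot be assigned to */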

So, the correct way would be something like this (again, correct me if
I'm wrong):

RT_PIPE_MSG *m = rt_pipe_alloc(sizeof(mystruct_t));
mystruct_t *p = (mystruct_t *)P_MSGPTR(m);
p->whatever1 = X;
p->whatever2 = X;
rt_pipe_send(mypipe, m, sizeof(mystruct_t), P_NORMAL);

If this is correct, why do I have to specify the size of mystruct_t
*twice*? Can't it be initialized by rt_pipe_alloc()?

Nacho.



Re: [Xenomai-core] [BUG] rt_pipe_flush declaration missing in skins/native/pipe.h

2005-11-14 Thread Ignacio García Pérez
Philippe Gerum wrote:

 Ignacio García Pérez wrote:

 Hi,

 The subject says it all.


 Fixed, thanks.

 PS: please send patches when possible, it's faster to handle for me
 and less likely to be forgotten in my job queue. TIA,

I updated my source from the repository, and the
EXPORT_SYMBOL(rt_pipe_flush) in pipe.c is missing, so rt_pipe_flush is
not usable yet. Patch attached.

Nacho.
Index: skins/native/pipe.c
===
--- skins/native/pipe.c (revision 143)
+++ skins/native/pipe.c (working copy)
@@ -1050,5 +1050,6 @@
 EXPORT_SYMBOL(rt_pipe_read);
 EXPORT_SYMBOL(rt_pipe_write);
 EXPORT_SYMBOL(rt_pipe_stream);
+EXPORT_SYMBOL(rt_pipe_flush);
 EXPORT_SYMBOL(rt_pipe_alloc);
 EXPORT_SYMBOL(rt_pipe_free);


[Xenomai-core] More on rt pipes usage

2005-11-14 Thread Ignacio García Pérez
Hi,

Suppose I have a kernel rt task that samples data at a certain rate and
writes it as messages into a rt pipe, from which it is read by a user
space non rt program.

I want to limit the number of messages that are put into the pipe,
because otherwise, if the user space program dies, the pipe will grow
endlessly until it exhausts the rt heap.

What I want to do is to have a pipe that can hold a limited number of
messages such that rt_pipe_write will fail if it is full.

Is there a way to know how many messages are there in the pipe?

Even if there is a way, to prevent a (harmless) race condition, I would
need to lock the pipe between checking the number of messages and
calling rt_pipe_write. As far as I know, pipe locking belongs to the
nucleus and I'd like to stay in the native skin as much as possible.

Another method would be to count how many messages I write, but then I'd
need some hook that notifies me when the user space program reads a
message so I can decrement the count.
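
To make that second idea concrete, here is a rough sketch; this is not an
existing native-skin feature, MAX_IN_FLIGHT is arbitrary, and it assumes the
user space reader write()s one credit byte back into /dev/rtpN for every
message it consumes (rt_pipe_alloc()/rt_pipe_free() are assumed in their
single-argument forms, as elsewhere in these threads):

#define MAX_IN_FLIGHT 64

static int in_flight;

static int bounded_send(RT_PIPE *pipe, RT_PIPE_MSG *msg, size_t len)
{
    char credit;

    /* Drain any credits returned by the reader (non-blocking read). */
    while (rt_pipe_read(pipe, &credit, sizeof(credit), TM_NONBLOCK) > 0)
        if (in_flight > 0)
            in_flight--;

    if (in_flight >= MAX_IN_FLIGHT) {
        rt_pipe_free(msg); /* drop the sample rather than grow the heap */
        return -EAGAIN;
    }

    in_flight++;
    return rt_pipe_send(pipe, msg, len, P_NORMAL);
}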

Any ideas?

Nacho.


