Gilles Chanteperdrix wrote:
> Philippe Gerum wrote:
> > Gilles Chanteperdrix wrote:
> > > As far as I understood, the user-space atomic operations, used to
> > > acquire a free mutex in user-space, are not part of the futex API. In
> > > our case, we are using xnarch_atomic_* operations to implement this
> > > user-space locking stuff portably. I think that even setting the bit
> > > saying that the mutex is currently owned is done in the pthread_mutex
> > > implementation, not in the futex API.
> > I would fully agree if the futex API did not define PI-based ops, which are
> > needed for proper real-time operations from userland. You will certainly agree
> > that PI implies that some kind of ownership exists; and because there can't be
> > more than a single owner in that case, you end up with an object that can't be
> > held by more than a single task. So you do have a mutex in disguise, whatever
> > the way to keep its state is (a bit, an integer, whatever). So there is more
> > semantics attached to that API than to simply manage an event notification.
> >
> > > Now, what remains is
> > > sys_futex(FUTEX_WAIT) and sys_futex(FUTEX_WAKE), this looks terribly like
> > > xnsynch_sleep_on and xnsynch_wakeup_one_sleeper.
> > >
> > Yes, here again I partially agree, except for a significant issue: xnsynch is a
> > stateless object (that's why we can use it for different syncobjs which are
> > totally unrelated in their semantics - mutex, queue, region, counting sems,
> > whatever). I was just wondering if we could make the *tex thingy a bit more
> > stateful to ease the job for the skins, just in case we would use it for more
> > than the POSIX skin. I have no immediate answer to this question, just asking
> > -- this is my contribution as a senior member of the peanut gallery.
> We can certainly implement an abstraction managing xnarch_atomic_t +
> xnsynch_t, however, it seems that we would have to re-factor all
> mutex/semaphores implementations to use this new abstraction. The
> current approach is to add an xnarch_atomic_cmpxchg in user-space, and
> fall back to an almost unchanged kernel-space support when it fails.
Ok. Let's merge this as it is. Common code will emerge eventually if it happens
to make sense when plugging the feature into the native and VxWorks mutex
support.
> > > > > I feel this would complicate things: currently, the way I implemented
> > > > > user-space mutexes for the posix skin kept the old association between
> > > > > the user-space mutex and its kernel-space companion, also used by
> > > > > kernel-space operations.
> > > >
> > > > My concern boils down to: how much of the POSIX implementation, beyond
> > > > the cb_lock stuff, would have to be duplicated to get the same support
> > > > ported to, say, the VxWorks semM services?
> > >
> > > The initialization code, and the additional calls to
> > > xnarch_atomic_cmpxchg in user-space. If xnarch_atomic_cmpxchg fails we
> > > call kernel-space, which is mostly unchanged.
> > >
> > > Because of the cb_lock stuff, I also needed to implement the
> > > kernel-space syscalls in two versions: one if user-space has
> > > xnarch_atomic_cmpxchg and could lock the mutex control block, the other
> > > if the mutex control block needs to be locked by kernel-space.
> > >
> > This is the part my laziness would very much like to factor out as much as
> > possible, if ever possible.
> The current implementation is more or less:
> /* Common generic function */
> /*... Assume mutex cb is locked ... */
> /* Kernel-space operation */
> /* syscall wrapper */
> #ifdef XNARCH_HAVE_US_ATOMIC_CMPXCHG
Xenomai-core mailing list