Jan Kiszka wrote:
> Gilles Chanteperdrix wrote:
>> Jan Kiszka wrote:
>>> Gilles Chanteperdrix wrote:
>>>> Jan Kiszka wrote:
>>>>> Jan Kiszka wrote:
>>>>>> Jan Kiszka wrote:
>>>>>>> Gilles Chanteperdrix wrote:
>>>>>>>> GIT version control wrote:
>>>>>>>>> diff --git a/ksrc/skins/posix/mq.c b/ksrc/skins/posix/mq.c
>>>>>>>>> index 11f47c0..a896031 100644
>>>>>>>>> --- a/ksrc/skins/posix/mq.c
>>>>>>>>> +++ b/ksrc/skins/posix/mq.c
>>>>>>>>> @@ -1283,6 +1283,7 @@ int pse51_mq_select_bind(mqd_t fd, struct 
>>>>>>>>> xnselector *selector,
>>>>>>>>>       return 0;
>>>>>>>>>  
>>>>>>>>>        unlock_and_error:
>>>>>>>>> +     xnfree(binding);
>>>>>>>>>       xnlock_put_irqrestore(&nklock, s);
>>>>>>>>>       return err;
>>>>>>>>>  }
>>>>>>>> Ok. Will pull. But I really need to fix that.
>>>>>>>>
>>>>>>> Ack - now that I see it myself.
>>>>>>>
>>>>>> I fixed this in my branch and added another patch to transform EIDRM
>>>>>> into EBADF when selecting a (half-)deleted RTDM device. Please merge.
>>>>>>
>>>>> Wait! When the sync object behind some file descriptor is deleted but
>>>>> the descriptor itself still exists, we rather have to return that
>>>>> fd signaled from select() instead of letting the call fail. I need to
>>>>> look into this again.
>>>> It looks to me like a transitory state, we can wait for the sync object
>>>> to be deleted to have the fd destructor signaled. It should not be long.
>>> That's not an issue of waiting for this. See e.g. TCP: peer closes
>>> connection -> internal sync objects will be destroyed (to make
>>> read/write fail). But the fd will remain valid until the local side
>>> closes it as well.
>> It looks to me like this is going to complicate things a lot, and will
>> be a source of regressions. Why can we not have sync objects be
>> destroyed when the fd is really destroyed and use a status bit of some
>> kind to signal read/write that the fd was closed by peer?
> 
> It is way easier and more consistent to unblock reader and writers via
> destroying the sync object than to signal it and add tests for specific
> states to detect that. Keep in mind that this pattern is in use even
> without select support. Diverging from it just to add select awareness
> to some driver would be a step back.

Ok. As you wish. But in that case, please provide a unit test case that
we can run on all architectures to validate your modifications. We will
not put this test case in the Xenomai repository; we will create a
separate repository for test cases later on.

-- 
                                            Gilles.

_______________________________________________
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core