At the moment the device name is defined by the caller of
rti2c_adapter_register().
When the device is registered, a number is assigned to the adapter. Maybe
the device name could be made up of some fixed text like "rti2c" followed
by the number that is assigned to the adapter.

That was the idea behind it. Given that RTI2C is generic (and it looks
like it is), you can then write a generic application, opening "rti2c0"
more or less blindly without knowing the adapter behind it.
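
Just to illustrate what that buys the application (rt_dev_open() is the
normal RTDM call, "rti2c0" the fixed name proposed above):

  // open the first adapter by its fixed name, without knowing
  // which hardware adapter is actually behind it
  int fd = rt_dev_open("rti2c0", 0);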

Fixed.

  // set the address of the device on the bus
  rt_dev_ioctl(fd, RTI2C_SLAVE, addr);
Is the typical use case not to change the slave address that often, but
rather to use separate device instances for accessing different slaves?
I'm wondering if a combined address+request command may make sense.
Maybe even a socket-based protocol device would match as well, maybe
even better... (needs more thinking, I guess)

Does Linux expose a similar API via some character devices? Keeping the
distance to Linux low might be a reason not to go down the socket path.
The Linux API is exactly the same. I had the same thoughts concerning
combined commands (address+request), but maybe we could offer such
commands as wrappers.
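
Something like this, as a rough sketch (the helper name is made up, the
calls are the existing RTDM user API):

  // possible convenience wrapper: select the slave, then read from it
  ssize_t rti2c_read_from(int fd, int addr, void *buf, size_t len)
  {
      int err = rt_dev_ioctl(fd, RTI2C_SLAVE, addr);
      if (err < 0)
          return err;
      return rt_dev_read(fd, buf, len);
  }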

My concern was about performance rather than convenience. If a usage
pattern consists of almost as many address-set driver invocations as
actual requests, then you would benefit quite a lot from a combined
service call. But the question is, given that Linux uses the same API,
whether that is really an expected use case and worth the optimisation
effort.

For the moment I'll keep it the way it is.

...
A few thoughts on this:

 - What are the typical delays between 4. and 5.? Does it make sense for
   short requests to busy-wait (to avoid the scheduler/IRQ overhead)?
   I've seen that there is some polling path included in the code, but
   that would break as it calls into Linux.
Oops, I missed that schedule() call. Regarding typical delays, I have to
admit that I have not measured yet, but I would estimate about 500 us
for the above use case. What would you estimate the scheduler/IRQ
overhead to be?

Oh, 500 us is far more than you should see as "suspend-me +
switch-to-someone-else + raise-irq-and-switch-back-to-me" overhead even
on low-end PPC. Busy-waiting is something for a few microseconds.

Ah, all right, so I'll leave this as is, too.

 - Will a request always return? Or does it make sense to establish an
   (optional) timeout mechanism here?
A timeout mechanism is already there: rti2c_ppc4xx_wait_for_tc() uses
rtdm_event_timedwait() to wait for the interrupt to complete. Of course,
that is left to the implementation of the adapter.
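
For reference, the wait path looks roughly like this (the struct and
field names here are only illustrative, rtdm_event_timedwait() is the
actual RTDM service used):

  // sketch of waiting for transfer completion with a timeout;
  // the IRQ handler signals the event, otherwise -ETIMEDOUT is returned
  static int rti2c_ppc4xx_wait_for_tc(struct rti2c_ppc4xx *dev)
  {
      return rtdm_event_timedwait(&dev->irq_event, dev->timeout, NULL);
  }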

True, I missed this.

 - Buffer allocation for short requests may also happen on the stack, I
   think.
That is already done. Have a look at the RTI2C_SMBUS IOCTL. Do you think
it should also be possible to do this in the read/write calls, depending
on the requested size?

I currently see allocations in RTI2C_RDWR (BTW, there is a forgotten
kmalloc) and in read/write. As I don't know the typical sizes of those
requests, I cannot judge this. So take it as some food for thought.

In contrast, the rtdm_malloc on adapter registration is overkill - this
will not happen in RT context, will it?

Fixed. Still not sure what to do about the memory allocations in the
other places.
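
For the read/write case, the size-dependent variant could look roughly
like this (the handler signature follows RTDM, the threshold and the
transfer helper are made up):

  // sketch: use the stack for small requests, rtdm_malloc() for large ones
  #define RTI2C_STACK_BUF 32   /* made-up threshold */

  static ssize_t rti2c_read_rt(struct rtdm_dev_context *context,
                               rtdm_user_info_t *user_info,
                               void *buf, size_t nbyte)
  {
      char stack_buf[RTI2C_STACK_BUF];
      char *kbuf = stack_buf;
      ssize_t ret;

      if (nbyte > sizeof(stack_buf)) {
          kbuf = rtdm_malloc(nbyte);
          if (!kbuf)
              return -ENOMEM;
      }

      ret = rti2c_do_read(context, kbuf, nbyte); /* hypothetical transfer helper */

      if (ret > 0 && rtdm_safe_copy_to_user(user_info, buf, kbuf, ret))
          ret = -EFAULT;

      if (kbuf != stack_buf)
          rtdm_free(kbuf);

      return ret;
  }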

 - Buffer allocation for large requests may (optionally) happen
   ahead-of-time via some special IOCTL. This would make a device
   independent of the current system heap usage/fragmentation.
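   E.g. something as simple as this (purely illustrative, such a command
   does not exist yet):

     // hypothetical: reserve a transfer buffer for the largest expected request
     rt_dev_ioctl(fd, RTI2C_PREALLOC_BUF, 4096);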

 - During concurrent use, the latency of a user is defined by its
   priority, of course, and by the number and lengths of the requests
   potentially issued by some lower-priority user, right? Is there a way
   one could intercept a pending request list? Or is this list handed
   over to the hardware in toto? It boils down to "how to manage the
   bandwidth according to the user's priority".
Concurrent use means that individual RTI2C requests are issued from
different tasks. Each request is atomic (and usually quite small). So
when a low-priority thread issues a lot of requests, a high-priority
thread can get in between at any time. I think this way the bandwidth is
already managed properly.

So there is no interface where you can submit several requests as an
atomic chunk? Then I'm fine with what we have.

As far as I can see there is no atomic chunk support.

...

I've attached the changes as a patch.
I'll have a closer look at the documentation and the API next week.

Dirk


