Dear Harry,

Thanks for your ideas. I am an old hand at QNX, which used Send/Receive/Reply
message passing.
Anyone familiar with QNX will need no convincing that this is a very
simple and extremely robust means of IPC that:

   a)  Forces you to write code in a highly modular fashion.
   b)  Is intuitive, in that the coder need not worry about
synchronisation between tasks.
   c)  Is very good for fault tolerance and easy to debug.
   d)  Makes it very difficult to write code that will deadlock.


> >The problem here is that there are typically many clients (Task A's)
> >sending requests to their corresponding server (Task B).
>
> So, do you have a one-to-one correspondence between clients and servers,
> ie for every Task A there's a corresponding Task B, or are there many Task As
> for a given Task B?

No, there is no 1:1 correspondence.  There may be many client tasks (Task A) for
each server task (Task B).  In addition, a client task may send to more than one
server task.  For example, you might have 3 server tasks (svrA, svrB, svrC) and
say 100 client tasks (clnt001 to clnt100).  clnt001 may send a message to svrA,
get a reply, and some time later send a message to svrB and wait for a different
reply.

> What form do the requests take?  It sounds like maybe message queues
> might be a better choice.  They are specifically designed to allow the passing
> of requests from one process to another.

Yes, I have tried two different user-space implementations, using both message
queues (msgget et al.) and shared memory (shmget/shmat etc.).  I used the message
queue to handle synchronisation and to pass shared-memory handles to the server,
and shared memory for the actual transfer.  This got over the problem of
exceeding the message queue's 16K limit.  It was quite slow (compared to QNX on
the same hardware) and did not scale linearly with the number of clients running.
(Ideally, if one client task running on the system transfers N bytes per second,
then 2 simultaneous client tasks should each transfer N/2 bytes per second.)
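
Roughly, the client side of that hybrid scheme looks like this.  This is only a
sketch of the idea, not my actual code: the key value, the request struct and
the function name are made up for illustration, and error checking is omitted.

    /* Client side of the hybrid scheme: bulk data travels in a
       shared-memory segment; only the segment id and length go
       through the message queue. */
    #include <string.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <sys/shm.h>

    #define REQ_KEY 0x5151            /* illustrative well-known queue key */

    struct request {
        long   mtype;                 /* mandatory first field for msgsnd() */
        int    shmid;                 /* handle to the segment with the data */
        size_t len;                   /* payload bytes in the segment */
    };

    int send_request(const char *payload, size_t len)
    {
        int   qid   = msgget(REQ_KEY, IPC_CREAT | 0600);
        int   shmid = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
        char *buf   = shmat(shmid, NULL, 0);

        memcpy(buf, payload, len);    /* bulk transfer via shared memory */

        struct request req = { 1, shmid, len };
        return msgsnd(qid, &req, sizeof req - sizeof req.mtype, 0);
        /* server side: msgrcv(), shmat(req.shmid), process, then
           acknowledge on a reply queue (not shown). */
    }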

There is no restriction on message size, though under QNX2 there was an upper
limit of 64K due to segmentation constraints.  Most of the stuff I am porting
will live with a 64K limit, but it would be nice to remove that restriction if
possible.

> My guess would be that it's not that easy, because the sharing of data like
> this is precisely why shared memory exists.  As I say, though, threads are
> good for this sort of thing, if there's a one-to-one correspondence between
> clients and servers.

I reckon you're right!  My first thought was that at least one of the tasks
(either client or server) will not be running at the time of the request, and
may therefore have been paged out to swap.  Perhaps I'm being naive, but how
about this for an idea:

    a)   The server registers itself with the kernel module (KM), and the
         KM then:
            - does a virt_to_phys() on the receive buffer passed.
            - places the server task in a blocked state, waiting to be
              woken by a client.

    b)   A client comes along and does a send request to the server.  The
         KM then:
            - does a virt_to_phys() on the send buffer passed.
            - copies the memory between the two physical addresses
              determined.
            - wakes up the server task, indicating that its receive buffer
              is now ready.
            - places the sending task in a blocked state, waiting for a
              reply.

The reply follows the same sort of process as above.
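
From user space, I imagine the KM would present something like the following.
To be clear, /dev/srr, the SRR_* ioctl numbers, struct srr_msg and both
functions are all invented here just to show the shape of the thing; no such
module exists.

    /* Hypothetical user-space view of the proposed KM, modelled on
       QNX Send/Receive/Reply. */
    #include <stddef.h>
    #include <sys/ioctl.h>

    struct srr_msg {
        int    server_id;  /* which registered server to address */
        void  *buf;        /* send buffer (client) / receive buffer (server) */
        size_t len;
    };

    #define SRR_RECEIVE _IOWR('S', 1, struct srr_msg) /* step a: register + block */
    #define SRR_SEND    _IOWR('S', 2, struct srr_msg) /* step b: copy, wake, block */
    #define SRR_REPLY   _IOW('S', 3, struct srr_msg)  /* unblock the sender */

    void server_loop(int fd)              /* fd = open("/dev/srr", O_RDWR) */
    {
        char rxbuf[65536];
        struct srr_msg m = { 0, rxbuf, sizeof rxbuf };

        for (;;) {
            ioctl(fd, SRR_RECEIVE, &m);   /* blocks until a client Sends */
            /* ... act on the request now sitting in rxbuf ... */
            ioctl(fd, SRR_REPLY, &m);     /* wakes the reply-blocked client */
        }
    }

    int client_send(int fd, int server, void *req, size_t len)
    {
        struct srr_msg m = { server, req, len };
        return ioctl(fd, SRR_SEND, &m);   /* returns when the server Replies */
    }

The blocking Send is the whole trick: the call only returns once the Reply
arrives, so synchronisation between the tasks falls out for free.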

The one VERY BIG caveat here is that the 'non-current' task must still be in
physical memory at the time of the transfer.  If it is not, then you either get
a segmentation fault or (even worse) you trash some other poor innocent task's
data segment.  Thanks to David's previous post, mlock() will get around that.
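
For completeness, pinning a buffer is just the following (mlock() normally
needs root privilege, and the buffer and function names here are illustrative):

    #include <sys/mman.h>

    static char txbuf[65536];   /* illustrative transfer buffer */

    int pin_buffer(void)
    {
        /* Pin the buffer into RAM before registering it with the KM,
           so the physical copy can never hit a paged-out frame. */
        if (mlock(txbuf, sizeof txbuf) != 0)
            return -1;          /* not privileged, or lock limit hit */
        /* ... register txbuf with the KM and do the transfers ... */
        return munlock(txbuf, sizeof txbuf);
    }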

        **********************************************************

I know it's a bit of a pain, and some would say this old dog should learn new
IPC tricks (and I have), but after 10 years of writing code using the SRR
(send/receive/reply) paradigm, and having never been able to write an
application that brought the system down, I need a lot of convincing.  In
addition, code written this way has been designed in a modular fashion very
quickly and almost always works 'first go'.

I think this sort of IPC is worth implementing in Linux (and not just for me!).
I think if more people knew about it, it would be far more popular than it is,
particularly in areas such as process control and synchronous server apps.

Hope I haven't dragged on too much about this, and trust it is of some interest
to others.

Thanks for your help and ideas.

Andrew E.

(PS: should I be sending only one reply to the SLUG list, or one to the author
and a cc to the SLUG list?  I am a newbie here, and the SLUG rules say you can't
flame me for asking dumb questions, but I don't want to annoy anyone by sending
mail directly.)

-- 
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://slug.org.au/lists/listinfo/slug
