David Howells <[EMAIL PROTECTED]> writes:

> This is fairly pointless... the Linux system call overhead isn't all that
> major - issuing separate send & receive calls loses you very little. Beyond
> the fact that the server may be busy doing something else, the real killer is
> the pair of context switches and the fact that there is no guarantee that the
> server will immediately be scheduled after the client, and the client again
> after the server.

I'm not so sure. I did a few tests using signals and ptrace as the
communication mechanism instead of sendmsg/recvmsg, and the total time
for a server round trip was cut in half. I don't know how much of that
is due to the extra context switches and how much to sendmsg overhead,
but there is certainly a lot of room for improvement here. And before
deciding to move everything into the kernel, I think it would be a
good idea to investigate what we can achieve with less radical
solutions.
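
Roughly, the thing being timed is a loop like the following (just a
sketch of such a micro-benchmark, not the exact code I used): the
client sends a small request over a socketpair and waits for the
reply, while a forked "server" echoes every message back.

/* rough sketch of a sendmsg/recvmsg round-trip micro-benchmark */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define ITERATIONS 10000

int main(void)
{
    int fd[2], i;
    char buf[64];
    struct iovec vec = { buf, sizeof(buf) };
    struct msghdr msg = { 0 };
    struct timeval t0, t1;

    msg.msg_iov = &vec;
    msg.msg_iovlen = 1;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fd) == -1) { perror("socketpair"); exit(1); }

    if (fork() == 0)   /* child plays the server: echo every message back */
    {
        close(fd[0]);
        for (;;)
        {
            if (recvmsg(fd[1], &msg, 0) <= 0) _exit(0);
            sendmsg(fd[1], &msg, 0);
        }
    }
    close(fd[1]);

    gettimeofday(&t0, NULL);
    for (i = 0; i < ITERATIONS; i++)   /* client: one full request/reply per loop */
    {
        sendmsg(fd[0], &msg, 0);
        recvmsg(fd[0], &msg, 0);
    }
    gettimeofday(&t1, NULL);

    printf("%.2f us per round trip\n",
           ((t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec)) / ITERATIONS);
    return 0;   /* closing fd[0] on exit makes the child see EOF and quit */
}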

> Indeed, it may be necessary to swap the server back in, depending on system
> load.

Yes, of course; but if the system is loaded, I'd rather have the server
swapped out than locked forever in unswappable memory, as with your
solution.

> Not nice... Imagine you want to just grab a mutex... you'd have to transfer 4K
> of data each direction, the vast majority of which would be totally wasted. I
> have to agree with Patrik again, sendmsg+recvmsg are better options there.

You missed the point; of course transferring 4K of data is too slow,
which is exactly why we don't do it now. But if we had a mechanism
that let us avoid the 4K transfers, it would help a lot. Yes, there is
sendmsg, but it carries some unnecessary overhead (memory allocations,
etc.).
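
Just to illustrate what I mean by avoiding the transfers (this is only
a sketch, not how the server works today): keep the bulk data in a
block shared between client and server, and pass nothing but a tiny
request code through the socket, so grabbing a mutex never copies 4K
in either direction.

/* illustration: arguments live in shared memory, only a small code crosses the socket */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/socket.h>

struct request
{
    int  code;          /* what the client wants done */
    char data[4096];    /* arguments/results live here, never copied */
};

int main(void)
{
    int fd[2], code, ack;
    struct request *req;

    /* one block shared between client and server across fork() */
    req = mmap(NULL, sizeof(*req), PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (req == MAP_FAILED) { perror("mmap"); return 1; }
    socketpair(AF_UNIX, SOCK_STREAM, 0, fd);

    if (fork() == 0)    /* server side */
    {
        close(fd[0]);
        while (read(fd[1], &code, sizeof(code)) == sizeof(code))
        {
            /* operate directly on req->data; here we just tag the reply */
            snprintf(req->data, sizeof(req->data), "handled request %d", code);
            ack = 0;
            write(fd[1], &ack, sizeof(ack));   /* wake the client: 4 bytes, not 4K */
        }
        _exit(0);
    }
    close(fd[1]);

    strcpy(req->data, "mutex_name");           /* arguments go straight into shared memory */
    code = 1;
    write(fd[0], &code, sizeof(code));         /* only the request code crosses the socket */
    read(fd[0], &ack, sizeof(ack));
    printf("server says: %s\n", req->data);

    close(fd[0]);
    return 0;
}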

-- 
Alexandre Julliard
[EMAIL PROTECTED]
