Re: posix message queues and multiple receivers

2013-12-07 Thread David Laight
On Sat, Dec 07, 2013 at 12:38:42AM +0100, Johnny Billquist wrote:
 
> You know, you might also hit a different problem, which I have had on
> many occasions.
> NFS using 8k transfers saturating the ethernet on the server, making the
> server drop IP fragments. That in turn forces a resend of the whole 8k
> after an nfs timeout. That will totally kill your nfs performance.
> (Obviously, even larger nfs buffers make the problem even worse.)

That wasn't the problem in this case since I could see the very delayed
responses.
That is a big problem; I've NFI why i386 defaults to very large transfers.

> Even with an elevator scan algorithm and four concurrent nfs clients,
> your disk operation will complete within a few hundred ms at most.

This was all from one client. I'm not sure how many concurrent NFS
requests were actually outstanding - it was quite a few.
I remember that the operation was copying a large file to the nfs server;
the process might have been doing a very large write of an mmapped file.
So the client could easily have a few MB of data to transfer - and be
trying to do them all at once.

Thinking further, multiple nfsds probably help when there are a lot more
reads than writes - reads can be serviced from the server's cache.

David

-- 
David Laight: da...@l8s.co.uk


Re: posix message queues and multiple receivers

2013-12-07 Thread Johnny Billquist

On 2013-12-07 20:51, David Laight wrote:

> On Sat, Dec 07, 2013 at 12:38:42AM +0100, Johnny Billquist wrote:
>
>> You know, you might also hit a different problem, which I have had on
>> many occasions.
>> NFS using 8k transfers saturating the ethernet on the server, making the
>> server drop IP fragments. That in turn forces a resend of the whole 8k
>> after an nfs timeout. That will totally kill your nfs performance.
>> (Obviously, even larger nfs buffers make the problem even worse.)
>
> That wasn't the problem in this case since I could see the very delayed
> responses.
> That is a big problem; I've NFI why i386 defaults to very large transfers.
>
>> Even with an elevator scan algorithm and four concurrent nfs clients,
>> your disk operation will complete within a few hundred ms at most.
>
> This was all from one client. I'm not sure how many concurrent NFS
> requests were actually outstanding - it was quite a few.
> I remember that the operation was copying a large file to the nfs server;
> the process might have been doing a very large write of an mmapped file.
> So the client could easily have a few MB of data to transfer - and be
> trying to do them all at once.
>
> Thinking further, multiple nfsds probably help when there are a lot more
> reads than writes - reads can be serviced from the server's cache.


Not much point in dragging this on. Without an actual system to look at, 
all we (I) can do is speculate. I've never seen a concurrent nfs server or 
client be a problem. Conceptually, it's no different from having several 
requests outstanding to the controller at the same time on a local 
machine.


If someone sees problems they think are related to this, I'd be 
interested in investigating it more.


For now, I'll just maintain that I do not believe that it can be a 
problem, but I do believe that it can be beneficial.

It's ok for me if we just disagree on this.

(And yes, I do believe that sometimes you can have problems, but the 
cause, and solution would not be to go single thread.)


Johnny

--
Johnny Billquist  || I'm on a bus
  ||  on a psychedelic trip
email: b...@softjar.se ||  Reading murder books
pdp is alive! ||  tryin' to stay hip - B. Idol


Re: posix message queues and multiple receivers

2013-12-06 Thread David Holland
On Sat, Dec 07, 2013 at 12:02:02AM +, David Laight wrote:
> I believe that the disk driver on the server selected the disk transfers
> using the 'elevator' algorithm. Since the writes were for more or less
> sequential sectors, as soon as they got out of sequence one of the write
> requests would have to wait until all the others completed.

(1) this is a bug (nfs should be taking steps to preserve
sequentiality), and (2) elevator is really not a very good disksort
algorithm to begin with.

However, I'll bet a significant part of the problem is/was using
threads in place of queuing I/O requests asynchronously. If you have a
lot of I/O coming in like that and you want things to behave well,
it's important that the requests queue up where scheduling logic
(disksort or others) can see them, rather than piling up on locks
inside the nfs code.
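
To make that concrete, here is a rough sketch of the idea (made up for
illustration - it is not the actual NetBSD bufq/disksort code, and all
the names are hypothetical): each incoming request is linked into a
queue sorted by block number, where a disksort-style policy can see
every pending transfer, and a single issuing context drains the queue
in order.

/*
 * Hypothetical sketch: queue I/O requests where the sorting policy
 * can see them, instead of blocking one thread per request.
 */
#include <stddef.h>

struct ioreq {
	unsigned long	blkno;	/* starting block of the transfer */
	struct ioreq	*next;
};

static struct ioreq *pending;	/* kept ordered by block number */

/* Insert in ascending block order so sequential writes stay sequential. */
void
ioreq_enqueue(struct ioreq *req)
{
	struct ioreq **pp;

	for (pp = &pending; *pp != NULL; pp = &(*pp)->next)
		if ((*pp)->blkno > req->blkno)
			break;
	req->next = *pp;
	*pp = req;
}

/* The single issuing context takes requests in sorted order. */
struct ioreq *
ioreq_dequeue(void)
{
	struct ioreq *req = pending;

	if (req != NULL)
		pending = req->next;
	return req;
}

A real implementation would also need locking and a smarter policy than
a plain ascending sort, but the point is that the pending work is
visible to the scheduler as a queue, not hidden in blocked threads.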

-- 
David A. Holland
dholl...@netbsd.org


Re: posix message queues and multiple receivers

2013-12-05 Thread David Brownlee
On 3 December 2013 22:45, David Laight da...@l8s.co.uk wrote:
> On Tue, Nov 26, 2013 at 01:32:44PM -0500, Mouse wrote:
>
>> When serving a request takes nontrivial time, and multiple requests can
>> usefully be in progress at once, it is useful - it typically improves
>> performance - to have multiple workers serving requests.  NFS, as
>> mentioned above, is a fairly good example (in these respects).
>
> Except that NFS is a bad example, and mostly should have a single server.
>
> If you could arrange an NFS server for each disk spindle you might win.
>
> But what tends to happen is that the disk 'elevator' algorithm makes
> one of the server processes wait ages for its disk access to complete,
> by which time the client has timed out and resubmitted the RPC request.
> The effect is that a slightly overloaded NFS server hits a catastrophic
> overload and transfer rates become almost zero.
>
> Run a single nfsd and it all works much better.

On that basis should the NetBSD default be changed from -n 4?


re: posix message queues and multiple receivers

2013-12-05 Thread matthew green

>> Run a single nfsd and it all works much better.
>
> On that basis should the NetBSD default be changed from -n 4?

i definitely would object to such a change.

i see slowness from multiple clients when i run nfsd with just
one thread.  i've never seen the problem dsl has seen with a
netbsd nfs server (only other problems! :-)


.mrg.


Re: posix message queues and multiple receivers

2013-12-04 Thread Michael van Elst
da...@l8s.co.uk (David Laight) writes:

> But what tends to happen is that the disk 'elevator' algorithm makes
> one of the server processes wait ages for its disk access to complete,
> by which time the client has timed out and resubmitted the RPC request.

The NFS client does not resubmit the RPC request because it uses TCP
and it's not the IRIX implementation.



posix message queues and multiple receivers

2013-11-26 Thread Marc Balmer
What is the purpose or reasoning behind the fact that multiple processes
can open a message queue for reading using mq_open()?

I wrote simple mq sender and mq receiver programs; when I start multiple
receivers on the same mq and send a message to it, only one of the
receivers gets the message, in a round-robin fashion.  That is probably
by design, but if an mq is meant to connect only two processes, why can
more than two processes open it?
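
Roughly what I am testing with, trimmed down (a sketch; the /testq name
and the sizes are arbitrary, and most error handling is elided):

/* mqsend.c - post one message to the queue */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 128 };
	mqd_t mq = mq_open("/testq", O_CREAT | O_WRONLY, 0600, &attr);

	if (mq == (mqd_t)-1) {
		perror("mq_open");
		return 1;
	}
	/* Exactly one receiver gets this message. */
	mq_send(mq, "hello", strlen("hello") + 1, 0);
	mq_close(mq);
	return 0;
}

/* mqrecv.c - start several copies of this on the same queue */
#include <sys/types.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int
main(void)
{
	struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 128 };
	mqd_t mq = mq_open("/testq", O_CREAT | O_RDONLY, 0600, &attr);
	char buf[128];		/* must be at least mq_msgsize bytes */

	if (mq == (mqd_t)-1) {
		perror("mq_open");
		return 1;
	}
	for (;;) {
		ssize_t n = mq_receive(mq, buf, sizeof(buf), NULL);
		if (n >= 0)
			printf("got: %s\n", buf);
	}
}

Starting two copies of the receiver and then running the sender a few
times is what produces the round-robin delivery described above.
(Depending on the system, linking may need -lrt.)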


Re: posix message queues and multiple receivers

2013-11-26 Thread Martin Husemann
On Tue, Nov 26, 2013 at 09:39:44AM +0100, Marc Balmer wrote:
> What is the purpose or reasoning behind the fact that multiple processes
> can open a message queue for reading using mq_open()?

You can dispatch messages from one producer to several workers (one
writer, multiple readers), or inject work for a single worker from
various clients (one reader, multiple writers).

It is just a message queue, not a multiplexor or whatever you are thinking
of.  A single message stays a single message and gets delivered once.
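
For example, the second pattern (multiple writers, one reader) is just
several client processes each doing something like the following against
the same queue, while a single worker loops on mq_receive().  A minimal
sketch; the /jobs name is made up and error handling is elided:

/* client: inject one work item into the shared queue */
#include <fcntl.h>
#include <mqueue.h>
#include <string.h>

int
main(void)
{
	mqd_t mq = mq_open("/jobs", O_WRONLY);	/* queue created elsewhere */

	if (mq != (mqd_t)-1) {
		mq_send(mq, "job", strlen("job") + 1, 0);
		mq_close(mq);
	}
	return 0;
}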

Martin


Re: posix message queues and multiple receivers

2013-11-26 Thread Mindaugas Rasiukevicius
Hi,

The question is not really kernel related.  Possibly tech-userlevel@,
but it is not related to NetBSD per se, either.

Marc Balmer m...@msys.ch wrote:
> What is the purpose or reasoning behind the fact that multiple processes
> can open a message queue for reading using mq_open()?
>
> I wrote simple mq sender and mq receiver programs; when I start multiple
> receivers on the same mq and send a message to it, only one of the
> receivers gets the message, in a round-robin fashion.  That is probably
> by design, but if an mq is meant to connect only two processes, why can
> more than two processes open it?

Why do you think it is meant to connect only two processes?  It is an
asynchronous inter-process communication mechanism; it is just a FIFO
queue of messages.  To expand on what Martin said, you can have multiple
producers and multiple consumers (M:N, not only 1:N or M:1), since it
really depends on what you build on top of this interface.

These are basic IPC concepts.  I would suggest Googling for POSIX
message queues or just checking the Wikipedia page first.  We also have
a pretty good mqueue(3) manual page.

-- 
Mindaugas


Re: posix message queues and multiple receivers

2013-11-26 Thread Marc Balmer
On 26.11.13 15:13, Mindaugas Rasiukevicius wrote:
> Hi,
>
> The question is not really kernel related.  Possibly tech-userlevel@,
> but it is not related to NetBSD per se, either.

I asked here because it is implemented in the kernel and because what I
see might well be a buglet (given that aio does not work as expected
either).


> Marc Balmer m...@msys.ch wrote:
>> What is the purpose or reasoning behind the fact that multiple processes
>> can open a message queue for reading using mq_open()?
>>
>> I wrote simple mq sender and mq receiver programs; when I start multiple
>> receivers on the same mq and send a message to it, only one of the
>> receivers gets the message, in a round-robin fashion.  That is probably
>> by design, but if an mq is meant to connect only two processes, why can
>> more than two processes open it?
>
> Why do you think it is meant to connect only two processes?  It is an
> asynchronous inter-process communication mechanism; it is just a FIFO
> queue of messages.  To expand on what Martin said, you can have multiple
> producers and multiple consumers (M:N, not only 1:N or M:1), since it
> really depends on what you build on top of this interface.

So what is the purpose of this interface?  When I inject a message, I
don't know which of the possibly many receivers is getting it.

I somewhat fail to understand the utility of more than one receiver.

 
> These are basic IPC concepts.  I would suggest Googling for POSIX
> message queues or just checking the Wikipedia page first.  We also have
> a pretty good mqueue(3) manual page.

And you really think I did not read the man pages and other docs before?
I am spending almost the whole day trying to find out more about posix
queues...





Re: posix message queues and multiple receivers

2013-11-26 Thread Mindaugas Rasiukevicius
Marc Balmer m...@msys.ch wrote:
> On 26.11.13 15:13, Mindaugas Rasiukevicius wrote:
>> Hi,
>>
>> The question is not really kernel related.  Possibly tech-userlevel@,
>> but it is not related to NetBSD per se, either.
>
> I asked here because it is implemented in the kernel and because what I
> see might well be a buglet (given that aio does not work as expected
> either).

No, this is exactly how a message queue works (whether it is POSIX, SysV
or some other implementation).

>> Why do you think it is meant to connect only two processes?  It is an
>> asynchronous inter-process communication mechanism; it is just a FIFO
>> queue of messages.  To expand on what Martin said, you can have multiple
>> producers and multiple consumers (M:N, not only 1:N or M:1), since it
>> really depends on what you build on top of this interface.
>
> So what is the purpose of this interface?  When I inject a message, I
> don't know which of the possibly many receivers is getting it.
>
> I somewhat fail to understand the utility of more than one receiver.
 

I think Martin already explained...

Imagine you just want to get a message, process it (parse, decrypt,
transform, perhaps consume multiple messages and aggregate them, or
whatever you want to do) and either pass it to the next component in the
system or store it somewhere (on disk, in memory, or in some database).
Instead of processing the messages in a single-threaded manner, you
can spawn multiple workers (receivers) and process them in parallel.
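
As a sketch of that pattern (hypothetical names: process_message()
stands in for whatever parsing or transformation you do, and /workq is
arbitrary), fork a few receivers which all block in mq_receive() on the
same queue; each message then wakes exactly one idle worker:

/* Worker pool over a POSIX message queue (illustrative sketch). */
#include <sys/types.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <unistd.h>

#define NWORKERS 4

static void
process_message(const char *msg, ssize_t len)
{
	printf("worker %ld handled %zd bytes: %s\n",
	    (long)getpid(), len, msg);
}

int
main(void)
{
	struct mq_attr attr = { .mq_maxmsg = 32, .mq_msgsize = 256 };
	mqd_t mq = mq_open("/workq", O_CREAT | O_RDONLY, 0600, &attr);
	char buf[256];
	int i;

	if (mq == (mqd_t)-1) {
		perror("mq_open");
		return 1;
	}
	/* The descriptor is inherited across fork. */
	for (i = 0; i < NWORKERS; i++) {
		if (fork() == 0) {
			for (;;) {
				ssize_t n = mq_receive(mq, buf,
				    sizeof(buf), NULL);
				if (n >= 0)
					process_message(buf, n);
			}
		}
	}
	pause();	/* a producer elsewhere does mq_send() on /workq */
	return 0;
}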

-- 
Mindaugas


Re: posix message queues and multiple receivers

2013-11-26 Thread Mouse
>> Why do you think it is meant to connect only two processes?  It is
>> [...] just a FIFO queue of messages.  [...]
> So what is the purpose of this interface?  When I inject a message,
> I don't know which of the possibly many receivers is getting it.

Right.  To rephrase that, when I make a request, I don't know which
worker process will service it.  But you also don't _care_ which.

Consider nfsd: it typically has multiple worker processes (four by
default); when a client sends a request, it doesn't know which worker
will handle it, nor does it care.  (It typically uses sockets, but they
are the same in this respect: each message goes to exactly one reader,
no matter how many readers are reading.)

> I somewhat fail to understand the utility of more than one receiver.

When serving a request takes nontrivial time, and multiple requests can
usefully be in progress at once, it is useful - it typically improves
performance - to have multiple workers serving requests.  NFS, as
mentioned above, is a fairly good example (in these respects).

/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTML   mo...@rodents-montreal.org
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B