James,

The way I look at it (abstractly speaking) is that the slider represents
how long a processor will be able to use a thread to work on flowfiles
(from its inbound queue, allowing onTrigger to run more times to generate
more outbound flowfiles, etc.).  Moving that slider toward higher
throughput, the processor will do more work per thread grant, but will
hold that thread for a longer period of time before another processor can
use it.  So, overall latency could go up, because flowfiles may sit in
other queues for longer periods of time before another processor gets a
thread to start doing work, but that particular processor will likely see
higher throughput.
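To make the tradeoff concrete, here is a small sketch (not NiFi internals; all names and costs are illustrative assumptions) that models the slider as a batch size: how many flowfiles one processor may work through per thread grant, with a fixed per-grant overhead standing in for scheduling and session-commit cost.

```python
# Hypothetical model, not NiFi code: two processors share one thread
# round-robin; `batch_size` plays the role of the latency/throughput
# slider, `grant_overhead` the fixed cost of each thread grant.
def simulate(batch_size, n_items=100, item_cost=1, grant_overhead=1):
    """Returns (throughput, first_wait): items completed per time unit,
    and how long processor B's first flowfile waits before B ever gets
    the thread."""
    remaining = [n_items, n_items]   # inbound queue depth per processor
    clock = 0
    first_wait = None
    proc = 0
    while sum(remaining) > 0:
        clock += grant_overhead                  # cost of this thread grant
        work = min(batch_size, remaining[proc])
        clock += work * item_cost                # thread held for the batch
        remaining[proc] -= work
        if proc == 0 and first_wait is None:
            first_wait = clock                   # B waited this long to start
        proc = 1 - proc
    return (2 * n_items) / clock, first_wait

for batch in (1, 10, 100):
    tp, wait = simulate(batch)
    print(f"batch={batch:3d}  throughput={tp:.2f} items/unit  "
          f"first wait for B={wait}")
```

Larger batches amortize the per-grant overhead, so throughput climbs, while the other processor's flowfiles wait longer for their first turn at the thread, which is exactly the latency side of the slider.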

That's in pretty general terms, though.

On Fri, Apr 7, 2017 at 9:49 AM James McMahon <[email protected]> wrote:

> I see that some processors provide a slider to set a balance between
> Latency and Throughput. Not all processors provide this, but some do. They
> seem to be inversely related.
>
> I also notice that the default appears to be Lower latency, implying also
> lower throughput. Why is that the default? I would think that being a
> workflow, maximizing throughput would be the ultimate goal. Yet it seems
> that the processors opt for defaults to lowest latency, lowest throughput.
>
> What is the relationship between Latency and Throughput? Do most folks in
> the user group typically go in and change that to Highest on throughput? Is
> that something to avoid because of demands on CPU, RAM, and disk IO?
>
> Thanks very much. -Jim
>
