I see that some processors provide a slider to set a balance between
Latency and Throughput; not all processors offer it, but some do. The two
seem to be inversely related.

I also notice that the default appears to be lowest latency, which implies
lower throughput. Why is that the default? Since this is a workflow, I
would think maximizing throughput would be the ultimate goal, yet the
processors seem to default to lowest latency and therefore lowest
throughput.

What is the relationship between Latency and Throughput? Do most folks in
the user group typically go in and change that setting to highest
throughput? Or is that something to avoid because of the demands it places
on CPU, RAM, and disk I/O?

Thanks very much. -Jim
