Hey All,

We started measuring the latency we can provide with our streaming
architecture and we stumbled upon some interesting measurements.

It seems that we can control the output buffers well: if we just
generate a sequence of numbers, the outputs get flushed well under 0.5 ms.
This would be fine and is also what we expected.

The problem is that no matter how fast the outputs flush, there is a
huge latency generated at the receiving task (about 250 ms). We suspect
that the input buffer must somehow be much larger than the output
buffer.
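To illustrate the suspected mechanism, here is a minimal toy model (not our actual setup; all names and parameters are hypothetical): a consumer that only drains its input buffer once a fixed number of records have accumulated will see an average per-record latency that grows with the buffer size, even if the producer flushes each record immediately.

```python
import queue
import threading
import time


def run_pipeline(input_batch_size, n_records=200, produce_interval=0.001):
    """Toy source -> task pipeline. The task waits until
    `input_batch_size` records have accumulated in its input buffer
    before processing them, mimicking a large receive-side buffer.
    Returns the average per-record latency in seconds."""
    buf = queue.Queue()
    latencies = []

    def source():
        # Emit records one by one, each stamped with its emit time,
        # i.e. the output side "flushes" immediately.
        for _ in range(n_records):
            buf.put(time.monotonic())
            time.sleep(produce_interval)

    def task():
        received = 0
        while received < n_records:
            # Block until a full batch is available, then process it.
            batch = [buf.get()]
            while len(batch) < input_batch_size:
                batch.append(buf.get())
            now = time.monotonic()
            latencies.extend(now - t for t in batch)
            received += len(batch)

    producer = threading.Thread(target=source)
    consumer = threading.Thread(target=task)
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()
    return sum(latencies) / len(latencies)
```

With a batch size of 1 the per-record latency stays tiny, while a batch size of 50 makes each record wait, on average, for half a batch worth of production time, which is roughly the asymmetry we are seeing between the fast output flush and the slow receiving side.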

What's even more interesting is that if we don't use a Task vertex, only a
source and a sink, we don't experience the same issue and the whole latency
is about 0.6 ms. Adding a simple forwarding task between the source and the
sink makes this 250 ms, and the latency is generated somewhere between the
Source and the Map. (From the Map to the Sink it's fast again.)

Does anyone know why this could happen and how we can solve the issue?

Regards,
Gyula
