Hi all,

I have a general question about how stream-processing frameworks/engines
usually behave in the following scenario:

Say I have a pipeline that consumes from a single Kafka partition, so my
initial (optimal) parallelism is 1 as well.

For any downstream computation, is it common for stream processors to
"fan out/parallelise" the stream by shuffling the data into more
streams/partitions/bundles?
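To make the question concrete, here's a toy sketch in plain Python (hypothetical names, no particular framework) of what I mean by shuffling one upstream partition into N downstream partitions by key hash:

```python
from collections import defaultdict

def repartition(records, num_partitions):
    """Fan out records from one upstream partition into
    num_partitions downstream partitions, keyed by hash so
    that the same key always lands in the same partition."""
    partitions = defaultdict(list)
    for key, value in records:
        partitions[hash(key) % num_partitions].append((key, value))
    return partitions

# One Kafka partition's worth of (key, value) records,
# fanned out to 4 downstream partitions for parallel processing.
records = [("a", 1), ("b", 2), ("a", 3)]
parts = repartition(records, 4)
```

The question is essentially whether engines do something like this automatically (or via an explicit operator) when the downstream parallelism is higher than the source's partition count.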

Thanks,
Amit
