Hi Ovidiu,

The way Flink works is to assign key group ranges to operators. For each
element a hash value is calculated from its key and, based on that value,
the element is assigned to a key group. Thus, in your example, you end up
with either one key group holding more than one key, or multiple key
groups (each with one or more keys) assigned to the same operator.

So what you could try is to reduce the number of key groups to your
parallelism via env.setMaxParallelism() and then find a key whose values
hash uniformly over the key groups. The key group assignment is calculated
via murmurHash(key.hashCode()) % maxParallelism.
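
To make that concrete, here is a minimal, self-contained sketch of that
assignment. Assume maxParallelism is the value you set via
env.setMaxParallelism(); the class and the murmur() helper are only
illustrative stand-ins for Flink's internal Murmur hash, not Flink API:

public class KeyGroupSketch {

    // stand-in for Flink's Murmur hash applied to the key's hashCode()
    static int murmur(int code) {
        code *= 0xcc9e2d51;
        code = Integer.rotateLeft(code, 15);
        code *= 0x1b873593;
        return code >>> 1; // keep the result non-negative for the modulo below
    }

    static int keyGroupFor(Object key, int maxParallelism) {
        return murmur(key.hashCode()) % maxParallelism;
    }

    public static void main(String[] args) {
        int maxParallelism = 4; // e.g. parallelism 4 with env.setMaxParallelism(4)
        for (String key : new String[] {"key-a", "key-b", "key-c", "key-d"}) {
            System.out.println(key + " -> key group " + keyGroupFor(key, maxParallelism));
        }
    }
}

If two of your keys land in the same key group, they are both processed by
the operator instance that owns that group, which is exactly the skew you
see in the "Records received" column.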

Alternatively, if you don’t need a keyed stream, you could try using a
custom partitioner via DataStream.partitionCustom.
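
A rough sketch of how that could look, assuming the key is the String in
field 0 of your Tuple8 stream and that you don’t need keyed state or keyed
windows downstream (the partitioning logic is only an example; replace it
with whatever mapping spreads your keys evenly):

import org.apache.flink.api.common.functions.Partitioner;

Partitioner<String> evenPartitioner = new Partitioner<String>() {
    @Override
    public int partition(String key, int numPartitions) {
        // spread keys over all available downstream channels
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
};

// `input` is the Tuple8 stream from your snippet; field 0 is the key
DataStream<Tuple8<String, String, String, Integer, String, Double, Long, Long>> partitioned =
    input.partitionCustom(evenPartitioner, 0);

Note that after partitionCustom the stream is no longer keyed, so the
countWindow from your snippet would not be directly available; you would
have to express the windowing differently.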

Cheers,
Till

On Mon, Feb 20, 2017 at 11:46 AM, Ovidiu-Cristian MARCU <
ovidiu-cristian.ma...@inria.fr> wrote:

> Hi,
>
> Can you please comment on how I can ensure that stream input records are
> distributed evenly onto task slots?
> See the attached screenshot showing the Records received issue.
>
> I have a simple application which applies a window function over a
> stream partitioned as follows:
> (parallelism is equal to the number of keys; records with the same key are
> streamed evenly)
>
> // get the execution environment
> final StreamExecutionEnvironment env = StreamExecutionEnvironment.
> getExecutionEnvironment();
> // get input data by connecting to the socket
> DataStream<String> text = env.socketTextStream("localhost", port, "\n");
> DataStream<Tuple8<String, String, String, Integer, String, Double, Long,
> Long>> input = text.flatMap(...);
> DataStream<Double> counts1 = null;
> counts1 = input.keyBy(0).countWindow(windowSize, slideSize)
> .apply(new WindowFunction<Tuple8<String, String, String, Integer, String,
> Double, Long, Long>, Double, Tuple, GlobalWindow>() {
> ...
> });
> counts1.writeAsText(params.get("output1"));
> env.execute("Socket Window WordCount");
>
> Best,
> Ovidiu
>
