I have a question. Given the following code snippet:
------
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", kafkaAddr);
properties.setProperty("group.id", kafkaOdsGroup);
properties.setProperty("auto.offset.reset", kafkaOdsAutoOffsetReset);
properties.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, kafkaOdsPartitionDiscoverInterval);
properties.setProperty("transaction.timeout.ms", KafkaOdsTransactionTimeout); // Kafka transaction timeout
FlinkKafkaConsumer<String> flinkKafkaConsumer =
        new FlinkKafkaConsumer<>(kafkaOdsTopic, new SimpleStringSchema(), properties);
DataStreamSource<String> dataStreamSource = env.addSource(flinkKafkaConsumer);
dataStreamSource.printToErr("1");
dataStreamSource.printToErr("2");
dataStreamSource.printToErr("3");
----------------
I attach several identical sinks to the same DataStream. Will this cause the entire upstream DAG to be driven and executed once per sink? Coming from a Spark mindset, my assumption is that the upstream DAG would be re-executed for each sink. Is that correct?
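To make the question concrete, here is a plain-Java sketch (no Flink APIs, names are hypothetical) of the fan-out semantics I am asking about: the source emits each record once and pushes it to every sink, as opposed to the source being re-read once per sink.

```java
import java.util.List;
import java.util.function.Consumer;

public class FanOutSketch {
    // Drives the "source" once, pushing each record to every sink.
    // Returns how many times a record was read from the source.
    static int runFanOut(List<String> records, List<Consumer<String>> sinks) {
        int sourceReads = 0;
        for (String r : records) {
            sourceReads++;                  // one read per record, regardless of sink count
            for (Consumer<String> sink : sinks) {
                sink.accept(r);             // fan out the same record to each sink
            }
        }
        return sourceReads;
    }

    public static void main(String[] args) {
        // Three sinks, analogous to the three printToErr() calls above.
        List<Consumer<String>> sinks = List.of(
                r -> System.err.println("1> " + r),
                r -> System.err.println("2> " + r),
                r -> System.err.println("3> " + r));
        int reads = runFanOut(List.of("a", "b", "c"), sinks);
        // With fan-out semantics this is 3 (one per record), not 9 (records x sinks).
        System.out.println("source reads = " + reads);
    }
}
```

Under the Spark-style assumption, the equivalent count would instead be records multiplied by sinks, which is the behavior I want to rule out or confirm for Flink.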
--
Sent from: http://apache-flink.147419.n8.nabble.com/