[
https://issues.apache.org/jira/browse/FLINK-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Aleksandr Filichkin updated FLINK-9893:
---------------------------------------
Description:
The problem is that I cannot run a job in the IDE when it needs more than one task slot.
{quote}import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

public class StreamingJob {
    public static void main(String[] args) throws Exception {
        // set up the streaming execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties kafkaProperties = new Properties();
        kafkaProperties.setProperty("bootstrap.servers", "localhost:9092");
        kafkaProperties.setProperty("group.id", "test");

        env.setParallelism(1);

        DataStream<String> kafkaSource = env
                .addSource(new FlinkKafkaConsumer010<>("flink-source", new SimpleStringSchema(), kafkaProperties))
                .name("Kafka-Source")
                .slotSharingGroup("Kafka-Source");

        kafkaSource.print().slotSharingGroup("Print");

        env.execute("Flink Streaming Java API Skeleton");
    }
}{quote}
I know this job needs 2 slots and that I can have two task managers in a Flink cluster, but how can I run it locally in the IDE?
Currently I have to give every operator the same slotSharingGroup name locally so that they share one slot, but that is not flexible.
How do you handle it?
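A minimal sketch of one possible workaround (not from the original report): run the job in a local environment whose embedded task manager is configured with more than one slot, assuming the `TaskManagerOptions.NUM_TASK_SLOTS` config option and the `createLocalEnvironment(parallelism, configuration)` overload.

```java
// Hypothetical sketch: give the embedded (IDE) mini-cluster 2 task slots,
// one per slot sharing group, instead of merging all operators into one group.
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.TaskManagerOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalSlotsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // configure the local task manager with 2 slots
        conf.setInteger(TaskManagerOptions.NUM_TASK_SLOTS, 2);
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(1, conf);
        // ... build the job with its slotSharingGroup() calls as above ...
        env.execute("Local run with 2 slots");
    }
}
```

With this, the production code can keep `getExecutionEnvironment()` while an IDE entry point uses the configured local environment.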
> Cannot run Flink job in IDE when we have more than 1 taskslot
> -------------------------------------------------------------
>
> Key: FLINK-9893
> URL: https://issues.apache.org/jira/browse/FLINK-9893
> Project: Flink
> Issue Type: Wish
> Components: Streaming
> Affects Versions: 1.5.1
> Reporter: Aleksandr Filichkin
> Priority: Major
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)