[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-04-28 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17094303#comment-17094303
 ] 

Till Rohrmann commented on FLINK-15527:
---

Agreed. This seems to be a duplicate of FLINK-16605. Closing this ticket now.

> can not control the number of container on yarn single job module
> -
>
> Key: FLINK-15527
> URL: https://issues.apache.org/jira/browse/FLINK-15527
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.10.0
>Reporter: chenchencc
>Priority: Major
>  Labels: usability
> Fix For: 1.11.0
>
> Attachments: flink-conf.yaml, image-2020-01-09-14-30-46-973.png, 
> yarn_application.png
>
>
> When running a single job on YARN, many containers are launched even though parallelism is set to 4.
> *scripts:*
> ./bin/flink run -m yarn-cluster -ys 3 -p 4 -yjm 1024m -ytm 4096m -yqu bi -c 
> com.cc.test.HiveTest2 ./cc_jars/hive-1.0-SNAPSHOT.jar 11.txt test61 6
> _notes_: 1.9.1 had the CLI parameter -yn to control the number of containers; 
> it was removed in 1.10.
> *result:*
> the number of containers is 500+
>  
> *code use:*
> query the table and save it to HDFS as text
>  
> the table's storage size is 200 GB+
>  
> *code:*
> com.cc.test.HiveTest2
> public static void main(String[] args) throws Exception {
>     EnvironmentSettings settings = EnvironmentSettings.newInstance()
>         .useBlinkPlanner().inStreamingMode().build();
>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>     env.setParallelism(Integer.valueOf(args[2]));
>     StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);
>     String name = "myhive";
>     String defaultDatabase = "test";
>     String hiveConfDir = "/etc/hive/conf";
>     String version = "1.2.1"; // or 2.3.4
>     HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version);
>     tableEnv.registerCatalog("myhive", hive);
>     tableEnv.useCatalog("myhive");
>     tableEnv.listTables();
>     Table table = tableEnv.sqlQuery(
>         "select id from orderparent_test2 where id = 'A21204170176'");
>     tableEnv.toAppendStream(table, Row.class).print();
>     tableEnv.toAppendStream(table, Row.class)
>         .writeAsText("hdfs:///user/chenchao1/" + args[0], FileSystem.WriteMode.OVERWRITE);
>     tableEnv.execute(args[1]);
> }
>  
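In 1.10 the number of requested containers follows from the maximum operator parallelism and the `-ys` slots-per-TaskManager setting, not from `-p` alone. A minimal sketch of that arithmetic (the 1500 figure is a hypothetical inferred source parallelism used for illustration, not a number from this report):

```java
// Sketch: how many YARN containers (TaskManagers) Flink 1.10 requests.
// Containers ~= ceil(maxParallelism / slotsPerTaskManager); a Hive source
// may infer a parallelism far above the -p value passed on the CLI.
public class ContainerEstimate {
    static int containers(int maxParallelism, int slotsPerTaskManager) {
        // integer ceiling division
        return (maxParallelism + slotsPerTaskManager - 1) / slotsPerTaskManager;
    }

    public static void main(String[] args) {
        // With -p 4 -ys 3 the user expects ceil(4/3) = 2 TaskManagers.
        System.out.println(containers(4, 3));    // 2
        // A source inferring ~1500 parallelism (hypothetical) would explain 500 containers.
        System.out.println(containers(1500, 3)); // 500
    }
}
```

This is why capping the inferred source parallelism (or the total resources), rather than `-p`, is what bounds the container count.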



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-04-28 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17094248#comment-17094248
 ] 

Xintong Song commented on FLINK-15527:
--

Hi all,
I believe this issue should be solved by FLINK-16605, which is very close to 
being finished. It would be appreciated if you could help confirm that. If 
there are no objections, I'll close this ticket.
Thanks.



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-02-03 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17029040#comment-17029040
 ] 

Jingsong Lee commented on FLINK-15527:
--

I labeled it usability because I found that the typical way of running batch 
jobs is to limit the maximum number of processes while using high parallelism.



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-12 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17014009#comment-17014009
 ] 

Jingsong Lee commented on FLINK-15527:
--

Hi [~xintongsong]

You are right about using a yarn queue with a limited resource quota. It is an 
inconvenient solution, though.

At present, our default behavior is batch (staged) execution rather than 
pipelined execution, which has many advantages and does not hurt performance. 
Higher parallelism means that the granularity of fault tolerance can be 
smaller. The typical mode for a batch job is limited resources with large 
parallelism: the parallel tasks run batch by batch. So, generally speaking, 
the configured parallelism has little to do with the currently available 
resources.



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17014006#comment-17014006
 ] 

Xintong Song commented on FLINK-15527:
--

{quote}> Hive setting too high job parallelism

It is not a problem, that's how it should be. This is not a bug.

I agree that we can introduce max containers in 1.11.

For 1.10, users can control their Hive source with 
"table.exec.hive.infer-source-parallelism.max" to limit the maximum 
parallelism. But this does not work for complex jobs: if a job has many 
stages, it is hard for users to control the total parallelism in the SQL 
world.
{quote}
Just trying to understand the problem. If this is not a bug, you mean the job 
is supposed to have a very high parallelism, but these parallel tasks do not 
all need to run at the same time. The user would prefer to use a smaller set 
of yarn cluster resources to run these tasks in a pipelined (one after 
another) manner. Am I understanding it correctly? [~lzljs3620320] 

If that is indeed the case, one can also try to limit the job's resource 
consumption by configuring a dedicated queue with a limited resource quota for 
the job. Of course, that would be much more inconvenient compared to 
controlling it with Flink directly, and may not always be doable (say, if the 
Flink user does not have admin access to the yarn cluster). I'm just trying to 
provide an idea of a potential workaround for Flink 1.10 as it is. 



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-12 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17014002#comment-17014002
 ] 

Jingsong Lee commented on FLINK-15527:
--

> Hive setting too high job parallelism

It is not a problem, that's how it should be. This is not a bug.

I agree that we can introduce max containers in 1.11.

For 1.10, users can control their Hive source with 
"table.exec.hive.infer-source-parallelism.max" to limit the maximum 
parallelism. But this does not work for complex jobs: if a job has many 
stages, it is hard for users to control the total parallelism in the SQL 
world.
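A sketch of that workaround in the Table API, assuming an existing `StreamTableEnvironment` (`tableEnv`, as in the job above) with the Hive catalog registered; the option key is the one quoted in this comment:

```java
// Sketch (Flink 1.10): cap the parallelism the Hive source may infer,
// so the job does not request hundreds of containers. The value 4 here
// mirrors the -p 4 the reporter passed on the CLI.
tableEnv.getConfig().getConfiguration()
    .setInteger("table.exec.hive.infer-source-parallelism.max", 4);
```

As noted, this bounds only the source parallelism; downstream stages of a complex job are not covered by it.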



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-12 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17013999#comment-17013999
 ] 

Yangze Guo commented on FLINK-15527:


I'm not quite familiar with the contract of yarn session mode in earlier 
versions, but it seems to have changed. You could follow [~fly_in_gis]'s 
[reply|https://issues.apache.org/jira/browse/FLINK-15527?focusedCommentId=17011456&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17011456].

+1 to provide a mechanism that lets users control the resource usage of the 
Flink cluster. Regarding the solution, two proposals have emerged in this 
thread:
- limit the number of containers (or total resources) through configuration
- implement a reactive resource manager

Also agreed with [~xintongsong]: this feature should not be added to release-1.10.



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17014000#comment-17014000
 ] 

Xintong Song commented on FLINK-15527:
--

I would suggest we use this ticket to track and fix the problem of Hive 
setting too high a job parallelism (in 1.10.0, I assume), and create another 
ticket for controlling the total resources of the Flink cluster, to be 
improved in the next release. WDYT?



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17013997#comment-17013997
 ] 

Xintong Song commented on FLINK-15527:
--

Thanks for involving me, [~lzljs3620320].

If I understand correctly, for this specific case, the problem is that the 
Hive component overlooked the user's configuration '-p 4' and set the job to 
too high a parallelism.

I also agree that it would be helpful to control the number of task executors 
without relying on the parallelism, e.g., something like [~trohrmann] 
mentioned. I think at least for batch jobs this approach will allow us to 
pipeline the tasks and run them with far fewer slots. However, I'm not sure 
whether this should be fixed in 1.10.0, since it sounds like a feature 
improvement rather than a bug fix to me, and we are far beyond the feature 
freeze.



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-12 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17013989#comment-17013989
 ] 

Kurt Young commented on FLINK-15527:


IIRC, yarn session mode is just a standalone cluster automatically deployed 
by yarn; has this been changed? IMO this mode is very useful for users: they 
can easily control the max resource usage of the Flink cluster. If a user 
deploys only one job into this cluster, it can also simulate controlling the 
max resource usage of a single job. 



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-12 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17013986#comment-17013986
 ] 

Jingsong Lee commented on FLINK-15527:
--

CC: [~xintongsong]



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-12 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17013983#comment-17013983
 ] 

Yangze Guo commented on FLINK-15527:


[~ykt836] I think this approach won't solve the problem, since we now use the 
same "active" resource manager in both session mode and per-job mode. We'd 
better add some primitives for container limitation or introduce a reactive 
resource manager.



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-12 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17013963#comment-17013963
 ] 

Kurt Young commented on FLINK-15527:


[~chenchencc] can you use yarn session mode to achieve what you need? To me, 
yarn per-job mode is somewhat like MapReduce: we may launch as many 
containers as needed after inferring the job parallelism. With session mode, you 
can strictly control the number of TMs, aka containers, and no matter how high 
the parallelism is for your source operator, the tasks will be finished step by 
step.
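
A minimal sketch of the session-mode workflow described above, reusing the flags from the reporter's own command; the concrete TaskManager count is an arbitrary example, and -n is only accepted by yarn-session.sh up to Flink 1.9:

```shell
# Start a long-running session cluster on YARN with a bounded footprint.
# -n (number of TaskManagers) exists up to Flink 1.9; from 1.10 on,
# session mode allocates TaskManagers on demand instead.
./bin/yarn-session.sh -n 10 -s 3 -jm 1024m -tm 4096m -qu bi -d

# Submit the job against the running session (note: no -m yarn-cluster):
./bin/flink run -p 4 -c com.cc.test.HiveTest2 ./cc_jars/hive-1.0-SNAPSHOT.jar 11.txt test61 6
```

With a fixed pool of slots, source subtasks beyond the pool's capacity simply wait for free slots rather than triggering new container requests.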



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-12 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17013959#comment-17013959
 ] 

Jingsong Lee commented on FLINK-15527:
--

Hi [~trohrmann], thanks for getting involved. I have provided 
"table.exec.hive.infer-source-parallelism.max" to limit the max parallelism of the 
hive source. But it still does not satisfy the user's requirement. [~chenchencc]



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-10 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17013086#comment-17013086
 ] 

Till Rohrmann commented on FLINK-15527:
---

Isn't the problem that the Hive component does some parallelism inference which 
ends up setting the source parallelism to something quite large? The runtime 
will just do what the JobGraph tells it to do.

I agree that it could be helpful to limit the number of containers when running 
Flink with an active RM. I imagine something like {{--min-task-executors=10 
--max-task-executors=20}}, for example. But this would not have solved the problem, 
since Flink wouldn't be able to run a job with a parallelism exceeding {{20 * 
number_of_slots}}.



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-09 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011726#comment-17011726
 ] 

Yang Wang commented on FLINK-15527:
---

[~lzljs3620320] Currently, {{YarnResourceManager}} is an 
{{ActiveResourceManager}}, so running Flink on Yarn is always the active mode. 
I think what you want is reactive mode: we only have limited resources (a fixed 
container number), and the batch job will run stage by stage.

Just limiting the max resources or container number of the 
{{ActiveResourceManager}} could work. However, I think it is a hack. We need to 
revive the ticket 
[FLINK-10407|https://issues.apache.org/jira/browse/FLINK-10407]; Yarn could 
also be supported in reactive mode.

[~trohrmann] What do you think of this? 



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-09 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011609#comment-17011609
 ] 

Jingsong Lee commented on FLINK-15527:
--

Hi [~trohrmann], but what users really want is to control the max resources of the 
yarn application. That would be a better way to solve this problem. What do you 
think?

(The hive parameter is just a workaround.)



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-09 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011596#comment-17011596
 ] 

Till Rohrmann commented on FLINK-15527:
---

Changed the affected components as this seems to be a problem with Flink's Hive 
component and not its Yarn integration.



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-08 Thread chenchencc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011509#comment-17011509
 ] 

chenchencc commented on FLINK-15527:


I ran yarn-session.sh --help just now and didn't find the -n option. I think Flink 
should control the max number of containers. The 
table.exec.hive.infer-source-parallelism.max parameter is useful for the hive 
connector.



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-08 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011489#comment-17011489
 ] 

Jingsong Lee commented on FLINK-15527:
--

[~chenchencc] Yes, you can, that is the same.



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-08 Thread chenchencc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011486#comment-17011486
 ] 

chenchencc commented on FLINK-15527:


Hi [~lzljs3620320], thanks for your answer. Can I start a yarn session to do it 
for version 1.10?



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-08 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011482#comment-17011482
 ] 

Jingsong Lee commented on FLINK-15527:
--

Hi [~chenchencc], you can add the config option 
"table.exec.hive.infer-source-parallelism.max" to "flink-conf.yaml" to limit 
source parallelism. (This config option is an experimental feature.)
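
For reference, a sketch of what that entry could look like in flink-conf.yaml; the cap of 4 is an arbitrary example chosen to match the -p 4 used in this ticket:

```yaml
# flink-conf.yaml
# Cap the parallelism Flink infers for Hive table sources (experimental).
table.exec.hive.infer-source-parallelism.max: 4
```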



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-08 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011479#comment-17011479
 ] 

Jingsong Lee commented on FLINK-15527:
--

Hi [~fly_in_gis], can we add back `-yn` to limit the max resources of a yarn app? 
I've had this requirement from users several times.



[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-08 Thread chenchencc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011478#comment-17011478
 ] 

chenchencc commented on FLINK-15527:


But I don't want to use so much parallelism. I accept using less parallelism 
and taking more time.






[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-08 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011474#comment-17011474
 ] 

Kurt Young commented on FLINK-15527:


According to your log:

2020-01-09 14:05:03,775 INFO org.apache.hadoop.mapred.FileInputFormat - Total 
input paths to process : 1343

 

So Flink set the source operator's parallelism to 1000; that's why you have 
hundreds of containers.
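
The numbers in the log line up with some simple arithmetic. The sketch below is an illustration, not code from this thread: the cap of 1000 is an assumption about the blink planner's default limit on inferred source parallelism, and the formula for container count assumes one slot per parallel subtask with `-ys` slots per TaskManager.

```java
// Sketch of the arithmetic behind the observed container count.
// Assumption: the planner infers one source subtask per input split,
// capped at a configurable maximum (1000 by default).
public class ContainerMath {

    // Inferred source parallelism: one subtask per input split, capped.
    static int inferredSourceParallelism(int inputSplits, int cap) {
        return Math.min(inputSplits, cap);
    }

    // TaskManager containers needed: ceil(parallelism / slotsPerTm).
    static int containersNeeded(int parallelism, int slotsPerTm) {
        return (parallelism + slotsPerTm - 1) / slotsPerTm;
    }

    public static void main(String[] args) {
        int p = inferredSourceParallelism(1343, 1000); // 1343 input paths -> 1000
        int containers = containersNeeded(p, 3);       // -ys 3 -> 334 containers
        System.out.println(p + " " + containers);
    }
}
```

With 1343 input paths and `-ys 3`, this gives 1000 source subtasks and roughly 334 TaskManager containers, consistent with the "500+" the reporter observed once other operators are counted.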






[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-08 Thread chenchencc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011469#comment-17011469
 ] 

chenchencc commented on FLINK-15527:


Because of company security restrictions, it is difficult for me to access the 
Flink web UI.






[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-08 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011456#comment-17011456
 ] 

Yang Wang commented on FLINK-15527:
---

[~chenchencc] Even when `-yn` existed in the CLI options, it could not take 
effect: since FLIP-6, Flink's {{YarnResourceManager}} always allocates 
TaskManagers dynamically on demand.

From 1.10 on, the YARN per-job mode has been a real per-job mode. Please refer 
to 
[FLIP-82|https://cwiki.apache.org/confluence/display/FLINK/FLIP-82%3A+Use+real+per-job+mode+for+YARN+per-job+attached+execution]
 for more information. Before 1.10, a YARN session was used to simulate 
per-job in attached mode. This is the major change in the YARN per-job mode.

Given these changes, Flink should not leak containers. Could you share your 
JobManager logs so that we can find the root cause? And could you please check 
whether all the slots on the 500+ containers are in use?
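
Since TaskManagers are allocated on demand for whatever parallelism the planner infers, one way to bound the container count is to bound the inferred source parallelism from the user program. The sketch below is untested and the option names (`table.exec.resource.infer-source-parallelism` and its `.max` companion) are assumptions about the blink planner's configuration keys; verify them against your Flink version before relying on this.

```java
// Sketch (assumed option names, verify before use): cap the blink
// planner's inferred source parallelism so the job requests far fewer
// TaskManager containers than one per input split.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class CappedParallelism {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(4);
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner().inStreamingMode().build();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);

        // Either disable split-based inference entirely...
        tableEnv.getConfig().getConfiguration()
                .setBoolean("table.exec.resource.infer-source-parallelism", false);
        // ...or keep it but cap the inferred value.
        tableEnv.getConfig().getConfiguration()
                .setInteger("table.exec.resource.infer-source-parallelism.max", 4);

        // Register the Hive catalog and run the query as in the reporter's code.
    }
}
```

With the source capped at the job parallelism, the on-demand {{YarnResourceManager}} should only request enough containers for 4 slots rather than hundreds.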






[jira] [Commented] (FLINK-15527) can not control the number of container on yarn single job module

2020-01-08 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011453#comment-17011453
 ] 

Kurt Young commented on FLINK-15527:


Hi, have you started the job successfully? Could you check what the real 
parallelism of your job is?



