[GitHub] [flink] flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create table doesn't allow STRING data …

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9354: [FLINK-13568][sql-parser] DDL create 
table doesn't allow STRING data …
URL: https://github.com/apache/flink/pull/9354#issuecomment-518083679
 
 
   ## CI report:
   
   * 754c52de984cb476ae0442c6704219b64c68441e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121903039)
   * af75fff40f4e9e57bd09403741ff1a7c63285941 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122105364)
   * 8eddadbdb9543c7a42cdba7c1ebe938934671e28 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122210862)
   * 81f5ee77a0e7bb83ce0a2b2447e45a6c364d69ea : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/13853)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13590) flink-on-yarn sometimes could create many little files that are xxx-taskmanager-conf.yaml

2019-08-07 Thread Yang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901791#comment-16901791
 ] 

Yang Wang commented on FLINK-13590:
---

Hi [~shu_wen...@qq.com]

Thanks for sharing this issue. I think it is because a file named 
\{uuid}-taskmanager-conf.yaml is created for each task manager when 
launching a Yarn container, and it is only cleaned up when the Yarn 
application finishes. So the number of conf files keeps growing after every task 
manager failover. This could be optimized by uploading only one 
taskmanager-conf.yaml and overriding the differing config options through the task 
manager environment. That optimization would also reduce the launch time of the 
task manager.

Do you want to create a PR to fix this problem? Or I could take over this 
ticket.
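The single-shared-file approach could be sketched roughly like this (illustrative Python only, not Flink's actual code; the FLINK_TM_CONF_ environment-variable convention is made up for the example):

```python
FLINK_TM_CONF_PREFIX = "FLINK_TM_CONF_"  # hypothetical env-var prefix

def effective_tm_config(base_conf, environ):
    """Merge one shared taskmanager-conf.yaml (base_conf) with per-container
    overrides passed via the container environment, instead of uploading a
    distinct {uuid}-taskmanager-conf.yaml for every task manager."""
    conf = dict(base_conf)
    for key, value in environ.items():
        if key.startswith(FLINK_TM_CONF_PREFIX):
            # e.g. FLINK_TM_CONF_TASKMANAGER_HOST -> taskmanager.host
            conf_key = key[len(FLINK_TM_CONF_PREFIX):].lower().replace("_", ".")
            conf[conf_key] = value
    return conf

base = {"taskmanager.heap.size": "1024m", "taskmanager.host": "placeholder"}
env = {"FLINK_TM_CONF_TASKMANAGER_HOST": "node-17"}
print(effective_tm_config(base, env)["taskmanager.host"])  # node-17
```

With this scheme only the shared base file lives in the Yarn staging directory, so the file count no longer grows with task manager failovers.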

> flink-on-yarn sometimes could create many little files that are 
> xxx-taskmanager-conf.yaml
> -
>
> Key: FLINK-13590
> URL: https://issues.apache.org/jira/browse/FLINK-13590
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: shuwenjun
>Priority: Major
> Attachments: taskmanager-conf-yaml.png
>
>
> Both 1.7.2 and 1.8.0 are in use, and both can create many little files.
>  These files are the configuration files of the task managers; whenever the flink 
> session tries to request a new container, one of these files is created. I 
> don't know why the flink session sometimes requests containers again and again. 
> Also, when a container is lost, could Flink delete its taskmanager-conf.yaml? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-9166) Performance issue with many topologies in a single job

2019-08-07 Thread pj (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901792#comment-16901792
 ] 

pj commented on FLINK-9166:
---

[~ssu...@gmail.com] I don't understand this statement "(b) use slot-sharing 
groups. 
The slot sharing groups are inherited to subsequent operators. So if you set 
them before the table api / sql query is defined, you should get the expected 
result.
If you are unable to set the slot sharing group, you could maybe add a no-op 
MapOperator before the query, so that you can set the group."

Could you please share some sample code for this? I have encountered the same 
problem; we need to run about 200 SQL queries in one application.
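For reference, a minimal sketch of what that suggestion might look like (untested, against the 1.4-era Java Table API; `consumer` and `queries` are assumed to exist, and the group names are made up):

```java
// Assumption-heavy sketch: set a slot sharing group on a no-op identity map
// before each query, so the operators generated for that query inherit it.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
DataStream<Row> source = env.addSource(consumer);

for (int i = 0; i < queries.size(); i++) {
    DataStream<Row> scoped = source
        .map(row -> row)                  // no-op map, only a carrier for the group
        .returns(source.getType())        // keep Row type info despite the lambda
        .slotSharingGroup("query-" + i);  // one group per query (hypothetical name)

    String table = "structStream" + i;
    tableEnv.registerDataStream(table, scoped);
    Table result = tableEnv.sqlQuery(queries.get(i).replace("structStream", table));
    tableEnv.toAppendStream(result, Row.class).print();
}
env.execute("many-queries");
```

The idea is that each query's operators end up in their own slot sharing group instead of all queries competing for the same slots.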

> Performance issue with many topologies in a single job
> --
>
> Key: FLINK-9166
> URL: https://issues.apache.org/jira/browse/FLINK-9166
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.4.2
>Reporter: SUBRAMANYA SURESH
>Priority: Major
>  Labels: flink, graph, performance, sql, yarn
>
> With a high number of Flink SQL queries (100 of below), the Flink command 
> line client fails with a "JobManager did not respond within 60 ms" on a 
> Yarn cluster.
>  * JobManager logs has nothing after the last TaskManager started except 
> DEBUG logs with "job with ID 5cd95f89ed7a66ec44f2d19eca0592f7 not found in 
> JobManager", indicating it's likely stuck (creating the ExecutionGraph?).
>  * The same works as standalone java program locally (high CPU initially)
>  * Note: Each Row in structStream contains 515 columns (many end up null) 
> including a column that has the raw message.
>  * In the YARN cluster we specify 18GB for TaskManager, 18GB for the 
> JobManager, 145 TaskManagers with 5 slots each and parallelism of 725 
> (partitions in our Kafka source).
> *Query:*
> {code:java}
>  select count (*), 'idnumber' as criteria, Environment, CollectedTimestamp, 
> EventTimestamp, RawMsg, Source 
>  from structStream
>  where Environment='MyEnvironment' and Rule='MyRule' and LogType='MyLogType' 
> and Outcome='Success'
>  group by tumble(proctime, INTERVAL '1' SECOND), Environment, 
> CollectedTimestamp, EventTimestamp, RawMsg, Source
> {code}
> *Code:*
> {code:java}
> public static void main(String[] args) throws Exception {
>   FileSystems.newFileSystem(KafkaReadingStreamingJob.class.getResource(WHITELIST_CSV).toURI(), new HashMap<>());
>   final StreamExecutionEnvironment streamingEnvironment = getStreamExecutionEnvironment();
>   final StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(streamingEnvironment);
>   final DataStream structStream = getKafkaStreamOfRows(streamingEnvironment);
>   tableEnv.registerDataStream("structStream", structStream);
>   tableEnv.scan("structStream").printSchema();
>   for (int i = 0; i < 100; i++) {
>     for (String query : Queries.sample) {
>       // Queries.sample has one query that is above.
>       Table selectQuery = tableEnv.sqlQuery(query);
>       DataStream selectQueryStream = tableEnv.toAppendStream(selectQuery, Row.class);
>       selectQueryStream.print();
>     }
>   }
>   // execute program
>   streamingEnvironment.execute("Kafka Streaming SQL");
> }
>
> private static DataStream getKafkaStreamOfRows(StreamExecutionEnvironment environment) throws Exception {
>   Properties properties = getKafkaProperties();
>   // TestDeserializer deserializes the JSON to a ROW of string columns (515)
>   // and also adds a column for the raw message.
>   FlinkKafkaConsumer011 consumer = new FlinkKafkaConsumer011(KAFKA_TOPIC_TO_CONSUME, new TestDeserializer(getRowTypeInfo()), properties);
>   DataStream stream = environment.addSource(consumer);
>   return stream;
> }
>
> private static RowTypeInfo getRowTypeInfo() throws Exception {
>   // This has 515 fields.
>   List fieldNames = DDIManager.getDDIFieldNames();
>   fieldNames.add("rawkafka"); // rawMessage added by TestDeserializer
>   fieldNames.add("proctime");
>   // Fill typeInformationArray with StringType for all but the last field, which is of type Time
>   .
>   return new RowTypeInfo(typeInformationArray, fieldNamesArray);
> }
>
> private static StreamExecutionEnvironment getStreamExecutionEnvironment() throws IOException {
>   final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>   env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
>   env.enableCheckpointing(6);
>   env.setStateBackend(new FsStateBackend(CHECKPOINT_DIR));
>   env.setParallelism(725);
>   return env;
> }
> {code}





[GitHub] [flink] docete commented on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
docete commented on issue #9331: [FLINK-13523][table-planner-blink] Verify and 
correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-518972270
 
 
   Passed pre-commit test on my travis: 
https://travis-ci.com/docete/flink/builds/122209856




[GitHub] [flink] wuchong opened a new pull request #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
wuchong opened a new pull request #9377: [FLINK-13561][table-planner-blink] 
Verify and correct time function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9377
 
 
   
   
   ## What is the purpose of the change
   
   Fix behavior of several datetime builtin functions in blink planner.
   
   ## Brief change log
   
   - af3db19573d91acae9cc15e2186680b49b659fdd  Fix NOW() should return 
TIMESTAMP instead of BIGINT
 - This aligns the behavior to other systems (MySQL, Spark), because NOW() 
is a synonym for CURRENT_TIMESTAMP.
   
   - d3af9e280d705e51c1f3bc16908e5650c7a639ea  Fix UNIX_TIMESTAMP(string 
[,format]) should work in session time zone
 - This aligns the behavior to other systems (MySQL, Spark). 
UNIX_TIMESTAMP(string [,format]) is an inverse of FROM_UNIXTIME(bigint 
[,format]). We also remove the support of UNIX_TIMESTAMP(timestamp) in this 
commit.
   
   - 5939a990cd86fc16141a39587e565988734162bc  Fix FROM_UNIXTIME(bigint 
[,format]) should work in session time zone
 - This aligns the behavior to other systems (MySQL, Spark).
   
   - 584df370a4feb34b1ef1bdb4a81b60809dad7128  Drop TO_DATE(int) function 
support
 - This commit drops TO_DATE(int) function support in blink planner to 
align with other systems. We only support TO_DATE(string [,format]) in this 
version.
   
   - 1a590b6d9c792b136a109cc5f3ad1ab3bf0f6916  Drop TO_TIMESTAMP(bigint) 
function support
 - This commit drops TO_TIMESTAMP(bigint) function support in blink planner 
to align with other systems. We only support TO_TIMESTAMP(string [,format]) in 
this version.
   
   - 7dfb34225535a673c99baa4cd7b7dd0ed8a34783  Drop CONVERT_TZ(timestamp, 
format, from_tz, to_tz) function support
 - This commit drops CONVERT_TZ(timestamp, format, from_tz, to_tz) function 
support in blink planner to align with other systems. We only support 
CONVERT_TZ(timestamp, from_tz, to_tz) in this version.
   
   - c2b09aa4f5c00eb5436e5f2e4ed1983657c9753e  Drop DATE_FORMAT(timestamp, 
from_format, to_format) function support
 - This commit drops DATE_FORMAT(timestamp, from_format, to_format) 
function support in blink planner to align with other systems. We only support 
DATE_FORMAT(timestamp, to_format) and DATE_FORMAT(string, to_format) in this 
version.
   
   - 84be6c933af6b8a960df17f6767d620db7f3a59f  Remove some builtin datetime 
functions which can be covered by existing functions
 - Removes DATE_FORMAT_TZ, DATE_ADD,DATE_SUB, DATEDIFF, FROM_TIMESTAMP, 
TO_TIMESTAMP_TZ builtin functions which can be covered by existing functions.
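As a rough analogy for the intended session-time-zone semantics of the fixed functions (plain Python, not Flink code; the format is fixed to `%Y-%m-%d %H:%M:%S` for brevity):

```python
from datetime import datetime, timezone, timedelta

FMT = "%Y-%m-%d %H:%M:%S"

def from_unixtime(epoch_seconds, session_tz):
    """FROM_UNIXTIME(bigint): render an epoch instant in the session time zone."""
    return datetime.fromtimestamp(epoch_seconds, tz=session_tz).strftime(FMT)

def unix_timestamp(ts_string, session_tz):
    """UNIX_TIMESTAMP(string): interpret the string in the session time zone
    (the inverse of from_unixtime)."""
    naive = datetime.strptime(ts_string, FMT)
    return int(naive.replace(tzinfo=session_tz).timestamp())

utc = timezone.utc
shanghai = timezone(timedelta(hours=8))  # fixed offset stand-in for Asia/Shanghai

print(from_unixtime(0, utc))       # 1970-01-01 00:00:00
print(from_unixtime(0, shanghai))  # 1970-01-01 08:00:00
# the two functions round-trip in any session time zone:
assert unix_timestamp(from_unixtime(1234567890, shanghai), shanghai) == 1234567890
```

The same epoch renders differently per session time zone, which is the MySQL/Spark behavior these commits align with.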
   
   
   ## Verifying this change
   
   This change is already covered by existing tests. For the functions changed 
behavior, we modified existing tests and add some more tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   




[jira] [Updated] (FLINK-13561) Verify and correct time function's semantic for Blink planner

2019-08-07 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13561:

Description: 
- Drop CONVERT_TZ(timestamp, format, from_tz, to_tz) function support
   - Drop CONVERT_TZ(timestamp, format, from_tz, to_tz) function support in 
blink planner to align with other databases. We only support 
CONVERT_TZ(timestamp, from_tz, to_tz) in this version.

- Drop DATE_FORMAT(timestamp, from_format, to_format) function support 
  - Drop DATE_FORMAT(timestamp, from_format, to_format) function support in 
blink planner to align with other databases. We only support 
DATE_FORMAT(timestamp, to_format) and DATE_FORMAT(string, to_format) in this 
version.

- Drop TO_DATE(int) function support
  - Drop TO_DATE(int) function support in blink planner to align with other 
databases. We only support TO_DATE(string [,format]) in this version.

- Drop TO_TIMESTAMP(bigint) function support
  - Drop TO_TIMESTAMP(bigint) function support in blink planner to align with 
other systems. We only support TO_TIMESTAMP(string [,format]) in this version.

- Remove some builtin datetime functions which can be covered by existing 
functions
  - Removes DATE_FORMAT_TZ, DATE_ADD,DATE_SUB, DATEDIFF, FROM_TIMESTAMP, 
TO_TIMESTAMP_TZ

- Fix FROM_UNIXTIME(bigint [,format]) should work in session time zone
  - This aligns the behavior to other systems (MySQL, Spark).

- Fix UNIX_TIMESTAMP(string [,format]) should work in session time zone
  - This aligns the behavior to other systems (MySQL, Spark). 
UNIX_TIMESTAMP(string [,format]) is an inverse of FROM_UNIXTIME(bigint 
[,format]). We also remove the support of UNIX_TIMESTAMP(timestamp) in this 
commit.

- Fix NOW() should return TIMESTAMP instead of BIGINT.
  - This aligns the behavior to other systems (MySQL, Spark), because NOW() is 
a synonym for CURRENT_TIMESTAMP.



  was:
Some time function should be corrected:

toTimestamp('2016-03-31') not support in blink.

unix_timestamp and from_unixtime should care about time zone.


> Verify and correct time function's semantic for Blink planner
> -
>
> Key: FLINK-13561
> URL: https://issues.apache.org/jira/browse/FLINK-13561
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.9.0
>
>
> - Drop CONVERT_TZ(timestamp, format, from_tz, to_tz) function support
>- Drop CONVERT_TZ(timestamp, format, from_tz, to_tz) function support in 
> blink planner to align with other databases. We only support 
> CONVERT_TZ(timestamp, from_tz, to_tz) in this version.
> - Drop DATE_FORMAT(timestamp, from_format, to_format) function support 
>   - Drop DATE_FORMAT(timestamp, from_format, to_format) function support in 
> blink planner to align with other databases. We only support 
> DATE_FORMAT(timestamp, to_format) and DATE_FORMAT(string, to_format) in this 
> version.
> - Drop TO_DATE(int) function support
>   - Drop TO_DATE(int) function support in blink planner to align with other 
> databases. We only support TO_DATE(string [,format]) in this version.
> - Drop TO_TIMESTAMP(bigint) function support
>   - Drop TO_TIMESTAMP(bigint) function support in blink planner to align with 
> other systems. We only support TO_TIMESTAMP(string [,format]) in this version.
> - Remove some builtin datetime functions which can be covered by existing 
> functions
>   - Removes DATE_FORMAT_TZ, DATE_ADD,DATE_SUB, DATEDIFF, FROM_TIMESTAMP, 
> TO_TIMESTAMP_TZ
> - Fix FROM_UNIXTIME(bigint [,format]) should work in session time zone
>   - This aligns the behavior to other systems (MySQL, Spark).
> - Fix UNIX_TIMESTAMP(string [,format]) should work in session time zone
>   - This aligns the behavior to other systems (MySQL, Spark). 
> UNIX_TIMESTAMP(string [,format]) is an inverse of FROM_UNIXTIME(bigint 
> [,format]). We also remove the support of UNIX_TIMESTAMP(timestamp) in this 
> commit.
> - Fix NOW() should return TIMESTAMP instead of BIGINT.
>   - This aligns the behavior to other systems (MySQL, Spark), because NOW() 
> is a synonym for CURRENT_TIMESTAMP.





[jira] [Updated] (FLINK-13561) Verify and correct time function's semantic for Blink planner

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13561:
---
Labels: pull-request-available  (was: )

> Verify and correct time function's semantic for Blink planner
> -
>
> Key: FLINK-13561
> URL: https://issues.apache.org/jira/browse/FLINK-13561
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> - Drop CONVERT_TZ(timestamp, format, from_tz, to_tz) function support
>- Drop CONVERT_TZ(timestamp, format, from_tz, to_tz) function support in 
> blink planner to align with other databases. We only support 
> CONVERT_TZ(timestamp, from_tz, to_tz) in this version.
> - Drop DATE_FORMAT(timestamp, from_format, to_format) function support 
>   - Drop DATE_FORMAT(timestamp, from_format, to_format) function support in 
> blink planner to align with other databases. We only support 
> DATE_FORMAT(timestamp, to_format) and DATE_FORMAT(string, to_format) in this 
> version.
> - Drop TO_DATE(int) function support
>   - Drop TO_DATE(int) function support in blink planner to align with other 
> databases. We only support TO_DATE(string [,format]) in this version.
> - Drop TO_TIMESTAMP(bigint) function support
>   - Drop TO_TIMESTAMP(bigint) function support in blink planner to align with 
> other systems. We only support TO_TIMESTAMP(string [,format]) in this version.
> - Remove some builtin datetime functions which can be covered by existing 
> functions
>   - Removes DATE_FORMAT_TZ, DATE_ADD,DATE_SUB, DATEDIFF, FROM_TIMESTAMP, 
> TO_TIMESTAMP_TZ
> - Fix FROM_UNIXTIME(bigint [,format]) should work in session time zone
>   - This aligns the behavior to other systems (MySQL, Spark).
> - Fix UNIX_TIMESTAMP(string [,format]) should work in session time zone
>   - This aligns the behavior to other systems (MySQL, Spark). 
> UNIX_TIMESTAMP(string [,format]) is an inverse of FROM_UNIXTIME(bigint 
> [,format]). We also remove the support of UNIX_TIMESTAMP(timestamp) in this 
> commit.
> - Fix NOW() should return TIMESTAMP instead of BIGINT.
>   - This aligns the behavior to other systems (MySQL, Spark), because NOW() 
> is a synonym for CURRENT_TIMESTAMP.





[GitHub] [flink] flinkbot commented on issue #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot commented on issue #9377: [FLINK-13561][table-planner-blink] Verify 
and correct time function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9377#issuecomment-518973162
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 84be6c933af6b8a960df17f6767d620db7f3a59f (Wed Aug 07 
07:18:20 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Commented] (FLINK-13590) flink-on-yarn sometimes could create many little files that are xxx-taskmanager-conf.yaml

2019-08-07 Thread shuwenjun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901802#comment-16901802
 ] 

shuwenjun commented on FLINK-13590:
---

Hi Yang,

You are right: the files are created when launching a new yarn 
container, and overriding taskmanager-conf.yaml would be a better optimization. Right now 
this can cause an even more serious problem, producing hundreds of 
thousands of small files, because the RM keeps retrying to request 
taskmanager containers even when resources are sufficient, yet it never gets 
new working taskmanagers.

 

Thank you for your comment.

 

> flink-on-yarn sometimes could create many little files that are 
> xxx-taskmanager-conf.yaml
> -
>
> Key: FLINK-13590
> URL: https://issues.apache.org/jira/browse/FLINK-13590
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: shuwenjun
>Priority: Major
> Attachments: taskmanager-conf-yaml.png
>
>
> Both 1.7.2 and 1.8.0 are in use, and both can create many little files.
>  These files are the configuration files of the task managers; whenever the flink 
> session tries to request a new container, one of these files is created. I 
> don't know why the flink session sometimes requests containers again and again. 
> Also, when a container is lost, could Flink delete its taskmanager-conf.yaml? 
>  





[GitHub] [flink] flinkbot edited a comment on issue #9376: [1.9][FLINK-13452][runtime] Ensure to fail global when exception happens during reseting tasks of regions

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9376: [1.9][FLINK-13452][runtime] Ensure to 
fail global when exception happens during reseting tasks of regions
URL: https://github.com/apache/flink/pull/9376#issuecomment-518940672
 
 
   ## CI report:
   
   * ea55fdb7f417fee67e4862a9b871748bfc2210eb : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122215725)
   * b931e9c10b231ed1823fe6a97bccf73bb835dbc2 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122217445)
   




[GitHub] [flink] flinkbot commented on issue #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot commented on issue #9377: [FLINK-13561][table-planner-blink] Verify 
and correct time function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9377#issuecomment-518975063
 
 
   ## CI report:
   
   * 84be6c933af6b8a960df17f6767d620db7f3a59f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/16159)
   




[jira] [Created] (FLINK-13606) PrometheusReporterEndToEndITCase.testReporter unstable on Travis

2019-08-07 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-13606:
-

 Summary: PrometheusReporterEndToEndITCase.testReporter unstable on 
Travis
 Key: FLINK-13606
 URL: https://issues.apache.org/jira/browse/FLINK-13606
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Metrics
Affects Versions: 1.9.0
Reporter: Till Rohrmann
 Fix For: 1.9.0


The {{PrometheusReporterEndToEndITCase.testReporter}} is unstable on Travis. It 
fails with {{java.io.IOException: Process failed due to timeout.}}

https://api.travis-ci.org/v3/job/568280216/log.txt





[GitHub] [flink] hequn8128 commented on a change in pull request #9370: [FLINK-13594][python] Improve the 'from_element' method of flink python api to apply to blink planner

2019-08-07 Thread GitBox
hequn8128 commented on a change in pull request #9370: [FLINK-13594][python] 
Improve the 'from_element' method of flink python api to apply to blink planner
URL: https://github.com/apache/flink/pull/9370#discussion_r311380624
 
 

 ##
 File path: flink-python/pyflink/table/tests/test_calc.py
 ##
 @@ -97,14 +97,60 @@ def test_from_element(self):
   PythonOnlyPoint(3.0, 4.0))],
 schema)
 t.insert_into("Results")
-self.t_env.execute("test")
+t_env.execute("test")
 actual = source_sink_utils.results()
 
 expected = ['1,1.0,hi,hello,1970-01-02,01:00:00,1970-01-02 00:00:00.0,'
-'1970-01-02 00:00:00.0,8640010,[1.0, null],[1.0, 
2.0],[abc],[1970-01-02],'
+'1970-01-02 00:00:00.0,8640,[1.0, null],[1.0, 
2.0],[abc],[1970-01-02],'
 '1,1,2.0,{key=1.0},[65, 66, 67, 68],[1.0, 2.0],[3.0, 4.0]']
 self.assert_equals(actual, expected)
 
+def test_blink_from_element(self):
+t_env = 
BatchTableEnvironment.create(environment_settings=EnvironmentSettings
+ 
.new_instance().use_blink_planner()
+ .in_batch_mode().build())
+field_names = ["a", "b", "c", "d", "e", "f", "g", "h",
 
 Review comment:
   Maybe we can extract these field_names, field_types, schema and data into a 
base class, similar to the `StreamTestData` in java or scala. In this way, we 
can reuse this source information across all these python tests.
   
   What do you think?




[GitHub] [flink] hequn8128 commented on a change in pull request #9370: [FLINK-13594][python] Improve the 'from_element' method of flink python api to apply to blink planner

2019-08-07 Thread GitBox
hequn8128 commented on a change in pull request #9370: [FLINK-13594][python] 
Improve the 'from_element' method of flink python api to apply to blink planner
URL: https://github.com/apache/flink/pull/9370#discussion_r311379434
 
 

 ##
 File path: 
flink-python/src/main/java/org/apache/flink/api/common/python/PythonBridgeUtils.java
 ##
 @@ -44,33 +36,41 @@
  */
 public final class PythonBridgeUtils {
 
 Review comment:
   We need to correct the comments of the class. 




[GitHub] [flink] hequn8128 commented on a change in pull request #9370: [FLINK-13594][python] Improve the 'from_element' method of flink python api to apply to blink planner

2019-08-07 Thread GitBox
hequn8128 commented on a change in pull request #9370: [FLINK-13594][python] 
Improve the 'from_element' method of flink python api to apply to blink planner
URL: https://github.com/apache/flink/pull/9370#discussion_r311379986
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/util/python/PythonTableUtils.scala
 ##
 @@ -24,61 +24,37 @@ import java.time.{LocalDate, LocalDateTime, LocalTime}
 import java.util.TimeZone
 import java.util.function.BiConsumer
 
-import org.apache.flink.api.common.functions.MapFunction
+import org.apache.flink.api.common.ExecutionConfig
+import org.apache.flink.api.common.io.InputFormat
 import org.apache.flink.api.common.typeinfo.{BasicArrayTypeInfo, 
BasicTypeInfo, PrimitiveArrayTypeInfo, TypeInformation}
-import org.apache.flink.api.java.DataSet
+import org.apache.flink.api.java.io.CollectionInputFormat
 import org.apache.flink.api.java.typeutils.{MapTypeInfo, ObjectArrayTypeInfo, 
RowTypeInfo}
-import org.apache.flink.streaming.api.datastream.DataStream
-import org.apache.flink.table.api.java.{BatchTableEnvironment, 
StreamTableEnvironment}
-import org.apache.flink.table.api.{Table, Types}
+import org.apache.flink.core.io.InputSplit
+import org.apache.flink.table.api.{TableSchema, Types}
+import org.apache.flink.table.sources.InputFormatTableSource
 import org.apache.flink.types.Row
 
 object PythonTableUtils {
 
 Review comment:
   How about moving this class into the flink-python module?




[jira] [Updated] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-13605:
--
Fix Version/s: 1.9.0

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[jira] [Updated] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-13605:
--
Priority: Blocker  (was: Major)

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Priority: Blocker
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[jira] [Commented] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901808#comment-16901808
 ] 

Till Rohrmann commented on FLINK-13605:
---

Another instance: https://api.travis-ci.org/v3/job/568526204/log.txt

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[jira] [Updated] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-13605:
--
Component/s: Runtime / Task

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[jira] [Comment Edited] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901808#comment-16901808
 ] 

Till Rohrmann edited comment on FLINK-13605 at 8/7/19 7:29 AM:
---

Another instance: https://api.travis-ci.org/v3/job/568526204/log.txt

The test failed with

{code}
04:11:41.683 [ERROR] testUnorderedWait(org.apache.flink.streaming.api.scala.AsyncDataStreamITCase)  Time elapsed: 0.214 s  <<< ERROR!
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
	at org.apache.flink.streaming.api.scala.AsyncDataStreamITCase.executeAndValidate(AsyncDataStreamITCase.scala:80)
	at org.apache.flink.streaming.api.scala.AsyncDataStreamITCase.testAsyncWait(AsyncDataStreamITCase.scala:65)
	at org.apache.flink.streaming.api.scala.AsyncDataStreamITCase.testUnorderedWait(AsyncDataStreamITCase.scala:47)
Caused by: org.apache.flink.streaming.runtime.tasks.TimerException: java.lang.InterruptedException
Caused by: java.lang.InterruptedException
{code}
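For context, an {{InterruptedException}} surfacing through a timer is easy to reproduce outside Flink. The sketch below is a minimal plain-Java illustration (all names are hypothetical; nothing here is Flink code): a scheduled-executor thread is interrupted by an abrupt shutdown while it waits, and the interruption bubbles up wrapped in the task's failure cause, much like the {{TimerException}} in the trace above.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;

public class InterruptedTimerDemo {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch started = new CountDownLatch(1);

        Future<?> task = timer.submit(() -> {
            started.countDown();
            try {
                Thread.sleep(60_000); // stand-in for "waiting on an async result"
            } catch (InterruptedException e) {
                // An abrupt shutdown interrupts the timer thread mid-wait;
                // re-wrap it the way a timer service would report the failure.
                throw new RuntimeException(e);
            }
        });

        started.await();
        timer.shutdownNow(); // interrupts the waiting task, as a cancellation would

        try {
            task.get();
        } catch (ExecutionException e) {
            // The root cause of the task failure is the interruption itself.
            System.out.println("cause: " + e.getCause().getCause().getClass().getSimpleName());
        }
        System.out.println("done");
    }
}
```

This only sketches the interruption path; whether the test failure is a legitimate cancellation racing with shutdown is what FLINK-13486/FLINK-13605 are about.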


was (Author: till.rohrmann):
Another instance: https://api.travis-ci.org/v3/job/568526204/log.txt

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[jira] [Commented] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901811#comment-16901811
 ] 

Till Rohrmann commented on FLINK-13605:
---

Might be related to FLINK-13486.

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[GitHub] [flink] flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support 
priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#issuecomment-517610510
 
 
   ## CI report:
   
   * 4fe9e1ba5707fb4d208290116bc172142e6be08a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121719017)
   * 346ed33756127b27aed16fc91d8ce81048186c06 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121827648)
   * d9b31af0157fe9b2adf080575272502b6f2e0cb5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122217463)
   




[jira] [Assigned] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-13605:
--

Assignee: Kostas Kloudas

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[GitHub] [flink] sunjincheng121 commented on a change in pull request #9322: [FLINK-13471][table] Add FlatAggregate support to stream Table API(blink planner)

2019-08-07 Thread GitBox
sunjincheng121 commented on a change in pull request #9322: 
[FLINK-13471][table] Add FlatAggregate support to stream Table API(blink 
planner)
URL: https://github.com/apache/flink/pull/9322#discussion_r311405960
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/calcite/TableAggregate.scala
 ##
 @@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.calcite
+
+import java.util
+
+import org.apache.calcite.plan.{RelOptCluster, RelTraitSet}
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.rel.core.AggregateCall
+import org.apache.calcite.rel.{RelNode, RelWriter, SingleRel}
+import org.apache.calcite.util.{ImmutableBitSet, Pair, Util}
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory
+import org.apache.flink.table.types.utils.{LegacyTypeInfoDataTypeConverter, TypeConversions}
+import org.apache.flink.table.typeutils.FieldInfoUtils
+
+import scala.collection.JavaConversions._
+import scala.collection.mutable.ListBuffer
+
+/**
+  * Relational operator that represents a table aggregate. A TableAggregate is similar to the
+  * [[org.apache.calcite.rel.core.Aggregate]] but may output 0 or more records for a group.
+  */
+abstract class TableAggregate(
+    cluster: RelOptCluster,
+    traitSet: RelTraitSet,
+    input: RelNode,
+    groupSet: ImmutableBitSet,
+    groupSets: util.List[ImmutableBitSet],
+    val aggCalls: util.List[AggregateCall])
+  extends SingleRel(cluster, traitSet, input) {
+
+  private[flink] def getGroupSet: ImmutableBitSet = groupSet
+
+  private[flink] def getGroupSets: util.List[ImmutableBitSet] = groupSets
+
+  private[flink] def getAggCallList: util.List[AggregateCall] = aggCalls
+
+  private[flink] def getNamedAggCalls: util.List[Pair[AggregateCall, String]] = {
+    getNamedAggCalls(aggCalls, deriveRowType(), groupSet)
+  }
+
+  override def deriveRowType(): RelDataType = {
+    deriveTableAggRowType(cluster, input, groupSet, aggCalls)
+  }
+
+  protected def deriveTableAggRowType(
+      cluster: RelOptCluster,
+      child: RelNode,
+      groupSet: ImmutableBitSet,
+      aggCalls: util.List[AggregateCall]): RelDataType = {
+
+    val typeFactory = cluster.getTypeFactory.asInstanceOf[FlinkTypeFactory]
+    val builder = typeFactory.builder
+    val groupNames = new ListBuffer[String]
+
+    // group key fields
+    groupSet.asList().foreach(e => {
+      val field = child.getRowType.getFieldList.get(e)
+      groupNames.append(field.getName)
+      builder.add(field)
+    })
+
+    // agg fields
+    val aggCall = aggCalls.get(0)
+    if (aggCall.`type`.isStruct) {
+      // only a structured type contains a field list.
+      aggCall.`type`.getFieldList.foreach(builder.add)
+    } else {
+      // A non-structured type does not have a field list, so get field name through
+      // TableEnvImpl.getFieldNames.
 
 Review comment:
   Good catch! `TableEnvImpl` -> `FieldInfoUtils`.




[jira] [Commented] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Biao Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901814#comment-16901814
 ] 

Biao Liu commented on FLINK-13605:
--

Oops, it's probably caused by https://issues.apache.org/jira/browse/FLINK-13486. Maybe that fix did not cover this case completely. I will check it.

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[jira] [Updated] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-13605:
--
Component/s: (was: Runtime / Task)
 API / DataStream

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[jira] [Commented] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901816#comment-16901816
 ] 

Till Rohrmann commented on FLINK-13489:
---

[~kevin.cyj] are you still working on this issue? I've seen the test still fail 
after FLINK-13579.

> Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
> --
>
> Key: FLINK-13489
> URL: https://issues.apache.org/jira/browse/FLINK-13489
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Yingjie Cao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> https://api.travis-ci.org/v3/job/564925128/log.txt
> {code}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: 1b4f1807cc749628cfc1bdf04647527a)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507)
>   at 
> org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:247)
>   ... 21 more
> Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager 
> with id ea456d6a590eca7598c19c4d35e56db9 timed out.
>   at 
> org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1149)
>   at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>   at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.sc

[GitHub] [flink] kl0u commented on issue #9228: [FLINK-13428][Connectors / FileSystem] allow part file names to be configurable

2019-08-07 Thread GitBox
kl0u commented on issue #9228: [FLINK-13428][Connectors / FileSystem] allow 
part file names to be configurable
URL: https://github.com/apache/flink/pull/9228#issuecomment-518977800
 
 
   No @eskabetxe, I can do that.




[GitHub] [flink] JingsongLi commented on a change in pull request #9089: [FLINK-13225][table-planner-blink] Introduce type inference for hive functions in blink

2019-08-07 Thread GitBox
JingsongLi commented on a change in pull request #9089: 
[FLINK-13225][table-planner-blink] Introduce type inference for hive functions 
in blink
URL: https://github.com/apache/flink/pull/9089#discussion_r311407088
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/utils/HiveFunction.java
 ##
 @@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.utils;
+
+import org.apache.flink.table.types.DataType;
+
+/**
+ * This test class is for hive module HiveFunction.
+ */
+public interface HiveFunction {
 
 Review comment:
   These tests are hacky. `HiveCatalogUseBlinkITCase` has already covered these cases, so we can just remove them.




[GitHub] [flink] mikiaichiyu edited a comment on issue #9353: Add a new connector ' flink-connector-rocketmq'

2019-08-07 Thread GitBox
mikiaichiyu edited a comment on issue #9353: Add a new connector ' 
flink-connector-rocketmq'
URL: https://github.com/apache/flink/pull/9353#issuecomment-518915810
 
 
   Hi @tillrohrmann
   
   When I tried to add a new connector to Flink, I found that I need to open a JIRA ticket first. I went to https://flink.apache.org/contributing/contribute-code.html -> 'Flink's bug tracker: Jira', but I could not find where to create a ticket. Since this is my first time contributing to Flink, I think I need your help: could you please let me know how to get a ticket for this commit? Thanks!
   




[jira] [Commented] (FLINK-13606) PrometheusReporterEndToEndITCase.testReporter unstable on Travis

2019-08-07 Thread TisonKun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901819#comment-16901819
 ] 

TisonKun commented on FLINK-13606:
--

It times out when downloading Prometheus from {{"https://github.com/prometheus/prometheus/releases/download/v" + PROMETHEUS_VERSION + '/' + prometheusArchive.getFileName()}}.

However, the timeout has been set to 5 minutes, which seems quite long enough...

> PrometheusReporterEndToEndITCase.testReporter unstable on Travis
> 
>
> Key: FLINK-13606
> URL: https://issues.apache.org/jira/browse/FLINK-13606
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{PrometheusReporterEndToEndITCase.testReporter}} is unstable on Travis. 
> It fails with {{java.io.IOException: Process failed due to timeout.}}
> https://api.travis-ci.org/v3/job/568280216/log.txt





[jira] [Comment Edited] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901816#comment-16901816
 ] 

Till Rohrmann edited comment on FLINK-13489 at 8/7/19 7:37 AM:
---

-[~kevin.cyj] are you still working on this issue? I've seen the test still 
fail after FLINK-13579.-

In the latest {{release-1.9}} cron job the tests weren't executed because the 
tpch end-to-end test failed consistently.


was (Author: till.rohrmann):
[~kevin.cyj] are you still working on this issue? I've seen the test still fail 
after FLINK-13579.

> Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
> --
>
> Key: FLINK-13489
> URL: https://issues.apache.org/jira/browse/FLINK-13489
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Yingjie Cao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> https://api.travis-ci.org/v3/job/564925128/log.txt
> {code}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: 1b4f1807cc749628cfc1bdf04647527a)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507)
>   at 
> org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:247)
>   ... 21 more
> Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager 
> with id ea456d6a590eca7598c19c4d35e56db9 timed out.
>   at 
> org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1149)
>   at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>   

[GitHub] [flink] klion26 commented on a change in pull request #9348: [FLINK-13505][docs-zh] Translate "Java Lambda Expressions" page into Chinese

2019-08-07 Thread GitBox
klion26 commented on a change in pull request #9348: [FLINK-13505][docs-zh] 
Translate "Java Lambda Expressions" page into Chinese
URL: https://github.com/apache/flink/pull/9348#discussion_r311408006
 
 

 ##
 File path: docs/dev/java_lambdas.zh.md
 ##
 @@ -22,41 +22,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Java 8 introduced several new language features designed for faster and 
clearer coding. With the most important feature,
-the so-called "Lambda Expressions", it opened the door to functional 
programming. Lambda expressions allow for implementing and
-passing functions in a straightforward way without having to declare 
additional (anonymous) classes.
+Java 8 引入了几种新的语言特性,旨在实现更快、更清晰的编码。 作为最重要的特性,即所谓的“Lambda 
表达式”,它开启了函数式编程的大门。Lambda 表达式允许以简捷的方式实现和传递函数,而无需声明额外的(匿名)类。
 
-Attention Flink supports the usage of 
lambda expressions for all operators of the Java API, however, whenever a 
lambda expression uses Java generics you need to declare type information 
*explicitly*. 
+注意 Flink 支持对 Java API 的所有算子使用 Lambda 
表达式,但是,当 Lambda 表达式使用 Java 泛型时,你需要*显式*声明类型信息。
 
-This document shows how to use lambda expressions and describes current 
limitations. For a general introduction to the
-Flink API, please refer to the [Programming Guide]({{ site.baseurl 
}}/dev/api_concepts.html)
+本文档介绍了如何使用 Lambda 表达式并描述了其在当前应用中的限制。有关 Flink API 的一般性介绍, 请参阅[编程指南]({{ 
site.baseurl }}/zh/dev/api_concepts.html)。
 
-### Examples and Limitations
+### 示例和限制
 
-The following example illustrates how to implement a simple, inline `map()` 
function that squares its input using a lambda expression.
-The types of input `i` and output parameters of the `map()` function need not 
to be declared as they are inferred by the Java compiler.
+下例演示了如何实现一个简单的行内 `map()` 函数,它使用 Lambda 表达式计算输入的平方。不需要声明 `map()` 函数的输入 `i` 
和输出参数的数据类型,因为 Java 编译器会对它们做出推断。
 
 {% highlight java %}
 env.fromElements(1, 2, 3)
-// returns the squared i
+// 返回 i 的平方
 .map(i -> i*i)
 .print();
 {% endhighlight %}
 
-Flink can automatically extract the result type information from the 
implementation of the method signature `OUT map(IN value)` because `OUT` is not 
generic but `Integer`.
+由于 `OUT` 是 `Integer` 而不是泛型,Flink 可以由方法签名 `OUT map(IN value)` 的实现中自动提取出结果的类型信息。
 
-Unfortunately, functions such as `flatMap()` with a signature `void flatMap(IN 
value, Collector out)` are compiled into `void flatMap(IN value, Collector 
out)` by the Java compiler. This makes it impossible for Flink to infer the 
type information for the output type automatically.
+不幸的是,`flatMap()` 这样的函数,它的签名 `void flatMap(IN value, Collector out)` 被 
Java 编译器编译为 `void flatMap(IN value, Collector out)`。这样 Flink 就无法自动推断输出的类型信息了。
 
-Flink will most likely throw an exception similar to the following:
+Flink 很可能抛出类似如下的异常:
 
 {% highlight plain%}
 org.apache.flink.api.common.functions.InvalidTypesException: The generic type 
parameters of 'Collector' are missing.
-In many cases lambda methods don't provide enough information for 
automatic type extraction when Java generics are involved.
+In many cases Lambda methods don't provide enough information for 
automatic type extraction when Java generics are involved.
 
 Review comment:
   Why do we need to change this one? If we change it, then the English version should be updated as well.
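   As an aside, the type-erasure behaviour the quoted doc describes is observable with plain Java reflection, no Flink required. The sketch below uses stand-in `FlatMapFunction`/`Collector` interfaces (hypothetical, not Flink's own) to show that an anonymous class keeps its generic interface signature after compilation, while a lambda's generated class only reports the raw interface — which is exactly why the output type cannot be inferred from a generic lambda.

```java
import java.lang.reflect.Type;

public class LambdaErasureDemo {
    // Stand-ins for Flink's interfaces so the demo is self-contained.
    interface Collector<T> { void collect(T record); }
    interface FlatMapFunction<IN, OUT> { void flatMap(IN value, Collector<OUT> out); }

    public static void main(String[] args) {
        // Anonymous class: "implements FlatMapFunction<String, Integer>" is
        // recorded in the class file and recoverable via reflection.
        FlatMapFunction<String, Integer> anon = new FlatMapFunction<String, Integer>() {
            @Override
            public void flatMap(String value, Collector<Integer> out) {
                out.collect(value.length());
            }
        };
        // Lambda: the generated class implements only the *raw* interface,
        // so the String/Integer type arguments are gone at runtime.
        FlatMapFunction<String, Integer> lambda = (value, out) -> out.collect(value.length());

        for (Type t : anon.getClass().getGenericInterfaces()) {
            System.out.println("anonymous implements: " + t);
        }
        for (Type t : lambda.getClass().getGenericInterfaces()) {
            System.out.println("lambda implements: " + t);
        }
    }
}
```

   Running it prints a parameterized type for the anonymous class but only the raw interface for the lambda, mirroring why Flink needs an explicit type hint (or an anonymous class) when generics are involved.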




[GitHub] [flink] klion26 commented on a change in pull request #9348: [FLINK-13505][docs-zh] Translate "Java Lambda Expressions" page into Chinese

2019-08-07 Thread GitBox
klion26 commented on a change in pull request #9348: [FLINK-13505][docs-zh] 
Translate "Java Lambda Expressions" page into Chinese
URL: https://github.com/apache/flink/pull/9348#discussion_r311408129
 
 

 ##
 File path: docs/dev/java_lambdas.zh.md
 ##
 @@ -22,41 +22,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Java 8 introduced several new language features designed for faster and 
clearer coding. With the most important feature,
-the so-called "Lambda Expressions", it opened the door to functional 
programming. Lambda expressions allow for implementing and
-passing functions in a straightforward way without having to declare 
additional (anonymous) classes.
+Java 8 引入了几种新的语言特性,旨在实现更快、更清晰的编码。 作为最重要的特性,即所谓的“Lambda 
表达式”,它开启了函数式编程的大门。Lambda 表达式允许以简捷的方式实现和传递函数,而无需声明额外的(匿名)类。
 
-Attention Flink supports the usage of 
lambda expressions for all operators of the Java API, however, whenever a 
lambda expression uses Java generics you need to declare type information 
*explicitly*. 
+注意 Flink 支持对 Java API 的所有算子使用 Lambda 
表达式,但是,当 Lambda 表达式使用 Java 泛型时,你需要*显式*声明类型信息。
 
-This document shows how to use lambda expressions and describes current 
limitations. For a general introduction to the
-Flink API, please refer to the [Programming Guide]({{ site.baseurl 
}}/dev/api_concepts.html)
+本文档介绍了如何使用 Lambda 表达式并描述了其在当前应用中的限制。有关 Flink API 的一般性介绍, 请参阅[编程指南]({{ 
site.baseurl }}/zh/dev/api_concepts.html)。
 
-### Examples and Limitations
+### 示例和限制
 
-The following example illustrates how to implement a simple, inline `map()` 
function that squares its input using a lambda expression.
-The types of input `i` and output parameters of the `map()` function need not 
to be declared as they are inferred by the Java compiler.
+下例演示了如何实现一个简单的行内 `map()` 函数,它使用 Lambda 表达式计算输入的平方。不需要声明 `map()` 函数的输入 `i` 
和输出参数的数据类型,因为 Java 编译器会对它们做出推断。
 
 {% highlight java %}
 env.fromElements(1, 2, 3)
-// returns the squared i
+// 返回 i 的平方
 .map(i -> i*i)
 .print();
 {% endhighlight %}
 
-Flink can automatically extract the result type information from the 
implementation of the method signature `OUT map(IN value)` because `OUT` is not 
generic but `Integer`.
+由于 `OUT` 是 `Integer` 而不是泛型,Flink 可以由方法签名 `OUT map(IN value)` 的实现中自动提取出结果的类型信息。
 
-Unfortunately, functions such as `flatMap()` with a signature `void flatMap(IN 
value, Collector out)` are compiled into `void flatMap(IN value, Collector 
out)` by the Java compiler. This makes it impossible for Flink to infer the 
type information for the output type automatically.
+不幸的是,`flatMap()` 这样的函数,它的签名 `void flatMap(IN value, Collector out)` 被 
Java 编译器编译为 `void flatMap(IN value, Collector out)`。这样 Flink 就无法自动推断输出的类型信息了。
 
-Flink will most likely throw an exception similar to the following:
+Flink 很可能抛出类似如下的异常:
 
 {% highlight plain%}
 org.apache.flink.api.common.functions.InvalidTypesException: The generic type 
parameters of 'Collector' are missing.
-In many cases lambda methods don't provide enough information for 
automatic type extraction when Java generics are involved.
+In many cases Lambda methods don't provide enough information for 
automatic type extraction when Java generics are involved.
 An easy workaround is to use an (anonymous) class instead that implements 
the 'org.apache.flink.api.common.functions.FlatMapFunction' interface.
 Otherwise the type has to be specified explicitly using type information.
 {% endhighlight %}
 
-In this case, the type information needs to be *specified explicitly*, 
otherwise the output will be treated as type `Object` which leads to 
unefficient serialization.
+在这种情况下,需要*显式*指定类型信息,否则输出将被视为 `Object` 类型,这会导致低效的序列化。
 
 Review comment:
   maybe "要*显式*指" -> "要 *显式* 指"
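The erasure behavior described in the diff above can be reproduced with plain Java (a standalone sketch, independent of Flink; the local `Collector` interface is a hypothetical stand-in): an anonymous class keeps its generic interface parameters in the class metadata, while a lambda is compiled to an erased synthetic class, which is why Flink cannot infer the `Collector` element type from a `flatMap` lambda.

```java
import java.lang.reflect.ParameterizedType;
import java.util.function.Consumer;

public class ErasureDemo {
    // Hypothetical stand-in for Flink's Collector interface.
    interface Collector<T> { void collect(T record); }

    // True if the object's class retains a parameterized generic interface.
    static boolean hasGenericInfo(Object fn) {
        return fn.getClass().getGenericInterfaces()[0] instanceof ParameterizedType;
    }

    public static void main(String[] args) {
        // Anonymous class: Consumer<Collector<String>> survives in class metadata.
        Consumer<Collector<String>> anon = new Consumer<Collector<String>>() {
            @Override public void accept(Collector<String> out) { out.collect("x"); }
        };

        // Lambda: compiled to a synthetic class implementing the raw Consumer.
        Consumer<Collector<String>> lambda = out -> out.collect("x");

        System.out.println(hasGenericInfo(anon));   // generic info recoverable
        System.out.println(hasGenericInfo(lambda)); // erased
    }
}
```

This recoverable-metadata gap is what forces the explicit type information mentioned in the quoted doc text.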




[GitHub] [flink] klion26 commented on a change in pull request #9348: [FLINK-13505][docs-zh] Translate "Java Lambda Expressions" page into Chinese

2019-08-07 Thread GitBox
klion26 commented on a change in pull request #9348: [FLINK-13505][docs-zh] 
Translate "Java Lambda Expressions" page into Chinese
URL: https://github.com/apache/flink/pull/9348#discussion_r311407297
 
 

 ##
 File path: docs/dev/java_lambdas.zh.md
 ##
 @@ -22,41 +22,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Java 8 introduced several new language features designed for faster and 
clearer coding. With the most important feature,
-the so-called "Lambda Expressions", it opened the door to functional 
programming. Lambda expressions allow for implementing and
-passing functions in a straightforward way without having to declare 
additional (anonymous) classes.
+Java 8 引入了几种新的语言特性,旨在实现更快、更清晰的编码。 作为最重要的特性,即所谓的“Lambda 
表达式”,它开启了函数式编程的大门。Lambda 表达式允许以简捷的方式实现和传递函数,而无需声明额外的(匿名)类。
 
-Attention Flink supports the usage of 
lambda expressions for all operators of the Java API, however, whenever a 
lambda expression uses Java generics you need to declare type information 
*explicitly*. 
+注意 Flink 支持对 Java API 的所有算子使用 Lambda 
表达式,但是,当 Lambda 表达式使用 Java 泛型时,你需要*显式*声明类型信息。
 
 Review comment:
   maybe "要*显式*声" needs to be changed to "要 *显式* 声"; you can verify this locally according to the [instructions](https://github.com/apache/flink/blob/master/docs/README.md)




[GitHub] [flink] klion26 commented on a change in pull request #9348: [FLINK-13505][docs-zh] Translate "Java Lambda Expressions" page into Chinese

2019-08-07 Thread GitBox
klion26 commented on a change in pull request #9348: [FLINK-13505][docs-zh] 
Translate "Java Lambda Expressions" page into Chinese
URL: https://github.com/apache/flink/pull/9348#discussion_r311406351
 
 

 ##
 File path: docs/dev/java_lambdas.zh.md
 ##
 @@ -22,41 +22,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Java 8 introduced several new language features designed for faster and 
clearer coding. With the most important feature,
-the so-called "Lambda Expressions", it opened the door to functional 
programming. Lambda expressions allow for implementing and
-passing functions in a straightforward way without having to declare 
additional (anonymous) classes.
+Java 8 引入了几种新的语言特性,旨在实现更快、更清晰的编码。 作为最重要的特性,即所谓的“Lambda 
表达式”,它开启了函数式编程的大门。Lambda 表达式允许以简捷的方式实现和传递函数,而无需声明额外的(匿名)类。
 
 Review comment:
   即所谓的“Lambda 表达式” -> 即所谓的 “Lambda 表达式”?




[GitHub] [flink] twalthr commented on a change in pull request #9328: [FLINK-13521][sql-client] Allow setting configurations in SQL CLI

2019-08-07 Thread GitBox
twalthr commented on a change in pull request #9328: [FLINK-13521][sql-client] 
Allow setting configurations in SQL CLI
URL: https://github.com/apache/flink/pull/9328#discussion_r311408366
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/config/Environment.java
 ##
 @@ -48,6 +49,8 @@
 
public static final String EXECUTION_ENTRY = "execution";
 
+   public static final String CONFIGURATION_ENTRY = "table";
 
 Review comment:
   I agree with Jark. We should not have another level of nesting when using 
the `SET` command.




[GitHub] [flink] wuchong commented on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
wuchong commented on issue #9331: [FLINK-13523][table-planner-blink] Verify and 
correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-518979809
 
 
   The TPC-H e2e tests are not affected because all the `avg` and `/` operations are on double fields.
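The distinction being relied on here can be illustrated with plain Java arithmetic (a generic illustration of integer vs. double division semantics, not Blink's actual code path): dividing two integers truncates toward zero, while division involving a double does not, so queries whose `avg` and `/` operate on double fields never exercise the corrected integer-division semantics.

```java
public class DivisionSemantics {
    public static void main(String[] args) {
        // Integer division truncates toward zero.
        System.out.println(7 / 2);    // 3
        System.out.println(-7 / 2);   // -3

        // As soon as one operand is a double, the result keeps the fraction.
        System.out.println(7.0 / 2);  // 3.5

        // An integer-typed average computed as sum/count would truncate the same way.
        int sum = 7, count = 2;
        System.out.println(sum / count);          // 3
        System.out.println((double) sum / count); // 3.5
    }
}
```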




[jira] [Created] (FLINK-13607) TPC-H end-to-end test (Blink planner) failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-13607:
-

 Summary: TPC-H end-to-end test (Blink planner) failed on Travis
 Key: FLINK-13607
 URL: https://issues.apache.org/jira/browse/FLINK-13607
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API, Tests
Affects Versions: 1.9.0
Reporter: Till Rohrmann
 Fix For: 1.9.0


The {{TPC-H end-to-end test (Blink planner)}} fails consistently on Travis with

{code}
Generating test data...
Error: Could not find or load main class 
org.apache.flink.table.tpch.TpchDataGenerator
{code}

https://api.travis-ci.org/v3/job/568280203/log.txt
https://api.travis-ci.org/v3/job/568280209/log.txt
https://api.travis-ci.org/v3/job/568280215/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13607) TPC-H end-to-end test (Blink planner) failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901825#comment-16901825
 ] 

Till Rohrmann commented on FLINK-13607:
---

[~ykt836] can this be caused by FLINK-13592?

> TPC-H end-to-end test (Blink planner) failed on Travis
> --
>
> Key: FLINK-13607
> URL: https://issues.apache.org/jira/browse/FLINK-13607
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{TPC-H end-to-end test (Blink planner)}} fail consistently on Travis with
> {code}
> Generating test data...
> Error: Could not find or load main class 
> org.apache.flink.table.tpch.TpchDataGenerator
> {code}
> https://api.travis-ci.org/v3/job/568280203/log.txt
> https://api.travis-ci.org/v3/job/568280209/log.txt
> https://api.travis-ci.org/v3/job/568280215/log.txt





[jira] [Commented] (FLINK-13601) RegionFailoverITCase is unstable

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901833#comment-16901833
 ] 

Till Rohrmann commented on FLINK-13601:
---

I've assigned you to this ticket [~yunta]. Please move it into "in progress" 
once you start working on it.

> RegionFailoverITCase is unstable
> 
>
> Key: FLINK-13601
> URL: https://issues.apache.org/jira/browse/FLINK-13601
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Aljoscha Krettek
>Assignee: Yun Tang
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> Excerpt from https://travis-ci.com/flink-ci/flink/jobs/222711830:
> {code}
> 10:44:31.222 [INFO] Running 
> org.apache.flink.test.checkpointing.RegionFailoverITCase
> org.apache.flink.client.program.ProgramInvocationException: Job failed 
> (JobID: 9e0fbeaa580123e05cfce5554f443d23)
> at 
> org.apache.flink.client.program.MiniClusterClient.submitJob(MiniClusterClient.java:92)
> at 
> org.apache.flink.test.checkpointing.RegionFailoverITCase.testMultiRegionFailover(RegionFailoverITCase.java:132)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
> at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
> at 
> org.apache.flink.client.program.MiniClusterClient.submitJob(MiniClusterClient.java:90)
> ... 13 more
> Caused by: java.lang.RuntimeException: Test failed due to unexpected 
> recovered index: 2000, while last completed checkpoint record index: 1837
> at 
> org.apache.flink.test.checkpointing.RegionFailoverITCase$StringGeneratingSourceFunction.initializeState(RegionFailoverITCase.java:300)
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:281)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:862)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:367)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:688)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:518)
> ... 1 more
> 10:44:39.210 [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 7.983 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.RegionFailoverITCase
> {code}





[jira] [Assigned] (FLINK-13607) TPC-H end-to-end test (Blink planner) failed on Travis

2019-08-07 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young reassigned FLINK-13607:
--

Assignee: Kurt Young

> TPC-H end-to-end test (Blink planner) failed on Travis
> --
>
> Key: FLINK-13607
> URL: https://issues.apache.org/jira/browse/FLINK-13607
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Kurt Young
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{TPC-H end-to-end test (Blink planner)}} fail consistently on Travis with
> {code}
> Generating test data...
> Error: Could not find or load main class 
> org.apache.flink.table.tpch.TpchDataGenerator
> {code}
> https://api.travis-ci.org/v3/job/568280203/log.txt
> https://api.travis-ci.org/v3/job/568280209/log.txt
> https://api.travis-ci.org/v3/job/568280215/log.txt





[jira] [Commented] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout

2019-08-07 Thread Yingjie Cao (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901830#comment-16901830
 ] 

Yingjie Cao commented on FLINK-13489:
-

[~till.rohrmann] I am still working on this issue. Could you share the latest failure log? I wonder whether the failures we encountered result from the same cause. I am now focusing on the Akka timeout problem, but the probability of failure is low, about 1%-2%.

> Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
> --
>
> Key: FLINK-13489
> URL: https://issues.apache.org/jira/browse/FLINK-13489
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Yingjie Cao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> https://api.travis-ci.org/v3/job/564925128/log.txt
> {code}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: 1b4f1807cc749628cfc1bdf04647527a)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507)
>   at 
> org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:247)
>   ... 21 more
> Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager 
> with id ea456d6a590eca7598c19c4d35e56db9 timed out.
>   at 
> org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1149)
>   at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>   at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatement

[jira] [Commented] (FLINK-13607) TPC-H end-to-end test (Blink planner) failed on Travis

2019-08-07 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901831#comment-16901831
 ] 

Kurt Young commented on FLINK-13607:


I will take a look.

> TPC-H end-to-end test (Blink planner) failed on Travis
> --
>
> Key: FLINK-13607
> URL: https://issues.apache.org/jira/browse/FLINK-13607
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{TPC-H end-to-end test (Blink planner)}} fail consistently on Travis with
> {code}
> Generating test data...
> Error: Could not find or load main class 
> org.apache.flink.table.tpch.TpchDataGenerator
> {code}
> https://api.travis-ci.org/v3/job/568280203/log.txt
> https://api.travis-ci.org/v3/job/568280209/log.txt
> https://api.travis-ci.org/v3/job/568280215/log.txt





[jira] [Assigned] (FLINK-13601) RegionFailoverITCase is unstable

2019-08-07 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-13601:
-

Assignee: Yun Tang

> RegionFailoverITCase is unstable
> 
>
> Key: FLINK-13601
> URL: https://issues.apache.org/jira/browse/FLINK-13601
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Aljoscha Krettek
>Assignee: Yun Tang
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> Excerpt from https://travis-ci.com/flink-ci/flink/jobs/222711830:
> {code}
> 10:44:31.222 [INFO] Running 
> org.apache.flink.test.checkpointing.RegionFailoverITCase
> org.apache.flink.client.program.ProgramInvocationException: Job failed 
> (JobID: 9e0fbeaa580123e05cfce5554f443d23)
> at 
> org.apache.flink.client.program.MiniClusterClient.submitJob(MiniClusterClient.java:92)
> at 
> org.apache.flink.test.checkpointing.RegionFailoverITCase.testMultiRegionFailover(RegionFailoverITCase.java:132)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
> at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
> at 
> org.apache.flink.client.program.MiniClusterClient.submitJob(MiniClusterClient.java:90)
> ... 13 more
> Caused by: java.lang.RuntimeException: Test failed due to unexpected 
> recovered index: 2000, while last completed checkpoint record index: 1837
> at 
> org.apache.flink.test.checkpointing.RegionFailoverITCase$StringGeneratingSourceFunction.initializeState(RegionFailoverITCase.java:300)
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:281)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:862)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:367)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:688)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:518)
> ... 1 more
> 10:44:39.210 [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 7.983 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.RegionFailoverITCase
> {code}





[jira] [Commented] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901836#comment-16901836
 ] 

Till Rohrmann commented on FLINK-13605:
---

Another instance: https://api.travis-ci.org/v3/job/568314793/log.txt

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[jira] [Commented] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901840#comment-16901840
 ] 

Till Rohrmann commented on FLINK-13605:
---

It seems to also affect {{AsyncDataStreamITCase.testOrderedWait}}.

https://api.travis-ci.org/v3/job/568658124/log.txt

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Assignee: Biao Liu
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[jira] [Assigned] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-13605:
--

Assignee: Biao Liu  (was: Kostas Kloudas)

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Assignee: Biao Liu
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt





[GitHub] [flink] kl0u commented on issue #9228: [FLINK-13428][Connectors / FileSystem] allow part file names to be configurable

2019-08-07 Thread GitBox
kl0u commented on issue #9228: [FLINK-13428][Connectors / FileSystem] allow 
part file names to be configurable
URL: https://github.com/apache/flink/pull/9228#issuecomment-518982686
 
 
   Done! Sorry for the delay @eskabetxe 




[GitHub] [flink] flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] Introduce type inference for hive functions in blink

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] 
Introduce type inference for hive functions in blink
URL: https://github.com/apache/flink/pull/9089#issuecomment-510488226
 
 
   ## CI report:
   
   * fb34a0f4245ddac5872ea77aad07887a6ff12d11 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118890132)
   * ba44069acdbd82261839605b5d363548dae81522 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054606)
   * 349f15d9e799ac9d316a02392d60495058fda4aa : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974318)
   * 8876d89f32920192e1d3615b588b72021dbc379a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974650)
   * 2e15e52dabcde02c6634063ada8d7b885252c16f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/19182)
   




[jira] [Assigned] (FLINK-13428) StreamingFileSink allow part file name to be configurable

2019-08-07 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-13428:
--

Assignee: Kostas Kloudas

> StreamingFileSink allow part file name to be configurable
> -
>
> Key: FLINK-13428
> URL: https://issues.apache.org/jira/browse/FLINK-13428
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Joao Boto
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Allow that part file name could be configurable:
>  * partPrefix and partSuffix can be passed
>  
> the part prefix allow to set a better name to file
> the part suffix (if used as extension) allow system like Athena or Presto to 
> automatic detect the type of file and the compression if applied





[jira] [Commented] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout

2019-08-07 Thread Yingjie Cao (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901841#comment-16901841
 ] 

Yingjie Cao commented on FLINK-13489:
-

[~till.rohrmann] Is there a Jira for the TPC-H end-to-end test failure, or should we solve the problem under this Jira?

> Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
> --
>
> Key: FLINK-13489
> URL: https://issues.apache.org/jira/browse/FLINK-13489
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Yingjie Cao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> https://api.travis-ci.org/v3/job/564925128/log.txt
> {code}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: 1b4f1807cc749628cfc1bdf04647527a)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507)
>   at 
> org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:247)
>   ... 21 more
> Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager 
> with id ea456d6a590eca7598c19c4d35e56db9 timed out.
>   at 
> org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1149)
>   at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>   at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialF

[jira] [Updated] (FLINK-13428) StreamingFileSink allow part file name to be configurable

2019-08-07 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-13428:
---
Fix Version/s: 1.10.0

> StreamingFileSink allow part file name to be configurable
> -
>
> Key: FLINK-13428
> URL: https://issues.apache.org/jira/browse/FLINK-13428
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Joao Boto
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Allow the part file name to be configurable:
>  * partPrefix and partSuffix can be passed
>  
> The part prefix allows setting a better file name.
> The part suffix (if used as an extension) allows systems like Athena or Presto to 
> automatically detect the file type and the compression, if applied.





[jira] [Assigned] (FLINK-13428) StreamingFileSink allow part file name to be configurable

2019-08-07 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-13428:
--

Assignee: Joao Boto  (was: Kostas Kloudas)

> StreamingFileSink allow part file name to be configurable
> -
>
> Key: FLINK-13428
> URL: https://issues.apache.org/jira/browse/FLINK-13428
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Joao Boto
>Assignee: Joao Boto
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Allow the part file name to be configurable:
>  * partPrefix and partSuffix can be passed
>  
> The part prefix allows setting a better file name.
> The part suffix (if used as an extension) allows systems like Athena or Presto to 
> automatically detect the file type and the compression, if applied.





[GitHub] [flink] asfgit closed pull request #9228: [FLINK-13428][Connectors / FileSystem] allow part file names to be configurable

2019-08-07 Thread GitBox
asfgit closed pull request #9228: [FLINK-13428][Connectors / FileSystem] allow 
part file names to be configurable
URL: https://github.com/apache/flink/pull/9228
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13159) java.lang.ClassNotFoundException when restore job

2019-08-07 Thread Tzu-Li (Gordon) Tai (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-13159:

Fix Version/s: 1.9.0
   1.8.2

> java.lang.ClassNotFoundException when restore job
> -
>
> Key: FLINK-13159
> URL: https://issues.apache.org/jira/browse/FLINK-13159
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.8.0, 1.8.1
>Reporter: kring
>Assignee: Yun Tang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.8.2, 1.9.0
>
> Attachments: image-2019-08-05-17-29-40-351.png, 
> image-2019-08-05-17-32-44-988.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> java.lang.Exception: Exception while creating StreamOperatorStateContext.
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:195)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:250)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:738)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:289)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.FlinkException: Could not restore keyed 
> state backend for WindowOperator_b398b3dd4c544ddf2d47a0cc47d332f4_(1/6) from 
> any of the 1 prov
> ided restore options.
> at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:307)
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:135)
> ... 5 common frames omitted
> Caused by: org.apache.flink.runtime.state.BackendBuildingException: Failed 
> when trying to restore heap backend
> at 
> org.apache.flink.runtime.state.heap.HeapKeyedStateBackendBuilder.build(HeapKeyedStateBackendBuilder.java:130)
> at 
> org.apache.flink.runtime.state.filesystem.FsStateBackend.createKeyedStateBackend(FsStateBackend.java:489)
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:291)
> at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:142)
> at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:121)
> ... 7 common frames omitted
> Caused by: java.lang.RuntimeException: Cannot instantiate class.
> at 
> org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:384)
> at 
> org.apache.flink.runtime.state.heap.StateTableByKeyGroupReaders.lambda$createV2PlusReader$0(StateTableByKeyGroupReaders.java:74)
> at 
> org.apache.flink.runtime.state.KeyGroupPartitioner$PartitioningResultKeyGroupReader.readMappingsInKeyGroup(KeyGroupPartitioner.java:297)
> at 
> org.apache.flink.runtime.state.heap.HeapRestoreOperation.readKeyGroupStateData(HeapRestoreOperation.java:290)
> at 
> org.apache.flink.runtime.state.heap.HeapRestoreOperation.readStateHandleStateData(HeapRestoreOperation.java:251)
> at 
> org.apache.flink.runtime.state.heap.HeapRestoreOperation.restore(HeapRestoreOperation.java:153)
> at 
> org.apache.flink.runtime.state.heap.HeapKeyedStateBackendBuilder.build(HeapKeyedStateBackendBuilder.java:127)
> ... 11 common frames omitted
> Caused by: java.lang.ClassNotFoundException: xxx
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:382)
> ... 17 common frames omitted
> {code}
> A strange problem with Flink: after a task has been running properly 
> for a period of time, if any exception (such as an ask timeout or an ES request 
> timeout) is thrown, restarting the task reports the above error (xxx is a 
> business model class), and ten subsequent retries do not succeed; but once the 
> task is resubmitted, it runs normally again. In addition, three other tasks are 
> running at the same time, and none of them has the problem.
> My Flink version is 1.8.0.
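A {{ClassNotFoundException}} like the one above typically means the class name is resolved against a classloader that cannot see the user code. The standalone sketch below only illustrates that `Class.forName(String)` resolves against the caller's defining classloader; `com.example.BusinessModel` is a hypothetical class name, and this is not a reproduction of the Flink bug itself.

```java
public class Main {
    public static void main(String[] args) {
        // Class.forName(name) resolves against the caller's classloader.
        // If the class lives only in a separate (user-code) classloader,
        // resolution from framework code fails like in the trace above.
        try {
            Class.forName("com.example.BusinessModel"); // hypothetical user class
            System.out.println("found");
        } catch (ClassNotFoundException e) {
            System.out.println("not visible: " + e.getMessage());
        }
    }
}
```

This is consistent with the reported symptom: the class exists in the job jar, but the framework thread attempting the restore resolves it through a classloader that does not contain it.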





[jira] [Commented] (FLINK-13573) Merge SubmittedJobGraph into JobGraph

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901843#comment-16901843
 ] 

Till Rohrmann commented on FLINK-13573:
---

Thanks for opening this issue [~Tison]. I think your observation is correct. 
How would we ensure backwards compatibility?

> Merge SubmittedJobGraph into JobGraph
> -
>
> Key: FLINK-13573
> URL: https://issues.apache.org/jira/browse/FLINK-13573
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: TisonKun
>Priority: Major
> Fix For: 1.10.0
>
>
> As time goes on, {{SubmittedJobGraph}} becomes a thin wrapper of {{JobGraph}} 
> without any additional information. It is reasonable that we merge 
> {{SubmittedJobGraph}} into {{JobGraph}} and use only {{JobGraph}}. 
> WDYT? cc [~till.rohrmann] [~GJL]





[jira] [Closed] (FLINK-13428) StreamingFileSink allow part file name to be configurable

2019-08-07 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas closed FLINK-13428.
--
Resolution: Fixed

Merged on master with e9d3d58eff5c9a3727b1d5b2b3dfaec136267951

> StreamingFileSink allow part file name to be configurable
> -
>
> Key: FLINK-13428
> URL: https://issues.apache.org/jira/browse/FLINK-13428
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Joao Boto
>Assignee: Joao Boto
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Allow the part file name to be configurable:
>  * partPrefix and partSuffix can be passed
>  
> The part prefix allows setting a better file name.
> The part suffix (if used as an extension) allows systems like Athena or Presto to 
> automatically detect the file type and the compression, if applied.
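The prefix/suffix scheme described in the issue can be sketched in a few lines. This is a hedged illustration only: `PartFileNamer` and the exact `<prefix>-<subtask>-<counter><suffix>` pattern are assumptions for the sake of the example, not Flink's actual internals.

```java
// Illustrative sketch (not Flink code): assemble a part file name from a
// configurable prefix/suffix plus the subtask index and a part counter.
public class PartFileNamer {
    private final String prefix; // e.g. "data" instead of a generic default
    private final String suffix; // e.g. ".parquet" so Athena/Presto can sniff the type

    public PartFileNamer(String prefix, String suffix) {
        this.prefix = prefix;
        this.suffix = suffix;
    }

    public String nameFor(int subtaskIndex, long partCounter) {
        return prefix + "-" + subtaskIndex + "-" + partCounter + suffix;
    }

    public static void main(String[] args) {
        PartFileNamer namer = new PartFileNamer("data", ".parquet");
        System.out.println(namer.nameFor(3, 7)); // prints data-3-7.parquet
    }
}
```

With an empty suffix and a plain prefix, the scheme degenerates to the familiar `part-<subtask>-<counter>` style of default part file names.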





[jira] [Assigned] (FLINK-13606) PrometheusReporterEndToEndITCase.testReporter unstable on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-13606:
-

Assignee: Chesnay Schepler

> PrometheusReporterEndToEndITCase.testReporter unstable on Travis
> 
>
> Key: FLINK-13606
> URL: https://issues.apache.org/jira/browse/FLINK-13606
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{PrometheusReporterEndToEndITCase.testReporter}} is unstable on Travis. 
> It fails with {{java.io.IOException: Process failed due to timeout.}}
> https://api.travis-ci.org/v3/job/568280216/log.txt





[GitHub] [flink] asfgit merged pull request #9376: [1.9][FLINK-13452][runtime] Ensure to fail global when exception happens during reseting tasks of regions

2019-08-07 Thread GitBox
asfgit merged pull request #9376: [1.9][FLINK-13452][runtime] Ensure to fail 
global when exception happens during reseting tasks of regions
URL: https://github.com/apache/flink/pull/9376
 
 
   




[jira] [Commented] (FLINK-13607) TPC-H end-to-end test (Blink planner) failed on Travis

2019-08-07 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901848#comment-16901848
 ] 

Kurt Young commented on FLINK-13607:


I think it's related to FLINK-13592; I will wait and see whether the next cron job 
passes (there has been no cron job run since FLINK-13592 was merged). 

> TPC-H end-to-end test (Blink planner) failed on Travis
> --
>
> Key: FLINK-13607
> URL: https://issues.apache.org/jira/browse/FLINK-13607
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Kurt Young
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{TPC-H end-to-end test (Blink planner)}} fail consistently on Travis with
> {code}
> Generating test data...
> Error: Could not find or load main class 
> org.apache.flink.table.tpch.TpchDataGenerator
> {code}
> https://api.travis-ci.org/v3/job/568280203/log.txt
> https://api.travis-ci.org/v3/job/568280209/log.txt
> https://api.travis-ci.org/v3/job/568280215/log.txt





[GitHub] [flink] flinkbot edited a comment on issue #9322: [FLINK-13471][table] Add FlatAggregate support to stream Table API(blink planner)

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9322: [FLINK-13471][table] Add 
FlatAggregate support to stream Table API(blink planner)
URL: https://github.com/apache/flink/pull/9322#issuecomment-517273321
 
 
   ## CI report:
   
   * e3d5e780a11ddd27ab6721772925db730d95c75c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121590025)
   * 6a6b137eb8d9548779fb2ed3867d7ce4e9077867 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/121843577)
   * 39e62d45a03e4719b6ceada80f70af30a04b4d8c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121843720)
   * 2e6d1ed43bd5cfaacfa04d4030186dbf9540b694 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/19927)
   




[GitHub] [flink] kl0u edited a comment on issue #9228: [FLINK-13428][Connectors / FileSystem] allow part file names to be configurable

2019-08-07 Thread GitBox
kl0u edited a comment on issue #9228: [FLINK-13428][Connectors / FileSystem] 
allow part file names to be configurable
URL: https://github.com/apache/flink/pull/9228#issuecomment-518977800
 
 
   No need @eskabetxe , I can do that.




[GitHub] [flink] dawidwys commented on issue #9247: [FLINK-13386][web]: Fix frictions in the new default Web Frontend

2019-08-07 Thread GitBox
dawidwys commented on issue #9247: [FLINK-13386][web]: Fix frictions in the new 
default Web Frontend
URL: https://github.com/apache/flink/pull/9247#issuecomment-518985417
 
 
   Hi @vthinkxie 
   Thank you for the PR and really sorry it took me so long to check it.
   
   All points besides `can't see watermarks for all operators at once` work as 
expected.
   What I meant is that I miss a view like this one from the old frontend:
   
![watermarks](https://user-images.githubusercontent.com/6242259/62604962-6b6a3200-b8f9-11e9-92a9-357fc5aec735.png)
   This is not a pressing issue though. We might just see if users complain 
about this view.
   
   I can't really check the PR contents as I am not very familiar with the 
frontend technology. Will try to ask a friend for that.




[jira] [Created] (FLINK-13608) Update upgrade compatibility table (docs/ops/upgrading.md) for 1.9.0

2019-08-07 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-13608:
-

 Summary: Update upgrade compatibility table 
(docs/ops/upgrading.md) for 1.9.0
 Key: FLINK-13608
 URL: https://issues.apache.org/jira/browse/FLINK-13608
 Project: Flink
  Issue Type: Task
  Components: Documentation
Affects Versions: 1.9.0
Reporter: Till Rohrmann
 Fix For: 1.9.0


Update upgrade compatibility table (docs/ops/upgrading.md) for 1.9.0





[jira] [Resolved] (FLINK-13452) Pipelined region failover strategy does not recover Job if checkpoint cannot be read

2019-08-07 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao resolved FLINK-13452.
--
Resolution: Fixed

1.9: b931e9c10b231ed1823fe6a97bccf73bb835dbc2
1.10: 9828f5317cd4130d8518df5762bdd479b294b272

> Pipelined region failover strategy does not recover Job if checkpoint cannot 
> be read
> 
>
> Key: FLINK-13452
> URL: https://issues.apache.org/jira/browse/FLINK-13452
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Gary Yao
>Assignee: Yun Tang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
> Attachments: jobmanager.log
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The job does not recover if a checkpoint cannot be read and 
> {{jobmanager.execution.failover-strategy}} is set to _"region"_. 
> *Analysis*
> The {{RestartCallback}} created by 
> {{AdaptedRestartPipelinedRegionStrategyNG}} throws a {{RuntimeException}} if 
> no checkpoints could be read. When the restart is invoked in a separate 
> thread pool, the exception is swallowed. See:
> [https://github.com/apache/flink/blob/21621fbcde534969b748f21e9f8983e3f4e0fb1d/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/AdaptedRestartPipelinedRegionStrategyNG.java#L117-L119]
> [https://github.com/apache/flink/blob/21621fbcde534969b748f21e9f8983e3f4e0fb1d/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/restart/FixedDelayRestartStrategy.java#L65]
> *Expected behavior*
>  * Job should restart
>  
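The "exception is swallowed" observation above is a general property of `ExecutorService`: a throwable raised inside a submitted task is captured in the returned `Future` and surfaces only when that future is queried. A minimal standalone sketch (plain JDK, no Flink code; the message string is illustrative):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Main {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // The exception does not propagate anywhere on its own:
        // submit() stores it inside the Future instead.
        Future<?> restart = pool.submit((Runnable) () -> {
            throw new RuntimeException("could not read checkpoint");
        });
        try {
            restart.get(); // only here does the failure become visible
        } catch (ExecutionException e) {
            System.out.println("caught: " + e.getCause().getMessage());
        }
        pool.shutdown();
    }
}
```

If nothing ever calls `get()` on the future (as in the restart path linked above), the failure is silently dropped, which matches the observed behavior of the job never restarting.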





[GitHub] [flink] flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix 
DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#issuecomment-517768053
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7b4a9226cfffc1ea505c8d20b5b5f9ce8c5d2113 (Wed Aug 07 
08:59:34 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13612) StateDescriptor Loading Error NPE at FlinkKafkaProducer011 with High Concurrency Initialization

2019-08-07 Thread weiyunqing (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weiyunqing updated FLINK-13612:
---
Description: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011#NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR

The NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR state variable in 
FlinkKafkaProducer011 is declared static.

A NullPointerException occurs under high concurrency when the 
initializeSerializerUnlessSet method is executed.

 

java.lang.NullPointerException at 
org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:264)
 at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getListState(DefaultOperatorStateBackend.java:730)
 at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getUnionListState(DefaultOperatorStateBackend.java:271)
 at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.initializeState(FlinkKafkaProducer011.java:837)
 at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
 at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
 at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
 at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:281)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:730)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:295) 
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:720) at 
java.lang.Thread.run(Thread.java:748)

  was:
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011#NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR

The NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR state variable in FlinkKafkaProducer011 is declared static.

An NPE occurs under high concurrency when the initializeSerializerUnlessSet method is executed.

 

java.lang.NullPointerException at 
org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:264)
 at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getListState(DefaultOperatorStateBackend.java:730)
 at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getUnionListState(DefaultOperatorStateBackend.java:271)
 at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.initializeState(FlinkKafkaProducer011.java:837)
 at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
 at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
 at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
 at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:281)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:730)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:295) 
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:720) at 
java.lang.Thread.run(Thread.java:748)


> StateDescriptor Loading Error NPE at FlinkKafkaProducer011 with High Concurrency Initialization
> --
>
> Key: FLINK-13612
> URL: https://issues.apache.org/jira/browse/FLINK-13612
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: shaded-7.0, 1.6.3, 1.6.4, 1.7.2
>Reporter: weiyunqing
>Priority: Major
> Fix For: shaded-7.0, 1.6.3, 1.6.4, 1.7.2
>
>
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011#NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR
> The NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR state variable in 
> FlinkKafkaProducer011 is declared static.
> A NullPointerException occurs under high concurrency when the 
> initializeSerializerUnlessSet method is executed.
>  
> java.lang.NullPointerException at 
> org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:264)
>  at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.getListState(DefaultOperatorStateBackend.java:730)
>  at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.getUnionListState(DefaultOperatorStateBackend.java:271)
>  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.initializeState(FlinkKafkaProducer011.java:837)
>  at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
>  at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
>  at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(Abstra

[jira] [Updated] (FLINK-13612) StateDescriptor Loading Error NPE at FlinkKafkaProducer011 with High Concurrency Initialization

2019-08-07 Thread weiyunqing (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weiyunqing updated FLINK-13612:
---
Summary: StateDescriptor Loading Error NPE at FlinkKafkaProducer011 with 
High Concurrency Initialization  (was, in Chinese: StateDescriptor loading fails 
with NPE when FlinkKafkaProducer011 is initialized under high concurrency)

> StateDescriptor Loading Error NPE at FlinkKafkaProducer011 with High 
> Concurrency Initialization
> ---
>
> Key: FLINK-13612
> URL: https://issues.apache.org/jira/browse/FLINK-13612
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: shaded-7.0, 1.6.3, 1.6.4, 1.7.2
>Reporter: weiyunqing
>Priority: Major
> Fix For: shaded-7.0, 1.6.3, 1.6.4, 1.7.2
>
>
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011#NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR
> The NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR state variable in 
> FlinkKafkaProducer011 is declared static.
> A NullPointerException occurs under high concurrency when the 
> initializeSerializerUnlessSet method is executed.
>  
> java.lang.NullPointerException at 
> org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:264)
>  at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.getListState(DefaultOperatorStateBackend.java:730)
>  at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.getUnionListState(DefaultOperatorStateBackend.java:271)
>  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.initializeState(FlinkKafkaProducer011.java:837)
>  at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
>  at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
>  at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:281)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:730)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:295)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:720) at 
> java.lang.Thread.run(Thread.java:748)
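To illustrate why a shared, lazily-initialized descriptor needs guarding under concurrent callers, here is a standalone sketch of double-checked locking applied to an `initializeSerializerUnlessSet`-style method. `Descriptor` is illustrative only and deliberately much simpler than Flink's actual `StateDescriptor`.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: a shared, lazily-initialized field guarded so that
// concurrent callers either observe a fully constructed value or
// initialize it exactly once, never a half-initialized state.
class Descriptor {
    private volatile Object serializer;                 // lazily created
    private final AtomicInteger initializations = new AtomicInteger();

    void initializeSerializerUnlessSet() {
        if (serializer == null) {                       // fast path, no lock
            synchronized (this) {
                if (serializer == null) {               // re-check under lock
                    initializations.incrementAndGet();
                    serializer = new Object();
                }
            }
        }
    }

    int initCount() {
        return initializations.get();
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        Descriptor shared = new Descriptor();           // stands in for a static field
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 100; i++) {
            pool.submit(shared::initializeSerializerUnlessSet);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(shared.initCount());         // prints 1
    }
}
```

The `volatile` qualifier plus the second null check under the lock is what makes this pattern safe; without them, a static descriptor shared across concurrently initializing producers can be observed in an inconsistent state, as the reported NPE suggests.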





[jira] [Comment Edited] (FLINK-13573) Merge SubmittedJobGraph into JobGraph

2019-08-07 Thread TisonKun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901884#comment-16901884
 ] 

TisonKun edited comment on FLINK-13573 at 8/7/19 8:44 AM:
--

Thanks for your attention [~till.rohrmann].

For backwards compatibility, it would only be a problem if a cluster running a 
previous version (say 1.9) persisted job graphs ({{SubmittedJobGraph}}) while 
another, standby cluster with the same cluster-id (this is only valid in 
standalone mode) ran a newer version (say 1.10) that expected job graphs as 
{{JobGraph}}.

In reality, users would rarely deploy clusters of different versions for the 
same jobs. And if a user wants to move running jobs to a new cluster, the 
formal way is to first cancel with a savepoint and then submit the job to the 
new cluster from that savepoint. That way the job graph is persisted by the new 
dispatcher (reasonably with a new cluster-id, hence a new namespace).

Besides, we previously had a commit (FLINK-11649) that broke serialization 
compatibility, and so far no user has reported it as an issue. Thus I think we 
can merge {{SubmittedJobGraph}} into {{JobGraph}} without fear of breaking 
backwards compatibility. Again, the formal way to move a running job to a new 
cluster is to first cancel with a savepoint and then submit the job to the new 
cluster from that savepoint.


was (Author: tison):
Thanks for your attention [~till.rohrmann].

For backwards compatibility, it would be a problem if

a cluster ran with a previous version(said 1.9) persisted job 
graphs(SubmittedJobGraph)
while another cluster(standby) ran with the same cluster-id(this is only valid 
in standalone mode) with a new version(said 1.10) expect job graphs as 
{{JobGraph}}

In reality, user would rarely run clusters with different version for the same 
jobs. And if user want to execute running jobs with a new cluster, the formal 
way is first cancel with savepoint and submit the job to the new cluster with 
savepoint. In this way the job graph will be submitted by the new 
dispatcher(should be with a new cluster-id reasonably, hence a new namespace).

Besides, previously we have this commit(FLINK-11649) which would break 
serialize compatibility and so far there is no user report it as an issue. Thus 
I think we can merge {{SubmittedJobGraph}} into {{JobGraph}} without afraid of 
backwards compatibility. Again, the formal way to execute a running job with a 
new cluster is first cancel with savepoint and submit the job to the new 
cluster with savepoint.

> Merge SubmittedJobGraph into JobGraph
> -
>
> Key: FLINK-13573
> URL: https://issues.apache.org/jira/browse/FLINK-13573
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: TisonKun
>Priority: Major
> Fix For: 1.10.0
>
>
> As time goes on, {{SubmittedJobGraph}} becomes a thin wrapper of {{JobGraph}} 
> without any additional information. It is reasonable that we merge 
> {{SubmittedJobGraph}} into {{JobGraph}} and use only {{JobGraph}}. 
> WDYT? cc [~till.rohrmann] [~GJL]





[jira] [Updated] (FLINK-13604) All kinds of problems when converting from LogicalType to DataType

2019-08-07 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13604:

Fix Version/s: (was: 1.10.0)
   1.9.0

> All kinds of problems when converting from LogicalType to DataType
> --
>
> Key: FLINK-13604
> URL: https://issues.apache.org/jira/browse/FLINK-13604
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Danny Chan
>Priority: Critical
> Fix For: 1.9.0
>
>
> For the Blink planner:
> # Time(3) is converted to Time; the precision is lost.
> # ROW<`f0` INT NOT NULL, `f1` BOOLEAN> is converted to ROW<`f0` INT, `f1` 
> BOOLEAN>; the nullability attribute is lost.
> The conversion code is:
> {code:java}
> LogicalTypeDataTypeConverter.fromLogicalTypeToDataType(FlinkTypeFactory.toLogicalType(relType));
> {code}
> For the Flink planner:
> # Every Char type is converted to the String type, which is totally wrong.
> # Every decimal type is converted to Legacy(BigDecimal), which is confusing.
> The conversion code is:
> {code:java}
> TypeConversions.fromLegacyInfoToDataType(FlinkTypeFactory.toTypeInfo(relType))
> {code}
> Please see the test 
> SqlToOperationConverterTest#testCreateTableWithFullDataTypes.
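To illustrate the bug class outside of Flink: a minimal Python sketch (all names here are hypothetical, not Flink API) of why a type converter must carry precision and nullability through a round trip, and how a converter that keeps only the type name fails that check:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LogicalType:
    name: str            # e.g. "TIME"
    precision: int = 0   # fractional-second digits for TIME(p)
    nullable: bool = True

def lossy_convert(t: LogicalType) -> LogicalType:
    # Buggy converter: keeps only the type name,
    # like TIME(3) -> TIME and INT NOT NULL -> INT.
    return LogicalType(t.name)

def faithful_convert(t: LogicalType) -> LogicalType:
    # Correct converter: carries precision and nullability through.
    return LogicalType(t.name, t.precision, t.nullable)

time3 = LogicalType("TIME", precision=3, nullable=False)
assert lossy_convert(time3) != time3     # precision and nullability lost
assert faithful_convert(time3) == time3  # round trip preserved
```

A round-trip equality assertion like the last line is the cheapest regression test for this class of converter bug.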





[jira] [Commented] (FLINK-13604) All kinds of problems when converting from LogicalType to DataType

2019-08-07 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901890#comment-16901890
 ] 

Jark Wu commented on FLINK-13604:
-

Will you take this issue [~danny0405]?

> All kinds of problems when converting from LogicalType to DataType
> --
>
> Key: FLINK-13604
> URL: https://issues.apache.org/jira/browse/FLINK-13604
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Danny Chan
>Priority: Critical
> Fix For: 1.9.0
>
>
> For the Blink planner:
> # Time(3) is converted to Time; the precision is lost.
> # ROW<`f0` INT NOT NULL, `f1` BOOLEAN> is converted to ROW<`f0` INT, `f1` 
> BOOLEAN>; the nullability attribute is lost.
> The conversion code is:
> {code:java}
> LogicalTypeDataTypeConverter.fromLogicalTypeToDataType(FlinkTypeFactory.toLogicalType(relType));
> {code}
> For the Flink planner:
> # Every Char type is converted to the String type, which is totally wrong.
> # Every decimal type is converted to Legacy(BigDecimal), which is confusing.
> The conversion code is:
> {code:java}
> TypeConversions.fromLegacyInfoToDataType(FlinkTypeFactory.toTypeInfo(relType))
> {code}
> Please see the test 
> SqlToOperationConverterTest#testCreateTableWithFullDataTypes.





[GitHub] [flink] flinkbot edited a comment on issue #9370: [FLINK-13594][python] Improve the 'from_element' method of flink python api to apply to blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9370: [FLINK-13594][python] Improve the 
'from_element' method of flink python api to apply to blink planner
URL: https://github.com/apache/flink/pull/9370#issuecomment-518545434
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 57c204232f2065a559740b58cc5f954940ec587b (Wed Aug 07 
09:03:38 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] WeiZhong94 commented on issue #9370: [FLINK-13594][python] Improve the 'from_element' method of flink python api to apply to blink planner

2019-08-07 Thread GitBox
WeiZhong94 commented on issue #9370: [FLINK-13594][python] Improve the 
'from_element' method of flink python api to apply to blink planner
URL: https://github.com/apache/flink/pull/9370#issuecomment-519010352
 
 
   @hequn8128 Thanks for your review! I have addressed your comments.




[jira] [Commented] (FLINK-13599) Kinesis end-to-end test failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901902#comment-16901902
 ] 

Till Rohrmann commented on FLINK-13599:
---

I would like to keep the priority of this issue as critical because it is a 
test instability (independent of the cause). We should try to resolve test 
instabilities asap since a stitch in time saves nine!

> Kinesis end-to-end test failed on Travis
> 
>
> Key: FLINK-13599
> URL: https://issues.apache.org/jira/browse/FLINK-13599
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis, Tests
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{Kinesis end-to-end test}} failed on Travis with 
> {code}
> 2019-08-06 08:48:20,177 ERROR org.apache.flink.client.cli.CliFrontend 
>   - Error while running the command.
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Unable to execute HTTP request: Connect to localhost:4567 
> [localhost/127.0.0.1] failed: Connection refused (Connection refused)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: 
> Unable to execute HTTP request: Connect to localhost:4567 
> [localhost/127.0.0.1] failed: Connection refused (Connection refused)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1116)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1066)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2388)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2364)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.executeDescribeStream(AmazonKinesisClient.java:754)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.describeStream(AmazonKinesisClient.java:729)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.describeStream(AmazonKinesisClient.java:766)
>   at 
> org.apache.flink.streaming.kinesis.test.KinesisPubsubClient.createTopic(KinesisPubsubClient.java:63)
>   at 
> org.apache.flink.streaming.kinesis.test.KinesisExampleTest.main(KinesisExampleTest.java:57)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>   ... 9 more
> Caus

[GitHub] [flink] flinkbot edited a comment on issue #9370: [FLINK-13594][python] Improve the 'from_element' method of flink python api to apply to blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9370: [FLINK-13594][python] Improve the 
'from_element' method of flink python api to apply to blink planner
URL: https://github.com/apache/flink/pull/9370#issuecomment-518545434
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 57c204232f2065a559740b58cc5f954940ec587b (Wed Aug 07 
09:05:40 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13599) Kinesis end-to-end test failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-13599:
--
Priority: Critical  (was: Minor)

> Kinesis end-to-end test failed on Travis
> 
>
> Key: FLINK-13599
> URL: https://issues.apache.org/jira/browse/FLINK-13599
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis, Tests
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{Kinesis end-to-end test}} failed on Travis with 
> {code}
> 2019-08-06 08:48:20,177 ERROR org.apache.flink.client.cli.CliFrontend 
>   - Error while running the command.
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Unable to execute HTTP request: Connect to localhost:4567 
> [localhost/127.0.0.1] failed: Connection refused (Connection refused)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: 
> Unable to execute HTTP request: Connect to localhost:4567 
> [localhost/127.0.0.1] failed: Connection refused (Connection refused)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1116)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1066)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2388)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2364)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.executeDescribeStream(AmazonKinesisClient.java:754)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.describeStream(AmazonKinesisClient.java:729)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.describeStream(AmazonKinesisClient.java:766)
>   at 
> org.apache.flink.streaming.kinesis.test.KinesisPubsubClient.createTopic(KinesisPubsubClient.java:63)
>   at 
> org.apache.flink.streaming.kinesis.test.KinesisExampleTest.main(KinesisExampleTest.java:57)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>   ... 9 more
> Caused by: 
> org.apache.flink.kinesis.shaded.org.apache.http.conn.HttpHostConnectException:
>  Connect to localhost:4567 [localhost/127.0.0.1] failed: Connection refused 
> (Connection refused)
>   at 
> org.apache.flink.kin

[jira] [Comment Edited] (FLINK-13599) Kinesis end-to-end test failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901902#comment-16901902
 ] 

Till Rohrmann edited comment on FLINK-13599 at 8/7/19 9:04 AM:
---

I would like to keep the priority of this issue as critical because it is a 
test instability (independent of the cause). We should try to resolve test 
instabilities asap since a stitch in time saves nine!

Maybe we need to add a retry loop for setting up end-to-end tests in order to 
harden against outages of external systems.
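A retry loop of the kind suggested here could be sketched as follows in Python (the helper name, backoff policy, and the simulated service are illustrative assumptions, not part of the Flink end-to-end test scripts):

```python
import time

def retry(action, attempts=5, initial_backoff=1.0):
    """Run `action` until it succeeds or `attempts` are exhausted,
    sleeping with exponential backoff between tries."""
    backoff = initial_backoff
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except Exception:
            if attempt == attempts:
                raise  # give up: surface the last failure
            time.sleep(backoff)
            backoff *= 2  # exponential backoff

# Demo: a service (hypothetical) that refuses the first two connections.
calls = {"n": 0}
def start_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionRefusedError("Connection refused")
    return "up"

assert retry(start_service, attempts=5, initial_backoff=0.01) == "up"
assert calls["n"] == 3  # two failures, then success on the third try
```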


was (Author: till.rohrmann):
I would like to keep the priority of this issue as critical because it is a 
test instability (independent of the cause). We should try to resolve test 
instabilities asap since a stitch in time saves nine!

> Kinesis end-to-end test failed on Travis
> 
>
> Key: FLINK-13599
> URL: https://issues.apache.org/jira/browse/FLINK-13599
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis, Tests
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{Kinesis end-to-end test}} failed on Travis with 
> {code}
> 2019-08-06 08:48:20,177 ERROR org.apache.flink.client.cli.CliFrontend 
>   - Error while running the command.
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Unable to execute HTTP request: Connect to localhost:4567 
> [localhost/127.0.0.1] failed: Connection refused (Connection refused)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: 
> Unable to execute HTTP request: Connect to localhost:4567 
> [localhost/127.0.0.1] failed: Connection refused (Connection refused)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1116)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1066)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2388)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2364)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.executeDescribeStream(AmazonKinesisClient.java:754)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.describeStream(AmazonKinesisClient.java:729)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.describeStream(AmazonKinesisClient.java:766)
>   at 
> org.apache.flink.streaming.kinesis.test.KinesisPubsubClient.createTopic(KinesisPubsubClient.java:63)
>   at 
> org.apache.flink.streaming.kinesis.test.KinesisExampleTest.main(KinesisExampleTest.java:57)
>   at sun.reflect.NativeMethodAccesso

[GitHub] [flink] flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix 
DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#issuecomment-517770642
 
 
   ## CI report:
   
   * 76704f271662b57cbe36679d3d249bcdd7fdf66a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121784366)
   * 7b4a9226cfffc1ea505c8d20b5b5f9ce8c5d2113 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122239651)
   




[GitHub] [flink] docete commented on a change in pull request #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
docete commented on a change in pull request #9331: 
[FLINK-13523][table-planner-blink] Verify and correct arithmetic function's 
semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#discussion_r311445726
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/agg/DeclarativeAggCodeGen.scala
 ##
 @@ -204,8 +204,13 @@ class DeclarativeAggCodeGen(
   }
 
   def getValue(generator: ExprCodeGenerator): GeneratedExpression = {
-val resolvedGetValueExpression = function.getValueExpression
+val expr = function.getValueExpression
   .accept(ResolveReference())
+val resolvedGetValueExpression = ApiExpressionUtils.unresolvedCall(
 
 Review comment:
   For DeclarativeAggregateFunction implementations, there is no guarantee that 
getResultType and getValueExpression (after type inference) produce the same 
type. If the types differ, a RuntimeException occurs.




[jira] [Created] (FLINK-13612) NPE when loading the StateDescriptor during highly concurrent initialization of FlinkKafkaProducer011

2019-08-07 Thread weiyunqing (JIRA)
weiyunqing created FLINK-13612:
--

 Summary: NPE when loading the StateDescriptor during highly 
concurrent initialization of FlinkKafkaProducer011
 Key: FLINK-13612
 URL: https://issues.apache.org/jira/browse/FLINK-13612
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.7.2, 1.6.4, 1.6.3, shaded-7.0
Reporter: weiyunqing
 Fix For: 1.7.2, 1.6.4, 1.6.3, shaded-7.0


org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011#NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR

The NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR state variable in 
FlinkKafkaProducer011 is declared static.

Under high concurrency, an NPE is thrown while the 
initializeSerializerUnlessSet method executes:

 

java.lang.NullPointerException at 
org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:264)
 at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getListState(DefaultOperatorStateBackend.java:730)
 at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getUnionListState(DefaultOperatorStateBackend.java:271)
 at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.initializeState(FlinkKafkaProducer011.java:837)
 at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
 at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
 at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
 at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:281)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:730)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:295) 
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:720) at 
java.lang.Thread.run(Thread.java:748)
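The hazard described above is unsynchronized lazy initialization of shared (static) state. A Python sketch of the safe pattern, with a lock guarding one-time initialization, follows; the class and method names only mirror the Flink names for readability and are not the actual Flink code:

```python
import threading

class LazySerializerHolder:
    """Shared descriptor state initialized at most once, even when many
    tasks call initialize_serializer_unless_set() concurrently."""
    def __init__(self):
        self._lock = threading.Lock()
        self._serializer = None
        self.init_count = 0  # instrumentation for the demo

    def initialize_serializer_unless_set(self):
        with self._lock:  # without this lock, concurrent callers race
            if self._serializer is None:
                self.init_count += 1
                self._serializer = object()  # stands in for the real serializer
        return self._serializer

shared = LazySerializerHolder()  # analogous to the static descriptor
threads = [threading.Thread(target=shared.initialize_serializer_unless_set)
           for _ in range(32)]
for t in threads: t.start()
for t in threads: t.join()
assert shared.init_count == 1  # initialized exactly once despite 32 callers
```

An alternative fix, which avoids shared mutable state entirely, is to make the descriptor a per-instance field instead of a static one.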





[jira] [Created] (FLINK-13611) Introduce an analyze-statistics utility to generate table & column statistics

2019-08-07 Thread godfrey he (JIRA)
godfrey he created FLINK-13611:
--

 Summary: Introduce an analyze-statistics utility to generate table & 
column statistics
 Key: FLINK-13611
 URL: https://issues.apache.org/jira/browse/FLINK-13611
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Planner
Reporter: godfrey he
 Fix For: 1.10.0


This issue aims to introduce a utility class to generate table & column 
statistics. The main steps are: 
1. generate the SQL, e.g. {{select approx_count_distinct(a) as ndv, count(1) - 
count(a) as nullCount, avg(char_length(a)) as avgLen, max(char_length(a)) as 
maxLen, max(a) as maxValue, min(a) as minValue, ... from MyTable }}
2. execute the query
3. convert the result to {{TableStats}} (maybe the source table is not a 
catalog table)
4. convert the {{TableStats}} to {{CatalogTableStatistics}} if needed

This issue does not involve DDL; however, the DDL could use this utility class 
once it's supported.





[GitHub] [flink] docete commented on a change in pull request #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
docete commented on a change in pull request #9331: 
[FLINK-13523][table-planner-blink] Verify and correct arithmetic function's 
semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#discussion_r311446493
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/agg/DeclarativeAggCodeGen.scala
 ##
 @@ -204,8 +204,13 @@ class DeclarativeAggCodeGen(
   }
 
   def getValue(generator: ExprCodeGenerator): GeneratedExpression = {
-val resolvedGetValueExpression = function.getValueExpression
+val expr = function.getValueExpression
   .accept(ResolveReference())
+val resolvedGetValueExpression = ApiExpressionUtils.unresolvedCall(
 
 Review comment:
   I will add comments for readability.




[jira] [Updated] (FLINK-13606) PrometheusReporterEndToEndITCase.testReporter unstable on Travis

2019-08-07 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-13606:
-
Priority: Minor  (was: Blocker)

> PrometheusReporterEndToEndITCase.testReporter unstable on Travis
> 
>
> Key: FLINK-13606
> URL: https://issues.apache.org/jira/browse/FLINK-13606
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Minor
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{PrometheusReporterEndToEndITCase.testReporter}} is unstable on Travis. 
> It fails with {{java.io.IOException: Process failed due to timeout.}}
> https://api.travis-ci.org/v3/job/568280216/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13609) StreamingFileSink - reset part counter on bucket change

2019-08-07 Thread Joao Boto (JIRA)
Joao Boto created FLINK-13609:
-

 Summary: StreamingFileSink - reset part counter on bucket change
 Key: FLINK-13609
 URL: https://issues.apache.org/jira/browse/FLINK-13609
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / FileSystem
Reporter: Joao Boto


When writing files using StreamingFileSink, we expect the part counter to 
reset to 0 on bucket change.

As an example:
 * using DateTimeBucketAssigner ({color:#6a8759}/MM/dd/HH{color}) 
 * and ten files per hour (for simplicity)

this will create the:
 * bucket 2019/08/07/00 with files partfile-0-0 to partfile-0-9
 * bucket 2019/08/07/01 with files partfile-0-10 to partfile-0-19
 * bucket 2019/08/07/02 with files partfile-0-20 to partfile-0-29

and we expect this:
 * bucket 2019/08/07/00 with files partfile-0-0 to partfile-0-9
 * bucket 2019/08/07/01 with files partfile-0-0 to partfile-0-9
 * bucket 2019/08/07/02 with files partfile-0-0 to partfile-0-9

 

[~kkl0u] I don't know if this is the expected behavior (or whether it can be configured)
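The expected reset semantics can be sketched with a small, language-agnostic model (Python here; `BucketWriter` and its fields are hypothetical illustration names, not the StreamingFileSink API):

```python
class BucketWriter:
    """Toy model of a sink that names part files per bucket."""

    def __init__(self, reset_on_bucket_change):
        self.reset_on_bucket_change = reset_on_bucket_change
        self.counter = 0
        self.current_bucket = None

    def write(self, bucket):
        # On bucket change, optionally reset the part counter to 0,
        # which is the behavior requested in this ticket.
        if bucket != self.current_bucket:
            self.current_bucket = bucket
            if self.reset_on_bucket_change:
                self.counter = 0
        name = f"{bucket}/partfile-0-{self.counter}"
        self.counter += 1
        return name

writer = BucketWriter(reset_on_bucket_change=True)
names = [writer.write("2019/08/07/00") for _ in range(2)]
names += [writer.write("2019/08/07/01") for _ in range(2)]
# -> ['2019/08/07/00/partfile-0-0', '2019/08/07/00/partfile-0-1',
#     '2019/08/07/01/partfile-0-0', '2019/08/07/01/partfile-0-1']
```

With `reset_on_bucket_change=False`, the second bucket instead gets `partfile-0-2` and `partfile-0-3`, which matches the currently observed behavior.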



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] 
Verify and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-517546275
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 48c66a9d7f5b1903fa3271fcfc2ce048ac25a45d (Wed Aug 07 
09:07:43 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13537) Changing Kafka producer pool size and scaling out may create overlapping transaction IDs

2019-08-07 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-13537:

Description: 
The Kafka producer's transaction IDs are only generated once when there was no 
previous state for that operator. In the case where we restore and increase 
parallelism (scale-out), some operators may not have previous state and create 
new IDs. Now, if we also reduce the {{poolSize}}, these new IDs may overlap 
with the old ones which should never happen! Similarly, a scale-in + increasing 
{{poolSize}} could lead to the same thing.

An easy "fix" for this would be to forbid changing the {{poolSize}}. We could 
potentially be a bit better by only forbidding changes that can lead to 
transaction ID overlaps which we can identify from the formulae that 
{{TransactionalIdsGenerator}} uses. This should probably be the first step 
which can also be back-ported to older Flink versions just in case.


On a side note, the current scheme also relies on the fact that the operator's 
list state distributes previous states during scale-out in a fashion that only 
the operators with the highest subtask indices do not get a previous state. 
This is somewhat "guaranteed" by {{OperatorStateStore#getListState()}} but I'm 
not sure whether we should actually rely on that there.

  was:
The Kafka producer's transaction IDs are only generated once when there was no 
previous state for that operator. In the case where we restore and increase 
parallelism (scale-out), some operators may not have previous state and create 
new IDs. Now, if we also reduce the poolSize, these new IDs may overlap with 
the old ones which should never happen!

On a side note, the current scheme also relies on the fact, that the operator's 
list state distributes previous states during scale-out in a fashion that only 
the operators with the highest subtask indices do not get a previous state. 
This is somewhat "guaranteed" by {{OperatorStateStore#getListState()}} but I'm 
not sure whether we should actually rely on that there.


> Changing Kafka producer pool size and scaling out may create overlapping 
> transaction IDs
> 
>
> Key: FLINK-13537
> URL: https://issues.apache.org/jira/browse/FLINK-13537
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.8.1, 1.9.0
>Reporter: Nico Kruber
>Priority: Major
>
> The Kafka producer's transaction IDs are only generated once when there was 
> no previous state for that operator. In the case where we restore and 
> increase parallelism (scale-out), some operators may not have previous state 
> and create new IDs. Now, if we also reduce the {{poolSize}}, these new IDs 
> may overlap with the old ones which should never happen! Similarly, a 
> scale-in + increasing {{poolSize}} could lead to the same thing.
> An easy "fix" for this would be to forbid changing the {{poolSize}}. We could 
> potentially be a bit better by only forbidding changes that can lead to 
> transaction ID overlaps which we can identify from the formulae that 
> {{TransactionalIdsGenerator}} uses. This should probably be the first step 
> which can also be back-ported to older Flink versions just in case.
> 
> On a side note, the current scheme also relies on the fact that the 
> operator's list state distributes previous states during scale-out in a 
> fashion that only the operators with the highest subtask indices do not get a 
> previous state. This is somewhat "guaranteed" by 
> {{OperatorStateStore#getListState()}} but I'm not sure whether we should 
> actually rely on that there.
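How the overlap can arise may be illustrated with a toy model (Python; the real {{TransactionalIdsGenerator}} formula differs, so the contiguous-block layout below is a simplifying assumption made only for illustration):

```python
def id_range(subtask_index, pool_size):
    # Hypothetical scheme: each subtask owns a contiguous block of
    # transaction IDs sized by the producer pool.
    start = subtask_index * pool_size
    return set(range(start, start + pool_size))

# Before rescaling: parallelism 2, pool size 5 -> subtask 1 owns IDs 5..9,
# and those IDs are restored from state after a rescale.
old_ids_subtask_1 = id_range(1, pool_size=5)

# After scale-out to parallelism 3 with pool size reduced to 3:
# the new subtask 2 has no previous state and generates fresh IDs
# using the *new* pool size.
new_ids_subtask_2 = id_range(2, pool_size=3)

# The restored IDs and the freshly generated IDs collide.
overlap = old_ids_subtask_1 & new_ids_subtask_2  # {6, 7, 8}
```

Two producers sharing a transaction ID can fence each other off in Kafka, which is why such overlaps should never happen.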



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13607) TPC-H end-to-end test (Blink planner) failed on Travis

2019-08-07 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901904#comment-16901904
 ] 

Jark Wu commented on FLINK-13607:
-

I triggered the TPC-H e2e test in my own travis, and it passed. 
https://travis-ci.org/wuchong/flink/builds/568742518

> TPC-H end-to-end test (Blink planner) failed on Travis
> --
>
> Key: FLINK-13607
> URL: https://issues.apache.org/jira/browse/FLINK-13607
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Kurt Young
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{TPC-H end-to-end test (Blink planner)}} fail consistently on Travis with
> {code}
> Generating test data...
> Error: Could not find or load main class 
> org.apache.flink.table.tpch.TpchDataGenerator
> {code}
> https://api.travis-ci.org/v3/job/568280203/log.txt
> https://api.travis-ci.org/v3/job/568280209/log.txt
> https://api.travis-ci.org/v3/job/568280215/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] 
Verify and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-517546275
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 48c66a9d7f5b1903fa3271fcfc2ce048ac25a45d (Wed Aug 07 
09:09:46 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13606) PrometheusReporterEndToEndITCase.testReporter unstable on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-13606:
--
Priority: Critical  (was: Minor)

> PrometheusReporterEndToEndITCase.testReporter unstable on Travis
> 
>
> Key: FLINK-13606
> URL: https://issues.apache.org/jira/browse/FLINK-13606
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> The {{PrometheusReporterEndToEndITCase.testReporter}} is unstable on Travis. 
> It fails with {{java.io.IOException: Process failed due to timeout.}}
> https://api.travis-ci.org/v3/job/568280216/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] docete commented on a change in pull request #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
docete commented on a change in pull request #9331: 
[FLINK-13523][table-planner-blink] Verify and correct arithmetic function's 
semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#discussion_r311448173
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/SplitAggregateRule.scala
 ##
 @@ -303,7 +304,19 @@ class SplitAggregateRule extends RelOptRule(
 aggGroupCount + index + avgAggCount + 1,
 finalAggregate.getRowType)
   avgAggCount += 1
-  relBuilder.call(FlinkSqlOperatorTable.DIVIDE, sumInputRef, 
countInputRef)
+  // TODO
+  val equals = relBuilder.call(
+FlinkSqlOperatorTable.EQUALS,
+countInputRef,
+relBuilder.getRexBuilder.makeBigintLiteral(JBigDecimal.valueOf(0)))
+  val falseT = relBuilder.call(FlinkSqlOperatorTable.DIVIDE, 
sumInputRef, countInputRef)
+  val trueT = relBuilder.cast(
+relBuilder.getRexBuilder.constantNull(), 
aggCall.`type`.getSqlTypeName)
+  relBuilder.call(
+FlinkSqlOperatorTable.IF,
 
 Review comment:
   We use SUM0 in this rule, which may run into zero / zero and throw a 
divide-by-zero exception.
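The guarded expression in the diff above returns NULL instead of evaluating 0 / 0. Its semantics can be sketched as follows (Python; `safe_avg` is an illustrative name, not planner code):

```python
def safe_avg(sum0, count):
    # Mirrors IF(count = 0, CAST(NULL AS ...), sum / count).
    # SUM0 yields 0 (not NULL) for empty groups, so count = 0 would
    # otherwise evaluate 0 / 0 and raise a division error.
    if count == 0:
        return None
    return sum0 / count

assert safe_avg(0, 0) is None   # empty group: NULL, no exception
assert safe_avg(10, 4) == 2.5
```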


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-13605) AsyncDataStreamITCase.testUnorderedWait failed on Travis

2019-08-07 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901840#comment-16901840
 ] 

Till Rohrmann edited comment on FLINK-13605 at 8/7/19 9:11 AM:
---

It seems to also affect the {{AsyncDataStreamITCase.testOrderedWait}}.

https://api.travis-ci.org/v3/job/568658124/log.txt


was (Author: till.rohrmann):
It seems to also affect the {{AsyncDataStreamITCase.testOrderedWai}}.

https://api.travis-ci.org/v3/job/568658124/log.txt

> AsyncDataStreamITCase.testUnorderedWait failed on Travis
> 
>
> Key: FLINK-13605
> URL: https://issues.apache.org/jira/browse/FLINK-13605
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.9.0
>Reporter: Kostas Kloudas
>Assignee: Biao Liu
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 0001-FLINK-13605.patch
>
>
> An instance of the failure can be found here 
> https://api.travis-ci.org/v3/job/568291353/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13611) Introduce analyze statistic utility to generate table & column statistics

2019-08-07 Thread godfrey he (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-13611:
---
Description: 
This issue aims to introduce a utility class to generate table & column 
statistics. The main steps include: 
1. generate sql, like {{ select approx_count_distinct(a) as ndv, count(1) - 
count(a) as nullCount, avg(char_length(a)) as avgLen, max(char_length(a)) as 
maxLen, max(a) as maxValue, min(a) as minValue, ... from MyTable }}
2. execute the query
3. convert the result to {{TableStats}} (the source table may not be a 
catalog table)
4. convert {{TableStats}} to {{CatalogTableStatistics}} if needed

This issue does not involve DDL, however the DDL could use this utility class 
once it's supported.

  was:
this issue aims to introduce a utility class to generate table & column 
statistics, the main steps include: 
1. generate sql, like {{select approx_count_distinct(a) as ndv, count(1) - 
count(a) as nullCount, avg(char_length(a)) as avgLen, max(char_lenght(a)) as 
maxLen, max(a) as maxValue, min(a) as minValue, ... from MyTable }}
2. execute the query
3. convert to the result to {{TableStats}} (maybe the source table is not a 
catalog table)
4. convert to {{TableStats}} to {{CatalogTableStatistics}} if needed

This issue does not involve DDL, however the DDL could use this utility class 
once it's supported.


> Introduce analyze statistic utility to generate table & column statistics
> -
>
> Key: FLINK-13611
> URL: https://issues.apache.org/jira/browse/FLINK-13611
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.10.0
>
>
> This issue aims to introduce a utility class to generate table & column 
> statistics. The main steps include: 
> 1. generate sql, like {{ select approx_count_distinct(a) as ndv, count(1) - 
> count(a) as nullCount, avg(char_length(a)) as avgLen, max(char_length(a)) as 
> maxLen, max(a) as maxValue, min(a) as minValue, ... from MyTable }}
> 2. execute the query
> 3. convert the result to {{TableStats}} (the source table may not be a 
> catalog table)
> 4. convert {{TableStats}} to {{CatalogTableStatistics}} if needed
> This issue does not involve DDL, however the DDL could use this utility class 
> once it's supported.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13611) Introduce analyze statistic utility to generate table & column statistics

2019-08-07 Thread godfrey he (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-13611:
---
Description: 
This issue aims to introduce a utility class to generate table & column 
statistics. The main steps include: 
1. generate sql, like
{code:sql}
select approx_count_distinct(a) as ndv, count(1) - count(a) as nullCount, 
avg(char_length(a)) as avgLen, max(char_length(a)) as maxLen, max(a) as 
maxValue, min(a) as minValue, ... from MyTable
{code}

2. execute the query

3. convert the result to {{TableStats}} (the source table may not be a 
catalog table)

4. convert {{TableStats}} to {{CatalogTableStatistics}} if needed

This issue does not involve DDL, however the DDL could use this utility class 
once it's supported.

  was:
this issue aims to introduce a utility class to generate table & column 
statistics, the main steps include: 
1. generate sql, like {{ select approx_count_distinct(a) as ndv, count(1) - 
count(a) as nullCount, avg(char_length(a)) as avgLen, max(char_lenght(a)) as 
maxLen, max(a) as maxValue, min(a) as minValue, ... from MyTable }}
2. execute the query
3. convert to the result to {{TableStats}} (maybe the source table is not a 
catalog table)
4. convert to {{TableStats}} to {{CatalogTableStatistics}} if needed

This issue does not involve DDL, however the DDL could use this utility class 
once it's supported.


> Introduce analyze statistic utility to generate table & column statistics
> -
>
> Key: FLINK-13611
> URL: https://issues.apache.org/jira/browse/FLINK-13611
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.10.0
>
>
> This issue aims to introduce a utility class to generate table & column 
> statistics. The main steps include: 
> 1. generate sql, like
> {code:sql}
> select approx_count_distinct(a) as ndv, count(1) - count(a) as nullCount, 
> avg(char_length(a)) as avgLen, max(char_length(a)) as maxLen, max(a) as 
> maxValue, min(a) as minValue, ... from MyTable
> {code}
> 2. execute the query
> 3. convert the result to {{TableStats}} (the source table may not be a 
> catalog table)
> 4. convert {{TableStats}} to {{CatalogTableStatistics}} if needed
> This issue does not involve DDL, however the DDL could use this utility class 
> once it's supported.
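Step 1 of the proposal can be sketched as a small query builder (Python; `build_analyze_sql` is a hypothetical helper for illustration, not the proposed utility's actual API):

```python
def build_analyze_sql(table, columns):
    """Assemble the statistics-gathering query from step 1 above."""
    exprs = ["count(1) as rowCount"]
    for c in columns:
        # One aggregate per statistic the ticket lists for each column.
        exprs += [
            f"approx_count_distinct({c}) as {c}_ndv",
            f"count(1) - count({c}) as {c}_nullCount",
            f"avg(char_length({c})) as {c}_avgLen",
            f"max(char_length({c})) as {c}_maxLen",
            f"max({c}) as {c}_maxValue",
            f"min({c}) as {c}_minValue",
        ]
    return "select " + ", ".join(exprs) + f" from {table}"

sql = build_analyze_sql("MyTable", ["a"])
```

Steps 2-4 would then execute this single query and map each aliased column of the one-row result into a {{TableStats}} (and, if needed, a {{CatalogTableStatistics}}).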



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] Introduce type inference for hive functions in blink

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] 
Introduce type inference for hive functions in blink
URL: https://github.com/apache/flink/pull/9089#issuecomment-510488226
 
 
   ## CI report:
   
   * fb34a0f4245ddac5872ea77aad07887a6ff12d11 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118890132)
   * ba44069acdbd82261839605b5d363548dae81522 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054606)
   * 349f15d9e799ac9d316a02392d60495058fda4aa : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974318)
   * 8876d89f32920192e1d3615b588b72021dbc379a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974650)
   * 2e15e52dabcde02c6634063ada8d7b885252c16f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/19182)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13441) Add batch sql E2E test which runs with fewer slots than parallelism

2019-08-07 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901879#comment-16901879
 ] 

Timo Walther commented on FLINK-13441:
--

[~ykt836] we should run the TPC-H test with more than one task manager and a 
higher parallelism (maybe 4?). What do you think?

> Add batch sql E2E test which runs with fewer slots than parallelism
> ---
>
> Key: FLINK-13441
> URL: https://issues.apache.org/jira/browse/FLINK-13441
> Project: Flink
>  Issue Type: Test
>  Components: API / DataSet, Tests
>Reporter: Till Rohrmann
>Assignee: Alex
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We should adapt the existing batch E2E test to use the newly introduced 
> {{ScheduleMode#LAZY_FROM_SOURCES_WITH_BATCH_SLOT_REQUEST}} and verify that 
> the job runs on a cluster with fewer slots than the job's parallelism. In 
> order to make this work, we need to set the shuffles to be blocking via 
> {{ExecutionMode#BATCH}}. As a batch job we should use the 
> {{DataSetAllroundTestProgram}}.
>  *Update:* currently, the 
> {{ScheduleMode#LAZY_FROM_SOURCES_WITH_BATCH_SLOT_REQUEST}} option is set only 
> by table planner(s) and cannot be set (configured) for general purpose 
> (batch) job. As agreed offline, this ticket would add a new e2e test for 
> batch sql job instead of modifying {{DataSetAllroundTestProgram}}.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13611) Introduce analyze statistic utility to generate table & column statistics

2019-08-07 Thread godfrey he (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901880#comment-16901880
 ] 

godfrey he commented on FLINK-13611:


I would like to take this ticket.

> Introduce analyze statistic utility to generate table & column statistics
> -
>
> Key: FLINK-13611
> URL: https://issues.apache.org/jira/browse/FLINK-13611
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.10.0
>
>
> This issue aims to introduce a utility class to generate table & column 
> statistics. The main steps include: 
> 1. generate sql, like {{select approx_count_distinct(a) as ndv, count(1) - 
> count(a) as nullCount, avg(char_length(a)) as avgLen, max(char_length(a)) as 
> maxLen, max(a) as maxValue, min(a) as minValue, ... from MyTable }}
> 2. execute the query
> 3. convert the result to {{TableStats}} (the source table may not be a 
> catalog table)
> 4. convert {{TableStats}} to {{CatalogTableStatistics}} if needed
> This issue does not involve DDL, however the DDL could use this utility class 
> once it's supported.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9370: [FLINK-13594][python] Improve the 'from_element' method of flink python api to apply to blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9370: [FLINK-13594][python] Improve the 
'from_element' method of flink python api to apply to blink planner
URL: https://github.com/apache/flink/pull/9370#issuecomment-518549297
 
 
   ## CI report:
   
   * 8dc43f17e0bbd3c33e5fe021e1e5004d1a7bef7f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122064561)
   * a369eb77db5b697bef2822c8d65d8c24447fb14d : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122099516)
   * 5b48a8b1b08543f2c64d940646802c28b90cd168 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122106311)
   * e67bdbff8a6ebc5eb571b3d4c38f20bd1713c3d3 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122236086)
   * f5829c9daa5e17ad9b2ebcd28d9efa7b1e4c5d81 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122237011)
   * 57c204232f2065a559740b58cc5f954940ec587b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122240562)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] 
Verify and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-517546275
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 48c66a9d7f5b1903fa3271fcfc2ce048ac25a45d (Wed Aug 07 
09:13:51 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on issue #9274: [FLINK-13495][table-planner-blink] blink-planner should support varchar/char/decimal precision to connector

2019-08-07 Thread GitBox
JingsongLi commented on issue #9274: [FLINK-13495][table-planner-blink] 
blink-planner should support varchar/char/decimal precision to connector
URL: https://github.com/apache/flink/pull/9274#issuecomment-519013875
 
 
   @godfreyhe addressed your comments, please take a look again.
   @wuchong can you take a look too?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


  1   2   3   4   5   6   7   >