[GitHub] [flink] flinkbot edited a comment on pull request #12866: [FLINK-17425][blink-planner] supportsFilterPushDown rule in DynamicSource.

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12866:
URL: https://github.com/apache/flink/pull/12866#issuecomment-656576219


   
   ## CI report:
   
   * 21ff7c2a63f297e124f6040a72991c8438fa95af Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4547)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fsk119 commented on pull request #12908: [FLINK-18449][table sql/api]Kafka topic discovery & partition discove…

2020-07-15 Thread GitBox


fsk119 commented on pull request #12908:
URL: https://github.com/apache/flink/pull/12908#issuecomment-659150149


   @wuchong  CC







[GitHub] [flink] flinkbot edited a comment on pull request #12866: [FLINK-17425][blink-planner] supportsFilterPushDown rule in DynamicSource.

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12866:
URL: https://github.com/apache/flink/pull/12866#issuecomment-656576219


   
   ## CI report:
   
   * 5eb302884e390b6099859abcd7955e7f1c4e034e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4517)
 
   * 21ff7c2a63f297e124f6040a72991c8438fa95af Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4547)
 
   
   







[jira] [Updated] (FLINK-18611) Include `m` as time unit expression for MINUTE in SQL

2020-07-15 Thread Q Kang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Q Kang updated FLINK-18611:
---
Description: 
According to the documentation of FileSystem SQL Connector, if we try to create 
a table with the following parameter:
{code:java}
'sink.partition-commit.delay'='1 m'
{code}
The program will fail with exception messages as below:
{code:java}
java.lang.IllegalArgumentException: Could not parse value '1 m' for key 
'sink.partition-commit.delay'.java.lang.IllegalArgumentException: Could not 
parse value '1 m' for key 'sink.partition-commit.delay'. 
..
Caused by: java.lang.IllegalArgumentException: Time interval unit label 'm' 
does not match any of the recognized units: DAYS: (d | day | days), HOURS: (h | 
hour | hours), MINUTES: (min | minute | minutes), SECONDS: (s | sec | secs | 
second | seconds), MILLISECONDS: (ms | milli | millis | millisecond | 
milliseconds), MICROSECONDS: (µs | micro | micros | microsecond | 
microseconds), NANOSECONDS: (ns | nano | nanos | nanosecond | nanoseconds){code}
In org/apache/flink/util/TimeUtils.java#TimeUnit, the definition of MINUTE 
seems not to agree with the other units.
{code:java}
DAYS(ChronoUnit.DAYS, singular("d"), plural("day")),
HOURS(ChronoUnit.HOURS, singular("h"), plural("hour")),
MINUTES(ChronoUnit.MINUTES, singular("min"), plural("minute")),
SECONDS(ChronoUnit.SECONDS, singular("s"), plural("sec"), plural("second")),
...{code}
That is, `m` is not a valid expression for MINUTE, while `d`/`h`/`s` etc. are all 
valid expressions for DAY/HOUR/SECOND, which might be a little 
confusing to users.

 

  was:
According to the documentation of FileSystem SQL Connector, if we try to create 
a table with the following parameter:
{code:java}
'sink.partition-commit.delay'='1 m'
{code}
The program will fail with exception messages as below:
{code:java}
java.lang.IllegalArgumentException: Could not parse value '1 m' for key 
'sink.partition-commit.delay'.java.lang.IllegalArgumentException: Could not 
parse value '1 m' for key 'sink.partition-commit.delay'. 
..
Caused by: java.lang.IllegalArgumentException: Time interval unit label 'm' 
does not match any of the recognized units: DAYS: (d | day | days), HOURS: (h | 
hour | hours), MINUTES: (min | minute | minutes), SECONDS: (s | sec | secs | 
second | seconds), MILLISECONDS: (ms | milli | millis | millisecond | 
milliseconds), MICROSECONDS: (µs | micro | micros | microsecond | 
microseconds), NANOSECONDS: (ns | nano | nanos | nanosecond | nanoseconds){code}
In org/apache/flink/util/TimeUtils.java#TimeUnit, the definition of MINUTE 
seems not to agree with the other units.
{code:java}
DAYS(ChronoUnit.DAYS, singular("d"), plural("day")),
HOURS(ChronoUnit.HOURS, singular("h"), plural("hour")),
MINUTES(ChronoUnit.MINUTES, singular("min"), plural("minute")),
SECONDS(ChronoUnit.SECONDS, singular("s"), plural("sec"), plural("second")),
...{code}
That is, `m` is not a valid expression for MINUTE, while `d`/`h`/`s` etc. are all 
valid expressions for DAY/HOUR/SECOND, which might be a little 
confusing to users.

 


> Include `m` as time unit expression for MINUTE in SQL
> -
>
> Key: FLINK-18611
> URL: https://issues.apache.org/jira/browse/FLINK-18611
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.11.0
>Reporter: Q Kang
>Priority: Minor
>
> According to the documentation of FileSystem SQL Connector, if we try to 
> create a table with the following parameter:
> {code:java}
> 'sink.partition-commit.delay'='1 m'
> {code}
> The program will fail with exception messages as below:
> {code:java}
> java.lang.IllegalArgumentException: Could not parse value '1 m' for key 
> 'sink.partition-commit.delay'.java.lang.IllegalArgumentException: Could not 
> parse value '1 m' for key 'sink.partition-commit.delay'. 
> ..
> Caused by: java.lang.IllegalArgumentException: Time interval unit label 'm' 
> does not match any of the recognized units: DAYS: (d | day | days), HOURS: (h 
> | hour | hours), MINUTES: (min | minute | minutes), SECONDS: (s | sec | secs 
> | second | seconds), MILLISECONDS: (ms | milli | millis | millisecond | 
> milliseconds), MICROSECONDS: (µs | micro | micros | microsecond | 
> microseconds), NANOSECONDS: (ns | nano | nanos | nanosecond | 
> nanoseconds){code}
> In org/apache/flink/util/TimeUtils.java#TimeUnit, the definition of MINUTE 
> seems not to agree with the other units.
> {code:java}
> DAYS(ChronoUnit.DAYS, singular("d"), plural("day")),
> HOURS(ChronoUnit.HOURS, singular("h"), plural("hour")),
> MINUTES(ChronoUnit.MINUTES, singular("min"), plural("minute")),
> SECONDS(ChronoUnit.SECONDS, singular("s"), plural("sec"), plural("second")),
> ...{code}
> That is, `m` is not a valid expression for 

[jira] [Updated] (FLINK-18611) Include `m` as time unit expression for MINUTE in SQL

2020-07-15 Thread Q Kang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Q Kang updated FLINK-18611:
---
Description: 
According to the documentation of FileSystem SQL Connector, if we try to create 
a table with the following parameter:
{code:java}
'sink.partition-commit.delay'='1 m'
{code}
The program will fail with exception messages as below:
{code:java}
java.lang.IllegalArgumentException: Could not parse value '1 m' for key 
'sink.partition-commit.delay'.java.lang.IllegalArgumentException: Could not 
parse value '1 m' for key 'sink.partition-commit.delay'. 
..
Caused by: java.lang.IllegalArgumentException: Time interval unit label 'm' 
does not match any of the recognized units: DAYS: (d | day | days), HOURS: (h | 
hour | hours), MINUTES: (min | minute | minutes), SECONDS: (s | sec | secs | 
second | seconds), MILLISECONDS: (ms | milli | millis | millisecond | 
milliseconds), MICROSECONDS: (µs | micro | micros | microsecond | 
microseconds), NANOSECONDS: (ns | nano | nanos | nanosecond | nanoseconds){code}
In org/apache/flink/util/TimeUtils.java#TimeUnit, the definition of MINUTE 
seems not to agree with the other units.
{code:java}
DAYS(ChronoUnit.DAYS, singular("d"), plural("day")),
HOURS(ChronoUnit.HOURS, singular("h"), plural("hour")),
MINUTES(ChronoUnit.MINUTES, singular("min"), plural("minute")),
SECONDS(ChronoUnit.SECONDS, singular("s"), plural("sec"), plural("second")),
...{code}
That is, `m` is not a valid expression for MINUTE, while `d`/`h`/`s` etc. are all 
valid expressions for DAY/HOUR/SECOND, which might be a little 
confusing to users.

 

  was:
According to the 
[documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/filesystem.html]
 of FileSystem SQL Connector, if we try to create a table with the following 
parameter:
{code:java}
'sink.partition-commit.delay'='1 m'
{code}
The program will fail with exception messages as below:
{code:java}
java.lang.IllegalArgumentException: Could not parse value '1 m' for key 
'sink.partition-commit.delay'.java.lang.IllegalArgumentException: Could not 
parse value '1 m' for key 'sink.partition-commit.delay'. 
..
Caused by: java.lang.IllegalArgumentException: Time interval unit label 'm' 
does not match any of the recognized units: DAYS: (d | day | days), HOURS: (h | 
hour | hours), MINUTES: (min | minute | minutes), SECONDS: (s | sec | secs | 
second | seconds), MILLISECONDS: (ms | milli | millis | millisecond | 
milliseconds), MICROSECONDS: (µs | micro | micros | microsecond | 
microseconds), NANOSECONDS: (ns | nano | nanos | nanosecond | nanoseconds){code}
In org/apache/flink/util/TimeUtils.java#TimeUnit, the definition of MINUTE 
seems not to agree with the other units.
{code:java}
DAYS(ChronoUnit.DAYS, singular("d"), plural("day")),
HOURS(ChronoUnit.HOURS, singular("h"), plural("hour")),
MINUTES(ChronoUnit.MINUTES, singular("min"), plural("minute")),
SECONDS(ChronoUnit.SECONDS, singular("s"), plural("sec"), plural("second")),
...{code}
That is, `m` is not a valid expression for MINUTE, while `d`/`h`/`s` etc. are all 
valid expressions for DAY/HOUR/SECOND, which might be a little 
confusing to users.

 


> Include `m` as time unit expression for MINUTE in SQL
> -
>
> Key: FLINK-18611
> URL: https://issues.apache.org/jira/browse/FLINK-18611
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.11.0
>Reporter: Q Kang
>Priority: Minor
>
> According to the documentation of FileSystem SQL Connector, if we try to 
> create a table with the following parameter:
> {code:java}
> 'sink.partition-commit.delay'='1 m'
> {code}
> The program will fail with exception messages as below:
> {code:java}
> java.lang.IllegalArgumentException: Could not parse value '1 m' for key 
> 'sink.partition-commit.delay'.java.lang.IllegalArgumentException: Could not 
> parse value '1 m' for key 'sink.partition-commit.delay'. 
> ..
> Caused by: java.lang.IllegalArgumentException: Time interval unit label 'm' 
> does not match any of the recognized units: DAYS: (d | day | days), HOURS: (h 
> | hour | hours), MINUTES: (min | minute | minutes), SECONDS: (s | sec | secs 
> | second | seconds), MILLISECONDS: (ms | milli | millis | millisecond | 
> milliseconds), MICROSECONDS: (µs | micro | micros | microsecond | 
> microseconds), NANOSECONDS: (ns | nano | nanos | nanosecond | 
> nanoseconds){code}
> In org/apache/flink/util/TimeUtils.java#TimeUnit, the definition of MINUTE 
> seems not to agree with the other units.
> {code:java}
> DAYS(ChronoUnit.DAYS, singular("d"), plural("day")),
> HOURS(ChronoUnit.HOURS, singular("h"), plural("hour")),
> MINUTES(ChronoUnit.MINUTES, singular("min"), plural("minute")),
> SECONDS(ChronoUnit.SECONDS, 

[jira] [Created] (FLINK-18611) Include `m` as time unit expression for MINUTE in SQL

2020-07-15 Thread Q Kang (Jira)
Q Kang created FLINK-18611:
--

 Summary: Include `m` as time unit expression for MINUTE in SQL
 Key: FLINK-18611
 URL: https://issues.apache.org/jira/browse/FLINK-18611
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / FileSystem
Affects Versions: 1.11.0
Reporter: Q Kang


According to the 
[documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/filesystem.html]
 of FileSystem SQL Connector, if we try to create a table with the following 
parameter:
{code:java}
'sink.partition-commit.delay'='1 m'
{code}
The program will fail with exception messages as below:
{code:java}
java.lang.IllegalArgumentException: Could not parse value '1 m' for key 
'sink.partition-commit.delay'.java.lang.IllegalArgumentException: Could not 
parse value '1 m' for key 'sink.partition-commit.delay'. 
..
Caused by: java.lang.IllegalArgumentException: Time interval unit label 'm' 
does not match any of the recognized units: DAYS: (d | day | days), HOURS: (h | 
hour | hours), MINUTES: (min | minute | minutes), SECONDS: (s | sec | secs | 
second | seconds), MILLISECONDS: (ms | milli | millis | millisecond | 
milliseconds), MICROSECONDS: (µs | micro | micros | microsecond | 
microseconds), NANOSECONDS: (ns | nano | nanos | nanosecond | nanoseconds){code}
In org/apache/flink/util/TimeUtils.java#TimeUnit, the definition of MINUTE 
seems not to agree with the other units.
{code:java}
DAYS(ChronoUnit.DAYS, singular("d"), plural("day")),
HOURS(ChronoUnit.HOURS, singular("h"), plural("hour")),
MINUTES(ChronoUnit.MINUTES, singular("min"), plural("minute")),
SECONDS(ChronoUnit.SECONDS, singular("s"), plural("sec"), plural("second")),
...{code}
That is, `m` is not a valid expression for MINUTE, while `d`/`h`/`s` etc. are all 
valid expressions for DAY/HOUR/SECOND, which might be a little 
confusing to users.
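The mismatch can be sketched with a tiny, self-contained example (this is not Flink's actual TimeUtils code; the class name and label set below are reconstructed from the enum excerpt purely for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class MinuteLabelDemo {
    // Per the enum excerpt above, MINUTES only registers singular("min") and
    // plural("minute"), i.e. the labels: min, minute, minutes. Bare "m" is absent.
    static final List<String> MINUTE_LABELS = Arrays.asList("min", "minute", "minutes");

    static boolean isRecognizedMinuteLabel(String label) {
        return MINUTE_LABELS.contains(label);
    }

    public static void main(String[] args) {
        System.out.println(isRecognizedMinuteLabel("min")); // true
        System.out.println(isRecognizedMinuteLabel("m"));   // false -> '1 m' is rejected
    }
}
```

Registering an additional `singular("m")` label for MINUTES would make `m` behave like `d`, `h`, and `s` do for their units, which is what this ticket proposes.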

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12866: [FLINK-17425][blink-planner] supportsFilterPushDown rule in DynamicSource.

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12866:
URL: https://github.com/apache/flink/pull/12866#issuecomment-656576219


   
   ## CI report:
   
   * 5eb302884e390b6099859abcd7955e7f1c4e034e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4517)
 
   * 21ff7c2a63f297e124f6040a72991c8438fa95af UNKNOWN
   
   







[GitHub] [flink] pyscala commented on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


pyscala commented on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-659135862


   > Thanks @pyscala for your contribution. Changes look good to me overall. 
But I want to hear from @zentol
   
   Thanks @JingsongLi for your reply.







[GitHub] [flink] JingsongLi commented on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


JingsongLi commented on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-659134691


   Thanks @pyscala for your contribution. Changes look good to me overall. But 
I want to hear from @zentol.







[jira] [Assigned] (FLINK-17789) DelegatingConfiguration should remove prefix instead of add prefix in toMap

2020-07-15 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-17789:


Assignee: liufangliang

> DelegatingConfiguration should remove prefix instead of add prefix in toMap
> ---
>
> Key: FLINK-17789
> URL: https://issues.apache.org/jira/browse/FLINK-17789
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Jingsong Lee
>Assignee: liufangliang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> {code:java}
> Configuration conf = new Configuration();
> conf.setString("k0", "v0");
> conf.setString("prefix.k1", "v1");
> DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");
> System.out.println(dc.getString("k0", "empty")); // empty
> System.out.println(dc.getString("k1", "empty")); // v1
> System.out.println(dc.toMap().get("k1")); // should be v1, but null
> System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, 
> but v1
> {code}
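A minimal sketch of the behavior this ticket asks for: `toMap()` should strip the delegating prefix from matching keys (mirroring `addAllToProperties`) rather than add it. The helper below is hypothetical and only illustrates the proposed semantics; it is not Flink's `DelegatingConfiguration` implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class PrefixStripDemo {
    // Hypothetical helper: keys carrying the delegating prefix are returned with
    // the prefix removed; keys without the prefix are dropped, matching the view
    // that getString() exposes in the example above.
    static Map<String, String> toMapStrippingPrefix(Map<String, String> backing, String prefix) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : backing.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                out.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("k0", "v0");
        conf.put("prefix.k1", "v1");
        Map<String, String> m = toMapStrippingPrefix(conf, "prefix.");
        System.out.println(m.get("k1"));               // v1
        System.out.println(m.get("prefix.prefix.k1")); // null
    }
}
```

With these semantics, `toMap().get("k1")` returns "v1" and the double-prefixed key no longer appears, which is consistent with `getString("k1", ...)` in the reproduction above.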





[GitHub] [flink] RocMarshal commented on pull request #12798: [FLINK-16087][docs-zh] Translate "Detecting Patterns" page of "Streaming Concepts" into Chinese

2020-07-15 Thread GitBox


RocMarshal commented on pull request #12798:
URL: https://github.com/apache/flink/pull/12798#issuecomment-659134008


   @flinkbot run azure







[GitHub] [flink] flinkbot edited a comment on pull request #12873: [FLINK-18548][table-planner] Improve FlinkCalcMergeRule to merge calc nodes better

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12873:
URL: https://github.com/apache/flink/pull/12873#issuecomment-656741832


   
   ## CI report:
   
   * 24c77b9c54d326b4f6b51a716669df85d6f59a3f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4545)
 
   
   







[jira] [Commented] (FLINK-17789) DelegatingConfiguration should remove prefix instead of add prefix in toMap

2020-07-15 Thread liufangliang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158859#comment-17158859
 ] 

liufangliang commented on FLINK-17789:
--

Hi [~lzljs3620320] [~chesnay],

Sorry for the irregular process on my side; I will be more careful next time.

I also think `toMap` should be consistent with `addAllToProperties`. My PR simply 
keeps them consistent, nothing more.

> DelegatingConfiguration should remove prefix instead of add prefix in toMap
> ---
>
> Key: FLINK-17789
> URL: https://issues.apache.org/jira/browse/FLINK-17789
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> {code:java}
> Configuration conf = new Configuration();
> conf.setString("k0", "v0");
> conf.setString("prefix.k1", "v1");
> DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");
> System.out.println(dc.getString("k0", "empty")); // empty
> System.out.println(dc.getString("k1", "empty")); // v1
> System.out.println(dc.toMap().get("k1")); // should be v1, but null
> System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, 
> but v1
> {code}





[GitHub] [flink] pyscala commented on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


pyscala commented on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-659118815


   Hi @zentol, thanks for your commits. Could you review this PR for us?







[jira] [Commented] (FLINK-17627) Add support for writing _SUCCESS file with StreamingFileSink

2020-07-15 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158845#comment-17158845
 ] 

Jingsong Lee commented on FLINK-17627:
--

Hi [~qqibrow], this is only supported at the Table/SQL level; there is no 
DataStream API for it yet, but it may target 1.12. If you want to try developing 
it, you can refer to {{StreamingFileWriter}} and {{StreamingFileCommitter}}.

> Add support for writing _SUCCESS file with StreamingFileSink
> 
>
> Key: FLINK-17627
> URL: https://issues.apache.org/jira/browse/FLINK-17627
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Robert Metzger
>Priority: Major
>  Labels: usability
>
> (Note: This feature has been requested by multiple users: 
> https://lists.apache.org/list.html?u...@flink.apache.org:lte=1M:_SUCCESS)
> Hadoop Map Reduce is writing a _SUCCESS file to output directories once the 
> result has been completely written.
> Users migrating from Hadoop MR to Flink want to have a similar behavior in 
> Flinks StreamingFileSink.





[GitHub] [flink] flinkbot edited a comment on pull request #12873: [FLINK-18548][table-planner] Improve FlinkCalcMergeRule to merge calc nodes better

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12873:
URL: https://github.com/apache/flink/pull/12873#issuecomment-656741832


   
   ## CI report:
   
   * d348328658445fbc11b8a310f8f0fbd163acf308 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4410)
 
   * 24c77b9c54d326b4f6b51a716669df85d6f59a3f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4545)
 
   
   







[jira] [Commented] (FLINK-18594) The link is broken in kafka doc

2020-07-15 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158844#comment-17158844
 ] 

Dian Fu commented on FLINK-18594:
-

follow-up fix: 3a7906e9729fa7bf84020502c339c1f76c34f7be

> The link is broken in kafka doc
> ---
>
> Key: FLINK-18594
> URL: https://issues.apache.org/jira/browse/FLINK-18594
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> https://ci.apache.org/builders/flink-docs-master/builds/1897/steps/Build%20docs/logs/stdio
> {code}
>   Liquid Exception: Could not find document 
> 'dev/stream/state/checkpointing.md' in tag 'link'. Make sure the document 
> exists and the path is correct. in dev/table/connectors/kafka.zh.md
> Could not find document 'dev/stream/state/checkpointing.md' in tag 'link'.
> {code}





[jira] [Commented] (FLINK-17789) DelegatingConfiguration should remove prefix instead of add prefix in toMap

2020-07-15 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158843#comment-17158843
 ] 

Jingsong Lee commented on FLINK-17789:
--

[~chesnay] I see what you mean.

Hi [~liufangliang], for this issue, there is no clear answer on whether to change 
it; it seems that the community has not yet reached an agreement.

> DelegatingConfiguration should remove prefix instead of add prefix in toMap
> ---
>
> Key: FLINK-17789
> URL: https://issues.apache.org/jira/browse/FLINK-17789
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> {code:java}
> Configuration conf = new Configuration();
> conf.setString("k0", "v0");
> conf.setString("prefix.k1", "v1");
> DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");
> System.out.println(dc.getString("k0", "empty")); // empty
> System.out.println(dc.getString("k1", "empty")); // v1
> System.out.println(dc.toMap().get("k1")); // should be v1, but null
> System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, 
> but v1
> {code}





[jira] [Comment Edited] (FLINK-17789) DelegatingConfiguration should remove prefix instead of add prefix in toMap

2020-07-15 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158843#comment-17158843
 ] 

Jingsong Lee edited comment on FLINK-17789 at 7/16/20, 2:16 AM:


[~chesnay] I see what you mean.

Hi [~liufangliang], thank you for your participation. For this issue, there is 
no clear answer on whether to change it; it seems that the community has not yet 
reached an agreement.


was (Author: lzljs3620320):
[~chesnay] I see what you mean.

Hi [~liufangliang], for this issue, there is no clear answer on whether to change 
it; it seems that the community has not yet 
reached an agreement.
> DelegatingConfiguration should remove prefix instead of add prefix in toMap
> ---
>
> Key: FLINK-17789
> URL: https://issues.apache.org/jira/browse/FLINK-17789
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> {code:java}
> Configuration conf = new Configuration();
> conf.setString("k0", "v0");
> conf.setString("prefix.k1", "v1");
> DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");
> System.out.println(dc.getString("k0", "empty")); // empty
> System.out.println(dc.getString("k1", "empty")); // v1
> System.out.println(dc.toMap().get("k1")); // should be v1, but null
> System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, 
> but v1
> {code}





[GitHub] [flink] flinkbot edited a comment on pull request #12873: [FLINK-18548][table-planner] Improve FlinkCalcMergeRule to merge calc nodes better

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12873:
URL: https://github.com/apache/flink/pull/12873#issuecomment-656741832


   
   ## CI report:
   
   * d348328658445fbc11b8a310f8f0fbd163acf308 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4410)
 
   * 24c77b9c54d326b4f6b51a716669df85d6f59a3f UNKNOWN
   
   







[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2020-07-15 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158763#comment-17158763
 ] 

Zhenqiu Huang commented on FLINK-14055:
---

[~phoenixjiangnan] [~twalthr]
I am going to add this feature in the coming weeks. Currently, it is blocking 
AthenaX from upgrading to 1.10, as we have an internal Calcite extension. Please 
let me know if there is anything I should take care of in advance.



> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Zhenqiu Huang
>Priority: Major
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}





[jira] [Commented] (FLINK-17627) Add support for writing _SUCCESS file with StreamingFileSink

2020-07-15 Thread Lu Niu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158704#comment-17158704
 ] 

Lu Niu commented on FLINK-17627:


[~lzljs3620320] We also want StreamingFileSink to write a _SUCCESS file. Was this 
delivered in 1.11?

> Add support for writing _SUCCESS file with StreamingFileSink
> 
>
> Key: FLINK-17627
> URL: https://issues.apache.org/jira/browse/FLINK-17627
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Robert Metzger
>Priority: Major
>  Labels: usability
>
> (Note: This feature has been requested by multiple users: 
> https://lists.apache.org/list.html?u...@flink.apache.org:lte=1M:_SUCCESS)
> Hadoop Map Reduce is writing a _SUCCESS file to output directories once the 
> result has been completely written.
> Users migrating from Hadoop MR to Flink want to have a similar behavior in 
> Flinks StreamingFileSink.





[GitHub] [flink] flinkbot edited a comment on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-658715358


   
   ## CI report:
   
   * 150c361060b34a27151efa0654c693898d9372ff Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4542)
 
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-658715358


   
   ## CI report:
   
   * e6d38893b79bcbd786e33592aa4b92f907376b6d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4540)
 
   * 150c361060b34a27151efa0654c693898d9372ff Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4542)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-658715358


   
   ## CI report:
   
   * e6d38893b79bcbd786e33592aa4b92f907376b6d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4540)
 
   * 150c361060b34a27151efa0654c693898d9372ff UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12909: [FLINK-18608] Fix null handling when converting CAST expression

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12909:
URL: https://github.com/apache/flink/pull/12909#issuecomment-658859932


   
   ## CI report:
   
   * 63c5517253d8a7a4197dd5ea1b5961b98bccb1f7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4541)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-658715358


   
   ## CI report:
   
   * e6d38893b79bcbd786e33592aa4b92f907376b6d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4540)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-18610) Clean up Table connector docs grammar

2020-07-15 Thread Seth Wiesman (Jira)
Seth Wiesman created FLINK-18610:


 Summary: Clean up Table connector docs grammar
 Key: FLINK-18610
 URL: https://issues.apache.org/jira/browse/FLINK-18610
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Seth Wiesman
Assignee: Seth Wiesman






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18478) AvroDeserializationSchema does not work with types generated by avrohugger

2020-07-15 Thread Georg Heiler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158523#comment-17158523
 ] 

Georg Heiler commented on FLINK-18478:
--

[~aljoscha] is there somewhere I can download an already-packaged nightly 
snapshot jar? The `repositories` are helpful, but the latest master branch 
still fails to build for the flink-python submodule.

 

> AvroDeserializationSchema does not work with types generated by avrohugger
> --
>
> Key: FLINK-18478
> URL: https://issues.apache.org/jira/browse/FLINK-18478
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.2, 1.12.0, 1.11.1
>
>
> The main problem is that the code in {{SpecificData.createSchema()}} tries to 
> reflectively read the {{SCHEMA$}} field, which is normally present in 
> Avro-generated classes. However, avrohugger generates this field in a 
> companion object, which the reflective Java code therefore cannot find.
> This is also described in these ML threads:
>  * 
> [https://lists.apache.org/thread.html/5db58c7d15e4e9aaa515f935be3b342fe036e97d32e1fb0f0d1797ee@%3Cuser.flink.apache.org%3E]
>  * 
> [https://lists.apache.org/thread.html/cf1c5b8fa7f095739438807de9f2497e04ffe55237c5dea83355112d@%3Cuser.flink.apache.org%3E]
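
The reflective lookup described above can be sketched as follows (a simplified stand-in for `SpecificData.createSchema()`; the `WithSchema`/`WithoutSchema` classes are made up for illustration):

```java
public class SchemaLookup {

    // Stand-in for an Avro-generated Java class: SCHEMA$ is a static field
    // directly on the class, so reflection finds it.
    static class WithSchema {
        public static final String SCHEMA$ = "{\"type\":\"record\"}";
    }

    // Stand-in for an avrohugger-generated Scala case class: the SCHEMA$
    // field lives on the companion object, i.e. on a *different* class,
    // so reflection on this class finds nothing.
    static class WithoutSchema {
    }

    // Simplified version of the reflective read done in SpecificData.
    static Object readSchemaField(Class<?> clazz) {
        try {
            return clazz.getDeclaredField("SCHEMA$").get(null);
        } catch (ReflectiveOperationException e) {
            return null; // avrohugger-generated types end up here
        }
    }

    public static void main(String[] args) {
        System.out.println(readSchemaField(WithSchema.class) != null);   // prints "true"
        System.out.println(readSchemaField(WithoutSchema.class) == null); // prints "true"
    }
}
```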



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18606) Remove generic parameter from SinkFunction.Context

2020-07-15 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman reassigned FLINK-18606:


Assignee: Niels Basjes

> Remove generic parameter from SinkFunction.Context
> -
>
> Key: FLINK-18606
> URL: https://issues.apache.org/jira/browse/FLINK-18606
> Project: Flink
>  Issue Type: Improvement
>Reporter: Niels Basjes
>Assignee: Niels Basjes
>Priority: Major
>  Labels: pull-request-available
>
> As discussed on the mailing list 
> https://lists.apache.org/thread.html/ra72d406e262f3b30ef4df95e8e4ba2d765859203499be3b6d5cd59a2%40%3Cdev.flink.apache.org%3E
> The SinkFunction.Context interface declares a generic parameter that it does 
> not use.
> In most places where this interface is used, the generic parameter is omitted, 
> which leads to many warnings about using "raw types".
> This ticket is to remove this generic parameter and assess the impact.
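
A minimal illustration of the warning in question (the `Context` interface below is a simplified mock, not the real `SinkFunction.Context`):

```java
public class RawTypeDemo {

    // Mock of the problem: T is declared but appears in no method signature.
    interface Context<T> {
        long currentProcessingTime();
        long currentWatermark();
    }

    // Callers must either use the raw type (triggering a "raw type" warning)
    // or write Context<?>; since T is unused, the choice makes no practical
    // difference, which is why the parameter can simply be dropped.
    static long watermarkOf(Context<?> ctx) {
        return ctx.currentWatermark();
    }

    public static void main(String[] args) {
        Context<Void> ctx = new Context<Void>() {
            public long currentProcessingTime() { return 42L; }
            public long currentWatermark() { return 7L; }
        };
        System.out.println(watermarkOf(ctx)); // prints "7"
    }
}
```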



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-10672) Task stuck while writing output to flink

2020-07-15 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158504#comment-17158504
 ] 

Yun Gao commented on FLINK-10672:
-

Hi [~ibzib], sorry to disturb you. Do you have any updates on this issue from 
your side? Since TFX / Beam / Flink have all evolved through several versions, 
does this issue still exist?

> Task stuck while writing output to flink
> 
>
> Key: FLINK-10672
> URL: https://issues.apache.org/jira/browse/FLINK-10672
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.5.4
> Environment: OS: Debuan rodente 4.17
> Flink version: 1.5.4
> ||Key||Value||
> |jobmanager.heap.mb|1024|
> |jobmanager.rpc.address|localhost|
> |jobmanager.rpc.port|6123|
> |metrics.reporter.jmx.class|org.apache.flink.metrics.jmx.JMXReporter|
> |metrics.reporter.jmx.port|9250-9260|
> |metrics.reporters|jmx|
> |parallelism.default|1|
> |rest.port|8081|
> |taskmanager.heap.mb|1024|
> |taskmanager.numberOfTaskSlots|1|
> |web.tmpdir|/tmp/flink-web-bdb73d6c-5b9e-47b5-9ebf-eed0a7c82c26|
>  
> h1. Overview
> ||Data Port||All Slots||Free Slots||CPU Cores||Physical Memory||JVM Heap 
> Size||Flink Managed Memory||
> |43501|1|0|12|62.9 GB|922 MB|642 MB|
> h1. Memory
> h2. JVM (Heap/Non-Heap)
> ||Type||Committed||Used||Maximum||
> |Heap|922 MB|575 MB|922 MB|
> |Non-Heap|68.8 MB|64.3 MB|-1 B|
> |Total|991 MB|639 MB|922 MB|
> h2. Outside JVM
> ||Type||Count||Used||Capacity||
> |Direct|3,292|105 MB|105 MB|
> |Mapped|0|0 B|0 B|
> h1. Network
> h2. Memory Segments
> ||Type||Count||
> |Available|3,194|
> |Total|3,278|
> h1. Garbage Collection
> ||Collector||Count||Time||
> |G1_Young_Generation|13|336|
> |G1_Old_Generation|1|21|
>Reporter: Ankur Goenka
>Assignee: Yun Gao
>Priority: Major
>  Labels: beam
> Attachments: 0.14_all_jobs.jpg, 1uruvakHxBu.png, 3aDKQ24WvKk.png, 
> Po89UGDn58V.png, WithBroadcastJob.png, jmx_dump.json, jmx_dump_detailed.json, 
> jstack_129827.log, jstack_163822.log, jstack_66985.log
>
>
> I am running a fairly complex pipeline with 200+ tasks.
> The pipeline works fine with small data (order of 10kb input) but gets stuck 
> with slightly larger data (300kb input).
>  
> The task gets stuck while writing the output to Flink; more specifically, it 
> gets stuck while requesting a memory segment in the local buffer pool. The Task 
> Manager UI shows that it has enough memory and memory segments to work with.
> The relevant stack trace is 
> {quote}"grpc-default-executor-0" #138 daemon prio=5 os_prio=0 
> tid=0x7fedb0163800 nid=0x30b7f in Object.wait() [0x7fedb4f9]
>  java.lang.Thread.State: TIMED_WAITING (on object monitor)
>  at (C/C++) 0x7fef201c7dae (Unknown Source)
>  at (C/C++) 0x7fef1f2aea07 (Unknown Source)
>  at (C/C++) 0x7fef1f241cd3 (Unknown Source)
>  at java.lang.Object.wait(Native Method)
>  - waiting on <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:247)
>  - locked <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:204)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:213)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:144)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:107)
>  at 
> org.apache.flink.runtime.operators.shipping.OutputCollector.collect(OutputCollector.java:65)
>  at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:42)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:26)
>  at 
> org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.collect(ChainedFlatMapDriver.java:80)
>  at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction$MyDataReceiver.accept(FlinkExecutableStageFunction.java:230)
>  - locked <0xf6a60bd0> (a java.lang.Object)
>  at 
> org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.accept(BeamFnDataInboundObserver.java:81)
>  at 
> org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.accept(BeamFnDataInboundObserver.java:32)
>  at 
> 

[GitHub] [flink] flinkbot commented on pull request #12909: [FLINK-18608] Fix null handling when converting CAST expression

2020-07-15 Thread GitBox


flinkbot commented on pull request #12909:
URL: https://github.com/apache/flink/pull/12909#issuecomment-658859932


   
   ## CI report:
   
   * 63c5517253d8a7a4197dd5ea1b5961b98bccb1f7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18607) Give the maven modules human readable names.

2020-07-15 Thread weizihan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158499#comment-17158499
 ] 

weizihan commented on FLINK-18607:
--

I like this idea




魏子涵
Email: wzh1007181...@163.com

Signature customized by NetEase Mail Master

On 07/15/2020 19:08, Niels Basjes (Jira) wrote:
Niels Basjes created FLINK-18607:


Summary: Give the maven modules human readable names.
Key: FLINK-18607
URL: https://issues.apache.org/jira/browse/FLINK-18607
Project: Flink
 Issue Type: Improvement
   Reporter: Niels Basjes


As discussed on the mailing list.

https://lists.apache.org/thread.html/r6331f0ae603c8ace30085c1f25a1935050224507bfade89ffeadbc7b%40%3Cdev.flink.apache.org%3E

When building Flink the output both on the commandline and in IDEs like 
IntelliJ always show the artifact name as the module name.

By simply setting a more human-readable module name in all of the pom.xml files 
the build output becomes much easier to read for developers.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


> Give the maven modules human readable names.
> 
>
> Key: FLINK-18607
> URL: https://issues.apache.org/jira/browse/FLINK-18607
> Project: Flink
>  Issue Type: Improvement
>Reporter: Niels Basjes
>Priority: Major
>  Labels: pull-request-available
>
> As discussed on the mailing list.
> https://lists.apache.org/thread.html/r6331f0ae603c8ace30085c1f25a1935050224507bfade89ffeadbc7b%40%3Cdev.flink.apache.org%3E
> When building Flink the output both on the commandline and in IDEs like 
> IntelliJ always show the artifact name as the module name.
> By simply setting a more human-readable module name in all of the pom.xml 
> files the build output becomes much easier to read for developers.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wtog commented on pull request #12888: [FLINK-18588] hive ddl create table support 'if not exists'

2020-07-15 Thread GitBox


wtog commented on pull request #12888:
URL: https://github.com/apache/flink/pull/12888#issuecomment-658855927


   @lirui-apache Thanks for the review. I've added a test to `HiveDialectITCase`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wtog edited a comment on pull request #12888: [FLINK-18588] hive ddl create table support 'if not exists'

2020-07-15 Thread GitBox


wtog edited a comment on pull request #12888:
URL: https://github.com/apache/flink/pull/12888#issuecomment-658855927


   @lirui-apache Thanks for the review. I've added a test to `HiveDialectITCase`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12909: [FLINK-18608] Fix null handling when converting CAST expression

2020-07-15 Thread GitBox


flinkbot commented on pull request #12909:
URL: https://github.com/apache/flink/pull/12909#issuecomment-658849221


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 63c5517253d8a7a4197dd5ea1b5961b98bccb1f7 (Wed Jul 15 
15:56:34 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-18608).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys commented on pull request #12909: [FLINK-18608] Fix null handling when converting CAST expression

2020-07-15 Thread GitBox


dawidwys commented on pull request #12909:
URL: https://github.com/apache/flink/pull/12909#issuecomment-658848167


   Could you have a look @twalthr ?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys commented on pull request #12909: [FLINK-18608] Fix null handling when converting CAST expression

2020-07-15 Thread GitBox


dawidwys commented on pull request #12909:
URL: https://github.com/apache/flink/pull/12909#issuecomment-658848505


   side note: I'd like to cherry-pick the test case into master branch later on.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-658715358


   
   ## CI report:
   
   * c9e05d742e3c66f494399d67077a8bbac0538803 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4531)
 
   * e6d38893b79bcbd786e33592aa4b92f907376b6d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18608) CustomizedConvertRule#convertCast drops nullability

2020-07-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18608:
---
Labels: pull-request-available  (was: )

> CustomizedConvertRule#convertCast drops nullability
> ---
>
> Key: FLINK-18608
> URL: https://issues.apache.org/jira/browse/FLINK-18608
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Flavio Pompermaier
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.2
>
>
> The following program shows that nullability is not respected during the 
> translation:
> {code:java}
>  final TableEnvironment tableEnv = 
> DatalinksExecutionEnvironment.getBatchTableEnv();
> final Table inputTable = tableEnv.fromValues(//
> DataTypes.ROW(//
> DataTypes.FIELD("col1", DataTypes.STRING()), //
> DataTypes.FIELD("col2", DataTypes.STRING())//
> ), //
> Row.of(1L, "Hello"), //
> Row.of(2L, "Hello"), //
> Row.of(3L, ""), //
> Row.of(4L, "Ciao"));
> tableEnv.createTemporaryView("ParquetDataset", inputTable);
> tableEnv.executeSql(//
> "CREATE TABLE `out` (\n" + //
> "col1 STRING,\n" + //
> "col2 STRING\n" + //
> ") WITH (\n" + //
> " 'connector' = 'filesystem',\n" + //
> " 'format' = 'parquet',\n" + //
> " 'update-mode' = 'append',\n" + //
> " 'path' = 'file://" + TEST_FOLDER + "',\n" + //
> " 'sink.shuffle-by-partition.enable' = 'true'\n" + //
> ")");
> tableEnv.executeSql("INSERT INTO `out` SELECT * FROM ParquetDataset");
> {code}
> -
> Exception in thread "main" java.lang.AssertionError: Conversion to relational 
> algebra failed to preserve datatypes:
> validated type:
> RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" col1, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" col2) NOT NULL
> converted type:
> RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" NOT NULL col1, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" NOT NULL col2) NOT NULL
> rel:
> LogicalProject(col1=[$0], col2=[$1])
>   LogicalUnion(all=[true])
> LogicalProject(col1=[_UTF-16LE'1':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"], col2=[_UTF-16LE'Hello':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"])
>   LogicalValues(tuples=[[{ 0 }]])
> LogicalProject(col1=[_UTF-16LE'2':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"], col2=[_UTF-16LE'Hello':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"])
>   LogicalValues(tuples=[[{ 0 }]])
> LogicalProject(col1=[_UTF-16LE'3':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"], col2=[_UTF-16LE'':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"])
>   LogicalValues(tuples=[[{ 0 }]])
> LogicalProject(col1=[_UTF-16LE'4':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"], col2=[_UTF-16LE'Ciao':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"])
>   LogicalValues(tuples=[[{ 0 }]])
> at 
> org.apache.calcite.sql2rel.SqlToRelConverter.checkConvertedType(SqlToRelConverter.java:465)
> at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:580)
> at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:164)
> at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:151)
> at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:773)
> at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:745)
> at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
> at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
> at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
> at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:678)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys opened a new pull request #12909: [FLINK-18608] Fix null handling when converting CAST expression

2020-07-15 Thread GitBox


dawidwys opened a new pull request #12909:
URL: https://github.com/apache/flink/pull/12909


   ## What is the purpose of the change
   
   Make the CAST preserve nullability when converting from an Expression to a 
RexCall. This has been done in the master branch as part of 
https://issues.apache.org/jira/browse/FLINK-13784. This PR adds a test case 
and backports the changes from master.
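
The idea of the fix can be sketched with a toy type model (the `SimpleType`/`convertCast` names are invented; the real code manipulates Calcite `RelDataType`s):

```java
public class CastNullability {

    // Toy stand-in for a SQL type: a name plus a nullability flag.
    static final class SimpleType {
        final String name;
        final boolean nullable;
        SimpleType(String name, boolean nullable) {
            this.name = name;
            this.nullable = nullable;
        }
    }

    // Buggy conversion: takes the target type as-is, silently turning a
    // nullable source expression into NOT NULL.
    static SimpleType convertCastDroppingNullability(SimpleType source, SimpleType target) {
        return target;
    }

    // Fixed conversion: the cast keeps the target's type name but inherits
    // the source expression's nullability.
    static SimpleType convertCast(SimpleType source, SimpleType target) {
        return new SimpleType(target.name, source.nullable);
    }

    public static void main(String[] args) {
        SimpleType source = new SimpleType("BIGINT", true);
        SimpleType target = new SimpleType("STRING", false);
        System.out.println(convertCast(source, target).nullable); // prints "true"
    }
}
```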
   
   
   ## Verifying this change
   
   * Added test 
`org.apache.flink.table.planner.runtime.stream.table.ValuesITCase#testNullabilityOverwriting`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18572) Flink web UI doesn't display checkpoint configs like unaligned and tolerable-failed-checkpoints and

2020-07-15 Thread Steven Zhen Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Zhen Wu updated FLINK-18572:
---
Summary: Flink web UI doesn't display checkpoint configs like unaligned  
and tolerable-failed-checkpoints and   (was: Flink web UI doesn't display 
unaligned checkpoint config)

> Flink web UI doesn't display checkpoint configs like unaligned  and 
> tolerable-failed-checkpoints and 
> -
>
> Key: FLINK-18572
> URL: https://issues.apache.org/jira/browse/FLINK-18572
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Steven Zhen Wu
>Priority: Major
> Attachments: image-2020-07-12-10-14-49-990.png
>
>
> It might be helpful to display the unaligned checkpoint boolean flag in the 
> web UI.
>  
> !image-2020-07-12-10-14-49-990.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12904: [FLINK-18569][table] Support limit() for unordered tables

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12904:
URL: https://github.com/apache/flink/pull/12904#issuecomment-658669745


   
   ## CI report:
   
   * 40d048a077885f435381c372eab315a276908ad8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4538)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12888: [FLINK-18588] hive ddl create table support 'if not exists'

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12888:
URL: https://github.com/apache/flink/pull/12888#issuecomment-657641584


   
   ## CI report:
   
   * 499ff3390c0a93678db520293a475cd7a51c29e5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4539)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18608) CustomizedConvertRule#convertCast drops nullability

2020-07-15 Thread Flavio Pompermaier (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Pompermaier updated FLINK-18608:
---
Description: 
The following program shows that nullability is not respected during the 
translation:


{code:java}
 final TableEnvironment tableEnv = 
DatalinksExecutionEnvironment.getBatchTableEnv();
final Table inputTable = tableEnv.fromValues(//
DataTypes.ROW(//
DataTypes.FIELD("col1", DataTypes.STRING()), //
DataTypes.FIELD("col2", DataTypes.STRING())//
), //
Row.of(1L, "Hello"), //
Row.of(2L, "Hello"), //
Row.of(3L, ""), //
Row.of(4L, "Ciao"));
tableEnv.createTemporaryView("ParquetDataset", inputTable);
tableEnv.executeSql(//
"CREATE TABLE `out` (\n" + //
"col1 STRING,\n" + //
"col2 STRING\n" + //
") WITH (\n" + //
" 'connector' = 'filesystem',\n" + //
" 'format' = 'parquet',\n" + //
" 'update-mode' = 'append',\n" + //
" 'path' = 'file://" + TEST_FOLDER + "',\n" + //
" 'sink.shuffle-by-partition.enable' = 'true'\n" + //
")");

tableEnv.executeSql("INSERT INTO `out` SELECT * FROM ParquetDataset");
{code}


-

Exception in thread "main" java.lang.AssertionError: Conversion to relational 
algebra failed to preserve datatypes:
validated type:
RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" col1, 
VARCHAR(2147483647) CHARACTER SET "UTF-16LE" col2) NOT NULL
converted type:
RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" NOT NULL col1, 
VARCHAR(2147483647) CHARACTER SET "UTF-16LE" NOT NULL col2) NOT NULL
rel:
LogicalProject(col1=[$0], col2=[$1])
  LogicalUnion(all=[true])
LogicalProject(col1=[_UTF-16LE'1':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"], col2=[_UTF-16LE'Hello':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"])
  LogicalValues(tuples=[[{ 0 }]])
LogicalProject(col1=[_UTF-16LE'2':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"], col2=[_UTF-16LE'Hello':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"])
  LogicalValues(tuples=[[{ 0 }]])
LogicalProject(col1=[_UTF-16LE'3':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"], col2=[_UTF-16LE'':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"])
  LogicalValues(tuples=[[{ 0 }]])
LogicalProject(col1=[_UTF-16LE'4':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"], col2=[_UTF-16LE'Ciao':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"])
  LogicalValues(tuples=[[{ 0 }]])

at 
org.apache.calcite.sql2rel.SqlToRelConverter.checkConvertedType(SqlToRelConverter.java:465)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:580)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:164)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:151)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:773)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:745)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:678)
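The assertion above compares the validated row type (nullable columns, from the CREATE TABLE schema) against the converted type (NOT NULL columns, inferred from the literals passed to fromValues). A toy sketch of that consistency check follows; the class and function names are hypothetical, not Calcite's actual code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldType:
    name: str
    sql_type: str
    nullable: bool

def check_preserved(validated, converted):
    """Mimics the spirit of SqlToRelConverter.checkConvertedType: the
    relational plan must keep exactly the validated datatypes,
    including nullability."""
    if validated != converted:
        raise AssertionError(
            "Conversion to relational algebra failed to preserve datatypes")

# Schema declares nullable VARCHAR columns...
validated = [FieldType("col1", "VARCHAR", True), FieldType("col2", "VARCHAR", True)]
# ...but literal-derived columns come back NOT NULL, so the check fails:
converted = [FieldType("col1", "VARCHAR", False), FieldType("col2", "VARCHAR", False)]
try:
    check_preserved(validated, converted)
except AssertionError as e:
    print(e)
```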

  was:
The following program shows that nullability is not respected during the 
translation:


{code:java}
 final TableEnvironment tableEnv = 
DatalinksExecutionEnvironment.getBatchTableEnv();
final Table inputTable = tableEnv.fromValues(//
DataTypes.ROW(//
DataTypes.FIELD("col1", DataTypes.STRING()), //
DataTypes.FIELD("col2", DataTypes.STRING())//
), //
Row.of(1L, "Hello"), //
Row.of(2L, "Hello"), //
Row.of(3L, ""), //
Row.of(4L, "Ciao"));
tableEnv.createTemporaryView("ParquetDataset", inputTable);
tableEnv.executeSql(//
"CREATE TABLE `out` (\n" + //
"col1 STRING,\n" + //
"col2 STRING\n" + //
") WITH (\n" + //
" 'connector' = 'filesystem',\n" + //
// " 'format' = 'parquet',\n" + //
" 'update-mode' = 'append',\n" + //
" 'path' = 'file://" + TEST_FOLDER + "',\n" + //
" 'sink.shuffle-by-partition.enable' = 'true'\n" + //
")");

tableEnv.executeSql("INSERT INTO `out` SELECT * FROM ParquetDataset");
{code}


-


[jira] [Updated] (FLINK-18599) Compile error when use windowAll and TumblingProcessingTimeWindows

2020-07-15 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-18599:
-
Component/s: (was: API / DataStream)
 API / Scala

> Compile error when use windowAll and TumblingProcessingTimeWindows
> --
>
> Key: FLINK-18599
> URL: https://issues.apache.org/jira/browse/FLINK-18599
> Project: Flink
>  Issue Type: Bug
>  Components: API / Scala
>Affects Versions: 1.11.0
>Reporter: henvealf
>Priority: Major
>
> Code:
> {code:java}
> import org.apache.commons.lang3.StringUtils
> import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
> import 
> org.apache.flink.streaming.api.windowing.assigners.{TumblingProcessingTimeWindows}
> import org.apache.flink.streaming.api.windowing.time.Time
> import org.apache.flink.streaming.api.scala._
>
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> val stream = env.fromElements("a", "b", "c")
> stream
>   .filter((str: String) => StringUtils.isNotEmpty(str))
>   .map( _ => 1)
>   .windowAll(TumblingProcessingTimeWindows.of(Time.seconds(5)))
>   .reduce((a1, a2) => a1 + a2)
>   .print()
> {code}
> Compile failed:
> {code:java}
> error: type mismatch;
>  found   : 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
>  required: 
> org.apache.flink.streaming.api.windowing.assigners.WindowAssigner[_ >: Int, ?]
> Note: Object <: Any (and 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
>  <: 
> org.apache.flink.streaming.api.windowing.assigners.WindowAssigner[Object,org.apache.flink.streaming.api.windowing.windows.TimeWindow]),
>  but Java-defined class WindowAssigner is invariant in type T.
> You may wish to investigate a wildcard type such as `_ <: Any`. (SLS 3.2.10)
>   .windowAll(TumblingProcessingTimeWindows.of(Time.seconds(5)))
>  ^
> one error found
> {code}
>  What went wrong?
>  Scala version: 2.11
>  Flink version: 1.11
>  Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18599) Compile error when use windowAll and TumblingProcessingTimeWindows

2020-07-15 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158241#comment-17158241
 ] 

Aljoscha Krettek commented on FLINK-18599:
--

There is a quirk in the Scala compiler which causes {{windowAll}} (or 
{{window()}}) to not work when the type of the {{DataStream}} is a primitive 
type ({{Int}} in your case). If you change it to another type, for example 
{{("foo", 1)}} (a tuple type), the code will work when you also adapt the 
subsequent {{reduce()}}.  
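The compiler note quoted in the issue can be reproduced without Flink. The sketch below uses hypothetical stand-in classes (not the real Flink ones) to show the invariant bound that {{windowAll}} imposes, and why Java's boxed Integer satisfies it while Scala's primitive Int does not:

```java
// Illustrative stand-ins for the Flink types involved (hypothetical names,
// not the real Flink classes).
class Window {}
class WindowAssigner<T, W extends Window> {} // Java-defined, hence invariant in T

class InvarianceDemo {
    // Mirrors the bound from the error message: windowAll needs an assigner
    // for some supertype of the stream's element type K.
    static <K, W extends Window> String windowAll(WindowAssigner<? super K, W> assigner) {
        return "ok";
    }

    public static void main(String[] args) {
        WindowAssigner<Object, Window> assigner = new WindowAssigner<>();
        // In Java this compiles: Object is a supertype of the boxed Integer.
        System.out.println(InvarianceDemo.<Integer, Window>windowAll(assigner));
        // In Scala, a DataStream[Int] carries the primitive Int; Int <: Any,
        // but Int is not a subtype of java.lang.Object, so the same bound
        // cannot be satisfied; switching to a reference type such as the
        // tuple ("foo", 1) makes the pipeline compile.
    }
}
```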

> Compile error when use windowAll and TumblingProcessingTimeWindows
> --
>
> Key: FLINK-18599
> URL: https://issues.apache.org/jira/browse/FLINK-18599
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: henvealf
>Priority: Major
>
> Code:
> {code:java}
> import org.apache.commons.lang3.StringUtils
> import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
> import 
> org.apache.flink.streaming.api.windowing.assigners.{TumblingProcessingTimeWindows}
> import org.apache.flink.streaming.api.windowing.time.Time
> import org.apache.flink.streaming.api.scala._
>
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> val stream = env.fromElements("a", "b", "c")
> stream
>   .filter((str: String) => StringUtils.isNotEmpty(str))
>   .map( _ => 1)
>   .windowAll(TumblingProcessingTimeWindows.of(Time.seconds(5)))
>   .reduce((a1, a2) => a1 + a2)
>   .print()
> {code}
> Compile failed:
> {code:java}
> error: type mismatch;
>  found   : 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
>  required: 
> org.apache.flink.streaming.api.windowing.assigners.WindowAssigner[_ >: Int, ?]
> Note: Object <: Any (and 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
>  <: 
> org.apache.flink.streaming.api.windowing.assigners.WindowAssigner[Object,org.apache.flink.streaming.api.windowing.windows.TimeWindow]),
>  but Java-defined class WindowAssigner is invariant in type T.
> You may wish to investigate a wildcard type such as `_ <: Any`. (SLS 3.2.10)
>   .windowAll(TumblingProcessingTimeWindows.of(Time.seconds(5)))
>  ^
> one error found
> {code}
>  What went wrong?
>  Scala version: 2.11
>  Flink version: 1.11
>  Thanks.





[jira] [Closed] (FLINK-18599) Compile error when use windowAll and TumblingProcessingTimeWindows

2020-07-15 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-18599.

Resolution: Not A Bug

> Compile error when use windowAll and TumblingProcessingTimeWindows
> --
>
> Key: FLINK-18599
> URL: https://issues.apache.org/jira/browse/FLINK-18599
> Project: Flink
>  Issue Type: Bug
>  Components: API / Scala
>Affects Versions: 1.11.0
>Reporter: henvealf
>Priority: Major
>
> Code:
> {code:java}
> import org.apache.commons.lang3.StringUtils
> import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
> import 
> org.apache.flink.streaming.api.windowing.assigners.{TumblingProcessingTimeWindows}
> import org.apache.flink.streaming.api.windowing.time.Time
> import org.apache.flink.streaming.api.scala._
>
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> val stream = env.fromElements("a", "b", "c")
> stream
>   .filter((str: String) => StringUtils.isNotEmpty(str))
>   .map( _ => 1)
>   .windowAll(TumblingProcessingTimeWindows.of(Time.seconds(5)))
>   .reduce((a1, a2) => a1 + a2)
>   .print()
> {code}
> Compile failed:
> {code:java}
> error: type mismatch;
>  found   : 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
>  required: 
> org.apache.flink.streaming.api.windowing.assigners.WindowAssigner[_ >: Int, ?]
> Note: Object <: Any (and 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
>  <: 
> org.apache.flink.streaming.api.windowing.assigners.WindowAssigner[Object,org.apache.flink.streaming.api.windowing.windows.TimeWindow]),
>  but Java-defined class WindowAssigner is invariant in type T.
> You may wish to investigate a wildcard type such as `_ <: Any`. (SLS 3.2.10)
>   .windowAll(TumblingProcessingTimeWindows.of(Time.seconds(5)))
>  ^
> one error found
> {code}
>  What went wrong?
>  Scala version: 2.11
>  Flink version: 1.11
>  Thanks.





[GitHub] [flink-web] sjwiesman merged pull request #360: Introducing the Application Mode blogpost.

2020-07-15 Thread GitBox


sjwiesman merged pull request #360:
URL: https://github.com/apache/flink-web/pull/360


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12906: [FLINK-18606][java-streaming] Remove unused generic parameter from SinkFunction.Context

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12906:
URL: https://github.com/apache/flink/pull/12906#issuecomment-658724114


   
   ## CI report:
   
   * 09455c5c0c6bd961650e28994a30629e9269c679 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4536)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12888: [FLINK-18588] hive ddl create table support 'if not exists'

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12888:
URL: https://github.com/apache/flink/pull/12888#issuecomment-657641584


   
   ## CI report:
   
   * 9bd52062639bd1eacf94f88caa310cd60817599e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4458)
 
   * 499ff3390c0a93678db520293a475cd7a51c29e5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12908: [FLINK-18449][table sql/api]Kafka topic discovery & partition discove…

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12908:
URL: https://github.com/apache/flink/pull/12908#issuecomment-658771971


   
   ## CI report:
   
   * 1a4e5ee2a2f11ba650a98c1a211cea75736fe79d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4537)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12904: [FLINK-18569][table] Support limit() for unordered tables

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12904:
URL: https://github.com/apache/flink/pull/12904#issuecomment-658669745


   
   ## CI report:
   
   * 4efba99139063f911fb977ad1d1ef8562fe982d3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4529)
 
   * 40d048a077885f435381c372eab315a276908ad8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4538)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-18478) AvroDeserializationSchema does not work with types generated by avrohugger

2020-07-15 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158218#comment-17158218
 ] 

Aljoscha Krettek commented on FLINK-18478:
--

From the log you posted in the gist it seems that the code is still using the 
Flink 1.11.0 version of {{AvroDeserializationSchema}}, which doesn't have the 
fix.

You can try and use the snapshots repository and depend on version 
{{1.12-SNAPSHOT}}. In Maven you can do it by adding this repository:
{code:xml}
<repositories>
  <repository>
    <id>apache.snapshots</id>
    <name>Apache Development Snapshot Repository</name>
    <url>https://repository.apache.org/content/repositories/snapshots/</url>
    <releases>
      <enabled>false</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
{code}
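With the repository in place, the application can then depend on the snapshot version. For example (the exact module coordinates are an assumption; adjust to the Flink modules you actually use):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-avro</artifactId>
  <version>1.12-SNAPSHOT</version>
</dependency>
```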
 

> AvroDeserializationSchema does not work with types generated by avrohugger
> --
>
> Key: FLINK-18478
> URL: https://issues.apache.org/jira/browse/FLINK-18478
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.2, 1.12.0, 1.11.1
>
>
> The main problem is that the code in {{SpecificData.createSchema()}} tries to 
> reflectively read the {{SCHEMA$}} field, that is normally there in Avro 
> generated classes. However, avrohugger generates this field in a companion 
> object, which the reflective Java code will therefore not find.
> This is also described in these ML threads:
>  * 
> [https://lists.apache.org/thread.html/5db58c7d15e4e9aaa515f935be3b342fe036e97d32e1fb0f0d1797ee@%3Cuser.flink.apache.org%3E]
>  * 
> [https://lists.apache.org/thread.html/cf1c5b8fa7f095739438807de9f2497e04ffe55237c5dea83355112d@%3Cuser.flink.apache.org%3E]





[GitHub] [flink] flinkbot edited a comment on pull request #12904: [FLINK-18569][table] Support limit() for unordered tables

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12904:
URL: https://github.com/apache/flink/pull/12904#issuecomment-658669745


   
   ## CI report:
   
   * 4efba99139063f911fb977ad1d1ef8562fe982d3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4529)
 
   * 40d048a077885f435381c372eab315a276908ad8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] aljoscha commented on pull request #12157: [FLINK-17691][Connectors/Kafka] fix the bug FlinkKafkaProducer011 transactional.id too long when use Semantic.EXACTLY_ONCE mode

2020-07-15 Thread GitBox


aljoscha commented on pull request #12157:
URL: https://github.com/apache/flink/pull/12157#issuecomment-658793273


   Ahh, this discussion now reminded me of this older issue: 
https://issues.apache.org/jira/browse/FLINK-11654. Can you ask Jiangjie Qin 
there if he's still working on the issue? If not, maybe you can take over.







[GitHub] [flink] twalthr commented on pull request #12904: [FLINK-18569][table] Support limit() for unordered tables

2020-07-15 Thread GitBox


twalthr commented on pull request #12904:
URL: https://github.com/apache/flink/pull/12904#issuecomment-658782850


   Thanks @aljoscha and @KurtYoung. I hope I could address your feedback. I 
will merge this once the build is green.







[GitHub] [flink] flinkbot edited a comment on pull request #12908: [FLINK-18449][table sql/api]Kafka topic discovery & partition discove…

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12908:
URL: https://github.com/apache/flink/pull/12908#issuecomment-658771971


   
   ## CI report:
   
   * 1a4e5ee2a2f11ba650a98c1a211cea75736fe79d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4537)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12906: [FLINK-18606][java-streaming] Remove unused generic parameter from SinkFunction.Context

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12906:
URL: https://github.com/apache/flink/pull/12906#issuecomment-658724114


   
   ## CI report:
   
   * 856e797c0ff602f2e6c7aeb89511218d5dfe2c14 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4532)
 
   * 09455c5c0c6bd961650e28994a30629e9269c679 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4536)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] nielsbasjes commented on a change in pull request #12907: [FLINK-18607][build] Give the maven module a human readable name

2020-07-15 Thread GitBox


nielsbasjes commented on a change in pull request #12907:
URL: https://github.com/apache/flink/pull/12907#discussion_r455066935



##
File path: 
flink-examples/flink-examples-build-helper/flink-examples-streaming-gcp-pubsub/pom.xml
##
@@ -29,7 +29,7 @@ under the License.

 

	<artifactId>flink-examples-streaming-gcp-pubsub_${scala.binary.version}</artifactId>
-	<name>flink-examples-streaming-gcp-pubsub</name>
+	<name>Flink : Examples : Dist : Streaming Google PubSub</name>

Review comment:
   Yes, it could be. I had doubts about this one. The artifact name says 'build 
helper', yet the purpose is making it available in the dist.









[GitHub] [flink] flinkbot edited a comment on pull request #12907: [FLINK-18607][build] Give the maven module a human readable name

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12907:
URL: https://github.com/apache/flink/pull/12907#issuecomment-658734085


   
   ## CI report:
   
   * bcd598e98411c31d577e8c1ee09c0468decd005a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4535)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12906: [FLINK-18606][java-streaming] Remove unused generic parameter from SinkFunction.Context

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12906:
URL: https://github.com/apache/flink/pull/12906#issuecomment-658724114


   
   ## CI report:
   
   * 856e797c0ff602f2e6c7aeb89511218d5dfe2c14 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4532)
 
   * 09455c5c0c6bd961650e28994a30629e9269c679 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12908: [FLINK-18449][table sql/api]Kafka topic discovery & partition discove…

2020-07-15 Thread GitBox


flinkbot commented on pull request #12908:
URL: https://github.com/apache/flink/pull/12908#issuecomment-658771971


   
   ## CI report:
   
   * 1a4e5ee2a2f11ba650a98c1a211cea75736fe79d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink-web] kl0u opened a new pull request #360: Introducing the Application Mode blogpost.

2020-07-15 Thread GitBox


kl0u opened a new pull request #360:
URL: https://github.com/apache/flink-web/pull/360


   







[GitHub] [flink] flinkbot commented on pull request #12908: [FLINK-18449][table sql/api]Kafka topic discovery & partition discove…

2020-07-15 Thread GitBox


flinkbot commented on pull request #12908:
URL: https://github.com/apache/flink/pull/12908#issuecomment-658758773


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1a4e5ee2a2f11ba650a98c1a211cea75736fe79d (Wed Jul 15 
13:11:58 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-18449) Make topic discovery and partition discovery configurable for FlinkKafkaConsumer in Table API

2020-07-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18449:
---
Labels: pull-request-available  (was: )

> Make topic discovery and partition discovery configurable for 
> FlinkKafkaConsumer in Table API
> -
>
> Key: FLINK-18449
> URL: https://issues.apache.org/jira/browse/FLINK-18449
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Shengkai Fang
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> In the streaming API, we can use a regex to find topics and enable partition 
> discovery by setting a non-negative value for 
> `{{flink.partition-discovery.interval-millis}}`. However, this does not work 
> in the Table API. I think we can add options such as 'topic-regex' and 
> '{{partition-discovery.interval-millis}}' in the WITH block for users.





[GitHub] [flink] fsk119 opened a new pull request #12908: [FLINK-18449][table sql/api]Kafka topic discovery & partition discove…

2020-07-15 Thread GitBox


fsk119 opened a new pull request #12908:
URL: https://github.com/apache/flink/pull/12908


   …ry dynamically in table api
   
   
   
   ## What is the purpose of the change
   
   *Enable Kafka Connector topic discovery & partition discovery in table api.*
   
   
   ## Brief change log
   
 - *Expose option 'topic-pattern' and 
'scan.topic-partition-discovery.interval'*
 - *Add validation for source when setting 'topic-pattern' and 'topic' 
together and setting 'topic-pattern' for sink.*
 - *Read value from Table option and use the value to build kafka consumer.*
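The mutual-exclusion check described in the change log above can be sketched as follows. The helper name, option keys, and error messages are illustrative; the actual validation lives in the Flink Kafka table factory:

```python
def validate_topic_options(options: dict, is_source: bool) -> None:
    """Sketch of the validation described above: 'topic' and 'topic-pattern'
    are mutually exclusive, and 'topic-pattern' is only valid for sources."""
    has_topic = "topic" in options
    has_pattern = "topic-pattern" in options
    if has_topic and has_pattern:
        raise ValueError("Options 'topic' and 'topic-pattern' must not be set together.")
    if has_pattern and not is_source:
        raise ValueError("Option 'topic-pattern' is only supported for sources.")

# A source may use a pattern plus a discovery interval for topic discovery:
validate_topic_options(
    {"topic-pattern": "metrics-.*",
     "scan.topic-partition-discovery.interval": "30s"},
    is_source=True)
```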
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - *Added integration tests for new features*
 - *Added test that validates that setting topic and topic pattern together 
will fail and setting 'topic-pattern' for sink will fail.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / **docs** / 
JavaDocs / not documented)
   







[jira] [Created] (FLINK-18609) The user-defined jdbc-source process before add window ,job finished early.

2020-07-15 Thread ZHangTianLong (Jira)
ZHangTianLong created FLINK-18609:
-

 Summary: The user-defined jdbc-source process before add window 
,job finished early.
 Key: FLINK-18609
 URL: https://issues.apache.org/jira/browse/FLINK-18609
 Project: Flink
  Issue Type: Bug
  Components: API / DataSet
Affects Versions: 1.10.0
Reporter: ZHangTianLong


When I try to process MySQL data with a user-defined data source and add a 
window, the job finishes as soon as the data has been read, before the window 
is triggered, so the data is never processed. What should I do?





[GitHub] [flink] KurtYoung commented on a change in pull request #12907: [FLINK-18607][build] Give the maven module a human readable name

2020-07-15 Thread GitBox


KurtYoung commented on a change in pull request #12907:
URL: https://github.com/apache/flink/pull/12907#discussion_r455029781



##
File path: 
flink-examples/flink-examples-build-helper/flink-examples-streaming-gcp-pubsub/pom.xml
##
@@ -29,7 +29,7 @@ under the License.

 

	<artifactId>flink-examples-streaming-gcp-pubsub_${scala.binary.version}</artifactId>
-	<name>flink-examples-streaming-gcp-pubsub</name>
+	<name>Flink : Examples : Dist : Streaming Google PubSub</name>

Review comment:
   Dist -> Helper?









[GitHub] [flink] flinkbot edited a comment on pull request #12907: [FLINK-18607][build] Give the maven module a human readable name

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12907:
URL: https://github.com/apache/flink/pull/12907#issuecomment-658734085


   
   ## CI report:
   
   * bcd598e98411c31d577e8c1ee09c0468decd005a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4535)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] godfreyhe commented on a change in pull request #12867: [FLINK-18558][streaming] Introduce collect iterator with at least once semantics and exactly once semantics without fault toler

2020-07-15 Thread GitBox


godfreyhe commented on a change in pull request #12867:
URL: https://github.com/apache/flink/pull/12867#discussion_r454930701



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectResultIterator.java
##
@@ -40,18 +43,37 @@
public CollectResultIterator(
CompletableFuture operatorIdFuture,
TypeSerializer serializer,
-   String accumulatorName) {
-   this.fetcher = new CollectResultFetcher<>(operatorIdFuture, 
serializer, accumulatorName);
+   String accumulatorName,
+   CheckpointConfig checkpointConfig) {
+   if (checkpointConfig.getCheckpointingMode() == 
CheckpointingMode.EXACTLY_ONCE) {
+   if (checkpointConfig.getCheckpointInterval() >= 
CheckpointCoordinatorConfiguration.MINIMAL_CHECKPOINT_TIME) {
+   this.fetcher = new CollectResultFetcher<>(
+   new 
CheckpointedCollectResultBuffer<>(serializer),
+   operatorIdFuture,
+   accumulatorName);
+   } else {
+   this.fetcher = new CollectResultFetcher<>(
+   new 
UncheckpointedCollectResultBuffer<>(serializer, false),
+   operatorIdFuture,
+   accumulatorName);
+   }

Review comment:
   I think this branch is unnecessary, because a checkpoint interval less than 
`MINIMAL_CHECKPOINT_TIME` is illegal; many places have such validation, e.g. 
`CheckpointConfig.setCheckpointInterval`
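For context, the validation referenced here follows the pattern below (a minimal sketch: the 10 ms `MINIMAL_CHECKPOINT_TIME` mirrors Flink's constant, but the class and method are simplified stand-ins, not Flink's actual API):

```java
// Minimal sketch of the interval validation pattern the review comment refers to.
// MINIMAL_CHECKPOINT_TIME mirrors Flink's 10 ms constant; everything else here
// is a simplified stand-in rather than Flink's real CheckpointConfig.
public class CheckpointIntervalCheck {
    static final long MINIMAL_CHECKPOINT_TIME = 10;

    static long setCheckpointInterval(long intervalMillis) {
        if (intervalMillis < MINIMAL_CHECKPOINT_TIME) {
            // Too-small intervals are rejected up front, so a downstream
            // branch for "interval < minimum" can never run.
            throw new IllegalArgumentException(
                "Checkpoint interval must be at least " + MINIMAL_CHECKPOINT_TIME + " ms");
        }
        return intervalMillis; // accepted value
    }

    public static void main(String[] args) {
        System.out.println(setCheckpointInterval(1000)); // valid interval is accepted
        try {
            setCheckpointInterval(5); // below the minimum, rejected up front
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

Because such validation rejects too-small intervals before a job is ever built, code downstream of it can assume the interval is already legal.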

##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunctionRandomITCase.java
##
@@ -0,0 +1,366 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators.collect;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.common.typeutils.base.IntSerializer;
+import org.apache.flink.runtime.jobgraph.OperatorID;
+import 
org.apache.flink.streaming.api.operators.collect.utils.CollectSinkFunctionTestWrapper;
+import org.apache.flink.streaming.api.operators.collect.utils.TestJobClient;
+import org.apache.flink.util.OptionalFailure;
+import org.apache.flink.util.TestLogger;
+import org.apache.flink.util.function.RunnableWithException;
+
+import org.hamcrest.CoreMatchers;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.ListIterator;
+import java.util.Map;
+import java.util.Random;
+import java.util.concurrent.CompletableFuture;
+
+import static 
org.apache.flink.streaming.api.operators.collect.utils.CollectSinkFunctionTestWrapper.ACCUMULATOR_NAME;
+
+/**
+ * Random IT cases for {@link CollectSinkFunction}.
+ * It will perform random insert, random checkpoint and random restart.
+ */
+public class CollectSinkFunctionRandomITCase extends TestLogger {
+
+   private static final int MAX_RESULTS_PER_BATCH = 3;
+   private static final JobID TEST_JOB_ID = new JobID();
+   private static final OperatorID TEST_OPERATOR_ID = new OperatorID();
+
+   private static final TypeSerializer serializer = 
IntSerializer.INSTANCE;
+
+   private CollectSinkFunctionTestWrapper functionWrapper;
+   private boolean jobFinished;
+
+   @Test
+   public void testUncheckpointedFunction() throws Exception {
+   // run multiple times for this random test
+   for (int testCount = 30; testCount > 0; testCount--) {
+   functionWrapper = new 
CollectSinkFunctionTestWrapper<>(serializer, MAX_RESULTS_PER_BATCH * 4);
+   jobFinished = false;
+
+   List expected = new ArrayList<>();
+   for (int i = 0; i < 50; i++) {
+   

[jira] [Updated] (FLINK-18608) CustomizedConvertRule#convertCast drops nullability

2020-07-15 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-18608:
-
Priority: Blocker  (was: Major)

> CustomizedConvertRule#convertCast drops nullability
> ---
>
> Key: FLINK-18608
> URL: https://issues.apache.org/jira/browse/FLINK-18608
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Flavio Pompermaier
>Priority: Blocker
> Fix For: 1.11.2
>
>
> The following program shows that nullability is not respected during the 
> translation:
> {code:java}
>  final TableEnvironment tableEnv = 
> DatalinksExecutionEnvironment.getBatchTableEnv();
> final Table inputTable = tableEnv.fromValues(//
> DataTypes.ROW(//
> DataTypes.FIELD("col1", DataTypes.STRING()), //
> DataTypes.FIELD("col2", DataTypes.STRING())//
> ), //
> Row.of(1L, "Hello"), //
> Row.of(2L, "Hello"), //
> Row.of(3L, ""), //
> Row.of(4L, "Ciao"));
> tableEnv.createTemporaryView("ParquetDataset", inputTable);
> tableEnv.executeSql(//
> "CREATE TABLE `out` (\n" + //
> "col1 STRING,\n" + //
> "col2 STRING\n" + //
> ") WITH (\n" + //
> " 'connector' = 'filesystem',\n" + //
> // " 'format' = 'parquet',\n" + //
> " 'update-mode' = 'append',\n" + //
> " 'path' = 'file://" + TEST_FOLDER + "',\n" + //
> " 'sink.shuffle-by-partition.enable' = 'true'\n" + //
> ")");
> tableEnv.executeSql("INSERT INTO `out` SELECT * FROM ParquetDataset");
> {code}
> -
> Exception in thread "main" java.lang.AssertionError: Conversion to relational 
> algebra failed to preserve datatypes:
> validated type:
> RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" col1, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" col2) NOT NULL
> converted type:
> RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" NOT NULL col1, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" NOT NULL col2) NOT NULL
> rel:
> LogicalProject(col1=[$0], col2=[$1])
>   LogicalUnion(all=[true])
> LogicalProject(col1=[_UTF-16LE'1':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"], col2=[_UTF-16LE'Hello':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"])
>   LogicalValues(tuples=[[{ 0 }]])
> LogicalProject(col1=[_UTF-16LE'2':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"], col2=[_UTF-16LE'Hello':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"])
>   LogicalValues(tuples=[[{ 0 }]])
> LogicalProject(col1=[_UTF-16LE'3':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"], col2=[_UTF-16LE'':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"])
>   LogicalValues(tuples=[[{ 0 }]])
> LogicalProject(col1=[_UTF-16LE'4':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"], col2=[_UTF-16LE'Ciao':VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE"])
>   LogicalValues(tuples=[[{ 0 }]])
> at 
> org.apache.calcite.sql2rel.SqlToRelConverter.checkConvertedType(SqlToRelConverter.java:465)
> at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:580)
> at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:164)
> at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:151)
> at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:773)
> at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:745)
> at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
> at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
> at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
> at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:678)





[jira] [Updated] (FLINK-18608) CustomizedConvertRule#convertCast drops nullability

2020-07-15 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-18608:
-
Fix Version/s: 1.11.2

> CustomizedConvertRule#convertCast drops nullability
> ---
>
> Key: FLINK-18608
> URL: https://issues.apache.org/jira/browse/FLINK-18608
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Flavio Pompermaier
>Priority: Major
> Fix For: 1.11.2
>
>
> (Issue description and stack trace quoted in full above.)





[GitHub] [flink] flinkbot commented on pull request #12907: [FLINK-18607][build] Give the maven module a human readable name

2020-07-15 Thread GitBox


flinkbot commented on pull request #12907:
URL: https://github.com/apache/flink/pull/12907#issuecomment-658734085


   
   ## CI report:
   
   * bcd598e98411c31d577e8c1ee09c0468decd005a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12906: [FLINK-18606][java-streaming] Remove unused generic parameter from SinkFunction.Context

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12906:
URL: https://github.com/apache/flink/pull/12906#issuecomment-658724114


   
   ## CI report:
   
   * 856e797c0ff602f2e6c7aeb89511218d5dfe2c14 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4532)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12907: [FLINK-18607][build] Give the maven module a human readable name

2020-07-15 Thread GitBox


flinkbot commented on pull request #12907:
URL: https://github.com/apache/flink/pull/12907#issuecomment-658731246


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit bcd598e98411c31d577e8c1ee09c0468decd005a (Wed Jul 15 
12:11:46 UTC 2020)
   
   **Warnings:**
* **187 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-18607).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Created] (FLINK-18608) CustomizedConvertRule#convertCast drops nullability

2020-07-15 Thread Flavio Pompermaier (Jira)
Flavio Pompermaier created FLINK-18608:
--

 Summary: CustomizedConvertRule#convertCast drops nullability
 Key: FLINK-18608
 URL: https://issues.apache.org/jira/browse/FLINK-18608
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.11.0
Reporter: Flavio Pompermaier


The following program shows that nullability is not respected during the 
translation:


{code:java}
 final TableEnvironment tableEnv = 
DatalinksExecutionEnvironment.getBatchTableEnv();
final Table inputTable = tableEnv.fromValues(//
DataTypes.ROW(//
DataTypes.FIELD("col1", DataTypes.STRING()), //
DataTypes.FIELD("col2", DataTypes.STRING())//
), //
Row.of(1L, "Hello"), //
Row.of(2L, "Hello"), //
Row.of(3L, ""), //
Row.of(4L, "Ciao"));
tableEnv.createTemporaryView("ParquetDataset", inputTable);
tableEnv.executeSql(//
"CREATE TABLE `out` (\n" + //
"col1 STRING,\n" + //
"col2 STRING\n" + //
") WITH (\n" + //
" 'connector' = 'filesystem',\n" + //
// " 'format' = 'parquet',\n" + //
" 'update-mode' = 'append',\n" + //
" 'path' = 'file://" + TEST_FOLDER + "',\n" + //
" 'sink.shuffle-by-partition.enable' = 'true'\n" + //
")");

tableEnv.executeSql("INSERT INTO `out` SELECT * FROM ParquetDataset");
{code}


-

Exception in thread "main" java.lang.AssertionError: Conversion to relational 
algebra failed to preserve datatypes:
validated type:
RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" col1, 
VARCHAR(2147483647) CHARACTER SET "UTF-16LE" col2) NOT NULL
converted type:
RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" NOT NULL col1, 
VARCHAR(2147483647) CHARACTER SET "UTF-16LE" NOT NULL col2) NOT NULL
rel:
LogicalProject(col1=[$0], col2=[$1])
  LogicalUnion(all=[true])
LogicalProject(col1=[_UTF-16LE'1':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"], col2=[_UTF-16LE'Hello':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"])
  LogicalValues(tuples=[[{ 0 }]])
LogicalProject(col1=[_UTF-16LE'2':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"], col2=[_UTF-16LE'Hello':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"])
  LogicalValues(tuples=[[{ 0 }]])
LogicalProject(col1=[_UTF-16LE'3':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"], col2=[_UTF-16LE'':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"])
  LogicalValues(tuples=[[{ 0 }]])
LogicalProject(col1=[_UTF-16LE'4':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"], col2=[_UTF-16LE'Ciao':VARCHAR(2147483647) CHARACTER SET 
"UTF-16LE"])
  LogicalValues(tuples=[[{ 0 }]])

at 
org.apache.calcite.sql2rel.SqlToRelConverter.checkConvertedType(SqlToRelConverter.java:465)
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:580)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:164)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:151)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:773)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:745)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:678)
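The assertion above shows the validated row type with nullable columns but the converted one with `NOT NULL` columns, i.e. nullability is dropped somewhere during conversion. A tiny illustration of this bug class follows; the `Type` class and `cast*` helpers are hypothetical stand-ins for illustration, not Flink's `LogicalType` API:

```java
// Tiny illustration of the bug class described in this ticket: converting a
// type while forgetting to carry over its nullability flag. Type, castBuggy
// and castFixed are hypothetical helpers, not Flink's actual API.
public class NullabilityDrop {
    static final class Type {
        final String name;
        final boolean nullable;
        Type(String name, boolean nullable) { this.name = name; this.nullable = nullable; }
        @Override public String toString() { return name + (nullable ? "" : " NOT NULL"); }
    }

    // Buggy conversion: always emits the target with default (NOT NULL) nullability.
    static Type castBuggy(Type source, String target) {
        return new Type(target, false); // nullability of `source` is lost here
    }

    // Fixed conversion: carries the source's nullability over to the result.
    static Type castFixed(Type source, String target) {
        return new Type(target, source.nullable);
    }

    public static void main(String[] args) {
        Type col = new Type("VARCHAR", true); // a nullable column
        System.out.println(castBuggy(col, "VARCHAR")); // VARCHAR NOT NULL (wrong)
        System.out.println(castFixed(col, "VARCHAR")); // VARCHAR (nullability kept)
    }
}
```

The mismatch between the two printed types is exactly the shape of the "Conversion to relational algebra failed to preserve datatypes" assertion in the report.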





[GitHub] [flink] nielsbasjes commented on pull request #12907: [FLINK-18607][build] Give the maven module a human readable name

2020-07-15 Thread GitBox


nielsbasjes commented on pull request #12907:
URL: https://github.com/apache/flink/pull/12907#issuecomment-658728948


   Note that all 'pom'  modules have a name that ends 
with `:` because that makes the ordering in the Maven overview in 
IntelliJ better.
   
   Also note that the naming convention I show here is based upon my own 
preference: what I find easy to read.







[GitHub] [flink] nielsbasjes edited a comment on pull request #12907: [FLINK-18607][build] Give the maven module a human readable name

2020-07-15 Thread GitBox


nielsbasjes edited a comment on pull request #12907:
URL: https://github.com/apache/flink/pull/12907#issuecomment-658728948


   Note that all `pom`  modules have a name that ends 
with `:` because that makes the ordering in the Maven overview in 
IntelliJ better.
   
   Also note that the naming convention I show here is based upon my own 
preference: what I find easy to read.







[jira] [Updated] (FLINK-18607) Give the maven modules human readable names.

2020-07-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18607:
---
Labels: pull-request-available  (was: )

> Give the maven modules human readable names.
> 
>
> Key: FLINK-18607
> URL: https://issues.apache.org/jira/browse/FLINK-18607
> Project: Flink
>  Issue Type: Improvement
>Reporter: Niels Basjes
>Priority: Major
>  Labels: pull-request-available
>
> As discussed on the mailing list.
> https://lists.apache.org/thread.html/r6331f0ae603c8ace30085c1f25a1935050224507bfade89ffeadbc7b%40%3Cdev.flink.apache.org%3E
> When building Flink the output both on the commandline and in IDEs like 
> IntelliJ always show the artifact name as the module name.
> By simply setting a more human readable module name in all of the pom.xml 
> files the build output becomes much easier to read for developers.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] nielsbasjes opened a new pull request #12907: [FLINK-18607][build] Give the maven module a human readable name

2020-07-15 Thread GitBox


nielsbasjes opened a new pull request #12907:
URL: https://github.com/apache/flink/pull/12907


   ## What is the purpose of the change
   
   Make the build output more human readable for developers in various situations.
   
   ## Brief change log
   
   Add or change the `` of each maven module (i.e. in every pom.xml)
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   The simplest way to test is by doing a `mvn clean` and verifying that the list of names at the end looks good.
   Also, looking at the change set, verify that the names for each module are correct.
   
   In some exceptional cases I made the name reflect the purpose a bit more, as in the case of 
   `flink-examples/flink-examples-build-helper`, where the description indicates 
`This is a utility module for building example jars to be used in flink-dist.`, 
so I named it `Flink : Examples : Dist : `
   
   Fun fact: I noticed that Maven changes the ordering of the modules during the 
build quite a lot, because the ordering of the modules in the pom.xml apparently 
does not match the configured dependencies between the modules, so Maven 
reorders the build by itself.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no**
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **no**
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: **no**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
   







[jira] [Commented] (FLINK-18600) Kerberized YARN per-job on Docker test failed to download JDK 8u251

2020-07-15 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158102#comment-17158102
 ] 

Dawid Wysakowicz commented on FLINK-18600:
--

Disabled temporarily affected tests
* master:
** 9036bc0fd8b24d0a270f964a8116f5db781e4b3c
* 1.11:
** 065eb7242e2fb437962b89f276fd82513db9ffcd


> Kerberized YARN per-job on Docker test failed to download JDK 8u251
> ---
>
> Key: FLINK-18600
> URL: https://issues.apache.org/jira/browse/FLINK-18600
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
> Fix For: 1.12.0, 1.11.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4514=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529
> {code}
> + mkdir -p /usr/java/default
> + curl -Ls 
> https://download.oracle.com/otn-pub/java/jdk/8u251-b08/3d5a2bb8f8d4428bbe94aed7ec7ae784/jdk-8u251-linux-x64.tar.gz
>  -H Cookie: oraclelicense=accept-securebackup-cookie
> + tar --strip-components=1 -xz -C /usr/java/default/
> gzip: stdin: not in gzip format
> tar: Child returned status 1
> {code}
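The "not in gzip format" failure above typically means the download returned something other than the expected archive (e.g. an HTML error page instead of the JDK tarball). A minimal sketch of a magic-byte sanity check that would catch this before `tar` runs; `looksLikeGzip` is a hypothetical helper, not part of the test scripts:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Checks whether a stream starts with the gzip magic bytes 0x1f 0x8b.
// A failed download often returns an HTML page instead of the archive,
// which is what makes `tar ... -xz` fail with "not in gzip format".
public class GzipMagicCheck {
    static boolean looksLikeGzip(InputStream in) throws IOException {
        int b1 = in.read();
        int b2 = in.read();
        return b1 == 0x1f && b2 == 0x8b;
    }

    public static void main(String[] args) throws IOException {
        byte[] gzipHeader = {0x1f, (byte) 0x8b, 8};            // real gzip prefix
        byte[] htmlPage = "<html>error</html>".getBytes();     // error-page payload
        System.out.println(looksLikeGzip(new ByteArrayInputStream(gzipHeader))); // true
        System.out.println(looksLikeGzip(new ByteArrayInputStream(htmlPage)));   // false
    }
}
```

Failing fast on the magic bytes would turn the cryptic `tar` error into a clear "download did not return a gzip archive" message.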





[GitHub] [flink] flinkbot edited a comment on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-658715358


   
   ## CI report:
   
   * c9e05d742e3c66f494399d67077a8bbac0538803 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4531)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12906: [FLINK-18606][java-streaming] Remove unused generic parameter from SinkFunction.Context

2020-07-15 Thread GitBox


flinkbot commented on pull request #12906:
URL: https://github.com/apache/flink/pull/12906#issuecomment-658724114


   
   ## CI report:
   
   * 856e797c0ff602f2e6c7aeb89511218d5dfe2c14 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12827: [FLINK-18163][task] Add RecordWriter.volatileFlusherException

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12827:
URL: https://github.com/apache/flink/pull/12827#issuecomment-654120002


   
   ## CI report:
   
   * 0f39d59075500b5bed9817cc82fc9bf2e097385d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4530)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Assigned] (FLINK-18602) Support specific offset for topic list for kafka connector in table api

2020-07-15 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-18602:
---

Assignee: Shengkai Fang

> Support specific offset for topic list for kafka connector in table api
> ---
>
> Key: FLINK-18602
> URL: https://issues.apache.org/jira/browse/FLINK-18602
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Shengkai Fang
>Assignee: Shengkai Fang
>Priority: Major
> Fix For: 1.12.0
>
>
> During FLINK-18449, we decided to support topic discovery for the kafka connector 
> in the table api. However, we can currently only use the format 
> {{'partition:0,offset:42;partition:1,offset:300'}} to specify the offsets for a 
> single topic. I think the better format in the topic discovery situation is 
> {{'topic:topic-1,partition:0,offset:42;topic:topic-2,partition:1,offset:300'}}.



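The proposed per-topic offset string can be parsed straightforwardly. The sketch below is illustrative only (the `TopicOffsetSpec` class and its key format are assumptions, not Flink's actual parser or option handling):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of parsing the proposed per-topic offset format from the ticket:
//   topic:topic-1,partition:0,offset:42;topic:topic-2,partition:1,offset:300
// Illustrative only; this is not Flink's actual parser or API.
public class TopicOffsetSpec {
    /** Maps "topic-partition" keys to starting offsets. */
    static Map<String, Long> parse(String spec) {
        Map<String, Long> offsets = new LinkedHashMap<>();
        for (String entry : spec.split(";")) {       // one entry per topic-partition
            String topic = null;
            int partition = -1;
            long offset = -1;
            for (String field : entry.split(",")) {  // fields are key:value pairs
                String[] kv = field.split(":", 2);
                switch (kv[0].trim()) {
                    case "topic": topic = kv[1].trim(); break;
                    case "partition": partition = Integer.parseInt(kv[1].trim()); break;
                    case "offset": offset = Long.parseLong(kv[1].trim()); break;
                    default: throw new IllegalArgumentException("Unknown field: " + kv[0]);
                }
            }
            offsets.put(topic + "-" + partition, offset);
        }
        return offsets;
    }

    public static void main(String[] args) {
        Map<String, Long> m =
            parse("topic:topic-1,partition:0,offset:42;topic:topic-2,partition:1,offset:300");
        System.out.println(m); // {topic-1-0=42, topic-2-1=300}
    }
}
```

Adding the explicit `topic:` field keeps each entry self-describing once more than one topic can appear in the spec.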


[jira] [Commented] (FLINK-18449) Make topic discovery and partition discovery configurable for FlinkKafkaConsumer in Table API

2020-07-15 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158098#comment-17158098
 ] 

Jark Wu commented on FLINK-18449:
-

+1 to 'scan.topic-partition-discovery.interval'

> Make topic discovery and partition discovery configurable for 
> FlinkKafkaConsumer in Table API
> -
>
> Key: FLINK-18449
> URL: https://issues.apache.org/jira/browse/FLINK-18449
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Shengkai Fang
>Assignee: Shengkai Fang
>Priority: Major
> Fix For: 1.12.0
>
>
> In the streaming API, we can use a regex to discover topics and enable 
> partition discovery by setting a non-negative value for 
> `{{flink.partition-discovery.interval-millis}}`. However, this does not work 
> in the Table API. I think we can add options such as 'topic-regex' and 
> '{{partition-discovery.interval-millis}}' to the WITH block for users.
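A hypothetical DDL using the option names discussed in this thread ('topic-regex' and the 'scan.topic-partition-discovery.interval' spelling +1'd earlier); the finally released option names may differ:

```sql
-- Illustrative only: option names follow the proposal in this thread.
CREATE TABLE kafka_source (
  user_id BIGINT,
  item_id BIGINT,
  ts TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic-regex' = 'user_events_.*',                    -- regex-based topic discovery
  'properties.bootstrap.servers' = 'localhost:9092',
  'scan.topic-partition-discovery.interval' = '30 s',  -- discovery interval
  'format' = 'json'
);
```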



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12906: [FLINK-18606][java-streaming] Remove unused generic parameter from SinkFunction.Context

2020-07-15 Thread GitBox


flinkbot commented on pull request #12906:
URL: https://github.com/apache/flink/pull/12906#issuecomment-658715776


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 856e797c0ff602f2e6c7aeb89511218d5dfe2c14 (Wed Jul 15 
11:33:06 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-18606).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18606) Remove generic parameter from SinkFunction.Context

2020-07-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18606:
---
Labels: pull-request-available  (was: )

> Remove generic parameter from SinkFunction.Context
> -
>
> Key: FLINK-18606
> URL: https://issues.apache.org/jira/browse/FLINK-18606
> Project: Flink
>  Issue Type: Improvement
>Reporter: Niels Basjes
>Priority: Major
>  Labels: pull-request-available
>
> As discussed on the mailing list 
> https://lists.apache.org/thread.html/ra72d406e262f3b30ef4df95e8e4ba2d765859203499be3b6d5cd59a2%40%3Cdev.flink.apache.org%3E
> The SinkFunction.Context interface declares a generic parameter that it never 
> uses.
> In most places where this interface is used, the generic parameter is omitted, 
> which produces many warnings about using "raw types".
> This issue is to remove that generic parameter and assess the impact.
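The change can be illustrated with a stripped-down stand-in for the interface. The names below are simplified for illustration and are not the full Flink SinkFunction.Context contract:

```java
// Illustrative sketch of removing an unused type parameter.
public class Example {

    // Before: the type parameter T is declared but never referenced, so
    // every use site must either supply a meaningless argument or fall
    // back to a raw type (triggering "raw type" warnings).
    interface ContextBefore<T> {
        long currentProcessingTime();
    }

    // After: same contract, no unused parameter, no raw-type warnings.
    interface ContextAfter {
        long currentProcessingTime();
    }

    public static void main(String[] args) {
        // Single abstract method, so a lambda works for both variants.
        ContextAfter ctx = () -> 123L;
        System.out.println(ctx.currentProcessingTime()); // 123
    }
}
```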



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


flinkbot commented on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-658715358


   
   ## CI report:
   
   * c9e05d742e3c66f494399d67077a8bbac0538803 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] nielsbasjes opened a new pull request #12906: [FLINK-18606][java-streaming] Remove unused generic parameter from SinkFunction.Context

2020-07-15 Thread GitBox


nielsbasjes opened a new pull request #12906:
URL: https://github.com/apache/flink/pull/12906


   ## What is the purpose of the change
   
   The SinkFunction.Context has an unused generic parameter which causes 
needless warnings in many places.
   
   ## Brief change log
   
   Remove the type parameter from SinkFunction.Context
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no**
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **yes**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **yes**
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: **no**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] pyscala commented on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


pyscala commented on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-658712259


   Hi @JingsongLi, looking forward to your review. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rkhachatryan commented on pull request #12827: [FLINK-18163][task] Add RecordWriter.volatileFlusherException

2020-07-15 Thread GitBox


rkhachatryan commented on pull request #12827:
URL: https://github.com/apache/flink/pull/12827#issuecomment-658711550


   No,
   > CI build failure unrelated: 
https://issues.apache.org/jira/browse/FLINK-18485
   
   (2nd build is PENDING at the moment)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12904: [FLINK-18569][table] Support limit() for unordered tables

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12904:
URL: https://github.com/apache/flink/pull/12904#issuecomment-658669745


   
   ## CI report:
   
   * 4efba99139063f911fb977ad1d1ef8562fe982d3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4529)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12827: [FLINK-18163][task] Add RecordWriter.volatileFlusherException

2020-07-15 Thread GitBox


flinkbot edited a comment on pull request #12827:
URL: https://github.com/apache/flink/pull/12827#issuecomment-654120002


   
   ## CI report:
   
   * a47aa124dba8c4badae4567bb78cbc92d4ef072f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4263)
 
   * 0f39d59075500b5bed9817cc82fc9bf2e097385d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4530)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


flinkbot commented on pull request #12905:
URL: https://github.com/apache/flink/pull/12905#issuecomment-658707054


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c9e05d742e3c66f494399d67077a8bbac0538803 (Wed Jul 15 
11:11:37 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-17789).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17789) DelegatingConfiguration should remove prefix instead of add prefix in toMap

2020-07-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17789:
---
Labels: pull-request-available  (was: )

> DelegatingConfiguration should remove prefix instead of add prefix in toMap
> ---
>
> Key: FLINK-17789
> URL: https://issues.apache.org/jira/browse/FLINK-17789
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> {code:java}
> Configuration conf = new Configuration();
> conf.setString("k0", "v0");
> conf.setString("prefix.k1", "v1");
> DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");
> System.out.println(dc.getString("k0", "empty")); // empty
> System.out.println(dc.getString("k1", "empty")); // v1
> System.out.println(dc.toMap().get("k1")); // should be v1, but null
> System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, 
> but v1
> {code}
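A minimal sketch of the proposed fix. The class `PrefixView` is invented here for illustration; the real DelegatingConfiguration wraps a Flink Configuration rather than a plain Map. The point is that toMap() should strip the prefix, matching the semantics of getString() and addAllToProperties():

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for DelegatingConfiguration: toMap() keeps only
// keys that carry the prefix and returns them with the prefix stripped.
public class PrefixView {
    private final Map<String, String> backing;
    private final String prefix;

    public PrefixView(Map<String, String> backing, String prefix) {
        this.backing = backing;
        this.prefix = prefix;
    }

    public Map<String, String> toMap() {
        Map<String, String> result = new HashMap<>();
        for (Map.Entry<String, String> e : backing.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                // strip the prefix instead of prepending it a second time
                result.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("k0", "v0");
        conf.put("prefix.k1", "v1");
        Map<String, String> view = new PrefixView(conf, "prefix.").toMap();
        System.out.println(view.get("k1"));               // v1
        System.out.println(view.get("prefix.prefix.k1")); // null
    }
}
```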



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18607) Give the maven modules human readable names.

2020-07-15 Thread Niels Basjes (Jira)
Niels Basjes created FLINK-18607:


 Summary: Give the maven modules human readable names.
 Key: FLINK-18607
 URL: https://issues.apache.org/jira/browse/FLINK-18607
 Project: Flink
  Issue Type: Improvement
Reporter: Niels Basjes


As discussed on the mailing list.

https://lists.apache.org/thread.html/r6331f0ae603c8ace30085c1f25a1935050224507bfade89ffeadbc7b%40%3Cdev.flink.apache.org%3E

When building Flink, the output both on the command line and in IDEs like 
IntelliJ always shows the artifact name as the module name.

By simply setting a more human-readable module name in each of the pom.xml 
files, the build output becomes much easier to read for developers.
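Such a change could look like the following pom.xml fragment; the naming scheme shown is hypothetical, and the fragment omits groupId, version, and parent for brevity:

```xml
<!-- Hypothetical example: an explicit, human-readable <name> for a module.
     Without it, Maven falls back to the artifactId in the reactor summary. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <artifactId>flink-streaming-java</artifactId>
  <!-- shown in build output and IDEs instead of the artifactId -->
  <name>Flink : Streaming Java</name>
</project>
```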

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] pyscala opened a new pull request #12905: [FLINK-17789][API/Core] DelegatingConfiguration should remove prefix …

2020-07-15 Thread GitBox


pyscala opened a new pull request #12905:
URL: https://github.com/apache/flink/pull/12905


   
   
   
   ## What is the purpose of the change
   
   1. DelegatingConfiguration should remove the prefix instead of adding it in toMap
   2. Make `toMap` consistent with `addAllToProperties` in class 
`DelegatingConfiguration`
   
   
   ## Brief change log
   
   1. DelegatingConfiguration should remove the prefix instead of adding it in toMap
   2. Make `toMap` consistent with `addAllToProperties` in class 
`DelegatingConfiguration`
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   unit test `testDelegationConfigurationToMapConsistentWithAddAllToProperties` 
in class `DelegatingConfigurationTest`
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): ( no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18606) Remove generic parameter from SinkFunction.Context

2020-07-15 Thread Niels Basjes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158074#comment-17158074
 ] 

Niels Basjes commented on FLINK-18606:
--

The impact seems very limited.
I expect normal user applications only come into contact with the Context by 
implementing the {code}void invoke(IN value, Context context){code} method of a 
new-style SinkFunction.
There the parameter is already absent, so there should be no impact for them.

The only other affected place is SinkContextUtil, which is marked as 
{code}@Internal{code}.


> Remove generic parameter from SinkFunction.Context
> -
>
> Key: FLINK-18606
> URL: https://issues.apache.org/jira/browse/FLINK-18606
> Project: Flink
>  Issue Type: Improvement
>Reporter: Niels Basjes
>Priority: Major
>
> As discussed on the mailing list 
> https://lists.apache.org/thread.html/ra72d406e262f3b30ef4df95e8e4ba2d765859203499be3b6d5cd59a2%40%3Cdev.flink.apache.org%3E
> The SinkFunction.Context interface declares a generic parameter that it never 
> uses.
> In most places where this interface is used, the generic parameter is omitted, 
> which produces many warnings about using "raw types".
> This issue is to remove that generic parameter and assess the impact.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

