[GitHub] [flink] AHeise commented on a change in pull request #15972: Add common source and operator metrics.

2021-07-07 Thread GitBox


AHeise commented on a change in pull request #15972:
URL: https://github.com/apache/flink/pull/15972#discussion_r665891888



##
File path: 
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/groups/SourceMetricGroup.java
##
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.metrics.groups;
+
+import org.apache.flink.metrics.Counter;
+import org.apache.flink.metrics.Gauge;
+import org.apache.flink.metrics.SettableGauge;
+
+import javax.annotation.concurrent.NotThreadSafe;
+
+/**
+ * Pre-defined metrics for sources.
+ *
+ * All metrics can only be accessed in the main operator thread.
+ */
+@NotThreadSafe
+public interface SourceMetricGroup extends OperatorMetricGroup {
+/** The total number of records that failed to be consumed, processed, or emitted. */
+Counter getNumRecordsInErrorsCounter();
+
+/**
+ * Adds an optional gauge for the last fetch time. Source readers can use this gauge to
+ * indicate the timestamp in milliseconds at which Flink fetched the last record.
+ *
+ * The timestamp will be used to calculate the currentFetchEventTimeLag metric:
+ * currentFetchEventTimeLag = FetchTime - EventTime.
+ *
+ * Note that this time must strictly reflect the time of the last polled record. For sources
+ * that retrieve batches from the external system, the best way is to attach the timestamp to
+ * the batch and return the time of that batch. For multi-threaded sources, the timestamp
+ * should be embedded into the hand-over data structure.
+ *
+ * @see SettableGauge SettableGauge to continuously update the value.
+ */
+<G extends Gauge<Long>> G addLastFetchTimeGauge(G lastFetchTimeGauge);
+
+/**
+ * Adds an optional gauge for the number of bytes that have not been fetched by the source,
+ * e.g. the remaining bytes in a file after the current read position of the file descriptor.
+ *
+ * Note that not every source can report this metric in a plausible and efficient way.
+ *
+ * @see SettableGauge SettableGauge to continuously update the value.
+ */
+<G extends Gauge<Long>> G addPendingBytesGauge(G pendingBytesGauge);

Review comment:
   I used `add` to be in line with `addGauge` of the `MetricGroup`. But I 
can see your point.
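For readers following along, the intended interplay of the two gauges can be sketched as follows. This is a hedged illustration, not the PR's implementation: `SettableGauge` is introduced by this very PR, so a minimal stand-in is defined inline, and the timestamps are made up.

```java
// Minimal stand-in for the SettableGauge proposed in this PR (hypothetical,
// not the real org.apache.flink.metrics.SettableGauge).
class SettableGauge<T> {
    private volatile T value;

    SettableGauge(T initial) {
        this.value = initial;
    }

    void setValue(T value) {
        this.value = value;
    }

    T getValue() {
        return value;
    }
}

public class FetchLagSketch {
    public static void main(String[] args) {
        SettableGauge<Long> lastFetchTime = new SettableGauge<>(0L);

        // The reader records the wall-clock time at which the last record
        // (or batch) was polled from the external system...
        long fetchTime = 1_000_000L;
        lastFetchTime.setValue(fetchTime);

        // ...and currentFetchEventTimeLag = FetchTime - EventTime is derived
        // from it and the event timestamp of the last emitted record.
        long eventTime = 999_400L;
        long currentFetchEventTimeLag = lastFetchTime.getValue() - eventTime;
        System.out.println(currentFetchEventTimeLag); // prints 600
    }
}
```

This also shows why the javadoc insists the gauge reflect the last polled record: if a stale fetch time were left in the gauge, the derived lag would be wrong.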




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16634) The PartitionDiscoverer in FlinkKafkaConsumer should not use the user provided client.id.

2021-07-07 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377082#comment-17377082
 ] 

Jiangjie Qin commented on FLINK-16634:
--

[~liufangliang] Done.

> The PartitionDiscoverer in FlinkKafkaConsumer should not use the user 
> provided client.id.
> -
>
> Key: FLINK-16634
> URL: https://issues.apache.org/jira/browse/FLINK-16634
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Jiangjie Qin
>Assignee: Fangliang Liu
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> The {{PartitionDiscoverer}} creates a {{KafkaConsumer}} using the client.id 
> from the user provided properties. This may cause the MBean to collide with 
> the fetching {{KafkaConsumer}}. The {{PartitionDiscoverer}} should use a 
> unique client.id instead, such as "PartitionDiscoverer-RANDOM_LONG"
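The suggested fix can be sketched as follows. This is a hedged sketch, not the actual patch: the helper name is made up, and the `PartitionDiscoverer` internals are not shown; only the "PartitionDiscoverer-RANDOM_LONG" naming from the issue is taken as given.

```java
import java.util.Properties;
import java.util.concurrent.ThreadLocalRandom;

public class ClientIdSketch {
    // Hypothetical helper: derive a discoverer-specific client.id so the
    // discovery consumer's MBean cannot collide with the fetching consumer's.
    static Properties withDiscovererClientId(Properties userProps) {
        Properties copy = new Properties();
        copy.putAll(userProps);
        copy.setProperty(
                "client.id",
                "PartitionDiscoverer-" + ThreadLocalRandom.current().nextLong());
        return copy;
    }

    public static void main(String[] args) {
        Properties userProps = new Properties();
        userProps.setProperty("client.id", "my-app");

        Properties discovererProps = withDiscovererClientId(userProps);
        // The user-facing properties are untouched; only the copy is rewritten.
        System.out.println(userProps.getProperty("client.id"));
        System.out.println(
                discovererProps.getProperty("client.id").startsWith("PartitionDiscoverer-"));
    }
}
```

Working on a copy keeps the user-supplied properties intact for the fetching consumer.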



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16634) The PartitionDiscoverer in FlinkKafkaConsumer should not use the user provided client.id.

2021-07-07 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned FLINK-16634:


Assignee: Fangliang Liu

> The PartitionDiscoverer in FlinkKafkaConsumer should not use the user 
> provided client.id.
> -
>
> Key: FLINK-16634
> URL: https://issues.apache.org/jira/browse/FLINK-16634
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Jiangjie Qin
>Assignee: Fangliang Liu
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> The {{PartitionDiscoverer}} creates a {{KafkaConsumer}} using the client.id 
> from the user provided properties. This may cause the MBean to collide with 
> the fetching {{KafkaConsumer}}. The {{PartitionDiscoverer}} should use a 
> unique client.id instead, such as "PartitionDiscoverer-RANDOM_LONG"





[GitHub] [flink] flinkbot edited a comment on pull request #16416: [FLINK-22657][Connectors/Hive] HiveParserDDLSemanticAnalyzer return operations directly

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16416:
URL: https://github.com/apache/flink/pull/16416#issuecomment-876055217


   
   ## CI report:
   
   * 9c60fc2fed1f4140035abe32b0b9b7cd8cc55973 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20128)
 
   * ff16e9cedb7f995409813ce18254add0c3f2420a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16404: [FLINK-23277][state/changelog] Store and recover TTL metadata using changelog

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16404:
URL: https://github.com/apache/flink/pull/16404#issuecomment-875114816


   
   ## CI report:
   
   * eea47c38b52880c66fd93dac162d5d7f895ca752 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20127)
 
   
   




[GitHub] [flink] lirui-apache commented on pull request #15965: [FLINK-23298][datagen] Normalize parameter names in RandomGeneratorVisitor and SequenceGeneratorVisitor

2021-07-07 Thread GitBox


lirui-apache commented on pull request #15965:
URL: https://github.com/apache/flink/pull/15965#issuecomment-876147727


   @fhan688 It has been merged in this commit: 
https://github.com/apache/flink/commit/e9cc49a31fc384bff1c84372788f14a173cd94b0






[GitHub] [flink] flinkbot edited a comment on pull request #16349: [FLINK-23267][table-planner-blink] Enable Java code splitting for all generated classes

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16349:
URL: https://github.com/apache/flink/pull/16349#issuecomment-872790415


   
   ## CI report:
   
   * 3936c76b6680245fe4184499db8656ffa00cb70b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20130)
 
   
   




[jira] [Comment Edited] (FLINK-23237) Add log to print data that failed to deserialize when ignore-parse-error=true

2021-07-07 Thread hehuiyuan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377059#comment-17377059
 ] 

hehuiyuan edited comment on FLINK-23237 at 7/8/21, 5:47 AM:


Hi [~Aiden Gong] [~jark], adding a log (debug or error) to print the data that failed to 
deserialize when `ignore-parse-error` = `true` would be useful. Could this be considered?

For example,

if the source has 1000 records but the sink only receives 998, we could easily 
troubleshoot and find the bad records in the log.


was (Author: hehuiyuan):
Hi  [~Aiden Gong]      [~jark]  , additionally, add log (debug or error) to 
print error data that failed to deserialize when set `ignore-parse-error` = 
`true`,  which is useful . Is it considered to do?

For example,

if the source has 1000 records, then the number of sink is 998. We can 
troubleshoot easily and find the error data from log.

> Add log to print data that failed to deserialize when  
> ignore-parse-error=true 
> ---
>
> Key: FLINK-23237
> URL: https://issues.apache.org/jira/browse/FLINK-23237
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: hehuiyuan
>Priority: Minor
>
> Add log to print error data that failed to deserialize when set 
> `ignore-parse-error` = `true`
>  
> {code:java}
> public RowData deserialize(@Nullable byte[] message) throws IOException {
>     if (message == null) {
>         return null;
>     }
>     try {
>         final JsonNode root = objectReader.readValue(message);
>         return (RowData) runtimeConverter.convert(root);
>     } catch (Throwable t) {
>         if (ignoreParseErrors) {
>             return null;
>         }
>         throw new IOException(
>                 String.format("Failed to deserialize CSV row '%s'.", new String(message)), t);
>     }
> }
> {code}
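The requested logging would slot into the catch block of the quoted method. A hedged sketch (the logger, class name, and message wording are placeholders, not a committed change; the real Flink code uses `objectReader.readValue` and SLF4J rather than the stand-ins here):

```java
import java.nio.charset.StandardCharsets;
import java.util.logging.Logger;

public class IgnoreParseErrorSketch {
    private static final Logger LOG = Logger.getLogger("DeserializationSchema");
    private final boolean ignoreParseErrors = true;

    // Mirrors the structure of the quoted deserialize() method: on failure,
    // log the raw payload before dropping the record instead of returning
    // null silently.
    Object deserialize(byte[] message) {
        if (message == null) {
            return null;
        }
        try {
            return parse(message); // the real code calls objectReader.readValue
        } catch (Throwable t) {
            if (ignoreParseErrors) {
                LOG.warning(
                        "Skipped record that failed to deserialize: "
                                + new String(message, StandardCharsets.UTF_8));
                return null;
            }
            throw new RuntimeException("Failed to deserialize row.", t);
        }
    }

    // Placeholder parser that always fails, to exercise the error path.
    private Object parse(byte[] message) {
        throw new IllegalStateException("parse failure");
    }
}
```

With such a log line, the 998-of-1000 scenario described above becomes directly debuggable from the task logs.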





[jira] [Comment Edited] (FLINK-23237) Add log to print data that failed to deserialize when ignore-parse-error=true

2021-07-07 Thread hehuiyuan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377059#comment-17377059
 ] 

hehuiyuan edited comment on FLINK-23237 at 7/8/21, 5:46 AM:


Hi [~Aiden Gong] [~jark], additionally, adding a log (debug or error) to print the data 
that failed to deserialize when `ignore-parse-error` = `true` would be useful. Could this 
be considered?

For example,

if the source has 1000 records but the sink only receives 998, we could easily 
troubleshoot and find the bad records in the log.


was (Author: hehuiyuan):
Hi  [~Aiden Gong] , additionally, add log (debug or error) to print error data 
that failed to deserialize when set `ignore-parse-error` = `true`,  which is 
useful . Is it considered to do?

For example,

if the source has 1000 records, then the number of sink is 998. We can 
troubleshoot easily and find the error data from log.

> Add log to print data that failed to deserialize when  
> ignore-parse-error=true 
> ---
>
> Key: FLINK-23237
> URL: https://issues.apache.org/jira/browse/FLINK-23237
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: hehuiyuan
>Priority: Minor
>
> Add log to print error data that failed to deserialize when set 
> `ignore-parse-error` = `true`
>  
> {code:java}
> public RowData deserialize(@Nullable byte[] message) throws IOException {
>     if (message == null) {
>         return null;
>     }
>     try {
>         final JsonNode root = objectReader.readValue(message);
>         return (RowData) runtimeConverter.convert(root);
>     } catch (Throwable t) {
>         if (ignoreParseErrors) {
>             return null;
>         }
>         throw new IOException(
>                 String.format("Failed to deserialize CSV row '%s'.", new String(message)), t);
>     }
> }
> {code}





[jira] [Comment Edited] (FLINK-23237) Add log to print data that failed to deserialize when ignore-parse-error=true

2021-07-07 Thread hehuiyuan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377059#comment-17377059
 ] 

hehuiyuan edited comment on FLINK-23237 at 7/8/21, 5:35 AM:


Hi [~Aiden Gong], additionally, adding a log (debug or error) to print the data that 
failed to deserialize when `ignore-parse-error` = `true` would be useful. Could this be 
considered?

For example,

if the source has 1000 records but the sink only receives 998, we could easily 
troubleshoot and find the bad records in the log.


was (Author: hehuiyuan):
[~Aiden Gong] , Additionally, add log (debug or error) to print error data that 
failed to deserialize when set `ignore-parse-error` = `true`,  which is useful .

Is it considered to do?

> Add log to print data that failed to deserialize when  
> ignore-parse-error=true 
> ---
>
> Key: FLINK-23237
> URL: https://issues.apache.org/jira/browse/FLINK-23237
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: hehuiyuan
>Priority: Minor
>
> Add log to print error data that failed to deserialize when set 
> `ignore-parse-error` = `true`
>  
> {code:java}
> public RowData deserialize(@Nullable byte[] message) throws IOException {
>     if (message == null) {
>         return null;
>     }
>     try {
>         final JsonNode root = objectReader.readValue(message);
>         return (RowData) runtimeConverter.convert(root);
>     } catch (Throwable t) {
>         if (ignoreParseErrors) {
>             return null;
>         }
>         throw new IOException(
>                 String.format("Failed to deserialize CSV row '%s'.", new String(message)), t);
>     }
> }
> {code}





[GitHub] [flink] LongWangXX commented on pull request #16417: FLINK-22969

2021-07-07 Thread GitBox


LongWangXX commented on pull request #16417:
URL: https://github.com/apache/flink/pull/16417#issuecomment-876141971


   @luoyuxia Thank you for your review and your hard work. I'm very sorry there are so 
many issues; let me fix them.






[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2021-07-07 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377072#comment-17377072
 ] 

Zhenqiu Huang commented on FLINK-14055:
---

[~gelimusong]
Please see the doc I shared. Yes, the URLClassLoader is used in the table 
environment for code generation. Besides this, we need to ship the UDF resources 
to the cluster and put these UDFs on the classpath of the TaskManager, so that the TM can 
use the regular UserCodeClassLoader to access them. But for different 
deployment modes, we need to consider some deployment details.

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Major
>  Labels: auto-unassigned, sprint
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}





[jira] [Commented] (FLINK-23237) Add log to print data that failed to deserialize when ignore-parse-error=true

2021-07-07 Thread hehuiyuan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377059#comment-17377059
 ] 

hehuiyuan commented on FLINK-23237:
---

[~Aiden Gong], additionally, adding a log (debug or error) to print the data that failed 
to deserialize when `ignore-parse-error` = `true` would be useful.

Could this be considered?

> Add log to print data that failed to deserialize when  
> ignore-parse-error=true 
> ---
>
> Key: FLINK-23237
> URL: https://issues.apache.org/jira/browse/FLINK-23237
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: hehuiyuan
>Priority: Minor
>
> Add log to print error data that failed to deserialize when set 
> `ignore-parse-error` = `true`
>  
> {code:java}
> public RowData deserialize(@Nullable byte[] message) throws IOException {
>     if (message == null) {
>         return null;
>     }
>     try {
>         final JsonNode root = objectReader.readValue(message);
>         return (RowData) runtimeConverter.convert(root);
>     } catch (Throwable t) {
>         if (ignoreParseErrors) {
>             return null;
>         }
>         throw new IOException(
>                 String.format("Failed to deserialize CSV row '%s'.", new String(message)), t);
>     }
> }
> {code}





[jira] [Assigned] (FLINK-23107) Separate deduplicate rank from rank functions

2021-07-07 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-23107:


Assignee: Shuo Cheng

> Separate deduplicate rank from rank functions
> -
>
> Key: FLINK-23107
> URL: https://issues.apache.org/jira/browse/FLINK-23107
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Assignee: Shuo Cheng
>Priority: Major
> Fix For: 1.14.0
>
>
> SELECT * FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY d ORDER BY e DESC) 
> AS rownum from T) WHERE rownum=1
> Actually, the above SQL is a deduplicate rank instead of a normal rank. We should 
> separate the implementation to optimize the deduplicate rank and reduce bugs.





[jira] [Commented] (FLINK-23107) Separate deduplicate rank from rank functions

2021-07-07 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377058#comment-17377058
 ] 

Jingsong Lee commented on FLINK-23107:
--

[~icshuo] Thanks, assigned to you~

> Separate deduplicate rank from rank functions
> -
>
> Key: FLINK-23107
> URL: https://issues.apache.org/jira/browse/FLINK-23107
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Assignee: Shuo Cheng
>Priority: Major
> Fix For: 1.14.0
>
>
> SELECT * FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY d ORDER BY e DESC) 
> AS rownum from T) WHERE rownum=1
> Actually, the above SQL is a deduplicate rank instead of a normal rank. We should 
> separate the implementation to optimize the deduplicate rank and reduce bugs.





[GitHub] [flink] flinkbot edited a comment on pull request #16415: [FLINK-23262][tests] Check watermark frequency instead of count

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16415:
URL: https://github.com/apache/flink/pull/16415#issuecomment-875773502


   
   ## CI report:
   
   * 251329bfc74f78dc4f0da29b76d10f1f952c9964 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20125)
 
   
   




[GitHub] [flink-statefun] tzulitai commented on pull request #241: [FLINK-21308] Support delayed message cancellation

2021-07-07 Thread GitBox


tzulitai commented on pull request #241:
URL: https://github.com/apache/flink-statefun/pull/241#issuecomment-876126960


   Thanks @igalshilman! Merging this!






[GitHub] [flink] fhan688 commented on pull request #15965: [FLINK-23298][datagen] Normalize parameter names in RandomGeneratorVisitor and SequenceGeneratorVisitor

2021-07-07 Thread GitBox


fhan688 commented on pull request #15965:
URL: https://github.com/apache/flink/pull/15965#issuecomment-876126254


   Sorry, I wonder why this PR was closed rather than merged...  @lirui-apache 






[GitHub] [flink] flinkbot edited a comment on pull request #16403: [FLINK-23276][state/changelog] Fix missing delegation in getPartitionedState

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16403:
URL: https://github.com/apache/flink/pull/16403#issuecomment-875102625


   
   ## CI report:
   
   * 1d1368d281acdb14f47adc7ec19078d7e21dda5f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20126)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20041)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #16418: [FLINK-23243][docs-zh]Translate "SELECT & WHERE clause" page into Chinese

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16418:
URL: https://github.com/apache/flink/pull/16418#issuecomment-876101573


   
   ## CI report:
   
   * 9ceb4bc080af46d70ac5683267efa32c6810bcef Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20137)
 
   
   




[jira] [Updated] (FLINK-23304) More detail information in sql validate exception

2021-07-07 Thread YING HOU (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YING HOU updated FLINK-23304:
-
Description: Currently, when I use many instances of the same UDF in a SQL statement, I 
can't locate where the semantic error occurs if some UDF is used in a wrong way. So I 
try to include more detailed information, such as the position and SQL context, in the 
function that creates the exception.

> More detail information in sql validate exception
> -
>
> Key: FLINK-23304
> URL: https://issues.apache.org/jira/browse/FLINK-23304
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.13.0, 1.13.1
>Reporter: YING HOU
>Priority: Minor
> Fix For: 1.13.0
>
>
> Currently, when I use many instances of the same UDF in a SQL statement, I can't locate 
> where the semantic error occurs if some UDF is used in a wrong way. So I try to 
> include more detailed information, such as the position and SQL context, in the 
> function that creates the exception.





[jira] [Resolved] (FLINK-23178) Raise an error for writing stream data into partitioned hive tables without a partition committer

2021-07-07 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li resolved FLINK-23178.

Fix Version/s: 1.13.2
   1.14.0
   Resolution: Fixed

Pushed to master: a94745ec85bf9e8ca3bc2fced5c1a466b836e0be
Pushed to release-1.13: f14307b869e1e9b518784276311ce6afb112312d

> Raise an error for writing stream data into partitioned hive tables without a 
> partition committer
> -
>
> Key: FLINK-23178
> URL: https://issues.apache.org/jira/browse/FLINK-23178
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.13.2
>
>
> If a user writes streaming data into hive but forgets to specify a partition 
> commit policy, the job will run successfully but the data is never committed to 
> the hive metastore.
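As background, the missing piece the issue describes is a partition commit policy on the sink table. A sketch of the relevant DDL (property names follow the Flink Hive connector documentation; the exact trigger and policy values depend on the Flink version and the table's time semantics):

```sql
-- Partitioned Hive sink with an explicit partition commit policy, so that
-- partitions written by a streaming job are registered in the metastore.
SET table.sql-dialect=hive;
CREATE TABLE hive_sink (
  user_id STRING,
  amount DOUBLE
) PARTITIONED BY (dt STRING) TBLPROPERTIES (
  'sink.partition-commit.trigger' = 'partition-time',
  'sink.partition-commit.policy.kind' = 'metastore,success-file'
);
```

Without `sink.partition-commit.policy.kind`, data files land on the filesystem but the metastore never learns about the new partitions, which is exactly the silent-failure mode this issue turns into an error.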





[GitHub] [flink] flinkbot commented on pull request #16418: [FLINK-23243][docs-zh]Translate "SELECT & WHERE clause" page into Chinese

2021-07-07 Thread GitBox


flinkbot commented on pull request #16418:
URL: https://github.com/apache/flink/pull/16418#issuecomment-876101573


   
   ## CI report:
   
   * 9ceb4bc080af46d70ac5683267efa32c6810bcef UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #16417: FLINK-22969

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16417:
URL: https://github.com/apache/flink/pull/16417#issuecomment-876088562


   
   ## CI report:
   
   * e8e280ccc9d30446c0280ecf8b8028b2cd25206f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20135)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #16395: [FLINK-22662][yarn][test] Stabilize YARNHighAvailabilityITCase

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16395:
URL: https://github.com/apache/flink/pull/16395#issuecomment-874665804


   
   ## CI report:
   
   * 69f1bc393f0ed01e64fc6dad942bcfe7d60b9e43 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20044)
 
   * 3238d136ab650d4bc509b340f5ad515fe07ec2b7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20134)
 
   
   




[jira] [Created] (FLINK-23304) More detail information in sql validate exception

2021-07-07 Thread YING HOU (Jira)
YING HOU created FLINK-23304:


 Summary: More detail information in sql validate exception
 Key: FLINK-23304
 URL: https://issues.apache.org/jira/browse/FLINK-23304
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API, Table SQL / Planner
Affects Versions: 1.13.1, 1.13.0
Reporter: YING HOU
 Fix For: 1.13.0








[GitHub] [flink] luoyuxia commented on a change in pull request #16417: FLINK-22969

2021-07-07 Thread GitBox


luoyuxia commented on a change in pull request #16417:
URL: https://github.com/apache/flink/pull/16417#discussion_r665846147



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/KafkaTopicsDescriptor.java
##
@@ -41,17 +42,25 @@
 private final Pattern topicPattern;
 
 public KafkaTopicsDescriptor(
-@Nullable List<String> fixedTopics, @Nullable Pattern topicPattern) {
+@Nullable List<String> fixedTopics, @Nullable Pattern topicPattern) {

Review comment:
   The code format is not right. Have you followed the guide at 
   https://ci.apache.org/projects/flink/flink-docs-master/docs/flinkdev/ide_setup/#code-formatting ?
   We need a consistent code format.








[GitHub] [flink] luoyuxia commented on a change in pull request #16417: FLINK-22969

2021-07-07 Thread GitBox


luoyuxia commented on a change in pull request #16417:
URL: https://github.com/apache/flink/pull/16417#discussion_r665844774



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/KafkaTopicsDescriptor.java
##
@@ -41,17 +42,25 @@
 private final Pattern topicPattern;
 
 public KafkaTopicsDescriptor(
-@Nullable List<String> fixedTopics, @Nullable Pattern topicPattern) {
+@Nullable List<String> fixedTopics, @Nullable Pattern topicPattern) {
 checkArgument(
-(fixedTopics != null && topicPattern == null)
-|| (fixedTopics == null && topicPattern != null),
-"Exactly one of either fixedTopics or topicPattern must be specified.");
+(fixedTopics != null && topicPattern == null)
+|| (fixedTopics == null && topicPattern != null),
+"Exactly one of either fixedTopics or topicPattern must be specified.");
 
 if (fixedTopics != null) {
 checkArgument(
-!fixedTopics.isEmpty(),
-"If subscribing to a fixed topics list, the supplied list cannot be empty.");
+!fixedTopics.isEmpty(),
+"If subscribing to a fixed topics list, the supplied list cannot be empty.");
+fixedTopics.forEach(topic -> {
+checkNotNull(topic, "An null topic exists in the subscribed topics list.");
+checkArgument(!"".equals(topic), "An empty topic exists in the subscribed topics list.");
+});
 }
+if (topicPattern != null) {

Review comment:
   I think it's fine if topicPattern is empty.
   So we don't need to check whether topicPattern is empty or not.
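
   For context, the constructor being reviewed enforces that exactly one of `fixedTopics` and `topicPattern` is non-null. Below is a minimal, self-contained sketch of that exactly-one-of check; the `TopicsSpec` class is illustrative only, not Flink's actual `KafkaTopicsDescriptor`:

```java
import java.util.List;
import java.util.regex.Pattern;

public class TopicsSpec {
    private final List<String> fixedTopics; // null when a pattern is used
    private final Pattern topicPattern;     // null when a fixed list is used

    public TopicsSpec(List<String> fixedTopics, Pattern topicPattern) {
        // Exactly one of the two must be non-null (XOR of the null checks).
        if ((fixedTopics == null) == (topicPattern == null)) {
            throw new IllegalArgumentException(
                    "Exactly one of either fixedTopics or topicPattern must be specified.");
        }
        this.fixedTopics = fixedTopics;
        this.topicPattern = topicPattern;
    }

    public boolean isFixedList() {
        return fixedTopics != null;
    }
}
```

   Passing both or neither argument fails fast with the same message the diff above uses.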

##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java
##
@@ -647,8 +646,8 @@ private FlinkKafkaProducer(
 new FlinkKafkaProducer.TransactionStateSerializer(),
 new FlinkKafkaProducer.ContextStateSerializer());
 
+checkArgument(!"".equals(defaultTopic),"defaultTopic is empty");

Review comment:
   maybe we can use
   '
   checkArgument(
       !org.apache.flink.util.StringUtils.isNullOrWhitespaceOnly(defaultTopic),
       "defaultTopic cannot be null or empty string");
   '
   to check

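   The suggested check rejects null, empty, and whitespace-only strings in one call. A self-contained sketch of that guard follows; the local `isNullOrWhitespaceOnly` is a stand-in re-implementation of what `org.apache.flink.util.StringUtils.isNullOrWhitespaceOnly` does, so the example has no Flink dependency:

```java
public class TopicValidation {
    /** Returns true if the string is null, empty, or contains only whitespace. */
    static boolean isNullOrWhitespaceOnly(String s) {
        if (s == null || s.isEmpty()) {
            return true;
        }
        for (int i = 0; i < s.length(); i++) {
            if (!Character.isWhitespace(s.charAt(i))) {
                return false;
            }
        }
        return true;
    }

    /** The suggested guard for defaultTopic, as a plain exception throw. */
    static void checkDefaultTopic(String defaultTopic) {
        if (isNullOrWhitespaceOnly(defaultTopic)) {
            throw new IllegalArgumentException(
                    "defaultTopic cannot be null or empty string");
        }
    }
}
```

   Unlike `!"".equals(defaultTopic)`, this also catches null and strings such as " ".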
##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/KafkaTopicsDescriptor.java
##
@@ -85,7 +94,7 @@ public boolean isMatchingTopic(String topic) {
 @Override
 public String toString() {
 return (fixedTopics == null)
-? "Topic Regex Pattern (" + topicPattern.pattern() + ")"
-: "Fixed Topics (" + fixedTopics + ")";
+? "Topic Regex Pattern (" + topicPattern.pattern() + ")"

Review comment:
   The code format is not right.

##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/KafkaTopicsDescriptor.java
##
@@ -41,17 +42,25 @@
 private final Pattern topicPattern;
 
 public KafkaTopicsDescriptor(
-@Nullable List<String> fixedTopics, @Nullable Pattern topicPattern) {
+@Nullable List<String> fixedTopics, @Nullable Pattern topicPattern) {
 checkArgument(
-(fixedTopics != null && topicPattern == null)
-|| (fixedTopics == null && topicPattern != null),
-"Exactly one of either fixedTopics or topicPattern must be specified.");
+(fixedTopics != null && topicPattern == null)
+|| (fixedTopics == null && topicPattern != null),
+"Exactly one of either fixedTopics or topicPattern must be specified.");
 
 if (fixedTopics != null) {
 checkArgument(
-!fixedTopics.isEmpty(),
-"If subscribing to a fixed topics list, the supplied list cannot be empty.");
+!fixedTopics.isEmpty(),
+"If subscribing to a fixed topics list, the supplied list cannot be empty.");
+fixedTopics.forEach(topic -> {
+checkNotNull(topic, "An null topic exists in the subscribed topics list.");

Review comment:
   Here, we can use
   '
   checkArgument(
       !org.apache.flink.util.StringUtils.isNullOrWhitespaceOnly(topic),
       "topic in the subscribed topics list cannot be null or empty string.");
   '
   to check

##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/KafkaTopicsDescriptor.java
##
@@ -41,17 +42,25 @@
 private final Pattern topicPattern;
 
 public KafkaTopicsDescriptor(
-@Nullable List<String> fixedTopics, @Nullable Pattern topicPattern) {
+@Nullable List 

[jira] [Comment Edited] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2021-07-07 Thread XiaYu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376986#comment-17376986
 ] 

XiaYu edited comment on FLINK-14055 at 7/8/21, 3:37 AM:


Hi everyone, I am a little bit confused about your classloader.

What is the difference between it and Flink's UserCodeClassLoader, by which I
mean the classloader I can access via RuntimeContext#getUserCodeClassLoader?

How does it work on the server?


was (Author: gelimusong):
hi, everyone, iam a little bit confused about your classloader,

what is the difference between it and Flink UserCodeClassLoader? which i mean 
the classloader which i can access by RuntimeContext#

getUserCodeClassLoader

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Major
>  Labels: auto-unassigned, sprint
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22198) KafkaTableITCase hang.

2021-07-07 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377006#comment-17377006
 ] 

Xintong Song commented on FLINK-22198:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20122=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=e8fcc430-213e-5cce-59d4-6942acf09121=6551

> KafkaTableITCase hang.
> --
>
> Key: FLINK-22198
> URL: https://issues.apache.org/jira/browse/FLINK-22198
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.4
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16287=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6625
> There are no artifacts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23107) Separate deduplicate rank from rank functions

2021-07-07 Thread Shuo Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377003#comment-17377003
 ] 

Shuo Cheng commented on FLINK-23107:


cc [~lzljs3620320], I'll take this Jira.

> Separate deduplicate rank from rank functions
> -
>
> Key: FLINK-23107
> URL: https://issues.apache.org/jira/browse/FLINK-23107
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.14.0
>
>
> SELECT * FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY d ORDER BY e DESC) 
> AS rownum from T) WHERE rownum=1
> Actually, the above SQL is a deduplicate rank rather than a normal rank. We should 
> separate the implementations to optimize the deduplicate rank and reduce bugs.
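
The query keeps, for each partition key `d`, only the row with the greatest `e` (the row whose `ROW_NUMBER()` under `ORDER BY e DESC` is 1). A hedged Java sketch of that deduplicate semantics over an in-memory list — the `Row` type and method names are illustrative, not Flink runtime code:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DeduplicateRank {
    /** A row with a partition key d and an ordering column e. */
    record Row(String d, long e) {}

    /** Keeps only the row with the largest e per key d, i.e. rownum = 1 under ORDER BY e DESC. */
    static Map<String, Row> deduplicate(List<Row> rows) {
        Map<String, Row> latest = new LinkedHashMap<>();
        for (Row r : rows) {
            // On a key collision, retain whichever row has the larger e.
            latest.merge(r.d(), r, (old, cur) -> cur.e() > old.e() ? cur : old);
        }
        return latest;
    }
}
```

Because only one row per key ever needs to be retained, a dedicated deduplicate operator can keep a single value per key instead of the per-key top-N state a general rank operator maintains.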



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] edmondsky commented on pull request #16418: [FLINK-23243][docs-zh]Translate "SELECT & WHERE clause" page into Chinese

2021-07-07 Thread GitBox


edmondsky commented on pull request #16418:
URL: https://github.com/apache/flink/pull/16418#issuecomment-876092378


   @95chenjz
   Ready to review


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #16418: [FLINK-23243][docs-zh]Translate "SELECT & WHERE clause" page into Chinese

2021-07-07 Thread GitBox


flinkbot commented on pull request #16418:
URL: https://github.com/apache/flink/pull/16418#issuecomment-876091446


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9ceb4bc080af46d70ac5683267efa32c6810bcef (Thu Jul 08 
03:20:14 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-23300) Job fails very slow because of no notifyAllocationFailure for DeclarativeSlotManager

2021-07-07 Thread Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu closed FLINK-23300.
---
Resolution: Duplicate

> Job fails very slow because of no notifyAllocationFailure for 
> DeclarativeSlotManager
> 
>
> Key: FLINK-23300
> URL: https://issues.apache.org/jira/browse/FLINK-23300
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.13.1
>Reporter: Liu
>Priority: Major
>
> When a container is killed, Flink on YARN can detect the problem very quickly. 
> But when using the default DeclarativeSlotManager, notifyAllocationFailure is not 
> called and the task does not fail until the heartbeat times out, so the failover 
> is very slow. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23194) Cache and reuse the ContainerLaunchContext and accelarate the progress of createTaskExecutorLaunchContext on yarn

2021-07-07 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377001#comment-17377001
 ] 

Yang Wang commented on FLINK-23194:
---

AFAIK, except for the keytab and Kerberos file, we will not access HDFS 
while creating {{ContainerLaunchContext}}. Right? Because we already encode the 
Yarn local resources to a string in the {{YarnClusterDescriptor}} and decode it 
when creating {{ContainerLaunchContext}}.

 

Moreover, TaskManager might have different resource specs or JVM parameters in 
the future, then caching the {{ContainerLaunchContext}} will not make sense.

> Cache and reuse the ContainerLaunchContext and accelarate the progress of 
> createTaskExecutorLaunchContext on yarn
> -
>
> Key: FLINK-23194
> URL: https://issues.apache.org/jira/browse/FLINK-23194
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.13.1, 1.12.4
>Reporter: zlzhang0122
>Priority: Major
> Fix For: 1.14.0
>
>
> When starting TaskExecutors in containers on YARN, Flink creates a 
> ContainerLaunchContext n times (n being the number of TaskManagers).
> When I examined this creation process, I found that most of it is 
> common and has nothing to do with the particular TaskManager except the 
> launchCommand. We can create the ContainerLaunchContext once and reuse it; only 
> the launchCommand needs to be created separately for each TaskManager.
> So I propose that we cache and reuse the ContainerLaunchContext object to 
> accelerate this creation process. 
> I think this can have some benefits like below:
>  # it can accelerate the creation of the ContainerLaunchContext and thus the 
> start of the TaskExecutor, especially with a massive number of TaskManagers.
>  # it can decrease the pressure on HDFS, etc. 
>  # it can also avoid sudden failures of HDFS or YARN, etc.
> We have implemented this in our production environment. So far there have been 
> no problems and it has brought a good benefit. Please let me know if there's any point that 
> I haven't considered.
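
The proposal amounts to computing the TaskManager-independent part of the launch context once and deriving only the per-TaskManager launch command. A minimal memoization sketch under that assumption — the `LaunchContext` record here simulates the expensive object and is not the actual YARN `ContainerLaunchContext` API:

```java
import java.util.function.Supplier;

public class LaunchContextCache {
    /** Simulated launch context: a shared template plus a per-TM launch command. */
    record LaunchContext(String template, String command) {}

    private final Supplier<String> templateFactory;
    private String cachedTemplate; // built once, reused for every TaskManager

    LaunchContextCache(Supplier<String> templateFactory) {
        this.templateFactory = templateFactory;
    }

    /** Builds the expensive common template only on the first call; only the command varies. */
    LaunchContext create(String launchCommand) {
        if (cachedTemplate == null) {
            cachedTemplate = templateFactory.get();
        }
        return new LaunchContext(cachedTemplate, launchCommand);
    }
}
```

This is exactly the trade-off Yang Wang raises above: if future TaskManagers have different resource specs or JVM parameters, the template is no longer common and the cache key would have to include the spec.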



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23243) Translate "SELECT & WHERE clause" page into Chinese

2021-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-23243:
---
Labels: pull-request-available  (was: )

> Translate "SELECT & WHERE clause" page into Chinese
> ---
>
> Key: FLINK-23243
> URL: https://issues.apache.org/jira/browse/FLINK-23243
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.13.0
>Reporter: Edmond Wang
>Assignee: Edmond Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.13/zh/docs/dev/table/sql/queries/select/]
> The markdown file is located in *docs/content.zh/docs/dev/table/sql/select.md*
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] edmondsky opened a new pull request #16418: [FLINK-23243][docs-zh]Translate "SELECT & WHERE clause" page into Chinese

2021-07-07 Thread GitBox


edmondsky opened a new pull request #16418:
URL: https://github.com/apache/flink/pull/16418


   ## What is the purpose of the change
   
   Translate "SELECT & WHERE clause" page into Chinese
   
   The page url is 
https://ci.apache.org/projects/flink/flink-docs-release-1.13/zh/docs/dev/table/sql/queries/select/
   
   The markdown file is located in docs/content.zh/docs/dev/table/sql/select.md
   
   
   ## Brief change log
   
 - *Translate 'flink/docs/content.zh/docs/dev/table/sql/select.md'.*
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:  no
 - The serializers:  no
 - The runtime per-record code paths (performance sensitive):  no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper:  no
 - The S3 file system connector:  no
   
   ## Documentation
   
 - Does this pull request introduce a new feature?  no
 - If yes, how is the feature documented? not documented


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lirui-apache closed pull request #16370: [FLINK-23178][hive] Raise an error for writing stream data into parti…

2021-07-07 Thread GitBox


lirui-apache closed pull request #16370:
URL: https://github.com/apache/flink/pull/16370


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23268) [DOCS]The link on page docs/dev/table/sql/queries/match_recognize/ is failed and 404 is returned

2021-07-07 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376998#comment-17376998
 ] 

Yun Tang commented on FLINK-23268:
--

[~hapihu], already assigned this ticket to you.

> [DOCS]The link on page docs/dev/table/sql/queries/match_recognize/ is failed 
> and 404 is returned
> 
>
> Key: FLINK-23268
> URL: https://issues.apache.org/jira/browse/FLINK-23268
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Reporter: wuguihu
>Assignee: wuguihu
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-20210706134442433.png
>
>
> Some link information on the page is incorrectly written, resulting in a 404 
> page
> The page url 
> :[https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/queries/match_recognize/]
> The markdown 
> file:[https://github.com/apache/flink/blob/master/docs/content/docs/dev/table/sql/queries/match_recognize.md]
> When I click on the link `[append 
> table](dynamic_tables.html#update-and-append-queries)`, I get a 404 page.
> The corresponding address is 
> :[https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/queries/match_recognize/dynamic_tables.html#update-and-append-queries]
> Refer to document 
> [https://github.com/apache/flink/blob/master/docs/content.zh/docs/dev/table/sql/queries/match_recognize.md]
>  for the correct link information
>  
> The link shown below returns 404
> {code:java}
> //1 
>  [append table](dynamic_tables.html#update-and-append-queries)
> //2 
>  [processing time or event time](time_attributes.html)
> //3
>  [time attributes](time_attributes.html)
> //4
>  rowtime attribute
> //5
>  proctime attribute
> //6
>  [state retention time](query_configuration.html#idle-state-retention-time)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-23268) [DOCS]The link on page docs/dev/table/sql/queries/match_recognize/ is failed and 404 is returned

2021-07-07 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang reassigned FLINK-23268:


Assignee: wuguihu

> [DOCS]The link on page docs/dev/table/sql/queries/match_recognize/ is failed 
> and 404 is returned
> 
>
> Key: FLINK-23268
> URL: https://issues.apache.org/jira/browse/FLINK-23268
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Reporter: wuguihu
>Assignee: wuguihu
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-20210706134442433.png
>
>
> Some link information on the page is incorrectly written, resulting in a 404 
> page
> The page url 
> :[https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/queries/match_recognize/]
> The markdown 
> file:[https://github.com/apache/flink/blob/master/docs/content/docs/dev/table/sql/queries/match_recognize.md]
> When I click on the link `[append 
> table](dynamic_tables.html#update-and-append-queries)`, I get a 404 page.
> The corresponding address is 
> :[https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/queries/match_recognize/dynamic_tables.html#update-and-append-queries]
> Refer to document 
> [https://github.com/apache/flink/blob/master/docs/content.zh/docs/dev/table/sql/queries/match_recognize.md]
>  for the correct link information
>  
> The link shown below returns 404
> {code:java}
> //1 
>  [append table](dynamic_tables.html#update-and-append-queries)
> //2 
>  [processing time or event time](time_attributes.html)
> //3
>  [time attributes](time_attributes.html)
> //4
>  rowtime attribute
> //5
>  proctime attribute
> //6
>  [state retention time](query_configuration.html#idle-state-retention-time)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-14193) Update Web UI for fine grained TM/Slot resources

2021-07-07 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reassigned FLINK-14193:


Assignee: Junhan Yang

> Update Web UI for fine grained TM/Slot resources
> 
>
> Key: FLINK-14193
> URL: https://issues.apache.org/jira/browse/FLINK-14193
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / Web Frontend
>Reporter: Xintong Song
>Assignee: Junhan Yang
>Priority: Major
>
> * Update RestAPI / WebUI to properly display information of available 
> resources and allocated slots of task executors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #16417: FLINK-22969

2021-07-07 Thread GitBox


flinkbot commented on pull request #16417:
URL: https://github.com/apache/flink/pull/16417#issuecomment-876088562


   
   ## CI report:
   
   * e8e280ccc9d30446c0280ecf8b8028b2cd25206f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16395: [FLINK-22662][yarn][test] Stabilize YARNHighAvailabilityITCase

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16395:
URL: https://github.com/apache/flink/pull/16395#issuecomment-874665804


   
   ## CI report:
   
   * 69f1bc393f0ed01e64fc6dad942bcfe7d60b9e43 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20044)
 
   * 3238d136ab650d4bc509b340f5ad515fe07ec2b7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16349: [FLINK-23267][table-planner-blink] Enable Java code splitting for all generated classes

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16349:
URL: https://github.com/apache/flink/pull/16349#issuecomment-872790415


   
   ## CI report:
   
   * 94974286af4271059af6bba15d7128085f27cba6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20085)
 
   * 3936c76b6680245fe4184499db8656ffa00cb70b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20130)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Myasuka commented on pull request #16352: [FLINK-23102][runtime] Accessing FlameGraphs while not being enabled …

2021-07-07 Thread GitBox


Myasuka commented on pull request #16352:
URL: https://github.com/apache/flink/pull/16352#issuecomment-876087473


   Do we consider FLINK-22527 to provide a more friendly hint telling users how 
to enable the flame graph?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-23222) Translate page 'Application Profiling & Debugging' of 'Operations/Debugging' into Chinese

2021-07-07 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-23222.
---
Fix Version/s: 1.14.0
 Assignee: pierrexiong
   Resolution: Fixed

Fixed in master: f0fdf875606ed3ac063646a136bbfddf4775ff9d

> Translate page 'Application Profiling & Debugging' of 'Operations/Debugging' 
> into Chinese
> -
>
> Key: FLINK-23222
> URL: https://issues.apache.org/jira/browse/FLINK-23222
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.13.1
>Reporter: pierrexiong
>Assignee: pierrexiong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> * The markdown file location: 
> flink/docs/content.zh/docs/ops/debugging/application_profiling.md
>  * The page url is: 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/ops/debugging/application_profiling]
>  * Related issue: 
> https://issues.apache.org/jira/browse/FLINK-19036?jql=project%20%3D%20FLINK%20AND%20component%20%3D%20chinese-translation%20AND%20text%20~%20%22Application%20Profiling%22



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #16359: [FLINK-23222]Translate page 'Application Profiling & Debugging' into Chinese

2021-07-07 Thread GitBox


wuchong merged pull request #16359:
URL: https://github.com/apache/flink/pull/16359


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #16359: [FLINK-23222]Translate page 'Application Profiling & Debugging' into Chinese

2021-07-07 Thread GitBox


wuchong commented on pull request #16359:
URL: https://github.com/apache/flink/pull/16359#issuecomment-876086560


   Thanks @pierre94 for the great work and @RocMarshal  for the reviewing. 
   Merging...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2021-07-07 Thread XiaYu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376986#comment-17376986
 ] 

XiaYu commented on FLINK-14055:
---

Hi everyone, I am a little bit confused about your classloader.

What is the difference between it and Flink's UserCodeClassLoader, by which I
mean the classloader I can access via RuntimeContext#getUserCodeClassLoader?

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Major
>  Labels: auto-unassigned, sprint
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23300) Job fails very slow because of no notifyAllocationFailure for DeclarativeSlotManager

2021-07-07 Thread Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376985#comment-17376985
 ] 

Liu commented on FLINK-23300:
-

Thanks, [~Thesharing]. I will focus on the issues you mentioned.

> Job fails very slow because of no notifyAllocationFailure for 
> DeclarativeSlotManager
> 
>
> Key: FLINK-23300
> URL: https://issues.apache.org/jira/browse/FLINK-23300
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.13.1
>Reporter: Liu
>Priority: Major
>
> When a container is killed, Flink on YARN can detect the problem very quickly. 
> But when using the default DeclarativeSlotManager, notifyAllocationFailure is not 
> called and the task does not fail until the heartbeat times out, so the failover 
> is very slow. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-23298) [datagen] Normalize parameter names in RandomGeneratorVisitor and SequenceGeneratorVisitor

2021-07-07 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li resolved FLINK-23298.

Fix Version/s: 1.14.0
   Resolution: Fixed

Fixed in master: e9cc49a31fc384bff1c84372788f14a173cd94b0

> [datagen] Normalize parameter names in RandomGeneratorVisitor and 
> SequenceGeneratorVisitor
> --
>
> Key: FLINK-23298
> URL: https://issues.apache.org/jira/browse/FLINK-23298
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Ecosystem
>Reporter: fhan
>Assignee: fhan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> This PR normalized parameter names in RandomGeneratorVisitor and 
> SequenceGeneratorVisitor.
> related methods:
> [!https://user-images.githubusercontent.com/5745228/118935994-b47c6080-b97e-11eb-86ef-43c191f602fd.jpg|width=416,height=287!|https://user-images.githubusercontent.com/5745228/118935994-b47c6080-b97e-11eb-86ef-43c191f602fd.jpg]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache closed pull request #15965: [FLINK-23298][datagen] Normalize parameter names in RandomGeneratorVisitor and SequenceGeneratorVisitor

2021-07-07 Thread GitBox


lirui-apache closed pull request #15965:
URL: https://github.com/apache/flink/pull/15965


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23169) Support user-level app staging directory when yarn.staging-directory is specified

2021-07-07 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376979#comment-17376979
 ] 

Yang Wang commented on FLINK-23169:
---

Thanks [~hackergin] for proposing this ticket. Let me try to understand your 
use case.

 

By default, Flink will create the staging directory in the user home 
directory (e.g. /user/admin/.flink). I remember some users wanted to customize the 
staging directory, which is shared by all users, so we introduced the config 
option {{yarn.staging-directory}}.

 

I am not sure why you are not using the default user home directory. Moreover, 
the user home directory could also be configured via {{dfs.user.home.base.dir}} 
in hdfs-site.xml.

> Support user-level app staging directory when yarn.staging-directory is 
> specified
> -
>
> Key: FLINK-23169
> URL: https://issues.apache.org/jira/browse/FLINK-23169
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: jinfeng
>Priority: Major
>
> When yarn.staging-directory is specified, different users will use the same 
> directory as the staging directory. This may not be friendly for a job platform 
> that submits jobs on behalf of different users. I propose to use a user-level 
> directory by default when yarn.staging-directory is specified. We only need to 
> make a small change to the `getStagingDir` function in 
> YarnClusterDescriptor 
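
The proposed user-level layout can be sketched with a small standalone helper (hypothetical; the actual change would live in `getStagingDir` in YarnClusterDescriptor, and the method name below is made up for illustration):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class StagingDirSketch {
    // Hypothetical: when a shared staging root is configured, append the
    // submitting user's name so each user gets an isolated subdirectory.
    static Path userStagingDir(String configuredStagingRoot, String userName) {
        return Paths.get(configuredStagingRoot, userName);
    }

    public static void main(String[] args) {
        // Two different users submitting through the same platform end up
        // in separate directories under the shared staging root.
        System.out.println(userStagingDir("/flink/staging", "alice"));
        System.out.println(userStagingDir("/flink/staging", "bob"));
    }
}
```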





[jira] [Resolved] (FLINK-21804) Create and wire changelog writer with backend

2021-07-07 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang resolved FLINK-21804.
--
Resolution: Fixed

> Create and wire changelog writer with backend
> -
>
> Key: FLINK-21804
> URL: https://issues.apache.org/jira/browse/FLINK-21804
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
> Attachments: Changelog Backend _ writer loading.png
>
>
> [Proposed 
> design|https://docs.google.com/document/d/10c6hZsOVxzUjeCLPSDpKGyZOYHi73yd92lCqRs1CyUE/edit#heading=h.5b9hthjg53vl]
> !Changelog Backend _ writer loading.png|width=600!
> * Black arrows - existing references/creations
> * {color:red}Red{color} arrows - required references/creations
> * {color:#00875A}Green{color} arrows - proposed references/creations to 
> enable required ones





[jira] [Commented] (FLINK-21804) Create and wire changelog writer with backend

2021-07-07 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376978#comment-17376978
 ] 

Yun Tang commented on FLINK-21804:
--

Merged master: e1ea4d960b191565a191c74e455e324fbb529ff0

> Create and wire changelog writer with backend
> -
>
> Key: FLINK-21804
> URL: https://issues.apache.org/jira/browse/FLINK-21804
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
> Attachments: Changelog Backend _ writer loading.png
>
>
> [Proposed 
> design|https://docs.google.com/document/d/10c6hZsOVxzUjeCLPSDpKGyZOYHi73yd92lCqRs1CyUE/edit#heading=h.5b9hthjg53vl]
> !Changelog Backend _ writer loading.png|width=600!
> * Black arrows - existing references/creations
> * {color:red}Red{color} arrows - required references/creations
> * {color:#00875A}Green{color} arrows - proposed references/creations to 
> enable required ones





[GitHub] [flink] Myasuka merged pull request #16341: [FLINK-21804][state/changelog] Create and wire changelog storage with state backend

2021-07-07 Thread GitBox


Myasuka merged pull request #16341:
URL: https://github.com/apache/flink/pull/16341


   






[GitHub] [flink] flinkbot edited a comment on pull request #16416: [FLINK-22657][Connectors/Hive] HiveParserDDLSemanticAnalyzer return operations directly

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16416:
URL: https://github.com/apache/flink/pull/16416#issuecomment-876055217


   
   ## CI report:
   
   * 9c60fc2fed1f4140035abe32b0b9b7cd8cc55973 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20128)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16349: [FLINK-23267][table-planner-blink] Enable Java code splitting for all generated classes

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16349:
URL: https://github.com/apache/flink/pull/16349#issuecomment-872790415


   
   ## CI report:
   
   * 94974286af4271059af6bba15d7128085f27cba6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20085)
 
   * 3936c76b6680245fe4184499db8656ffa00cb70b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-07 Thread longwang0616 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376971#comment-17376971
 ] 

longwang0616 commented on FLINK-22969:
--

[~luoyuxia] Hello, I created a pull request, please review it.

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> "payload 2"),
> changelogRow(
> "+I",
>  

[jira] [Assigned] (FLINK-23298) [datagen] Normalize parameter names in RandomGeneratorVisitor and SequenceGeneratorVisitor

2021-07-07 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li reassigned FLINK-23298:
--

Assignee: fhan

> [datagen] Normalize parameter names in RandomGeneratorVisitor and 
> SequenceGeneratorVisitor
> --
>
> Key: FLINK-23298
> URL: https://issues.apache.org/jira/browse/FLINK-23298
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Ecosystem
>Reporter: fhan
>Assignee: fhan
>Priority: Minor
>  Labels: pull-request-available
>
> This PR normalized parameter names in RandomGeneratorVisitor and 
> SequenceGeneratorVisitor.
> related methods:
> [!https://user-images.githubusercontent.com/5745228/118935994-b47c6080-b97e-11eb-86ef-43c191f602fd.jpg|width=416,height=287!|https://user-images.githubusercontent.com/5745228/118935994-b47c6080-b97e-11eb-86ef-43c191f602fd.jpg]





[GitHub] [flink] wsry commented on a change in pull request #11877: [FLINK-16641][network] Announce sender's backlog to solve the deadlock issue without exclusive buffers

2021-07-07 Thread GitBox


wsry commented on a change in pull request #11877:
URL: https://github.com/apache/flink/pull/11877#discussion_r665830690



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/BoundedBlockingSubpartitionDirectTransferReader.java
##
@@ -91,10 +91,14 @@ public BufferAndBacklog getNextBuffer() throws IOException {
 
 updateStatistics(current);
 
-// We simply assume all the data are non-events for batch jobs to avoid pre-fetching the
-// next header
-Buffer.DataType nextDataType =
-numDataAndEventBuffers > 0 ? Buffer.DataType.DATA_BUFFER : Buffer.DataType.NONE;
+// We simply assume all the data except for the last one (EndOfPartitionEvent)
+// are non-events for batch jobs to avoid pre-fetching the next header
+Buffer.DataType nextDataType = Buffer.DataType.NONE;
+if (numDataBuffers > 0) {
+nextDataType = Buffer.DataType.DATA_BUFFER;
+} else if (numDataAndEventBuffers > 0) {
+nextDataType = Buffer.DataType.EVENT_BUFFER;
+}

Review comment:
   After rethinking it, the first choice can support more event types in the 
future, while the second choice assumes that we only have one event at the end 
of the data. Maybe the first choice is better?
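
The data-versus-event distinction discussed here can be illustrated with a standalone sketch (hypothetical; the names mirror the diff, but this is not the actual Flink code):

```java
public class NextDataTypeSketch {
    enum DataType { DATA_BUFFER, EVENT_BUFFER, NONE }

    // Mirrors the new logic in the diff: report DATA_BUFFER while data
    // buffers remain, EVENT_BUFFER for the trailing event, and NONE once
    // the subpartition is fully consumed.
    static DataType nextDataType(int numDataBuffers, int numDataAndEventBuffers) {
        if (numDataBuffers > 0) {
            return DataType.DATA_BUFFER;
        } else if (numDataAndEventBuffers > 0) {
            return DataType.EVENT_BUFFER;
        }
        return DataType.NONE;
    }

    public static void main(String[] args) {
        System.out.println(nextDataType(2, 3)); // data buffers remain
        System.out.println(nextDataType(0, 1)); // only the final event left
        System.out.println(nextDataType(0, 0)); // fully consumed
    }
}
```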








[GitHub] [flink] Zakelly commented on pull request #16341: [FLINK-21804][state/changelog] Create and wire changelog storage with state backend

2021-07-07 Thread GitBox


Zakelly commented on pull request #16341:
URL: https://github.com/apache/flink/pull/16341#issuecomment-876077629


   Thanks a lot for your detailed review @rkhachatryan and @Myasuka 






[GitHub] [flink] Myasuka commented on pull request #16341: [FLINK-21804][state/changelog] Create and wire changelog storage with state backend

2021-07-07 Thread GitBox


Myasuka commented on pull request #16341:
URL: https://github.com/apache/flink/pull/16341#issuecomment-876076918


   Merging...






[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2021-07-07 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376969#comment-17376969
 ] 

Xintong Song commented on FLINK-20329:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20121=logs=e1276d0f-df12-55ec-86b5-c0ad597d83c9=906e9244-f3be-5604-1979-e767c8a6f6d9=11791

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  

[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2021-07-07 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376970#comment-17376970
 ] 

Xintong Song commented on FLINK-20329:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20121=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=12176

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  

[GitHub] [flink] flinkbot commented on pull request #16417: FLINK-22969

2021-07-07 Thread GitBox


flinkbot commented on pull request #16417:
URL: https://github.com/apache/flink/pull/16417#issuecomment-876076375


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e8e280ccc9d30446c0280ecf8b8028b2cd25206f (Thu Jul 08 
02:42:30 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-22969).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] wsry commented on a change in pull request #11877: [FLINK-16641][network] Announce sender's backlog to solve the deadlock issue without exclusive buffers

2021-07-07 Thread GitBox


wsry commented on a change in pull request #11877:
URL: https://github.com/apache/flink/pull/11877#discussion_r665829387



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PipelinedSubpartition.java
##
@@ -315,6 +326,17 @@ BufferAndBacklog pollBuffer() {
 if (buffer.readableBytes() > 0) {
 break;
 }
+
+// if we have an empty finished buffer and the exclusive credit is 0, we just return
+// the empty buffer so that the downstream task can release the allocated credit for
+// this empty buffer, this happens in two main scenarios currently:
+// 1. all data of a buffer builder has been read and after that the buffer builder
+// is finished
+// 2. in approximate recovery mode, a partial record takes a whole buffer builder
+if (buffersPerChannel == 0 && bufferConsumer.isFinished()) {
+break;
+}
+

Review comment:
   Let's focus on the 3rd case first and assume that the exclusive credit is 0.
   
   1. There is only one data buffer in the queue.
   2. A flush is triggered.
   3. All data of the first buffer is committed, but the buffer is still not finished.
   4. All data of the buffer is consumed by pollBuffer and the available credit becomes 0.
   5. The first buffer is finished, the second event is added, and the data available notification is triggered.
   6. The upstream announces the backlog to the downstream to request a credit.
   7. The upstream receives the available credit and starts to pollBuffer.
   8. It skips the first empty buffer and sends the second event.
   9. The downstream receives the event, but the event does not consume any credit.
   
   Do you mean we should change the current logic and release the floating buffer 
for events in some cases (including reducing the available credit by 1 at the 
upstream; currently the available credit is not decreased for events)? If 
there are multiple empty buffers, should we just skip the first one or should 
we skip all of them?
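
The credit accounting in the scenario above can be modeled with a toy sketch (hypothetical; real credit management lives in Flink's network stack, not in a class like this):

```java
public class CreditSketch {
    // Toy model of credit-based flow control at one channel:
    // data buffers consume one credit each; events consume none.
    int availableCredit = 0;

    void grantCredit() { availableCredit++; }

    void send(boolean isEvent) {
        if (!isEvent) {
            availableCredit--;
        }
    }

    public static void main(String[] args) {
        CreditSketch channel = new CreditSketch();
        channel.grantCredit(); // steps 6-7: backlog announced, one credit granted
        channel.send(true);    // steps 8-9: empty buffer skipped, only the event sent
        // The granted credit was never consumed, so one floating buffer
        // stays reserved at the downstream side.
        System.out.println(channel.availableCredit);
    }
}
```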








[jira] [Updated] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-22969:
---
Labels: pull-request-available starter  (was: starter)

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> "payload 2"),
> changelogRow(
> "+I",
> 3L,
> 

[GitHub] [flink] LongWangXX opened a new pull request #16417: FLINK-22969

2021-07-07 Thread GitBox


LongWangXX opened a new pull request #16417:
URL: https://github.com/apache/flink/pull/16417


   
   ## What is the purpose of the change
   
   Fix FLINK-22969: verify that the topic is not null and not an empty string when 
creating KafkaSourceFunction and KafkaSinkFunction.
   
   
   ## Brief change log
   
   A KafkaTopicsDescriptor object is created in the constructor of the 
FlinkKafkaConsumerBase class, so I added a check to the KafkaTopicsDescriptor 
constructor that verifies the topic is not empty.
   
   In FlinkKafkaProducer.FlinkKafkaProducer(
   String defaultTopic,
   KeyedSerializationSchema keyedSchema,
   FlinkKafkaPartitioner customPartitioner,
   KafkaSerializationSchema kafkaSchema,
   Properties producerConfig,
   FlinkKafkaProducer.Semantic semantic,
   int kafkaProducersPoolSize), a check is added that the defaultTopic is not 
empty, because all FlinkKafkaProducer constructors call this constructor.
   
   
   ## Verifying this change
   This change is a trivial rework without any test coverage.
   
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
   Does this pull request introduce a new feature? (no)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23300) Job fails very slow because of no notifyAllocationFailure for DeclarativeSlotManager

2021-07-07 Thread Zhilong Hong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376966#comment-17376966
 ] 

Zhilong Hong commented on FLINK-23300:
--

I think FLINK-23202 and FLINK-23209 are working on solving this issue.

> Job fails very slow because of no notifyAllocationFailure for 
> DeclarativeSlotManager
> 
>
> Key: FLINK-23300
> URL: https://issues.apache.org/jira/browse/FLINK-23300
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.13.1
>Reporter: Liu
>Priority: Major
>
> When a container is killed, Flink on YARN can detect the problem very quickly. 
> But when using the default DeclarativeSlotManager, notifyAllocationFailure is 
> not called and the task does not fail until the heartbeat times out. So the 
> failover is very slow. 
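Until the linked fixes land, one possible mitigation (my own suggestion, not from this thread) is to lower the heartbeat timeout so that lost TaskManagers are detected sooner. The keys below are the standard Flink options, shown here with their default values:

```yaml
# flink-conf.yaml -- defaults shown; lowering heartbeat.timeout speeds up
# detection of dead TaskManagers at the cost of more false positives on
# slow or heavily loaded clusters.
heartbeat.interval: 10000   # ms between heartbeat requests
heartbeat.timeout: 50000    # ms without a heartbeat before the peer is marked dead
```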



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hehuiyuan removed a comment on pull request #15965: [FLINK-23298][datagen] Normalize parameter names in RandomGeneratorVisitor and SequenceGeneratorVisitor

2021-07-07 Thread GitBox


hehuiyuan removed a comment on pull request #15965:
URL: https://github.com/apache/flink/pull/15965#issuecomment-876066968


   Hi @luoyuxia   here.






[GitHub] [flink] JingsongLi merged pull request #16411: [FLINK-23184][table-runtime-blink] Fix compile error in code generation of unary plus and minus

2021-07-07 Thread GitBox


JingsongLi merged pull request #16411:
URL: https://github.com/apache/flink/pull/16411


   






[jira] [Commented] (FLINK-23298) [datagen] Normalize parameter names in RandomGeneratorVisitor and SequenceGeneratorVisitor

2021-07-07 Thread fhan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376955#comment-17376955
 ] 

fhan commented on FLINK-23298:
--

[~luoyuxia] I have done it, please review.

> [datagen] Normalize parameter names in RandomGeneratorVisitor and 
> SequenceGeneratorVisitor
> --
>
> Key: FLINK-23298
> URL: https://issues.apache.org/jira/browse/FLINK-23298
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Ecosystem
>Reporter: fhan
>Priority: Minor
>  Labels: pull-request-available
>
> This PR normalized parameter names in RandomGeneratorVisitor and 
> SequenceGeneratorVisitor.
> related methods:
> [!https://user-images.githubusercontent.com/5745228/118935994-b47c6080-b97e-11eb-86ef-43c191f602fd.jpg|width=416,height=287!|https://user-images.githubusercontent.com/5745228/118935994-b47c6080-b97e-11eb-86ef-43c191f602fd.jpg]





[jira] [Commented] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-07 Thread longwang0616 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376953#comment-17376953
 ] 

longwang0616 commented on FLINK-22969:
--

okay, thank you.

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> "payload 2"),
> changelogRow(
> "+I",
> 3L,
> "name 3",
>  

[jira] [Commented] (FLINK-22969) Validate the topic is not null or empty string when create kafka source/sink function

2021-07-07 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376952#comment-17376952
 ] 

luoyuxia commented on FLINK-22969:
--

[~longwang0616]

Hi, thanks for your contribution; the modification looks fine to me. Just go 
ahead and open a pull request, and we will review it for you.

> Validate the topic is not null or empty string when create kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: starter
> Attachments: image-2021-07-06-18-55-22-235.png, 
> image-2021-07-06-18-55-54-109.png, image-2021-07-06-19-01-22-483.png, 
> image-2021-07-06-19-03-22-899.png, image-2021-07-06-19-03-32-050.png, 
> image-2021-07-06-19-04-16-530.png, image-2021-07-06-19-04-53-651.png, 
> image-2021-07-06-19-05-48-964.png, image-2021-07-06-19-07-01-607.png, 
> image-2021-07-06-19-07-27-936.png, image-2021-07-06-22-41-52-089.png
>
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> "payload 2"),
> 

[jira] [Updated] (FLINK-23298) [datagen] Normalize parameter names in RandomGeneratorVisitor and SequenceGeneratorVisitor

2021-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-23298:
---
Labels: pull-request-available  (was: )

> [datagen] Normalize parameter names in RandomGeneratorVisitor and 
> SequenceGeneratorVisitor
> --
>
> Key: FLINK-23298
> URL: https://issues.apache.org/jira/browse/FLINK-23298
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Ecosystem
>Reporter: fhan
>Priority: Minor
>  Labels: pull-request-available
>
> This PR normalized parameter names in RandomGeneratorVisitor and 
> SequenceGeneratorVisitor.
> related methods:
> [!https://user-images.githubusercontent.com/5745228/118935994-b47c6080-b97e-11eb-86ef-43c191f602fd.jpg|width=416,height=287!|https://user-images.githubusercontent.com/5745228/118935994-b47c6080-b97e-11eb-86ef-43c191f602fd.jpg]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hehuiyuan commented on pull request #15965: [FLINK-23298][datagen] Normalize parameter names in RandomGeneratorVisitor and SequenceGeneratorVisitor

2021-07-07 Thread GitBox


hehuiyuan commented on pull request #15965:
URL: https://github.com/apache/flink/pull/15965#issuecomment-876066968


   Hi @luoyuxia   here.






[GitHub] [flink] flinkbot edited a comment on pull request #16416: [FLINK-22657][Connectors/Hive] HiveParserDDLSemanticAnalyzer return operations directly

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16416:
URL: https://github.com/apache/flink/pull/16416#issuecomment-876055217


   
   ## CI report:
   
   * 9c60fc2fed1f4140035abe32b0b9b7cd8cc55973 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20128)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-22198) KafkaTableITCase hang.

2021-07-07 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376948#comment-17376948
 ] 

Xintong Song commented on FLINK-22198:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20120=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518=7469

> KafkaTableITCase hang.
> --
>
> Key: FLINK-22198
> URL: https://issues.apache.org/jira/browse/FLINK-22198
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.4
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16287=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6625
> There are no artifacts.





[jira] [Commented] (FLINK-22860) Supplement 'HELP' command prompt message for SQL-Cli.

2021-07-07 Thread Roc Marshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376949#comment-17376949
 ] 

Roc Marshal commented on FLINK-22860:
-

Could someone please help me advance this PR? Thank you.

> Supplement 'HELP' command prompt message for SQL-Cli.
> -
>
> Key: FLINK-22860
> URL: https://issues.apache.org/jira/browse/FLINK-22860
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Reporter: Roc Marshal
>Assignee: Roc Marshal
>Priority: Minor
>  Labels: pull-request-available
> Attachments: attach.png
>
>
> !attach.png!





[jira] [Updated] (FLINK-23303) org.apache.calcite.rex.RexLiteral cannot be cast to org.apache.calcite.rex.RexCall

2021-07-07 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23303:
---
Description: 
{code:java}
CREATE TABLE database5_t1(
`c0` SMALLINT , `c1` INTEGER , `c2` SMALLINT
) WITH (
 'connector' = 'filesystem',
 'format' = 'testcsv',
 'path' = '$resultPath11'
)

INSERT INTO database5_t1(c0, c1, c2) VALUES(cast(-21957 as SMALLINT), 
1094690065, cast(16917 as SMALLINT))

SELECT database5_t1.c0 AS ref0 FROM database5_t1 WHERE (FALSE) NOT IN (((NOT 
CAST ((database5_t1.c0) AS BOOLEAN))) = (database5_t1.c0))
{code}


*After executing the SQL above, you will get the following error:*


{code:java}
java.lang.ClassCastException: org.apache.calcite.rex.RexLiteral cannot be cast 
to org.apache.calcite.rex.RexCall

at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:478)
at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:367)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:138)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:137)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.extractConjunctiveConditions(RexNodeExtractor.scala:137)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor.extractConjunctiveConditions(RexNodeExtractor.scala)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.extractPredicates(PushFilterIntoSourceScanRuleBase.java:145)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.pushFilterIntoScan(PushFilterIntoTableSourceScanRule.java:81)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.onMatch(PushFilterIntoTableSourceScanRule.java:70)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
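For illustration only, a simplified stand-in (these are not Calcite's actual classes) showing the failure mode behind the ClassCastException above, where a node is cast to RexCall even though it may be a RexLiteral, and the instanceof guard that avoids it:

```java
// Simplified stand-ins for Calcite's RexNode hierarchy; just enough structure
// to illustrate the cast failure, not the real Calcite or Flink planner code.
interface RexNode {}

class RexLiteral implements RexNode {}

class RexCall implements RexNode {}

class SafeConverter {
    // Guarded version: checks the runtime type before treating the node as a
    // call. The failing code effectively did `(RexCall) node` without such a
    // check, which throws ClassCastException when the operand is a literal.
    static String describe(RexNode node) {
        if (node instanceof RexCall) {
            return "call";
        } else if (node instanceof RexLiteral) {
            return "literal"; // handled explicitly instead of cast blindly
        }
        return "unsupported";
    }
}
```

The actual fix belongs in the converter's visitCall path, but the shape of the guard is the same: branch on the concrete RexNode subtype instead of assuming it.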

[jira] [Updated] (FLINK-23303) org.apache.calcite.rex.RexLiteral cannot be cast to org.apache.calcite.rex.RexCall

2021-07-07 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23303:
---
Description: 
{code:java}
CREATE TABLE database5_t1(
`c0` SMALLINT , `c1` INTEGER , `c2` SMALLINT
) WITH (
 'connector' = 'filesystem',
 'format' = 'testcsv',
 'path' = '$resultPath11'
)

INSERT INTO database5_t1(c0, c1, c2) VALUES(cast(-21957 as SMALLINT), 
1094690065, cast(16917 as SMALLINT))

SELECT database5_t1.c0 AS ref0 FROM database5_t1 WHERE (FALSE) NOT IN (((NOT 
CAST ((database5_t1.c0) AS BOOLEAN))) = (database5_t1.c0))
{code}


After executing the SQL above, you will get the following error:



{code:java}
java.lang.ClassCastException: org.apache.calcite.rex.RexLiteral cannot be cast 
to org.apache.calcite.rex.RexCall

at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:478)
at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:367)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:138)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:137)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.extractConjunctiveConditions(RexNodeExtractor.scala:137)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor.extractConjunctiveConditions(RexNodeExtractor.scala)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.extractPredicates(PushFilterIntoSourceScanRuleBase.java:145)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.pushFilterIntoScan(PushFilterIntoTableSourceScanRule.java:81)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.onMatch(PushFilterIntoTableSourceScanRule.java:70)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 

[jira] [Updated] (FLINK-23303) org.apache.calcite.rex.RexLiteral cannot be cast to org.apache.calcite.rex.RexCall

2021-07-07 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23303:
---
Description: 
{code:java}
CREATE TABLE database5_t1(
`c0` SMALLINT , `c1` INTEGER , `c2` SMALLINT
) WITH (
 'connector' = 'filesystem',
 'format' = 'testcsv',
 'path' = '$resultPath11'
)

INSERT INTO database5_t1(c0, c1, c2) VALUES(cast(-21957 as SMALLINT), 
1094690065, cast(16917 as SMALLINT))

SELECT database5_t1.c0 AS ref0 FROM database5_t1 WHERE (FALSE) NOT IN (((NOT 
CAST ((database5_t1.c0) AS BOOLEAN))) = (database5_t1.c0))
{code}


*After executing the SQL above, you will get the following error:*


{code:java}
java.lang.ClassCastException: org.apache.calcite.rex.RexLiteral cannot be cast 
to org.apache.calcite.rex.RexCall

at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:478)
at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:367)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:138)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:137)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.extractConjunctiveConditions(RexNodeExtractor.scala:137)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor.extractConjunctiveConditions(RexNodeExtractor.scala)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.extractPredicates(PushFilterIntoSourceScanRuleBase.java:145)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.pushFilterIntoScan(PushFilterIntoTableSourceScanRule.java:81)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.onMatch(PushFilterIntoTableSourceScanRule.java:70)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 

[jira] [Updated] (FLINK-23303) org.apache.calcite.rex.RexLiteral cannot be cast to org.apache.calcite.rex.RexCall

2021-07-07 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23303:
---
Description: 

{code:java}
CREATE TABLE database5_t1(
`c0` SMALLINT , `c1` INTEGER , `c2` SMALLINT
) WITH (
 'connector' = 'filesystem',
 'format' = 'testcsv',
 'path' = '$resultPath11'
)

INSERT INTO database5_t1(c0, c1, c2) VALUES(cast(-21957 as SMALLINT), 
1094690065, cast(16917 as SMALLINT))

SELECT database5_t1.c0 AS ref0 FROM database5_t1 WHERE (FALSE) NOT IN (((NOT 
CAST ((database5_t1.c0) AS BOOLEAN))) = (database5_t1.c0))
{code}


*After executing the SQL above, you will get the following error:*


{code:java}
java.lang.ClassCastException: org.apache.calcite.rex.RexLiteral cannot be cast to org.apache.calcite.rex.RexCall

at org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:478)
at org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:367)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:138)
at org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:137)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.extractConjunctiveConditions(RexNodeExtractor.scala:137)
at org.apache.flink.table.planner.plan.utils.RexNodeExtractor.extractConjunctiveConditions(RexNodeExtractor.scala)
at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.extractPredicates(PushFilterIntoSourceScanRuleBase.java:145)
at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.pushFilterIntoScan(PushFilterIntoTableSourceScanRule.java:81)
at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.onMatch(PushFilterIntoTableSourceScanRule.java:70)
at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
{code}

[jira] [Created] (FLINK-23303) org.apache.calcite.rex.RexLiteral cannot be cast to org.apache.calcite.rex.RexCall

2021-07-07 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23303:
--

 Summary: org.apache.calcite.rex.RexLiteral cannot be cast to 
org.apache.calcite.rex.RexCall
 Key: FLINK-23303
 URL: https://issues.apache.org/jira/browse/FLINK-23303
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.14.0
Reporter: xiaojin.wy


CREATE TABLE database5_t1(
`c0` SMALLINT , `c1` INTEGER , `c2` SMALLINT
) WITH (
 'connector' = 'filesystem',
 'format' = 'testcsv',
 'path' = '$resultPath11'
)

INSERT INTO database5_t1(c0, c1, c2) VALUES(cast(-21957 as SMALLINT), 1094690065, cast(16917 as SMALLINT))

SELECT database5_t1.c0 AS ref0 FROM database5_t1 WHERE (FALSE) NOT IN (((NOT CAST ((database5_t1.c0) AS BOOLEAN))) = (database5_t1.c0))

*After executing the SQL above, you will get the following error:*

java.lang.ClassCastException: org.apache.calcite.rex.RexLiteral cannot be cast to org.apache.calcite.rex.RexCall

at org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:478)
at org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:367)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:138)
at org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:137)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.extractConjunctiveConditions(RexNodeExtractor.scala:137)
at org.apache.flink.table.planner.plan.utils.RexNodeExtractor.extractConjunctiveConditions(RexNodeExtractor.scala)
at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.extractPredicates(PushFilterIntoSourceScanRuleBase.java:145)
at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.pushFilterIntoScan(PushFilterIntoTableSourceScanRule.java:81)
at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.onMatch(PushFilterIntoTableSourceScanRule.java:70)
at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
at org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)

[jira] [Created] (FLINK-23302) Precondition failed building CheckpointMetrics.

2021-07-07 Thread Kyle Weaver (Jira)
Kyle Weaver created FLINK-23302:
---

 Summary: Precondition failed building CheckpointMetrics.
 Key: FLINK-23302
 URL: https://issues.apache.org/jira/browse/FLINK-23302
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Reporter: Kyle Weaver


Beam has a flaky test using Flink savepoints. It looks like 
alignmentDurationNanos is less than -1, which shouldn't be possible. As far as 
I know clients (like Beam) don't have any control over this value, so my best 
guess is that it's a bug in Flink.
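To make the reported condition concrete: the failure mode is a precondition rejecting an alignment duration below -1 (where -1 conventionally means "unset"). The sketch below is illustrative only; the class and method names are made up and are not Flink's actual `CheckpointMetrics` API.

```java
// Hypothetical sketch of the kind of precondition that this ticket trips:
// an alignment duration below -1 (the "unset" sentinel) is rejected.
// Names are illustrative, not Flink's real CheckpointMetrics API.
public class CheckpointMetricsSketch {
    private final long alignmentDurationNanos;

    public CheckpointMetricsSketch(long alignmentDurationNanos) {
        // -1 means "unset"; anything smaller is invalid and fails fast.
        if (alignmentDurationNanos < -1) {
            throw new IllegalArgumentException(
                    "alignmentDurationNanos must be >= -1, got " + alignmentDurationNanos);
        }
        this.alignmentDurationNanos = alignmentDurationNanos;
    }

    public long getAlignmentDurationNanos() {
        return alignmentDurationNanos;
    }
}
```

The report suggests the value handed to this precondition is computed inside Flink, not supplied by clients such as Beam, which is why it reads like a Flink-side bug.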

See 
https://issues.apache.org/jira/browse/BEAM-10955?focusedCommentId=17376928=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17376928
 for context.

The failing test is here: 
[https://github.com/apache/beam/blob/b401d23dfc2a487ae5775164a7834952391ff4fa/runners/flink/src/test/java/org/apache/beam/runners/flink/FlinkSavepointTest.java#L146]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wsry commented on a change in pull request #11877: [FLINK-16641][network] Announce sender's backlog to solve the deadlock issue without exclusive buffers

2021-07-07 Thread GitBox


wsry commented on a change in pull request #11877:
URL: https://github.com/apache/flink/pull/11877#discussion_r665814336



##
File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
##
@@ -357,11 +359,26 @@ public void resumeConsumption() throws IOException {
 checkState(!isReleased.get(), "Channel released.");
 checkPartitionRequestQueueInitialized();
 
+if (initialCredit == 0) {
+unannouncedCredit.set(0);

Review comment:
   > Something sounds wrong here. The race condition that you described 
above, does it mean that unannouncedCredit can be out of sync? That we in 
reality have released all floating buffers, channel is blocked, but actually 
unannouncedCredit > 0? And it's only fixed after calling resumeConsumption()?
   
   Yes, exactly.
   
   > And as I understand it, without your change, this problem doesn't exist, 
as floating buffers are kept assigned to the blocked channel and the 
unannouncedCredit (or maybe even assigned AddCredit that might have been sent 
to the upstream node) are consistent with the reality. Also those assigned 
floating buffers are not used because channel is blocked, but that is not a big 
issue, because thanks to the exclusive buffers, other channels can make a 
progress?
   
   That only happens when the exclusive credit is 0. If the exclusive credit is
not 0, the allocated floating buffers will not be released; if the exclusive
credit is 0, we release the allocated floating buffers to let other channels use
them and avoid deadlock. An extreme case is that we only have 1 floating buffer
and no exclusive buffer. At the downstream, the unannounced credit is reset; at
the upstream, the available credit is also reset to 0 when consumption is
resumed.
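   A minimal sketch of the reset being discussed, assuming a channel object with an atomic unannounced-credit counter. This is not Flink's actual `RemoteInputChannel`; the field and method names only mirror the diff above.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: when a channel has no exclusive buffers
// (initialCredit == 0), its floating buffers may have been released while the
// channel was blocked, so the accumulated unannounced credit can be stale.
// resumeConsumption() therefore drops it back to 0. Not Flink's real class.
public class CreditResetSketch {
    private final int initialCredit;
    private final AtomicInteger unannouncedCredit = new AtomicInteger();

    public CreditResetSketch(int initialCredit) {
        this.initialCredit = initialCredit;
    }

    public void addCredit(int credit) {
        unannouncedCredit.addAndGet(credit);
    }

    public void resumeConsumption() {
        if (initialCredit == 0) {
            // Floating buffers were handed back while blocked; the credit no
            // longer corresponds to real buffers, so discard it.
            unannouncedCredit.set(0);
        }
    }

    public int getUnannouncedCredit() {
        return unannouncedCredit.get();
    }
}
```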




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[GitHub] [flink] flinkbot commented on pull request #16416: [FLINK-22657][Connectors/Hive] HiveParserDDLSemanticAnalyzer return operations directly

2021-07-07 Thread GitBox


flinkbot commented on pull request #16416:
URL: https://github.com/apache/flink/pull/16416#issuecomment-876055217


   
   ## CI report:
   
   * 9c60fc2fed1f4140035abe32b0b9b7cd8cc55973 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-23298) [datagen] Normalize parameter names in RandomGeneratorVisitor and SequenceGeneratorVisitor

2021-07-07 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17376939#comment-17376939
 ] 

luoyuxia commented on FLINK-23298:
--

[~fhan688]

Thanks for reporting this.

Just go ahead and submit a PR to us; we will review it for you.

> [datagen] Normalize parameter names in RandomGeneratorVisitor and 
> SequenceGeneratorVisitor
> --
>
> Key: FLINK-23298
> URL: https://issues.apache.org/jira/browse/FLINK-23298
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Ecosystem
>Reporter: fhan
>Priority: Minor
>
> This PR normalized parameter names in RandomGeneratorVisitor and 
> SequenceGeneratorVisitor.
> related methods:
> [!https://user-images.githubusercontent.com/5745228/118935994-b47c6080-b97e-11eb-86ef-43c191f602fd.jpg|width=416,height=287!|https://user-images.githubusercontent.com/5745228/118935994-b47c6080-b97e-11eb-86ef-43c191f602fd.jpg]





[GitHub] [flink] flinkbot commented on pull request #16416: [FLINK-22657][Connectors/Hive] HiveParserDDLSemanticAnalyzer return operations directly

2021-07-07 Thread GitBox


flinkbot commented on pull request #16416:
URL: https://github.com/apache/flink/pull/16416#issuecomment-876048877


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9c60fc2fed1f4140035abe32b0b9b7cd8cc55973 (Thu Jul 08 
01:33:31 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[jira] [Updated] (FLINK-22657) HiveParserDDLSemanticAnalyzer can directly return operations

2021-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-22657:
---
Labels: pull-request-available  (was: )

> HiveParserDDLSemanticAnalyzer can directly return operations
> 
>
> Key: FLINK-22657
> URL: https://issues.apache.org/jira/browse/FLINK-22657
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
>
> There's no need to first generate some "desc" and later convert to operation 
> with DDLOperationConverter





[GitHub] [flink] luoyuxia opened a new pull request #16416: [FLINK-22657][Connectors/Hive] HiveParserDDLSemanticAnalyzer return operations directly

2021-07-07 Thread GitBox


luoyuxia opened a new pull request #16416:
URL: https://github.com/apache/flink/pull/16416


   
   
   
   ## What is the purpose of the change
   
   *To make HiveParserDDLSemanticAnalyzer return operations directly.*
   
   
   ## Brief change log
   
 - *Convert HiveParserASTNode to Operation directly in 
HiveParserDDLSemanticAnalyzer instead of firstly generating description and 
then convert it to operation*
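A toy illustration of the refactor described above, under made-up names (the real classes live in the Hive connector): instead of first producing an intermediate descriptor that a separate converter later maps to an `Operation`, the analyzer returns the `Operation` in one step.

```java
// Toy illustration of the refactor: return the Operation directly from the
// analyzer instead of going through an intermediate "desc" object that a
// converter maps afterwards. All names here are invented for illustration.
public class DirectOperationSketch {
    interface Operation {
        String summary();
    }

    static class DropTableOperation implements Operation {
        private final String table;

        DropTableOperation(String table) {
            this.table = table;
        }

        @Override
        public String summary() {
            return "DROP TABLE " + table;
        }
    }

    // Before: analyze -> DropTableDesc -> converter -> Operation.
    // After:  analyze -> Operation in one step.
    static Operation analyzeDropTable(String table) {
        return new DropTableOperation(table);
    }
}
```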
   
   
   ## Verifying this change
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:  no
 - The serializers:  no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper:  no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   






[GitHub] [flink] flinkbot edited a comment on pull request #14857: [FLINK-21087][runtime][checkpoint] StreamTask waits for all the pending checkpoints to finish before finished

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #14857:
URL: https://github.com/apache/flink/pull/14857#issuecomment-772961443


   
   ## CI report:
   
   * eac2bf7375798c8c8c48ca8ef1c738b7a4a8d815 UNKNOWN
   * e19e26451d32719c80887f6ce02d1cc7d8ec5512 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20119)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20109)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16404: [FLINK-23277][state/changelog] Store and recover TTL metadata using changelog

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16404:
URL: https://github.com/apache/flink/pull/16404#issuecomment-875114816


   
   ## CI report:
   
   * 8eb9d49182bddde07fc1abd731347206c820471f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20042)
 
   * eea47c38b52880c66fd93dac162d5d7f895ca752 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20127)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16345: [FLINK-18783] Load AkkaRpcSystem through separate classloader

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16345:
URL: https://github.com/apache/flink/pull/16345#issuecomment-872302449


   
   ## CI report:
   
   * 3c74e6208e91e48260fb5d1036680fc40e58a7f5 UNKNOWN
   * 51d28b579f190a877ade870c86b938a795a62818 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20118)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16404: [FLINK-23277][state/changelog] Store and recover TTL metadata using changelog

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16404:
URL: https://github.com/apache/flink/pull/16404#issuecomment-875114816


   
   ## CI report:
   
   * 8eb9d49182bddde07fc1abd731347206c820471f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20042)
 
   * eea47c38b52880c66fd93dac162d5d7f895ca752 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16403: [FLINK-23276][state/changelog] Fix missing delegation in getPartitionedState

2021-07-07 Thread GitBox


flinkbot edited a comment on pull request #16403:
URL: https://github.com/apache/flink/pull/16403#issuecomment-875102625


   
   ## CI report:
   
   * 1d1368d281acdb14f47adc7ec19078d7e21dda5f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20126)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20041)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-19303) Disable WAL in RocksDB recovery

2021-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-19303:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Disable WAL in RocksDB recovery
> ---
>
> Key: FLINK-19303
> URL: https://issues.apache.org/jira/browse/FLINK-19303
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Juha Mynttinen
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> During recovery of {{RocksDBStateBackend}} the recovery mechanism puts the 
> key value pairs to local RocksDB instance(s). To speed up the process, the 
> recovery process uses RocskDB write batch mechanism. [RocksDB 
> WAL|https://github.com/facebook/rocksdb/wiki/Write-Ahead-Log]  is enabled 
> during this process.
> During normal operations, i.e. when the state backend has been recovered and 
> the Flink application is running (on RocksDB state backend) WAL is disabled.
> The recovery process doesn't need WAL. In fact the recovery should be much 
> faster without WAL. Thus, WAL should be disabled in the recovery process.
> AFAIK the last thing that was done with WAL during recovery was an attempt to 
> remove it. Later that removal was removed because it causes stability issues 
> (https://issues.apache.org/jira/browse/FLINK-8922).
> Unfortunately the root cause why disabling WAL causes segfault during 
> recovery is unknown. After all, WAL is not used during normal operations.
> Potential explanation is some kind of bug in RocksDB write batch when using 
> WAL. It is possible later RocksDB versions have fixes / workarounds for the 
> issue.





[jira] [Updated] (FLINK-8093) flink job fail because of kafka producer create fail of "javax.management.InstanceAlreadyExistsException"

2021-07-07 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-8093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-8093:
--
  Labels: auto-deprioritized-critical auto-deprioritized-major 
auto-unassigned usability  (was: auto-deprioritized-critical auto-unassigned 
stale-major usability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> flink job fail because of kafka producer create fail of 
> "javax.management.InstanceAlreadyExistsException"
> -
>
> Key: FLINK-8093
> URL: https://issues.apache.org/jira/browse/FLINK-8093
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.3.2, 1.10.0
> Environment: flink 1.3.2, kafka 0.9.1
>Reporter: dongtingting
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> auto-unassigned, usability
>
> One TaskManager has multiple task slots, and one task failed because creating a
> KafkaProducer failed. The reason for the creation failure is
> "javax.management.InstanceAlreadyExistsException:
> kafka.producer:type=producer-metrics,client-id=producer-3". The detailed trace
> is:
> {noformat}
> 2017-11-04 19:41:23,281 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: Custom Source -> Filter -> Map -> Filter -> Sink: 
> dp_client_**_log (7/80) (99551f3f892232d7df5eb9060fa9940c) switched from 
> RUNNING to FAILED.
> org.apache.kafka.common.KafkaException: Failed to construct kafka producer
> at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:321)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:181)
> at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.getKafkaProducer(FlinkKafkaProducerBase.java:202)
> at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.open(FlinkKafkaProducerBase.java:212)
> at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:111)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:375)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.kafka.common.KafkaException: Error registering mbean 
> kafka.producer:type=producer-metrics,client-id=producer-3
> at 
> org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:159)
> at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:77)
> at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
> at org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:255)
> at org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:239)
> at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.registerMetrics(RecordAccumulator.java:137)
> at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.(RecordAccumulator.java:111)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:261)
> ... 9 more
> Caused by: javax.management.InstanceAlreadyExistsException: 
> kafka.producer:type=producer-metrics,client-id=producer-3
> at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at 
> org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:157)
> ... 16 more
> {noformat}
> I suspect that tasks in different task slots of one TaskManager use different
> classloaders, and the task ID may be the same in one process. So this leads to
> the KafkaProducer creation failing in one
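
The MBean name in the exception above is derived from the producer's `client.id`, so two producers in the same JVM with the same `client.id` collide when registering JMX metrics. A common mitigation (a sketch, not the reporter's fix) is to make each producer's `client.id` unique, e.g. by appending a subtask index. `client.id` and `bootstrap.servers` are standard Kafka producer configs; the helper and its parameters are illustrative.

```java
import java.util.Properties;

// Sketch of a common mitigation for the JMX MBean clash above: derive a unique
// client.id per producer (here from a subtask index) so no two producers in
// the same JVM register the same kafka.producer:...,client-id=... MBean.
// "client.id" is a standard Kafka producer config; the helper is illustrative.
public class UniqueClientIdSketch {
    static Properties producerConfig(String baseClientId, int subtaskIndex) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("client.id", baseClientId + "-" + subtaskIndex);
        return props;
    }
}
```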
