[jira] [Commented] (FLINK-24660) Allow setting KafkaSubscriber in KafkaSourceBuilder

2022-04-04 Thread Mason Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17517223#comment-17517223
 ] 

Mason Chen commented on FLINK-24660:


[~kkondamudi] [~dfontana] My apologies; my email filters got messed up. I will 
put up a PR.

> Allow setting KafkaSubscriber in KafkaSourceBuilder
> ---
>
> Key: FLINK-24660
> URL: https://issues.apache.org/jira/browse/FLINK-24660
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.13.3
>Reporter: Mason Chen
>Assignee: Mason Chen
>Priority: Major
>  Labels: stale-assigned, starter
>
> Some users may have a different mechanism for subscribing to the set of 
> topics/partitions. The builder could allow user-supplied custom 
> implementations of KafkaSubscriber.
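The custom-subscriber idea above can be sketched in plain Java. This is a simplified, self-contained illustration: the KafkaSubscriber interface and the builder hook it anticipates are hypothetical stand-ins, since the real connector interface resolves concrete topic partitions through a Kafka admin client rather than returning topic names.

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class CustomSubscriberSketch {

    // Simplified stand-in for the connector's KafkaSubscriber:
    // decides which topics the source should consume.
    interface KafkaSubscriber {
        Set<String> getSubscribedTopics();
    }

    // A user-defined subscriber that pulls its topic list from an
    // external registry (modeled here as a simple map) instead of a
    // fixed topic list or pattern.
    static class RegistryBackedSubscriber implements KafkaSubscriber {
        private final Map<String, Boolean> registry;

        RegistryBackedSubscriber(Map<String, Boolean> registry) {
            this.registry = registry;
        }

        @Override
        public Set<String> getSubscribedTopics() {
            Set<String> topics = new TreeSet<>();
            registry.forEach((topic, enabled) -> {
                if (enabled) {
                    topics.add(topic);
                }
            });
            return topics;
        }
    }

    public static void main(String[] args) {
        KafkaSubscriber subscriber = new RegistryBackedSubscriber(
                Map.of("orders", true, "audit", false, "clicks", true));
        // A builder method such as the proposed setKafkaSubscriber(...)
        // would let the source use this custom resolution logic.
        System.out.println(subscriber.getSubscribedTopics()); // prints [clicks, orders]
    }
}
```

A builder exposing such a setter would keep the existing topic-list and topic-pattern shortcuts intact while opening the subscription step to custom logic.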



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24660) Allow setting KafkaSubscriber in KafkaSourceBuilder

2022-04-04 Thread Karthik Kondamudi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17517214#comment-17517214
 ] 

Karthik Kondamudi commented on FLINK-24660:
---

[~mason6345] please let us know if you are still working on this issue; if not, 
I'm happy to take it up.
cc [~arvid] 

> Allow setting KafkaSubscriber in KafkaSourceBuilder
> ---
>
> Key: FLINK-24660
> URL: https://issues.apache.org/jira/browse/FLINK-24660
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.13.3
>Reporter: Mason Chen
>Assignee: Mason Chen
>Priority: Major
>  Labels: stale-assigned, starter
>
> Some users may have a different mechanism for subscribing to the set of 
> topics/partitions. The builder could allow user-supplied custom 
> implementations of KafkaSubscriber.





[jira] [Comment Edited] (FLINK-27020) use hive dialect in SqlClient would throw an error based on 1.15 version

2022-04-04 Thread Jing Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17517203#comment-17517203
 ] 

Jing Zhang edited comment on FLINK-27020 at 4/5/22 4:31 AM:


[~martijnvisser] Thanks a lot for the reminder.
It works after replacing flink-table-planner-loader with flink-table-planner_2.12 
located in /opt.
I would like to add this to the [hive dialect 
doc|https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/table/hive/hive_dialect/] 
for the 1.15 version, WDYT?


was (Author: qingru zhang):
[~martijnvisser] Thanks a lot for the reminder.
It works after replacing flink-table-planner-loader with flink-table-planner_2.12 
located in /opt.
I would like to add this to the [hive dialect 
doc|https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/table/hive/hive_dialect/] 
later, WDYT?

> use hive dialect in SqlClient would throw an error based on 1.15 version
> -
>
> Key: FLINK-27020
> URL: https://issues.apache.org/jira/browse/FLINK-27020
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.15.0
>Reporter: Jing Zhang
>Priority: Major
> Attachments: image-2022-04-02-20-28-01-335.png
>
>
> I used 1.15 rc0 and encountered a problem.
> An error is thrown if I use the hive dialect in SqlClient.
>  !image-2022-04-02-20-28-01-335.png! 
> And I already added flink-sql-connector-hive-2.3.6_2.12-1.15-SNAPSHOT.jar.
> I note that loading and using the hive module works fine, but using the hive 
> dialect fails.





[jira] [Updated] (FLINK-27056) "pipeline.time-characteristic" should be deprecated and have EVENT_TIME as default value

2022-04-04 Thread Zhanghao Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanghao Chen updated FLINK-27056:
--
Description: 
*Background*
 # pipeline.time-characteristic is the configuration option used to control the 
time characteristic for all created streams, and has the default value 
_PROCESSING_TIME_ at the time of writing. However, the configuration option 
won't take effect unless it is explicitly set by the user, as we read it into 
the code via configuration.getOptional(xx).ifPresent(xx).
 # The default value of _TIME_CHARACTERISTIC_ was changed from 
_PROCESSING_TIME_ to _EVENT_TIME_ in FLINK-19317 (Make EventTime the default 
StreamTimeCharacteristic).
 # _TIME_CHARACTERISTIC_ and the relevant operations that set or use it were 
deprecated in FLINK-19318 (Deprecate timeWindow() operations in DataStream 
API) and FLINK-19319 (Deprecate 
StreamExecutionEnvironment.setStreamTimeCharacteristic() and TimeCharacteristic).

*Proposed Change*
 # pipeline.time-characteristic should be deprecated, just like the other 
_TIME_CHARACTERISTIC_-related operations, as we no longer want users to set it.
 # pipeline.time-characteristic should have the default value 
{_}EVENT_TIME{_}, to reflect the actual default value in the system and avoid 
misleading users.

Additionally, I think all configuration options which only take effect when 
explicitly set by the user (i.e. those read into the system via 
configuration.getOptional(xx).ifPresent(xx)) should have no default values.

  was:
*Background*
 # pipeline.time-characteristic is the configuration option used to control the 
time characteristic for all created streams, and has the default value 
_PROCESSING_TIME_ at the time of writing. However, the configuration option 
won't take effect unless it is explicitly set by the user, as we read it into 
the code via configuration.getOptional(xx).ifPresent(xx).
 # The default value of _TIME_CHARACTERISTIC_ was changed from 
_PROCESSING_TIME_ to _EVENT_TIME_ in [FLINK-19317] (Make EventTime the default 
StreamTimeCharacteristic).
 # _TIME_CHARACTERISTIC_ and the relevant operations that set or use it were 
deprecated in [FLINK-19318] (Deprecate timeWindow() operations in DataStream 
API) and [FLINK-19319] (Deprecate 
StreamExecutionEnvironment.setStreamTimeCharacteristic() and TimeCharacteristic).

*Proposed Change*
 # pipeline.time-characteristic should be deprecated, just like the other 
_TIME_CHARACTERISTIC_-related operations, as we no longer want users to set it.
 # pipeline.time-characteristic should have the default value 
{_}EVENT_TIME{_}, to reflect the actual default value in the system and avoid 
misleading users.

Additionally, I think all configuration options which only take effect when 
explicitly set by the user (i.e. those read into the system via 
configuration.getOptional(xx).ifPresent(xx)) should have no default values.


> "pipeline.time-characteristic" should be deprecated and have EVENT_TIME as 
> default value
> 
>
> Key: FLINK-27056
> URL: https://issues.apache.org/jira/browse/FLINK-27056
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.15.0, 1.12.7, 1.13.6, 1.14.4
>Reporter: Zhanghao Chen
>Priority: Major
> Fix For: 1.16.0
>
>
> *Background*
>  # pipeline.time-characteristic is the configuration option used to control 
> the time characteristic for all created streams, and has the default value 
> _PROCESSING_TIME_ at the time of writing. However, the configuration option 
> won't take effect unless it is explicitly set by the user, as we read it into 
> the code via configuration.getOptional(xx).ifPresent(xx).
>  # The default value of _TIME_CHARACTERISTIC_ was changed from 
> _PROCESSING_TIME_ to _EVENT_TIME_ in FLINK-19317 (Make EventTime the default 
> StreamTimeCharacteristic).
>  # _TIME_CHARACTERISTIC_ and the relevant operations that set or use it were 
> deprecated in FLINK-19318 (Deprecate timeWindow() operations in DataStream 
> API) and FLINK-19319 (Deprecate 
> StreamExecutionEnvironment.setStreamTimeCharacteristic() and 
> TimeCharacteristic).
> *Proposed Change*
>  # pipeline.time-characteristic should be deprecated, just like the other 
> _TIME_CHARACTERISTIC_-related operations, as we no longer want users to set 
> it.
>  # pipeline.time-characteristic should have the default value 
> {_}EVENT_TIME{_}, to reflect the actual default value in the system and avoid 
> misleading users.
> Additionally, I think all configuration options which only take effect when 
> explicitly set by the user (i.e. those read into the system via 
> configuration.getOptional(xx).ifPresent(xx)) should have no default values.

[jira] [Created] (FLINK-27056) "pipeline.time-characteristic" should be deprecated and have EVENT_TIME as default value

2022-04-04 Thread Zhanghao Chen (Jira)
Zhanghao Chen created FLINK-27056:
-

 Summary: "pipeline.time-characteristic" should be deprecated and 
have EVENT_TIME as default value
 Key: FLINK-27056
 URL: https://issues.apache.org/jira/browse/FLINK-27056
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Configuration
Affects Versions: 1.14.4, 1.13.6, 1.12.7, 1.15.0
Reporter: Zhanghao Chen
 Fix For: 1.16.0


*Background*
 # pipeline.time-characteristic is the configuration option used to control the 
time characteristic for all created streams, and has the default value 
_PROCESSING_TIME_ at the time of writing. However, the configuration option 
won't take effect unless it is explicitly set by the user, as we read it into 
the code via configuration.getOptional(xx).ifPresent(xx).
 # The default value of _TIME_CHARACTERISTIC_ was changed from 
_PROCESSING_TIME_ to _EVENT_TIME_ in [FLINK-19317] (Make EventTime the default 
StreamTimeCharacteristic).
 # _TIME_CHARACTERISTIC_ and the relevant operations that set or use it were 
deprecated in [FLINK-19318] (Deprecate timeWindow() operations in DataStream 
API) and [FLINK-19319] (Deprecate 
StreamExecutionEnvironment.setStreamTimeCharacteristic() and TimeCharacteristic).

*Proposed Change*
 # pipeline.time-characteristic should be deprecated, just like the other 
_TIME_CHARACTERISTIC_-related operations, as we no longer want users to set it.
 # pipeline.time-characteristic should have the default value 
{_}EVENT_TIME{_}, to reflect the actual default value in the system and avoid 
misleading users.

Additionally, I think all configuration options which only take effect when 
explicitly set by the user (i.e. those read into the system via 
configuration.getOptional(xx).ifPresent(xx)) should have no default values.
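The first background point (an option read via configuration.getOptional(xx).ifPresent(xx) never applies its declared default) can be illustrated with a minimal, self-contained sketch. The ConfigOption and Configuration classes below are simplified stand-ins, not Flink's real API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class DefaultValueSketch {

    // Simplified stand-in for a ConfigOption carrying a declared default.
    record ConfigOption(String key, String defaultValue) {}

    // Simplified stand-in for Configuration.
    static class Configuration {
        private final Map<String, String> explicitlySet = new HashMap<>();

        void set(ConfigOption option, String value) {
            explicitlySet.put(option.key(), value);
        }

        // Returns only explicitly set values; the declared default is
        // never consulted here, which is the crux of the issue.
        Optional<String> getOptional(ConfigOption option) {
            return Optional.ofNullable(explicitlySet.get(option.key()));
        }
    }

    public static void main(String[] args) {
        ConfigOption timeCharacteristic =
                new ConfigOption("pipeline.time-characteristic", "PROCESSING_TIME");
        Configuration conf = new Configuration();

        StringBuilder effective = new StringBuilder("EVENT_TIME"); // system default
        // When the user never sets the option, ifPresent(...) does not run,
        // so the declared default PROCESSING_TIME is silently ignored.
        conf.getOptional(timeCharacteristic)
                .ifPresent(v -> {
                    effective.setLength(0);
                    effective.append(v);
                });

        System.out.println("effective=" + effective
                + ", declared default=" + timeCharacteristic.defaultValue());
        // prints effective=EVENT_TIME, declared default=PROCESSING_TIME
    }
}
```

This mismatch between the declared default and the effective behavior is exactly what makes the current PROCESSING_TIME default misleading.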





[jira] [Comment Edited] (FLINK-27020) use hive dialect in SqlClient would throw an error based on 1.15 version

2022-04-04 Thread Jing Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17517203#comment-17517203
 ] 

Jing Zhang edited comment on FLINK-27020 at 4/5/22 4:27 AM:


[~martijnvisser] Thanks a lot for the reminder.
It works after replacing flink-table-planner-loader with flink-table-planner_2.12 
located in /opt.
I would like to add this to the [hive dialect 
doc|https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/table/hive/hive_dialect/] 
later, WDYT?


was (Author: qingru zhang):
[~martijnvisser] Thanks a lot for the reminder.
It works after replacing flink-table-planner-loader with flink-table-planner_2.12 
located in /opt.

> use hive dialect in SqlClient would throw an error based on 1.15 version
> -
>
> Key: FLINK-27020
> URL: https://issues.apache.org/jira/browse/FLINK-27020
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.15.0
>Reporter: Jing Zhang
>Priority: Major
> Attachments: image-2022-04-02-20-28-01-335.png
>
>
> I used 1.15 rc0 and encountered a problem.
> An error is thrown if I use the hive dialect in SqlClient.
>  !image-2022-04-02-20-28-01-335.png! 
> And I already added flink-sql-connector-hive-2.3.6_2.12-1.15-SNAPSHOT.jar.
> I note that loading and using the hive module works fine, but using the hive 
> dialect fails.





[jira] [Closed] (FLINK-27020) use hive dialect in SqlClient would throw an error based on 1.15 version

2022-04-04 Thread Jing Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang closed FLINK-27020.
--
Resolution: Not A Problem

> use hive dialect in SqlClient would throw an error based on 1.15 version
> -
>
> Key: FLINK-27020
> URL: https://issues.apache.org/jira/browse/FLINK-27020
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.15.0
>Reporter: Jing Zhang
>Priority: Major
> Attachments: image-2022-04-02-20-28-01-335.png
>
>
> I used 1.15 rc0 and encountered a problem.
> An error is thrown if I use the hive dialect in SqlClient.
>  !image-2022-04-02-20-28-01-335.png! 
> And I already added flink-sql-connector-hive-2.3.6_2.12-1.15-SNAPSHOT.jar.
> I note that loading and using the hive module works fine, but using the hive 
> dialect fails.





[jira] [Commented] (FLINK-27020) use hive dialect in SqlClient would throw an error based on 1.15 version

2022-04-04 Thread Jing Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17517203#comment-17517203
 ] 

Jing Zhang commented on FLINK-27020:


[~martijnvisser] Thanks a lot for the reminder.
It works after replacing flink-table-planner-loader with flink-table-planner_2.12 
located in /opt.

> use hive dialect in SqlClient would throw an error based on 1.15 version
> -
>
> Key: FLINK-27020
> URL: https://issues.apache.org/jira/browse/FLINK-27020
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.15.0
>Reporter: Jing Zhang
>Priority: Major
> Attachments: image-2022-04-02-20-28-01-335.png
>
>
> I used 1.15 rc0 and encountered a problem.
> An error is thrown if I use the hive dialect in SqlClient.
>  !image-2022-04-02-20-28-01-335.png! 
> And I already added flink-sql-connector-hive-2.3.6_2.12-1.15-SNAPSHOT.jar.
> I note that loading and using the hive module works fine, but using the hive 
> dialect fails.





[jira] [Created] (FLINK-27055) java.lang.ArrayIndexOutOfBoundsException in BinarySegmentUtils

2022-04-04 Thread Kenny Ma (Jira)
Kenny Ma created FLINK-27055:


 Summary: java.lang.ArrayIndexOutOfBoundsException in 
BinarySegmentUtils
 Key: FLINK-27055
 URL: https://issues.apache.org/jira/browse/FLINK-27055
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.12.0
Reporter: Kenny Ma


I am using SQL for my streaming job, and the job keeps failing with a 
java.lang.ArrayIndexOutOfBoundsException thrown in BinarySegmentUtils.

Stacktrace:

 
{code:java}
java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
at 
org.apache.flink.table.data.binary.BinarySegmentUtils.getLongSlowly(BinarySegmentUtils.java:773)
at 
org.apache.flink.table.data.binary.BinarySegmentUtils.getLongMultiSegments(BinarySegmentUtils.java:763)
at 
org.apache.flink.table.data.binary.BinarySegmentUtils.getLong(BinarySegmentUtils.java:751)
at 
org.apache.flink.table.data.binary.BinaryArrayData.getString(BinaryArrayData.java:210)
at 
org.apache.flink.table.data.ArrayData.lambda$createElementGetter$95d74a6c$1(ArrayData.java:250)
at 
org.apache.flink.table.data.conversion.MapMapConverter.toExternal(MapMapConverter.java:79)
at StreamExecCalc$11860.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:112)
at 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:93)
at 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
at 
org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:53)
at 
org.apache.flink.table.runtime.operators.window.AggregateWindowOperator.collect(AggregateWindowOperator.java:183)
at 
org.apache.flink.table.runtime.operators.window.AggregateWindowOperator.emitWindowResult(AggregateWindowOperator.java:176)
at 
org.apache.flink.table.runtime.operators.window.WindowOperator.onEventTime(WindowOperator.java:384)
at 
org.apache.flink.streaming.api.operators.InternalTimerServiceImpl.advanceWatermark(InternalTimerServiceImpl.java:276)
at 
org.apache.flink.streaming.api.operators.InternalTimeServiceManagerImpl.advanceWatermark(InternalTimeServiceManagerImpl.java:183)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStreamOperator.java:600)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitWatermark(OneInputStreamTask.java:199)
at 
org.apache.flink.streaming.runtime.streamstatus.StatusWatermarkValve.findAndOutputNewMinWatermarkAcrossAlignedChannels(StatusWatermarkValve.java:173)
at 
org.apache.flink.streaming.runtime.streamstatus.StatusWatermarkValve.inputWatermark(StatusWatermarkValve.java:95)
at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:181)
at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:152)
at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:67)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:372)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:186)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:575)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:539)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:547)
{code}
 





[jira] [Updated] (FLINK-25768) Python test BlinkBatchTableEnvironmentTests.test_explain_with_multi_sinks failed on azure

2022-04-04 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-25768:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Python test BlinkBatchTableEnvironmentTests.test_explain_with_multi_sinks 
> failed on azure
> -
>
> Key: FLINK-25768
> URL: https://issues.apache.org/jira/browse/FLINK-25768
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Table SQL / Planner
>Affects Versions: 1.13.5
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> 2022-01-22T01:51:13.0242386Z Jan 22 01:51:13 answer = 'xro24635'
> 2022-01-22T01:51:13.0242938Z Jan 22 01:51:13 gateway_client = 
> 
> 2022-01-22T01:51:13.0243909Z Jan 22 01:51:13 target_id = 'o24634', name = 
> 'addInsertSql'
> 2022-01-22T01:51:13.0244309Z Jan 22 01:51:13 
> 2022-01-22T01:51:13.0244761Z Jan 22 01:51:13 def get_return_value(answer, 
> gateway_client, target_id=None, name=None):
> 2022-01-22T01:51:13.0245397Z Jan 22 01:51:13 """Converts an answer 
> received from the Java gateway into a Python object.
> 2022-01-22T01:51:13.0245923Z Jan 22 01:51:13 
> 2022-01-22T01:51:13.0246348Z Jan 22 01:51:13 For example, string 
> representation of integers are converted to Python
> 2022-01-22T01:51:13.0246963Z Jan 22 01:51:13 integer, string 
> representation of objects are converted to JavaObject
> 2022-01-22T01:51:13.0247486Z Jan 22 01:51:13 instances, etc.
> 2022-01-22T01:51:13.0247820Z Jan 22 01:51:13 
> 2022-01-22T01:51:13.0248220Z Jan 22 01:51:13 :param answer: the 
> string returned by the Java gateway
> 2022-01-22T01:51:13.0248846Z Jan 22 01:51:13 :param gateway_client: 
> the gateway client used to communicate with the Java
> 2022-01-22T01:51:13.0249505Z Jan 22 01:51:13 Gateway. Only 
> necessary if the answer is a reference (e.g., object,
> 2022-01-22T01:51:13.0249945Z Jan 22 01:51:13 list, map)
> 2022-01-22T01:51:13.0250470Z Jan 22 01:51:13 :param target_id: the 
> name of the object from which the answer comes from
> 2022-01-22T01:51:13.0251084Z Jan 22 01:51:13 (e.g., *object1* in 
> `object1.hello()`). Optional.
> 2022-01-22T01:51:13.0251607Z Jan 22 01:51:13 :param name: the name of 
> the member from which the answer comes from
> 2022-01-22T01:51:13.0252199Z Jan 22 01:51:13 (e.g., *hello* in 
> `object1.hello()`). Optional.
> 2022-01-22T01:51:13.0252646Z Jan 22 01:51:13 """
> 2022-01-22T01:51:13.0253198Z Jan 22 01:51:13 if is_error(answer)[0]:
> 2022-01-22T01:51:13.0253684Z Jan 22 01:51:13 if len(answer) > 1:
> 2022-01-22T01:51:13.0254169Z Jan 22 01:51:13 type = answer[1]
> 2022-01-22T01:51:13.0254757Z Jan 22 01:51:13 value = 
> OUTPUT_CONVERTER[type](answer[2:], gateway_client)
> 2022-01-22T01:51:13.0255450Z Jan 22 01:51:13 if answer[1] == 
> REFERENCE_TYPE:
> 2022-01-22T01:51:13.0256085Z Jan 22 01:51:13 >   raise 
> Py4JJavaError(
> 2022-01-22T01:51:13.0256768Z Jan 22 01:51:13 "An 
> error occurred while calling {0}{1}{2}.\n".
> 2022-01-22T01:51:13.0257432Z Jan 22 01:51:13 
> format(target_id, ".", name), value)
> 2022-01-22T01:51:13.0258250Z Jan 22 01:51:13 E   
> py4j.protocol.Py4JJavaError: An error occurred while calling 
> o24634.addInsertSql.
> 2022-01-22T01:51:13.0259174Z Jan 22 01:51:13 E   : 
> java.lang.NullPointerException
> 2022-01-22T01:51:13.0259824Z Jan 22 01:51:13 Eat 
> java.util.Objects.requireNonNull(Objects.java:203)
> 2022-01-22T01:51:13.0260748Z Jan 22 01:51:13 Eat 
> org.apache.calcite.rel.metadata.RelMetadataQuery.(RelMetadataQuery.java:144)
> 2022-01-22T01:51:13.0261604Z Jan 22 01:51:13 Eat 
> org.apache.calcite.rel.metadata.RelMetadataQuery.(RelMetadataQuery.java:108)
> 2022-01-22T01:51:13.0262653Z Jan 22 01:51:13 Eat 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.(FlinkRelMetadataQuery.java:78)
> 2022-01-22T01:51:13.0263927Z Jan 22 01:51:13 Eat 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.instance(FlinkRelMetadataQuery.java:59)
> 2022-01-22T01:51:13.0264864Z Jan 22 

[jira] [Updated] (FLINK-11998) Flink Interactive Programming (Umbrella JIRA)

2022-04-04 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11998:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Flink Interactive Programming (Umbrella JIRA)
> -
>
> Key: FLINK-11998
> URL: https://issues.apache.org/jira/browse/FLINK-11998
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Ruidong Li
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> This is the Umbrella JIRA for 
> [FLIP-36|https://cwiki.apache.org/confluence/display/FLINK/FLIP-36%3A+Support+Interactive+Programming+in+Flink].





[jira] [Updated] (FLINK-21394) FLIP-164: Improve Schema Handling in Catalogs

2022-04-04 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21394:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> FLIP-164: Improve Schema Handling in Catalogs
> -
>
> Key: FLINK-21394
> URL: https://issues.apache.org/jira/browse/FLINK-21394
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Timo Walther
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> Schema information is necessary at different locations in the Table API for 
> defining tables and/or views. In particular, it is necessary to define the 
> schema in a programmatic DDL (FLIP-129) and when converting a DataStream to a 
> Table (FLIP-136).
> We need similar APIs in the Catalog interfaces such that catalog 
> implementations can define table/views in a unified way.
> This FLIP updates the class hierarchy to achieve the following goals:
> - make it visible whether a schema is resolved or unresolved and when the 
> resolution happens
> - offer a unified API for FLIP-129, FLIP-136, and catalogs
> - allow arbitrary data types and expressions in the schema for watermark spec 
> or columns
> - have access to other catalogs for declaring a data type or expression via 
> CatalogManager
> - cleaned up TableSchema
> - remain backwards compatible in the persisted properties and API





[jira] [Updated] (FLINK-12047) State Processor API (previously named Savepoint Connector) to read / write / process savepoints

2022-04-04 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-12047:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> State Processor API (previously named Savepoint Connector) to read / write / 
> process savepoints
> ---
>
> Key: FLINK-12047
> URL: https://issues.apache.org/jira/browse/FLINK-12047
> Project: Flink
>  Issue Type: New Feature
>  Components: API / State Processor
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> This JIRA tracks the ongoing efforts and discussions about a means to read / 
> write / process state in savepoints.
> There are already two known existing works (that was mentioned already in the 
> mailing lists) related to this:
> 1. Bravo [1]
> 2. https://github.com/sjwiesman/flink/tree/savepoint-connector
> Essentially, the two tools both provide a connector to read or write a Flink 
> savepoint, and allows to utilize Flink's processing APIs for querying / 
> processing the state in the savepoint.
> We should try to converge the efforts on this, and have a savepoint connector 
> like this in Flink.
> With this connector, the high-level benefits users should be able to achieve 
> with it are:
> 1. Create savepoints using existing data from other systems (i.e. 
> bootstrapping a Flink job's state with data in an external database).
> 2. Derive new state using existing state
> 3. Query state in savepoints, for example for debugging purposes
> 4. Migrate schema of state in savepoints offline, compared to the current 
> more limited approach of online migration on state access.
> 5. Change max parallelism of jobs, or any other kind of fixed configuration, 
> such as operator uids.
> [1] https://github.com/king/bravo





[GitHub] [flink] flinkbot commented on pull request #19357: [Flink-26011][test] ArchUnit test for formats test code

2022-04-04 Thread GitBox


flinkbot commented on PR #19357:
URL: https://github.com/apache/flink/pull/19357#issuecomment-1088067086

   
   ## CI report:
   
   * 79a4ada4f2d28eec373ac46ece7f8bd75fbcfa4e UNKNOWN
   
   ## Bot commands
   
   The @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] JingGe opened a new pull request, #19357: [Flink-26011][test] ArchUnit test for formats test code

2022-04-04 Thread GitBox


JingGe opened a new pull request, #19357:
URL: https://github.com/apache/flink/pull/19357

   ## What is the purpose of the change
   
   Develop ArchUnit tests for the test code of formats. 
   
   
   ## Brief change log
   - Add ArchUnit tests for the test code of each format.
   - Add archunit.properties.
   - Initialize the violation stores.
   
   ## Verifying this change
   
   This change adds architectural tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (**yes** / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)





[jira] [Commented] (FLINK-25916) Using upsert-kafka with a flush buffer results in Null Pointer Exception

2022-04-04 Thread Arvid Heise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17517087#comment-17517087
 ] 

Arvid Heise commented on FLINK-25916:
-

This looks like FLINK-24608 to me, except that was fixed in 1.14.3. [~heaje] can 
you double-check whether you are using a newer version? I also see 1.15.0 in the 
ticket description; did anyone actually verify it with 1.15.0?

> Using upsert-kafka with a flush buffer results in Null Pointer Exception
> 
>
> Key: FLINK-25916
> URL: https://issues.apache.org/jira/browse/FLINK-25916
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Runtime
>Affects Versions: 1.15.0, 1.14.3
> Environment: CentOS 7.9 x64
> Intel Xeon Gold 6140 CPU
>Reporter: Corey Shaw
>Priority: Major
>
> Flink Version: 1.14.3
> upsert-kafka version: 1.14.3
>  
> I have been trying to buffer output from the upsert-kafka connector using the 
> documented parameters {{sink.buffer-flush.max-rows}} and 
> {{sink.buffer-flush.interval}}
> Whenever I attempt to run an INSERT query with buffering, I receive the 
> following error (shortened for brevity):
> {code:java}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flink.streaming.connectors.kafka.table.ReducingUpsertWriter.flush(ReducingUpsertWriter.java:145)
>  
> at 
> org.apache.flink.streaming.connectors.kafka.table.ReducingUpsertWriter.lambda$registerFlush$3(ReducingUpsertWriter.java:124)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invokeProcessingTimeCallback(StreamTask.java:1693)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$null$22(StreamTask.java:1684)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) 
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:338)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:324)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:201)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)
>  
> at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
>  
> at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937) 
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766) 
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) 
> at java.lang.Thread.run(Thread.java:829) [?:?] {code}
>  
> If I remove the parameters related to flush buffering, then everything works 
> as expected with no problems at all.  For reference, here is the full setup 
> with source, destination, and queries.  Yes, I realize the INSERT could use 
> an overhaul, but that's not the issue at hand :).
> {code:java}
> CREATE TABLE `source_topic` (
>     `timeGMT` INT,
>     `eventtime` AS TO_TIMESTAMP(FROM_UNIXTIME(`timeGMT`)),
>     `visIdHigh` BIGINT,
>     `visIdLow` BIGINT,
>     `visIdStr` AS CONCAT(IF(`visIdHigh` IS NULL, '', CAST(`visIdHigh` AS 
> STRING)), IF(`visIdLow` IS NULL, '', CAST(`visIdLow` AS STRING))),
>     WATERMARK FOR eventtime AS eventtime - INTERVAL '25' SECONDS
> ) WITH (
>     'connector' = 'kafka',
>     'properties.group.id' = 'flink_metrics',
>     'properties.bootstrap.servers' = 'brokers.example.com:9093',
>     'topic' = 'source_topic',
>     'scan.startup.mode' = 'earliest-offset',
>     'value.format' = 'avro-confluent',
>     'value.avro-confluent.url' = 'http://schema.example.com',
>     'value.fields-include' = 'EXCEPT_KEY'
> );
>  CREATE TABLE dest_topic (
> `messageType` VARCHAR,
> `observationID` BIGINT,
> `obsYear` BIGINT,
> `obsMonth` BIGINT,
> `obsDay` BIGINT,
> `obsHour` BIGINT,
> `obsMinute` BIGINT,
> `obsTz` VARCHAR(5),
> `value` BIGINT,
> PRIMARY KEY (observationID, messageType) NOT ENFORCED
> ) WITH (
> 'connector' = 'upsert-kafka',
> 'key.format' = 'json',
> 'properties.bootstrap.servers' = 'brokers.example.com:9092',
> 'sink.buffer-flush.max-rows' = '5',
> 'sink.buffer-flush.interval' = '1000',
> 'topic' = 'dest_topic',
> 'value.format' = 'json'
> );
> INSERT INTO dest_topic
>     SELECT `messageType`, `observationID`, obsYear, obsMonth, obsDay, 
> obsHour, obsMinute, obsTz, 

[jira] [Updated] (FLINK-26663) Pod augmentation for the operator

2022-04-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26663:
---
Labels: pull-request-available  (was: )

> Pod augmentation for the operator
> -
>
> Key: FLINK-26663
> URL: https://issues.apache.org/jira/browse/FLINK-26663
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Priority: Major
>  Labels: pull-request-available
>
> Currently we provide no convenient way to augment the operator pod itself. 
> It'd be great if we could add something similar to the pod templating 
> mechanism used in Flink core.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-kubernetes-operator] morhidi commented on pull request #155: [FLINK-26663] Pod augmentation for the operator

2022-04-04 Thread GitBox


morhidi commented on PR #155:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/155#issuecomment-1087956079

   cc @wangyang0918 @gyfora @mbalassi 





[GitHub] [flink-kubernetes-operator] morhidi opened a new pull request, #155: [FLINK-26663] Pod augmentation for the operator

2022-04-04 Thread GitBox


morhidi opened a new pull request, #155:
URL: https://github.com/apache/flink-kubernetes-operator/pull/155

   - adding basic example and docs for pod augmentation using Helm + kustomize





[jira] [Closed] (FLINK-26751) [FLIP-171] Kafka implementation of AsyncSinkBase

2022-04-04 Thread Arvid Heise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arvid Heise closed FLINK-26751.
---
Resolution: Won't Fix

As stated, with the current async sink design, it's not possible to implement 
an EOS sink. If we ever extend the async sink, we can reopen the ticket.

Note that the async sink is a means to reduce boilerplate for "easy" sinks that do 
not require sophisticated state management, in particular sinks that just make 
some API calls to write data in either a fire-and-forget or retry-until-success 
fashion. We would probably overengineer the interface if we also wanted to 
capture the various EOS flavors. It's wiser to add more base implementations of 
{{Sink}} in the future when multiple EOS sinks share common code (e.g., Pulsar 
and Kafka).
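The "fire-and-forget" and "retry-until-success" flavors described above can be sketched in plain Java. This is a hypothetical illustration, not Flink's AsyncSinkBase API: the `ApiCall` interface and the bounded-attempt policy are made up for this sketch.

```java
public class RetrySketch {
    /** Stand-in for a destination's write API; hypothetical, not part of Flink. */
    interface ApiCall {
        boolean tryWrite(String record); // true on success
    }

    /** Fire-and-forget: issue the call once and ignore the outcome. */
    static void fireAndForget(ApiCall api, String record) {
        api.tryWrite(record);
    }

    /** Retry-until-success: keep retrying up to a bounded number of attempts. */
    static int retryUntilSuccess(ApiCall api, String record, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (api.tryWrite(record)) {
                return attempt; // number of attempts it took
            }
        }
        throw new IllegalStateException("gave up after " + maxAttempts + " attempts");
    }
}
```

Neither flavor needs transactional state coordinated with checkpoints, which is why such sinks fit the async sink base; an EOS sink would additionally have to tie commits to checkpoint completion.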

> [FLIP-171] Kafka implementation of AsyncSinkBase
> 
>
> Key: FLINK-26751
> URL: https://issues.apache.org/jira/browse/FLINK-26751
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Reporter: Almog Tavor
>Priority: Major
>
> *User stories:*
> Standardize the Kafka connector to implement AsyncSinkBase.
> *Scope:*
>  * Implement an asynchronous sink for Kafka by inheriting the AsyncSinkBase 
> class.
> h2. References
> More details to be found 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]
> h4.  





[GitHub] [flink] snuyanzin commented on pull request #19340: [FLINK-26961][BP-1.14][connectors][filesystems][formats] Update Jackson Databi…

2022-04-04 Thread GitBox


snuyanzin commented on PR #19340:
URL: https://github.com/apache/flink/pull/19340#issuecomment-1087898371

   @MartijnVisser thanks for the hint,
   it looks like it helped





[jira] [Created] (FLINK-27054) Elasticsearch SQL connector SSL issue

2022-04-04 Thread ricardo (Jira)
ricardo created FLINK-27054:
---

 Summary: Elasticsearch SQL connector SSL issue
 Key: FLINK-27054
 URL: https://issues.apache.org/jira/browse/FLINK-27054
 Project: Flink
  Issue Type: Bug
Reporter: ricardo


The current Flink Elasticsearch SQL connector ([Elasticsearch | Apache 
Flink|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/])
 is missing SSL options, so it can't connect to ES clusters that require an SSL 
certificate.





[GitHub] [flink] MartijnVisser commented on pull request #19352: [FLINK-27044][Connectors][Hive] Drop support for Hive versions 1.*, 2.1.* and 2.2.* which are no longer supported by the Hive community

2022-04-04 Thread GitBox


MartijnVisser commented on PR #19352:
URL: https://github.com/apache/flink/pull/19352#issuecomment-1087786644

   @flinkbot run azure





[GitHub] [flink] snuyanzin commented on pull request #19340: [FLINK-26961][BP-1.14][connectors][filesystems][formats] Update Jackson Databi…

2022-04-04 Thread GitBox


snuyanzin commented on PR #19340:
URL: https://github.com/apache/flink/pull/19340#issuecomment-1087715894

   @flinkbot run azure





[jira] [Commented] (FLINK-26793) Flink Cassandra connector performance issue

2022-04-04 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17516904#comment-17516904
 ] 

Etienne Chauchot commented on FLINK-26793:
--

[~bumblebee] in particular, do you see in the Flink logs the message you 
referenced from Stack Overflow: "{_}query is not prepared on /XX.YY.91.205:9042, 
preparing before retrying executing. Seeing this message a few times is fine, but 
seeing it a lot may be source of performance problems"{_} in addition to the 
Scylla message you referenced in the ticket?

Also, please go to the Flink monitoring UI in the _Checkpoints/Overview_ tab and 
search for _Latest Restore._ That way we can check whether there was a checkpoint 
restore and validate/invalidate my supposition that restoring entails 
re-creating the MappingManager (in case it is not part of the Sink operator 
snapshot state).

Also, please confirm that in the code you showed above the _datastream_ object 
contains POJOs and not tuples, so that we can be sure the pipeline indeed uses 
_CassandraPojoSink_ (sink instantiation is automatic, depending on the content 
of your DataStream).
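The supposition above, that the mapper is re-created instead of living as long as the session, can be illustrated with a plain-Java sketch. Everything here is hypothetical: `Session` and `MappingManager` are stand-ins for the Cassandra driver types, and the creation counter exists only to make the lifetime difference visible.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class MapperLifetimeSketch {
    static final AtomicInteger CREATIONS = new AtomicInteger();

    /** Stand-ins for driver objects; hypothetical names. */
    static class Session {}
    static class MappingManager {
        MappingManager(Session s) { CREATIONS.incrementAndGet(); } // expensive in the real driver
    }

    private final Session session = new Session();
    private MappingManager mappingManager; // cached: same lifetime as the session

    /** Problematic pattern: a new manager per record. */
    MappingManager perRecord() {
        return new MappingManager(session);
    }

    /** Intended pattern: create once, reuse for every record. */
    MappingManager cached() {
        if (mappingManager == null) {
            mappingManager = new MappingManager(session);
        }
        return mappingManager;
    }
}
```

With the cached variant, three records trigger one creation; with the per-record variant, every record pays the construction cost, which matches the "insertions might be suffering" warning in the ticket.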

> Flink Cassandra connector performance issue 
> 
>
> Key: FLINK-26793
> URL: https://issues.apache.org/jira/browse/FLINK-26793
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.4
>Reporter: Jay Ghiya
>Assignee: Etienne Chauchot
>Priority: Major
>
> A warning is observed during long runs of flink job stating “Insertions into 
> scylla might be suffering. Expect performance problems unless this is 
> resolved.”
> Upon initial analysis - “flink cassandra connector is not keeping instance of 
> mapping manager that is used to convert a pojo to cassandra row. Ideally the 
> mapping manager should have the same life time as cluster and session objects 
> which are also created once when the driver is initialized”
> Reference: 
> https://stackoverflow.com/questions/59203418/cassandra-java-driver-warning





[GitHub] [flink] slinkydeveloper commented on pull request #19349: [FLINK-27043][table] Removing old csv format references

2022-04-04 Thread GitBox


slinkydeveloper commented on PR #19349:
URL: https://github.com/apache/flink/pull/19349#issuecomment-1087657014

   @flinkbot run azure





[jira] [Commented] (FLINK-27031) ChangelogRescalingITCase.test failed due to IllegalStateException

2022-04-04 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17516867#comment-17516867
 ] 

Roman Khachatryan commented on FLINK-27031:
---

The test fails reliably when rescaling from 1 to 3 or higher DoP, ~6 runs out 
of 10.  ACCUMULATE_TIME_MILLIS doesn't affect it and can be zero.

Furthermore, it only fails when both Changelog and Unaligned checkpoints are 
enabled.

While debugging, I can see that:
- the problematic record comes from ResultSubpartition state
- according to its key, it should be assigned to subtask 1 or 2
- it is produced by upstream 0 and  consumed by downstream 0 (the downstream 
should filter it out)
- however, the downstream 0 doesn't setup Virtual Channels that could filter 
the record; when it does, the test passes

I didn't look further into why Virtual Channels aren't being set up.

From the above, the failure seems to be caused by Unaligned checkpoints rather 
than changelog.
[~pnowojski] could you please take a look?
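For context, the key-group-to-subtask mapping being debugged here can be sketched as follows. This is a simplified re-implementation for illustration only: Flink's actual `KeyGroupRangeAssignment` additionally applies a murmur hash on top of the key's `hashCode()`, and the maxParallelism value of 128 is assumed.

```java
public class KeyGroupSketch {
    /** Simplified key-group assignment; Flink murmur-hashes hashCode() first. */
    static int assignToKeyGroup(Object key, int maxParallelism) {
        return Math.floorMod(key.hashCode(), maxParallelism);
    }

    /** Maps a key group to the subtask (operator index) that owns it. */
    static int operatorIndexForKeyGroup(int maxParallelism, int parallelism, int keyGroup) {
        return keyGroup * parallelism / maxParallelism;
    }

    public static void main(String[] args) {
        // With an assumed maxParallelism of 128 and a rescaled DoP of 3,
        // key group 94 belongs to subtask 94 * 3 / 128 = 2, so a record for it
        // surfacing at subtask 0 must be filtered out, e.g. by virtual channels.
        System.out.println(operatorIndexForKeyGroup(128, 3, 94)); // prints 2
    }
}
```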

> ChangelogRescalingITCase.test failed due to IllegalStateException
> -
>
> Key: FLINK-27031
> URL: https://issues.apache.org/jira/browse/FLINK-27031
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network, Runtime / State Backends
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Matthias Pohl
>Assignee: Roman Khachatryan
>Priority: Critical
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=923=logs=cc649950-03e9-5fae-8326-2f1ad744b536=a9a20597-291c-5240-9913-a731d46d6dd1=12961]
>  failed in {{ChangelogRescalingITCase.test}}:
> {code}
> Apr 01 20:26:53 Caused by: java.lang.IllegalArgumentException: Key group 94 
> is not in KeyGroupRange{startKeyGroup=96, endKeyGroup=127}. Unless you're 
> directly using low level state access APIs, this is most likely caused by 
> non-deterministic shuffle key (hashCode and equals implementation).
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.KeyGroupRangeOffsets.newIllegalKeyGroupException(KeyGroupRangeOffsets.java:37)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.getMapForKeyGroup(StateTable.java:305)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:261)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:143)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.HeapListState.add(HeapListState.java:94)
> Apr 01 20:26:53   at 
> org.apache.flink.state.changelog.ChangelogListState.add(ChangelogListState.java:78)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:404)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:531)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:227)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:841)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:767)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> Apr 01 20:26:53   at 
> 

[jira] [Created] (FLINK-27053) IncrementalRemoteKeyedStateHandle.discardState swallows errors

2022-04-04 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-27053:
-

 Summary: IncrementalRemoteKeyedStateHandle.discardState swallows 
errors
 Key: FLINK-27053
 URL: https://issues.apache.org/jira/browse/FLINK-27053
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Reporter: Matthias Pohl


IncrementalRemoteKeyedStateHandle.discardState swallows errors instead of 
propagating them, which makes discard failures go unnoticed.
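One common way to avoid swallowing such errors, shown here as a plain-Java sketch rather than Flink's actual implementation, is to attempt every discard, collect the failures, and rethrow at the end so the caller notices:

```java
import java.util.List;

public class DiscardAllSketch {
    interface StateHandle {
        void discard() throws Exception;
    }

    /**
     * Discards every handle, then propagates the first failure (with later ones
     * attached as suppressed exceptions) instead of logging and moving on.
     */
    static void discardAll(List<StateHandle> handles) throws Exception {
        Exception failure = null;
        for (StateHandle handle : handles) {
            try {
                handle.discard();
            } catch (Exception e) {
                if (failure == null) {
                    failure = e;
                } else {
                    failure.addSuppressed(e);
                }
            }
        }
        if (failure != null) {
            throw failure; // the discard failure is now visible to the caller
        }
    }
}
```

Collecting before rethrowing keeps the best-effort behavior (all handles are still attempted) while no longer hiding the error.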





[jira] [Updated] (FLINK-27031) ChangelogRescalingITCase.test failed due to IllegalStateException

2022-04-04 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan updated FLINK-27031:
--
Labels:   (was: test-stability)

> ChangelogRescalingITCase.test failed due to IllegalStateException
> -
>
> Key: FLINK-27031
> URL: https://issues.apache.org/jira/browse/FLINK-27031
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network, Runtime / State Backends
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Matthias Pohl
>Assignee: Roman Khachatryan
>Priority: Critical
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=923=logs=cc649950-03e9-5fae-8326-2f1ad744b536=a9a20597-291c-5240-9913-a731d46d6dd1=12961]
>  failed in {{ChangelogRescalingITCase.test}}:
> {code}
> Apr 01 20:26:53 Caused by: java.lang.IllegalArgumentException: Key group 94 
> is not in KeyGroupRange{startKeyGroup=96, endKeyGroup=127}. Unless you're 
> directly using low level state access APIs, this is most likely caused by 
> non-deterministic shuffle key (hashCode and equals implementation).
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.KeyGroupRangeOffsets.newIllegalKeyGroupException(KeyGroupRangeOffsets.java:37)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.getMapForKeyGroup(StateTable.java:305)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:261)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:143)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.HeapListState.add(HeapListState.java:94)
> Apr 01 20:26:53   at 
> org.apache.flink.state.changelog.ChangelogListState.add(ChangelogListState.java:78)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:404)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:531)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:227)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:841)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:767)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
> Apr 01 20:26:53   at java.lang.Thread.run(Thread.java:748)
> {code}





[jira] [Updated] (FLINK-27031) ChangelogRescalingITCase.test failed due to IllegalStateException

2022-04-04 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan updated FLINK-27031:
--
Component/s: Runtime / Network

> ChangelogRescalingITCase.test failed due to IllegalStateException
> -
>
> Key: FLINK-27031
> URL: https://issues.apache.org/jira/browse/FLINK-27031
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network, Runtime / State Backends
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Matthias Pohl
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=923=logs=cc649950-03e9-5fae-8326-2f1ad744b536=a9a20597-291c-5240-9913-a731d46d6dd1=12961]
>  failed in {{ChangelogRescalingITCase.test}}:
> {code}
> Apr 01 20:26:53 Caused by: java.lang.IllegalArgumentException: Key group 94 
> is not in KeyGroupRange{startKeyGroup=96, endKeyGroup=127}. Unless you're 
> directly using low level state access APIs, this is most likely caused by 
> non-deterministic shuffle key (hashCode and equals implementation).
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.KeyGroupRangeOffsets.newIllegalKeyGroupException(KeyGroupRangeOffsets.java:37)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.getMapForKeyGroup(StateTable.java:305)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:261)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:143)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.HeapListState.add(HeapListState.java:94)
> Apr 01 20:26:53   at 
> org.apache.flink.state.changelog.ChangelogListState.add(ChangelogListState.java:78)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:404)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:531)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:227)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:841)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:767)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
> Apr 01 20:26:53   at java.lang.Thread.run(Thread.java:748)
> {code}





[jira] [Updated] (FLINK-27031) ChangelogRescalingITCase.test failed due to IllegalStateException

2022-04-04 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan updated FLINK-27031:
--
Affects Version/s: 1.15.0

> ChangelogRescalingITCase.test failed due to IllegalStateException
> -
>
> Key: FLINK-27031
> URL: https://issues.apache.org/jira/browse/FLINK-27031
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Matthias Pohl
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=923=logs=cc649950-03e9-5fae-8326-2f1ad744b536=a9a20597-291c-5240-9913-a731d46d6dd1=12961]
>  failed in {{ChangelogRescalingITCase.test}}:
> {code}
> Apr 01 20:26:53 Caused by: java.lang.IllegalArgumentException: Key group 94 
> is not in KeyGroupRange{startKeyGroup=96, endKeyGroup=127}. Unless you're 
> directly using low level state access APIs, this is most likely caused by 
> non-deterministic shuffle key (hashCode and equals implementation).
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.KeyGroupRangeOffsets.newIllegalKeyGroupException(KeyGroupRangeOffsets.java:37)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.getMapForKeyGroup(StateTable.java:305)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:261)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:143)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.state.heap.HeapListState.add(HeapListState.java:94)
> Apr 01 20:26:53   at 
> org.apache.flink.state.changelog.ChangelogListState.add(ChangelogListState.java:78)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:404)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:531)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:227)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:841)
> Apr 01 20:26:53   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:767)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
> Apr 01 20:26:53   at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
> Apr 01 20:26:53   at java.lang.Thread.run(Thread.java:748)
> {code}





[jira] [Created] (FLINK-27052) FsCompletedCheckpointStorageLocation.disposeStorageLocation doesn't expose errors properly (not processing the return value)

2022-04-04 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-27052:
-

 Summary: 
FsCompletedCheckpointStorageLocation.disposeStorageLocation doesn't expose 
errors properly (not processing the return value)
 Key: FLINK-27052
 URL: https://issues.apache.org/jira/browse/FLINK-27052
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.15.0
Reporter: Matthias Pohl


{{CompletedCheckpointStorageLocation.disposeStorageLocation}} is called in 
{{CompletedCheckpoint.DiscardObject.discard}}. The implementing class 
{{FsCompletedCheckpointStorageLocation}} doesn't implement this method in an 
idempotent way because it ignores the return value of the delete method. 
Hence, error cases might be silently lost.
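A minimal, hypothetical sketch of the pattern being asked for (using java.nio here rather than Flink's own FileSystem abstraction, which the real class goes through): check the boolean returned by the delete call and turn a failed-but-still-present file into an exception, while staying idempotent when the location is already gone.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DisposeExample {

    // Illustrative sketch only: File.delete()-style APIs signal failure via
    // their return value; ignoring it silently swallows the error case.
    static void disposeStorageLocation(Path location) throws IOException {
        boolean deleted = location.toFile().delete();
        // If delete() returned false but the file no longer exists, the call
        // is an idempotent no-op; only a still-existing file is an error.
        if (!deleted && Files.exists(location)) {
            throw new IOException("Could not dispose storage location: " + location);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("checkpoint", ".meta");
        disposeStorageLocation(tmp);
        disposeStorageLocation(tmp); // second call is a no-op, not an error
        System.out.println("disposed: " + !Files.exists(tmp));
    }
}
```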



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #19356: [FLINK-26993][tests] Wait until checkpoint was actually triggered

2022-04-04 Thread GitBox


flinkbot commented on PR #19356:
URL: https://github.com/apache/flink/pull/19356#issuecomment-1087584893

   
   ## CI report:
   
   * a561e7d0e90e337b9d28f2b2abf9f0e8e8863515 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-27051) CompletedCheckpoint.DiscardObject.discard is not idempotent

2022-04-04 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-27051:
-

 Summary: CompletedCheckpoint.DiscardObject.discard is not 
idempotent
 Key: FLINK-27051
 URL: https://issues.apache.org/jira/browse/FLINK-27051
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.15.0
Reporter: Matthias Pohl


`CompletedCheckpoint.DiscardObject.discard` is not implemented in an idempotent 
fashion because we're losing the operatorState even in the case of a failure 
(see 
[CompletedCheckpoint:328|https://github.com/apache/flink/blob/dc419b5639f68bcb0b773763f24179dd3536d713/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CompletedCheckpoint.java#L328]).
This prevents us from retrying the deletion.
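A hedged sketch of what an idempotent discard could look like (class and field names below are illustrative, not Flink's actual ones): keep the state reference until every deletion has succeeded, so a failed attempt can be retried with the state still available.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch: the state reference is cleared only after the whole
// discard succeeded, so a failed attempt leaves the object retryable.
class DiscardableCheckpoint {
    private List<String> operatorStateHandles =
            new ArrayList<>(List.of("handle-1", "handle-2"));

    void discard(Consumer<String> deleter) {
        if (operatorStateHandles == null) {
            return; // already discarded; calling again is a no-op
        }
        for (String handle : operatorStateHandles) {
            deleter.accept(handle); // may throw; state is kept for a retry
        }
        operatorStateHandles = null; // clear only after full success
    }
}

public class IdempotentDiscardDemo {
    public static void main(String[] args) {
        DiscardableCheckpoint cp = new DiscardableCheckpoint();
        try {
            cp.discard(h -> { throw new RuntimeException("transient I/O failure"); });
        } catch (RuntimeException e) {
            // state was not cleared, so the retry below still sees the handles
        }
        List<String> deleted = new ArrayList<>();
        cp.discard(deleted::add);
        System.out.println(deleted); // [handle-1, handle-2]
    }
}
```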



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-26884) Move Elasticsearch connector to external connector repository

2022-04-04 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-26884.
--
Fix Version/s: 1.16.0
 Release Note: The Elasticsearch connector has been copied from the Flink 
repository to its own individual repository at 
https://github.com/apache/flink-connector-elasticsearch
   Resolution: Fixed

Code has been copied from apache/flink to apache/flink-connector-elasticsearch 
including commit history. Last commit hash was 
dddb2110994491750d85526f6a183d1b47085f82

> Move Elasticsearch connector to external connector repository
> -
>
> Key: FLINK-26884
> URL: https://issues.apache.org/jira/browse/FLINK-26884
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
> Fix For: 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-connector-elasticsearch] MartijnVisser merged pull request #5: [Flink-26884] Copy elasticsearch connectors to the external repo

2022-04-04 Thread GitBox


MartijnVisser merged PR #5:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/5


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] XComp commented on pull request #19351: [FLINK-27045][tests] Remove shared executor

2022-04-04 Thread GitBox


XComp commented on PR #19351:
URL: https://github.com/apache/flink/pull/19351#issuecomment-1087575899

   We might want to switch to an `ExecutorServiceResource` that takes care of 
shutting down the thread pool at the end of the test.
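`ExecutorServiceResource` is only proposed by name above; a minimal sketch of such a test resource (modeled on JUnit 4's `ExternalResource` before/after lifecycle, written here without the JUnit dependency) could look like:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Hypothetical sketch: a per-test resource that owns an ExecutorService and
// guarantees it is shut down after the test, so no thread pool leaks between
// test cases.
public class ExecutorServiceResource {
    private final Supplier<ExecutorService> factory;
    private ExecutorService executor;

    public ExecutorServiceResource(Supplier<ExecutorService> factory) {
        this.factory = factory;
    }

    public void before() {
        executor = factory.get();
    }

    public void after() throws InterruptedException {
        executor.shutdownNow();
        if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
            throw new IllegalStateException("Executor did not terminate in time");
        }
    }

    public ExecutorService getExecutor() {
        return executor;
    }

    public static void main(String[] args) throws Exception {
        ExecutorServiceResource resource =
                new ExecutorServiceResource(() -> Executors.newFixedThreadPool(2));
        resource.before();
        resource.getExecutor().submit(() -> System.out.println("task ran")).get();
        resource.after();
        System.out.println("terminated: " + resource.getExecutor().isTerminated());
    }
}
```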


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-26993) CheckpointCoordinatorTest#testMinCheckpointPause

2022-04-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26993:
---
Labels: pull-request-available  (was: )

> CheckpointCoordinatorTest#testMinCheckpointPause
> 
>
> Key: FLINK-26993
> URL: https://issues.apache.org/jira/browse/FLINK-26993
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Checkpointing, Tests
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The test triggers checkpoints, waits for the CheckpointCoordinator to have 
> stored a pending checkpoint, and then sends an acknowledge.
> The acknowledge can fail with an NPE because 
> PendingCheckpoint#checkpointTargetLocation hasn't been set yet; it is not set 
> synchronously with the PendingCheckpoint being added to 
> CheckpointCoordinator#pendingCheckpoints.
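The usual remedy for this kind of race in tests is to poll the condition with a deadline before acting on the pending checkpoint; a generic, hypothetical helper (not the actual fix in the linked PR) might look like:

```java
import java.time.Duration;
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

// Illustrative test helper: busy-wait with a deadline until a condition
// holds, instead of assuming an asynchronous step has already happened.
public final class TestWait {
    public static void waitUntil(BooleanSupplier condition, Duration timeout)
            throws InterruptedException, TimeoutException {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (!condition.getAsBoolean()) {
            if (System.nanoTime() > deadline) {
                throw new TimeoutException("condition not met within " + timeout);
            }
            Thread.sleep(10); // back off briefly between checks
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // e.g. wait until the coordinator has fully initialized the pending
        // checkpoint before sending the acknowledge
        waitUntil(() -> System.currentTimeMillis() - start > 50, Duration.ofSeconds(5));
        System.out.println("condition met");
    }
}
```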



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] zentol opened a new pull request, #19356: [FLINK-26993][tests] Wait until checkpoint was actually triggered

2022-04-04 Thread GitBox


zentol opened a new pull request, #19356:
URL: https://github.com/apache/flink/pull/19356

   This is a rather crude fix :/


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #19355: [FLINK-27047][tests] Remove timeouts

2022-04-04 Thread GitBox


flinkbot commented on PR #19355:
URL: https://github.com/apache/flink/pull/19355#issuecomment-1087567245

   
   ## CI report:
   
   * 638d3a49aa3908cae9d83653a2f7442c1cb3c2ee UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] SteNicholas commented on pull request #141: [FLINK-26894] Support custom validator implementations

2022-04-04 Thread GitBox


SteNicholas commented on PR #141:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/141#issuecomment-1087565240

   > @SteNicholas we have merged the session controller changes, I think it 
would be a good time to rebase this :)
   
   @gyfora I'm rebasing onto the main branch and updating this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27047) DispatcherResourceCleanupTest is unstable

2022-04-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27047:
---
Labels: pull-request-available test-stability  (was: test-stability)

> DispatcherResourceCleanupTest is unstable
> -
>
> Key: FLINK-27047
> URL: https://issues.apache.org/jira/browse/FLINK-27047
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.16.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> {code}
> Apr 03 18:45:29 java.util.concurrent.TimeoutException
> Apr 03 18:45:29   at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
> Apr 03 18:45:29   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> Apr 03 18:45:29   at 
> org.apache.flink.runtime.dispatcher.DispatcherResourceCleanupTest.assertGlobalCleanupTriggered(DispatcherResourceCleanupTest.java:536)
> Apr 03 18:45:29   at 
> org.apache.flink.runtime.dispatcher.DispatcherResourceCleanupTest.testDuplicateJobSubmissionDoesNotDeleteJobMetaData(DispatcherResourceCleanupTest.java:465)
> {code}
> https://dev.azure.com/chesnay/flink/_build/results?buildId=2374=logs=d543d572-9428-5803-a30c-e8e09bf70915=4e4199a3-fbbb-5d5b-a2be-802955ffb013



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] zentol opened a new pull request, #19355: [FLINK-27047][tests] Remove timeouts

2022-04-04 Thread GitBox


zentol opened a new pull request, #19355:
URL: https://github.com/apache/flink/pull/19355

   There's no guarantee that the cleanup is initiated synchronously.
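A hedged illustration of the pattern behind removing per-call timeouts (the names here are illustrative, not the actual test code): block on the future without its own timeout and let the surrounding test harness bound the wait, instead of racing a local deadline against asynchronously initiated cleanup.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NoTimeoutGetExample {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        CompletableFuture<String> cleanupTriggered = new CompletableFuture<>();

        // The cleanup is initiated asynchronously at some unknown point...
        pool.schedule(() -> cleanupTriggered.complete("cleaned"), 100, TimeUnit.MILLISECONDS);

        // ...so the assertion blocks without a per-call timeout; a global
        // test/CI watchdog still bounds the wait if something truly hangs.
        String result = cleanupTriggered.get();
        System.out.println(result);
        pool.shutdown();
    }
}
```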


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27047) DispatcherResourceCleanupTest is unstable

2022-04-04 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-27047:
--
Labels: test-stability  (was: )

> DispatcherResourceCleanupTest is unstable
> -
>
> Key: FLINK-27047
> URL: https://issues.apache.org/jira/browse/FLINK-27047
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.16.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: test-stability
> Fix For: 1.16.0
>
>
> {code}
> Apr 03 18:45:29 java.util.concurrent.TimeoutException
> Apr 03 18:45:29   at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
> Apr 03 18:45:29   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> Apr 03 18:45:29   at 
> org.apache.flink.runtime.dispatcher.DispatcherResourceCleanupTest.assertGlobalCleanupTriggered(DispatcherResourceCleanupTest.java:536)
> Apr 03 18:45:29   at 
> org.apache.flink.runtime.dispatcher.DispatcherResourceCleanupTest.testDuplicateJobSubmissionDoesNotDeleteJobMetaData(DispatcherResourceCleanupTest.java:465)
> {code}
> https://dev.azure.com/chesnay/flink/_build/results?buildId=2374=logs=d543d572-9428-5803-a30c-e8e09bf70915=4e4199a3-fbbb-5d5b-a2be-802955ffb013



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-connector-elasticsearch] MartijnVisser commented on pull request #4: [FLINK-26958][BuildSystem] Add Github Actions build for flink-connector-elasticsearch

2022-04-04 Thread GitBox


MartijnVisser commented on PR #4:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/4#issuecomment-1087551540

   @zentol Thanks a lot for the feedback. I've reworked the pipeline; I've used 
https://github.com/apache/flink-connector-elasticsearch/pull/5 to verify it 
(you can see it in action at 
https://github.com/MartijnVisser/flink-connector-elasticsearch/actions/runs/2090051876)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-elasticsearch] MartijnVisser commented on a diff in pull request #4: [FLINK-26958][BuildSystem] Add Github Actions build for flink-connector-elasticsearch

2022-04-04 Thread GitBox


MartijnVisser commented on code in PR #4:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/4#discussion_r841729398


##
.github/workflows/ci.yml:
##
@@ -0,0 +1,141 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+name: Build flink-connector-elasticsearch
+on: [push, pull_request]
+jobs:
+  compile_ci_jdk8:
+runs-on: ubuntu-latest
+env:
+  MVN_GOALS: clean install
+  MVN_PROFILES: -Pcheck-convergence -Dscala-2.12
+  MVN_TEST_OPTIONS: -Dmaven.javadoc.skip=true -DskipTests -Dfast -U -B
+  MVN_COMMON_OPTIONS: -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2
+  #  Temporary disable convergence phase since that will fail
+  #  MVN_COMMON_OPTIONS: -Dflink.convergence.phase=install 
-Dflink.forkCount=2 -Dflink.forkCountTestPackage=2
+  MVN_CONNECTION_OPTIONS: -Dhttp.keepAlive=false 
-Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
+steps:
+  - name: Check out repository code
+uses: actions/checkout@v2
+
+  - name: Set JDK
+uses: actions/setup-java@v2
+with:
+  java-version: '8'
+  distribution: 'adopt'
+
+  - name: Set Maven 3.2.5
+uses: stCarolas/setup-maven@v4.2
+with:
+  maven-version: 3.2.5
+
+  - name: Cache local Maven repository
+uses: actions/cache@v3
+with:
+  path: ~/.m2/repository
+  key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
+  restore-keys: |
+${{ runner.os }}-maven-
+
+  - name: Compile flink-connector-elasticsearch
+run: mvn ${{ env.MVN_GOALS }} ${{ env.MVN_PROFILES }} ${{ 
env.MVN_TEST_OPTIONS }} ${{ env.MVN_COMMON_OPTIONS }} ${{ 
env.MVN_CONNECTION_OPTIONS }}
+
+  test_ci_jdk8:
+needs: compile_ci_jdk8
+runs-on: ubuntu-latest
+env:
+  MVN_GOALS: clean install
+  MVN_PROFILES: -Dscala-2.12
+  MVN_TEST_OPTIONS: -B --no-snapshot-updates
+  MVN_COMMON_OPTIONS: -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2
+  MVN_CONNECTION_OPTIONS: -Dhttp.keepAlive=false 
-Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
+steps:
+  - name: Check out repository code
+uses: actions/checkout@v2
+
+  - name: Cache local Maven repository
+uses: actions/cache@v3
+with:
+  path: ~/.m2/repository
+  key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
+  restore-keys: |
+${{ runner.os }}-maven-
+
+  - name: Test - connectors
+run: mvn ${{ env.MVN_GOALS }} ${{ env.MVN_PROFILES }} ${{ 
env.MVN_TEST_OPTIONS }} ${{ env.MVN_COMMON_OPTIONS }} ${{ 
env.MVN_CONNECTION_OPTIONS }}
+
+  compile_ci_jdk11:
+runs-on: ubuntu-latest
+env:
+  MVN_GOALS: clean install
+  MVN_PROFILES: -Pcheck-convergence -Dscala-2.12
+  MVN_TEST_OPTIONS: -Dmaven.javadoc.skip=true -DskipTests -Dfast -U -B
+  MVN_COMMON_OPTIONS: -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2
+  #  Temporary disable convergence phase since that will fail
+  #  MVN_COMMON_OPTIONS: -Dflink.convergence.phase=install 
-Dflink.forkCount=2 -Dflink.forkCountTestPackage=2
+  MVN_CONNECTION_OPTIONS: -Dhttp.keepAlive=false 
-Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
+steps:
+  - name: Check out repository code
+uses: actions/checkout@v2
+
+  - name: Set JDK
+uses: actions/setup-java@v2
+with:
+  java-version: '11'
+  distribution: 'adopt'
+
+  - name: Set Maven 3.2.5
+uses: stCarolas/setup-maven@v4.2
+with:
+  maven-version: 3.2.5
+
+  - name: Cache local Maven repository
+uses: actions/cache@v3
+with:
+  path: ~/.m2/repository
+  key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
+  restore-keys: |
+${{ runner.os }}-maven-
+
+  - name: Compile flink-connector-elasticsearch
+run: mvn ${{ env.MVN_GOALS }} ${{ 

[GitHub] [flink-connector-elasticsearch] MartijnVisser commented on a diff in pull request #4: [FLINK-26958][BuildSystem] Add Github Actions build for flink-connector-elasticsearch

2022-04-04 Thread GitBox


MartijnVisser commented on code in PR #4:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/4#discussion_r841728690


##
.github/workflows/ci.yml:
##
@@ -0,0 +1,141 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+name: Build flink-connector-elasticsearch
+on: [push, pull_request]
+jobs:
+  compile_ci_jdk8:
+runs-on: ubuntu-latest
+env:
+  MVN_GOALS: clean install
+  MVN_PROFILES: -Pcheck-convergence -Dscala-2.12
+  MVN_TEST_OPTIONS: -Dmaven.javadoc.skip=true -DskipTests -Dfast -U -B
+  MVN_COMMON_OPTIONS: -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2
+  #  Temporary disable convergence phase since that will fail
+  #  MVN_COMMON_OPTIONS: -Dflink.convergence.phase=install 
-Dflink.forkCount=2 -Dflink.forkCountTestPackage=2
+  MVN_CONNECTION_OPTIONS: -Dhttp.keepAlive=false 
-Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
+steps:
+  - name: Check out repository code
+uses: actions/checkout@v2
+
+  - name: Set JDK
+uses: actions/setup-java@v2
+with:
+  java-version: '8'
+  distribution: 'adopt'
+
+  - name: Set Maven 3.2.5

Review Comment:
   But should we still set it to this specific version to be on the safe side? 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #19354: [FLINK-27042][metrics] Fix instability of StreamTaskTest#testMailboxMetricsScheduling

2022-04-04 Thread GitBox


flinkbot commented on PR #19354:
URL: https://github.com/apache/flink/pull/19354#issuecomment-1087547817

   
   ## CI report:
   
   * d5f62428a57f77e2484ed2b5b1e06adee4744a3f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-elasticsearch] MartijnVisser commented on a diff in pull request #4: [FLINK-26958][BuildSystem] Add Github Actions build for flink-connector-elasticsearch

2022-04-04 Thread GitBox


MartijnVisser commented on code in PR #4:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/4#discussion_r841728337


##
.github/workflows/ci.yml:
##
@@ -0,0 +1,141 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+name: Build flink-connector-elasticsearch
+on: [push, pull_request]
+jobs:
+  compile_ci_jdk8:
+runs-on: ubuntu-latest
+env:
+  MVN_GOALS: clean install
+  MVN_PROFILES: -Pcheck-convergence -Dscala-2.12
+  MVN_TEST_OPTIONS: -Dmaven.javadoc.skip=true -DskipTests -Dfast -U -B
+  MVN_COMMON_OPTIONS: -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2

Review Comment:
   I'm also not 100% sure yet if this is the best approach. We could move them 
directly into the run command, but that makes it one long line, which doesn't 
do much for readability IMHO. I did clean up the options that aren't required.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-elasticsearch] MartijnVisser commented on a diff in pull request #4: [FLINK-26958][BuildSystem] Add Github Actions build for flink-connector-elasticsearch

2022-04-04 Thread GitBox


MartijnVisser commented on code in PR #4:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/4#discussion_r841724628


##
.github/workflows/ci.yml:
##
@@ -0,0 +1,141 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+name: Build flink-connector-elasticsearch
+on: [push, pull_request]
+jobs:
+  compile_ci_jdk8:
+runs-on: ubuntu-latest
+env:
+  MVN_GOALS: clean install
+  MVN_PROFILES: -Pcheck-convergence -Dscala-2.12
+  MVN_TEST_OPTIONS: -Dmaven.javadoc.skip=true -DskipTests -Dfast -U -B
+  MVN_COMMON_OPTIONS: -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2
+  #  Temporary disable convergence phase since that will fail
+  #  MVN_COMMON_OPTIONS: -Dflink.convergence.phase=install 
-Dflink.forkCount=2 -Dflink.forkCountTestPackage=2
+  MVN_CONNECTION_OPTIONS: -Dhttp.keepAlive=false 
-Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
+steps:
+  - name: Check out repository code
+uses: actions/checkout@v2
+
+  - name: Set JDK
+uses: actions/setup-java@v2
+with:
+  java-version: '8'
+  distribution: 'adopt'
+
+  - name: Set Maven 3.2.5
+uses: stCarolas/setup-maven@v4.2

Review Comment:
   Yes, it's listed at https://issues.apache.org/jira/browse/INFRA-21683



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-elasticsearch] MartijnVisser commented on a diff in pull request #4: [FLINK-26958][BuildSystem] Add Github Actions build for flink-connector-elasticsearch

2022-04-04 Thread GitBox


MartijnVisser commented on code in PR #4:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/4#discussion_r841720418


##
.github/workflows/ci.yml:
##
@@ -0,0 +1,141 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+name: Build flink-connector-elasticsearch
+on: [push, pull_request]
+jobs:
+  compile_ci_jdk8:
+runs-on: ubuntu-latest
+env:
+  MVN_GOALS: clean install
+  MVN_PROFILES: -Pcheck-convergence -Dscala-2.12

Review Comment:
   Yes, I need to check with Jing whether we want to do that now or as a 
follow-up, since the code has already been moved and the first version of this 
pipeline is in place.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #19353: [FLINK-27050][runtime] Removes default RpcSystem instance

2022-04-04 Thread GitBox


flinkbot commented on PR #19353:
URL: https://github.com/apache/flink/pull/19353#issuecomment-1087537918

   
   ## CI report:
   
   * a942ecbc4be8adf0ee037228ad9ebc697b7e6080 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27042) StreamTaskTest#testMailboxMetricsScheduling is unstable

2022-04-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27042:
---
Labels: pull-request-available  (was: )

> StreamTaskTest#testMailboxMetricsScheduling is unstable
> ---
>
> Key: FLINK-27042
> URL: https://issues.apache.org/jira/browse/FLINK-27042
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Task, Tests
>Affects Versions: 1.16.0
>Reporter: Chesnay Schepler
>Assignee: Sebastian Mattheis
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> {code:java}
> java.lang.AssertionError: 
> Expected: a value greater than <0L>
>  but: <0L> was equal to <0L>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskITCase.testMailboxMetricsScheduling(StreamTaskTest.java:1823)
> {code}
> Can be reproduced locally by looping the test. Probably caused by the wrong 
> assumption that two time measurements will always differ.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] smattheis opened a new pull request, #19354: [FLINK-27042] Fix instability of StreamTaskTest#testMailboxMetricsScheduling

2022-04-04 Thread GitBox


smattheis opened a new pull request, #19354:
URL: https://github.com/apache/flink/pull/19354

   
   
   
   
   
   
   ## What is the purpose of the change
   
   * Fix instability of StreamTaskTest#testMailboxMetricsScheduling
   
   ## Brief change log
   
   * Remove assertion for latency measurement from 
StreamTaskTest#testMailboxMetricsScheduling as it
   causes instability and is already covered in 
StreamTaskTest#testMailboxMetricsMeasurement.
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   This change is already covered by existing tests, such as:
   * StreamTaskTest#testMailboxMetricsScheduling
   * StreamTaskTest#testMailboxMetricsMeasurement
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency):  __no__
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: __no__
 - The serializers: __no__ 
 - The runtime per-record code paths (performance sensitive): __no__ 
 - Anything that affects deployment or recovery:  __no__ 
 - The S3 file system connector: __no__
   
   ## Documentation
   
 - Does this pull request introduce a new feature? __no__
 - If yes, how is the feature documented? __not applicable__
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] metaswirl commented on a diff in pull request #19263: [FLINK-21585][metrics] Add options for in-/excluding metrics

2022-04-04 Thread GitBox


metaswirl commented on code in PR #19263:
URL: https://github.com/apache/flink/pull/19263#discussion_r841714252


##
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/filter/DefaultMetricFilterTest.java:
##
@@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.metrics.filter;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.MetricOptions;
+import org.apache.flink.metrics.Counter;
+import org.apache.flink.metrics.Meter;
+import org.apache.flink.metrics.MetricType;
+import org.apache.flink.metrics.util.TestCounter;
+import org.apache.flink.metrics.util.TestMeter;
+
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.parallel.Execution;
+import org.junit.jupiter.api.parallel.ExecutionMode;
+
+import java.util.Arrays;
+import java.util.EnumSet;
+import java.util.regex.Pattern;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+@Execution(ExecutionMode.CONCURRENT)
+class DefaultMetricFilterTest {
+
+private static final Counter COUNTER = new TestCounter();
+private static final Meter METER = new TestMeter();
+
+@Test
+void testConvertToPatternWithoutWildcards() {
+final Pattern pattern = DefaultMetricFilter.convertToPattern("numRecordsIn");
+assertThat(pattern.toString()).isEqualTo("(numRecordsIn)");
+assertThat(pattern.matcher("numRecordsIn").matches()).isEqualTo(true);
+assertThat(pattern.matcher("numBytesOut").matches()).isEqualTo(false);
+}
+
+@Test
+void testConvertToPatternSingle() {
+final Pattern pattern = DefaultMetricFilter.convertToPattern("numRecords*");
+assertThat(pattern.toString()).isEqualTo("(numRecords.+)");
+assertThat(pattern.matcher("numRecordsIn").matches()).isEqualTo(true);
+assertThat(pattern.matcher("numBytesOut").matches()).isEqualTo(false);
+}
+
+@Test
+void testConvertToPatternMultiple() {
+final Pattern pattern = DefaultMetricFilter.convertToPattern("numRecords*;numBytes*");
+assertThat(pattern.toString()).isEqualTo("(numRecords.+|numBytes.+)");
+assertThat(pattern.matcher("numRecordsIn").matches()).isEqualTo(true);
+assertThat(pattern.matcher("numBytesOut").matches()).isEqualTo(true);
+assertThat(pattern.matcher("numBytes").matches()).isEqualTo(false);
+assertThat(pattern.matcher("hello").matches()).isEqualTo(false);
+}
+
+@Test
+void testParseMetricTypesSingle() {
+final EnumSet<MetricType> types = DefaultMetricFilter.parseMetricTypes("meter");
+assertThat(types).containsExactly(MetricType.METER);
+}
+
+@Test
+void testParseMetricTypesMultiple() {
+final EnumSet<MetricType> types = DefaultMetricFilter.parseMetricTypes("meter;counter");
+assertThat(types).containsExactlyInAnyOrder(MetricType.METER, MetricType.COUNTER);
+}
+
+@Test
+void testParseMetricTypesCaseIgnored() {
+final EnumSet<MetricType> types = DefaultMetricFilter.parseMetricTypes("meter;CoUnTeR");
+assertThat(types).containsExactlyInAnyOrder(MetricType.METER, MetricType.COUNTER);
+}
+
+@Test
+void testFromConfigurationIncludeByScope() {
+Configuration configuration = new Configuration();
+configuration.set(
+MetricOptions.REPORTER_INCLUDES, Arrays.asList("include1:*:*", "include2.*:*:*"));
+
+final MetricFilter metricFilter = DefaultMetricFilter.fromConfiguration(configuration);
+
+assertThat(metricFilter.filter(COUNTER, "name", "include1")).isEqualTo(true);
+assertThat(metricFilter.filter(COUNTER, "name", "include1.bar")).isEqualTo(false);
+assertThat(metricFilter.filter(COUNTER, "name", "include2")).isEqualTo(false);
+assertThat(metricFilter.filter(COUNTER, "name", "include2.bar")).isEqualTo(true);
+}
+
+@Test
+void testFromConfigurationIncludeByName() {
+Configuration configuration = new Configuration();
+configuration.set(MetricOptions.REPORTER_INCLUDES, Arrays.asList("*:name:*"));
+
+final MetricFilter metricFilter = DefaultMetricFilter.fromConfiguration(configuration);
+
+
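The wildcard-to-regex conversion exercised by the tests above can be sketched standalone as follows. This is a minimal reconstruction for illustration only, derived from the expected values in the tests (`*` maps to `.+`, `;`-separated specs are OR'd into one group); `DefaultMetricFilter`'s actual implementation may differ, e.g. in how it escapes literal regex characters:

```java
import java.util.Arrays;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class MetricPatternSketch {

    // Joins ';'-separated specs into one alternation group and maps '*' to '.+',
    // matching the expectations in the testConvertToPattern* tests above.
    static Pattern convertToPattern(String spec) {
        String regex =
                Arrays.stream(spec.split(";"))
                        .map(s -> s.replace("*", ".+"))
                        .collect(Collectors.joining("|", "(", ")"));
        return Pattern.compile(regex);
    }

    public static void main(String[] args) {
        Pattern p = convertToPattern("numRecords*;numBytes*");
        System.out.println(p.pattern()); // (numRecords.+|numBytes.+)
        System.out.println(p.matcher("numRecordsIn").matches()); // true
        System.out.println(p.matcher("numBytes").matches()); // false: '.+' requires at least one char
    }
}
```

Note that because `*` becomes `.+` rather than `.*`, a bare prefix such as `numBytes` does not match `numBytes*`, which is exactly what `testConvertToPatternMultiple` asserts.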

[GitHub] [flink] flinkbot commented on pull request #19352: [FLINK-27044][Connectors][Hive] Drop support for Hive versions 1.*, 2.1.* and 2.2.* which are no longer supported by the Hive community

2022-04-04 Thread GitBox


flinkbot commented on PR #19352:
URL: https://github.com/apache/flink/pull/19352#issuecomment-1087529382

   
   ## CI report:
   
   * a07227dc89ff583ffe9cae0264e518aea73409a6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-27050) TestingDispatcher.Builder instantiates a RPCSystem without shutting it down

2022-04-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27050:
---
Labels: pull-request-available test-stability  (was: test-stability)

> TestingDispatcher.Builder instantiates a RPCSystem without shutting it down
> ---
>
> Key: FLINK-27050
> URL: https://issues.apache.org/jira/browse/FLINK-27050
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> {{TestingDispatcher.Builder}} provides a default RpcSystem that isn't 
> shutdown causing leaking of threads.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] XComp opened a new pull request, #19353: [FLINK-27050][runtime] Removes default RpcSystem instance

2022-04-04 Thread GitBox


XComp opened a new pull request, #19353:
URL: https://github.com/apache/flink/pull/19353

   
   
   ## What is the purpose of the change
   
   Having a default RpcSystem in TestingDispatcher.Builder
   caused threads being spawned without cleanup. Instead, we
   should rely on the RpcSystem instance provided by the test.
   
   ## Brief change log
   
   * Removed the default instance from TestingDispatcher.Builder and made it a 
required parameter through the `build` method
   
   ## Verifying this change
   
   * Running a test manually in a loop and checking that there are no threads 
left hanging, e.g. in `DispatcherCleanupResourcesTest`:
   ```
   @Test
   public void foo() throws Exception {
   for (int i = 0; i < 100; i++) {
   setup();
   testDuplicateJobSubmissionDoesNotDeleteJobMetaData();
   teardown();
   }
   
   System.out.println("Stop the debugger here");
   }
   ```
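   The manual debugger-based check above can also be made programmatic. The sketch below is hypothetical and not part of this PR — it snapshots the live threads before the loop and reports any that linger afterwards (the `leaked-rpc-thread` is simulated for illustration):
   
   ```java
   import java.util.HashSet;
   import java.util.Set;
   
   public class ThreadLeakSketch {
   
       // Snapshot of the currently live threads.
       static Set<Thread> liveThreads() {
           return new HashSet<>(Thread.getAllStackTraces().keySet());
       }
   
       // Threads alive now that were not alive when 'before' was taken.
       static Set<Thread> leakedSince(Set<Thread> before) {
           Set<Thread> now = liveThreads();
           now.removeAll(before);
           return now;
       }
   
       public static void main(String[] args) {
           Set<Thread> before = liveThreads();
   
           // ... run setup(); test(); teardown(); in a loop here ...
           // To see the check fire, simulate a leaked thread:
           Thread leaker = new Thread(() -> {
               try {
                   Thread.sleep(60_000);
               } catch (InterruptedException ignored) {
                   // expected on cleanup
               }
           }, "leaked-rpc-thread");
           leaker.setDaemon(true);
           leaker.start();
   
           for (Thread t : leakedSince(before)) {
               System.out.println("leaked: " + t.getName());
           }
           leaker.interrupt(); // clean up the simulated leak
       }
   }
   ```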
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   





[GitHub] [flink] metaswirl commented on a diff in pull request #19272: [FLINK-26710] fix TestLoggerResource

2022-04-04 Thread GitBox


metaswirl commented on code in PR #19272:
URL: https://github.com/apache/flink/pull/19272#discussion_r841709790


##
flink-test-utils-parent/flink-test-utils-junit/src/test/java/org/apache/flink/testutils/logging/TestLoggerResourceTest.java:
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.testutils.logging;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.junit.jupiter.api.Test;
+import org.slf4j.event.Level;
+
+import java.util.List;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * {@link TestLoggerResourceTest} ensures that the use of {@link TestLoggerResource} combined with
+ * other loggers (including multiple instances of itself) does not lead to unexpected behavior.
+ */
+public class TestLoggerResourceTest {
+final String parentLoggerName = TestLoggerResourceTest.class.getName() + ".parent";
+final String childLoggerName = parentLoggerName + ".child";
+final Logger parentLogger = LogManager.getLogger(parentLoggerName);
+final Logger childLogger = LogManager.getLogger(childLoggerName);
+
+@Test
+public void loggerWithoutChild() throws Throwable {
+try (TestLoggerResource.SingleTestResource parentResource =
+TestLoggerResource.asSingleTestResource(parentLoggerName, Level.INFO)) {
+parentLogger.info("child-info");
+parentLogger.debug("child-debug");
+List<String> msgs = parentResource.getMessages();
+assertThat(msgs).containsExactly("child-info");
+}
+}
+
+@Test
+public void loggerIsAlreadyDefined() throws Throwable {
+// Sometimes a logger is already defined for the same class (e.g., to
+// debug the test on failure).
+try (TestLoggerResource.SingleTestResource outerResource =
+TestLoggerResource.asSingleTestResource(parentLoggerName, Level.INFO);
+TestLoggerResource.SingleTestResource innerResource =
+TestLoggerResource.asSingleTestResource(parentLoggerName, Level.DEBUG)) {
+parentLogger.info("child-info");
+parentLogger.debug("child-debug");
+List<String> msgsInner = innerResource.getMessages();
+assertThat(msgsInner).containsExactly("child-info", "child-debug");
+List<String> msgsOuter = outerResource.getMessages();
+assertThat(msgsOuter).contains("child-info");

Review Comment:
   ok






[GitHub] [flink] metaswirl commented on a diff in pull request #19272: [FLINK-26710] fix TestLoggerResource

2022-04-04 Thread GitBox


metaswirl commented on code in PR #19272:
URL: https://github.com/apache/flink/pull/19272#discussion_r841709315


##
flink-test-utils-parent/flink-test-utils-junit/src/test/java/org/apache/flink/testutils/logging/TestLoggerResourceTest.java:
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.testutils.logging;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.junit.jupiter.api.Test;
+import org.slf4j.event.Level;
+
+import java.util.List;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * {@link TestLoggerResourceTest} ensures that the use of {@link TestLoggerResource} combined with
+ * other loggers (including multiple instances of itself) does not lead to unexpected behavior.
+ */
+public class TestLoggerResourceTest {
+final String parentLoggerName = TestLoggerResourceTest.class.getName() + ".parent";
+final String childLoggerName = parentLoggerName + ".child";
+final Logger parentLogger = LogManager.getLogger(parentLoggerName);
+final Logger childLogger = LogManager.getLogger(childLoggerName);
+
+@Test
+public void loggerWithoutChild() throws Throwable {
+try (TestLoggerResource.SingleTestResource parentResource =
+TestLoggerResource.asSingleTestResource(parentLoggerName, Level.INFO)) {
+parentLogger.info("child-info");
+parentLogger.debug("child-debug");
+List<String> msgs = parentResource.getMessages();
+assertThat(msgs).containsExactly("child-info");
+}
+}
+
+@Test
+public void loggerIsAlreadyDefined() throws Throwable {
+// Sometimes a logger is already defined for the same class (e.g., to
+// debug the test on failure).
+try (TestLoggerResource.SingleTestResource outerResource =
+TestLoggerResource.asSingleTestResource(parentLoggerName, Level.INFO);
+TestLoggerResource.SingleTestResource innerResource =
+TestLoggerResource.asSingleTestResource(parentLoggerName, Level.DEBUG)) {
+parentLogger.info("child-info");
+parentLogger.debug("child-debug");

Review Comment:
   fixed






[GitHub] [flink] metaswirl commented on a diff in pull request #19272: [FLINK-26710] fix TestLoggerResource

2022-04-04 Thread GitBox


metaswirl commented on code in PR #19272:
URL: https://github.com/apache/flink/pull/19272#discussion_r841708284


##
flink-test-utils-parent/flink-test-utils-junit/src/test/java/org/apache/flink/testutils/logging/TestLoggerResourceTest.java:
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.testutils.logging;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.junit.jupiter.api.Test;
+import org.slf4j.event.Level;
+
+import java.util.List;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * {@link TestLoggerResourceTest} ensures that the use of {@link TestLoggerResource} combined with
+ * other loggers (including multiple instances of itself) does not lead to unexpected behavior.
+ */
+public class TestLoggerResourceTest {
+final String parentLoggerName = TestLoggerResourceTest.class.getName() + ".parent";
+final String childLoggerName = parentLoggerName + ".child";
+final Logger parentLogger = LogManager.getLogger(parentLoggerName);
+final Logger childLogger = LogManager.getLogger(childLoggerName);

Review Comment:
   fixed



##
flink-test-utils-parent/flink-test-utils-junit/src/test/java/org/apache/flink/testutils/logging/TestLoggerResourceTest.java:
##
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.testutils.logging;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.junit.jupiter.api.Test;
+import org.slf4j.event.Level;
+
+import java.util.List;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * {@link TestLoggerResourceTest} ensures that the use of {@link TestLoggerResource} combined with
+ * other loggers (including multiple instances of itself) does not lead to unexpected behavior.
+ */
+public class TestLoggerResourceTest {

Review Comment:
   fixed






[jira] [Updated] (FLINK-27044) Drop support for Hive versions 1.*, 2.1.* and 2.2.*

2022-04-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27044:
---
Labels: pull-request-available  (was: )

> Drop support for Hive versions 1.*, 2.1.* and 2.2.* 
> 
>
> Key: FLINK-27044
> URL: https://issues.apache.org/jira/browse/FLINK-27044
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Hive
>Reporter: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> We should remove the connectors/support for the following Hive versions:
> - 1.*
> - 2.1.*
> - 2.2.*
> These versions are no longer supported by the Hive community. This was 
> discussed in https://lists.apache.org/thread/2w046dwl46tf2wy750gzmt0qrcz17z8t





[jira] [Assigned] (FLINK-27044) Drop support for Hive versions 1.*, 2.1.* and 2.2.*

2022-04-04 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-27044:
--

Assignee: Martijn Visser

> Drop support for Hive versions 1.*, 2.1.* and 2.2.* 
> 
>
> Key: FLINK-27044
> URL: https://issues.apache.org/jira/browse/FLINK-27044
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Hive
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> We should remove the connectors/support for the following Hive versions:
> - 1.*
> - 2.1.*
> - 2.2.*
> These versions are no longer supported by the Hive community. This was 
> discussed in https://lists.apache.org/thread/2w046dwl46tf2wy750gzmt0qrcz17z8t





[GitHub] [flink] MartijnVisser opened a new pull request, #19352: [FLINK-27044][Connectors][Hive] Drop support for Hive versions 1.*, 2.1.* and 2.2.* which are no longer supported by the Hive communit

2022-04-04 Thread GitBox


MartijnVisser opened a new pull request, #19352:
URL: https://github.com/apache/flink/pull/19352

   ## What is the purpose of the change
   
   This PR drops support for 3 versions of Hive:
   
   * All Hive 1.* support
   * All Hive 2.1.* support
   * All Hive 2.2.* support
   
   That means that Hive 2.3.* and Hive 3.1.* will be the remaining supported 
versions. These are also the versions that are still maintained by the Hive 
community. 
   
   ## Brief change log
   
   * Removed `flink-sql-connector-hive-1.2.2` module and code
   * Removed `flink-sql-connector-hive-2.2.0` module and code
   * Removed references to all version < 2.3.0
   * Changed existing Hive profiles in Maven to version that still exist
   * Updated documentation to remove old references
   
   Note:
   * There are still `HiveShim` implementations (loaders for Hive) for older 
versions in `org/apache/flink/table/catalog/hive/client/`. I decided against 
removing them for now because the Hive 2.3.0 shim depends on all previous 
classes. This will be cleaned up anyway when the Hive connectors are 
externalized
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): yes
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   





[GitHub] [flink] slinkydeveloper commented on pull request #19349: [FLINK-27043][table] Removing old csv format references

2022-04-04 Thread GitBox


slinkydeveloper commented on PR #19349:
URL: https://github.com/apache/flink/pull/19349#issuecomment-1087514912

   @flinkbot run azure





[jira] [Commented] (FLINK-27050) TestingDispatcher.Builder instantiates a RPCSystem without shutting it down

2022-04-04 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17516795#comment-17516795
 ] 

Matthias Pohl commented on FLINK-27050:
---

This was introduced by {{9cac58227d2c28e7fb5262cc76b3f5b15515a73c}} on 
{{master}} when refactoring the {{TestingDispatcher}} initialization.

> TestingDispatcher.Builder instantiates a RPCSystem without shutting it down
> ---
>
> Key: FLINK-27050
> URL: https://issues.apache.org/jira/browse/FLINK-27050
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> {{TestingDispatcher.Builder}} provides a default RpcSystem that isn't 
> shutdown causing leaking of threads.





[jira] [Created] (FLINK-27050) TestingDispatcher.Builder instantiates a RPCSystem without shutting it down

2022-04-04 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-27050:
-

 Summary: TestingDispatcher.Builder instantiates a RPCSystem 
without shutting it down
 Key: FLINK-27050
 URL: https://issues.apache.org/jira/browse/FLINK-27050
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.15.0
Reporter: Matthias Pohl


{{TestingDispatcher.Builder}} provides a default RpcSystem that isn't shutdown 
causing leaking of threads.





[jira] [Assigned] (FLINK-27050) TestingDispatcher.Builder instantiates a RPCSystem without shutting it down

2022-04-04 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-27050:
-

Assignee: Matthias Pohl

> TestingDispatcher.Builder instantiates a RPCSystem without shutting it down
> ---
>
> Key: FLINK-27050
> URL: https://issues.apache.org/jira/browse/FLINK-27050
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> {{TestingDispatcher.Builder}} provides a default RpcSystem that isn't 
> shutdown causing leaking of threads.





[GitHub] [flink] DefuLi commented on pull request #19344: [hotfix] Modify spelling error in IOUtils.java

2022-04-04 Thread GitBox


DefuLi commented on PR #19344:
URL: https://github.com/apache/flink/pull/19344#issuecomment-1087505636

   @MartijnVisser  ok, thanks.





[jira] [Comment Edited] (FLINK-26957) FileSystemJobResultStore calls flush on an already closed OutputStream

2022-04-04 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17516784#comment-17516784
 ] 

Matthias Pohl edited comment on FLINK-26957 at 4/4/22 12:34 PM:


* master:
 ** c50b0706237114adec195b84202b969a148ccece
 ** caa296b813b8a719910f4e1337e011c772a12868
 * 1.15:
 ** fafeb7f9534c684b76db14b5cbd26c44251c8647
 ** cb0da8f2817bb51a01d168b70fdac99e7f34d94f


was (Author: mapohl):
master:
* c50b0706237114adec195b84202b969a148ccece
* caa296b813b8a719910f4e1337e011c772a12868
1.15:
* fafeb7f9534c684b76db14b5cbd26c44251c8647
* cb0da8f2817bb51a01d168b70fdac99e7f34d94f

> FileSystemJobResultStore calls flush on an already closed OutputStream 
> ---
>
> Key: FLINK-26957
> URL: https://issues.apache.org/jira/browse/FLINK-26957
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> We experienced problems with some FileSystems when creating the dirty JRS 
> entries (see initial discussion in FLINK-26555). The {{writeValue}} method 
> closes the {{OutputStream}} by default which causes the subsequent {{flush}} 
> call to fail.
> It didn't appear in the unit tests because {{LocalDataOutputStream.flush}} is 
> a no-op operation. We still have to investigate why it didn't appear when 
> doing the tests with the presto and hadoop S3 filesystems.
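The failure mode described above — a serializer closing the stream it was handed, so a later `flush` on that stream fails — can be illustrated with plain JDK streams. The `CloseShieldOutputStream` below is a hypothetical sketch of the common guard pattern, not the fix that was merged (the PR simply removed the `flush` call):

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CloseShieldSketch {

    /**
     * Wrapper whose close() only flushes, so a library that closes "its"
     * stream (as Jackson's writeValue does by default) cannot close the
     * underlying stream out from under the caller.
     */
    static class CloseShieldOutputStream extends FilterOutputStream {
        CloseShieldOutputStream(OutputStream out) {
            super(out);
        }

        @Override
        public void close() throws IOException {
            flush(); // intentionally do not close the delegate
        }
    }

    static String writeShielded() throws IOException {
        ByteArrayOutputStream target = new ByteArrayOutputStream();
        OutputStream shielded = new CloseShieldOutputStream(target);
        shielded.write("payload".getBytes());
        shielded.close(); // harmless: only flushes the delegate
        target.flush();   // still legal, the real stream was never closed
        // Note: ByteArrayOutputStream tolerates flush-after-close anyway, much
        // like LocalDataOutputStream's no-op flush mentioned in the ticket;
        // real FileSystem streams are where the original failure surfaced.
        return target.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(writeShielded()); // payload
    }
}
```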





[jira] [Resolved] (FLINK-26957) FileSystemJobResultStore calls flush on an already closed OutputStream

2022-04-04 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-26957.
---
Resolution: Fixed

master:
* c50b0706237114adec195b84202b969a148ccece
* caa296b813b8a719910f4e1337e011c772a12868
1.15:
* fafeb7f9534c684b76db14b5cbd26c44251c8647
* cb0da8f2817bb51a01d168b70fdac99e7f34d94f

> FileSystemJobResultStore calls flush on an already closed OutputStream 
> ---
>
> Key: FLINK-26957
> URL: https://issues.apache.org/jira/browse/FLINK-26957
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> We experienced problems with some FileSystems when creating the dirty JRS 
> entries (see initial discussion in FLINK-26555). The {{writeValue}} method 
> closes the {{OutputStream}} by default which causes the subsequent {{flush}} 
> call to fail.
> It didn't appear in the unit tests because {{LocalDataOutputStream.flush}} is 
> a no-op operation. We still have to investigate why it didn't appear when 
> doing the tests with the presto and hadoop S3 filesystems.





[GitHub] [flink] XComp merged pull request #19345: [FLINK-26957][BP-1.15][runtime] Removes flush in FileSystemJobResultStore.createDirtyResultInternal

2022-04-04 Thread GitBox


XComp merged PR #19345:
URL: https://github.com/apache/flink/pull/19345





[GitHub] [flink] XComp commented on pull request #19345: [FLINK-26957][BP-1.15][runtime] Removes flush in FileSystemJobResultStore.createDirtyResultInternal

2022-04-04 Thread GitBox


XComp commented on PR #19345:
URL: https://github.com/apache/flink/pull/19345#issuecomment-1087496890

   Merging the PR since the CI passed and the parent PR was approved.





[GitHub] [flink] XComp merged pull request #19304: [FLINK-26957][runtime] Removes flush in FileSystemJobResultStore.createDirtyResultInternal

2022-04-04 Thread GitBox


XComp merged PR #19304:
URL: https://github.com/apache/flink/pull/19304





[GitHub] [flink] XComp commented on pull request #19304: [FLINK-26957][runtime] Removes flush in FileSystemJobResultStore.createDirtyResultInternal

2022-04-04 Thread GitBox


XComp commented on PR #19304:
URL: https://github.com/apache/flink/pull/19304#issuecomment-1087496375

   I'm gonna merge. I only dropped one commit.





[jira] [Commented] (FLINK-27010) Support setting sql client args via flink conf

2022-04-04 Thread Moses (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17516780#comment-17516780
 ] 

Moses commented on FLINK-27010:
---

[~martijnvisser] +1

> Support setting sql client args via flink conf
> --
>
> Key: FLINK-27010
> URL: https://issues.apache.org/jira/browse/FLINK-27010
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.14.4
>Reporter: LuNng Wang
>Priority: Major
>
> '-i' '-j' and '-l' only be set in startup options. 
> I want to add the following options in flink-conf.yaml to set SQL Client 
> options.
> {code:java}
> sql-client.execution.init-file: /foo/foo.sql
> sql-client.execution.jar: foo.jar
> sql-client.execution.library: /foo{code}
> When startup options are set, the Flink default conf in yaml will be 
> overridden.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] XComp commented on a diff in pull request #19304: [FLINK-26957][runtime] Removes flush in FileSystemJobResultStore.createDirtyResultInternal

2022-04-04 Thread GitBox


XComp commented on code in PR #19304:
URL: https://github.com/apache/flink/pull/19304#discussion_r841681923


##
flink-core/src/test/java/org/apache/flink/core/fs/FileSystemBehaviorTestSuite.java:
##
@@ -263,6 +266,21 @@ public void testMkdirsFailsWithExistingParentFile() throws 
Exception {
 }
 }
 
+/**
+ * This test is added to make sure that each FileSystem's OutputStream 
implementation is
+ * repeatedly closable. This was necessary due to the fact that {@link
+ * ObjectMapper#writeValue(java.io.OutputStream, Object)} closes the 
{@link OutputStream}
+ * internally (see FLINK-26957 for more context).
+ */
+@Test
+public void testClosingOutputStreamTwice() throws Exception {

Review Comment:
   Fair enough, I thought it would still be valuable. But I removed the commit...






[GitHub] [flink-kubernetes-operator] MartijnVisser commented on pull request #154: [hotfix] Change email/repository notifications to match with Flink Core settings

2022-04-04 Thread GitBox


MartijnVisser commented on PR #154:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/154#issuecomment-1087493977

   > Dank.
   
   You're welcome :)





[GitHub] [flink] metaswirl commented on a diff in pull request #19272: [FLINK-26710] fix TestLoggerResource

2022-04-04 Thread GitBox


metaswirl commented on code in PR #19272:
URL: https://github.com/apache/flink/pull/19272#discussion_r841677959


##
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/testutils/logging/TestLoggerResource.java:
##
@@ -43,52 +52,125 @@
 
 private final String loggerName;
 private final org.slf4j.event.Level level;
+@Nullable private LoggerConfig backupLoggerConfig = null;
 
 private ConcurrentLinkedQueue loggingEvents;
 
 public TestLoggerResource(Class clazz, org.slf4j.event.Level level) {
-this.loggerName = clazz.getCanonicalName();
+this(clazz.getCanonicalName(), level);
+}
+
+private TestLoggerResource(String loggerName, org.slf4j.event.Level level) 
{
+this.loggerName = loggerName;
 this.level = level;
 }
 
 public List getMessages() {
 return new ArrayList<>(loggingEvents);
 }
 
+private static String generateRandomString() {
+return UUID.randomUUID().toString().replace("-", "");
+}
+
 @Override
 protected void before() throws Throwable {
 loggingEvents = new ConcurrentLinkedQueue<>();
 
+final LoggerConfig previousLoggerConfig =
+LOGGER_CONTEXT.getConfiguration().getLoggerConfig(loggerName);
+
+final Level previousLevel = previousLoggerConfig.getLevel();
+final Level userDefinedLevel = Level.getLevel(level.name());
+
+// Set log level to the least specific. This ensures that the parent still receives all log
+// lines.
+// WARN is more specific than INFO is more specific than DEBUG etc.
+final Level newLevel =
+userDefinedLevel.isMoreSpecificThan(previousLevel)
+? previousLevel
+: userDefinedLevel;
+
+// Filter log lines according to user requirements.
+final Filter levelFilter =
+ThresholdFilter.createFilter(
+userDefinedLevel, Filter.Result.ACCEPT, 
Filter.Result.DENY);
+
 Appender testAppender =
-new AbstractAppender("test-appender", null, null, false) {
+new AbstractAppender(
+"test-appender-" + generateRandomString(), 
levelFilter, null, false) {
 @Override
 public void append(LogEvent event) {
 
loggingEvents.add(event.getMessage().getFormattedMessage());
 }
 };
 testAppender.start();
 
-AppenderRef appenderRef = 
AppenderRef.createAppenderRef(testAppender.getName(), null, null);
-LoggerConfig logger =
+LoggerConfig loggerConfig =
 LoggerConfig.createLogger(
-false,
-Level.getLevel(level.name()),
-"test",
+true,
+newLevel,
+loggerName,
 null,
-new AppenderRef[] {appenderRef},
+new AppenderRef[] {},
 null,
 LOGGER_CONTEXT.getConfiguration(),
 null);
-logger.addAppender(testAppender, null, null);
+loggerConfig.addAppender(testAppender, null, null);
 
-LOGGER_CONTEXT.getConfiguration().addLogger(loggerName, logger);
+if (previousLoggerConfig.getName().equals(loggerName)) {
+// remove the previous logger config for the duration of the test
+backupLoggerConfig = previousLoggerConfig;
+LOGGER_CONTEXT.getConfiguration().removeLogger(loggerName);

Review Comment:
   In this case the old logger will not be replaced. At least this is what I 
see in my experiments. There is no clear documentation for this.






[GitHub] [flink] MartijnVisser merged pull request #19344: [hotfix] Modify spelling error in IOUtils.java

2022-04-04 Thread GitBox


MartijnVisser merged PR #19344:
URL: https://github.com/apache/flink/pull/19344





[GitHub] [flink] matriv commented on a diff in pull request #18609: [FLINK-25897][docs] Update gradle quickstart quide to gradle 7.3.3

2022-04-04 Thread GitBox


matriv commented on code in PR #18609:
URL: https://github.com/apache/flink/pull/18609#discussion_r841673533


##
docs/content/docs/dev/datastream/project-configuration.md:
##
@@ -372,19 +358,18 @@ dependencies {
 // Compile-time dependencies that should NOT be part of the
 // shadow jar and are provided in the lib folder of Flink
 // --
-compile "org.apache.flink:flink-streaming-java:${flinkVersion}"
-compile "org.apache.flink:flink-clients:${flinkVersion}"
+implementation "org.apache.flink:flink-java:${flinkVersion}"

Review Comment:
   Thx, I've fixed that.






[GitHub] [flink] metaswirl commented on a diff in pull request #19272: [FLINK-26710] fix TestLoggerResource

2022-04-04 Thread GitBox


metaswirl commented on code in PR #19272:
URL: https://github.com/apache/flink/pull/19272#discussion_r841673414


##
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/testutils/logging/TestLoggerResource.java:
##
@@ -43,52 +52,125 @@
 
 private final String loggerName;
 private final org.slf4j.event.Level level;
+@Nullable private LoggerConfig backupLoggerConfig = null;
 
 private ConcurrentLinkedQueue loggingEvents;
 
 public TestLoggerResource(Class clazz, org.slf4j.event.Level level) {
-this.loggerName = clazz.getCanonicalName();
+this(clazz.getCanonicalName(), level);
+}
+
+private TestLoggerResource(String loggerName, org.slf4j.event.Level level) 
{
+this.loggerName = loggerName;
 this.level = level;
 }
 
 public List getMessages() {
 return new ArrayList<>(loggingEvents);
 }
 
+private static String generateRandomString() {
+return UUID.randomUUID().toString().replace("-", "");
+}
+
 @Override
 protected void before() throws Throwable {
 loggingEvents = new ConcurrentLinkedQueue<>();
 
+final LoggerConfig previousLoggerConfig =
+LOGGER_CONTEXT.getConfiguration().getLoggerConfig(loggerName);
+
+final Level previousLevel = previousLoggerConfig.getLevel();
+final Level userDefinedLevel = Level.getLevel(level.name());
+
+// Set log level to the least specific. This ensures that the parent still receives all log
+// lines.
+// WARN is more specific than INFO is more specific than DEBUG etc.
+final Level newLevel =
+userDefinedLevel.isMoreSpecificThan(previousLevel)
+? previousLevel
+: userDefinedLevel;
+
+// Filter log lines according to user requirements.
+final Filter levelFilter =
+ThresholdFilter.createFilter(
+userDefinedLevel, Filter.Result.ACCEPT, 
Filter.Result.DENY);
+
 Appender testAppender =
-new AbstractAppender("test-appender", null, null, false) {
+new AbstractAppender(
+"test-appender-" + generateRandomString(), 
levelFilter, null, false) {
 @Override
 public void append(LogEvent event) {
 
loggingEvents.add(event.getMessage().getFormattedMessage());
 }
 };
 testAppender.start();
 
-AppenderRef appenderRef = 
AppenderRef.createAppenderRef(testAppender.getName(), null, null);
-LoggerConfig logger =
+LoggerConfig loggerConfig =
 LoggerConfig.createLogger(
-false,
-Level.getLevel(level.name()),
-"test",
+true,
+newLevel,
+loggerName,
 null,
-new AppenderRef[] {appenderRef},
+new AppenderRef[] {},
 null,
 LOGGER_CONTEXT.getConfiguration(),
 null);
-logger.addAppender(testAppender, null, null);
+loggerConfig.addAppender(testAppender, null, null);
 
-LOGGER_CONTEXT.getConfiguration().addLogger(loggerName, logger);
+if (previousLoggerConfig.getName().equals(loggerName)) {
+// remove the previous logger config for the duration of the test
+backupLoggerConfig = previousLoggerConfig;
+LOGGER_CONTEXT.getConfiguration().removeLogger(loggerName);
+
+// combine appender set
+// Note: The appender may still receive more or less messages 
depending on the log level
+// difference between the two logger
+for (Appender appender : 
previousLoggerConfig.getAppenders().values()) {
+loggerConfig.addAppender(appender, null, null);
+}
+}
+
+LOGGER_CONTEXT.getConfiguration().addLogger(loggerName, loggerConfig);
 LOGGER_CONTEXT.updateLoggers();
 }
 
 @Override
 protected void after() {
 LOGGER_CONTEXT.getConfiguration().removeLogger(loggerName);
+if (backupLoggerConfig != null) {

Review Comment:
   A very good point






[GitHub] [flink] dannycranmer commented on pull request #17360: [FLINK-24379][Formats] Add support for Glue schema registry in Table API

2022-04-04 Thread GitBox


dannycranmer commented on PR #17360:
URL: https://github.com/apache/flink/pull/17360#issuecomment-1087483377

   @MartijnVisser we do not have capacity to pick it up right now. If we do not 
hear back from @jherico then we could potentially pick it up sometime before 
the 1.16 release





[GitHub] [flink] metaswirl commented on a diff in pull request #19272: [FLINK-26710] fix TestLoggerResource

2022-04-04 Thread GitBox


metaswirl commented on code in PR #19272:
URL: https://github.com/apache/flink/pull/19272#discussion_r841670907


##
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/testutils/logging/TestLoggerResource.java:
##
@@ -43,52 +52,125 @@
 
 private final String loggerName;
 private final org.slf4j.event.Level level;
+@Nullable private LoggerConfig backupLoggerConfig = null;
 
 private ConcurrentLinkedQueue loggingEvents;
 
 public TestLoggerResource(Class clazz, org.slf4j.event.Level level) {
-this.loggerName = clazz.getCanonicalName();
+this(clazz.getCanonicalName(), level);
+}
+
+private TestLoggerResource(String loggerName, org.slf4j.event.Level level) 
{
+this.loggerName = loggerName;
 this.level = level;
 }
 
 public List getMessages() {
 return new ArrayList<>(loggingEvents);
 }
 
+private static String generateRandomString() {
+return UUID.randomUUID().toString().replace("-", "");
+}
+
 @Override
 protected void before() throws Throwable {
 loggingEvents = new ConcurrentLinkedQueue<>();
 
+final LoggerConfig previousLoggerConfig =
+LOGGER_CONTEXT.getConfiguration().getLoggerConfig(loggerName);
+
+final Level previousLevel = previousLoggerConfig.getLevel();
+final Level userDefinedLevel = Level.getLevel(level.name());
+
+// Set log level to the least specific. This ensures that the parent still receives all log
+// lines.
+// WARN is more specific than INFO is more specific than DEBUG etc.
+final Level newLevel =
+userDefinedLevel.isMoreSpecificThan(previousLevel)
+? previousLevel
+: userDefinedLevel;
+
+// Filter log lines according to user requirements.
+final Filter levelFilter =
+ThresholdFilter.createFilter(
+userDefinedLevel, Filter.Result.ACCEPT, 
Filter.Result.DENY);
+
 Appender testAppender =
-new AbstractAppender("test-appender", null, null, false) {
+new AbstractAppender(
+"test-appender-" + generateRandomString(), 
levelFilter, null, false) {
 @Override
 public void append(LogEvent event) {
 
loggingEvents.add(event.getMessage().getFormattedMessage());
 }
 };
 testAppender.start();
 
-AppenderRef appenderRef = 
AppenderRef.createAppenderRef(testAppender.getName(), null, null);
-LoggerConfig logger =
+LoggerConfig loggerConfig =
 LoggerConfig.createLogger(
-false,
-Level.getLevel(level.name()),
-"test",
+true,

Review Comment:
   If you disable additivity, then the lines are not forwarded to the parent 
logger. The consequence is that the lines disappear for the user.






[GitHub] [flink-connector-elasticsearch] JingGe closed pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo

2022-04-04 Thread GitBox


JingGe closed pull request #2: [Flink-26884][draft] move elasticsearch 
connectors to the external repo
URL: https://github.com/apache/flink-connector-elasticsearch/pull/2





[GitHub] [flink-connector-elasticsearch] JingGe commented on pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo

2022-04-04 Thread GitBox


JingGe commented on PR #2:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/2#issuecomment-1087480666

   please refer to the new PR: #5 





[GitHub] [flink] dmvk commented on a diff in pull request #19304: [FLINK-26957][runtime] Removes flush in FileSystemJobResultStore.createDirtyResultInternal

2022-04-04 Thread GitBox


dmvk commented on code in PR #19304:
URL: https://github.com/apache/flink/pull/19304#discussion_r841664758


##
flink-core/src/test/java/org/apache/flink/core/fs/FileSystemBehaviorTestSuite.java:
##
@@ -263,6 +266,21 @@ public void testMkdirsFailsWithExistingParentFile() throws 
Exception {
 }
 }
 
+/**
+ * This test is added to make sure that each FileSystem's OutputStream 
implementation is
+ * repeatedly closable. This was necessary due to the fact that {@link
+ * ObjectMapper#writeValue(java.io.OutputStream, Object)} closes the 
{@link OutputStream}
+ * internally (see FLINK-26957 for more context).
+ */
+@Test
+public void testClosingOutputStreamTwice() throws Exception {

Review Comment:
   do we still need this test, if we're not double-closing the resource anymore?






[jira] [Closed] (FLINK-26712) Metadata keys should not conflict with physical columns

2022-04-04 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-26712.

Fix Version/s: 1.15.0
   Resolution: Fixed

Fixed in master: 0097b5a6faed4a2b146dca81f4d92e08c5c96897
Fixed in 1.15: 948f06374d0c502437b56e25f6b3ec237fe7c720

> Metadata keys should not conflict with physical columns
> ---
>
> Key: FLINK-26712
> URL: https://issues.apache.org/jira/browse/FLINK-26712
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> If you have a field called timestamp and in addition want to read the 
> timestamp from the metadata:
> {code}
> CREATE TABLE animal_sightings_with_metadata (
>   `timestamp` TIMESTAMP(3),
>   `name` STRING,
>   `country` STRING,
>   `number` INT,
>   `append_time` TIMESTAMP(3) METADATA FROM 'timestamp',
>   `partition` BIGINT METADATA VIRTUAL,
>   `offset` BIGINT METADATA VIRTUAL,
>   `headers` MAP METADATA,
>   `timestamp-type` STRING METADATA,
>   `leader-epoch` INT METADATA,
>   `topic` STRING METADATA
> )
> {code}
> This gives:
> {code}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Field names must be unique. 
> Found duplicates: [timestamp]
> {code}





[jira] [Commented] (FLINK-25897) Update project configuration gradle doc to 7.x version

2022-04-04 Thread David Morávek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17516768#comment-17516768
 ] 

David Morávek commented on FLINK-25897:
---

asf-site: 16607c8e4ec87382b9dfa411c75b6f229f7e0f4f

> Update project configuration gradle doc to 7.x version
> --
>
> Key: FLINK-25897
> URL: https://issues.apache.org/jira/browse/FLINK-25897
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Francesco Guardiani
>Assignee: Marios Trivyzas
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> Update the gradle build script and its doc page to 7.x





[GitHub] [flink-web] dmvk merged pull request #504: [FLINK-25897] Update gradle quickstart to gradle 7.3.3

2022-04-04 Thread GitBox


dmvk merged PR #504:
URL: https://github.com/apache/flink-web/pull/504





[GitHub] [flink-statefun] FilKarnicki commented on a diff in pull request #309: [FLINK-26570][statefun] Remote module configuration interpolation

2022-04-04 Thread GitBox


FilKarnicki commented on code in PR #309:
URL: https://github.com/apache/flink-statefun/pull/309#discussion_r841660248


##
docs/content/docs/modules/overview.md:
##
@@ -61,3 +61,36 @@ spec:
 
 A module YAML file can contain multiple YAML documents, separated by `---`, 
each representing a component to be included in the application.
 Each component is defined by a kind typename string and a spec object 
containing the component's properties.
+
+# Configuration string interpolation
+You can use `${placeholders}` inside `spec` elements. These will be replaced 
by entries from a configuration map, consisting of: 
+1. System properties
+2. Environment variables
+3. flink-conf.yaml entries with prefix 'statefun.module.global-config.' 
+4. Command line args

Review Comment:
   I agree in principle. That said, because `globalConfiguration` is already a 
combination of args and flink-conf.yaml entries with the 
`statefun.module.global-config.` prefix, there's no easy way to put env 
variables in between them without affecting other parts of the system. 
   
   I made a start in my fork and the number of changes is pretty high for what 
we gain. Please have a look and let me know if I should keep working on the 
change you mentioned in this comment
   
   
https://github.com/FilKarnicki/flink-statefun/commit/02cd6a9553c74d464a0c4aeadf53d6930b52650e
   

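   For illustration, a module spec using the interpolation described in the 
quoted doc section might look like this (the endpoint kind and placeholder 
names here are hypothetical examples, not taken from the PR):

   ```yaml
   # Hypothetical remote module component; FUNCTION_HOST and FUNCTION_PORT
   # would be resolved from the configuration map (e.g. env variables or
   # flink-conf.yaml entries prefixed with statefun.module.global-config.)
   kind: io.statefun.endpoints.v2/http
   spec:
     functions: example/*
     urlPathTemplate: http://${FUNCTION_HOST}:${FUNCTION_PORT}/statefun
   ```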





[jira] [Commented] (FLINK-27041) KafkaSource in batch mode failing on 0 messages in any topic partition

2022-04-04 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17516766#comment-17516766
 ] 

Martijn Visser commented on FLINK-27041:


[~renqs] Can you have a look?

> KafkaSource in batch mode failing on 0 messages in any topic partition
> --
>
> Key: FLINK-27041
> URL: https://issues.apache.org/jira/browse/FLINK-27041
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.4
> Environment: Kafka cluster version: 3.1.0
> Flink version 1.14.4
>Reporter: Terxor
>Priority: Blocker
>
> First let's take the case of consuming from a Kafka topic with a single 
> partition having 0 messages. Execution in batch mode, with bounded offsets 
> set to latest, is expected to finish gracefully. However, it fails with an 
> exception.
> Consider this minimal working example (assume that test_topic exists with 1 
> partition and 0 messages):
> {code:java}
>   final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>   env.setRuntimeMode(RuntimeExecutionMode.BATCH);
>   KafkaSource kafkaSource = KafkaSource
>   .builder()
>   .setBootstrapServers("localhost:9092")
>   .setTopics("test_topic")
>   .setValueOnlyDeserializer(new 
> SimpleStringSchema())
>   .setBounded(OffsetsInitializer.latest())
>   .build();
>   DataStream stream = env.fromSource(
>   kafkaSource,
>   WatermarkStrategy.noWatermarks(),
>   "Kafka Source"
>   );
>   stream.print();
>   env.execute("Flink KafkaSource test job");
> {code}
> This produces exception:
> {code}
> Exception in thread "main" 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
>   ... [omitted for readability]
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
>   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
>   ... [omitted for readability]
> Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
> exception
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:225)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:169)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:130)
>   at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:351)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:496)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received 
> unexpected exception while polling the records
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:150)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:105)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   ... 1 

[jira] [Updated] (FLINK-26368) Add setProperty method to KafkaSinkBuilder

2022-04-04 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-26368:
---
Fix Version/s: 1.16.0

> Add setProperty method to KafkaSinkBuilder
> --
>
> Key: FLINK-26368
> URL: https://issues.apache.org/jira/browse/FLINK-26368
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The KafkaSinkBuilder currently only supports setting properties via the 
> setKafkaProducerConfig(config).
> We should add the setProperty(key, value) method like in the 
> KafkaSourceBuilder to allow overriding single properties, and maybe even the 
> setProperties(..) method just to make the 2 builders work the same way.





[jira] [Closed] (FLINK-26368) Add setProperty method to KafkaSinkBuilder

2022-04-04 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-26368.
--
Resolution: Fixed

merged to master: a5a31de5b3068f7fbc756b44b1674f98f4c04dea

> Add setProperty method to KafkaSinkBuilder
> --
>
> Key: FLINK-26368
> URL: https://issues.apache.org/jira/browse/FLINK-26368
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
>
> The KafkaSinkBuilder currently only supports setting properties via the 
> setKafkaProducerConfig(config).
> We should add the setProperty(key, value) method like in the 
> KafkaSourceBuilder to allow overriding single properties, and maybe even the 
> setProperties(..) method just to make the 2 builders work the same way.





[GitHub] [flink] gyfora merged pull request #18933: [FLINK-26368] [kafka] Add setProperty method to KafkaSinkBuilder

2022-04-04 Thread GitBox


gyfora merged PR #18933:
URL: https://github.com/apache/flink/pull/18933





[jira] [Created] (FLINK-27049) Add "What is Flink Kubernetes Operator?" link to flink website

2022-04-04 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-27049:
--

 Summary: Add "What is Flink Kubernetes Operator?" link to flink 
website
 Key: FLINK-27049
 URL: https://issues.apache.org/jira/browse/FLINK-27049
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator, Project Website
Reporter: Gyula Fora


Similar to statefun and ml projects we should also add a "What is Flink 
Kubernetes Operator?" link to the menu pointing to the doc site. 





[GitHub] [flink] morhidi commented on pull request #18933: [FLINK-26368] [kafka] Add setProperty method to KafkaSinkBuilder

2022-04-04 Thread GitBox


morhidi commented on PR #18933:
URL: https://github.com/apache/flink/pull/18933#issuecomment-1087455138

   +1 LGTM, way more elegant than it was before, thanks for contributing!





[GitHub] [flink] MartijnVisser commented on pull request #19340: [FLINK-26961][BP-1.14][connectors][filesystems][formats] Update Jackson Databi…

2022-04-04 Thread GitBox


MartijnVisser commented on PR #19340:
URL: https://github.com/apache/flink/pull/19340#issuecomment-1087452950

   Ah I assumed it was a broken test  
   
   To resolve that we need to have a look at 
https://github.com/apache/flink/blob/release-1.14/flink-table/flink-table-planner/pom.xml#L348-L358
   
   It's probably required to add an exclusion for 
`META-INF/versions/9/module-info.class`. This is most likely needed 
because this newer version of Jackson compiles for multiple Java versions.
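   A sketch of the kind of shade-plugin filter entry being discussed (the 
artifact pattern and placement inside the planner pom are assumptions, not the 
final change):
   
   ```xml
   <!-- hypothetical filter for flink-table-planner's maven-shade-plugin config -->
   <filter>
     <artifact>com.fasterxml.jackson.*:*</artifact>
     <excludes>
       <!-- multi-release module descriptor added by newer Jackson versions -->
       <exclude>META-INF/versions/9/module-info.class</exclude>
     </excludes>
   </filter>
   ```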
   





[GitHub] [flink-web] asfgit closed pull request #519: Add Kubernetes Operator 0.1.0 release

2022-04-04 Thread GitBox


asfgit closed pull request #519: Add Kubernetes Operator 0.1.0 release
URL: https://github.com/apache/flink-web/pull/519





[GitHub] [flink] MartijnVisser commented on pull request #17360: [FLINK-24379][Formats] Add support for Glue schema registry in Table API

2022-04-04 Thread GitBox


MartijnVisser commented on PR #17360:
URL: https://github.com/apache/flink/pull/17360#issuecomment-1087449410

   @jherico @dannycranmer Is there anything we can do to move this PR forward? 




