[jira] [Updated] (FLINK-33356) The navigation bar on Flink’s official website is messed up.

2023-10-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33356:
---
Labels: pull-request-available  (was: )

> The navigation bar on Flink’s official website is messed up.
> 
>
> Key: FLINK-33356
> URL: https://issues.apache.org/jira/browse/FLINK-33356
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Junrui Li
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-10-25-11-55-52-653.png, 
> image-2023-10-25-12-34-22-790.png
>
>
> The side navigation bar on the Flink official website at the following link: 
> [https://nightlies.apache.org/flink/flink-docs-master/] appears to be messed 
> up, as shown in the attached screenshot.
> !image-2023-10-25-11-55-52-653.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33356][docs] Fix the messed navigation bar on Flink’s official website [flink]

2023-10-30 Thread via GitHub


WencongLiu opened a new pull request, #23627:
URL: https://github.com/apache/flink/pull/23627

   … website
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33210] Introduce lineage graph relevant interfaces [flink]

2023-10-30 Thread via GitHub


flinkbot commented on PR #23626:
URL: https://github.com/apache/flink/pull/23626#issuecomment-1786483458

   
   ## CI report:
   
   * e9314bf45db02132e84bfb8b7a000485c49b866d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-33210) Introduce lineage graph relevant interfaces

2023-10-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33210:
---
Labels: pull-request-available  (was: )

> Introduce lineage graph relevant interfaces 
> 
>
> Key: FLINK-33210
> URL: https://issues.apache.org/jira/browse/FLINK-33210
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Table SQL / API
>Affects Versions: 1.19.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>  Labels: pull-request-available
>
> Introduce LineageGraph, LineageVertex and LineageEdge interfaces





[PR] [FLINK-33210] Introduce lineage graph relevant interfaces [flink]

2023-10-30 Thread via GitHub


FangYongs opened a new pull request, #23626:
URL: https://github.com/apache/flink/pull/23626

   ## What is the purpose of the change
   
   This PR aims to add lineage graph relevant interfaces such as 
`LineageGraph`, `LineageVertex`, `SourceLineageVertex` and `LineageEdge`
   
   ## Brief change log
 - Add lineage graph relevant interfaces
 - Add default lineage graph implementation
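   As a rough illustration for reviewers, the interfaces named above could take a shape like the following; this is a hypothetical sketch with assumed method names, not the actual code from this PR:
   
   ```java
   import java.util.List;
   
   // Hypothetical shapes for the proposed interfaces; method names are assumptions.
   interface LineageVertex {
       String name();
   }
   
   interface SourceLineageVertex extends LineageVertex {
       // e.g. whether the source is bounded or unbounded
       String boundedness();
   }
   
   interface LineageEdge {
       LineageVertex source();
       LineageVertex sink();
   }
   
   interface LineageGraph {
       List<LineageVertex> sources();
       List<LineageVertex> sinks();
       List<LineageEdge> relations();
   }
   ```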
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - Added test `LineageGraphTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no) no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no) yes
 - The serializers: (yes / no / don't know) no
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know) no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) 
no
 - The S3 file system connector: (yes / no / don't know) no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no) yes
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented) not applicable
   





Re: [PR] [FLINK-33089] Drop support for Flink 1.13 and 1.14 and clean up related codepaths [flink-kubernetes-operator]

2023-10-30 Thread via GitHub


caicancai commented on PR #696:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/696#issuecomment-1786424302

   cc @gyfora





Re: [PR] [FLINK-33403] Bump flink version to 1.18.0 for flink-kubernetes-operator [flink-kubernetes-operator]

2023-10-30 Thread via GitHub


1996fanrui commented on PR #697:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/697#issuecomment-1786394585

   Hi @gyfora , the flink-kubernetes-operator has 3 overlapping classes with 
flink-runtime 1.17.1, but 10 overlapping classes with flink-runtime 1.18.0, so 
the CI fails.
   
   I'd like to check with you: what kinds of classes should be redefined in 
`flink-kubernetes-operator`? Should we change the CI check or remove them in 
this PR? I see these classes are similar to the ones in Flink.
   
   For the `JobResourceRequirementsBody`-related classes, I guess we defined 
them because 1.18 wasn't released yet, but `flink-kubernetes-operator` needed 
to use them, right? If yes, should we remove them now?
   
   ```
   Warning:  flink-kubernetes-operator-1.7-SNAPSHOT.jar, 
flink-runtime-1.17.1.jar define 3 overlapping classes: 
   Warning:- 
org.apache.flink.runtime.rest.messages.job.savepoints.SavepointTriggerRequestBody
   Warning:- 
org.apache.flink.runtime.rest.messages.job.savepoints.stop.StopWithSavepointRequestBody
   Warning:- 
org.apache.flink.runtime.rest.messages.job.metrics.IOMetricsInfo
   
   
   
   Warning:  flink-kubernetes-operator-1.7-SNAPSHOT.jar, 
flink-runtime-1.18.0.jar define 10 overlapping classes: 
   Warning:- 
org.apache.flink.runtime.jobgraph.JobVertexResourceRequirements$Parallelism
   Warning:- 
org.apache.flink.runtime.rest.messages.job.savepoints.SavepointTriggerRequestBody
   Warning:- 
org.apache.flink.runtime.rest.messages.job.savepoints.stop.StopWithSavepointRequestBody
   Warning:- org.apache.flink.runtime.jobgraph.JobVertexResourceRequirements
   Warning:- 
org.apache.flink.runtime.rest.messages.job.metrics.IOMetricsInfo
   Warning:- 
org.apache.flink.runtime.rest.messages.job.JobResourceRequirementsHeaders
   Warning:- org.apache.flink.runtime.jobgraph.JobResourceRequirements
   Warning:- 
org.apache.flink.runtime.jobgraph.JobResourceRequirements$Builder
   Warning:- 
org.apache.flink.runtime.rest.messages.job.JobResourcesRequirementsUpdateHeaders
   Warning:- 
org.apache.flink.runtime.rest.messages.job.JobResourceRequirementsBody
   ```
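   For illustration only: if the operator must keep its own copies until it can depend on released 1.18 classes, one conceivable way to quiet the shade warning is a maven-shade-plugin filter that excludes the duplicated classes from one of the jars; a hypothetical sketch (class names taken from the warnings above, not from this PR):
   
   ```xml
   <!-- Hypothetical maven-shade-plugin filter, shown only to illustrate the mechanism. -->
   <filter>
       <artifact>org.apache.flink:flink-runtime</artifact>
       <excludes>
           <exclude>org/apache/flink/runtime/jobgraph/JobResourceRequirements*.class</exclude>
           <exclude>org/apache/flink/runtime/rest/messages/job/JobResourceRequirementsBody.class</exclude>
       </excludes>
   </filter>
   ```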





Re: [PR] [FLINK-33401][connectors/kafka] Fix the wrong download link and version in the connector doc [flink-connector-kafka]

2023-10-30 Thread via GitHub


TanYuxin-tyx commented on PR #64:
URL: 
https://github.com/apache/flink-connector-kafka/pull/64#issuecomment-1786393530

   @MartijnVisser Hi, Martijn, could you please help review this?





[jira] [Updated] (FLINK-33080) The checkpoint storage configured in the job level by config option will not take effect

2023-10-30 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-33080:

Affects Version/s: 1.17.1
   1.18.0
   1.19.0

> The checkpoint storage configured in the job level by config option will not 
> take effect
> 
>
> Key: FLINK-33080
> URL: https://issues.apache.org/jira/browse/FLINK-33080
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Configuration
>Affects Versions: 1.18.0, 1.17.1, 1.19.0
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> When we configure the checkpoint storage at the job level, it can only be 
> done through the following method:
> {code:java}
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.getCheckpointConfig().setCheckpointStorage(xxx); {code}
> or configure filesystem checkpoint storage via the config option 
> CheckpointingOptions.CHECKPOINTS_DIRECTORY, as follows:
> {code:java}
> Configuration configuration = new Configuration();
> configuration.set(CheckpointingOptions.CHECKPOINTS_DIRECTORY, checkpointDir); 
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment(configuration);{code}
> However, configuring another type of checkpoint storage through the job-side 
> configuration, as in the following, will not take effect:
> {code:java}
> Configuration configuration = new Configuration();
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment(configuration);
> configuration.set(CheckpointingOptions.CHECKPOINT_STORAGE, 
> "aaa.bbb.ccc.CustomCheckpointStorage");
> configuration.set(CheckpointingOptions.CHECKPOINTS_DIRECTORY, checkpointDir); 
> {code}
> This behavior is unexpected; checkpoint storage configured this way should take effect.





[jira] [Closed] (FLINK-33080) The checkpoint storage configured in the job level by config option will not take effect

2023-10-30 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-33080.
---
Resolution: Fixed

Fixed via 25697476095a5b9cf38dc3b61c684d0e912b1353

> The checkpoint storage configured in the job level by config option will not 
> take effect
> 
>
> Key: FLINK-33080
> URL: https://issues.apache.org/jira/browse/FLINK-33080
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Configuration
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> When we configure the checkpoint storage at the job level, it can only be 
> done through the following method:
> {code:java}
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.getCheckpointConfig().setCheckpointStorage(xxx); {code}
> or configure filesystem checkpoint storage via the config option 
> CheckpointingOptions.CHECKPOINTS_DIRECTORY, as follows:
> {code:java}
> Configuration configuration = new Configuration();
> configuration.set(CheckpointingOptions.CHECKPOINTS_DIRECTORY, checkpointDir); 
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment(configuration);{code}
> However, configuring another type of checkpoint storage through the job-side 
> configuration, as in the following, will not take effect:
> {code:java}
> Configuration configuration = new Configuration();
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment(configuration);
> configuration.set(CheckpointingOptions.CHECKPOINT_STORAGE, 
> "aaa.bbb.ccc.CustomCheckpointStorage");
> configuration.set(CheckpointingOptions.CHECKPOINTS_DIRECTORY, checkpointDir); 
> {code}
> This behavior is unexpected; checkpoint storage configured this way should take effect.





Re: [PR] [FLINK-33080][runtime] Make the configured checkpoint storage in the job-side configuration will take effect. [flink]

2023-10-30 Thread via GitHub


zhuzhurk closed pull request #23408: [FLINK-33080][runtime] Make the configured 
checkpoint storage in the job-side configuration will take effect.
URL: https://github.com/apache/flink/pull/23408





[jira] [Commented] (FLINK-33401) Kafka connector has broken version

2023-10-30 Thread Yuxin Tan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17781203#comment-17781203
 ] 

Yuxin Tan commented on FLINK-33401:
---

[~pavelhp] Thanks for reporting this issue. I will take a look at this.

Since the new version of the Kafka connector has not been released yet, the fix 
will not be available even after the bug is fixed. You can use the previous 
version 
(https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/datastream/kafka/)
until the new connector version is released.
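
For reference, once a 1.18-compatible connector is published, the dependency should look like the following; the version here is a placeholder assumption, so check Maven Central for the actual release:

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <!-- placeholder: substitute the connector version actually released for 1.18 -->
    <version>3.0.1-1.18</version>
</dependency>
```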

> Kafka connector has broken version
> --
>
> Key: FLINK-33401
> URL: https://issues.apache.org/jira/browse/FLINK-33401
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Pavel Khokhlov
>Priority: Major
>  Labels: pull-request-available
>
> Trying to run Flink 1.18 with Kafka Connector
> but official documentation has a bug  
> [https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/datastream/kafka/]
> {noformat}
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-connector-kafka</artifactId>
>     <version>-1.18</version>
> </dependency>
> {noformat}
> Basically version *-1.18* doesn't exist.





[PR] [FLINK-33401][connectors/kafka] Fix the wrong download link and version in the connector doc [flink-connector-kafka]

2023-10-30 Thread via GitHub


TanYuxin-tyx opened a new pull request, #64:
URL: https://github.com/apache/flink-connector-kafka/pull/64

   Currently, the Kafka connector's official documentation has a bug in the 
download link and the version number.
   This is the fix for https://issues.apache.org/jira/browse/FLINK-33401. 
   
   
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/datastream/kafka/





[jira] [Updated] (FLINK-33401) Kafka connector has broken version

2023-10-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33401:
---
Labels: pull-request-available  (was: )

> Kafka connector has broken version
> --
>
> Key: FLINK-33401
> URL: https://issues.apache.org/jira/browse/FLINK-33401
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Pavel Khokhlov
>Priority: Major
>  Labels: pull-request-available
>
> Trying to run Flink 1.18 with Kafka Connector
> but official documentation has a bug  
> [https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/datastream/kafka/]
> {noformat}
> <dependency>
>     <groupId>org.apache.flink</groupId>
>     <artifactId>flink-connector-kafka</artifactId>
>     <version>-1.18</version>
> </dependency>
> {noformat}
> Basically version *-1.18* doesn't exist.





Re: [PR] [FLINK-33401][connectors/kafka] Fix the wrong download link and version in the connector doc [flink-connector-kafka]

2023-10-30 Thread via GitHub


boring-cyborg[bot] commented on PR #64:
URL: 
https://github.com/apache/flink-connector-kafka/pull/64#issuecomment-1786384204

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





[jira] (FLINK-32108) KubernetesExtension calls assumeThat in @BeforeAll callback which doesn't print the actual failure message

2023-10-30 Thread xiang1 yu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-32108 ]


xiang1 yu deleted comment on FLINK-32108:
---

was (Author: JIRAUSER302279):
Hi, @Matthias Pohl; I tested locally and there was a printed message and an 
exit code of -1, so I'm not sure whether {{assumeThat}} actually misbehaves in 
the {{@BeforeAll}} context.

!image-2023-10-30-14-05-27-154.png!

> KubernetesExtension calls assumeThat in @BeforeAll callback which doesn't 
> print the actual failure message
> --
>
> Key: FLINK-32108
> URL: https://issues.apache.org/jira/browse/FLINK-32108
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Test Infrastructure
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: pull-request-available, starter
> Attachments: image-2023-10-30-14-05-27-154.png
>
>
> {{KubernetesExtension}} implements {{BeforeAllCallback}} which calls the 
> {{assumeThat}} in the {{@BeforeAll}} context. {{assumeThat}} doesn't work 
> properly in the {{@BeforeAll}} context, though: The error message is not 
> printed and the test fails with exit code -1.
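
A minimal sketch of the pattern being described, assuming JUnit 5 with AssertJ assumptions (the class and precondition below are illustrative, not the actual KubernetesExtension code):

```java
import static org.assertj.core.api.Assumptions.assumeThat;

import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class KubernetesPreconditionTest {

    @BeforeAll
    static void checkKubernetesAvailable() {
        // Per the report: when this assumption fails inside @BeforeAll, the
        // describedAs message is not printed and the run aborts with exit code -1.
        assumeThat(System.getenv("KUBECONFIG"))
                .describedAs("KUBECONFIG must be set for Kubernetes tests")
                .isNotNull();
    }

    @Test
    void someKubernetesTest() {
        // skipped when the assumption above fails
    }
}
```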





[jira] [Created] (FLINK-33405) ProcessJoinFunction not found in Pyflink

2023-10-30 Thread Jaehyeon Kim (Jira)
Jaehyeon Kim created FLINK-33405:


 Summary: ProcessJoinFunction not found in Pyflink
 Key: FLINK-33405
 URL: https://issues.apache.org/jira/browse/FLINK-33405
 Project: Flink
  Issue Type: Improvement
  Components: API / Python
Reporter: Jaehyeon Kim


ProcessJoinFunction doesn't exist in Pyflink. Is there a plan to add it?
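
For context, this is what the requested feature looks like in the Java DataStream API, where ProcessJoinFunction is used with an interval join (a minimal sketch using standard Flink APIs; the values and keys are illustrative):

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class IntervalJoinExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Interval joins require event time, so assign monotonous timestamps.
        WatermarkStrategy<Integer> wm = WatermarkStrategy
                .<Integer>forMonotonousTimestamps()
                .withTimestampAssigner((value, ts) -> value);

        DataStream<Integer> left = env.fromElements(1, 2, 3).assignTimestampsAndWatermarks(wm);
        DataStream<Integer> right = env.fromElements(1, 2, 3).assignTimestampsAndWatermarks(wm);

        left.keyBy(v -> v % 2)
                .intervalJoin(right.keyBy(v -> v % 2))
                .between(Time.milliseconds(-2), Time.milliseconds(2))
                .process(new ProcessJoinFunction<Integer, Integer, String>() {
                    @Override
                    public void processElement(
                            Integer l, Integer r, Context ctx, Collector<String> out) {
                        out.collect(l + "," + r);
                    }
                })
                .print();

        env.execute("interval-join-example");
    }
}
```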





Re: [PR] [FLINK-26624][Runtime] 1.17 backport Running HA (hashmap, async) end-to-end test failed on azu [flink]

2023-10-30 Thread via GitHub


victor9309 commented on PR #23624:
URL: https://github.com/apache/flink/pull/23624#issuecomment-1786370390

   > Thanks for picking this one up, @victor9309 . Can you provide backports 
for this issue? 1.18, 1.17 should be good enough.
   
   Thanks @XComp for the review. I have updated the commits on the 1.17 and 1.18 branches.





[jira] [Created] (FLINK-33404) on_timer method is missing in ProcessFunction and CoProcessFunction of Pyflink

2023-10-30 Thread Jaehyeon Kim (Jira)
Jaehyeon Kim created FLINK-33404:


 Summary: on_timer method is missing in ProcessFunction and 
CoProcessFunction of Pyflink
 Key: FLINK-33404
 URL: https://issues.apache.org/jira/browse/FLINK-33404
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Reporter: Jaehyeon Kim


Hello,

I find that the `on_timer` method is missing from ProcessFunction and 
CoProcessFunction in PyFlink, which causes an error when I register a timer, e.g.:

 ```
  ...
  File 
"/home/jaehyeon/personal/flink-demos/venv/lib/python3.8/site-packages/pyflink/fn_execution/datastream/process/input_handler.py",
 line 101, in process_timer
    yield from _emit_results(
  File 
"/home/jaehyeon/personal/flink-demos/venv/lib/python3.8/site-packages/pyflink/fn_execution/datastream/process/input_handler.py",
 line 131, in _emit_results
    for result in results:
  File 
"/home/jaehyeon/personal/flink-demos/venv/lib/python3.8/site-packages/pyflink/fn_execution/datastream/process/input_handler.py",
 line 114, in _on_processing_time
    yield from self._on_processing_time_func(timestamp, key, namespace)
  File 
"/home/jaehyeon/personal/flink-demos/venv/lib/python3.8/site-packages/pyflink/fn_execution/datastream/process/operations.py",
 line 308, in on_processing_time
    return _on_timer(TimeDomain.PROCESSING_TIME, timestamp, key)
  File 
"/home/jaehyeon/personal/flink-demos/venv/lib/python3.8/site-packages/pyflink/fn_execution/datastream/process/operations.py",
 line 317, in _on_timer
    return process_function.on_timer(timestamp, on_timer_ctx)
AttributeError: 'ReadingFilter' object has no attribute 'on_timer'

        at 
org.apache.beam.runners.fnexecution.control.FnApiControlClient$ResponseStreamObserver.onNext(FnApiControlClient.java:180)
        at 
org.apache.beam.runners.fnexecution.control.FnApiControlClient$ResponseStreamObserver.onNext(FnApiControlClient.java:160)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:262)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailableInternal(ServerCallImpl.java:332)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:315)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:834)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
        ... 3 more
```

I'm working with PyFlink 1.17.1, but it is likely applicable to other versions as well.

Can the method be added to the functions?

Cheers,
Jaehyeon
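
As a possible workaround until this is added: KeyedProcessFunction in PyFlink does define on_timer, so keying the stream and using it may work. A minimal sketch (illustrative class on an assumed keyed stream, not the reporter's ReadingFilter):

```python
from pyflink.datastream.functions import KeyedProcessFunction


class TimerExample(KeyedProcessFunction):

    def process_element(self, value, ctx: 'KeyedProcessFunction.Context'):
        # register a processing-time timer 10 seconds from now
        now = ctx.timer_service().current_processing_time()
        ctx.timer_service().register_processing_time_timer(now + 10 * 1000)
        yield value

    def on_timer(self, timestamp, ctx: 'KeyedProcessFunction.OnTimerContext'):
        # called when the timer registered above fires
        yield ('timer-fired', timestamp)
```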





Re: [PR] [FLINK-26624][Runtime] 1.18 backport Running HA (hashmap, async) end-to-end test failed on azu [flink]

2023-10-30 Thread via GitHub


flinkbot commented on PR #23625:
URL: https://github.com/apache/flink/pull/23625#issuecomment-1786362798

   
   ## CI report:
   
   * 398f8f4c3860ecdf009a350a2165d3dfe763af1d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-26624][Runtime] 1.17 backport Running HA (hashmap, async) end-to-end test failed on azu [flink]

2023-10-30 Thread via GitHub


flinkbot commented on PR #23624:
URL: https://github.com/apache/flink/pull/23624#issuecomment-1786362122

   
   ## CI report:
   
   * b7ecb4320187e38ac21da43260c4aa3f44e411bf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-33380][connectors/mongodb] Bump flink version on flink-connectors-mongodb with push_pr.yml and w… [flink-connector-mongodb]

2023-10-30 Thread via GitHub


Jiabao-Sun commented on code in PR #18:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/18#discussion_r1376996490


##
.github/workflows/weekly.yml:
##
@@ -35,15 +35,15 @@ jobs:
 }, {
   flink: 1.18-SNAPSHOT,
   branch: main
+}, {
+  flink: 1.19-SNAPSHOT,
+  branch: main
 }, {
   flink: 1.16.2,
   branch: v1.0
 }, {
   flink: 1.17.1,
   branch: v1.0
-}, {
-  flink: 1.18-SNAPSHOT,
-  branch: v1.0
 }]

Review Comment:
   For 1.19 we need to pin the version in pom.xml.
   I think we can cherry-pick this commit into the v1.0 branch as well.
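   
   A sketch of what the pin might look like (assuming the connector pom exposes a `flink.version` property; the property name is an assumption, not verified against this repo):
   
   ```xml
   <properties>
     <!-- hypothetical pin; adjust to the actual property used by the connector pom -->
     <flink.version>1.19-SNAPSHOT</flink.version>
   </properties>
   ```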






Re: [PR] [FLINK-26624][Runtime] 1.17 backport Running HA (hashmap, async) end-to-end test failed on azu [flink]

2023-10-30 Thread via GitHub


victor9309 commented on PR #23624:
URL: https://github.com/apache/flink/pull/23624#issuecomment-1786360897

   I changed `uniq` to `sort -u` and the results are shown in the figure below:
   
![image](https://github.com/apache/flink/assets/18453843/e9ec4c6a-7050-418f-bc79-67dd9f786e65)
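   
   For context, a minimal illustration of why the change matters (made-up input, not the actual verify_num_occurences_in_logs.sh pipeline):
   
   ```sh
   # uniq only collapses adjacent duplicate lines, so unsorted input keeps repeats:
   printf 'a\nb\na\n' | uniq       # prints: a b a
   # sort -u sorts first and de-duplicates across the whole input:
   printf 'a\nb\na\n' | sort -u    # prints: a b
   ```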
   





[PR] [FLINK-26624][Runtime] 1.18 backport Running HA (hashmap, async) end-to-end test failed on azu [flink]

2023-10-30 Thread via GitHub


victor9309 opened a new pull request, #23625:
URL: https://github.com/apache/flink/pull/23625

   
   
   ## What is the purpose of the change
   
   *(Running HA (hashmap, async) end-to-end test failed on Azure because it was 
unable to find the master logs.)*
   
   
   ## Brief change log
   
 - *1.18 backport PR: modify verify_num_occurences_in_logs.sh, changing 
`uniq` to `sort -u`*
   
   ## Verifying this change
 - *This change is already covered by existing test.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





[PR] [FLINK-26624][Runtime] 1.17 backport Running HA (hashmap, async) end-to-end test failed on azu [flink]

2023-10-30 Thread via GitHub


victor9309 opened a new pull request, #23624:
URL: https://github.com/apache/flink/pull/23624

   
   
   ## What is the purpose of the change
   
   *(Running HA (hashmap, async) end-to-end test failed on Azure because it was 
unable to find the master logs.)*
   
   
   ## Brief change log
   
 - *1.17 backport PR: modify verify_num_occurences_in_logs.sh, changing 
`uniq` to `sort -u`*
   
   ## Verifying this change
 - *This change is already covered by existing test.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





Re: [PR] [FLINK-33380][connectors/mongodb] Bump flink version on flink-connectors-mongodb with push_pr.yml and w… [flink-connector-mongodb]

2023-10-30 Thread via GitHub


Jiabao-Sun commented on code in PR #18:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/18#discussion_r1376993865


##
.github/workflows/push_pr.yml:
##
@@ -25,7 +25,7 @@ jobs:
   compile_and_test:
 strategy:
   matrix:
-flink: [1.17-SNAPSHOT, 1.18-SNAPSHOT]
+flink: [1.17.1, 1.18.0, 1.19-SNAPSHOT]

Review Comment:
   ```suggestion
   flink: [1.16-SNAPSHOT, 1.17-SNAPSHOT, 1.18-SNAPSHOT, 1.19-SNAPSHOT]
   ```






Re: [PR] [FLINK-33380][connectors/mongodb] Bump flink version on flink-connectors-mongodb with push_pr.yml and w… [flink-connector-mongodb]

2023-10-30 Thread via GitHub


gtk96 commented on code in PR #18:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/18#discussion_r1376993423


##
.github/workflows/weekly.yml:
##
@@ -35,15 +35,15 @@ jobs:
 }, {
   flink: 1.18-SNAPSHOT,
   branch: main
+}, {
+  flink: 1.19-SNAPSHOT,
+  branch: main
 }, {
   flink: 1.16.2,
   branch: v1.0
 }, {
   flink: 1.17.1,
   branch: v1.0
-}, {
-  flink: 1.18-SNAPSHOT,
-  branch: v1.0
 }]

Review Comment:
   Does version 1.0 support 1.18 and 1.19?






Re: [PR] [FLINK-33380][connectors/mongodb] Bump flink version on flink-connectors-mongodb with push_pr.yml and w… [flink-connector-mongodb]

2023-10-30 Thread via GitHub


Jiabao-Sun commented on code in PR #18:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/18#discussion_r1376992845


##
.github/workflows/weekly.yml:
##
@@ -35,15 +35,15 @@ jobs:
 }, {
   flink: 1.18-SNAPSHOT,
   branch: main
+}, {
+  flink: 1.19-SNAPSHOT,
+  branch: main
 }, {
   flink: 1.16.2,
   branch: v1.0
 }, {
   flink: 1.17.1,
   branch: v1.0
-}, {
-  flink: 1.18-SNAPSHOT,
-  branch: v1.0
 }]

Review Comment:
   I think we shouldn't remove these lines here.
   
   ```js
   {
 flink: 1.16.2,
 branch: v1.0
   }, {
 flink: 1.17.1,
 branch: v1.0
   }, {
 flink: 1.18.0,
 branch: v1.0
   }, {
 flink: 1.19-SNAPSHOT,
 branch: v1.0
   }
   ```






Re: [PR] [FLINK-33380][connectors/mongodb] Bump flink version on flink-connectors-mongodb with push_pr.yml and w… [flink-connector-mongodb]

2023-10-30 Thread via GitHub


gtk96 commented on code in PR #18:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/18#discussion_r1376988739


##
pom.xml:
##
@@ -119,11 +119,11 @@ under the License.
test

 
-    <dependency>
-        <groupId>org.testcontainers</groupId>
-        <artifactId>junit-jupiter</artifactId>
-        <scope>test</scope>
-    </dependency>
+    <dependency>
+      <groupId>org.testcontainers</groupId>
+      <artifactId>junit-jupiter</artifactId>
+      <scope>test</scope>
+    </dependency>

Review Comment:
   Fixed.






Re: [PR] [FLINK-33080][runtime] Make the configured checkpoint storage in the job-side configuration will take effect. [flink]

2023-10-30 Thread via GitHub


JunRuiLee commented on PR #23408:
URL: https://github.com/apache/flink/pull/23408#issuecomment-1786348383

   The CI test case for this PR has passed successfully. However, due to 
network instability in the environment, the post-job "Cache docker images" in 
the E2E testing stage experienced a timeout error. This issue is unrelated to 
this PR and can be disregarded. For reference, the CI result is [CI 
Link](https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=54184&view=results).
 cc @zhuzhurk 





[jira] [Updated] (FLINK-33403) Bump flink version to 1.18.0 for flink-kubernetes-operator

2023-10-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33403:
---
Labels: pull-request-available  (was: )

> Bump flink version to 1.18.0 for flink-kubernetes-operator
> --
>
> Key: FLINK-33403
> URL: https://issues.apache.org/jira/browse/FLINK-33403
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
>






[PR] [FLINK-33403] Bump flink version to 1.18.0 for flink-kubernetes-operator [flink-kubernetes-operator]

2023-10-30 Thread via GitHub


1996fanrui opened a new pull request, #697:
URL: https://github.com/apache/flink-kubernetes-operator/pull/697

   ## What is the purpose of the change
   
   Bump flink version to 1.18.0 for flink-kubernetes-operator
   
   
   ## Brief change log
   
   - Bump flink version to 1.18.0 for flink-kubernetes-operator
   - Using the latest shaded guava31
   - Support the latest `KubernetesClusterDescriptor`
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): yes
 - The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
 no
 - Core observer or reconciler logic that is regularly executed: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   





[PR] [FLINK-33089] Drop support for Flink 1.13 and 1.14 and clean up related codepaths [flink-kubernetes-operator]

2023-10-30 Thread via GitHub


caicancai opened a new pull request, #696:
URL: https://github.com/apache/flink-kubernetes-operator/pull/696

   …ed codepaths
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request adds a new feature to periodically create 
and maintain savepoints through the `FlinkDeployment` custom resource.)*
   Drop support for Flink 1.13 and 1.14
   
   
   ## Brief change log
   
   *(for example:)*
 - *Periodic savepoint trigger is introduced to the custom resource*
 - *The operator checks on reconciliation whether the required time has 
passed*
 - *The JobManager's dispose savepoint API is used to clean up obsolete 
savepoints*
   Remove 1.13/1.14 from the CRD
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
(yes / no)
 - Core observer or reconciler logic that is regularly executed: (yes / no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[jira] [Created] (FLINK-33403) Bump flink version to 1.18.0 for flink-kubernetes-operator

2023-10-30 Thread Rui Fan (Jira)
Rui Fan created FLINK-33403:
---

 Summary: Bump flink version to 1.18.0 for flink-kubernetes-operator
 Key: FLINK-33403
 URL: https://issues.apache.org/jira/browse/FLINK-33403
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: Rui Fan
Assignee: Rui Fan








Re: [PR] [FLINK-33380][connectors/mongodb] Bump flink version on flink-connectors-mongodb with push_pr.yml and w… [flink-connector-mongodb]

2023-10-30 Thread via GitHub


Jiabao-Sun commented on code in PR #18:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/18#discussion_r1376965757


##
pom.xml:
##
@@ -119,11 +119,11 @@ under the License.
test

 
-    <dependency>
-        <groupId>org.testcontainers</groupId>
-        <artifactId>junit-jupiter</artifactId>
-        <scope>test</scope>
-    </dependency>
+    <dependency>
+      <groupId>org.testcontainers</groupId>
+      <artifactId>junit-jupiter</artifactId>
+      <scope>test</scope>
+    </dependency>

Review Comment:
   I think there is no need to modify the indentation style here.






Re: [PR] [FLINK-33380][connectors/mongodb] Bump flink version on flink-connectors-mongodb with push_pr.yml and w… [flink-connector-mongodb]

2023-10-30 Thread via GitHub


gtk96 commented on PR #18:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/18#issuecomment-1786309854

   @Jiabao-Sun @MartijnVisser CI passes: 
https://github.com/gtk96/flink-connector-mongodb/actions/runs/6700848560 PTAL





Re: [PR] [FLINK-32107] [Tests] Kubernetes test failed because ofunable to establish ssl connection to github on AZP [flink]

2023-10-30 Thread via GitHub


victor9309 commented on PR #23528:
URL: https://github.com/apache/flink/pull/23528#issuecomment-1786309362

   Thanks @XComp for the review. I made the diff smaller.





Re: [PR] [FLINK-32107] [Tests] Kubernetes test failed because ofunable to establish ssl connection to github on AZP [flink]

2023-10-30 Thread via GitHub


victor9309 commented on PR #23528:
URL: https://github.com/apache/flink/pull/23528#issuecomment-1786292325

   Thanks @XComp for the review.





Re: [PR] test: Add calc restore tests [flink]

2023-10-30 Thread via GitHub


flinkbot commented on PR #23623:
URL: https://github.com/apache/flink/pull/23623#issuecomment-1786250114

   
   ## CI report:
   
   * 14b7a02379b00d4d14b1711526989bd7585478af UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[PR] test: Add calc restore tests [flink]

2023-10-30 Thread via GitHub


bvarghese1 opened a new pull request, #23623:
URL: https://github.com/apache/flink/pull/23623

   
   
   ## What is the purpose of the change
   
   Add restore tests for Calc node
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change added tests and can be verified as follows:
   
 - *Added restore tests for the Calc node, which verify the generated compiled 
plan against the saved compiled plan*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





[jira] [Updated] (FLINK-33402) Hybrid Source Concurrency Race Condition Fixes and Related Bugs

2023-10-30 Thread Varun Narayanan Chakravarthy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Narayanan Chakravarthy updated FLINK-33402:
-
Description: 
Hello Team,
I noticed that there is data loss when using Hybrid Source. We are reading from 
a series of ~100 concrete File Sources, all chained together using the Hybrid 
Source.
The issue stems from a race condition in the Flink Hybrid Source code: the 
Hybrid Source switches to the next source before the current source is 
complete, and the Hybrid Source readers behave similarly. I have also shared a 
patch file that fixes the issue.
From the logs:

*Task Manager logs:*
2023-10-10 17:46:23.577 [Source: parquet-source (1/2)#0|#0] INFO  o.apache.flink.connector.base.source.reader.SourceReaderBase  - Adding split(s) to reader: [FileSourceSplit: s3://REDACTED/part-1-13189.snappy [0, 94451)  hosts=[localhost] ID=000229 position=null]
2023-10-10 17:46:23.715 [Source Data Fetcher for Source: parquet-source (1/2)#0|#0] INFO  org.apache.hadoop.fs.s3a.S3AInputStream  - Switching to Random IO seek policy
2023-10-10 17:46:23.715 [Source Data Fetcher for Source: parquet-source (1/2)#0|#0] INFO  org.apache.hadoop.fs.s3a.S3AInputStream  - Switching to Random IO seek policy
2023-10-10 17:46:24.012 [Source: parquet-source (1/2)#0|#0] INFO  o.apache.flink.connector.base.source.reader.SourceReaderBase  - Finished reading split(s) [000154]
2023-10-10 17:46:24.012 [Source Data Fetcher for Source: parquet-source (1/2)#0|#0] INFO  o.a.flink.connector.base.source.reader.fetcher.SplitFetcher  - Finished reading from splits [000154]
2023-10-10 17:46:24.014 [Source: parquet-source (1/2)#0|#0] INFO  o.apache.flink.connector.base.source.reader.SourceReaderBase  - Reader received NoMoreSplits event.
2023-10-10 17:46:24.014 [Source: parquet-source (1/2)#0|#0] DEBUG o.a.flink.connector.base.source.hybrid.HybridSourceReader  - No more splits for subtask=0 sourceIndex=11 currentReader=org.apache.flink.connector.file.src.impl.FileSourceReader@59620ef8
2023-10-10 17:46:24.116 [Source Data Fetcher for Source: parquet-source (1/2)#0|#0] INFO  org.apache.hadoop.fs.s3a.S3AInputStream  - Switching to Random IO seek policy
2023-10-10 17:46:24.116 [Source Data Fetcher for Source: parquet-source (1/2)#0|#0] INFO  org.apache.hadoop.fs.s3a.S3AInputStream  - Switching to Random IO seek policy
2023-10-10 17:46:24.116 [Source: parquet-source (1/2)#0|#0] INFO  o.a.flink.connector.base.source.hybrid.HybridSourceReader  - Switch source event: subtask=0 sourceIndex=12 source=org.apache.flink.connector.kafka.source.KafkaSource@7849da7e
2023-10-10 17:46:24.116 [Source: parquet-source (1/2)#0|#0] INFO  o.apache.flink.connector.base.source.reader.SourceReaderBase  - Closing Source Reader.
2023-10-10 17:46:24.116 [Source: parquet-source (1/2)#0|#0] INFO  o.a.flink.connector.base.source.reader.fetcher.SplitFetcher  - Shutting down split fetcher 0
2023-10-10 17:46:24.198 [Source Data Fetcher for Source: parquet-source (1/2)#0|#0] INFO  o.a.flink.connector.base.source.reader.fetcher.SplitFetcher  - Split fetcher 0 exited.
2023-10-10 17:46:24.198 [Source: parquet-source (1/2)#0|#0] DEBUG o.a.flink.connector.base.source.hybrid.HybridSourceReader  - Reader closed: subtask=0 sourceIndex=11 currentReader=org.apache.flink.connector.file.src.impl.FileSourceReader@59620ef8

We identified that data from `s3://REDACTED/part-1-13189.snappy` is missing; this is the split with ID 000229. We can see from the logs that this split is added after the no-more-splits event and is NOT read.

*Job Manager logs:*
2023-10-10 17:46:23.576 [SourceCoordinator-Source: parquet-source] INFO  
o.a.f.c.file.src.assigners.LocalityAwareSplitAssigner  - Assigning remote split 
to requesting host '10': Optional[FileSourceSplit: 
s3://REDACTED/part-1-13189.snappy [0, 94451)  hosts=[localhost] ID=000229 
position=null]
2023-10-10 17:46:23.576 [SourceCoordinator-Source: parquet-source] INFO  
o.a.flink.connector.file.src.impl.StaticFileSplitEnumerator  - Assigned split 
to subtask 0 : FileSourceSplit: s3://REDACTED/part-1-13189.snappy [0, 94451)  
hosts=[localhost] ID=000229 position=null
2023-10-10 17:46:23.786 [SourceCoordinator-Source: parquet-source] INFO  
o.apache.flink.runtime.source.coordinator.SourceCoordinator  - Source Source: 
parquet-source received split request from parallel task 1 (#0)
2023-10-10 17:46:23.786 [SourceCoordinator-Source: parquet-source] DEBUG 
o.a.f.c.base.source.hybrid.HybridSourceSplitEnumerator  - handleSplitRequest 
subtask=1 sourceIndex=11 pendingSplits={}
2023-10-10 17:46:23.786 [SourceCoordinator-Source: parquet-source] INFO  
o.a.flink.connector.file.src.impl.StaticFileSplitEnumerator  - Subtask 1 (on 
host '10.4.168.40') is requesting a file source split
2023-10-10 17:46:23.786 [SourceCoordinator-Source: parquet-source] INFO  
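
A minimal sketch of the fix direction implied by the description (a hypothetical helper with invented names; the attached hybridSourceEnumeratorAndReaderFixes.patch is the authoritative change): defer the switch to the next source until every subtask has reported that it finished the current one.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical guard, not actual Flink code: an enumerator would consult this
// before emitting a switch-source event, so no reader can be holding unread splits.
class SourceSwitchGuard {
    private final int parallelism;
    private final Set<Integer> finishedSubtasks = new HashSet<>();

    SourceSwitchGuard(int parallelism) {
        this.parallelism = parallelism;
    }

    /** Records that a subtask finished the current source; true once all have. */
    synchronized boolean onSubtaskFinished(int subtask) {
        finishedSubtasks.add(subtask);
        return finishedSubtasks.size() == parallelism;
    }
}
```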

[jira] [Created] (FLINK-33402) Hybrid Source Concurrency Race Condition Fixes and Related Bugs

2023-10-30 Thread Varun Narayanan Chakravarthy (Jira)
Varun Narayanan Chakravarthy created FLINK-33402:


 Summary: Hybrid Source Concurrency Race Condition Fixes and 
Related Bugs
 Key: FLINK-33402
 URL: https://issues.apache.org/jira/browse/FLINK-33402
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HybridSource
Affects Versions: 1.16.1
 Environment: Apache Flink 1.16.1
Mac OSX, Linux etc. 
Reporter: Varun Narayanan Chakravarthy
 Attachments: hybridSourceEnumeratorAndReaderFixes.patch

Hello Team,
I noticed that there is data loss when using Hybrid Source. We are reading from 
a series of ~100 concrete File Sources, all chained together using the Hybrid 
Source.
The issue stems from a race condition in the Flink Hybrid Source code: the 
Hybrid Source switches to the next source before the current source is 
complete, and the Hybrid Source readers behave similarly. I have also shared a 
patch file that fixes the issue.
From the logs:

*Task Manager logs:*
2023-10-10 17:46:23.577 [Source: parquet-source (1/2)#0] INFO  o.apache.flink.connector.base.source.reader.SourceReaderBase  - Adding split(s) to reader: [FileSourceSplit: s3://REDACTED/part-1-13189.snappy [0, 94451)  hosts=[localhost] ID=000229 position=null]
2023-10-10 17:46:23.715 [Source Data Fetcher for Source: parquet-source (1/2)#0] INFO  org.apache.hadoop.fs.s3a.S3AInputStream  - Switching to Random IO seek policy
2023-10-10 17:46:23.715 [Source Data Fetcher for Source: parquet-source (1/2)#0] INFO  org.apache.hadoop.fs.s3a.S3AInputStream  - Switching to Random IO seek policy
2023-10-10 17:46:24.012 [Source: parquet-source (1/2)#0] INFO  o.apache.flink.connector.base.source.reader.SourceReaderBase  - Finished reading split(s) [000154]
2023-10-10 17:46:24.012 [Source Data Fetcher for Source: parquet-source (1/2)#0] INFO  o.a.flink.connector.base.source.reader.fetcher.SplitFetcher  - Finished reading from splits [000154]
2023-10-10 17:46:24.014 [Source: parquet-source (1/2)#0] INFO  o.apache.flink.connector.base.source.reader.SourceReaderBase  - Reader received NoMoreSplits event.
2023-10-10 17:46:24.014 [Source: parquet-source (1/2)#0] DEBUG o.a.flink.connector.base.source.hybrid.HybridSourceReader  - No more splits for subtask=0 sourceIndex=11 currentReader=org.apache.flink.connector.file.src.impl.FileSourceReader@59620ef8
2023-10-10 17:46:24.116 [Source Data Fetcher for Source: parquet-source (1/2)#0] INFO  org.apache.hadoop.fs.s3a.S3AInputStream  - Switching to Random IO seek policy
2023-10-10 17:46:24.116 [Source Data Fetcher for Source: parquet-source (1/2)#0] INFO  org.apache.hadoop.fs.s3a.S3AInputStream  - Switching to Random IO seek policy
2023-10-10 17:46:24.116 [Source: parquet-source (1/2)#0] INFO  o.a.flink.connector.base.source.hybrid.HybridSourceReader  - Switch source event: subtask=0 sourceIndex=12 source=org.apache.flink.connector.kafka.source.KafkaSource@7849da7e
2023-10-10 17:46:24.116 [Source: parquet-source (1/2)#0] INFO  o.apache.flink.connector.base.source.reader.SourceReaderBase  - Closing Source Reader.
2023-10-10 17:46:24.116 [Source: parquet-source (1/2)#0] INFO  o.a.flink.connector.base.source.reader.fetcher.SplitFetcher  - Shutting down split fetcher 0
2023-10-10 17:46:24.198 [Source Data Fetcher for Source: parquet-source (1/2)#0] INFO  o.a.flink.connector.base.source.reader.fetcher.SplitFetcher  - Split fetcher 0 exited.
2023-10-10 17:46:24.198 [Source: parquet-source (1/2)#0] DEBUG o.a.flink.connector.base.source.hybrid.HybridSourceReader  - Reader closed: subtask=0 sourceIndex=11 currentReader=org.apache.flink.connector.file.src.impl.FileSourceReader@59620ef8

We identified that data from `s3://REDACTED/part-1-13189.snappy` is missing; this is the split with ID 000229. We can see from the logs that this split is added after the no-more-splits event and is NOT read.

*Job Manager logs:*
2023-10-10 17:46:23.576 [SourceCoordinator-Source: parquet-source] INFO  
o.a.f.c.file.src.assigners.LocalityAwareSplitAssigner  - Assigning remote split 
to requesting host '10': Optional[FileSourceSplit: 
s3://REDACTED/part-1-13189.snappy [0, 94451)  hosts=[localhost] ID=000229 
position=null]
2023-10-10 17:46:23.576 [SourceCoordinator-Source: parquet-source] INFO  
o.a.flink.connector.file.src.impl.StaticFileSplitEnumerator  - Assigned split 
to subtask 0 : FileSourceSplit: s3://REDACTED/part-1-13189.snappy [0, 94451)  
hosts=[localhost] ID=000229 position=null
2023-10-10 17:46:23.786 [SourceCoordinator-Source: parquet-source] INFO  
o.apache.flink.runtime.source.coordinator.SourceCoordinator  - Source Source: 
parquet-source received split request from parallel task 1 (#0)
2023-10-10 17:46:23.786 [SourceCoordinator-Source: parquet-source] DEBUG 
o.a.f.c.base.source.hybrid.HybridSourceSplitEnumerator  - handleSplitRequest 
subtask=1 
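
For reference, the following is a minimal sketch (my own illustration, not the 
shared patch; all names are made up) of the kind of guard that would avoid this 
race: the reader must not act on a NoMoreSplits event, and the source must not 
be switched, while an assigned split is still unread.
{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of a hybrid reader: the NoMoreSplits signal is only honored once
// every previously assigned split has been fully read. Without this guard, a
// split assigned just before the signal (like ID 000229 above) is dropped.
public class NoMoreSplitsGuard {

    private final Queue<String> pendingSplits = new ArrayDeque<>();
    private boolean noMoreSplits;

    public synchronized void addSplit(String splitId) {
        pendingSplits.add(splitId);
    }

    public synchronized void onNoMoreSplits() {
        noMoreSplits = true;
        maybeSwitchSource();
    }

    public synchronized void onSplitFinished(String splitId) {
        pendingSplits.remove(splitId);
        maybeSwitchSource();
    }

    private void maybeSwitchSource() {
        // Switch only when the signal has arrived AND nothing is pending.
        if (noMoreSplits && pendingSplits.isEmpty()) {
            System.out.println("safe to switch to the next source");
        }
    }
}
{code}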

Re: [PR] [hotfix] refer to sql_connector_download_table shortcode in the docs … [flink-connector-kafka]

2023-10-30 Thread via GitHub


tzulitai closed pull request #63: [hotfix] refer to 
sql_connector_download_table shortcode in the docs …
URL: https://github.com/apache/flink-connector-kafka/pull/63





Re: [PR] [hotfix] refer to sql_connector_download_table shortcode in the docs … [flink-connector-kafka]

2023-10-30 Thread via GitHub


tzulitai commented on PR #63:
URL: 
https://github.com/apache/flink-connector-kafka/pull/63#issuecomment-1786150659

   Merged via 979791c4c71e944c16c51419cf9a84aa1f8fea4c.
   
   Thanks @mas-chen!





[jira] [Commented] (FLINK-27758) [JUnit5 Migration] Module: flink-table-runtime

2023-10-30 Thread Chao Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17781135#comment-17781135
 ] 

Chao Liu commented on FLINK-27758:
--

Hi [~Sergey Nuyanzin], I'd like to work on this ticket. Could I get assigned to 
it?

> [JUnit5 Migration] Module: flink-table-runtime
> --
>
> Key: FLINK-27758
> URL: https://issues.apache.org/jira/browse/FLINK-27758
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime, Tests
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>






[PR] [hotfix] refer to sql_connector_download_table shortcode in the docs … [flink-connector-kafka]

2023-10-30 Thread via GitHub


mas-chen opened a new pull request, #63:
URL: https://github.com/apache/flink-connector-kafka/pull/63

   …to adhere to new connector versioning format
   
   Clicking through the docs, it looks like all connectors moved to using this 
shortcode except for Kafka.





[jira] [Commented] (FLINK-33376) Add AuthInfo config option for Zookeeper configuration

2023-10-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17781060#comment-17781060
 ] 

Matthias Pohl commented on FLINK-33376:
---

hm, good point.

About the different configuration options you mentioned ((/) should be exposed 
to the user, (x) should NOT be exposed to the user, (?) debatable; (!) should 
NOT be exposed to the user but might be useful within Flink):
 * 
[authorization|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#authorization(java.lang.String,byte%5B%5D)]:
 (/) This is the option you want to expose to allow additional AuthInfo 
records as part of the connection, correct?
 * 
[canBeReadOnly|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#canBeReadOnly(boolean)]:
 (x) I'm not sure whether that's what we want. This would allow the client to 
read data from a ZK node that is cut off from the other nodes due to some 
network partition. AFAIU, we would increase the risk of ending up in an 
inconsistent state on Flink's side. WDYT?
 * 
[compressionProvider|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#compressionProvider(org.apache.curator.framework.api.CompressionProvider)]:
 (?) This configuration parameter can be used to specify a compression 
algorithm for the data that's sent. That might be useful. But generally, 
there's not much data written to ZK as far as I know. It's usually only a 
reference. The BLOB itself is stored on the FileSystem.
 * 
[defaultData|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#defaultData(byte%5B%5D)]:
 (x) I cannot think of a use-case where this is needed by the user. IIUC, it's 
used to specify data that's written/returned if no data is specified. Flink 
doesn't use this functionality and I don't see how it would be useful to the 
user to expose this feature.
 * 
[dontUseContainerParents|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#dontUseContainerParents()]/
 
[useContainerParentsIfAvailable|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#useContainerParentsIfAvailable()]:
 (!) This sounds like a property that is useful for Flink's leader election 
cleanup. But I don't see extra value in exposing the property to the user.
 * 
[maxCloseWaitMs|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#maxCloseWaitMs(int)]:
 (/) That might be a property that could be useful to the user. It would enable 
the user to adjust to different network speeds.
 * 
[namespace|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#namespace(java.lang.String)]
 (x) This one is already in use (see {{high-availability.zookeeper.path.root}}).
 * 
[runSafeService|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#runSafeService(java.util.concurrent.Executor)]:
 (x) That seems to be a feature that's Flink-specific and shouldn't be handled 
by the user.
 * 
[schemaSet|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#schemaSet(org.apache.curator.framework.schema.SchemaSet)]:
 (!) That feels like a way to harden the internal contract between Flink and 
ZooKeeper. It might be nice-to-have to harden for Flink. But it shouldn't be 
exposed to the user, IMHO.
 * 
[simulatedSessionExpirationPercent|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#simulatedSessionExpirationPercent(int)]:
 (/) That one seems to be reasonable to be exposed.
 * 
[waitForShutdownTimeoutMs|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#waitForShutdownTimeoutMs(int)]:
 (x) This one can be used if you want Flink to wait for the resource cleanup on 
the ZK side, AFAIU. It feels like this is internal Flink logic and should not 
be exposed.

I'm curious what you think about it.
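
For concreteness, a minimal sketch of how the (/) options above map onto 
Curator's builder API (this is not the proposed Flink change; the connect 
string, retry policy, and credential values are placeholders):
{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Collections;

import org.apache.curator.framework.AuthInfo;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorAuthInfoSketch {
    public static void main(String[] args) {
        // One AuthInfo record; the proposal would let users configure a list of these.
        AuthInfo auth =
                new AuthInfo("digest", "user:password".getBytes(StandardCharsets.UTF_8));

        CuratorFramework client = CuratorFrameworkFactory.builder()
                .connectString("localhost:2181") // assumes a local ZooKeeper
                .retryPolicy(new ExponentialBackoffRetry(1000, 3))
                .authorization(Collections.singletonList(auth)) // (/) expose
                .maxCloseWaitMs(5_000)                          // (/) could expose
                .simulatedSessionExpirationPercent(100)         // (/) could expose
                .build();
        client.start();
        client.close();
    }
}
{code}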

> Add AuthInfo config option for Zookeeper configuration
> --
>
> Key: FLINK-33376
> URL: https://issues.apache.org/jira/browse/FLINK-33376
> Project: Flink
>  Issue Type: Improvement
>Reporter: Oleksandr Nitavskyi
>Assignee: Oleksandr Nitavskyi
>Priority: Major
>
> In certain cases ZooKeeper requires additional Authentication information. 
> For example list of valid [names for 
> ensemble|https://zookeeper.apache.org/doc/r3.8.0/zookeeperAdmin.html#:~:text=for%20secure%20authentication.-,zookeeper.ensembleAuthName,-%3A%20(Java%20system%20property]
>  in order to prevent the accidental connection to a wrong ensemble.

Re: [PR] [FLINK-18286] Implement type inference for functions on composite types [flink]

2023-10-30 Thread via GitHub


flinkbot commented on PR #23622:
URL: https://github.com/apache/flink/pull/23622#issuecomment-1785508212

   
   ## CI report:
   
   * 916e7ff59fc86c2468432a92623530857efd5027 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[PR] [FLINK-18286] Implement type inference for functions on composite types [flink]

2023-10-30 Thread via GitHub


dawidwys opened a new pull request, #23622:
URL: https://github.com/apache/flink/pull/23622

   ## What is the purpose of the change
   
   This ports `ELEMENT/ITEM_AT/CARDINALITY` functions to the new type inference 
stack. The end goal is to get rid of `PlannerTypeInferenceUtil`.
   
   ## Verifying this change
   
   All existing IT tests pass.
   Added tests for the introduced strategies.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[jira] [Comment Edited] (FLINK-33376) Add AuthInfo config option for Zookeeper configuration

2023-10-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17780411#comment-17780411
 ] 

Matthias Pohl edited comment on FLINK-33376 at 10/30/23 3:36 PM:
-

Sounds like a good idea. We could think about utilizing the namespaces. The 
FLIP could propose adding namespace support for {{curator}} -and 
{{zookeeper}}-. That would allow loading any parameter supported by these 
systems. WDYT?


was (Author: mapohl):
Sounds like a good idea. We could think about utilizing the namespaces. The 
FLIP could propose adding namespace support for {{curator}} and {{zookeeper}}. 
That would allow loading any parameter supported by these systems. WDYT?

> Add AuthInfo config option for Zookeeper configuration
> --
>
> Key: FLINK-33376
> URL: https://issues.apache.org/jira/browse/FLINK-33376
> Project: Flink
>  Issue Type: Improvement
>Reporter: Oleksandr Nitavskyi
>Assignee: Oleksandr Nitavskyi
>Priority: Major
>
> In certain cases ZooKeeper requires additional Authentication information. 
> For example list of valid [names for 
> ensemble|https://zookeeper.apache.org/doc/r3.8.0/zookeeperAdmin.html#:~:text=for%20secure%20authentication.-,zookeeper.ensembleAuthName,-%3A%20(Java%20system%20property]
>  in order to prevent the accidental connection to a wrong ensemble.
> Curator allows adding an additional AuthInfo object for such configuration. 
> Thus it would be useful to add one more Map property which would allow 
> passing AuthInfo objects during Curator client creation.
> *Acceptance Criteria:* For Flink users it is possible to configure the auth 
> info list for the Curator framework client.





[jira] [Comment Edited] (FLINK-33376) Add AuthInfo config option for Zookeeper configuration

2023-10-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17780489#comment-17780489
 ] 

Matthias Pohl edited comment on FLINK-33376 at 10/30/23 3:35 PM:
-

It would be really good to be able to support something generic enough to 
translate Flink configuration into Curator config, e.g. like the [hadoop 
config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#flink-hadoop-%3Ckey%3E].
 

But since Curator uses the Builder pattern, I do not see how we can make it 
generic enough. Probably, as a compromise, it would be sane to consider adding 
support for all missing Curator configurations. 
If we go this way, here is the list of configurations which Flink doesn't 
configure at all for now:
* 
[authorization|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#authorization(java.lang.String,byte%5B%5D)]
* 
[canBeReadOnly|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#canBeReadOnly(boolean)]
* 
[compressionProvider|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#compressionProvider(org.apache.curator.framework.api.CompressionProvider)]
* 
[defaultData|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#defaultData(byte%5B%5D)]
* 
[dontUseContainerParents|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#dontUseContainerParents()]/
 
[useContainerParentsIfAvailable|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#useContainerParentsIfAvailable()]
* 
[maxCloseWaitMs|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#maxCloseWaitMs(int)]
* 
[namespace|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#namespace(java.lang.String)]
* 
[runSafeService|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#runSafeService(java.util.concurrent.Executor)]
* 
[schemaSet|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#schemaSet(org.apache.curator.framework.schema.SchemaSet)]
* 
[simulatedSessionExpirationPercent|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#simulatedSessionExpirationPercent(int)]
* 
[waitForShutdownTimeoutMs|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#waitForShutdownTimeoutMs(int)]
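
To illustrate the prefix-translation idea mentioned above (modeled on 
flink.hadoop.<key>; the prefix and option names below are made up for the 
example, not actual Flink options):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class CuratorPrefixTranslationSketch {

    // Hypothetical prefix under which raw Curator options could live.
    static final String PREFIX = "high-availability.zookeeper.client.curator.";

    // Copies every option under PREFIX into a map that a later step could
    // apply to CuratorFrameworkFactory.Builder. The builder pattern still
    // requires one hand-written mapping per setter, which is the limitation
    // discussed above.
    static Map<String, String> extractCuratorOptions(Map<String, String> flinkConf) {
        Map<String, String> curatorConf = new HashMap<>();
        for (Map.Entry<String, String> e : flinkConf.entrySet()) {
            if (e.getKey().startsWith(PREFIX)) {
                curatorConf.put(e.getKey().substring(PREFIX.length()), e.getValue());
            }
        }
        return curatorConf;
    }

    public static void main(String[] args) {
        Map<String, String> flinkConf = new HashMap<>();
        flinkConf.put(PREFIX + "maxCloseWaitMs", "5000");
        flinkConf.put("parallelism.default", "4");
        // Prints {maxCloseWaitMs=5000}
        System.out.println(extractCuratorOptions(flinkConf));
    }
}
{code}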




was (Author: oleksandr nitavskyi):
It would be really good to be able to support something generic enough to 
translate Flink configuration into Curator config, e.g. like the [hadoop 
config|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#flink-hadoop-%3Ckey%3E].
 

But since Curator uses the Builder pattern, I do not see how we can make it 
generic enough. Probably, as a compromise, it would be sane to consider adding 
support for all missing Curator configurations. 
If we go this way, here is the list of configurations which Flink doesn't 
configure at all for now:
* 
[authorization|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#authorization(java.lang.String,byte%5B%5D)]
* 
[canBeReadOnly|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#canBeReadOnly(boolean)]
* 
[compressionProvider|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#compressionProvider(org.apache.curator.framework.api.CompressionProvider)]
* 
[defaultData|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#defaultData(byte%5B%5D)]
* 
[dontUseContainerParents|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#dontUseContainerParents()]/[useContainerParentsIfAvailable|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#useContainerParentsIfAvailable()]
* 
[maxCloseWaitMs|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#maxCloseWaitMs(int)]
* 
[namespace|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#namespace(java.lang.String)]
* 
[runSafeService|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#runSafeService(java.util.concurrent.Executor)]
* 
[schemaSet|https://curator.apache.org/apidocs/org/apache/curator/framework/CuratorFrameworkFactory.Builder.html#schemaSet(org.apache.curator.framework.schema.SchemaSet)]
* 

[jira] [Closed] (FLINK-32182) Use original japicmp plugin

2023-10-30 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-32182.

Resolution: Fixed

master: 530ebd2f4ef59f84d2aedaf13a89b030480e3808

> Use original japicmp plugin
> ---
>
> Key: FLINK-32182
> URL: https://issues.apache.org/jira/browse/FLINK-32182
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> We currently use a japicmp fork for Maven 3.2.5 compatibility, which we can 
> now drop.





Re: [PR] [FLINK-32182][build] Use original japicmp plugin [flink]

2023-10-30 Thread via GitHub


zentol merged PR #23594:
URL: https://github.com/apache/flink/pull/23594





[jira] [Created] (FLINK-33401) Kafka connector has broken version

2023-10-30 Thread Pavel Khokhlov (Jira)
Pavel Khokhlov created FLINK-33401:
--

 Summary: Kafka connector has broken version
 Key: FLINK-33401
 URL: https://issues.apache.org/jira/browse/FLINK-33401
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Reporter: Pavel Khokhlov


Trying to run Flink 1.18 with the Kafka connector, but the official 
documentation has a bug:

[https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/datastream/kafka/]
{noformat}
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>-1.18</version>
</dependency>
{noformat}
Basically, version *-1.18* doesn't exist.





[jira] [Assigned] (FLINK-18286) Implement type inference for functions on composite types

2023-10-30 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-18286:


Assignee: Dawid Wysakowicz  (was: Francesco Guardiani)

> Implement type inference for functions on composite types
> -
>
> Key: FLINK-18286
> URL: https://issues.apache.org/jira/browse/FLINK-18286
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> We should implement type inference for functions such as AT/ITEM/ELEMENT/GET.
> Additionally we should make sure they are consistent in Table API & SQL.





[PR] Bump org.elasticsearch:elasticsearch from 7.10.2 to 7.17.13 in /flink-connector-elasticsearch-base [flink-connector-elasticsearch]

2023-10-30 Thread via GitHub


dependabot[bot] opened a new pull request, #79:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/79

   Bumps 
[org.elasticsearch:elasticsearch](https://github.com/elastic/elasticsearch) 
from 7.10.2 to 7.17.13.
   
   Release notes
   Sourced from org.elasticsearch:elasticsearch's releases 
(https://github.com/elastic/elasticsearch/releases). Downloads for each 
release: https://elastic.co/downloads/elasticsearch
   
   Elasticsearch 7.17.13: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.13.html
   Elasticsearch 7.17.12: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.12.html
   Elasticsearch 7.17.11: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.11.html
   Elasticsearch 7.17.10: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.10.html
   Elasticsearch 7.17.9: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.9.html
   Elasticsearch 7.17.8: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.8.html
   Elasticsearch 7.17.7: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.7.html
   Elasticsearch 7.17.6: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.6.html
   Elasticsearch 7.17.5: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.5.html
   Elasticsearch 7.17.4: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.4.html
   Elasticsearch 7.17.3: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.3.html
   Elasticsearch 7.17.2: 
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.2.html
   Elasticsearch 7.17.1 (release-notes link truncated in the original)
   
   ... (truncated)
   
   
   Commits
   - 2b211db Fix periodic Java matrix jobs to use correct JVM: 
https://github.com/elastic/elasticsearch/commit/2b211dbb8bfdecaf7f5b44d356bdfe54b1050c13
   - 82490eb Free up allocated buffers in Netty4HttpServerTransportTests 
(#99005) (#99037): 
https://github.com/elastic/elasticsearch/commit/82490eb2379cc8f431e267eeb68d77b97eb28048
   - 9d448a0 Introduce FilterRestHandler: 
https://github.com/elastic/elasticsearch/commit/9d448a0f951f8fea4439861d59da408f7b31e41d

Re: [PR] [FLINK-31757][runtime][scheduler] Support balanced tasks scheduling. [flink]

2023-10-30 Thread via GitHub


RocMarshal closed pull request #22750: [FLINK-31757][runtime][scheduler] 
Support balanced tasks scheduling.
URL: https://github.com/apache/flink/pull/22750





Re: [PR] [FLINK-33080][runtime] Make the configured checkpoint storage in the job-side configuration will take effect. [flink]

2023-10-30 Thread via GitHub


JunRuiLee commented on PR #23408:
URL: https://github.com/apache/flink/pull/23408#issuecomment-1785346603

   @flinkbot run azure





[jira] [Closed] (FLINK-33334) A number of json plan tests fail with comparisonfailure

2023-10-30 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-4.
-

> A number of json plan tests fail with comparisonfailure
> ---
>
> Key: FLINK-4
> URL: https://issues.apache.org/jira/browse/FLINK-4
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> for instance
> {noformat}
> [ERROR] 
> org.apache.flink.table.planner.plan.nodes.exec.stream.SortJsonPlanTest.testSort
>   Time elapsed: 0.037 s  <<< FAILURE!
> org.junit.ComparisonFailure: 
> expected:<...alse",
> "[table-sink-class" : "DEFAULT",
> "connector" : "values]"
>   }
>   ...> but was:<...alse",
> "[connector" : "values",
> "table-sink-class" : "DEFAULT]"
>   }
>   ...>
>   at org.junit.Assert.assertEquals(Assert.java:117)
>   at org.junit.Assert.assertEquals(Assert.java:146)
>   at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.doVerifyJsonPlan(TableTestBase.scala:846)
>   at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.verifyJsonPlan(TableTestBase.scala:813)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.SortJsonPlanTest.testSort(SortJsonPlanTest.java:64)
> {noformat}





[jira] [Resolved] (FLINK-33334) A number of json plan tests fail with comparisonfailure

2023-10-30 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-4.
---
Fix Version/s: 1.19.0
   Resolution: Fixed

Fixed in master: cc62044efc054b057d02838a02356fe7c9c9d7a2

> A number of json plan tests fail with comparisonfailure
> ---
>
> Key: FLINK-4
> URL: https://issues.apache.org/jira/browse/FLINK-4
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> for instance
> {noformat}
> [ERROR] 
> org.apache.flink.table.planner.plan.nodes.exec.stream.SortJsonPlanTest.testSort
>   Time elapsed: 0.037 s  <<< FAILURE!
> org.junit.ComparisonFailure: 
> expected:<...alse",
> "[table-sink-class" : "DEFAULT",
> "connector" : "values]"
>   }
>   ...> but was:<...alse",
> "[connector" : "values",
> "table-sink-class" : "DEFAULT]"
>   }
>   ...>
>   at org.junit.Assert.assertEquals(Assert.java:117)
>   at org.junit.Assert.assertEquals(Assert.java:146)
>   at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.doVerifyJsonPlan(TableTestBase.scala:846)
>   at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.verifyJsonPlan(TableTestBase.scala:813)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.SortJsonPlanTest.testSort(SortJsonPlanTest.java:64)
> {noformat}





Re: [PR] [FLINK-33334][table] Make map entries sorted by keys in json plan to have it stable for java21 [flink]

2023-10-30 Thread via GitHub


LadyForest merged PR #23562:
URL: https://github.com/apache/flink/pull/23562
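
For context, a minimal sketch of the stable-ordering idea the PR title 
describes (assuming Jackson's built-in map-entry ordering is the mechanism; 
the map and its values are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;

public class StableJsonPlanSketch {
    public static void main(String[] args) throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper()
                .enable(SerializationFeature.ORDER_MAP_ENTRIES_BY_KEYS);

        Map<String, String> options = new HashMap<>();
        options.put("table-sink-class", "DEFAULT");
        options.put("connector", "values");

        // Always prints {"connector":"values","table-sink-class":"DEFAULT"},
        // no matter how the JVM's HashMap happens to iterate (the Java 21
        // difference that made the tests above fail).
        System.out.println(mapper.writeValueAsString(options));
    }
}
```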





[jira] [Updated] (FLINK-32912) Promote release 1.18

2023-10-30 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-32912:
--
Description: 
Once the release has been finalized (FLINK-32920), the last step of the process 
is to promote the release within the project and beyond. Please wait for 24h 
after finalizing the release in accordance with the [ASF release 
policy|http://www.apache.org/legal/release-policy.html#release-announcements].

*Final checklist to declare this issue resolved:*
 # Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # Release announced on the user@ mailing list.
 # Blog post published, if applicable.
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # Release announced on social media.
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically, for both minor and major releases)
 # Updated the japicmp configuration
 ** corresponding SNAPSHOT branch japicmp reference version set to the just 
released version, and API compatibility checks for {{@PublicEvolving}} were 
enabled
 ** (minor version release only) master branch japicmp reference version set to 
the just released version
 ** (minor version release only) master branch japicmp exclusions have been 
cleared
 # Update the list of previous versions in {{docs/config.toml}} on the master 
branch.
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _now deprecated_ Flink version (i.e. 1.16 if 1.18.0 is released)
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]
 # Open discussion thread for End of Life for Unsupported version (i.e. 1.16)

  was:
Once the release has been finalized (FLINK-32920), the last step of the process 
is to promote the release within the project and beyond. Please wait for 24h 
after finalizing the release in accordance with the [ASF release 
policy|http://www.apache.org/legal/release-policy.html#release-announcements].

*Final checklist to declare this issue resolved:*
 # Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # Release announced on the user@ mailing list.
 # Blog post published, if applicable.
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # Release announced on social media.
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically, for both minor and major releases)
 # Update quickstart scripts in {{flink-web}}, under the {{_include/q/}} 
directory
 # Updated the japicmp configuration
 ** corresponding SNAPSHOT branch japicmp reference version set to the just 
released version, and API compatibility checks for {{@PublicEvolving}} were 
enabled
 ** (minor version release only) master branch japicmp reference version set to 
the just released version
 ** (minor version release only) master branch japicmp exclusions have been 
cleared
 # Update the list of previous versions in {{docs/config.toml}} on the master 
branch.
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _now deprecated_ Flink version (i.e. 1.16 if 1.18.0 is released)
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]
 # Open discussion thread for End of Life for Unsupported version (i.e. 1.16)


> Promote release 1.18
> 
>
> Key: FLINK-32912
> URL: https://issues.apache.org/jira/browse/FLINK-32912
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
>
> Once the release has been finalized (FLINK-32920), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|http://www.apache.org/legal/release-policy.html#release-announcements].
> *Final checklist to declare this issue resolved:*
>  # Website pull request to [list the 
> release|http://flink.apache.org/downloads.html] merged
>  # Release announced on the user@ mailing list.
>  # Blog post published, if applicable.
>  # Release recorded in 
> [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
>  # Release announced on social media.
>  # Completion declared on the dev@ mailing list.
>  # Update Homebrew: 
> [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
> 

[jira] [Resolved] (FLINK-30768) flink-web version cleanup

2023-10-30 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-30768.
---
Resolution: Fixed

> flink-web version cleanup
> -
>
> Key: FLINK-30768
> URL: https://issues.apache.org/jira/browse/FLINK-30768
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Affects Versions: 1.16.0, 1.17.0, 1.15.3
>Reporter: Matthias Pohl
>Assignee: xiang1 yu
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, starter
>
> The Flink website sources have multiple dependency versions (e.g. log4j and 
> slf4j in {{q/gradle-quickstart.sh}}) that are not referenced in Flink's 
> parent pom file. That means that contributors might forget to update the 
> right locations in {{flink-web}} when updating the dependencies. 
> Additionally, {{q/gradle-quickstart.sh}} specifies a {{$defaultFlinkVersion}} 
> variable but requires updating a hardcoded Flink version further down in the 
> script as well (see 
> [q/gradle-quickstart.sh:119|https://github.com/apache/flink-web/blob/dc24124816d86617991050a2e36fe70ee40ff2dc/q/gradle-quickstart.sh#L119]).
>  
> This Jira issue is about reducing the number of locations where we have to 
> update versions (through variables) and adding notes that these variables 
> have to be updated to the corresponding versions used in the Flink source 
> code as well.





Re: [PR] [FLINK-30768] [Project Website] flink-web version cleanup [flink-web]

2023-10-30 Thread via GitHub


XComp merged PR #683:
URL: https://github.com/apache/flink-web/pull/683





Re: [PR] [hotfix][rest] Improve error message [flink]

2023-10-30 Thread via GitHub


flinkbot commented on PR #23621:
URL: https://github.com/apache/flink/pull/23621#issuecomment-1785229315

   
   ## CI report:
   
   * 779f3288aecfe99d43d850e94fe1d669312100dd UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-33400) Pulsar connector doesn't compile for Flink 1.18 due to Archunit update

2023-10-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33400:
---
Labels: pull-request-available  (was: )

> Pulsar connector doesn't compile for Flink 1.18 due to Archunit update
> --
>
> Key: FLINK-33400
> URL: https://issues.apache.org/jira/browse/FLINK-33400
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: pulsar-4.0.1
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Blocker
>  Labels: pull-request-available
>






[PR] [FLINK-33400] Pulsar connector doesn't compile for Flink 1.18 [flink-connector-pulsar]

2023-10-30 Thread via GitHub


MartijnVisser opened a new pull request, #62:
URL: https://github.com/apache/flink-connector-pulsar/pull/62

   ## Purpose of the change
   
   * Add support for Flink 1.18

   ## Brief change log
   
   * Let CI also test against Flink 1.18
   * Added archunit violations that are reported by the newer ArchUnit version 
(which was updated in Flink 1.18) to the existing violation store (so that the 
connector still compiles for Flink 1.17 and lower)
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Significant changes
   
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
   - If yes, how is this documented? (not applicable / docs / JavaDocs / 
not documented)
   





[jira] [Created] (FLINK-33400) Pulsar connector doesn't compile for Flink 1.18 due to Archunit update

2023-10-30 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-33400:
--

 Summary: Pulsar connector doesn't compile for Flink 1.18 due to 
Archunit update
 Key: FLINK-33400
 URL: https://issues.apache.org/jira/browse/FLINK-33400
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Pulsar
Affects Versions: pulsar-4.0.1
Reporter: Martijn Visser
Assignee: Martijn Visser








Re: [PR] [hotfix][docs] Add SQL Gateway doc reference in the Chinese version of table overview [flink]

2023-10-30 Thread via GitHub


KarmaGYZ merged PR #23619:
URL: https://github.com/apache/flink/pull/23619





Re: [PR] [FLINK-33395][table-planner] fix the join hint doesn't work when appears in subquery [flink]

2023-10-30 Thread via GitHub


flinkbot commented on PR #23620:
URL: https://github.com/apache/flink/pull/23620#issuecomment-1785043507

   
   ## CI report:
   
   * 436d138830d46f2e81b99689067954e953636c83 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-33395) The join hint doesn't work when appears in subquery

2023-10-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33395:
---
Labels: pull-request-available  (was: )

> The join hint doesn't work when appears in subquery
> ---
>
> Key: FLINK-33395
> URL: https://issues.apache.org/jira/browse/FLINK-33395
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0, 1.17.0, 1.18.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> See the existent test 
> 'NestLoopJoinHintTest#testJoinHintWithJoinHintInCorrelateAndWithAgg', the 
> test plan is 
> {code:java}
> HashJoin(joinType=[LeftSemiJoin], where=[=(a1, EXPR$0)], select=[a1, b1], 
> build=[right], tryDistinctBuildRow=[true])
> :- Exchange(distribution=[hash[a1]])
> :  +- TableSourceScan(table=[[default_catalog, default_database, T1]], 
> fields=[a1, b1])
> +- Exchange(distribution=[hash[EXPR$0]])
>+- LocalHashAggregate(groupBy=[EXPR$0], select=[EXPR$0])
>   +- Calc(select=[EXPR$0])
>  +- HashAggregate(isMerge=[true], groupBy=[a1], select=[a1, 
> Final_COUNT(count$0) AS EXPR$0])
> +- Exchange(distribution=[hash[a1]])
>+- LocalHashAggregate(groupBy=[a1], select=[a1, 
> Partial_COUNT(a2) AS count$0])
>   +- NestedLoopJoin(joinType=[InnerJoin], where=[=(a2, a1)], 
> select=[a2, a1], build=[right])
>  :- TableSourceScan(table=[[default_catalog, 
> default_database, T2, project=[a2], metadata=[]]], fields=[a2], 
> hints=[[[ALIAS options:[T2)
>  +- Exchange(distribution=[broadcast])
> +- TableSourceScan(table=[[default_catalog, 
> default_database, T1, project=[a1], metadata=[]]], fields=[a1], 
> hints=[[[ALIAS options:[T1) {code}
> but the NestedLoopJoin should broadcast the left side.





[PR] [FLINK-33395][table-planner] fix the join hint doesn't work when appears in subquery [flink]

2023-10-30 Thread via GitHub


xuyangzhong opened a new pull request, #23620:
URL: https://github.com/apache/flink/pull/23620

   ## What is the purpose of the change
   
   This PR tries to fix the issue that join hints should also be resolved and 
work when defined in a subquery.
   
   ## Brief change log
   
 - *Modify the code in SqlToRelConverter to propagate the join hint when 
converting*
 - *Add an abstract shuttle to resolve join hints both in normal nodes and 
subqueries.*
 - *Added a brief description of handling join hints in JoinStrategy*
 - *Correct the tests*
   
   ## Verifying this change
   
   This change is already covered by existing tests. There are also some extra 
tests added to verify it.
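
   To make the scenario concrete, here is a minimal sketch of a hinted 
subquery of the kind this PR fixes (modeled on the NestLoopJoinHintTest case 
from the ticket; the tables and query are illustrative, not the ticket's code):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class JoinHintInSubquerySketch {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inBatchMode());
        env.executeSql(
                "CREATE TABLE T1 (a1 INT, b1 STRING) WITH "
                        + "('connector' = 'datagen', 'number-of-rows' = '10')");
        env.executeSql(
                "CREATE TABLE T2 (a2 INT, b2 STRING) WITH "
                        + "('connector' = 'datagen', 'number-of-rows' = '10')");

        // The NEST_LOOP hint sits inside the subquery; before this fix it was
        // not propagated during conversion, so the inner join ignored it.
        System.out.println(
                env.explainSql(
                        "SELECT * FROM T1 WHERE a1 IN ("
                                + " SELECT /*+ NEST_LOOP(T1) */ COUNT(a2) FROM T2"
                                + " JOIN T1 ON a2 = a1 GROUP BY a1)"));
    }
}
```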
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? 
   





Re: [PR] [hotfix][docs] Add SQL Gateway doc reference in the Chinese version of table overview [flink]

2023-10-30 Thread via GitHub


xiangyuf commented on code in PR #23619:
URL: https://github.com/apache/flink/pull/23619#discussion_r1376075317


##
docs/content.zh/docs/dev/table/overview.md:
##
@@ -52,6 +52,7 @@ and later use the DataStream API to build alerting based on 
the matched patterns
 * [SQL]({{< ref "docs/dev/table/sql/overview" >}}): SQL 支持的操作和语法。
 * [内置函数]({{< ref "docs/dev/table/functions/systemFunctions" >}}): Table API 和 
SQL 中的内置函数。
 * [SQL Client]({{< ref "docs/dev/table/sqlClient" >}}): 不用编写代码就可以尝试 Flink 
SQL,可以直接提交 SQL 任务到集群上。
+* [SQL Gateway]({{< ref "docs/dev/table/sql-gateway/overview" >}}): 
SQL提交服务,支持多个客户端从远端并发提交 SQL 任务。

Review Comment:
   @KarmaGYZ Updated






Re: [PR] [hotfix][docs] Add SQL Gateway doc reference in the Chinese version of table overview [flink]

2023-10-30 Thread via GitHub


KarmaGYZ commented on code in PR #23619:
URL: https://github.com/apache/flink/pull/23619#discussion_r1376073257


##
docs/content.zh/docs/dev/table/overview.md:
##
@@ -52,6 +52,7 @@ and later use the DataStream API to build alerting based on 
the matched patterns
 * [SQL]({{< ref "docs/dev/table/sql/overview" >}}): SQL 支持的操作和语法。
 * [内置函数]({{< ref "docs/dev/table/functions/systemFunctions" >}}): Table API 和 
SQL 中的内置函数。
 * [SQL Client]({{< ref "docs/dev/table/sqlClient" >}}): 不用编写代码就可以尝试 Flink 
SQL,可以直接提交 SQL 任务到集群上。
+* [SQL Gateway]({{< ref "docs/dev/table/sql-gateway/overview" >}}): 
SQL提交服务,支持多个客户端从远端并发提交 SQL 任务。

Review Comment:
   ```suggestion
   * [SQL Gateway]({{< ref "docs/dev/table/sql-gateway/overview" >}}): SQL 
提交服务,支持多个客户端从远端并发提交 SQL 任务。
   ```









Re: [PR] [hotfix][docs] Add SQL Gateway doc reference in the Chinese version of table overview [flink]

2023-10-30 Thread via GitHub


xiangyuf commented on PR #23619:
URL: https://github.com/apache/flink/pull/23619#issuecomment-1785014318

   @KarmaGYZ Pls review this when you have time.





Re: [PR] [FLINK-33379] Bump CI flink version on flink-connector-elasticsearch [flink-connector-elasticsearch]

2023-10-30 Thread via GitHub


liyubin117 commented on PR #78:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/78#issuecomment-1785012806

   @MartijnVisser CI has succeeded:
   https://github.com/liyubin117/flink-connector-elasticsearch/actions/runs/6692027990





[jira] [Updated] (FLINK-33393) flink document description error

2023-10-30 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-33393:

Fix Version/s: 1.19.0
   (was: 1.17.1)

> flink document description error
> 
>
> Key: FLINK-33393
> URL: https://issues.apache.org/jira/browse/FLINK-33393
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.17.1
>Reporter: 蔡灿材
>Assignee: 蔡灿材
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: 捕获.PNG
>
>
> flink document description error, function part description error





[jira] [Resolved] (FLINK-33393) flink document description error

2023-10-30 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-33393.
-
Resolution: Fixed

> flink document description error
> 
>
> Key: FLINK-33393
> URL: https://issues.apache.org/jira/browse/FLINK-33393
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.17.1
>Reporter: 蔡灿材
>Assignee: 蔡灿材
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: 捕获.PNG
>
>
> flink document description error, function part description error





[jira] [Commented] (FLINK-33393) flink document description error

2023-10-30 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17780964#comment-17780964
 ] 

Rui Fan commented on FLINK-33393:
-

Merged to master (1.19) via 806147c3233a47eacaa630dca5fdfef83397ab31

> flink document description error
> 
>
> Key: FLINK-33393
> URL: https://issues.apache.org/jira/browse/FLINK-33393
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.17.1
>Reporter: 蔡灿材
>Assignee: 蔡灿材
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.1
>
> Attachments: 捕获.PNG
>
>
> flink document description error, function part description error





Re: [PR] [FLINK-33393][doc] Fix typo in function documentation [flink]

2023-10-30 Thread via GitHub


1996fanrui merged PR #23618:
URL: https://github.com/apache/flink/pull/23618





Re: [PR] [FLINK-33379] Bump CI flink version on flink-connector-elasticsearch [flink-connector-elasticsearch]

2023-10-30 Thread via GitHub


liyubin117 commented on code in PR #78:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/78#discussion_r1376042196


##
.github/workflows/weekly.yml:
##
@@ -35,12 +35,18 @@ jobs:
 }, {
   flink: 1.18-SNAPSHOT,
   branch: main
+}, {
+  flink: 1.19-SNAPSHOT,
+  branch: main
 }, {
   flink: 1.16.2,
   branch: v3.0
 }, {
   flink: 1.17.1,
   branch: v3.0
+}, {
+  flink: 1.18.0,
+  branch: v3.0

Review Comment:
   I found that you removed the 1.18.0 support in the following hotfix, and 
there are indeed compilation failures, so I will also remove the support. 
Thanks for your reminder.
   
![image](https://github.com/apache/flink-connector-elasticsearch/assets/7313035/ec287fa2-b895-4c70-8694-de327259c304)
   






Re: [PR] [FLINK-33379] Bump CI flink version on flink-connector-elasticsearch [flink-connector-elasticsearch]

2023-10-30 Thread via GitHub


MartijnVisser commented on code in PR #78:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/78#discussion_r1376022350


##
.github/workflows/weekly.yml:
##
@@ -35,12 +35,18 @@ jobs:
 }, {
   flink: 1.18-SNAPSHOT,
   branch: main
+}, {
+  flink: 1.19-SNAPSHOT,
+  branch: main
 }, {
   flink: 1.16.2,
   branch: v3.0
 }, {
   flink: 1.17.1,
   branch: v3.0
+}, {
+  flink: 1.18.0,
+  branch: v3.0

Review Comment:
   Does the v3.0 branch support 1.18.0? 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33317][runtime] Add cleaning mechanism for initial configs to reduce the memory usage [flink]

2023-10-30 Thread via GitHub


RocMarshal commented on code in PR #23589:
URL: https://github.com/apache/flink/pull/23589#discussion_r1376018673


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamConfig.java:
##
@@ -795,6 +795,20 @@ public boolean isGraphContainingLoops() {
 return config.getBoolean(GRAPH_CONTAINING_LOOPS, false);
 }
 
+/**
+ * In general, we don't clear any configuration. However, {@link #SERIALIZED_UDF} may be
+ * very large when the operator includes some large objects; it is used to create the
+ * StreamOperator and usually only needs to be read once. {@link #CHAINED_TASK_CONFIG} may be
+ * large as well, because the StreamConfig of every non-head operator in the OperatorChain is
+ * serialized and stored in it. Both can be cleared after the StreamTask is initialized to
+ * reduce memory, so the TM has more memory available while running. See FLINK-33315
+ * and FLINK-33317 for more information.
+ */
+public void clearInitialConfigs() {
+config.removeKey(SERIALIZED_UDF);
+config.removeKey(CHAINED_TASK_CONFIG);
+}

Review Comment:
   Thanks @1996fanrui for the reply.
   
   Just to reiterate this comment, there is one hypothetical premise and one consensus premise:
   
   1. Assume that this part of the memory is large (sorry, I am not very familiar with this part).
   2. The JVM performs GC at a safepoint, but reaching a safepoint does not by itself trigger GC. So even when the JVM reaches a safepoint, it only means one more precondition for GC is satisfied and the probability of GC triggering is higher.
   
   Based on these two points: suppose execution reaches this code but the JVM has not yet reached a safepoint, so GC cannot run for the moment. Continuing to run keeps occupying space, and when GC finally runs, more memory has to be reclaimed and the pause will be longer. If we can bring the JVM to a safepoint early, the probability of an early GC increases, which can minimize GC latency or spread it across multiple GCs.
   
   However, this kind of case is extreme and the conditions are unstable. Let's turn our attention to getting the PR merged :)
   Thank you again for your patience~
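   
   For readers following the memory argument, here is a minimal sketch of the idea under discussion: dropping references to large one-shot serialized blobs right after initialization so that whichever GC eventually runs can reclaim them. The class and helper names are hypothetical illustrations, not Flink's actual code.
   
       // Hypothetical sketch: null out large one-shot blobs after use so the
       // next GC cycle (whenever the JVM reaches a safepoint) can reclaim them.
       class TaskConfigHolder {
           private byte[] serializedUdf;     // large; read exactly once
           private byte[] chainedTaskConfig; // large; read exactly once
   
           Object initializeOperator() {
               Object operator = deserialize(serializedUdf);
               // From here on the blobs are dead weight; clearing the fields
               // makes them eligible for collection instead of pinning them
               // for the lifetime of the task.
               serializedUdf = null;
               chainedTaskConfig = null;
               return operator;
           }
   
           private static Object deserialize(byte[] bytes) {
               return new Object(); // stand-in for real deserialization
           }
       }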



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33354][runtime] Cache TaskInformation and JobInformation to avoid deserializing duplicate big objects [flink]

2023-10-30 Thread via GitHub


1996fanrui commented on code in PR #23599:
URL: https://github.com/apache/flink/pull/23599#discussion_r1376013241


##
flink-runtime/src/main/java/org/apache/flink/runtime/util/DefaultGroupCache.java:
##
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.util;
+
+import org.apache.flink.annotation.VisibleForTesting;
+
+import org.apache.flink.shaded.guava31.com.google.common.base.Ticker;
+import org.apache.flink.shaded.guava31.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava31.com.google.common.cache.CacheBuilder;
+import 
org.apache.flink.shaded.guava31.com.google.common.cache.RemovalNotification;
+
+import javax.annotation.concurrent.NotThreadSafe;
+
+import java.time.Duration;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+
+/** Default implementation of {@link GroupCache}. Entries will be expired after timeout. */
+@NotThreadSafe
+public class DefaultGroupCache<G, K, V> implements GroupCache<G, K, V> {
+private final Cache<CacheKey<G, K>, V> cache;
+private final Map<G, Set<CacheKey<G, K>>> cachedBlobKeysPerJob;
+
+private DefaultGroupCache(Duration expireTimeout, int cacheSizeLimit, 
Ticker ticker) {
+this.cachedBlobKeysPerJob = new HashMap<>();
+this.cache =
+CacheBuilder.newBuilder()
+.concurrencyLevel(1)
+.maximumSize(cacheSizeLimit)
+.expireAfterAccess(expireTimeout)
+.ticker(ticker)
+.removalListener(this::onCacheRemoval)
+.build();
+}
+
+@Override
+public void clear() {
+cachedBlobKeysPerJob.clear();
+cache.cleanUp();
+}
+
+@Override
+public V get(G group, K key) {
+return cache.getIfPresent(new CacheKey<>(group, key));
+}
+
+@Override
+public void put(G group, K key, V value) {
+CacheKey<G, K> cacheKey = new CacheKey<>(group, key);
+cache.put(cacheKey, value);
+cachedBlobKeysPerJob.computeIfAbsent(group, ignore -> new 
HashSet<>()).add(cacheKey);
+}
+
+@Override
+public void clearCacheForGroup(G group) {
+Set<CacheKey<G, K>> removed = cachedBlobKeysPerJob.remove(group);
+if (removed != null) {
+cache.invalidateAll(removed);
+}
+}
+
+/**
+ * Removal listener that removes the cache key of this group.
+ *
+ * @param removalNotification of the removed element.
+ */
+private void onCacheRemoval(RemovalNotification<CacheKey<G, K>, V> removalNotification) {
+CacheKey<G, K> cacheKey = removalNotification.getKey();
+V value = removalNotification.getValue();
+if (cacheKey != null && value != null) {
+cachedBlobKeysPerJob.computeIfPresent(
+cacheKey.getGroup(),
+(group, keys) -> {
+keys.remove(cacheKey);
+if (keys.isEmpty()) {
+return null;
+} else {
+return keys;
+}
+});

Review Comment:
   I wrote a demo in my IDEA, and it doesn't have the memory leak. When the remapping function returns null, the map removes the key.
   
   
   https://github.com/apache/flink/assets/38427477/457dd9bb-9ea3-474f-aab4-b0c43e0b5020
   
   
   https://github.com/apache/flink/assets/38427477/040b3831-1919-4a6e-afa0-8742e4bf200b
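   
   Since the screenshots may not render in the mail archive, here is a minimal, self-contained version of what such a demo could look like (an assumed reconstruction, not the exact code from the images):
   
       import java.util.HashMap;
       import java.util.HashSet;
       import java.util.Map;
       import java.util.Set;
       
       public class ComputeIfPresentDemo {
           public static void main(String[] args) {
               Map<String, Set<String>> map = new HashMap<>();
               map.computeIfAbsent("group", ignore -> new HashSet<>()).add("key");
       
               // Returning null from the remapping function removes the entry,
               // so no empty Set is left behind in the map.
               map.computeIfPresent("group", (group, keys) -> {
                   keys.remove("key");
                   return keys.isEmpty() ? null : keys;
               });
       
               System.out.println(map.containsKey("group")); // prints: false
           }
       }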



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33260] Allow user to provide a list of recoverable exceptions [flink-connector-aws]

2023-10-30 Thread via GitHub


iemre commented on code in PR #110:
URL: 
https://github.com/apache/flink-connector-aws/pull/110#discussion_r1376004352


##
flink-connector-aws/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/util/KinesisConfigUtilTest.java:
##
@@ -1021,4 +1021,28 @@ public void testGetV2ConsumerClientProperties() {
 .containsKey("aws.kinesis.client.user-agent-prefix")
 .hasSize(2);
 }
+
+@Test
+public void testInvalidCustomRecoverableErrorConfiguration() {
+exception.expect(IllegalArgumentException.class);
+exception.expectMessage(
+"Provided recoverable exception class could not be found: 
com.NonExistentExceptionClass");
+
+Properties testConfig = TestUtils.getStandardProperties();
+testConfig.setProperty(
+ConsumerConfigConstants.RECOVERABLE_EXCEPTIONS_PREFIX + 
"[0].exception",
+"com.NonExistentExceptionClass");
+
+KinesisConfigUtil.validateConsumerConfiguration(testConfig);
+}
+
+@Test
+public void testValidCustomRecoverableErrorConfiguration() {
+Properties testConfig = TestUtils.getStandardProperties();
+testConfig.setProperty(
+ConsumerConfigConstants.RECOVERABLE_EXCEPTIONS_PREFIX + 
"[0].exception",

Review Comment:
   should probably add 
`ConsumerConfigConstants.RECOVERABLE_EXCEPTIONS_EXCEPTION_SUFFIX` but it 
stutters - open to suggestions 
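   
   The expected message in the test above hints at validation along these lines. This is a hypothetical sketch of the check, not the actual KinesisConfigUtil implementation:
   
       // Hypothetical: the configured class must exist on the classpath and be a Throwable.
       static void validateRecoverableExceptionClass(String className) {
           Class<?> clazz;
           try {
               clazz = Class.forName(className);
           } catch (ClassNotFoundException e) {
               throw new IllegalArgumentException(
                       "Provided recoverable exception class could not be found: " + className, e);
           }
           if (!Throwable.class.isAssignableFrom(clazz)) {
               throw new IllegalArgumentException(
                       "Provided recoverable exception class is not a Throwable: " + className);
           }
       }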



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-30238) Unified Sink committer does not clean up state on final savepoint

2023-10-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17780952#comment-17780952
 ] 

Martijn Visser commented on FLINK-30238:


This ticket is pending closure unless new feedback is provided in the 
discussion thread (see 
https://lists.apache.org/thread/25z3ld1ntzkonmp57joth174489g420y).

> Unified Sink committer does not clean up state on final savepoint
> -
>
> Key: FLINK-30238
> URL: https://issues.apache.org/jira/browse/FLINK-30238
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.15.3, 1.16.1
>Reporter: Fabian Paul
>Priority: Critical
> Attachments: Screenshot 2023-03-09 at 1.47.11 PM.png, image (8).png
>
>
> During stop-with-savepoint the committer only commits the pending 
> committables on notifyCheckpointComplete.
> This has several downsides.
>  * The last committableSummary has checkpoint id LONG.MAX and is never cleared 
> from the state, so stop-with-savepoint does not work when the 
> pipeline recovers from a savepoint.
>  * While the committables are committed during stop-with-savepoint, they are 
> not forwarded to the post-commit topology, potentially losing data and preventing 
> open transactions from being closed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33260] Allow user to provide a list of recoverable exceptions [flink-connector-aws]

2023-10-30 Thread via GitHub


iemre commented on code in PR #110:
URL: 
https://github.com/apache/flink-connector-aws/pull/110#discussion_r1376002503


##
docs/content.zh/docs/connectors/table/kinesis.md:
##
@@ -680,6 +680,14 @@ Connector Options
   Long
   The interval (in milliseconds) after which to consider a shard idle 
for purposes of watermark generation. A positive value will allow the watermark 
to progress even when some shards don't receive new records.
 
+
+  shard.consumer.error.recoverable[0].exception
+  optional
+  no
+  (none)
+  String
+  User-specified Exception to retry indefinitely. Example value: 
`java.net.UnknownHostException`. This configuration is a zero-based array. As 
such, the specified exceptions must start with index 0. Specified exceptions 
must be valid Throwables in class, or connector will fail to initialize and 
fail fast.

Review Comment:
   typo: `in classpath`
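   
   For context, a sketch of how a user would set the zero-based option documented above; the property key comes from the doc row, everything else is illustrative:
   
       import java.util.Properties;
       
       Properties consumerConfig = new Properties();
       // Indexing must start at 0; further exceptions continue at [1], [2], ...
       consumerConfig.setProperty(
               "shard.consumer.error.recoverable[0].exception",
               "java.net.UnknownHostException");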



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-26624) Running HA (hashmap, async) end-to-end test failed on azure due to unable to find master logs

2023-10-30 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-26624:
-

Assignee: xiang1 yu

> Running HA (hashmap, async) end-to-end test failed on azure due to unable to 
> find master logs
> -
>
> Key: FLINK-26624
> URL: https://issues.apache.org/jira/browse/FLINK-26624
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0, 1.17.0, 1.16.1
>Reporter: Yun Gao
>Assignee: xiang1 yu
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> starter, test-stability
> Attachments: 20230304.2-build-46800-flink-logs.tgz
>
>
> {code:java}
> Mar 12 04:31:15 Waiting for text Completed checkpoint [1-9]* for job 
> 699ebf9bdcb51a9fe76db5463027d34c to appear 2 of times in logs...
> grep: 
> /home/vsts/work/_temp/debug_files/flink-logs/*standalonesession-1*.log*: No 
> such file or directory
> Mar 12 04:31:16 Starting standalonesession daemon on host fv-az302-918.
> grep: 
> /home/vsts/work/_temp/debug_files/flink-logs/*standalonesession-1*.log*: No 
> such file or directory
> Mar 12 04:41:23 A timeout occurred waiting for Completed checkpoint [1-9]* 
> for job 699ebf9bdcb51a9fe76db5463027d34c to appear 2 of times in logs.
> Mar 12 04:41:23 Stopping job timeout watchdog (with pid=272045)
> Mar 12 04:41:23 Killing JM watchdog @ 273681
> Mar 12 04:41:23 Killing TM watchdog @ 274268
> Mar 12 04:41:23 [FAIL] Test script contains errors.
> Mar 12 04:41:23 Checking of logs skipped.
> Mar 12 04:41:23 
> Mar 12 04:41:23 [FAIL] 'Running HA (hashmap, async) end-to-end test' failed 
> after 10 minutes and 31 seconds! Test exited with exit code 1
> Mar 12 04:41:23 
> 04:41:23 ##[group]Environment Information
> Mar 12 04:41:24 Searching for .dump, .dumpstream and related files in 
> '/home/vsts/work/1/s'
> dmesg: read kernel buffer failed: Operation not permitted
> Mar 12 04:41:28 Stopping taskexecutor daemon (pid: 272837) on host 
> fv-az302-918.
> Mar 12 04:41:29 Stopping standalonesession daemon (pid: 274590) on host 
> fv-az302-918.
> Mar 12 04:41:35 Stopping zookeeper...
> Mar 12 04:41:36 Stopping zookeeper daemon (pid: 272248) on host fv-az302-918.
> The STDIO streams did not close within 10 seconds of the exit event from 
> process '/usr/bin/bash'. This may indicate a child process inherited the 
> STDIO streams and has not yet exited.
> ##[error]Bash exited with code '1'.
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32945=logs=bea52777-eaf8-5663-8482-18fbc3630e81=b2642e3a-5b86-574d-4c8a-f7e2842bfb14



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-30768) flink-web version cleanup

2023-10-30 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-30768:
-

Assignee: xiang1 yu

> flink-web version cleanup
> -
>
> Key: FLINK-30768
> URL: https://issues.apache.org/jira/browse/FLINK-30768
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Affects Versions: 1.16.0, 1.17.0, 1.15.3
>Reporter: Matthias Pohl
>Assignee: xiang1 yu
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, starter
>
> The Flink website sources have multiple dependency versions (e.g. log4j and 
> slf4j in {{q/gradle-quickstart.sh}}) that are not 
> referenced in Flink's parent pom file. That means contributors might 
> forget to update the right locations in {{flink-web}} when updating the 
> dependencies. Additionally, {{q/gradle-quickstart.sh}} specifies a 
> {{$defaultFlinkVersion}} variable but also requires updating a hardcoded Flink 
> version further down in the script (see 
> [q/gradle-quickstart.sh:119|https://github.com/apache/flink-web/blob/dc24124816d86617991050a2e36fe70ee40ff2dc/q/gradle-quickstart.sh#L119]).
>  
> This Jira issue is about reducing the number of locations where we have to update 
> versions (through variables) and adding notes that these variables have 
> to be kept in sync with the corresponding versions used in the Flink source 
> code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33011) Operator deletes HA data unexpectedly

2023-10-30 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-33011:

Fix Version/s: kubernetes-operator-1.6.1

> Operator deletes HA data unexpectedly
> -
>
> Key: FLINK-33011
> URL: https://issues.apache.org/jira/browse/FLINK-33011
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: 1.17.1, kubernetes-operator-1.6.0
> Environment: Flink: 1.17.1
> Flink Kubernetes Operator: 1.6.0
>Reporter: Ruibin Xing
>Assignee: Gyula Fora
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.7.0, kubernetes-operator-1.6.1
>
> Attachments: flink_operator_logs_0831.csv
>
>
> We encountered a problem where the operator unexpectedly deleted HA data.
> The timeline is as follows:
> 12:08 We submitted the first spec, which suspended the job with savepoint 
> upgrade mode.
> 12:08 The job was suspended, while the HA data was preserved, and the log 
> showed the observed job deployment status was MISSING.
> 12:10 We submitted the second spec, which deployed the job with the last 
> state upgrade mode.
> 12:10 Logs showed the operator deleted both the Flink deployment and the HA 
> data again.
> 12:10 The job failed to start because the HA data was missing.
> According to the log, the deletion was triggered by 
> https://github.com/apache/flink-kubernetes-operator/blob/a728ba768e20236184e2b9e9e45163304b8b196c/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/ApplicationReconciler.java#L168
> I think this would only be triggered if the job deployment status wasn't 
> MISSING. But the log before the deletion showed the observed job status was 
> MISSING at that moment.
> Related logs:
>  
> {code:java}
> 2023-08-30 12:08:48.190 + o.a.f.k.o.s.AbstractFlinkService [INFO 
> ][default/pipeline-pipeline-se-3] Cluster shutdown completed.
> 2023-08-30 12:10:27.010 + o.a.f.k.o.o.d.ApplicationObserver [INFO 
> ][default/pipeline-pipeline-se-3] Observing JobManager deployment. Previous 
> status: MISSING
> 2023-08-30 12:10:27.533 + o.a.f.k.o.l.AuditUtils         [INFO 
> ][default/pipeline-pipeline-se-3] >>> Event  | Info    | SPECCHANGED     | 
> UPGRADE change(s) detected (Diff: FlinkDeploymentSpec[image : 
> docker-registry.randomcompany.com/octopus/pipeline-pipeline-online:0835137c-362
>  -> 
> docker-registry.randomcompany.com/octopus/pipeline-pipeline-online:23db7ae8-365,
>  podTemplate.metadata.labels.app.kubernetes.io~1version : 
> 0835137cd803b7258695eb53a6ec520cb62a48a7 -> 
> 23db7ae84bdab8d91fa527fe2f8f2fce292d0abc, job.state : suspended -> running, 
> job.upgradeMode : last-state -> savepoint, restartNonce : 1545 -> 1547]), 
> starting reconciliation.
> 2023-08-30 12:10:27.679 + o.a.f.k.o.s.NativeFlinkService [INFO 
> ][default/pipeline-pipeline-se-3] Deleting JobManager deployment and HA 
> metadata.
> {code}
> A more complete log file is attached. Thanks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32868) Document the need to backport FLINK-30213 for using autoscaler with older version Flinks

2023-10-30 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-32868:

Fix Version/s: kubernetes-operator-1.7.0
   kubernetes-operator-1.6.1

> Document the need to backport FLINK-30213 for using autoscaler with older 
> version Flinks
> 
>
> Key: FLINK-32868
> URL: https://issues.apache.org/jira/browse/FLINK-32868
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Minor
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.7.0, kubernetes-operator-1.6.1
>
>
> The current Autoscaler doc states on job requirements as the following:
> Job requirements:
>  * The autoscaler currently only works with the latest [Flink 
> 1.17|https://hub.docker.com/_/flink] or after backporting the following fixes 
> to your 1.15/1.16 Flink image
>  ** [Job vertex parallelism 
> overrides|https://github.com/apache/flink/commit/23ce2281a0bb4047c64def9af7ddd5f19d88e2a9]
>  (must have)
>  ** [Support timespan for busyTime 
> metrics|https://github.com/apache/flink/commit/a7fdab8b23cddf568fa32ee7eb804d7c3eb23a35]
>  (good to have)
> However, https://issues.apache.org/jira/browse/FLINK-30213 is also crucial 
> and needs to be backported to 1.15/1.16 to enable autoscaling. We should add 
> it to the doc as well, marked as a must-have.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32890) Flink app rolled back with old savepoints (3 hours back in time) while some checkpoints have been taken in between

2023-10-30 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-32890:

Fix Version/s: kubernetes-operator-1.6.1

> Flink app rolled back with old savepoints (3 hours back in time) while some 
> checkpoints have been taken in between
> --
>
> Key: FLINK-32890
> URL: https://issues.apache.org/jira/browse/FLINK-32890
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Nicolas Fraison
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.7.0, kubernetes-operator-1.6.1
>
>
> Here are all details about the issue:
>  * Deployed new release of a flink app with a new operator
>  * Flink Operator set the app as stable
>  * After some time the app failed and stay in failed state (due to some issue 
> with our kafka clusters)
>  * Finally decided to rollback the new release just in case it could be the 
> root cause of the issue on kafka
>  * Operator detects: {{Job is not running but HA metadata is available for 
> last state restore, ready for upgrade, Deleting JobManager deployment while 
> preserving HA metadata.}}  -> relies on last-state (as we do not disable 
> fallback), no savepoint taken
>  * Flink starts the JM and the deployment of the app. It will find the 3 checkpoints
>  * {{Using '/flink-kafka-job-apache-nico/flink-kafka-job-apache-nico' as 
> Zookeeper namespace.}}
>  * {{Initializing job 'flink-kafka-job' (6b24a364c1905e924a69f3dbff0d26a6).}}
>  * {{Recovering checkpoints from 
> ZooKeeperStateHandleStore\{namespace='flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints'}.}}
>  * {{Found 3 checkpoints in 
> ZooKeeperStateHandleStore\{namespace='flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints'}.}}
>  * {{{}Restoring job 6b24a364c1905e924a69f3dbff0d26a6 from Checkpoint 19 @ 
> 1692268003920 for 6b24a364c1905e924a69f3dbff0d26a6 located at 
> }}\{{{}s3p://.../flink-kafka-job-apache-nico/checkpoints/6b24a364c1905e924a69f3dbff0d26a6/chk-19{}}}{{{}.{}}}
>  * Job failed because of the missing operator
> {code:java}
> Job 6b24a364c1905e924a69f3dbff0d26a6 reached terminal state FAILED.
> org.apache.flink.runtime.client.JobInitializationException: Could not start 
> the JobMaster.
> Caused by: java.util.concurrent.CompletionException: 
> java.lang.IllegalStateException: There is no operator for the state 
> f298e8715b4d85e6f965b60e1c848cbe * Job 6b24a364c1905e924a69f3dbff0d26a6 has 
> been registered for cleanup in the JobResultStore after reaching a terminal 
> state.{code}
>  * {{Clean up the high availability data for job 
> 6b24a364c1905e924a69f3dbff0d26a6.}}
>  * {{Removed job graph 6b24a364c1905e924a69f3dbff0d26a6 from 
> ZooKeeperStateHandleStore\{namespace='flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobgraphs'}.}}
>  * JobManager restart and try to resubmit the job but the job was already 
> submitted so finished
>  * {{Received JobGraph submission 'flink-kafka-job' 
> (6b24a364c1905e924a69f3dbff0d26a6).}}
>  * {{Ignoring JobGraph submission 'flink-kafka-job' 
> (6b24a364c1905e924a69f3dbff0d26a6) because the job already reached a 
> globally-terminal state (i.e. FAILED, CANCELED, FINISHED) in a previous 
> execution.}}
>  * {{Application completed SUCCESSFULLY}}
>  * Finally the operator rollback the deployment and still indicate that {{Job 
> is not running but HA metadata is available for last state restore, ready for 
> upgrade}}
>  * But the job metadata are not anymore there (clean previously)
>  
> {code:java}
> (CONNECTED [zookeeper-data-eng-multi-cloud.zookeeper-flink.svc:2181]) /> ls 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints
> Path 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints
>  doesn't exist
> (CONNECTED [zookeeper-data-eng-multi-cloud.zookeeper-flink.svc:2181]) /> ls 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs
> (CONNECTED [zookeeper-data-eng-multi-cloud.zookeeper-flink.svc:2181]) /> ls 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico
> jobgraphs
> jobs
> leader
> {code}
>  
> The rolled-back app from the Flink operator finally takes the last provided 
> savepoint, as no metadata/checkpoints are available. But this last savepoint 
> is an old one, since during the upgrade the operator decided to rely on 
> last-state (the old savepoint taken is a scheduled one).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33399) Support switching from batch to stream mode for KeyedCoProcessOperator and IntervalJoinOperator

2023-10-30 Thread Xuannan Su (Jira)
Xuannan Su created FLINK-33399:
--

 Summary: Support switching from batch to stream mode for 
KeyedCoProcessOperator and IntervalJoinOperator
 Key: FLINK-33399
 URL: https://issues.apache.org/jira/browse/FLINK-33399
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Task
Reporter: Xuannan Su


Support switching from batch to stream mode for KeyedCoProcessOperator and 
IntervalJoinOperator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33354][runtime] Cache TaskInformation and JobInformation to avoid deserializing duplicate big objects [flink]

2023-10-30 Thread via GitHub


RocMarshal commented on code in PR #23599:
URL: https://github.com/apache/flink/pull/23599#discussion_r1375961136


##
flink-runtime/src/main/java/org/apache/flink/runtime/util/DefaultGroupCache.java:
##
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.util;
+
+import org.apache.flink.annotation.VisibleForTesting;
+
+import org.apache.flink.shaded.guava31.com.google.common.base.Ticker;
+import org.apache.flink.shaded.guava31.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava31.com.google.common.cache.CacheBuilder;
+import 
org.apache.flink.shaded.guava31.com.google.common.cache.RemovalNotification;
+
+import javax.annotation.concurrent.NotThreadSafe;
+
+import java.time.Duration;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+
+/** Default implementation of {@link GroupCache}. Entries will be expired after timeout. */
+@NotThreadSafe
+public class DefaultGroupCache<G, K, V> implements GroupCache<G, K, V> {
+private final Cache<CacheKey<G, K>, V> cache;
+private final Map<G, Set<CacheKey<G, K>>> cachedBlobKeysPerJob;
+
+private DefaultGroupCache(Duration expireTimeout, int cacheSizeLimit, 
Ticker ticker) {
+this.cachedBlobKeysPerJob = new HashMap<>();
+this.cache =
+CacheBuilder.newBuilder()
+.concurrencyLevel(1)
+.maximumSize(cacheSizeLimit)
+.expireAfterAccess(expireTimeout)
+.ticker(ticker)
+.removalListener(this::onCacheRemoval)
+.build();
+}
+
+@Override
+public void clear() {
+cachedBlobKeysPerJob.clear();
+cache.cleanUp();
+}
+
+@Override
+public V get(G group, K key) {
+return cache.getIfPresent(new CacheKey<>(group, key));
+}
+
+@Override
+public void put(G group, K key, V value) {
+CacheKey<G, K> cacheKey = new CacheKey<>(group, key);
+cache.put(cacheKey, value);
+cachedBlobKeysPerJob.computeIfAbsent(group, ignore -> new 
HashSet<>()).add(cacheKey);
+}
+
+@Override
+public void clearCacheForGroup(G group) {
+Set<CacheKey<G, K>> removed = cachedBlobKeysPerJob.remove(group);
+if (removed != null) {
+cache.invalidateAll(removed);
+}
+}
+
+/**
+ * Removal listener that removes the cache key of this group.
+ *
+ * @param removalNotification of the removed element.
+ */
+private void onCacheRemoval(RemovalNotification<CacheKey<G, K>, V> removalNotification) {
+CacheKey<G, K> cacheKey = removalNotification.getKey();
+V value = removalNotification.getValue();
+if (cacheKey != null && value != null) {
+cachedBlobKeysPerJob.computeIfPresent(
+cacheKey.getGroup(),
+(group, keys) -> {
+keys.remove(cacheKey);
+if (keys.isEmpty()) {
+return null;
+} else {
+return keys;
+}
+});

Review Comment:
   Would there be a risk of a memory leak here?
   For example, consider this situation:
   
   - There are very many groups
   - For each of these groups, one by one:
 - Add a set for a group and then remove the set for that same group, while the map key itself is never removed. Would there be many entries of the form `Entry-i` where `set-i` is empty or null?
 
   In short, could `cachedBlobKeysPerJob` degenerate into a collection with too many elements?
   
   Please correct me if I have misread anything.
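   
   A self-contained sketch of this scenario, using plain Guava rather than Flink's shaded copy and illustrative names, replicating just the cache-plus-index structure from the diff:
   
       import com.google.common.cache.Cache;
       import com.google.common.cache.CacheBuilder;
       
       import java.util.HashMap;
       import java.util.HashSet;
       import java.util.Map;
       import java.util.Set;
       
       public class GroupCacheLeakCheck {
           public static void main(String[] args) {
               Cache<String, String> cache = CacheBuilder.newBuilder().maximumSize(100).build();
               Map<String, Set<String>> index = new HashMap<>();
       
               for (int i = 0; i < 1_000_000; i++) {
                   String group = "group-" + i;
                   String key = group + "/key";
                   cache.put(key, "value");
                   index.computeIfAbsent(group, ignore -> new HashSet<>()).add(key);
       
                   // clearCacheForGroup(): Map.remove() drops the group's whole entry,
                   // so the index cannot accumulate empty sets for cleared groups.
                   Set<String> removed = index.remove(group);
                   if (removed != null) {
                       cache.invalidateAll(removed);
                   }
               }
               System.out.println(index.size()); // prints: 0
           }
       }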



##
flink-runtime/src/main/java/org/apache/flink/runtime/deployment/TaskDeploymentDescriptor.java:
##
@@ -151,39 +166,43 @@ public TaskDeploymentDescriptor(
 }
 
 /**
- * Return the sub task's serialized job information.
+ * Return the sub task's job information.
  *
- * @return serialized job information (may throw {@link 
IllegalStateException} if {@link
- * #loadBigData} is not called beforehand).
+ * @return job 

[jira] [Created] (FLINK-33398) Support switching from batch to stream mode for one input stream operator

2023-10-30 Thread Xuannan Su (Jira)
Xuannan Su created FLINK-33398:
--

 Summary: Support switching from batch to stream mode for one input 
stream operator
 Key: FLINK-33398
 URL: https://issues.apache.org/jira/browse/FLINK-33398
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Task
Reporter: Xuannan Su


Introduce the infra to support switching from batch to stream mode for one 
input stream operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32182][build] Use original japicmp plugin [flink]

2023-10-30 Thread via GitHub


zentol commented on PR #23594:
URL: https://github.com/apache/flink/pull/23594#issuecomment-1784893428

   oh god what happened...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33293) Data loss with Kafka Sink

2023-10-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17780943#comment-17780943
 ] 

Martijn Visser commented on FLINK-33293:


[~jredzepovic] Did you test 3.0.0-1.17?

> Data loss with Kafka Sink
> -
>
> Key: FLINK-33293
> URL: https://issues.apache.org/jira/browse/FLINK-33293
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.16.1, 1.16.2, 1.17.1
>Reporter: Jasmin Redzepovic
>Priority: Major
> Attachments: job.log, job_1_16_2_run1.log, job_1_16_2_run2.log, 
> job_1_17_1_run1.log, job_1_17_1_run2.log
>
>
> More info in Slack discussion: 
> [https://www.linen.dev/s/apache-flink/t/15851877/hi-all-it-s-me-again-based-on-https-apache-flink-slack-com-a#e102fa98-dcd7-40e8-a2c4-f7b4a83234e1]
>  
> *TLDR:*
> (in Flink version 1.15 I was unable to reproduce the issue, but in 1.16 and 
> 1.17 I can reproduce it)
> I have created a sink topic with 8 partitions, a replication factor of 3, and 
> a minimum in-sync replicas of 2. The consumer properties are set to their 
> default values.
> For the producer, I made changes to the delivery.timeout.ms and 
> request.timeout.ms properties, setting them to *5000ms* and *4000ms* 
> respectively. (acks is set to -1 by default, which equals _all_, I guess.)
> KafkaSink is configured with an AT_LEAST_ONCE delivery guarantee. The job 
> parallelism is set to 1 and the checkpointing interval is set to 2000ms. I 
> started a Flink Job and monitored its logs. Additionally, I was consuming the 
> __consumer_offsets topic in parallel to track when offsets are committed for 
> my consumer group.
> The problematic part occurs during checkpoint 5. Its duration was 5009ms, 
> which exceeds the delivery timeout for Kafka (5000ms). Although it was marked 
> as completed, I believe that the output buffer of KafkaSink was not fully 
> acknowledged by Kafka. As a result, Flink proceeded to trigger checkpoint 6 
> but immediately encountered a Kafka {_}TimeoutException: Expiring N 
> records{_}.
> I suspect that this exception originated from checkpoint 5 and that 
> checkpoint 5 should not have been considered successful. The job then failed 
> but recovered from checkpoint 5. Some time after checkpoint 7, consumer 
> offsets were committed to Kafka, and this process repeated once more at 
> checkpoint 9.
> Since the offsets of checkpoint 5 were committed to Kafka, but the output 
> buffer was only partially delivered, there has been data loss. I confirmed 
> this when sinking the topic to the database.
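
A minimal sketch of the reported setup, assuming the standard KafkaSink builder API; the broker address, topic, and serializer are illustrative placeholders, not values from the report:

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AtLeastOnceJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.enableCheckpointing(2000); // 2000ms checkpoint interval, as in the report

        KafkaSink<String> sink =
                KafkaSink.<String>builder()
                        .setBootstrapServers("broker:9092") // placeholder
                        .setRecordSerializer(
                                KafkaRecordSerializationSchema.builder()
                                        .setTopic("sink-topic") // placeholder
                                        .setValueSerializationSchema(new SimpleStringSchema())
                                        .build())
                        .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                        // Producer timeouts from the report; note the delivery timeout
                        // (5000ms) is shorter than the problematic checkpoint (5009ms).
                        .setProperty("delivery.timeout.ms", "5000")
                        .setProperty("request.timeout.ms", "4000")
                        .build();

        env.fromElements("a", "b", "c").sinkTo(sink);
        env.execute("at-least-once-demo");
    }
}
{code}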



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

