[jira] [Created] (FLINK-34224) ChangelogStorageMetricsTest.testAttemptsPerUpload(ChangelogStorageMetricsTest) timed out

2024-01-23 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34224:
-

 Summary: 
ChangelogStorageMetricsTest.testAttemptsPerUpload(ChangelogStorageMetricsTest) 
timed out
 Key: FLINK-34224
 URL: https://issues.apache.org/jira/browse/FLINK-34224
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.19.0
Reporter: Matthias Pohl


The timeout appeared in the GitHub Actions workflow (currently in test phase; 
[FLIP-396|https://cwiki.apache.org/confluence/display/FLINK/FLIP-396%3A+Trial+to+test+GitHub+Actions+as+an+alternative+for+Flink%27s+current+Azure+CI+infrastructure]):
https://github.com/XComp/flink/actions/runs/7632434859/job/20793613726#step:10:11040

{code}
Jan 24 01:38:36 "ForkJoinPool-1-worker-1" #16 daemon prio=5 os_prio=0 
tid=0x7f3b200ae800 nid=0x406e3 waiting on condition [0x7f3b1ba0e000]
Jan 24 01:38:36 java.lang.Thread.State: WAITING (parking)
Jan 24 01:38:36 at sun.misc.Unsafe.park(Native Method)
Jan 24 01:38:36 - parking to wait for  <0xdfbbb358> (a 
java.util.concurrent.CompletableFuture$Signaller)
Jan 24 01:38:36 at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
Jan 24 01:38:36 at 
java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
Jan 24 01:38:36 at 
java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313)
Jan 24 01:38:36 at 
java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
Jan 24 01:38:36 at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
Jan 24 01:38:36 at 
org.apache.flink.changelog.fs.ChangelogStorageMetricsTest.testAttemptsPerUpload(ChangelogStorageMetricsTest.java:251)
Jan 24 01:38:36 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
[...]
{code}
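The dump shows the test thread parked indefinitely in `CompletableFuture.get()`. A minimal, hypothetical Java sketch (not the actual test code) of how an unbounded wait hangs, and how a bounded wait surfaces the problem as a diagnosable timeout instead:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class BoundedWaitExample {
    // Wait at most 200 ms instead of parking forever like a bare get().
    static String awaitOrTimeout(CompletableFuture<String> future) {
        try {
            return future.get(200, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "timed out";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // A future that is never completed reproduces the hang scenario
        // from the thread dump above.
        CompletableFuture<String> neverCompleted = new CompletableFuture<>();
        System.out.println(awaitOrTimeout(neverCompleted)); // prints "timed out"
        System.out.println(awaitOrTimeout(
                CompletableFuture.completedFuture("uploaded"))); // prints "uploaded"
    }
}
```

With a bounded wait, the build fails fast with a stack trace pointing at the unfinished upload rather than hitting the global CI timeout.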



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34210][Runtime/Checkpointing] Fix DefaultExecutionGraphBuilder#isCheckpointingEnabled return wrong value when checkpoint disabled [flink]

2024-01-23 Thread via GitHub


mayuehappy commented on PR #24173:
URL: https://github.com/apache/flink/pull/24173#issuecomment-1907582420

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33186) CheckpointAfterAllTasksFinishedITCase.testRestoreAfterSomeTasksFinished fails on AZP

2024-01-23 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810247#comment-17810247
 ] 

Matthias Pohl commented on FLINK-33186:
---

https://github.com/XComp/flink/actions/runs/7632434711/job/20792993223#step:10:8585

>  CheckpointAfterAllTasksFinishedITCase.testRestoreAfterSomeTasksFinished 
> fails on AZP
> -
>
> Key: FLINK-33186
> URL: https://issues.apache.org/jira/browse/FLINK-33186
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Jiang Xin
>Priority: Critical
>  Labels: test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53509=logs=baf26b34-3c6a-54e8-f93f-cf269b32f802=8c9d126d-57d2-5a9e-a8c8-ff53f7b35cd9=8762
> fails as
> {noformat}
> Sep 28 01:23:43 Caused by: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Task local 
> checkpoint failure.
> Sep 28 01:23:43   at 
> org.apache.flink.runtime.checkpoint.PendingCheckpoint.abort(PendingCheckpoint.java:550)
> Sep 28 01:23:43   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoint(CheckpointCoordinator.java:2248)
> Sep 28 01:23:43   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoint(CheckpointCoordinator.java:2235)
> Sep 28 01:23:43   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.lambda$null$9(CheckpointCoordinator.java:817)
> Sep 28 01:23:43   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Sep 28 01:23:43   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Sep 28 01:23:43   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> Sep 28 01:23:43   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> Sep 28 01:23:43   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Sep 28 01:23:43   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Sep 28 01:23:43   at java.lang.Thread.run(Thread.java:748)
> {noformat}





Re: [PR] [FLINK-28915] Support fetching artifacts in native K8s and standalone application mode [flink]

2024-01-23 Thread via GitHub


mbalassi commented on code in PR #24065:
URL: https://github.com/apache/flink/pull/24065#discussion_r1464456656


##
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##
@@ -97,14 +97,34 @@ COPY /path/of/my-flink-job.jar 
$FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you 
can start an Application cluster with the following command:
 
 ```bash
+# Local Schema
 $ ./bin/flink run-application \
 --target kubernetes-application \
 -Dkubernetes.cluster-id=my-first-application-cluster \
 -Dkubernetes.container.image.ref=custom-image-name \
 local:///opt/flink/usrlib/my-flink-job.jar
+
+# FileSystem
+$ ./bin/flink run-application \
+--target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+s3://my-bucket/my-flink-job.jar
+
+# HTTP(S)
+$ ./bin/flink run-application \
+--target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+http://ip:port/my-flink-job.jar
 ```
+{{< hint info >}}
+Now, The jar artifact supports downloading from the [flink filesystem]({{< ref 
"docs/deployment/filesystems/overview" >}}) or Http/Https in Application Mode.  

Review Comment:
   nit: remove `Now,`



##
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##
@@ -326,7 +346,7 @@ $ kubectl create clusterrolebinding 
flink-role-binding-default --clusterrole=edi
 ```
 
 If you do not want to use the `default` service account, use the following 
command to create a new `flink-service-account` service account and set the 
role binding.
-Then use the config option 
`-Dkubernetes.service-account=flink-service-account` to make the JobManager pod 
use the `flink-service-account` service account to create/delete TaskManager 
pods and leader ConfigMaps. 
+Then use the config option 
`-Dkubernetes.service-account=flink-service-account` to configure the 
JobManager pods service account used to create and delete TaskManager pods and 
leader ConfigMaps.

Review Comment:
   nit: `JobManager pod's`



##
docs/layouts/shortcodes/generated/artifact_fetch_configuration.html:
##
@@ -0,0 +1,36 @@
+
+
+
+Key
+Default
+Type
+Description
+
+
+
+
+user.artifacts.artifact-list
+(none)
+List<String>
+A semicolon-separated list of the additional artifacts to 
fetch for the job before setting up the application cluster. All given elements 
have to be valid URIs. Example: 
s3://sandbox-bucket/format.jar;http://sandbox-server:1234/udf.jar

Review Comment:
   Why are we switching to semicolon seperator from commas as used by `--jars`?



##
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##
@@ -97,14 +97,34 @@ COPY /path/of/my-flink-job.jar 
$FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you 
can start an Application cluster with the following command:
 
 ```bash
+# Local Schema
 $ ./bin/flink run-application \
 --target kubernetes-application \
 -Dkubernetes.cluster-id=my-first-application-cluster \
 -Dkubernetes.container.image.ref=custom-image-name \
 local:///opt/flink/usrlib/my-flink-job.jar
+
+# FileSystem
+$ ./bin/flink run-application \
+--target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+s3://my-bucket/my-flink-job.jar
+
+# HTTP(S)
+$ ./bin/flink run-application \
+--target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+http://ip:port/my-flink-job.jar
 ```
+{{< hint info >}}
+Now, The jar artifact supports downloading from the [flink filesystem]({{< ref 
"docs/deployment/filesystems/overview" >}}) or Http/Https in Application Mode.  
+The jar package will be downloaded from filesystem to
+[user.artifacts.base.dir]({{< ref "docs/deployment/config" 
>}}#user-artifacts-base-dir)/[kubernetes.namespace]({{< ref 
"docs/deployment/config" >}}#kubernetes-namespace)/[kubernetes.cluster-id]({{< 
ref "docs/deployment/config" >}}#kubernetes-cluster-id) path in image.
+{{< /hint >}}
 
-Note `local` is the only supported 
scheme in Application Mode.
+Note `local` schema is still supported. 
If you use `local` schema, the jar must be provided in the image or download by 
a init container as described in this [example](#example-of-pod-template).

Review Comment:
   nit: `downloaded by`




Re: [PR] [FLINK-34007][k8s] Adds workaround that fixes the deadlock when renewing the leadership lease fails [flink]

2024-01-23 Thread via GitHub


XComp commented on PR #24132:
URL: https://github.com/apache/flink/pull/24132#issuecomment-1907572028

   [ci 
failure](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=56790)
 is caused by some infrastructure instability in the compile step of the 
connect stage:
   ```
   Jan 23 18:59:09 18:59:09.769 [WARNING] locking 
FileBasedConfig[/home/agent01_azpcontainer/.config/jgit/config] failed after 5 
retries
   ```





Re: [PR] [FLINK-34007][k8s] Adds workaround that fixes the deadlock when renewing the leadership lease fails [flink]

2024-01-23 Thread via GitHub


XComp commented on PR #24132:
URL: https://github.com/apache/flink/pull/24132#issuecomment-1907572233

   @flinkbot run azure





Re: [PR] [FLINK-34210][Runtime/Checkpointing] Fix DefaultExecutionGraphBuilder#isCheckpointingEnabled return wrong value when checkpoint disabled [flink]

2024-01-23 Thread via GitHub


mayuehappy commented on PR #24173:
URL: https://github.com/apache/flink/pull/24173#issuecomment-1907564710

   @masteryhx   
   I found some unit tests failing after the change: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56811=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8
   That's because some tests set the checkpoint interval to Long.MAX_VALUE but 
still expect checkpoints to trigger normally. I will add another fix commit.
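The interaction can be illustrated with a hedged sketch; the predicate below is illustrative only, not Flink's actual DefaultExecutionGraphBuilder code. If an interval of Long.MAX_VALUE is treated as "checkpointing disabled", tests that use Long.MAX_VALUE merely to suppress periodic checkpoints (while still triggering them manually) start to fail:

```java
class CheckpointingEnabledSketch {
    // Assumption (illustrative, not Flink's actual implementation): a
    // non-positive interval or Long.MAX_VALUE means "no periodic checkpoints".
    static boolean isCheckpointingEnabled(long intervalMillis) {
        return intervalMillis > 0 && intervalMillis < Long.MAX_VALUE;
    }

    public static void main(String[] args) {
        System.out.println(isCheckpointingEnabled(10_000L));        // true
        System.out.println(isCheckpointingEnabled(Long.MAX_VALUE)); // false
        System.out.println(isCheckpointingEnabled(-1L));            // false
    }
}
```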





[jira] [Updated] (FLINK-34195) PythonEnvUtils creates python environment instead of python3

2024-01-23 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34195:
--
Labels: starter  (was: )

> PythonEnvUtils creates python environment instead of python3
> 
>
> Key: FLINK-34195
> URL: https://issues.apache.org/jira/browse/FLINK-34195
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Build System / CI
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: starter
>
> I was looking into the Python installation of the Flink test suite because I'm 
> working on updating the CI Docker image from 16.04 (Xenial) to 22.04 
> (FLINK-34194). I noticed that there is test code still relying on the 
> {{python}} command instead of {{{}python3{}}}. For Ubuntu 16.04 that meant 
> relying on Python 2. Therefore, we have tests still relying on Python 2 as 
> far as I understand.
> I couldn't find any documentation or mailing list discussion on major Python 
> version support. But AFAIU, we're relying on Python3 (based on the e2e tests) 
> which makes these tests out-dated.
> Additionally, 
> [python.client.executable|https://github.com/apache/flink/blob/50cb4ee8c545cd38d0efee014939df91c2c9c65f/flink-python/src/main/java/org/apache/flink/python/PythonOptions.java#L170]
>  relies on {{{}python{}}}.
> Should we make it more explicit in our test code that we're actually 
> expecting python3? Additionally, should that be mentioned somewhere in the 
> docs? Or if it's already mentioned, could you point me to it? (As someone 
> looking into PyFlink for the "first" time) I would have expected something 
> like that being mentioned on the [PyFlink 
> overview|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/python/overview/].
>  Or is it the default to assume nowadays that {{python}} refers to 
> {{python3?}}





[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-23 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810241#comment-17810241
 ] 

Matthias Pohl commented on FLINK-31472:
---

Sure, if we are certain that this is a test issue and not an issue that was 
introduced with 1.19?!
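The "Illegal mailbox thread" failure quoted below comes from a thread-confinement check. A minimal sketch of the pattern (an assumed simplification, not TaskMailboxImpl itself): the component remembers the thread it is bound to at construction and rejects calls from any other thread.

```java
class ThreadConfined {
    // The owning thread is captured at construction time.
    private final Thread owner = Thread.currentThread();

    void checkIsOwnerThread() {
        if (Thread.currentThread() != owner) {
            throw new IllegalStateException(
                "Illegal thread detected. This method must be called from inside the mailbox thread!");
        }
    }

    public static void main(String[] args) throws Exception {
        ThreadConfined confined = new ThreadConfined();
        confined.checkIsOwnerThread(); // OK: called from the owning thread

        // Calling from another thread trips the check, as in the test failure.
        Thread other = new Thread(() -> {
            try {
                confined.checkIsOwnerThread();
                System.out.println("no exception");
            } catch (IllegalStateException e) {
                System.out.println("rejected: " + e.getMessage());
            }
        });
        other.start();
        other.join();
    }
}
```

In the quoted trace, a TestProcessingTimeService callback flushes the sink writer from the test thread rather than the mailbox thread, which is exactly the situation this check exists to reject.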

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> When running mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> 

[jira] [Commented] (FLINK-34155) Recurring SqlExecutionException

2024-01-23 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810238#comment-17810238
 ] 

Matthias Pohl commented on FLINK-34155:
---

How big is the mvn.log? I'm wondering whether it would be possible to share the 
entire log for investigation purposes.

> Recurring SqlExecutionException
> ---
>
> Key: FLINK-34155
> URL: https://issues.apache.org/jira/browse/FLINK-34155
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Jeyhun Karimov
>Priority: Major
>  Labels: test-stability
> Attachments: disk-full.log
>
>
> When analyzing a very big Maven log file in our CI system, I found a 
> recurring {{SqlExecutionException}} (a subset of the log file is 
> attached):
>  
> {{org.apache.flink.table.gateway.service.utils.SqlExecutionException: Only 
> 'INSERT/CREATE TABLE AS' statement is allowed in Statement Set or use 'END' 
> statement to submit Statement Set.}}
>  
>  
> which leads to:
>  
> {{06:31:41,155 [flink-rest-server-netty-worker-thread-22] ERROR 
> org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler [] 
> - Unhandled exception.}}
>  





[jira] [Updated] (FLINK-34155) Recurring SqlExecutionException

2024-01-23 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34155:
--
Labels: test-stability  (was: test)

> Recurring SqlExecutionException
> ---
>
> Key: FLINK-34155
> URL: https://issues.apache.org/jira/browse/FLINK-34155
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Jeyhun Karimov
>Priority: Major
>  Labels: test-stability
> Attachments: disk-full.log
>
>
> When analyzing a very big Maven log file in our CI system, I found a 
> recurring {{SqlExecutionException}} (a subset of the log file is 
> attached):
>  
> {{org.apache.flink.table.gateway.service.utils.SqlExecutionException: Only 
> 'INSERT/CREATE TABLE AS' statement is allowed in Statement Set or use 'END' 
> statement to submit Statement Set.}}
>  
>  
> which leads to:
>  
> {{06:31:41,155 [flink-rest-server-netty-worker-thread-22] ERROR 
> org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler [] 
> - Unhandled exception.}}
>  





[jira] [Assigned] (FLINK-33760) Group Window agg has different result when only consuming -D records while using or not using minibatch

2024-01-23 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-33760:
-

Assignee: Yunhong Zheng

> Group Window agg has different result when only consuming -D records while 
> using or not using minibatch
> ---
>
> Key: FLINK-33760
> URL: https://issues.apache.org/jira/browse/FLINK-33760
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Assignee: Yunhong Zheng
>Priority: Major
>
> Add the test in AggregateITCase to re-produce this bug.
>  
> {code:java}
> @Test
> def test(): Unit = {
>   val upsertSourceCurrencyData = List(
> changelogRow("-D", 1.bigDecimal, "a"),
> changelogRow("-D", 1.bigDecimal, "b"),
> changelogRow("-D", 1.bigDecimal, "b")
>   )
>   val upsertSourceDataId = registerData(upsertSourceCurrencyData);
>   tEnv.executeSql(s"""
>  |CREATE TABLE T (
>  | `a` DECIMAL(32, 8),
>  | `d` STRING,
>  | proctime as proctime()
>  |) WITH (
>  | 'connector' = 'values',
>  | 'data-id' = '$upsertSourceDataId',
>  | 'changelog-mode' = 'I,UA,UB,D',
>  | 'failing-source' = 'true'
>  |)
>  |""".stripMargin)
>   val sql =
> "SELECT max(a), sum(a), min(a), TUMBLE_START(proctime, INTERVAL '0.005' 
> SECOND), TUMBLE_END(proctime, INTERVAL '0.005' SECOND), d FROM T GROUP BY d, 
> TUMBLE(proctime, INTERVAL '0.005' SECOND)"
>   val sink = new TestingRetractSink
>   tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink)
>   env.execute()
>   // Use the result precision/scale calculated for sum and don't override 
> with the one calculated
> // for plus()/minus(), which results in losing a decimal digit.
>   val expected = 
> List("6.41671935,65947.230719357070,609.0286740370369970")
>   assertEquals(expected, sink.getRetractResults.sorted)
> } {code}
> When MiniBatch is ON, the result is `List()`.
>  
> When MiniBatch is OFF, the result is 
> `List(null,-1.,null,2023-12-06T11:29:21.895,2023-12-06T11:29:21.900,a)`.
>  





[jira] [Commented] (FLINK-33760) Group Window agg has different result when only consuming -D records while using or not using minibatch

2024-01-23 Thread Yunhong Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810232#comment-17810232
 ] 

Yunhong Zheng commented on FLINK-33760:
---

[~xuyangzhong]  It looks like a bug,  I want to take this ticket! 

> Group Window agg has different result when only consuming -D records while 
> using or not using minibatch
> ---
>
> Key: FLINK-33760
> URL: https://issues.apache.org/jira/browse/FLINK-33760
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Priority: Major
>
> Add the test in AggregateITCase to re-produce this bug.
>  
> {code:java}
> @Test
> def test(): Unit = {
>   val upsertSourceCurrencyData = List(
> changelogRow("-D", 1.bigDecimal, "a"),
> changelogRow("-D", 1.bigDecimal, "b"),
> changelogRow("-D", 1.bigDecimal, "b")
>   )
>   val upsertSourceDataId = registerData(upsertSourceCurrencyData);
>   tEnv.executeSql(s"""
>  |CREATE TABLE T (
>  | `a` DECIMAL(32, 8),
>  | `d` STRING,
>  | proctime as proctime()
>  |) WITH (
>  | 'connector' = 'values',
>  | 'data-id' = '$upsertSourceDataId',
>  | 'changelog-mode' = 'I,UA,UB,D',
>  | 'failing-source' = 'true'
>  |)
>  |""".stripMargin)
>   val sql =
> "SELECT max(a), sum(a), min(a), TUMBLE_START(proctime, INTERVAL '0.005' 
> SECOND), TUMBLE_END(proctime, INTERVAL '0.005' SECOND), d FROM T GROUP BY d, 
> TUMBLE(proctime, INTERVAL '0.005' SECOND)"
>   val sink = new TestingRetractSink
>   tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink)
>   env.execute()
>   // Use the result precision/scale calculated for sum and don't override 
> with the one calculated
> // for plus()/minus(), which results in losing a decimal digit.
>   val expected = 
> List("6.41671935,65947.230719357070,609.0286740370369970")
>   assertEquals(expected, sink.getRetractResults.sorted)
> } {code}
> When MiniBatch is ON, the result is `List()`.
>  
> When MiniBatch is OFF, the result is 
> `List(null,-1.,null,2023-12-06T11:29:21.895,2023-12-06T11:29:21.900,a)`.
>  





Re: [PR] [FLINK-34145][connector/filesystem] support dynamic source parallelism inference in batch jobs [flink]

2024-01-23 Thread via GitHub


flinkbot commented on PR #24186:
URL: https://github.com/apache/flink/pull/24186#issuecomment-1907491277

   
   ## CI report:
   
   * cf88c4b65ae73600d8efb25d6273a7ea16ae5b2d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-33803] Set observedGeneration at end of reconciliation [flink-kubernetes-operator]

2024-01-23 Thread via GitHub


gyfora commented on code in PR #755:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/755#discussion_r1464380455


##
flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/reconciler/ReconciliationMetadata.java:
##
@@ -42,6 +42,7 @@ public class ReconciliationMetadata {
 public static ReconciliationMetadata from(AbstractFlinkResource 
resource) {
 ObjectMeta metadata = new ObjectMeta();
 metadata.setGeneration(resource.getMetadata().getGeneration());
+
resource.getStatus().setObservedGeneration(resource.getMetadata().getGeneration());

Review Comment:
   I would move this code to:
   `ReconciliationStatus#serializeAndSetLastReconciledSpec` and 
`ReconciliationUtils#updateReconciliationMetadata` that way it's a bit more 
intuitive when the setting of the generation happens. It's a bit unexpected 
that a constructor method for `ReconciliationMetadata` mutates the resources 
here.
   
   I know that my suggestion will actually duplicate the setting of the 
observedGeneration in the status but I think it will be cleaner once we remove 
the now deprecated meta generation field :) 






Re: [PR] [FLINK-34223][dist] Introduce a migration tool to transfer legacy config file to new config file. [flink]

2024-01-23 Thread via GitHub


flinkbot commented on PR #24185:
URL: https://github.com/apache/flink/pull/24185#issuecomment-1907481027

   
   ## CI report:
   
   * 19b6b48ebe252e29200838665e6bbc42583d5f98 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-34145) File source connector support dynamic source parallelism inference in batch jobs

2024-01-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34145:
---
Labels: pull-request-available  (was: )

> File source connector support dynamic source parallelism inference in batch 
> jobs
> 
>
> Key: FLINK-34145
> URL: https://issues.apache.org/jira/browse/FLINK-34145
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.19.0
>Reporter: xingbe
>Assignee: xingbe
>Priority: Major
>  Labels: pull-request-available
>
> [FLIP-379|https://cwiki.apache.org/confluence/display/FLINK/FLIP-379%3A+Dynamic+source+parallelism+inference+for+batch+jobs]
>  has introduced support for dynamic source parallelism inference in batch 
> jobs, and we plan to give priority to enabling this feature for the file 
> source connector.





[PR] [FLINK-34145][connector/filesystem] support dynamic source parallelism inference in batch jobs [flink]

2024-01-23 Thread via GitHub


SinBex opened a new pull request, #24186:
URL: https://github.com/apache/flink/pull/24186

   ## What is the purpose of the change
   
   
*[FLIP-379](https://cwiki.apache.org/confluence/display/FLINK/FLIP-379%3A+Dynamic+source+parallelism+inference+for+batch+jobs)
 has introduced support for dynamic source parallelism inference in batch jobs, 
and we plan to give priority to enabling this feature for the file source 
connector.*
   
   
   ## Brief change log
 - *The FileSource implements the DynamicParallelismInference interface.*
   
   
   ## Verifying this change
   This change is already covered by existing tests, such as 
*(FileSourceTextLinesITCase::testBoundedTextFileSource)*.
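   The parallelism-inference hook described above can be sketched with a stand-in interface (names and signatures are illustrative, not the actual FLIP-379 API): the source counts its file splits and caps the result by a framework-provided upper bound.

```java
class ParallelismInferenceSketch {
    // Stand-in for the framework-provided context (hypothetical name).
    interface DynamicParallelismContext {
        int getParallelismInferenceUpperBound();
    }

    // Infer one subtask per split, bounded below by 1 and above by the
    // framework's limit.
    static int inferParallelism(int splitCount, DynamicParallelismContext ctx) {
        return Math.max(1, Math.min(splitCount, ctx.getParallelismInferenceUpperBound()));
    }

    public static void main(String[] args) {
        DynamicParallelismContext ctx = () -> 128;
        System.out.println(inferParallelism(12, ctx));   // 12
        System.out.println(inferParallelism(1000, ctx)); // 128
    }
}
```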
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no )
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: ( no )
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34223) Introduce a migration tool to transfer legacy config file to new config file

2024-01-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34223:
---
Labels: pull-request-available  (was: )

> Introduce a migration tool to transfer legacy config file to new config file
> 
>
> Key: FLINK-34223
> URL: https://issues.apache.org/jira/browse/FLINK-34223
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Scripts
>Reporter: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> As we transition to new configuration files that adhere to the standard YAML 
> format, users are expected to manually migrate their existing config files. 
> However, this process can be error-prone and time-consuming.
> To simplify the migration, we're introducing an automated script. This script 
> leverages BashJavaUtils to efficiently convert old flink-conf.yaml files into 
> the new config file config.yaml, thereby reducing the effort required for 
> migration.





[PR] [FLINK-34223][dist] Introduce a migration tool to transfer legacy config file to new config file. [flink]

2024-01-23 Thread via GitHub


JunRuiLee opened a new pull request, #24185:
URL: https://github.com/apache/flink/pull/24185

   
   
   
   
   ## What is the purpose of the change
   
   As we transition to new configuration files that adhere to the standard YAML 
format, users are expected to manually migrate their existing config files. 
However, this process can be error-prone and time-consuming.
   
   To simplify the migration, we're introducing an automated script. This 
script leverages BashJavaUtils to efficiently convert old flink-conf.yaml files 
into the new config file config.yaml, thereby reducing the effort required for 
migration.
   
   
   ## Brief change log
   Introduce a migration script `migrate-config-tool.sh` to transfer legacy 
config file to new config file.
   This script accepts three input parameters: `FLINK_CONF_DIR`, 
`FLINK_BIN_DIR`, and `FLINK_LIB_DIR`. The output will be a newly created 
`config.yaml` file located in the `FLINK_CONF_DIR` directory.
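   The flat-key-to-nested-YAML rewrite that such a migration performs can be sketched in a few lines of shell. This is a toy illustration only: the key `taskmanager.numberOfTaskSlots` is just an example, and the real tool delegates parsing and conversion to BashJavaUtils rather than doing it in shell.

   ```shell
   # Toy sketch of rewriting one flat legacy flink-conf.yaml line into
   # nested standard YAML (illustrative only; not the actual tool's logic).
   legacy='taskmanager.numberOfTaskSlots: 4'   # one legacy config line
   key=${legacy%%:*}
   value=${legacy#*: }

   result=''
   indent=''
   IFS='.' read -r -a parts <<< "$key"
   # Emit each dotted-key segment as one nesting level of standard YAML.
   for ((i = 0; i < ${#parts[@]} - 1; i++)); do
     result+="${indent}${parts[i]}:"$'\n'
     indent+='  '
   done
   result+="${indent}${parts[${#parts[@]} - 1]}: ${value}"

   printf '%s\n' "$result"
   # taskmanager:
   #   numberOfTaskSlots: 4
   ```

   The real script would apply a conversion like this (via BashJavaUtils) to every entry of the legacy file and write the result as `config.yaml` into `FLINK_CONF_DIR`.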
   
   
   ## Verifying this change
   
   This change added tests and can be verified by 
FlinkConfigLoaderTest#testMigrateLegacyConfigToStandardYaml and 
BashJavaUtilsITCase#testMigrateLegacyConfigToStandardYaml
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**yes** / no)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   





[jira] [Created] (FLINK-34223) Introduce a migration tool to transfer legacy config file to new config file

2024-01-23 Thread Junrui Li (Jira)
Junrui Li created FLINK-34223:
-

 Summary: Introduce a migration tool to transfer legacy config file 
to new config file
 Key: FLINK-34223
 URL: https://issues.apache.org/jira/browse/FLINK-34223
 Project: Flink
  Issue Type: Sub-task
  Components: Deployment / Scripts
Reporter: Junrui Li
 Fix For: 1.19.0


As we transition to new configuration files that adhere to the standard YAML 
format, users are expected to manually migrate their existing config files. 
However, this process can be error-prone and time-consuming.

To simplify the migration, we're introducing an automated script. This script 
leverages BashJavaUtils to efficiently convert old flink-conf.yaml files into 
the new config file config.yaml, thereby reducing the effort required for 
migration.





[jira] [Created] (FLINK-34222) Get minibatch join operator involved

2024-01-23 Thread Shuai Xu (Jira)
Shuai Xu created FLINK-34222:


 Summary: Get minibatch join operator involved
 Key: FLINK-34222
 URL: https://issues.apache.org/jira/browse/FLINK-34222
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Runtime
Reporter: Shuai Xu
 Fix For: 1.19.0


Get the minibatch join operator involved, covering both the plan and the operator.





[jira] [Created] (FLINK-34221) Introduce operator for minibatch join

2024-01-23 Thread Shuai Xu (Jira)
Shuai Xu created FLINK-34221:


 Summary: Introduce operator for minibatch join
 Key: FLINK-34221
 URL: https://issues.apache.org/jira/browse/FLINK-34221
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Runtime
Reporter: Shuai Xu
 Fix For: 1.19.0


Introduce an operator that implements minibatch join.





[jira] [Created] (FLINK-34220) introduce buffer bundle for minibatch join

2024-01-23 Thread Shuai Xu (Jira)
Shuai Xu created FLINK-34220:


 Summary: introduce buffer bundle for minibatch join
 Key: FLINK-34220
 URL: https://issues.apache.org/jira/browse/FLINK-34220
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Runtime
Reporter: Shuai Xu
 Fix For: 1.19.0


Introduce a buffer bundle for storing records to implement minibatch join.





[jira] [Created] (FLINK-34219) Introduce a new join operator to support minibatch

2024-01-23 Thread Shuai Xu (Jira)
Shuai Xu created FLINK-34219:


 Summary: Introduce a new join operator to support minibatch
 Key: FLINK-34219
 URL: https://issues.apache.org/jira/browse/FLINK-34219
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Runtime
Reporter: Shuai Xu
 Fix For: 1.19.0


This is the parent task of FLIP-415.





[jira] [Closed] (FLINK-34205) Update flink-docker's Dockerfile and docker-entrypoint.sh to use BashJavaUtils for Flink configuration management

2024-01-23 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-34205.
---
Fix Version/s: 1.19.0
   Resolution: Done

dev-master:
44f058287cc956a620b12b6f8ed214e44dc3db77

> Update flink-docker's Dockerfile and docker-entrypoint.sh to use 
> BashJavaUtils for Flink configuration management
> -
>
> Key: FLINK-34205
> URL: https://issues.apache.org/jira/browse/FLINK-34205
> Project: Flink
>  Issue Type: Sub-task
>  Components: flink-docker
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The flink-docker's Dockerfile and docker-entrypoint.sh currently use shell 
> scripting techniques with grep and sed for configuration reading and 
> modification. This method is not suitable for the standard YAML configuration 
> format.
> Following the changes introduced in FLINK-33721, we should update 
> flink-docker's Dockerfile and docker-entrypoint.sh to use BashJavaUtils for 
> Flink configuration reading and writing.
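The limitation described above can be seen with a small shell experiment (illustrative; the key `jobmanager.rpc.address` and its value are example data, not taken from the actual Dockerfile):

```shell
# A one-line sed lookup works on the flat legacy flink-conf.yaml format ...
legacy_conf='jobmanager.rpc.address: localhost'
nested_conf='jobmanager:
  rpc:
    address: localhost'

flat_value=$(printf '%s\n' "$legacy_conf" | sed -n 's/^jobmanager\.rpc\.address: //p')

# ... but the same lookup finds nothing once the key is nested standard YAML,
# which is why the scripts move to BashJavaUtils for config access.
nested_value=$(printf '%s\n' "$nested_conf" | sed -n 's/^jobmanager\.rpc\.address: //p')

echo "flat: '$flat_value' nested: '$nested_value'"
# flat: 'localhost' nested: ''
```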





[jira] [Assigned] (FLINK-34205) Update flink-docker's Dockerfile and docker-entrypoint.sh to use BashJavaUtils for Flink configuration management

2024-01-23 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-34205:
---

Assignee: Junrui Li

> Update flink-docker's Dockerfile and docker-entrypoint.sh to use 
> BashJavaUtils for Flink configuration management
> -
>
> Key: FLINK-34205
> URL: https://issues.apache.org/jira/browse/FLINK-34205
> Project: Flink
>  Issue Type: Sub-task
>  Components: flink-docker
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
>
> The flink-docker's Dockerfile and docker-entrypoint.sh currently use shell 
> scripting techniques with grep and sed for configuration reading and 
> modification. This method is not suitable for the standard YAML configuration 
> format.
> Following the changes introduced in FLINK-33721, we should update 
> flink-docker's Dockerfile and docker-entrypoint.sh to use BashJavaUtils for 
> Flink configuration reading and writing.





Re: [PR] [FLINK-34205] Use BashJavaUtils for Flink configuration management [flink-docker]

2024-01-23 Thread via GitHub


zhuzhurk merged PR #175:
URL: https://github.com/apache/flink-docker/pull/175





Re: [PR] [FLINK-34210][Runtime/Checkpointing] Fix DefaultExecutionGraphBuilder#isCheckpointingEnabled return wrong value when checkpoint disabled [flink]

2024-01-23 Thread via GitHub


mayuehappy commented on PR #24173:
URL: https://github.com/apache/flink/pull/24173#issuecomment-1907411370

   @flinkbot run azure
   





Re: [PR] [FLINK-34055][table] Introduce a new annotation for named parameters [flink]

2024-01-23 Thread via GitHub


fsk119 commented on code in PR #24106:
URL: https://github.com/apache/flink/pull/24106#discussion_r1464215746


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/annotation/ProcedureHint.java:
##
@@ -139,6 +139,18 @@
  */
 String[] argumentNames() default {""};
 
+/**
+ * Explicitly lists the argument that a procedure takes as input, 
including their names, types,
+ * and whether they are optional.
+ *
+ * By default, it is recommended to use this parameter instead of 
{@link #input()}. If the
+ * type of argumentHint is not defined, it will be considered an invalid 
argument and an
+ * exception will be thrown. Additionally, both this parameter and {@link 
#input()} cannot be
+ * defined at the same time. If neither arguments nor {@link #input()} are 
defined,
+ * reflection-based extraction will be used.
+ */
+ArgumentHint[] arguments() default {};

Review Comment:
   ditto



##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/extraction/BaseMappingExtractor.java:
##
@@ -362,7 +363,17 @@ static Optional 
tryExtractInputGroupArgument(
 Method method, int paramPos) {
 final Parameter parameter = method.getParameters()[paramPos];
 final DataTypeHint hint = parameter.getAnnotation(DataTypeHint.class);
-if (hint != null) {
+final ArgumentHint argumentHint = 
parameter.getAnnotation(ArgumentHint.class);
+if (hint != null && argumentHint != null) {
+throw extractionError(
+"ArgumentHint and DataTypeHint cannot be declared at the 
same time.");

Review Comment:
   nit: it's better to notify the user which parameter is incorrect.
   ```
   throw extractionError(
           String.format(
                   "ArgumentHint and DataTypeHint cannot be declared at the same time for parameter '%s'.",
                   parameter.getName()));
   ```



##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/inference/TypeInferenceOperandChecker.java:
##
@@ -114,6 +117,24 @@ public boolean isOptional(int i) {
 return false;
 }
 
+@Override
+public List<RelDataType> paramTypes(RelDataTypeFactory typeFactory) {
+throw new IllegalStateException("Should not be called");
+}
+
+@Override
+public List<String> paramNames() {
+return typeInference

Review Comment:
   Add a more detailed exception: print the currently required method signature 
and the candidate method signatures.
   
   BTW, do you mean the user cannot use it like this?
   ```
   public static class NamedArgumentsScalarFunction extends ScalarFunction {
       @FunctionHint(
               output = @DataTypeHint("STRING"),
               arguments = {
                   @ArgumentHint(name = "in1", type = @DataTypeHint("int")),
                   @ArgumentHint(name = "in2", type = @DataTypeHint("int"), isOptional = true)
               })
       public String eval(Integer arg1, Integer arg2) {
           return (arg1 + ": " + arg2);
       }

       @FunctionHint(
               output = @DataTypeHint("STRING"),
               arguments = {
                   @ArgumentHint(name = "in1", type = @DataTypeHint("int")),
                   @ArgumentHint(name = "in2", type = @DataTypeHint("int"), isOptional = true),
                   @ArgumentHint(name = "in3", type = @DataTypeHint("int"), isOptional = true)
               })
       public String eval(Integer arg1, Integer arg2, Integer arg3) {
           return (arg1 + ": " + arg2);
       }
   }
   ```
   
   ```
   SELECT udf(in1 => 1);
   ```
   



##
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/extraction/TypeInferenceExtractorTest.java:
##
@@ -1585,4 +1629,106 @@ public void eval(CompletableFuture f, int[] i) {}
 private static class DataTypeHintOnScalarFunctionAsync extends 
AsyncScalarFunction {
 public void eval(@DataTypeHint("ROW") 
CompletableFuture f) {}
 }
+
+private static class ArgumentHintScalarFunction extends ScalarFunction {
+@FunctionHint(
+arguments = {
+@ArgumentHint(type = @DataTypeHint("STRING"), name = "f1"),
+@ArgumentHint(type = @DataTypeHint("INTEGER"), name = "f2")
+})
+public String eval(String f1, int f2) {
+return "";
+}
+}
+
+private static class ArgumentsAndInputsScalarFunction extends 
ScalarFunction {
+@FunctionHint(
+arguments = {
+@ArgumentHint(type = @DataTypeHint("STRING"), name = "f1"),
+@ArgumentHint(type = @DataTypeHint("INTEGER"), name = "f2")
+},
+input = {@DataTypeHint("STRING"), 

Re: [PR] [FLINK-24024][table-planner] support session window tvf in plan [flink]

2024-01-23 Thread via GitHub


xuyangzhong commented on PR #23505:
URL: https://github.com/apache/flink/pull/23505#issuecomment-1907407611

   @flinkbot run azure





[jira] [Commented] (FLINK-34174) Remove SlotMatchingStrategy related logic

2024-01-23 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810199#comment-17810199
 ] 

Rui Fan commented on FLINK-34174:
-

Merged to master (1.19.0) via: 551bb9ebdc484a41951bb3aa9b88430cdca1c0d8

> Remove SlotMatchingStrategy related logic
> -
>
> Key: FLINK-34174
> URL: https://issues.apache.org/jira/browse/FLINK-34174
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Task
>Reporter: RocMarshal
>Assignee: RocMarshal
>Priority: Minor
>  Labels: pull-request-available
>






[jira] [Resolved] (FLINK-34174) Remove SlotMatchingStrategy related logic

2024-01-23 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-34174.
-
Fix Version/s: 1.19.0
   Resolution: Fixed

> Remove SlotMatchingStrategy related logic
> -
>
> Key: FLINK-34174
> URL: https://issues.apache.org/jira/browse/FLINK-34174
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Task
>Reporter: RocMarshal
>Assignee: RocMarshal
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






Re: [PR] [FLINK-34174][runtime] Remove SlotMatchingStrategy related logic [flink]

2024-01-23 Thread via GitHub


1996fanrui merged PR #24158:
URL: https://github.com/apache/flink/pull/24158





Re: [PR] [FLINK-34174][runtime] Remove SlotMatchingStrategy related logic [flink]

2024-01-23 Thread via GitHub


1996fanrui commented on PR #24158:
URL: https://github.com/apache/flink/pull/24158#issuecomment-1907376784

   No comments for 2 days, merging~





[jira] [Resolved] (FLINK-33865) exponential-delay.attempts-before-reset-backoff doesn't work when it's set in Job Configuration

2024-01-23 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-33865.
-
Fix Version/s: 1.19.0
   Resolution: Fixed

> exponential-delay.attempts-before-reset-backoff doesn't work when it's set in 
> Job Configuration
> ---
>
> Key: FLINK-33865
> URL: https://issues.apache.org/jira/browse/FLINK-33865
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: image-2023-12-17-17-56-59-138.png
>
>
> exponential-delay.attempts-before-reset-backoff doesn't work when it's set in 
> Job Configuration.
> h2. Reason:
> When exponential-delay.attempts-before-reset-backoff is set via the job 
> Configuration instead of the cluster configuration, ExecutionConfig#configure 
> calls RestartStrategies#parseConfiguration to create the 
> ExponentialDelayRestartStrategyConfiguration, and then 
> RestartBackoffTimeStrategyFactoryLoader#getJobRestartStrategyFactory creates 
> the ExponentialDelayRestartBackoffTimeStrategyFactory from the 
> ExponentialDelayRestartStrategyConfiguration.
> Since 1.19, RestartStrategies and RestartStrategyConfiguration are deprecated, 
> so ExponentialDelayRestartStrategyConfiguration doesn't support 
> exponential-delay.attempts-before-reset-backoff. So if we set 
> exponential-delay.attempts-before-reset-backoff at the job level, it won't be 
> picked up.
> h2. Solution
> If we use the ExponentialDelayRestartStrategyConfiguration to save 
> restartStrategy-related options in the ExecutionConfig, all new options set at 
> the job level will be missed.
> So we can use the Configuration to save the restartStrategy options inside 
> the ExecutionConfig.
> !image-2023-12-17-17-56-59-138.png|width=1212,height=256!
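For reference, the job-level setting that triggers this bug might look like the following fragment (key names as referenced in this issue; `8` is an arbitrary example value, and the exact spelling of the `restart-strategy` key may differ across Flink versions):

```yaml
restart-strategy: exponential-delay
restart-strategy.exponential-delay.attempts-before-reset-backoff: 8
```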





[jira] [Commented] (FLINK-33865) exponential-delay.attempts-before-reset-backoff doesn't work when it's set in Job Configuration

2024-01-23 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810198#comment-17810198
 ] 

Rui Fan commented on FLINK-33865:
-

Merged to master (1.19.0) via:
* 393ef1cc43942ee4652e20206472b07b7794297b
* 5c895aa670bb51593713d66f7bf380b4e92575d9

> exponential-delay.attempts-before-reset-backoff doesn't work when it's set in 
> Job Configuration
> ---
>
> Key: FLINK-33865
> URL: https://issues.apache.org/jira/browse/FLINK-33865
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-12-17-17-56-59-138.png
>
>
> exponential-delay.attempts-before-reset-backoff doesn't work when it's set in 
> Job Configuration.
> h2. Reason:
> When exponential-delay.attempts-before-reset-backoff is set via the job 
> Configuration instead of the cluster configuration, ExecutionConfig#configure 
> calls RestartStrategies#parseConfiguration to create the 
> ExponentialDelayRestartStrategyConfiguration, and then 
> RestartBackoffTimeStrategyFactoryLoader#getJobRestartStrategyFactory creates 
> the ExponentialDelayRestartBackoffTimeStrategyFactory from the 
> ExponentialDelayRestartStrategyConfiguration.
> Since 1.19, RestartStrategies and RestartStrategyConfiguration are deprecated, 
> so ExponentialDelayRestartStrategyConfiguration doesn't support 
> exponential-delay.attempts-before-reset-backoff. So if we set 
> exponential-delay.attempts-before-reset-backoff at the job level, it won't be 
> picked up.
> h2. Solution
> If we use the ExponentialDelayRestartStrategyConfiguration to save 
> restartStrategy-related options in the ExecutionConfig, all new options set at 
> the job level will be missed.
> So we can use the Configuration to save the restartStrategy options inside 
> the ExecutionConfig.
> !image-2023-12-17-17-56-59-138.png|width=1212,height=256!





Re: [PR] [FLINK-33865][runtime] Adding an ITCase to ensure `exponential-delay.attempts-before-reset-backoff` works well [flink]

2024-01-23 Thread via GitHub


1996fanrui merged PR #23942:
URL: https://github.com/apache/flink/pull/23942





Re: [PR] [FLINK-33865][runtime] Adding an ITCase to ensure `exponential-delay.attempts-before-reset-backoff` works well [flink]

2024-01-23 Thread via GitHub


1996fanrui commented on PR #23942:
URL: https://github.com/apache/flink/pull/23942#issuecomment-1907374215

   Thanks @RocMarshal for the review!
   
   > Thanks @1996fanrui for the contribution, But the CI failed. could you 
check it? LGTM +1.
   
   I checked it, and it's not related to this PR. I created FLINK-34218 to 
follow up on it.
   
   I re-ran the CI; it's green now, merging~





[jira] [Commented] (FLINK-34218) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-23 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810197#comment-17810197
 ] 

Rui Fan commented on FLINK-34218:
-

I ran it locally more than 30 times; all runs were successful.

> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34218
> URL: https://issues.apache.org/jira/browse/FLINK-34218
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Rui Fan
>Priority: Major
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56740=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=ae4f8708-9994-57d3-c2d7-b892156e7812





Re: [PR] [FLINK-33577][dist] Change the default config file to config.yaml in flink-dist. [flink]

2024-01-23 Thread via GitHub


JunRuiLee commented on PR #24177:
URL: https://github.com/apache/flink/pull/24177#issuecomment-1907348848

   Hi @HuangXingBo, this PR includes some PyFlink changes related to the 
adoption of the new configuration file, config.yaml. Could you please help review 
it?





[jira] [Commented] (FLINK-33233) Null point exception when non-native udf used in join condition

2024-01-23 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810188#comment-17810188
 ] 

luoyuxia commented on FLINK-33233:
--

1.17: 
https://github.com/apache/flink/commit/44a697c2537de02b96ca1044498e3f930dd6fdc7

1.18: https://github.com/apache/flink/commit/369fae70399798c1afa5041dec02d666b6d98008

> Null point exception when non-native udf used in join condition
> ---
>
> Key: FLINK-33233
> URL: https://issues.apache.org/jira/browse/FLINK-33233
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.17.0
>Reporter: yunfan
>Assignee: yunfan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Any non-native UDF used in a Hive-parser join condition will cause a 
> NullPointerException.
> It can be reproduced by adding the following test to 
> {code:java}
> org.apache.flink.connectors.hive.HiveDialectQueryITCase{code}
>  
> {code:java}
> // Add follow code to org.apache.flink.connectors.hive.HiveDialectQueryITCase
> @Test
> public void testUdfInJoinCondition() throws Exception {
> List<Row> result = CollectionUtil.iteratorToList(tableEnv.executeSql(
> "select foo.y, bar.I from bar join foo on hiveudf(foo.x) = bar.I 
> where bar.I > 1").collect());
> assertThat(result.toString())
> .isEqualTo("[+I[2, 2]]");
> } {code}





[jira] [Resolved] (FLINK-33233) Null point exception when non-native udf used in join condition

2024-01-23 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-33233.
--
Fix Version/s: 1.17.3
   1.18.2
   Resolution: Fixed

> Null point exception when non-native udf used in join condition
> ---
>
> Key: FLINK-33233
> URL: https://issues.apache.org/jira/browse/FLINK-33233
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.17.0
>Reporter: yunfan
>Assignee: yunfan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.17.3, 1.18.2
>
>
> Any non-native UDF used in a Hive-parser join condition will cause a 
> NullPointerException.
> It can be reproduced by adding the following test to 
> {code:java}
> org.apache.flink.connectors.hive.HiveDialectQueryITCase{code}
>  
> {code:java}
> // Add follow code to org.apache.flink.connectors.hive.HiveDialectQueryITCase
> @Test
> public void testUdfInJoinCondition() throws Exception {
> List<Row> result = CollectionUtil.iteratorToList(tableEnv.executeSql(
> "select foo.y, bar.I from bar join foo on hiveudf(foo.x) = bar.I 
> where bar.I > 1").collect());
> assertThat(result.toString())
> .isEqualTo("[+I[2, 2]]");
> } {code}





[jira] [Commented] (FLINK-34218) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-23 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810189#comment-17810189
 ] 

Rui Fan commented on FLINK-34218:
-

Hi [~srichter], AutoRescalingITCase#testCheckpointRescalingInKeyedState fails 
occasionally; would you mind taking a look? Thanks!

I'm not sure whether the bug is in the current Flink core logic or in this 
test.

> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34218
> URL: https://issues.apache.org/jira/browse/FLINK-34218
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Rui Fan
>Priority: Major
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56740=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=ae4f8708-9994-57d3-c2d7-b892156e7812





Re: [PR] [FLINK-33233][hive] Fix NPE when non-native udf is used in join condition with Hive dialect, back port to 1.18 [flink]

2024-01-23 Thread via GitHub


luoyuxia merged PR #24149:
URL: https://github.com/apache/flink/pull/24149





Re: [PR] [FLINK-33233][hive] Fix NPE when non-native udf is used in join condition with Hive dialect, back port to 1.17 [flink]

2024-01-23 Thread via GitHub


luoyuxia merged PR #24148:
URL: https://github.com/apache/flink/pull/24148





[jira] [Updated] (FLINK-31090) Flink SQL fail to select INTERVAL

2024-01-23 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-31090:
-
Fix Version/s: 1.18.0

> Flink SQL fail to select INTERVAL 
> --
>
> Key: FLINK-31090
> URL: https://issues.apache.org/jira/browse/FLINK-31090
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.18.0
>
>
> Can be reproduced in CalcITCase with the following code:
>  
> {code:java}
> @Test
> def testSelectInterval(): Unit = {
>   checkResult("SELECT INTERVAL 2 DAY", data3)
> }
> {code}
>  
>  
> It'll throw the exception:
> org.apache.flink.table.planner.codegen.CodeGenException: Interval expression 
> type expected.
>     at 
> org.apache.flink.table.planner.codegen.CodeGenUtils$.requireTimeInterval(CodeGenUtils.scala:419)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:549)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:490)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
>     at org.apache.calcite.rex.RexCall.accept(RexCall.java:189)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:143)
>     at 
> org.apache.flink.table.planner.codegen.ExpressionReducer.$anonfun$reduce$2(ExpressionReducer.scala:81)
>     at scala.collection.immutable.List.map(List.scala:282)
>     at 
> org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:81)
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31090) Flink SQL fail to select INTERVAL

2024-01-23 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia closed FLINK-31090.

Resolution: Fixed

> Flink SQL fail to select INTERVAL 
> --
>
> Key: FLINK-31090
> URL: https://issues.apache.org/jira/browse/FLINK-31090
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: luoyuxia
>Priority: Major
>
> Can be reproduced in CalcITCase with the following code:
>  
> {code:java}
> @Test
> def testSelectInterval(): Unit = {
>   checkResult("SELECT INTERVAL 2 DAY", data3)
> }
> {code}
>  
>  
> It throws the following exception:
> org.apache.flink.table.planner.codegen.CodeGenException: Interval expression 
> type expected.
>     at 
> org.apache.flink.table.planner.codegen.CodeGenUtils$.requireTimeInterval(CodeGenUtils.scala:419)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:549)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:490)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
>     at org.apache.calcite.rex.RexCall.accept(RexCall.java:189)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:143)
>     at 
> org.apache.flink.table.planner.codegen.ExpressionReducer.$anonfun$reduce$2(ExpressionReducer.scala:81)
>     at scala.collection.immutable.List.map(List.scala:282)
>     at 
> org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:81)
>  





[jira] [Created] (FLINK-34218) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-23 Thread Rui Fan (Jira)
Rui Fan created FLINK-34218:
---

 Summary: AutoRescalingITCase#testCheckpointRescalingInKeyedState 
fails
 Key: FLINK-34218
 URL: https://issues.apache.org/jira/browse/FLINK-34218
 Project: Flink
  Issue Type: Bug
  Components: Tests
Reporter: Rui Fan


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56740=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=ae4f8708-9994-57d3-c2d7-b892156e7812





[jira] [Commented] (FLINK-31409) Hive dialect should use public interfaces in Hive connector

2024-01-23 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810187#comment-17810187
 ] 

luoyuxia commented on FLINK-31409:
--

[~libenchao] Thanks for the reminder.

> Hive dialect should use public interfaces in Hive connector
> ---
>
> Key: FLINK-31409
> URL: https://issues.apache.org/jira/browse/FLINK-31409
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Currently, the Hive dialect part of the Hive connector depends on many 
> internal interfaces in flink-table-planner and other modules. We should avoid 
> this and use the public interfaces proposed in 
> [FLIP-216|https://cwiki.apache.org/confluence/display/FLINK/FLIP-216%3A++Introduce+pluggable+dialect+and+plan+for+migrating+Hive+dialect]





[jira] [Resolved] (FLINK-31409) Hive dialect should use public interfaces in Hive connector

2024-01-23 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia resolved FLINK-31409.
--
Fix Version/s: 1.18.0
   Resolution: Fixed

> Hive dialect should use public interfaces in Hive connector
> ---
>
> Key: FLINK-31409
> URL: https://issues.apache.org/jira/browse/FLINK-31409
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Currently, the Hive dialect part of the Hive connector depends on many 
> internal interfaces in flink-table-planner and other modules. We should avoid 
> this and use the public interfaces proposed in 
> [FLIP-216|https://cwiki.apache.org/confluence/display/FLINK/FLIP-216%3A++Introduce+pluggable+dialect+and+plan+for+migrating+Hive+dialect]





[jira] [Reopened] (FLINK-31409) Hive dialect should use public interfaces in Hive connector

2024-01-23 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia reopened FLINK-31409:
--

> Hive dialect should use public interfaces in Hive connector
> ---
>
> Key: FLINK-31409
> URL: https://issues.apache.org/jira/browse/FLINK-31409
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
>
> Currently, the Hive dialect part of the Hive connector depends on many 
> internal interfaces in flink-table-planner and other modules. We should avoid 
> this and use the public interfaces proposed in 
> [FLIP-216|https://cwiki.apache.org/confluence/display/FLINK/FLIP-216%3A++Introduce+pluggable+dialect+and+plan+for+migrating+Hive+dialect]





Re: [PR] [FLINK-34144][docs] Update the documentation about dynamic source parallelism inference [flink]

2024-01-23 Thread via GitHub


flinkbot commented on PR #24184:
URL: https://github.com/apache/flink/pull/24184#issuecomment-1907311524

   
   ## CI report:
   
   * e807815c4ced6ae83dec3c251833a3d4586868cb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] introduce BufferBundle for minibatch join [flink]

2024-01-23 Thread via GitHub


xishuaidelin commented on PR #24159:
URL: https://github.com/apache/flink/pull/24159#issuecomment-1907309476

   @flinkbot run azure





[jira] [Updated] (FLINK-34144) Update the documentation and configuration description about dynamic source parallelism inference

2024-01-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34144:
---
Labels: pull-request-available  (was: )

> Update the documentation and configuration description about dynamic source 
> parallelism inference
> -
>
> Key: FLINK-34144
> URL: https://issues.apache.org/jira/browse/FLINK-34144
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.19.0
>Reporter: xingbe
>Assignee: xingbe
>Priority: Major
>  Labels: pull-request-available
>
> [FLIP-379|https://cwiki.apache.org/confluence/display/FLINK/FLIP-379%3A+Dynamic+source+parallelism+inference+for+batch+jobs#FLIP379:Dynamicsourceparallelisminferenceforbatchjobs-IntroduceDynamicParallelismInferenceinterfaceforSource]
>  introduces the new feature of dynamic source parallelism inference, and we 
> plan to update the documentation and configuration items accordingly.





[PR] [FLINK-34144][docs] Update the documentation about dynamic source parallelism inference [flink]

2024-01-23 Thread via GitHub


SinBex opened a new pull request, #24184:
URL: https://github.com/apache/flink/pull/24184

   ## What is the purpose of the change
   
   
*[FLIP-379](https://cwiki.apache.org/confluence/display/FLINK/FLIP-379%3A+Dynamic+source+parallelism+inference+for+batch+jobs#FLIP379:Dynamicsourceparallelisminferenceforbatchjobs-IntroduceDynamicParallelismInferenceinterfaceforSource)
 introduces the new feature of dynamic source parallelism inference, and we 
plan to update the documentation and configuration items accordingly.*
   
   
   ## Brief change log
   
 - *update the documentation about dynamic source parallelism inference*
   
   
   ## Verifying this change
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no )
 - The runtime per-record code paths (performance sensitive): (no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no )
 - The S3 file system connector: (no )
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
 - If yes, how is the feature documented? (docs)
   





[jira] [Commented] (FLINK-31409) Hive dialect should use public interfaces in Hive connector

2024-01-23 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810185#comment-17810185
 ] 

Benchao Li commented on FLINK-31409:


[~luoyuxia] Could you fill in the fixVersion of this issue?

> Hive dialect should use public interfaces in Hive connector
> ---
>
> Key: FLINK-31409
> URL: https://issues.apache.org/jira/browse/FLINK-31409
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
>
> Currently, the Hive dialect part of the Hive connector depends on many 
> internal interfaces in flink-table-planner and other modules. We should avoid 
> this and use the public interfaces proposed in 
> [FLIP-216|https://cwiki.apache.org/confluence/display/FLINK/FLIP-216%3A++Introduce+pluggable+dialect+and+plan+for+migrating+Hive+dialect]





Re: [PR] [FLINK-33865][runtime] Adding an ITCase to ensure `exponential-delay.attempts-before-reset-backoff` works well [flink]

2024-01-23 Thread via GitHub


1996fanrui commented on PR #23942:
URL: https://github.com/apache/flink/pull/23942#issuecomment-1907277744

   @flinkbot run azure





Re: [PR] [FLINK-23687][table-planner] Introduce partitioned lookup join to enforce input of LookupJoin to hash shuffle by lookup keys [flink]

2024-01-23 Thread via GitHub


yunfan123 commented on PR #24104:
URL: https://github.com/apache/flink/pull/24104#issuecomment-1907264935

   @LB-Yu Hello. Thanks for your comment. I have resolved the issue in the 
latest commit and supplemented it with additional unit tests.





[jira] [Resolved] (FLINK-33198) Add timestamp with local time zone support in Avro converters

2024-01-23 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-33198.

Resolution: Implemented

Implemented in master (1.19): 533ead6ae946cbc77525d276b6dea965d390181a

> Add timestamp with local time zone support in Avro converters
> -
>
> Key: FLINK-33198
> URL: https://issues.apache.org/jira/browse/FLINK-33198
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Currently, RowDataToAvroConverters doesn't handle the LogicalType 
> TIMESTAMP_WITH_LOCAL_TIME_ZONE. We should add the corresponding conversion.
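As a rough illustration of what such a conversion entails (a hypothetical stdlib-only sketch, not Flink's actual converter code): a TIMESTAMP_WITH_LOCAL_TIME_ZONE value is typically carried as a `java.time.Instant` and must be flattened to epoch microseconds for Avro's timestamp-micros logical type.

```java
import java.time.Instant;

/**
 * Hypothetical sketch (not Flink's actual converter): flatten an Instant,
 * i.e. a TIMESTAMP_WITH_LOCAL_TIME_ZONE value, to microseconds since epoch
 * as Avro's timestamp-micros logical type expects.
 */
final class InstantToAvroMicros {
    static long toEpochMicros(Instant ts) {
        // Exact arithmetic so out-of-range values throw instead of silently overflowing.
        return Math.addExact(
                Math.multiplyExact(ts.getEpochSecond(), 1_000_000L),
                ts.getNano() / 1_000L);
    }
}
```

The class and method names are made up for illustration; the real converter would also need the millisecond variant and the reverse (read) direction.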





[jira] [Updated] (FLINK-33198) Add timestamp with local time zone support in Avro converters

2024-01-23 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-33198:
---
Priority: Major  (was: Minor)

> Add timestamp with local time zone support in Avro converters
> -
>
> Key: FLINK-33198
> URL: https://issues.apache.org/jira/browse/FLINK-33198
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Currently, RowDataToAvroConverters doesn't handle the LogicalType 
> TIMESTAMP_WITH_LOCAL_TIME_ZONE. We should add the corresponding conversion.





Re: [PR] [FLINK-33198][Format/Avro] support local timezone timestamp logic type in AVRO [flink]

2024-01-23 Thread via GitHub


leonardBang merged PR #23511:
URL: https://github.com/apache/flink/pull/23511





Re: [PR] [FLINK-34132][runtime] Correct the error message and doc of AdaptiveBatch only supports all edges being BLOCKING or HYBRID_FULL/HYBRID_SELECTIVE. [flink]

2024-01-23 Thread via GitHub


wanglijie95 commented on code in PR #24118:
URL: https://github.com/apache/flink/pull/24118#discussion_r1464226470


##
docs/content/docs/deployment/elastic_scaling.md:
##
@@ -238,7 +238,7 @@ In addition, there are several related configuration 
options that may need adjus
 ### Limitations
 
 - **Batch jobs only**: Adaptive Batch Scheduler only supports batch jobs. 
Exception will be thrown if a streaming job is submitted.
-- **BLOCKING or HYBRID jobs only**: At the moment, Adaptive Batch Scheduler 
only supports jobs whose [shuffle mode]({{< ref "docs/deployment/config" 
>}}#execution-batch-shuffle-mode) is `ALL_EXCHANGES_BLOCKING / 
ALL_EXCHANGES_HYBRID_FULL / ALL_EXCHANGES_HYBRID_SELECTIVE`.
+- **BLOCKING or HYBRID jobs only**: At the moment, Adaptive Batch Scheduler 
only supports DataStream jobs whose [shuffle mode]({{< ref 
"docs/deployment/config" >}}#execution-batch-shuffle-mode) is 
`ALL_EXCHANGES_BLOCKING / ALL_EXCHANGES_HYBRID_FULL / 
ALL_EXCHANGES_HYBRID_SELECTIVE` and DataSet jobs whose `ExecutionMode` is 
`BATCH_FORCED`.

Review Comment:
   I personally think that `DataStream API` does not include `SQL/Table API`.






Re: [PR] [FLINK-33263][table-planner] Implement ParallelismProvider for sources in the table planner [flink]

2024-01-23 Thread via GitHub


libenchao commented on code in PR #24128:
URL: https://github.com/apache/flink/pull/24128#discussion_r1464205544


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/transformations/SourceTransformationWrapper.java:
##
@@ -0,0 +1,72 @@
+/*
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package org.apache.flink.streaming.api.transformations;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.streaming.api.graph.TransformationTranslator;
+
+import org.apache.flink.shaded.guava31.com.google.common.collect.Lists;
+
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * This Transformation is a phantom transformation which used to expose a 
default parallelism to

Review Comment:
   ```suggestion
* This Transformation is a phantom transformation which is used to expose a 
default parallelism to
   ```



##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/stream/sql/TableScanTest.xml:
##
@@ -727,4 +727,60 @@ Calc(select=[ts, a, b], where=[>(a, 1)], 
changelogMode=[I,UB,UA,D])
 ]]>
 
   

Re: [PR] [FLINK-24024][table-planner] support session window tvf in plan [flink]

2024-01-23 Thread via GitHub


xuyangzhong commented on code in PR #23505:
URL: https://github.com/apache/flink/pull/23505#discussion_r1464224820


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/stream/sql/agg/WindowAggregateTest.xml:
##
@@ -2614,6 +2614,66 @@ Calc(select=[window_start, window_end, a, EXPR$3, 
EXPR$4, EXPR$5, wAvg, uv])
 +- Exchange(distribution=[hash[a]])
+- WatermarkAssigner(rowtime=[rowtime], watermark=[-(rowtime, 
1000:INTERVAL SECOND)])
   +- TableSourceScan(table=[[default_catalog, 
default_database, MyTable]], fields=[a, b, c, d, e, rowtime])
+]]>

Re: [PR] [FLINK-32075][FLIP-306][Checkpoint] Delete merged files on checkpoint abort or subsumption [flink]

2024-01-23 Thread via GitHub


Zakelly commented on PR #24181:
URL: https://github.com/apache/flink/pull/24181#issuecomment-1907248389

   @fredia @masteryhx Would you please take a look? Thanks!





[jira] [Closed] (FLINK-34143) Modify the effective strategy of `execution.batch.adaptive.auto-parallelism.default-source-parallelism`

2024-01-23 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-34143.
---
Fix Version/s: 1.19.0
   Resolution: Done

master/release-1.19:
97caa3c251e416640a6f54ea103912839c346f70

> Modify the effective strategy of 
> `execution.batch.adaptive.auto-parallelism.default-source-parallelism`
> ---
>
> Key: FLINK-34143
> URL: https://issues.apache.org/jira/browse/FLINK-34143
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: xingbe
>Assignee: xingbe
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Currently, if users do not set the 
> `execution.batch.adaptive.auto-parallelism.default-source-parallelism`
>  configuration option, the AdaptiveBatchScheduler defaults to a parallelism 
> of 1 for source vertices. In 
> [FLIP-379|https://cwiki.apache.org/confluence/display/FLINK/FLIP-379%3A+Dynamic+source+parallelism+inference+for+batch+jobs#FLIP379:Dynamicsourceparallelisminferenceforbatchjobs-IntroduceDynamicParallelismInferenceinterfaceforSource],
>  the value of 
> `execution.batch.adaptive.auto-parallelism.default-source-parallelism`
>  will act as the upper bound for inferring dynamic source parallelism, so 
> continuing with the current policy is no longer appropriate.
> We plan to change the effectiveness strategy of 
> `execution.batch.adaptive.auto-parallelism.default-source-parallelism`: 
> when the user does not set this config option, we will use the value of 
> `execution.batch.adaptive.auto-parallelism.max-parallelism` as the 
> upper bound for source parallelism inference. If 
> `execution.batch.adaptive.auto-parallelism.max-parallelism` is also not 
> configured, the value of `parallelism.default` will be used as a 
> fallback.
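The fallback chain described above reads naturally as a small resolution function. The sketch below is illustrative only; the names are made up and this is not the AdaptiveBatchScheduler's actual code.

```java
/**
 * Illustrative sketch of the proposed fallback chain for the upper bound used
 * by dynamic source parallelism inference. Parameter names are hypothetical.
 */
final class SourceParallelismUpperBound {
    static int resolve(
            Integer defaultSourceParallelism, // ...default-source-parallelism (null = unset)
            Integer maxParallelism,           // ...max-parallelism (null = unset)
            int parallelismDefault) {         // parallelism.default
        if (defaultSourceParallelism != null) {
            return defaultSourceParallelism;  // explicit setting wins
        }
        if (maxParallelism != null) {
            return maxParallelism;            // first fallback
        }
        return parallelismDefault;            // final fallback
    }
}
```

For example, with none of the adaptive options set and `parallelism.default: 4`, the inferred upper bound would be 4.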





Re: [PR] [FLINK-34143] Modify the effective strategy of `execution.batch.adaptive.auto-parallelism.default-source-parallelism` [flink]

2024-01-23 Thread via GitHub


zhuzhurk closed pull request #24170: [FLINK-34143] Modify the effective 
strategy of 
`execution.batch.adaptive.auto-parallelism.default-source-parallelism`
URL: https://github.com/apache/flink/pull/24170





Re: [PR] [FLINK-34132][runtime] Correct the error message and doc of AdaptiveBatch only supports all edges being BLOCKING or HYBRID_FULL/HYBRID_SELECTIVE. [flink]

2024-01-23 Thread via GitHub


JunRuiLee commented on code in PR #24118:
URL: https://github.com/apache/flink/pull/24118#discussion_r1464218412


##
docs/content/docs/deployment/elastic_scaling.md:
##
@@ -238,7 +238,7 @@ In addition, there are several related configuration 
options that may need adjus
 ### Limitations
 
 - **Batch jobs only**: Adaptive Batch Scheduler only supports batch jobs. 
Exception will be thrown if a streaming job is submitted.
-- **BLOCKING or HYBRID jobs only**: At the moment, Adaptive Batch Scheduler 
only supports jobs whose [shuffle mode]({{< ref "docs/deployment/config" 
>}}#execution-batch-shuffle-mode) is `ALL_EXCHANGES_BLOCKING / 
ALL_EXCHANGES_HYBRID_FULL / ALL_EXCHANGES_HYBRID_SELECTIVE`.
+- **BLOCKING or HYBRID jobs only**: At the moment, Adaptive Batch Scheduler 
only supports DataStream jobs whose [shuffle mode]({{< ref 
"docs/deployment/config" >}}#execution-batch-shuffle-mode) is 
`ALL_EXCHANGES_BLOCKING / ALL_EXCHANGES_HYBRID_FULL / 
ALL_EXCHANGES_HYBRID_SELECTIVE` and DataSet jobs whose `ExecutionMode` is 
`BATCH_FORCED`.

Review Comment:
   How about `DataStream API` and `DataSet API`, which are used in many FLIPs?






Re: [PR] [FLINK-34132][runtime] Correct the error message and doc of AdaptiveBatch only supports all edges being BLOCKING or HYBRID_FULL/HYBRID_SELECTIVE. [flink]

2024-01-23 Thread via GitHub


wanglijie95 commented on code in PR #24118:
URL: https://github.com/apache/flink/pull/24118#discussion_r1464214787


##
docs/content/docs/deployment/elastic_scaling.md:
##
@@ -238,7 +238,7 @@ In addition, there are several related configuration 
options that may need adjus
 ### Limitations
 
 - **Batch jobs only**: Adaptive Batch Scheduler only supports batch jobs. 
Exception will be thrown if a streaming job is submitted.
-- **BLOCKING or HYBRID jobs only**: At the moment, Adaptive Batch Scheduler 
only supports jobs whose [shuffle mode]({{< ref "docs/deployment/config" 
>}}#execution-batch-shuffle-mode) is `ALL_EXCHANGES_BLOCKING / 
ALL_EXCHANGES_HYBRID_FULL / ALL_EXCHANGES_HYBRID_SELECTIVE`.
+- **BLOCKING or HYBRID jobs only**: At the moment, Adaptive Batch Scheduler 
only supports DataStream jobs whose [shuffle mode]({{< ref 
"docs/deployment/config" >}}#execution-batch-shuffle-mode) is 
`ALL_EXCHANGES_BLOCKING / ALL_EXCHANGES_HYBRID_FULL / 
ALL_EXCHANGES_HYBRID_SELECTIVE` and DataSet jobs whose `ExecutionMode` is 
`BATCH_FORCED`.

Review Comment:
   I think we cannot directly use `DataStream jobs` here, because it does not 
include SQL/Table jobs.






Re: [PR] Draft: Flip 387 optional argument [flink]

2024-01-23 Thread via GitHub


flinkbot commented on PR #24183:
URL: https://github.com/apache/flink/pull/24183#issuecomment-1907229799

   
   ## CI report:
   
   * 0d402864671fb4dc34f036fc44b71252f66d63db UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[PR] Draft: Flip 387 optional argument [flink]

2024-01-23 Thread via GitHub


hackergin opened a new pull request, #24183:
URL: https://github.com/apache/flink/pull/24183

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





Re: [PR] [FLINK-34210][Runtime/Checkpointing] Fix DefaultExecutionGraphBuilder#isCheckpointingEnabled return wrong value when checkpoint disabled [flink]

2024-01-23 Thread via GitHub


masteryhx commented on PR #24173:
URL: https://github.com/apache/flink/pull/24173#issuecomment-1907224952

   @flinkbot run azure
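The PR under discussion fixes DefaultExecutionGraphBuilder#isCheckpointingEnabled returning the wrong value when checkpointing is disabled. A hedged sketch of the intended predicate, assuming Flink's convention that a non-positive checkpoint interval means disabled (this is not the actual builder code):

```java
/**
 * Hedged sketch of the predicate the fix concerns: checkpointing counts as
 * enabled only when a positive checkpoint interval is configured. Class and
 * method are illustrative, not the actual DefaultExecutionGraphBuilder code.
 */
final class CheckpointingEnabledSketch {
    static boolean isCheckpointingEnabled(long checkpointIntervalMs) {
        // A non-positive interval is the conventional "disabled" marker.
        return checkpointIntervalMs > 0;
    }
}
```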





Re: [PR] [FLINK-34120] Introduce unified serialization config option for all Kryo, POJO and customized serializers [flink]

2024-01-23 Thread via GitHub


X-czh commented on PR #24182:
URL: https://github.com/apache/flink/pull/24182#issuecomment-1907215961

   @reswqa Could you help review it? Thx~





Re: [PR] [FLINK-34115][table-planner] Fix unstable TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate [flink]

2024-01-23 Thread via GitHub


LadyForest commented on PR #24178:
URL: https://github.com/apache/flink/pull/24178#issuecomment-1907215660

   @flinkbot run azure





Re: [PR] [FLINK-34055][table] Introduce a new annotation for named parameters [flink]

2024-01-23 Thread via GitHub


hackergin commented on code in PR #24106:
URL: https://github.com/apache/flink/pull/24106#discussion_r1464193652


##
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/extraction/TypeInferenceExtractorTest.java:
##
@@ -427,8 +428,7 @@ private static Stream functionSpecs() {
 "Could not find a publicly accessible method 
named 'eval'."),
 
 // named arguments with overloaded function
-TestSpec.forScalarFunction(NamedArgumentsScalarFunction.class)
-.expectNamedArguments("n"),
+TestSpec.forScalarFunction(NamedArgumentsScalarFunction.class),

Review Comment:
   I have added relevant comments. We expect `namedArgument` to be null, so 
there is no `expectXXX` here.






[jira] [Commented] (FLINK-33958) Implement restore tests for IntervalJoin node

2024-01-23 Thread Bonnie Varghese (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810141#comment-17810141
 ] 

Bonnie Varghese commented on FLINK-33958:
-

Apologies for the delayed response. I'm trying to reproduce this locally but I'm 
unable to. I tried running the tests in a loop until failure in IntelliJ to 
reproduce it; however, that did not work. One option I can think of is adjusting 
the input data to make the output more predictable. Let me try that. If that 
doesn't work, I can disable that test temporarily or revert the commit.

> Implement restore tests for IntervalJoin node
> -
>
> Key: FLINK-33958
> URL: https://issues.apache.org/jira/browse/FLINK-33958
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34135) A number of ci failures with Access to the path '.../_work/_temp/containerHandlerInvoker.js' is denied.

2024-01-23 Thread Jeyhun Karimov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810104#comment-17810104
 ] 

Jeyhun Karimov commented on FLINK-34135:


Hi [~mapohl], yes, CI seems stable again; we can close the issue.

> A number of ci failures with Access to the path 
> '.../_work/_temp/containerHandlerInvoker.js' is denied.
> ---
>
> Key: FLINK-34135
> URL: https://issues.apache.org/jira/browse/FLINK-34135
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Assignee: Jeyhun Karimov
>Priority: Critical
>  Labels: test-stability
>
> There is a number of builds failing with something like 
> {noformat}
> ##[error]Access to the path 
> '/home/agent03/myagent/_work/_temp/containerHandlerInvoker.js' is denied.
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56490=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=fb588352-ef18-568d-b447-699986250ccb
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=554d7c3f-d38e-55f4-96b4-ada3a9cb7d6f=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=fa307d6d-91b1-5ab6-d460-ef50f552b1fe=1798d435-832b-51fe-a9ad-efb9abf4ab04=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=a1ac4ce4-9a4f-5fdb-3290-7e163fba19dc=e4c57254-ec06-5788-3f8e-5ad5dffb418e=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=56881383-f398-5091-6b3b-22a7eeb7cfa8=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=2d9c27d0-8dbb-5be9-7271-453f74f48ab3=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=162f98f7-8967-5f47-2782-a1e178ec2ad3=c9934c56-710d-5f85-d2b8-28ec1fd700ed=9





[jira] [Closed] (FLINK-34135) A number of ci failures with Access to the path '.../_work/_temp/containerHandlerInvoker.js' is denied.

2024-01-23 Thread Jeyhun Karimov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeyhun Karimov closed FLINK-34135.
--
Resolution: Fixed

> A number of ci failures with Access to the path 
> '.../_work/_temp/containerHandlerInvoker.js' is denied.
> ---
>
> Key: FLINK-34135
> URL: https://issues.apache.org/jira/browse/FLINK-34135
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Assignee: Jeyhun Karimov
>Priority: Critical
>  Labels: test-stability
>
> There is a number of builds failing with something like 
> {noformat}
> ##[error]Access to the path 
> '/home/agent03/myagent/_work/_temp/containerHandlerInvoker.js' is denied.
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56490=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=fb588352-ef18-568d-b447-699986250ccb
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=554d7c3f-d38e-55f4-96b4-ada3a9cb7d6f=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=fa307d6d-91b1-5ab6-d460-ef50f552b1fe=1798d435-832b-51fe-a9ad-efb9abf4ab04=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=a1ac4ce4-9a4f-5fdb-3290-7e163fba19dc=e4c57254-ec06-5788-3f8e-5ad5dffb418e=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=56881383-f398-5091-6b3b-22a7eeb7cfa8=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=2d9c27d0-8dbb-5be9-7271-453f74f48ab3=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=162f98f7-8967-5f47-2782-a1e178ec2ad3=c9934c56-710d-5f85-d2b8-28ec1fd700ed=9





[jira] [Closed] (FLINK-34098) Not enough Azure Pipeline CI runners available?

2024-01-23 Thread Jeyhun Karimov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeyhun Karimov closed FLINK-34098.
--
Resolution: Fixed

It was a disk-full issue. I reported the recurring error (that leads to the big 
Maven log file) in FLINK-34155.

> Not enough Azure Pipeline CI runners available?
> ---
>
> Key: FLINK-34098
> URL: https://issues.apache.org/jira/browse/FLINK-34098
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.18.0, 1.17.2, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Jeyhun Karimov
>Priority: Critical
>
> CI takes longer than usual. There might be an issue with the runner pool (on 
> the Alibaba VMs)?





Re: [PR] [FLINK-33803] Set observedGeneration at end of reconciliation [flink-kubernetes-operator]

2024-01-23 Thread via GitHub


justin-chen commented on code in PR #755:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/755#discussion_r1463969344


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/ReconciliationUtils.java:
##
@@ -126,6 +126,9 @@ private static  void 
updateStatusForSpecReconcil
 status.setError(null);
 
reconciliationStatus.setReconciliationTimestamp(clock.instant().toEpochMilli());
 
+// Set observedGeneration
+status.setObservedGeneration(target.getMetadata().getGeneration());

Review Comment:
   @gyfora I see, thanks for the review. I moved the setting of 
`observedGeneration` to the same time/place as 
`lastReconciledSpec.meta.generation`, and replaced the internal usage of the 
latter in two locations. Tests pass; however, please advise if there is anything 
I may have overlooked.






Re: [PR] [FLINK-33545][Connectors/Kafka] KafkaSink implementation can cause dataloss during broker issue when not using EXACTLY_ONCE if there's any batching [flink-connector-kafka]

2024-01-23 Thread via GitHub


hhktseng commented on PR #70:
URL: 
https://github.com/apache/flink-connector-kafka/pull/70#issuecomment-1906723207

   Created the patch and applied it after syncing to the latest commit, then 
replaced the forked branch with the latest sync + patch.





Re: [PR] [FLINK-33198][Format/Avro] support local timezone timestamp logic type in AVRO [flink]

2024-01-23 Thread via GitHub


HuangZhenQiu commented on PR #23511:
URL: https://github.com/apache/flink/pull/23511#issuecomment-1906612609

   @flinkbot run azure
   





Re: [PR] [FLINK-33198][Format/Avro] support local timezone timestamp logic type in AVRO [flink]

2024-01-23 Thread via GitHub


HuangZhenQiu commented on PR #23511:
URL: https://github.com/apache/flink/pull/23511#issuecomment-1906611710

   Hit a code style issue that has been fixed in the latest master. Rebasing 
onto master again.





[jira] [Commented] (FLINK-34135) A number of ci failures with Access to the path '.../_work/_temp/containerHandlerInvoker.js' is denied.

2024-01-23 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810041#comment-17810041
 ] 

Matthias Pohl commented on FLINK-34135:
---

[~jeyhunkarimov] can't we close this issue now? Or is there still some work 
being done?

> A number of ci failures with Access to the path 
> '.../_work/_temp/containerHandlerInvoker.js' is denied.
> ---
>
> Key: FLINK-34135
> URL: https://issues.apache.org/jira/browse/FLINK-34135
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Assignee: Jeyhun Karimov
>Priority: Critical
>  Labels: test-stability
>
> There is a number of builds failing with something like 
> {noformat}
> ##[error]Access to the path 
> '/home/agent03/myagent/_work/_temp/containerHandlerInvoker.js' is denied.
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56490=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=fb588352-ef18-568d-b447-699986250ccb
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=554d7c3f-d38e-55f4-96b4-ada3a9cb7d6f=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=fa307d6d-91b1-5ab6-d460-ef50f552b1fe=1798d435-832b-51fe-a9ad-efb9abf4ab04=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=a1ac4ce4-9a4f-5fdb-3290-7e163fba19dc=e4c57254-ec06-5788-3f8e-5ad5dffb418e=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=56881383-f398-5091-6b3b-22a7eeb7cfa8=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=2d9c27d0-8dbb-5be9-7271-453f74f48ab3=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=162f98f7-8967-5f47-2782-a1e178ec2ad3=c9934c56-710d-5f85-d2b8-28ec1fd700ed=9





[jira] [Commented] (FLINK-34007) Flink Job stuck in suspend state after losing leadership in HA Mode

2024-01-23 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17810038#comment-17810038
 ] 

Matthias Pohl commented on FLINK-34007:
---

Just checking whether a new instance of {{LeaderElector}} was created is tricky 
because {{LeaderElector}} is coupled with the k8s backend. We would either have 
to mock {{LeaderElector}} or work with some backend. Anyway, I came up with a 
quite straightforward ITCase and added it to the PR, which is ready to be 
reviewed now.

> Flink Job stuck in suspend state after losing leadership in HA Mode
> ---
>
> Key: FLINK-34007
> URL: https://issues.apache.org/jira/browse/FLINK-34007
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.18.1, 1.18.2
>Reporter: Zhenqiu Huang
>Priority: Blocker
>  Labels: pull-request-available
> Attachments: Debug.log, LeaderElector-Debug.json, job-manager.log
>
>
> The observation is that Job manager goes to suspend state with a failed 
> container not able to register itself to resource manager after timeout.
> JM Log, see attached
>  





Re: [PR] [FLINK-34120] Introduce unified serialization config option for all Kryo, POJO and customized serializers [flink]

2024-01-23 Thread via GitHub


flinkbot commented on PR #24182:
URL: https://github.com/apache/flink/pull/24182#issuecomment-1906494842

   
   ## CI report:
   
   * db0ed2d1e4f59006cb2f6ef491d143390a5504a9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Created] (FLINK-34217) Update Serialization-related doc with the new way of configuration

2024-01-23 Thread Zhanghao Chen (Jira)
Zhanghao Chen created FLINK-34217:
-

 Summary: Update Serialization-related doc with the new way of 
configuration
 Key: FLINK-34217
 URL: https://issues.apache.org/jira/browse/FLINK-34217
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Affects Versions: 1.19.0
Reporter: Zhanghao Chen








Re: [PR] [FLINK-34120] Introduce unified serialization config option for all Kryo, POJO and customized serializers [flink]

2024-01-23 Thread via GitHub


X-czh commented on code in PR #24182:
URL: https://github.com/apache/flink/pull/24182#discussion_r1463601347


##
flink-core/src/main/java/org/apache/flink/api/common/serialization/SerializerConfig.java:
##
@@ -229,8 +233,9 @@ public LinkedHashSet> getRegisteredPojoTypes() {
 }
 
 /** Returns the registered type info factories. */
-public LinkedHashMap, Class>> 
getRegisteredTypeFactories() {
-return registeredTypeFactories;
+public LinkedHashMap, Class>>

Review Comment:
   The type should be `Class>` instead of 
`Class>` for proper functioning. Updated in the FLIP as well






Re: [PR] [FLINK-34120] Introduce unified serialization config option for all Kryo, POJO and customized serializers [flink]

2024-01-23 Thread via GitHub


X-czh commented on code in PR #24182:
URL: https://github.com/apache/flink/pull/24182#discussion_r1463599214


##
flink-core/src/main/java/org/apache/flink/api/common/serialization/SerializerConfig.java:
##
@@ -229,8 +233,9 @@ public LinkedHashSet> getRegisteredPojoTypes() {
 }
 
 /** Returns the registered type info factories. */
-public LinkedHashMap, Class>> 
getRegisteredTypeFactories() {
-return registeredTypeFactories;
+public LinkedHashMap, Class>>
+getRegisteredTypeInfoFactories() {
+return registeredTypeInfoFactories;

Review Comment:
   `getRegisteredTypeFactories` -> `getRegisteredTypeInfoFactories`: this is a 
typo in the method name from the previous MR, fix it here






Re: [PR] [FLINK-34120] Introduce unified serialization config option for all Kryo, POJO and customized serializers [flink]

2024-01-23 Thread via GitHub


X-czh commented on code in PR #24182:
URL: https://github.com/apache/flink/pull/24182#discussion_r1463599214


##
flink-core/src/main/java/org/apache/flink/api/common/serialization/SerializerConfig.java:
##
@@ -229,8 +233,9 @@ public LinkedHashSet> getRegisteredPojoTypes() {
 }
 
 /** Returns the registered type info factories. */
-public LinkedHashMap, Class>> 
getRegisteredTypeFactories() {
-return registeredTypeFactories;
+public LinkedHashMap, Class>>
+getRegisteredTypeInfoFactories() {
+return registeredTypeInfoFactories;

Review Comment:
   This is a typo in the method name from the previous MR, fix it here






[jira] [Updated] (FLINK-34120) Introduce unified serialization config option for all Kryo, POJO and customized serializers

2024-01-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34120:
---
Labels: pull-request-available  (was: )

> Introduce unified serialization config option for all Kryo, POJO and 
> customized serializers
> ---
>
> Key: FLINK-34120
> URL: https://issues.apache.org/jira/browse/FLINK-34120
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System, Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
>






[PR] [FLINK-34120] Introduce unified serialization config option for all Kryo, POJO and customized serializers [flink]

2024-01-23 Thread via GitHub


X-czh opened a new pull request, #24182:
URL: https://github.com/apache/flink/pull/24182

   
   
   ## What is the purpose of the change
   
   Introduce a unified serialization config option for all Kryo, POJO, and 
customized serializers, enabling parameterized serialization configuration 
purely through config.
   
   ## Brief change log
   
   1. Introduce `pipeline.serialization-config`, a unified serialization config 
option for all Kryo, POJO and customized serializers.
   2. For POJO & Kryo serializers, the config is parsed and dispatched to the 
existing `SerializerConfig#registerPojoType`, 
`SerializerConfig#registerKryoType`, 
`SerializerConfig#addDefaultKryoSerializer`, and 
`SerializerConfig#registerTypeWithKryoSerializer` methods.
   3. For customized serializers, the specified type info factories are 
registered in the global static factory map of `TypeExtractor` for now, so that 
they can be accessed from the static methods of `TypeExtractor` where 
`SerializerConfig` is currently not accessible. The plan is to migrate the 
static methods to take `SerializerConfig` as one of the arguments in v1.20.
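
   The change log above describes the new option only in prose. As a 
hypothetical sketch of what entries might look like in `flink-conf.yaml` 
(the option name comes from the PR description, but the entry syntax and all 
class names here are illustrative assumptions, not taken from the PR):

   ```yaml
   # Illustrative only: maps a data type to the serializer family that
   # should handle it (pojo, kryo, or a custom TypeInfoFactory).
   pipeline.serialization-config:
     - org.example.MyPojo: {type: pojo}
     - org.example.MyKryoType: {type: kryo, kryo-type: registered}
     - org.example.MyCustomType: {type: typeinfo, class: org.example.MyCustomTypeInfoFactory}
   ```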
   
   ## Verifying this change
   
   Added tests in  `SerializerConfigTest` on parsing different types of 
serializer config, both legal and illegal.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no) no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no) yes
 - The serializers: (yes / no / don't know) yes
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know) no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) 
no
 - The S3 file system connector: (yes / no / don't know) no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no) yes
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented) docs
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34190][FLIP-416][checkpoint] Deprecate RestoreMode#LEGACY [flink]

2024-01-23 Thread via GitHub


Zakelly commented on PR #24169:
URL: https://github.com/apache/flink/pull/24169#issuecomment-1906452618

   @masteryhx Thanks for your detailed review! I modified accordingly, PTAL. 
Thanks!





Re: [PR] [FLINK-32075][FLIP-306][Checkpoint] Delete merged files on checkpoint abort or subsumption [flink]

2024-01-23 Thread via GitHub


flinkbot commented on PR #24181:
URL: https://github.com/apache/flink/pull/24181#issuecomment-1906438459

   
   ## CI report:
   
   * a1c23d06eae81a4811f7b253a5b6a91609d24502 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-32075) Delete merged files on checkpoint abort or subsumption

2024-01-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32075:
---
Labels: pull-request-available  (was: )

> Delete merged files on checkpoint abort or subsumption
> --
>
> Key: FLINK-32075
> URL: https://issues.apache.org/jira/browse/FLINK-32075
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.18.0
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[PR] [FLINK-32075][FLIP-306][Checkpoint] Delete merged files on checkpoint abort or subsumption [flink]

2024-01-23 Thread via GitHub


Zakelly opened a new pull request, #24181:
URL: https://github.com/apache/flink/pull/24181

   ## What is the purpose of the change
   
   As an important part of FLIP-306, this PR enables file deletion on 
checkpoint abort or subsumption.
   
   ## Brief change log
   
 - Track all logical files
 - Wire checkpoint notification with FileMergingSnapshotManager
   
   ## Verifying this change
   
 - Added test in FileMergingSnapshotManagerTest
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), **Checkpointing**, Kubernetes/Yarn, ZooKeeper: (**yes** / no / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)
   





Re: [PR] [FLINK-34209] Migrate FileSink to the new SinkV2 API [flink]

2024-01-23 Thread via GitHub


flinkbot commented on PR #24180:
URL: https://github.com/apache/flink/pull/24180#issuecomment-1906396145

   
   ## CI report:
   
   * 568fa38ead32fd9757def6e462eb4e6680a5feaf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-34104][autoscaler] Improve the ScalingReport format of autoscaling [flink-kubernetes-operator]

2024-01-23 Thread via GitHub


mxm commented on code in PR #757:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/757#discussion_r1463522323


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/event/AutoscalerEventUtils.java:
##
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.autoscaler.event;
+
+import org.apache.flink.annotation.Experimental;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+/** The utils of {@link AutoScalerEventHandler}. */
+@Experimental
+public class AutoscalerEventUtils {
+
+private static final Pattern SCALING_REPORT_SEPARATOR = 
Pattern.compile("\\{(.+?)\\}");
+private static final Pattern VERTEX_SCALING_REPORT_PATTERN =
+Pattern.compile(
+"Vertex ID (.*?) \\| Parallelism (.*?) -> (.*?) \\| 
Processing capacity (.*?) -> (.*?) \\| Target data rate (.*)");
+
+/** Parse the scaling report from original scaling report event. */
+public static List parseVertexScalingReports(String 
scalingReport) {

Review Comment:
   Makes sense. I was thinking of exposing the overrides on the status field of 
the custom resource, but that would require pull instead of push. So no 
objections from my side to moving ahead with the proposed changes.
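
   The snippet above only shows `VERTEX_SCALING_REPORT_PATTERN` as a constant. 
As a small sketch of how that regex can be exercised with the JDK regex API 
(the class name and the sample report string are mine, not from the PR):

   ```java
   import java.util.regex.Matcher;
   import java.util.regex.Pattern;

   public class VertexScalingReportParser {

       // Same pattern as in AutoscalerEventUtils above.
       private static final Pattern VERTEX_SCALING_REPORT_PATTERN =
               Pattern.compile(
                       "Vertex ID (.*?) \\| Parallelism (.*?) -> (.*?) \\| "
                               + "Processing capacity (.*?) -> (.*?) \\| Target data rate (.*)");

       /** Returns the six captured fields, or null if the line does not match. */
       public static String[] parse(String line) {
           Matcher m = VERTEX_SCALING_REPORT_PATTERN.matcher(line);
           if (!m.matches()) {
               return null;
           }
           String[] groups = new String[m.groupCount()];
           for (int i = 0; i < m.groupCount(); i++) {
               groups[i] = m.group(i + 1);
           }
           return groups;
       }

       public static void main(String[] args) {
           // Sample string is illustrative, not a real autoscaler event.
           String sample =
                   "Vertex ID 0a448493 | Parallelism 2 -> 4 | "
                           + "Processing capacity 100.00 -> 200.00 | Target data rate 150.00";
           String[] fields = parse(sample);
           System.out.println(String.join(",", fields));
           // prints: 0a448493,2,4,100.00,200.00,150.00
       }
   }
   ```

   Note that `matches()` anchors the pattern to the whole line, so unrelated 
event text simply returns null instead of producing partial matches.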






[PR] [FLINK-34209] Migrate FileSink to the new SinkV2 API [flink]

2024-01-23 Thread via GitHub


pvary opened a new pull request, #24180:
URL: https://github.com/apache/flink/pull/24180

   ## What is the purpose of the change
   
   Currently `FileSink` uses `TwoPhaseCommittingSink` and `StatefulSink` from 
the SinkV2 API. We should migrate it to use the new FLIP-372 SinkV2 API.
   
   There are some additional changes to use the same pattern for the 
`Deprecated` methods/classes.
   
   
   ## Brief change log
   
   Move to the new API.
   
   ## Verifying this change
   
   Tests are updated where needed. The other tests should cover the existing 
code.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   




