[jira] [Commented] (FLINK-31182) CompiledPlan cannot deserialize BridgingSqlFunction with MissingTypeStrategy

2023-03-04 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696532#comment-17696532
 ] 

Jane Chan commented on FLINK-31182:
---

Hi [~twalthr], thanks for the fix!  LGTM, and feel free to merge.

> CompiledPlan cannot deserialize BridgingSqlFunction with MissingTypeStrategy
> 
>
> Key: FLINK-31182
> URL: https://issues.apache.org/jira/browse/FLINK-31182
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0, 1.18.0, 1.17.1
>Reporter: Jane Chan
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
> Attachments: screenshot-1.png
>
>
> This issue was reported on the [user mailing 
> list|https://lists.apache.org/thread/y6fgzyx330omhkr40376knw8k4oczz3s].
> The stacktrace is:
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Could not resolve 
> internal system function '$UNNEST_ROWS$1'. This is a bug, please file an 
> issue.
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.serde.RexNodeJsonDeserializer.deserializeInternalFunction(RexNodeJsonDeserializer.java:392)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.serde.RexNodeJsonDeserializer.deserializeSqlOperator(RexNodeJsonDeserializer.java:337)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.serde.RexNodeJsonDeserializer.deserializeCall(RexNodeJsonDeserializer.java:307)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.serde.RexNodeJsonDeserializer.deserialize(RexNodeJsonDeserializer.java:146)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.serde.RexNodeJsonDeserializer.deserialize(RexNodeJsonDeserializer.java:128)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.serde.RexNodeJsonDeserializer.deserialize(RexNodeJsonDeserializer.java:115)
>  {code}
> The root cause is that although ModuleManager can resolve '$UNNEST_ROWS$1', 
> its output type strategy is "Missing"; as a result, 
> FunctionCatalogOperatorTable#convertToBridgingSqlFunction returns an empty 
> result.
> !screenshot-1.png|width=675,height=295!
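The failure mode described above can be sketched in plain Java (all types below are simplified, hypothetical stand-ins, not Flink's real classes): the function name resolves, but a missing output type strategy makes the bridging conversion return an empty `Optional`, which the deserializer then reports as an unresolvable function.

```java
import java.util.Map;
import java.util.Optional;

// Simplified sketch of the failure mode: the function name *is* resolvable,
// but its output type strategy is missing, so the conversion step returns
// Optional.empty() and the deserializer reports "Could not resolve ..." even
// though the lookup itself succeeded.
public class MissingTypeStrategySketch {
    enum TypeStrategy { EXPLICIT, MISSING }

    record FunctionDefinition(String name, TypeStrategy outputTypeStrategy) {}

    // Stand-in for the module's resolvable functions.
    static final Map<String, FunctionDefinition> MODULE = Map.of(
            "$UNNEST_ROWS$1",
            new FunctionDefinition("$UNNEST_ROWS$1", TypeStrategy.MISSING));

    // Mirrors the role of convertToBridgingSqlFunction: a MISSING output
    // type strategy makes the conversion bail out with an empty result.
    static Optional<FunctionDefinition> convert(String name) {
        FunctionDefinition def = MODULE.get(name);
        if (def == null || def.outputTypeStrategy() == TypeStrategy.MISSING) {
            return Optional.empty();
        }
        return Optional.of(def);
    }

    public static void main(String[] args) {
        // Lookup succeeds, yet conversion yields empty -> the deserializer
        // surfaces it as an unresolvable internal system function.
        if (convert("$UNNEST_ROWS$1").isEmpty()) {
            System.out.println(
                    "Could not resolve internal system function '$UNNEST_ROWS$1'");
        }
    }
}
```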



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] 1996fanrui commented on a diff in pull request #22084: [WIP][FLINK-31293][runtime] LocalBufferPool request overdraft buffer only when no available buffer and pool size is reached

2023-03-04 Thread via GitHub


1996fanrui commented on code in PR #22084:
URL: https://github.com/apache/flink/pull/22084#discussion_r1125605884


##
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPool.java:
##
@@ -396,7 +396,7 @@ private MemorySegment requestMemorySegment(int targetChannel) {
 synchronized (availableMemorySegments) {
 checkDestroyed();
 
-if (availableMemorySegments.isEmpty()) {
+if (availableMemorySegments.isEmpty() && isRequestedSizeReached()) {

Review Comment:
   I'm not sure whether `availableMemorySegments.poll();` should be executed 
when `availableMemorySegments.isEmpty()`.
   
   1. How about calling it only when `!availableMemorySegments.isEmpty()`?
   2. We should add a comment to `requestOverdraftMemorySegmentFromGlobal` 
explaining when to request the overdraft buffer.
   
   ```suggestion
   if (!availableMemorySegments.isEmpty()) {
       segment = availableMemorySegments.poll();
   } else if (isRequestedSizeReached()) {
       // Request an overdraft buffer only when the number of requested
       // buffers has reached the pool size.
       segment = requestOverdraftMemorySegmentFromGlobal();
   }
   ```
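The proposed ordering can be illustrated with a self-contained sketch (a hypothetical class, not the real `LocalBufferPool`; the real pool would otherwise request an ordinary buffer from the global pool, and it caps how many overdraft buffers may be taken — both are omitted here):

```java
import java.util.ArrayDeque;

// Sketch of the request order from the suggestion above: take an available
// segment first, and fall back to an overdraft buffer only once the number
// of requested buffers has reached the pool size.
public class OverdraftSketch {
    private final ArrayDeque<String> availableMemorySegments = new ArrayDeque<>();
    private final int currentPoolSize;
    private int numberOfRequestedBuffers;

    public OverdraftSketch(int currentPoolSize) {
        this.currentPoolSize = currentPoolSize;
    }

    public void recycle(String segment) {
        availableMemorySegments.add(segment);
    }

    private boolean isRequestedSizeReached() {
        return numberOfRequestedBuffers >= currentPoolSize;
    }

    public String requestMemorySegment() {
        String segment = null;
        if (!availableMemorySegments.isEmpty()) {
            segment = availableMemorySegments.poll();
        } else if (isRequestedSizeReached()) {
            // Overdraft path: only taken once the pool is fully requested.
            segment = "overdraft-" + numberOfRequestedBuffers;
        }
        if (segment != null) {
            numberOfRequestedBuffers++;
        }
        return segment;
    }

    public static void main(String[] args) {
        OverdraftSketch pool = new OverdraftSketch(1);
        pool.recycle("segment-0");
        System.out.println(pool.requestMemorySegment()); // segment-0
        System.out.println(pool.requestMemorySegment()); // overdraft-1
    }
}
```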



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #22097: [FLINK-30983][runtime] Support configured ssl algorithms for external REST SSL

2023-03-04 Thread via GitHub


flinkbot commented on PR #22097:
URL: https://github.com/apache/flink/pull/22097#issuecomment-1454976854

   
   ## CI report:
   
   * 8ce5ceecce326e6dc777821a3f1dc332bb740cf5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] TanYuxin-tyx commented on pull request #21937: [FLINK-30983][runtime] Support configured ssl algorithms for external REST SSL

2023-03-04 Thread via GitHub


TanYuxin-tyx commented on PR #21937:
URL: https://github.com/apache/flink/pull/21937#issuecomment-1454976667

   > Thanks @TanYuxin-tyx, LGTM! Would you mind backport this to release-1.17 
branch?
   
   OK, Thanks @reswqa , I have opened a new PR for 1.17. 
https://github.com/apache/flink/pull/22097





[GitHub] [flink] TanYuxin-tyx opened a new pull request, #22097: [FLINK-30983][runtime] Support configured ssl algorithms for external REST SSL

2023-03-04 Thread via GitHub


TanYuxin-tyx opened a new pull request, #22097:
URL: https://github.com/apache/flink/pull/22097

   
   
   
   ## What is the purpose of the change
   
   *Support configured ssl algorithms for external REST SSL*
   
   
   ## Brief change log
   
 - *Support configured ssl algorithms for external REST SSL*
   Backport to 1.17. The change was reviewed in 
https://github.com/apache/flink/pull/21937.
   
   ## Verifying this change
   
 - *Add a test in SSLUtilsTest*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[GitHub] [flink] TanYuxin-tyx closed pull request #22095: [FLINK-30983][runtime] Support configured ssl algorithms for external REST SSL

2023-03-04 Thread via GitHub


TanYuxin-tyx closed pull request #22095: [FLINK-30983][runtime] Support 
configured ssl algorithms for external REST SSL
URL: https://github.com/apache/flink/pull/22095





[GitHub] [flink] flinkbot commented on pull request #22095: [FLINK-30983][runtime] Support configured ssl algorithms for external REST SSL

2023-03-04 Thread via GitHub


flinkbot commented on PR #22095:
URL: https://github.com/apache/flink/pull/22095#issuecomment-1454974947

   
   ## CI report:
   
   * d04c9fab357e41ffa6a777d9796c8b56feb29f0a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] TanYuxin-tyx closed pull request #22096: [FLINK-30983][runtime] Support configured ssl algorithms for external REST SSL

2023-03-04 Thread via GitHub


TanYuxin-tyx closed pull request #22096: [FLINK-30983][runtime] Support 
configured ssl algorithms for external REST SSL
URL: https://github.com/apache/flink/pull/22096





[GitHub] [flink] TanYuxin-tyx opened a new pull request, #22096: [FLINK-30983][runtime] Support configured ssl algorithms for external REST SSL

2023-03-04 Thread via GitHub


TanYuxin-tyx opened a new pull request, #22096:
URL: https://github.com/apache/flink/pull/22096

   
   
   
   ## What is the purpose of the change
   
   *Support configured ssl algorithms for external REST SSL*
   
   
   ## Brief change log
   
 - *Support configured ssl algorithms for external REST SSL*
   
   
   ## Verifying this change
   
 - *Add a test in SSLUtilsTest*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[GitHub] [flink] TanYuxin-tyx opened a new pull request, #22095: [FLINK-30983][runtime] Support configured ssl algorithms for external REST SSL

2023-03-04 Thread via GitHub


TanYuxin-tyx opened a new pull request, #22095:
URL: https://github.com/apache/flink/pull/22095

   
   
   
   ## What is the purpose of the change
   
   *Support configured ssl algorithms for external REST SSL*
   Cherry-pick to 1.17; the change was reviewed in 
https://github.com/apache/flink/pull/21937.
   
   ## Brief change log
   
 - *Support configured ssl algorithms for external REST SSL*
   
   
   ## Verifying this change
   
 - *Add a test in SSLUtilsTest*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[GitHub] [flink] felixzh2020 commented on pull request #22093: [FLINK-31321] Yarn-session mode, securityConfiguration supports dynam…

2023-03-04 Thread via GitHub


felixzh2020 commented on PR #22093:
URL: https://github.com/apache/flink/pull/22093#issuecomment-1454951273

   @flinkbot run azure





[jira] [Updated] (FLINK-31321) Yarn-session mode, securityConfiguration supports dynamic configuration

2023-03-04 Thread felixzh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

felixzh updated FLINK-31321:

Fix Version/s: 1.17.0

> Yarn-session mode, securityConfiguration supports dynamic configuration
> ---
>
> Key: FLINK-31321
> URL: https://issues.apache.org/jira/browse/FLINK-31321
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: felixzh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> When different tenants submit jobs using the same {_}flink-conf.yaml{_}, the 
> same user is displayed on the Yarn page.
> _SecurityConfiguration_ does not support dynamic configuration. Therefore, the 
> user displayed on the Yarn page is always the 
> _security.kerberos.login.principal_ from the {_}flink-conf.yaml{_}.
> FLINK-29435 only fixed the CliFrontend class (corresponding to the flink 
> script). The FlinkYarnSessionCli class (corresponding to the yarn-session.sh 
> script) still has this problem.
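The intended usage can be illustrated with a hypothetical invocation (principal and keytab values below are placeholders): each tenant passes its own Kerberos identity as `-D` dynamic properties to `yarn-session.sh` instead of editing the shared flink-conf.yaml — which is exactly what this issue asks SecurityConfiguration to honor.

```shell
# Hypothetical invocation; the principal/keytab values are placeholders.
# This only takes effect once SecurityConfiguration picks up dynamic
# properties, which is the improvement requested here.
./bin/yarn-session.sh \
    -Dsecurity.kerberos.login.principal=tenant-a@EXAMPLE.COM \
    -Dsecurity.kerberos.login.keytab=/etc/security/keytabs/tenant-a.keytab \
    -d
```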





[jira] [Updated] (FLINK-31321) Yarn-session mode, securityConfiguration supports dynamic configuration

2023-03-04 Thread felixzh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

felixzh updated FLINK-31321:

Affects Version/s: (was: 1.16.1)

> Yarn-session mode, securityConfiguration supports dynamic configuration
> ---
>
> Key: FLINK-31321
> URL: https://issues.apache.org/jira/browse/FLINK-31321
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: felixzh
>Priority: Major
>  Labels: pull-request-available
>
> When different tenants submit jobs using the same {_}flink-conf.yaml{_}, the 
> same user is displayed on the Yarn page.
> _SecurityConfiguration_ does not support dynamic configuration. Therefore, the 
> user displayed on the Yarn page is always the 
> _security.kerberos.login.principal_ from the {_}flink-conf.yaml{_}.
> FLINK-29435 only fixed the CliFrontend class (corresponding to the flink 
> script). The FlinkYarnSessionCli class (corresponding to the yarn-session.sh 
> script) still has this problem.





[jira] [Updated] (FLINK-31321) Yarn-session mode, securityConfiguration supports dynamic configuration

2023-03-04 Thread felixzh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

felixzh updated FLINK-31321:

Affects Version/s: 1.16.1

> Yarn-session mode, securityConfiguration supports dynamic configuration
> ---
>
> Key: FLINK-31321
> URL: https://issues.apache.org/jira/browse/FLINK-31321
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.16.1
>Reporter: felixzh
>Priority: Major
>  Labels: pull-request-available
>
> When different tenants submit jobs using the same {_}flink-conf.yaml{_}, the 
> same user is displayed on the Yarn page.
> _SecurityConfiguration_ does not support dynamic configuration. Therefore, the 
> user displayed on the Yarn page is always the 
> _security.kerberos.login.principal_ from the {_}flink-conf.yaml{_}.
> FLINK-29435 only fixed the CliFrontend class (corresponding to the flink 
> script). The FlinkYarnSessionCli class (corresponding to the yarn-session.sh 
> script) still has this problem.





[GitHub] [flink-connector-opensearch] lilyevsky commented on a diff in pull request #11: [FLINK-30998] Add optional exception handler to flink-connector-opensearch

2023-03-04 Thread via GitHub


lilyevsky commented on code in PR #11:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/11#discussion_r1125513579


##
flink-connector-opensearch/src/main/java/org/apache/flink/connector/opensearch/sink/OpensearchSinkBuilder.java:
##
@@ -148,6 +150,7 @@ public OpensearchSinkBuilder setBulkFlushMaxActions(int 
numMaxActions) {
  * @param maxSizeMb the maximum size of buffered actions, in mb.
  * @return this builder
  */
+@SuppressWarnings("UnusedReturnValue")

Review Comment:
   Thanks @reta 






[jira] [Closed] (FLINK-30331) Further improvement of production availability of hybrid shuffle

2023-03-04 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-30331.
--
Fix Version/s: 1.17.0
   Resolution: Done

> Further improvement of production availability of hybrid shuffle
> 
>
> Key: FLINK-30331
> URL: https://issues.apache.org/jira/browse/FLINK-30331
> Project: Flink
>  Issue Type: Improvement
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
> Fix For: 1.17.0
>
>
> Hybrid shuffle is experimental in version 1.16. In order to basically achieve 
> production availability in 1.17, we need to make some further improvements.





[GitHub] [flink] Myasuka commented on pull request #22058: [FLINK-31283][docs] Update the scala-2.11 related building doc

2023-03-04 Thread via GitHub


Myasuka commented on PR #22058:
URL: https://github.com/apache/flink/pull/22058#issuecomment-1454790346

   @zentol please take a look at this.





[GitHub] [flink] flinkbot commented on pull request #22094: [FLINK-31324][connector] Make previous SingleThreadFetcherManager constructor deprecated

2023-03-04 Thread via GitHub


flinkbot commented on PR #22094:
URL: https://github.com/apache/flink/pull/22094#issuecomment-1454790168

   
   ## CI report:
   
   * 6abcd0b030445a5fd0692ba67b397706231aa112 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-31324) Broken SingleThreadFetcherManager constructor API

2023-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31324:
---
Labels: pull-request-available  (was: )

> Broken SingleThreadFetcherManager constructor API
> -
>
> Key: FLINK-31324
> URL: https://issues.apache.org/jira/browse/FLINK-31324
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> FLINK-28853 changed the default constructor of 
> {{SingleThreadFetcherManager}}. Though the {{SingleThreadFetcherManager}} is 
> annotated as {{Internal}}, it effectively acts as a public API and is widely 
> used in many connector projects:
> [flink-cdc-connector|https://github.com/ververica/flink-cdc-connectors/blob/release-2.3.0/flink-connector-mysql-cdc/src/main/java/com/ververica/cdc/connectors/mysql/source/reader/MySqlSourceReader.java#L93],
>  
> [flink-connector-mongodb|https://github.com/apache/flink-connector-mongodb/blob/main/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReader.java#L58]
>  and so on.
> Once Flink 1.17 is released, all these existing connectors will break when 
> used with the new release and throw exceptions like:
> {code:java}
> java.lang.NoSuchMethodError: 
> org.apache.flink.connector.base.source.reader.fetcher.SingleThreadFetcherManager.(Lorg/apache/flink/connector/base/source/reader/synchronization/FutureCompletingBlockingQueue;Ljava/util/function/Supplier;)V
>   at 
> com.ververica.cdc.connectors.mysql.source.reader.MySqlSourceReader.(MySqlSourceReader.java:91)
>  ~[flink-sql-connector-mysql-cdc-2.3.0.jar:2.3.0]
>   at 
> com.ververica.cdc.connectors.mysql.source.MySqlSource.createReader(MySqlSource.java:159)
>  ~[flink-sql-connector-mysql-cdc-2.3.0.jar:2.3.0]
>   at 
> org.apache.flink.streaming.api.operators.SourceOperator.initReader(SourceOperator.java:312)
>  ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask.init(SourceOperatorStreamTask.java:94)
>  ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:699)
>  ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:675)
>  ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
>  ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:921) 
> ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745) 
> ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562) 
> ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_362]
> {code}
> Thus, I suggest making the original SingleThreadFetcherManager constructor 
> deprecated instead of removing it.
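The deprecate-instead-of-remove fix can be sketched as follows (simplified, hypothetical signatures — the real constructor takes a `FutureCompletingBlockingQueue` and a `Supplier<SplitReader>`): the old constructor stays, is marked `@Deprecated`, and delegates to the new one, so connectors compiled against the old signature keep linking at runtime.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Supplier;

// Hypothetical stand-in for the compatibility pattern: restore the old
// constructor, deprecate it, and delegate to the new one so existing callers
// avoid NoSuchMethodError.
public class FetcherManagerSketch<E> {
    final Queue<E> elementQueue;
    final Supplier<String> splitReaderFactory;
    final String config;

    /** Old signature: restored and deprecated instead of removed. */
    @Deprecated
    public FetcherManagerSketch(Queue<E> elementQueue, Supplier<String> splitReaderFactory) {
        // Delegate to the new constructor; no behavior change for old callers.
        this(elementQueue, splitReaderFactory, "default-config");
    }

    /** New signature introduced by the refactoring. */
    public FetcherManagerSketch(
            Queue<E> elementQueue, Supplier<String> splitReaderFactory, String config) {
        this.elementQueue = elementQueue;
        this.splitReaderFactory = splitReaderFactory;
        this.config = config;
    }

    public static void main(String[] args) {
        // A caller compiled against the old two-argument constructor still works.
        FetcherManagerSketch<Integer> manager =
                new FetcherManagerSketch<>(new ArrayDeque<>(), () -> "split-reader");
        System.out.println(manager.config); // default-config
    }
}
```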





[GitHub] [flink] Myasuka opened a new pull request, #22094: [FLINK-31324][connector] Make previous SingleThreadFetcherManager constructor deprecated

2023-03-04 Thread via GitHub


Myasuka opened a new pull request, #22094:
URL: https://github.com/apache/flink/pull/22094

   ## What is the purpose of the change
   
   [FLINK-28853](https://issues.apache.org/jira/browse/FLINK-28853) changed the 
default constructor of SingleThreadFetcherManager. Though the 
SingleThreadFetcherManager is annotated as Internal, it effectively acts as a 
public API and is widely used in many connector projects:
   
[flink-cdc-connector](https://github.com/ververica/flink-cdc-connectors/blob/release-2.3.0/flink-connector-mysql-cdc/src/main/java/com/ververica/cdc/connectors/mysql/source/reader/MySqlSourceReader.java#L93),
 
[flink-connector-mongodb](https://github.com/apache/flink-connector-mongodb/blob/main/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReader.java#L58)
 and so on.
   
   Once Flink 1.17 is released, all these existing connectors will break when 
used with the new release and throw exceptions like:
   ~~~java
   java.lang.NoSuchMethodError: 
org.apache.flink.connector.base.source.reader.fetcher.SingleThreadFetcherManager.(Lorg/apache/flink/connector/base/source/reader/synchronization/FutureCompletingBlockingQueue;Ljava/util/function/Supplier;)V
at 
com.ververica.cdc.connectors.mysql.source.reader.MySqlSourceReader.(MySqlSourceReader.java:91)
 ~[flink-sql-connector-mysql-cdc-2.3.0.jar:2.3.0]
at 
com.ververica.cdc.connectors.mysql.source.MySqlSource.createReader(MySqlSource.java:159)
 ~[flink-sql-connector-mysql-cdc-2.3.0.jar:2.3.0]
at 
org.apache.flink.streaming.api.operators.SourceOperator.initReader(SourceOperator.java:312)
 ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at 
org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask.init(SourceOperatorStreamTask.java:94)
 ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:699)
 ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:675)
 ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
 ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:921) 
~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745) 
~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562) 
~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_362]
   ~~~
   Thus, I suggest making the original SingleThreadFetcherManager constructor 
deprecated instead of removing it.
   
   
   ## Brief change log
   
   Reintroduce the previous constructor and mark it as deprecated.
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   





[GitHub] [flink-connector-opensearch] reta commented on a diff in pull request #11: [FLINK-30998] Add optional exception handler to flink-connector-opensearch

2023-03-04 Thread via GitHub


reta commented on code in PR #11:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/11#discussion_r1125495890


##
flink-connector-opensearch/src/main/java/org/apache/flink/connector/opensearch/sink/OpensearchSinkBuilder.java:
##
@@ -148,6 +150,7 @@ public OpensearchSinkBuilder setBulkFlushMaxActions(int 
numMaxActions) {
  * @param maxSizeMb the maximum size of buffered actions, in mb.
  * @return this builder
  */
+@SuppressWarnings("UnusedReturnValue")

Review Comment:
   > Hi @reta , please let me know what are the plans for merging.
   
   Thanks @lilyevsky , LGTM, I asked Flink committers to take a look (I do not 
have commit rights)






[GitHub] [flink-connector-opensearch] reta commented on pull request #11: [FLINK-30998] Add optional exception handler to flink-connector-opensearch

2023-03-04 Thread via GitHub


reta commented on PR #11:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/11#issuecomment-1454787590

   @MartijnVisser @zentol could you please take a look guys? thank you!





[jira] [Comment Edited] (FLINK-31310) Force clear directory no matter what situation in HiveCatalog.dropTable

2023-03-04 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696460#comment-17696460
 ] 

Nicholas Jiang edited comment on FLINK-31310 at 3/4/23 4:02 PM:


[~lzljs3620320], the `dropTable` interface is invoked after `getTable`; 
therefore, if the table does not exist in Hive, `dropTable` cannot be invoked, 
because `getDataTableSchema` checks whether the table exists. The existence 
check happens in `getTable`, not in `dropTable`.

IMO, when the table does not exist in Hive, users could use the 
FileSystemCatalog to drop the table and clear the table directory, while 
HiveCatalog only drops the table in Hive and clears the table directory via the 
Hive metastore client.


was (Author: nicholasjiang):
[~lzljs3620320], the `dropTable` interface is invoked after `getTable`; 
therefore, if the table does not exist in Hive, `dropTable` cannot be invoked, 
because `getDataTableSchema` checks whether the table exists. IMO, when the 
table does not exist in Hive, users could use the FileSystemCatalog to drop the 
table and clear the table directory, while HiveCatalog only drops the table in 
Hive and clears the table directory via the Hive metastore client.

> Force clear directory no matter what situation in HiveCatalog.dropTable
> ---
>
> Key: FLINK-31310
> URL: https://issues.apache.org/jira/browse/FLINK-31310
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: table-store-0.4.0
>
>
> Currently, if the table does not exist in Hive, the table directory will not 
> be cleared.
> We should clear the table directory in any situation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
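The "clear the table directory in any situation" behavior discussed in this ticket could be sketched roughly as below. This is a hypothetical illustration, not the actual HiveCatalog code; the `Metastore` interface and all names are stand-ins:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

/** Hypothetical sketch of "force clear directory no matter what situation". */
public class ForceDropSketch {

    /** Stand-in for the Hive metastore client; not the real API. */
    public interface Metastore {
        boolean tableExists(String name);
        void dropTable(String name);
    }

    public static void dropTable(Metastore metastore, String name, Path tableDir)
            throws IOException {
        // Drop from the metastore only when the table is registered there...
        if (metastore.tableExists(name)) {
            metastore.dropTable(name);
        }
        // ...but clear the table directory in any situation, even if the
        // table is missing from Hive (the case the issue describes).
        if (Files.exists(tableDir)) {
            try (Stream<Path> paths = Files.walk(tableDir)) {
                // Delete children before parents.
                paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("tbl");
        Files.writeString(dir.resolve("part-0"), "rows");
        // A metastore that does NOT know the table: the directory must still go away.
        Metastore empty = new Metastore() {
            public boolean tableExists(String n) { return false; }
            public void dropTable(String n) { throw new IllegalStateException("no table"); }
        };
        dropTable(empty, "t", dir);
        System.out.println(Files.exists(dir)); // false
    }
}
```

The key design point is that the directory cleanup is unconditional and independent of whether the metastore entry was found.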


[GitHub] [flink-connector-opensearch] lilyevsky commented on a diff in pull request #11: [FLINK-30998] Add optional exception handler to flink-connector-opensearch

2023-03-04 Thread via GitHub


lilyevsky commented on code in PR #11:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/11#discussion_r1125495620


##
flink-connector-opensearch/src/main/java/org/apache/flink/connector/opensearch/sink/OpensearchWriter.java:
##
@@ -122,10 +128,11 @@
 } catch (Exception e) {
 throw new FlinkRuntimeException("Failed to open the 
OpensearchEmitter", e);
 }
+this.failureHandler = failureHandler;
 }
 
 @Override
-public void write(IN element, Context context) throws IOException, 
InterruptedException {
+public void write(IN element, Context context) throws InterruptedException 
{

Review Comment:
   @reta Done. Also removed the line with warnings suppressions that I added at 
some point.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-31310) Force clear directory no matter what situation in HiveCatalog.dropTable

2023-03-04 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696460#comment-17696460
 ] 

Nicholas Jiang commented on FLINK-31310:


[~lzljs3620320], the `dropTable` interface is invoked after `getTable`; 
therefore, if the table does not exist in Hive, `dropTable` cannot be invoked, 
because `getDataTableSchema` checks whether the table exists. IMO, when the 
table does not exist in Hive, users could use the FileSystemCatalog to drop the 
table and clear the table directory, while HiveCatalog only drops the table in 
Hive and clears the table directory via the Hive metastore client.

> Force clear directory no matter what situation in HiveCatalog.dropTable
> ---
>
> Key: FLINK-31310
> URL: https://issues.apache.org/jira/browse/FLINK-31310
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: table-store-0.4.0
>
>
> Currently, if the table does not exist in Hive, the table directory will not 
> be cleared.
> We should clear the table directory in any situation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31324) Broken SingleThreadFetcherManager constructor API

2023-03-04 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696457#comment-17696457
 ] 

Yun Tang commented on FLINK-31324:
--

cc [~jark] [~mxm]

> Broken SingleThreadFetcherManager constructor API
> -
>
> Key: FLINK-31324
> URL: https://issues.apache.org/jira/browse/FLINK-31324
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Blocker
> Fix For: 1.17.0
>
>
> FLINK-28853 changed the default constructor of 
> {{SingleThreadFetcherManager}}. Though the {{SingleThreadFetcherManager}} is 
> annotated as {{Internal}}, it effectively acts as a public API, which is 
> widely used in many connector projects:
> [flink-cdc-connector|https://github.com/ververica/flink-cdc-connectors/blob/release-2.3.0/flink-connector-mysql-cdc/src/main/java/com/ververica/cdc/connectors/mysql/source/reader/MySqlSourceReader.java#L93],
>  
> [flink-connector-mongodb|https://github.com/apache/flink-connector-mongodb/blob/main/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReader.java#L58]
>  and so on.
> Once flink-1.17 is released, all these existing connectors will break and 
> cannot be used with the new release. Thus, I suggest deprecating the original 
> SingleThreadFetcherManager constructor instead of removing it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
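The deprecation approach suggested in the ticket, keeping the old constructor and delegating to the new one so existing connector binaries keep linking, might look roughly like this. All class and type names here are illustrative stand-ins, not Flink's actual `SingleThreadFetcherManager` API:

```java
import java.util.function.Supplier;

/** Hypothetical sketch; names are stand-ins, not Flink's real classes. */
public class FetcherManagerSketch {

    public static class ElementQueue {}   // stand-in for FutureCompletingBlockingQueue
    public static class SplitReader {}    // stand-in for the split reader type
    public static class Configuration {}  // stand-in for the newly added argument

    private final ElementQueue queue;
    private final Supplier<SplitReader> readerSupplier;
    private final Configuration config;

    /** Old two-argument signature, kept (deprecated) for binary compatibility. */
    @Deprecated
    public FetcherManagerSketch(ElementQueue queue, Supplier<SplitReader> readerSupplier) {
        this(queue, readerSupplier, new Configuration()); // delegate with defaults
    }

    /** New signature introduced by the refactoring. */
    public FetcherManagerSketch(
            ElementQueue queue, Supplier<SplitReader> readerSupplier, Configuration config) {
        this.queue = queue;
        this.readerSupplier = readerSupplier;
        this.config = config;
    }

    public Configuration config() {
        return config;
    }

    public static void main(String[] args) {
        // A connector compiled against the old constructor keeps linking and running.
        FetcherManagerSketch manager =
                new FetcherManagerSketch(new ElementQueue(), SplitReader::new);
        System.out.println(manager.config() != null); // true
    }
}
```

Because the old constructor still exists with the same signature, connectors such as flink-cdc-connector would no longer hit `NoSuchMethodError` at runtime.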


[jira] [Updated] (FLINK-31324) Broken SingleThreadFetcherManager constructor API

2023-03-04 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang updated FLINK-31324:
-
Description: 
FLINK-28853 changed the default constructor of {{SingleThreadFetcherManager}}. 
Though the {{SingleThreadFetcherManager}} is annotated as {{Internal}}, it 
effectively acts as a public API, which is widely used in many connector 
projects:
[flink-cdc-connector|https://github.com/ververica/flink-cdc-connectors/blob/release-2.3.0/flink-connector-mysql-cdc/src/main/java/com/ververica/cdc/connectors/mysql/source/reader/MySqlSourceReader.java#L93],
 
[flink-connector-mongodb|https://github.com/apache/flink-connector-mongodb/blob/main/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReader.java#L58]
 and so on.

Once flink-1.17 is released, all these existing connectors will break and 
cannot be used with the new release; they will throw exceptions like:

{code:java}
java.lang.NoSuchMethodError: 
org.apache.flink.connector.base.source.reader.fetcher.SingleThreadFetcherManager.(Lorg/apache/flink/connector/base/source/reader/synchronization/FutureCompletingBlockingQueue;Ljava/util/function/Supplier;)V
at 
com.ververica.cdc.connectors.mysql.source.reader.MySqlSourceReader.(MySqlSourceReader.java:91)
 ~[flink-sql-connector-mysql-cdc-2.3.0.jar:2.3.0]
at 
com.ververica.cdc.connectors.mysql.source.MySqlSource.createReader(MySqlSource.java:159)
 ~[flink-sql-connector-mysql-cdc-2.3.0.jar:2.3.0]
at 
org.apache.flink.streaming.api.operators.SourceOperator.initReader(SourceOperator.java:312)
 ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at 
org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask.init(SourceOperatorStreamTask.java:94)
 ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:699)
 ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:675)
 ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
 ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:921) 
~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745) 
~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562) 
~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_362]
{code}


Thus, I suggest deprecating the original SingleThreadFetcherManager constructor 
instead of removing it.

  was:
FLINK-28853 changed the default constructor of {{SingleThreadFetcherManager}}. 
Though the {{SingleThreadFetcherManager}} is annotated as {{Internal}}, it 
effectively acts as a public API, which is widely used in many connector 
projects:
[flink-cdc-connector|https://github.com/ververica/flink-cdc-connectors/blob/release-2.3.0/flink-connector-mysql-cdc/src/main/java/com/ververica/cdc/connectors/mysql/source/reader/MySqlSourceReader.java#L93],
 
[flink-connector-mongodb|https://github.com/apache/flink-connector-mongodb/blob/main/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReader.java#L58]
 and so on.

Once flink-1.17 is released, all these existing connectors will break and 
cannot be used with the new release. Thus, I suggest deprecating the original 
SingleThreadFetcherManager constructor instead of removing it.


> Broken SingleThreadFetcherManager constructor API
> -
>
> Key: FLINK-31324
> URL: https://issues.apache.org/jira/browse/FLINK-31324
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Blocker
> Fix For: 1.17.0
>
>
> FLINK-28853 changed the default constructor of 
> {{SingleThreadFetcherManager}}. Though the {{SingleThreadFetcherManager}} is 
> annotated as {{Internal}}, it effectively acts as a public API, which is 
> widely used in many connector projects:
> [flink-cdc-connector|https://github.com/ververica/flink-cdc-connectors/blob/release-2.3.0/flink-connector-mysql-cdc/src/main/java/com/ververica/cdc/connectors/mysql/source/reader/MySqlSourceReader.java#L93],
>  
> [flink-connector-mongodb|https://github.com/apache/flink-connector-mongodb/blob/main/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReader.java#L58]
>  and so on.
> Once flink-1.17 is released, all these existing connectors are broken 

[jira] [Created] (FLINK-31324) Broken SingleThreadFetcherManager constructor API

2023-03-04 Thread Yun Tang (Jira)
Yun Tang created FLINK-31324:


 Summary: Broken SingleThreadFetcherManager constructor API
 Key: FLINK-31324
 URL: https://issues.apache.org/jira/browse/FLINK-31324
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Parent
Reporter: Yun Tang
Assignee: Yun Tang
 Fix For: 1.17.0


FLINK-28853 changed the default constructor of {{SingleThreadFetcherManager}}. 
Though the {{SingleThreadFetcherManager}} is annotated as {{Internal}}, it 
effectively acts as a public API, which is widely used in many connector 
projects:
[flink-cdc-connector|https://github.com/ververica/flink-cdc-connectors/blob/release-2.3.0/flink-connector-mysql-cdc/src/main/java/com/ververica/cdc/connectors/mysql/source/reader/MySqlSourceReader.java#L93],
 
[flink-connector-mongodb|https://github.com/apache/flink-connector-mongodb/blob/main/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReader.java#L58]
 and so on.

Once flink-1.17 is released, all these existing connectors will break and 
cannot be used with the new release. Thus, I suggest deprecating the original 
SingleThreadFetcherManager constructor instead of removing it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28853) FLIP-217 Support watermark alignment of source splits

2023-03-04 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696448#comment-17696448
 ] 

Yun Tang commented on FLINK-28853:
--

[~mxm] It seems the docs are still not updated?

> FLIP-217 Support watermark alignment of source splits
> -
>
> Key: FLINK-28853
> URL: https://issues.apache.org/jira/browse/FLINK-28853
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common, Runtime / Checkpointing
>Affects Versions: 1.16.0
>Reporter: Sebastian Mattheis
>Assignee: Maximilian Michels
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> This improvement implements 
> [FLIP-217|https://cwiki.apache.org/confluence/display/FLINK/FLIP-217+Support+watermark+alignment+of+source+splits]
>  to support watermark alignment of source splits.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-opensearch] reta commented on a diff in pull request #11: [FLINK-30998] Add optional exception handler to flink-connector-opensearch

2023-03-04 Thread via GitHub


reta commented on code in PR #11:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/11#discussion_r1125470984


##
flink-connector-opensearch/src/main/java/org/apache/flink/connector/opensearch/sink/OpensearchWriter.java:
##
@@ -122,10 +128,11 @@
 } catch (Exception e) {
 throw new FlinkRuntimeException("Failed to open the 
OpensearchEmitter", e);
 }
+this.failureHandler = failureHandler;
 }
 
 @Override
-public void write(IN element, Context context) throws IOException, 
InterruptedException {
+public void write(IN element, Context context) throws InterruptedException 
{

Review Comment:
   @lilyevsky correct, the `throws IOException, InterruptedException` is the 
right part of the signature: the first comment is about `write`, the second is 
about `flush`, thank you



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
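As a rough illustration of the optional-exception-handler idea this PR discusses (not the connector's actual `OpensearchWriter` API; all names here are hypothetical), a writer can route bulk failures to a user-supplied handler and default to rethrowing:

```java
import java.util.function.Consumer;

/** Hypothetical sketch; not the connector's real OpensearchWriter API. */
public class WriterSketch {

    private final Consumer<Throwable> failureHandler;

    /** Default behavior matches the old hard-coded rethrow: fail the job. */
    public WriterSketch() {
        this(t -> { throw new RuntimeException("bulk request failed", t); });
    }

    /** Optional custom handler, e.g. log-and-continue for poison records. */
    public WriterSketch(Consumer<Throwable> failureHandler) {
        this.failureHandler = failureHandler;
    }

    public void onBulkFailure(Throwable t) {
        failureHandler.accept(t);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        WriterSketch lenient =
                new WriterSketch(t -> log.append("ignored: ").append(t.getMessage()));
        lenient.onBulkFailure(new Exception("mapper_parsing_exception"));
        System.out.println(log); // ignored: mapper_parsing_exception
    }
}
```

The handler is injected through the constructor, which mirrors the `this.failureHandler = failureHandler;` assignment visible in the diff above.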



[GitHub] [flink] felixzh2020 commented on pull request #22093: [FLINK-31321] Yarn-session mode, securityConfiguration supports dynam…

2023-03-04 Thread via GitHub


felixzh2020 commented on PR #22093:
URL: https://github.com/apache/flink/pull/22093#issuecomment-1454738229

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31319) Inconsistent condition judgement about kafka partitionDiscoveryIntervalMs cause potential bug

2023-03-04 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-31319:

Description: 
As the Kafka option description states, partitionDiscoveryIntervalMs <= 0 means 
disabled.  
!image-2023-03-04-01-37-29-360.png|width=781,height=147!

just like the start of the Kafka enumerator:

!image-2023-03-04-01-39-20-352.png|width=465,height=311!

but the inner 
handlePartitionSplitChanges uses the wrong if condition (< 0):

!image-2023-03-04-01-40-44-124.png|width=576,height=237!
 
As a result, noMoreNewPartitionSplits cannot be set to true. 
!image-2023-03-04-01-41-55-664.png|width=522,height=610!

This finally prevents a bounded source from calling signalNoMoreSplits, so it 
will not quit.

Besides, both branches of the if condition should be mutually exclusive.

  was:
As the Kafka option description states, partitionDiscoveryIntervalMs <= 0 means 
disabled.  
!image-2023-03-04-01-37-29-360.png|width=781,height=147!

just like the start of the Kafka enumerator:

!image-2023-03-04-01-39-20-352.png|width=465,height=311!

but the inner 
handlePartitionSplitChanges uses the wrong if condition (< 0):

!image-2023-03-04-01-40-44-124.png|width=576,height=237!
 
As a result, noMoreNewPartitionSplits cannot be set to true. 
!image-2023-03-04-01-41-55-664.png|width=522,height=610!

This finally prevents a bounded source from calling signalNoMoreSplits and 
quitting.

Besides, both branches of the if condition should be mutually exclusive.


> Inconsistent condition judgement about kafka partitionDiscoveryIntervalMs 
> cause potential bug
> -
>
> Key: FLINK-31319
> URL: https://issues.apache.org/jira/browse/FLINK-31319
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.16.1
>Reporter: Ran Tao
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2023-03-04-01-37-29-360.png, 
> image-2023-03-04-01-39-20-352.png, image-2023-03-04-01-40-44-124.png, 
> image-2023-03-04-01-41-55-664.png
>
>
> As the Kafka option description states, partitionDiscoveryIntervalMs <= 0 means disabled. 
>  !image-2023-03-04-01-37-29-360.png|width=781,height=147!
> just like the start of the Kafka enumerator:
> !image-2023-03-04-01-39-20-352.png|width=465,height=311!
> but the inner 
> handlePartitionSplitChanges uses the wrong if condition (< 0):
> !image-2023-03-04-01-40-44-124.png|width=576,height=237!
>  
> As a result, noMoreNewPartitionSplits cannot be set to true. 
> !image-2023-03-04-01-41-55-664.png|width=522,height=610!
> This finally prevents a bounded source from calling signalNoMoreSplits, so it will not quit.
> Besides, both branches of the if condition should be mutually exclusive.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
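A minimal sketch of the condition mismatch described above (this is not Flink's actual enumerator code, just an illustration): with the buggy `< 0` check, an interval of exactly 0, which the docs define as "discovery disabled", never sets noMoreNewPartitionSplits, so a bounded source never finishes.

```java
/** Hypothetical sketch of the condition bug; not Flink's actual enumerator. */
public class DiscoverySketch {

    public boolean noMoreNewPartitionSplits = false;

    /** Buggy version: interval == 0 falls into the "periodic discovery" branch. */
    public void handleBuggy(long partitionDiscoveryIntervalMs) {
        if (partitionDiscoveryIntervalMs < 0) {
            noMoreNewPartitionSplits = true; // bounded source may finish
        }
        // else: keep discovering forever; a bounded job never quits
    }

    /** Fixed version: matches the documented "interval <= 0 means disabled". */
    public void handleFixed(long partitionDiscoveryIntervalMs) {
        if (partitionDiscoveryIntervalMs <= 0) {
            noMoreNewPartitionSplits = true;
        }
    }

    public static void main(String[] args) {
        DiscoverySketch buggy = new DiscoverySketch();
        buggy.handleBuggy(0); // discovery disabled per the docs...
        System.out.println(buggy.noMoreNewPartitionSplits); // false: never signals no-more-splits

        DiscoverySketch fixed = new DiscoverySketch();
        fixed.handleFixed(0);
        System.out.println(fixed.noMoreNewPartitionSplits); // true
    }
}
```

With the fixed `<= 0` check, the two branches are mutually exclusive and consistent with the option documentation.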


[GitHub] [flink-ml] lindong28 commented on a diff in pull request #218: [FLINK-31306] Add Servable for PipelineModel

2023-03-04 Thread via GitHub


lindong28 commented on code in PR #218:
URL: https://github.com/apache/flink-ml/pull/218#discussion_r1125428753


##
flink-ml-core/src/main/java/org/apache/flink/ml/builder/PipelineModel.java:
##
@@ -82,6 +85,33 @@ public static PipelineModel load(StreamTableEnvironment 
tEnv, String path) throw
 ReadWriteUtils.loadPipeline(tEnv, path, 
PipelineModel.class.getName()));
 }
 
+public static PipelineModelServable loadServable(String path) throws 
IOException {
+return PipelineModelServable.load(path);
+}
+
+/**
+ * Whether all stages in the pipeline have corresponding {@link 
TransformerServable} so that the
+ * PipelineModel can be turned into a TransformerServable and used in an 
online inference
+ * program.
+ *
+ * @return true if all stages have corresponding TransformerServable, 
false if not.
+ */
+public boolean supportServable() {
+for (Stage stage : stages) {

Review Comment:
   nits: `Stage stage : stages` -> `Stage stage : stages`



##
flink-ml-servable-core/src/test/java/org/apache/flink/ml/servable/builder/ExampleServables.java:
##
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.servable.builder;
+
+import org.apache.flink.api.common.typeutils.base.IntSerializer;
+import org.apache.flink.core.fs.FSDataInputStream;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.core.memory.DataInputViewStreamWrapper;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.servable.api.DataFrame;
+import org.apache.flink.ml.servable.api.Row;
+import org.apache.flink.ml.servable.api.TransformerServable;
+import org.apache.flink.ml.servable.types.DataTypes;
+import org.apache.flink.ml.util.FileUtils;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ServableReadWriteUtils;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/** Defines Servable subclasses to be used in unit tests. */
+public class ExampleServables {
+
+/**
+ * A {@link TransformerServable} subclass that increments every value in 
the input dataframe by
+ * `delta` and outputs the resulting values.
+ */
+public static class SumModelServable implements 
TransformerServable<SumModelServable> {
+
+private static final String COL_NAME = "input";
+
+private final Map<Param<?>, Object> paramMap = new HashMap<>();
+
+private int delta;
+
+public SumModelServable() {
+ParamUtils.initializeMapWithDefaultValues(paramMap, this);
+}
+
+@Override
+public DataFrame transform(DataFrame input) {
+List<Row> outputRows = new ArrayList<>();
+for (Row row : input.collect()) {
+assert row.size() == 1;
+int originValue = (Integer) row.get(0);
+outputRows.add(new Row(Collections.singletonList(originValue + 
delta)));
+}
+return new DataFrame(
+Collections.singletonList(COL_NAME),
+Collections.singletonList(DataTypes.INT),
+outputRows);
+}
+
+@Override
+public Map<Param<?>, Object> getParamMap() {
+return paramMap;
+}
+
+public static SumModelServable load(String path) throws IOException {
+SumModelServable servable =
+ServableReadWriteUtils.loadServableParam(path, 
SumModelServable.class);
+
+Path modelDataPath = FileUtils.getDataPath(path);
+try (FSDataInputStream fsDataInputStream =
+FileUtils.getModelDataInputStream(modelDataPath)) {
+DataInputViewStreamWrapper dataInputViewStreamWrapper =
+new DataInputViewStreamWrapper(fsDataInputStream);
+int delta = 
IntSerializer.INSTANCE.deserialize(dataInputViewStreamWrapper);
+servable.setDelta(delta);
+}
+return servable;
+}
+
+public SumModelServable setDelta(int delta) {

Review Comment:
   Should we 

[GitHub] [flink-connector-cassandra] echauchot commented on pull request #3: [FLINK-26822] Add Cassandra Source

2023-03-04 Thread via GitHub


echauchot commented on PR #3:
URL: 
https://github.com/apache/flink-connector-cassandra/pull/3#issuecomment-1454671877

   @mosche contributed a Testcontainers configuration that avoids the need for 
a private Docker image, timeouts, etc.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31323) Fix unstable merge-into E2E test

2023-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31323:
---
Labels: pull-request-available  (was: )

> Fix unstable merge-into E2E test
> 
>
> Key: FLINK-31323
> URL: https://issues.apache.org/jira/browse/FLINK-31323
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.4.0
>Reporter: yuzelin
>Priority: Major
>  Labels: pull-request-available
>
> A complex test of the merge-into action in a Docker environment may fail, so 
> the test needs to be simplified.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-table-store] yuzelin opened a new pull request, #580: [FLINK-31323] Fix unstable merge-into E2E test

2023-03-04 Thread via GitHub


yuzelin opened a new pull request, #580:
URL: https://github.com/apache/flink-table-store/pull/580

   Simplify `FlinkActionsE2eTest#testMergeInto`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27051) CompletedCheckpoint.DiscardObject.discard is not idempotent

2023-03-04 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17696419#comment-17696419
 ] 

Wencong Liu commented on FLINK-27051:
-

Hello [~mapohl], I'm quite interested in the issues under this umbrella. For 
this issue, do you mean that CompletedCheckpoint.DiscardObject.discard should 
discard the related data only on the first of multiple invocations?

> CompletedCheckpoint.DiscardObject.discard is not idempotent
> ---
>
> Key: FLINK-27051
> URL: https://issues.apache.org/jira/browse/FLINK-27051
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Priority: Major
>
> `CompletedCheckpoint.DiscardObject.discard` is not implemented in an 
> idempotent fashion because we're losing the operatorState even in the case of 
> a failure (see 
> [CompletedCheckpoint:328|https://github.com/apache/flink/blob/dc419b5639f68bcb0b773763f24179dd3536d713/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CompletedCheckpoint.java#L328]).
>  This prevents us from retrying the deletion.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
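One way to make a discard idempotent and retryable, sketched hypothetically here (this is not Flink's actual CompletedCheckpoint code): keep the state reference until deletion actually succeeds, and treat repeat calls on already-discarded state as no-ops.

```java
import java.util.concurrent.atomic.AtomicReference;

/** Hypothetical sketch of an idempotent, retryable discard; not Flink's code. */
public class DiscardSketch {

    /** Stand-in for a state handle whose deletion may fail transiently. */
    public interface StateHandle {
        void delete() throws Exception;
    }

    private final AtomicReference<StateHandle> operatorState;

    public DiscardSketch(StateHandle state) {
        this.operatorState = new AtomicReference<>(state);
    }

    /** Returns true once the state is gone; safe to call multiple times. */
    public boolean discard() {
        StateHandle state = operatorState.get();
        if (state == null) {
            return true; // already discarded: idempotent no-op
        }
        try {
            state.delete();
            operatorState.set(null); // drop the reference only after success
            return true;
        } catch (Exception e) {
            // Keep the reference so a later retry can still delete the state.
            return false;
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Deletion fails on the first attempt and succeeds on the second.
        DiscardSketch discard = new DiscardSketch(() -> {
            if (calls[0]++ == 0) {
                throw new Exception("transient storage error");
            }
        });
        System.out.println(discard.discard()); // false: first attempt failed, state retained
        System.out.println(discard.discard()); // true: retry succeeded
        System.out.println(discard.discard()); // true: further calls are no-ops
    }
}
```

The crucial difference from the behavior the issue describes is that the operatorState reference is cleared only after the deletion succeeds, so a failed attempt does not lose the data needed to retry.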