Re: [PR] [FLINK-34058][table] Support optional parameters for named parameters [flink]

2024-01-26 Thread via GitHub


hackergin commented on code in PR #24183:
URL: https://github.com/apache/flink/pull/24183#discussion_r1468374197


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/StringCallGen.scala:
##
@@ -237,6 +238,9 @@ object StringCallGen {
 val currentDatabase = ctx.addReusableQueryLevelCurrentDatabase()
 generateNonNullField(returnType, currentDatabase)
 
+  case DEFAULT =>

Review Comment:
   Updated



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34058][table] Support optional parameters for named parameters [flink]

2024-01-26 Thread via GitHub


hackergin commented on code in PR #24183:
URL: https://github.com/apache/flink/pull/24183#discussion_r1468374158


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/extraction/TypeInferenceExtractor.java:
##
@@ -286,6 +305,15 @@ private static void configureTypedArguments(
 
 private static TypeStrategy translateResultTypeStrategy(
 Map 
resultMapping) {
+if (resultMapping.size() == 1) {

Review Comment:
   Yes, after obtaining the type of default parameters in 
OperatorBindingCallContext, there is no need to modify it here.






Re: [PR] [FLINK-34058][table] Support optional parameters for named parameters [flink]

2024-01-26 Thread via GitHub


hackergin commented on code in PR #24183:
URL: https://github.com/apache/flink/pull/24183#discussion_r1468373906


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/ExprCodeGenerator.scala:
##
@@ -465,20 +467,38 @@ class ExprCodeGenerator(ctx: CodeGeneratorContext, 
nullableInput: Boolean)
 call.getOperands.get(1).asInstanceOf[RexLiteral])
 }
 
+// replace default node with right type.
+val operands = new util.ArrayList[RexNode](call.operands)
+
 // convert operands and help giving untyped NULL literals a type
-val operands = call.getOperands.zipWithIndex.map {
+val expressions = call.getOperands.zipWithIndex.map {
 
   // this helps e.g. for AS(null)
   // we might need to extend this logic in case some rules do not create 
typed NULLs
   case (operandLiteral: RexLiteral, 0)
   if operandLiteral.getType.getSqlTypeName == SqlTypeName.NULL &&
 call.getOperator.getReturnTypeInference == ReturnTypes.ARG0 =>
 generateNullLiteral(resultType)
-
+  case (rexCall: RexCall, i)
+  if (rexCall.getKind == SqlKind.DEFAULT && call.getOperator
+.isInstanceOf[BridgingSqlFunction]) => {
+val sqlFunction = call.getOperator.asInstanceOf[BridgingSqlFunction]
+val typeInference = sqlFunction.getTypeInference
+val typeFactory = sqlFunction.getTypeFactory
+if (typeInference.getTypedArguments.isPresent) {
+  val dataType = 
typeInference.getTypedArguments.get().get(i).getLogicalType
+  operands.set(
+i,
+
rexCall.clone(typeFactory.createFieldTypeFromLogicalType(dataType), 
rexCall.operands))

Review Comment:
   Indeed, this is redundant. The type has already been corrected in the 
SqlToRel phase.






Re: [PR] [FLINK-34058][table] Support optional parameters for named parameters [flink]

2024-01-26 Thread via GitHub


hackergin commented on code in PR #24183:
URL: https://github.com/apache/flink/pull/24183#discussion_r1468373785


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/extraction/FunctionTemplate.java:
##
@@ -194,14 +194,17 @@ private static  T defaultAsNull(
 "Argument and input hints cannot be declared in the same 
function hint.");
 }
 
+Boolean[] argumentOptionals = null;

Review Comment:
   The default value is false, and we do not use defaultAsNull when obtaining the 
optional flags, so I don't think there is a situation where they can be null.
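
   For context, a minimal sketch of how a function could declare and use an optional 
named argument under the API this PR introduces. It assumes an `@ArgumentHint` 
annotation with `name` and `isOptional` attributes and the `=>` named-argument call 
syntax; the exact shapes may differ from the final API:

```java
import org.apache.flink.table.annotation.ArgumentHint;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

public class OptionalNamedParamsExample {

    /** A scalar function whose second argument is optional; omitted values arrive as null. */
    public static class GreetFunction extends ScalarFunction {
        public String eval(
                @ArgumentHint(name = "name") String name,
                @ArgumentHint(name = "suffix", isOptional = true) String suffix) {
            return "Hello, " + name + (suffix == null ? "" : suffix);
        }
    }

    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inBatchMode());
        env.createTemporarySystemFunction("greet", GreetFunction.class);
        // The omitted optional argument is injected as a DEFAULT node and later
        // rewritten by the planner into a typed NULL literal.
        env.executeSql("SELECT greet(name => 'Flink')").print();
        env.executeSql("SELECT greet(name => 'Flink', suffix => '!')").print();
    }
}
```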






Re: [PR] [FLINK-34251][core] ClosureCleaner to include reference classes for non-serialization exception [flink]

2024-01-26 Thread via GitHub


flinkbot commented on PR #24205:
URL: https://github.com/apache/flink/pull/24205#issuecomment-1913010923

   
   ## CI report:
   
   * 154e41a43c45a2bb59f0b1a4af37e7652434f8b7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-34251) ClosureCleaner to include reference classes for non-serialization exception

2024-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34251:
---
Labels: pull-request-available  (was: )

> ClosureCleaner to include reference classes for non-serialization exception
> ---
>
> Key: FLINK-34251
> URL: https://issues.apache.org/jira/browse/FLINK-34251
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Affects Versions: 1.18.2
>Reporter: Mingliang Liu
>Priority: Minor
>  Labels: pull-request-available
>
> Currently the ClosureCleaner throws an exception if {{checkSerializable}} is 
> enabled while some object is non-serializable. It includes the 
> non-serializable (nested) object in the exception message.
> However, when the user job program gets more complex, pulling in multiple 
> operators each of which pulls in multiple 3rd-party libraries, it is unclear 
> how the non-serializable object is referenced, as some of those objects could 
> be nested multiple levels deep. For example, the following exception gives no 
> hint about where to check:
> {code}
> org.apache.flink.api.common.InvalidProgramException: java.lang.Object@528c868 
> is not serializable. 
> {code}
> It would be nice to include the reference stack in the exception message, as 
> follows:
> {code}
> org.apache.flink.api.common.InvalidProgramException: 
> java.lang.Object@72437d8d is not serializable. Referenced via [class 
> com.mycompany.myapp.ComplexMap, class com.mycompany.myapp.LocalMap, class 
> com.yourcompany.yourapp.YourPojo, class com.hercompany.herapp.Random, class 
> java.lang.Object] ...
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34251][core] ClosureCleaner to include reference classes for non-serialization exception [flink]

2024-01-26 Thread via GitHub


liuml07 opened a new pull request, #24205:
URL: https://github.com/apache/flink/pull/24205

   
   
   
   ## What is the purpose of the change
   
   Currently the ClosureCleaner throws an exception if `checkSerializable` is 
enabled while some object is non-serializable. It includes the non-serializable 
(nested) object in the exception message.
   
   However, when the user job program gets more complex, pulling in multiple 
operators each of which pulls in multiple 3rd-party libraries, it is unclear how 
the non-serializable object is referenced, as some of those objects could be 
nested multiple levels deep. For example, the following exception gives no hint 
about where to check:
   ```
   org.apache.flink.api.common.InvalidProgramException: 
java.lang.Object@528c868 is not serializable. 
   ```
   
   It would be nice to include the reference stack in the exception message, as 
follows:
   ```
   org.apache.flink.api.common.InvalidProgramException: 
java.lang.Object@72437d8d is not serializable. Referenced via [class 
com.mycompany.myapp.ComplexMap, class com.mycompany.myapp.LocalMap, class 
com.yourcompany.yourapp.YourPojo, class com.hercompany.herapp.Random, class 
java.lang.Object] ...
   ```
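
   To illustrate the idea, here is a minimal sketch (not the actual ClosureCleaner 
code; all names are illustrative) of tracking the chain of classes visited while 
recursing through fields, so the error can report how the offending object is reached:

```java
import java.io.Serializable;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayDeque;
import java.util.Deque;

public final class ReferenceChainSketch {

    static class Inner implements Serializable {
        final Object notSerializable = new Object(); // the culprit
    }

    static class Outer implements Serializable {
        final Inner inner = new Inner();
    }

    /** Recursively checks fields, keeping the chain of classes visited so far. */
    static void checkSerializable(Object obj, Deque<Class<?>> chain) {
        if (obj == null) {
            return;
        }
        chain.addLast(obj.getClass());
        try {
            if (!(obj instanceof Serializable)) {
                throw new IllegalStateException(
                        obj + " is not serializable. Referenced via " + chain);
            }
            for (Field field : obj.getClass().getDeclaredFields()) {
                if (Modifier.isStatic(field.getModifiers()) || field.getType().isPrimitive()) {
                    continue;
                }
                field.setAccessible(true);
                try {
                    checkSerializable(field.get(obj), chain);
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }
        } finally {
            chain.removeLast();
        }
    }

    public static void main(String[] args) {
        // Throws: "java.lang.Object@... is not serializable. Referenced via
        // [class ReferenceChainSketch$Outer, class ReferenceChainSketch$Inner,
        //  class java.lang.Object]"
        checkSerializable(new Outer(), new ArrayDeque<>());
    }
}
```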
   
   ## Verifying this change
   
   This change is largely covered by existing tests, and new test case was 
added to `ClosureCleanerTest`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[jira] [Created] (FLINK-34251) ClosureCleaner to include reference classes for non-serialization exception

2024-01-26 Thread Mingliang Liu (Jira)
Mingliang Liu created FLINK-34251:
-

 Summary: ClosureCleaner to include reference classes for 
non-serialization exception
 Key: FLINK-34251
 URL: https://issues.apache.org/jira/browse/FLINK-34251
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Affects Versions: 1.18.2
Reporter: Mingliang Liu


Currently the ClosureCleaner throws an exception if {{checkSerializable}} is 
enabled while some object is non-serializable. It includes the non-serializable 
(nested) object in the exception message.

However, when the user job program gets more complex, pulling in multiple operators 
each of which pulls in multiple 3rd-party libraries, it is unclear how the 
non-serializable object is referenced, as some of those objects could be nested 
multiple levels deep. For example, the following exception gives no hint about 
where to check:
{code}
org.apache.flink.api.common.InvalidProgramException: java.lang.Object@528c868 
is not serializable. 
{code}

It would be nice to include the reference stack in the exception message, as 
follows:
{code}
org.apache.flink.api.common.InvalidProgramException: java.lang.Object@72437d8d 
is not serializable. Referenced via [class com.mycompany.myapp.ComplexMap, 
class com.mycompany.myapp.LocalMap, class com.yourcompany.yourapp.YourPojo, 
class com.hercompany.herapp.Random, class java.lang.Object] ...
{code}





[jira] [Updated] (FLINK-34251) ClosureCleaner to include reference classes for non-serialization exception

2024-01-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated FLINK-34251:
--
Priority: Minor  (was: Major)

> ClosureCleaner to include reference classes for non-serialization exception
> ---
>
> Key: FLINK-34251
> URL: https://issues.apache.org/jira/browse/FLINK-34251
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Affects Versions: 1.18.2
>Reporter: Mingliang Liu
>Priority: Minor
>
> Currently the ClosureCleaner throws an exception if {{checkSerializable}} is 
> enabled while some object is non-serializable. It includes the 
> non-serializable (nested) object in the exception message.
> However, when the user job program gets more complex, pulling in multiple 
> operators each of which pulls in multiple 3rd-party libraries, it is unclear 
> how the non-serializable object is referenced, as some of those objects could 
> be nested multiple levels deep. For example, the following exception gives no 
> hint about where to check:
> {code}
> org.apache.flink.api.common.InvalidProgramException: java.lang.Object@528c868 
> is not serializable. 
> {code}
> It would be nice to include the reference stack in the exception message, as 
> follows:
> {code}
> org.apache.flink.api.common.InvalidProgramException: 
> java.lang.Object@72437d8d is not serializable. Referenced via [class 
> com.mycompany.myapp.ComplexMap, class com.mycompany.myapp.LocalMap, class 
> com.yourcompany.yourapp.YourPojo, class com.hercompany.herapp.Random, class 
> java.lang.Object] ...
> {code}





[jira] [Commented] (FLINK-34246) Allow only archive failed job to history server

2024-01-26 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811468#comment-17811468
 ] 

Wencong Liu commented on FLINK-34246:
-

Thanks [~qingwei91] for suggesting this. Are you suggesting that we should 
offer an option that allows the HistoryServer to archive only failed batch 
jobs? This requirement seems quite specific. For instance, we would also need 
to consider archiving the logs of failed streaming jobs.

> Allow only archive failed job to history server
> ---
>
> Key: FLINK-34246
> URL: https://issues.apache.org/jira/browse/FLINK-34246
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Reporter: Lim Qing Wei
>Priority: Minor
>
> Hi, I wonder if we can support archiving only failed jobs to the History Server.
> The History Server is a great tool that allows us to check on previous jobs. 
> We are using Flink batch jobs which can run many times throughout the week, 
> and we only need to check a job on the History Server when it has failed.
> It would be more efficient if we could choose to only store a subset of the 
> data.
>  





[jira] [Commented] (FLINK-33297) FLIP-366: Support standard YAML for FLINK configuration

2024-01-26 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811466#comment-17811466
 ] 

lincoln lee commented on FLINK-33297:
-

[~JunRuiLi] Thank you for the update.

> FLIP-366: Support standard YAML for FLINK configuration
> ---
>
> Key: FLINK-33297
> URL: https://issues.apache.org/jira/browse/FLINK-33297
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: 2.0-related, pull-request-available
> Fix For: 1.19.0
>
>
> Support standard YAML for FLINK configuration





[jira] [Created] (FLINK-34250) Add formats options to docs

2024-01-26 Thread Zhenqiu Huang (Jira)
Zhenqiu Huang created FLINK-34250:
-

 Summary: Add formats options to docs
 Key: FLINK-34250
 URL: https://issues.apache.org/jira/browse/FLINK-34250
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.19.0
Reporter: Zhenqiu Huang


We have options defined for the Avro and CSV formats, but they are not included 
in the docs. It would be better to show them in a common section.





Re: [PR] [FLINK-34249][runtime] Remove DefaultSlotTracker related logic. [flink]

2024-01-26 Thread via GitHub


flinkbot commented on PR #24204:
URL: https://github.com/apache/flink/pull/24204#issuecomment-1912942087

   
   ## CI report:
   
   * 026ba82a56b2171f6e07cb870e63a7fddb8f5e51 UNKNOWN
   
   





[jira] [Updated] (FLINK-34249) Remove DefaultSlotTracker related logic.

2024-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34249:
---
Labels: pull-request-available  (was: )

> Remove DefaultSlotTracker related logic.
> 
>
> Key: FLINK-34249
> URL: https://issues.apache.org/jira/browse/FLINK-34249
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Task
>Reporter: RocMarshal
>Priority: Minor
>  Labels: pull-request-available
>






[PR] [FLINK-34249][runtime] Remove DefaultSlotTracker related logic. [flink]

2024-01-26 Thread via GitHub


RocMarshal opened a new pull request, #24204:
URL: https://github.com/apache/flink/pull/24204

   
   
   ## What is the purpose of the change
   
   Remove DefaultSlotTracker related logic.  
   
   The main reasons for initiating this ticket are 
https://issues.apache.org/jira/browse/FLINK-31449 and 
https://issues.apache.org/jira/browse/FLINK-34174: 
   (IIUC) the related logic is no longer being used.
   
   
   ## Brief change log
   
   Remove DefaultSlotTracker related logic.
   
   
   ## Verifying this change
   
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[jira] [Commented] (FLINK-34249) Remove DefaultSlotTracker related logic.

2024-01-26 Thread RocMarshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811463#comment-17811463
 ] 

RocMarshal commented on FLINK-34249:


Pre-step: https://issues.apache.org/jira/browse/FLINK-34174

The main reasons for initiating this ticket are 
https://issues.apache.org/jira/browse/FLINK-31449 and 
https://issues.apache.org/jira/browse/FLINK-34174: 
(IIUC) the related logic is no longer being used.

> Remove DefaultSlotTracker related logic.
> 
>
> Key: FLINK-34249
> URL: https://issues.apache.org/jira/browse/FLINK-34249
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Task
>Reporter: RocMarshal
>Priority: Minor
>






[jira] [Created] (FLINK-34249) Remove DefaultSlotTracker related logic.

2024-01-26 Thread RocMarshal (Jira)
RocMarshal created FLINK-34249:
--

 Summary: Remove DefaultSlotTracker related logic.
 Key: FLINK-34249
 URL: https://issues.apache.org/jira/browse/FLINK-34249
 Project: Flink
  Issue Type: Technical Debt
  Components: Runtime / Task
Reporter: RocMarshal








Re: [PR] [FLINK-34248] Implement restore tests for changelog normalize node [flink]

2024-01-26 Thread via GitHub


flinkbot commented on PR #24203:
URL: https://github.com/apache/flink/pull/24203#issuecomment-1912908978

   
   ## CI report:
   
   * 776ffbde30fa754b06a2858fa572800c5edd8ffa UNKNOWN
   
   





[jira] [Updated] (FLINK-34248) Implement restore tests for ChangelogNormalize node

2024-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34248:
---
Labels: pull-request-available  (was: )

> Implement restore tests for ChangelogNormalize node
> ---
>
> Key: FLINK-34248
> URL: https://issues.apache.org/jira/browse/FLINK-34248
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
>






[PR] [FLINK-34248] Implement restore tests for changelog normalize node [flink]

2024-01-26 Thread via GitHub


bvarghese1 opened a new pull request, #24203:
URL: https://github.com/apache/flink/pull/24203

   
   
   ## What is the purpose of the change
   
   *Add restore tests for ChangelogNormalize node*
   
   ## Verifying this change
   This change added tests and can be verified as follows:
   
   - Added restore tests for ChangelogNormalize node which verifies the 
generated compiled plan with the saved compiled plan.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)





Re: [PR] [FLINK-34113] Update flink-connector-elasticsearch to be compatible with updated SinkV2 interfaces [flink-connector-elasticsearch]

2024-01-26 Thread via GitHub


Jiabao-Sun commented on PR #88:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/88#issuecomment-1912905198

   > @Jiabao-Sun Thanks for this, but then we also need to make sure that this 
is tested against 1.19-SNAPSHOT if I'm not mistaken. Can you add that to this 
PR, so we can see if everything will compile against 1.17, 1.18 and 1.19, or 
what we need to drop?
   
   Hey @MartijnVisser, this PR can be tested against 1.19-SNAPSHOT as well.
   
   
![image](https://github.com/apache/flink-connector-elasticsearch/assets/27403841/6d57db20-3f2a-47ff-8f1b-9fec48f2812f)





[jira] [Created] (FLINK-34248) Implement restore tests for ChangelogNormalize node

2024-01-26 Thread Bonnie Varghese (Jira)
Bonnie Varghese created FLINK-34248:
---

 Summary: Implement restore tests for ChangelogNormalize node
 Key: FLINK-34248
 URL: https://issues.apache.org/jira/browse/FLINK-34248
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Bonnie Varghese
Assignee: Bonnie Varghese








Re: [PR] [FLINK-34234] Apply ShadeOptionalChecker for flink-shaded [flink-shaded]

2024-01-26 Thread via GitHub


snuyanzin commented on code in PR #136:
URL: https://github.com/apache/flink-shaded/pull/136#discussion_r1468176508


##
pom.xml:
##
@@ -65,6 +65,7 @@ under the License.
 4.1.100.Final
 2.15.3
 32.1.3-jre
+true

Review Comment:
   That was the plan. 
   The only thing that stopped me from doing it here is that I was thinking about 
first getting the ShadeOptionalChecker working, which could then be backported to 
18.x, and then syncing the maven-shade-plugin for all the modules as a separate 
task.
   
   Or do you think it is worth doing it here?
   






Re: [PR] [FLINK-34234] Apply ShadeOptionalChecker for flink-shaded [flink-shaded]

2024-01-26 Thread via GitHub


snuyanzin commented on code in PR #136:
URL: https://github.com/apache/flink-shaded/pull/136#discussion_r1467969251


##
flink-shaded-zookeeper-parent/flink-shaded-zookeeper-38/pom.xml:
##
@@ -128,6 +131,7 @@ under the License.
 org.apache.zookeeper
 zookeeper
 ${zookeeper.version}
+${flink.markBundledAsOptional}
 
 
 io.netty

Review Comment:
   Maybe I was not clear enough, sorry.
   By saying 
   >it does not make sense to add it for dependencies inside dependencyManagement
   
   I meant not in general, just for the case here.
   Here there is already an existing dependency on zookeeper for every zookeeper 
module in `dependencies`, which overrides the one from dependency management 
(including the optional tag).
   Netty comes in here only as a transitive dependency of zookeeper, and once 
zookeeper is marked optional, Maven starts to consider netty optional as well.
   I don't think this is the case for the poms you've mentioned.







Re: [PR] [FLINK-34152] Tune heap memory of autoscaled jobs [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


mxm commented on PR #762:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/762#issuecomment-1912465655

   @gyfora @1996fanrui I created a doc to go over the rationale and the ideas 
behind this PR: 
https://docs.google.com/document/d/19GXHGL_FvN6WBgFvLeXpDABog2H_qqkw1_wrpamkFSc/edit
   
   The PR itself will have to be updated because I'm no longer planning to 
incorporate the data change rate, but the heap-based adjustments remain.





Re: [PR] [FLINK-33138] DataStream API implementation [flink-connector-prometheus]

2024-01-26 Thread via GitHub


nicusX commented on code in PR #1:
URL: 
https://github.com/apache/flink-connector-prometheus/pull/1#discussion_r1467930051


##
prometheus-connector/src/main/java/org/apache/flink/connector/prometheus/sink/PrometheusSink.java:
##
@@ -0,0 +1,117 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.connector.prometheus.sink;
+
+import org.apache.flink.connector.base.sink.AsyncSinkBase;
+import org.apache.flink.connector.base.sink.writer.BufferedRequestState;
+import org.apache.flink.connector.base.sink.writer.ElementConverter;
+import 
org.apache.flink.connector.prometheus.sink.http.PrometheusAsyncHttpClientBuilder;
+import org.apache.flink.connector.prometheus.sink.prometheus.Types;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import org.apache.hc.client5.http.impl.async.CloseableHttpAsyncClient;
+
+import java.util.Collection;
+
+/** Sink implementation accepting {@link PrometheusTimeSeries} as inputs. */
+public class PrometheusSink extends AsyncSinkBase {

Review Comment:
   Marked all the classes representing the public interface of the connector as 
`@Public`: `PrometheusSink`, `PrometheusTimeSeries`, and 
`PrometheusRequestSigner`
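
   For reference, a minimal sketch of how Flink's stability annotations are 
typically applied (illustrative class names only; `@Public` marks API that users 
may rely on, `@Internal` marks implementation details):

```java
import org.apache.flink.annotation.Internal;
import org.apache.flink.annotation.Public;

/** User-facing entry point: part of the connector's public API surface. */
@Public
class ExampleSink {
    // builder methods and configuration that users interact with ...
}

/** Created internally by the sink; not meant to be referenced by user code. */
@Internal
class ExampleSinkWriter {
    // implementation details that may change between releases ...
}
```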






Re: [PR] [FLINK-33138] DataStream API implementation [flink-connector-prometheus]

2024-01-26 Thread via GitHub


nicusX commented on code in PR #1:
URL: 
https://github.com/apache/flink-connector-prometheus/pull/1#discussion_r1467925863


##
prometheus-connector/src/main/java/org/apache/flink/connector/prometheus/sink/PrometheusTimeSeries.java:
##
@@ -0,0 +1,161 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.connector.prometheus.sink;
+
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * Pojo used as sink input, containing a single TimeSeries: a list of Labels 
and a list of Samples.
+ *
+ * metricName is mapped in Prometheus to the value of the mandatory label 
named '__name__'.
+ * The other labels, as key/value pairs, are appended after the '__name__' 
label.
+ */
+public class PrometheusTimeSeries implements Serializable {
+/** A single Label. */
+public static class Label implements Serializable {
+private final String name;
+private final String value;
+
+public Label(String name, String value) {
+this.name = name;
+this.value = value;
+}
+
+public String getName() {
+return name;
+}
+
+public String getValue() {
+return value;
+}
+
+@Override
+public boolean equals(Object o) {
+if (this == o) {
+return true;
+}
+if (o == null || getClass() != o.getClass()) {
+return false;
+}
+Label label = (Label) o;
+return new EqualsBuilder()
+.append(name, label.name)
+.append(value, label.value)
+.isEquals();
+}
+
+@Override
+public int hashCode() {
+return new HashCodeBuilder(17, 
37).append(name).append(value).toHashCode();
+}
+}
+
+/** A single Sample. */
+public static class Sample implements Serializable {

Review Comment:
   implemented






Re: [PR] [FLINK-33138] DataStream API implementation [flink-connector-prometheus]

2024-01-26 Thread via GitHub


nicusX commented on code in PR #1:
URL: 
https://github.com/apache/flink-connector-prometheus/pull/1#discussion_r1467921494


##
prometheus-connector/src/main/java/org/apache/flink/connector/prometheus/sink/PrometheusSinkWriter.java:
##
@@ -0,0 +1,267 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.connector.prometheus.sink;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.sink.writer.AsyncSinkWriter;
+import org.apache.flink.connector.base.sink.writer.BufferedRequestState;
+import org.apache.flink.connector.base.sink.writer.ElementConverter;
+import 
org.apache.flink.connector.prometheus.sink.http.RemoteWriteResponseClassifier;
+import org.apache.flink.connector.prometheus.sink.prometheus.Remote;
+import org.apache.flink.connector.prometheus.sink.prometheus.Types;
+
+import org.apache.hc.client5.http.async.methods.SimpleHttpRequest;
+import org.apache.hc.client5.http.async.methods.SimpleHttpResponse;
+import org.apache.hc.client5.http.impl.async.CloseableHttpAsyncClient;
+import org.apache.hc.core5.concurrent.FutureCallback;
+import org.apache.hc.core5.io.CloseMode;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.xerial.snappy.Snappy;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.function.Consumer;
+
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_SAMPLES_DROPPED;
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_SAMPLES_NON_RETRIABLE_DROPPED;
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_SAMPLES_OUT;
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_SAMPLES_RETRY_LIMIT_DROPPED;
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_WRITE_REQUESTS_OUT;
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_WRITE_REQUESTS_PERMANENTLY_FAILED;
+
+/** Writer, taking care of batching the {@link PrometheusTimeSeries} and 
handling retries. */
+public class PrometheusSinkWriter extends 
AsyncSinkWriter {
+
+/**
+ * Batching of this sink is in terms of Samples, not bytes. The goal is to 
adaptively increase
+ * the number of Samples in each batch, a WriteRequest sent to Prometheus, 
to a configurable
+ * number. This is the parameter maxBatchSizeInBytes.
+ *
+ * getSizeInBytes(requestEntry) returns the number of Samples (not 
bytes) and
+ * maxBatchSizeInBytes is actually in terms of Samples (not bytes).
+ *
+ * In AsyncSinkWriter, maxBatchSize is in terms of requestEntries 
(TimeSeries). But because
+ * each TimeSeries contains 1+ Samples, we set maxBatchSize = 
maxBatchSizeInBytes.
+ *
+ * maxRecordSizeInBytes is also calculated in the same unit assumed by 
getSizeInBytes(..). In
+ * our case it is the max number of Samples in a single TimeSeries sent to 
the Sink. We are
+ * limiting the number of Samples in each TimeSeries to the max batch 
size, setting
+ * maxRecordSizeInBytes = maxBatchSizeInBytes.
+ */
+private static final Logger LOG = 
LoggerFactory.getLogger(PrometheusSinkWriter.class);
+
+private final SinkCounters counters;
+private final CloseableHttpAsyncClient asyncHttpClient;
+private final PrometheusRemoteWriteHttpRequestBuilder requestBuilder;
+
+public PrometheusSinkWriter(
+ElementConverter 
elementConverter,
+Sink.InitContext context,
+int maxInFlightRequests,
+int maxBufferedRequests,
+int maxBatchSizeInSamples,
+long maxTimeInBufferMS,
+String prometheusRemoteWriteUrl,
+CloseableHttpAsyncClient asyncHttpClient,
+SinkCounters counters,
+PrometheusRequestSigner requestSigner) {
+this(
+elementConverter,
+context,
+maxInFlightRequests,
+maxBufferedRequests,
+
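
The javadoc above describes re-purposing AsyncSinkWriter's byte-based limits to 
count Samples instead. A minimal standalone sketch of that accounting trick 
(hypothetical `TimeSeries` type and names, not the connector's actual code):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative only: batching entries by total sample count, mimicking how the
 * writer reuses AsyncSinkWriter's byte-based cap (maxBatchSizeInBytes) as a
 * cap on Samples per write request.
 */
final class SampleBatcher {

    /** Stand-in for a time series carrying one or more samples (hypothetical type). */
    static final class TimeSeries {
        final int sampleCount;

        TimeSeries(int sampleCount) {
            this.sampleCount = sampleCount;
        }
    }

    private final long maxSamplesPerBatch; // plays the role of maxBatchSizeInBytes
    private final List<TimeSeries> batch = new ArrayList<>();
    private long samplesInBatch;

    SampleBatcher(long maxSamplesPerBatch) {
        this.maxSamplesPerBatch = maxSamplesPerBatch;
    }

    /** Counterpart of getSizeInBytes(requestEntry): returns samples, not bytes. */
    long sizeOf(TimeSeries entry) {
        return entry.sampleCount;
    }

    /**
     * Adds an entry; returns the completed batch when the cap would be exceeded,
     * otherwise null. The new entry then starts (or joins) the next batch.
     */
    List<TimeSeries> add(TimeSeries entry) {
        List<TimeSeries> completed = null;
        if (!batch.isEmpty() && samplesInBatch + sizeOf(entry) > maxSamplesPerBatch) {
            completed = new ArrayList<>(batch);
            batch.clear();
            samplesInBatch = 0;
        }
        batch.add(entry);
        samplesInBatch += sizeOf(entry);
        return completed;
    }
}
```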

Re: [PR] [FLINK-33138] DataStream API implementation [flink-connector-prometheus]

2024-01-26 Thread via GitHub


nicusX commented on code in PR #1:
URL: 
https://github.com/apache/flink-connector-prometheus/pull/1#discussion_r1467886908


##
prometheus-connector/src/main/java/org/apache/flink/connector/prometheus/sink/PrometheusSinkWriter.java:
##
@@ -0,0 +1,267 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.connector.prometheus.sink;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.sink.writer.AsyncSinkWriter;
+import org.apache.flink.connector.base.sink.writer.BufferedRequestState;
+import org.apache.flink.connector.base.sink.writer.ElementConverter;
+import 
org.apache.flink.connector.prometheus.sink.http.RemoteWriteResponseClassifier;
+import org.apache.flink.connector.prometheus.sink.prometheus.Remote;
+import org.apache.flink.connector.prometheus.sink.prometheus.Types;
+
+import org.apache.hc.client5.http.async.methods.SimpleHttpRequest;
+import org.apache.hc.client5.http.async.methods.SimpleHttpResponse;
+import org.apache.hc.client5.http.impl.async.CloseableHttpAsyncClient;
+import org.apache.hc.core5.concurrent.FutureCallback;
+import org.apache.hc.core5.io.CloseMode;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.xerial.snappy.Snappy;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.function.Consumer;
+
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_SAMPLES_DROPPED;
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_SAMPLES_NON_RETRIABLE_DROPPED;
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_SAMPLES_OUT;
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_SAMPLES_RETRY_LIMIT_DROPPED;
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_WRITE_REQUESTS_OUT;
+import static 
org.apache.flink.connector.prometheus.sink.SinkCounters.SinkCounter.NUM_WRITE_REQUESTS_PERMANENTLY_FAILED;
+
+/** Writer, taking care of batching the {@link PrometheusTimeSeries} and 
handling retries. */
+public class PrometheusSinkWriter extends 
AsyncSinkWriter {
+
+/**
+ * Batching of this sink is in terms of Samples, not bytes. The goal is to 
adaptively increase
+ * the number of Samples in each batch, a WriteRequest sent to Prometheus, 
to a configurable
+ * number. This is the parameter maxBatchSizeInBytes.
+ *
+ * getSizeInBytes(requestEntry) returns the number of Samples (not 
bytes) and
+ * maxBatchSizeInBytes is actually in terms of Samples (not bytes).
+ *
+ * In AsyncSinkWriter, maxBatchSize is in terms of requestEntries 
(TimeSeries). But because
+ * each TimeSeries contains 1+ Samples, we set maxBatchSize = 
maxBatchSizeInBytes.
+ *
+ * maxRecordSizeInBytes is also calculated in the same unit assumed by 
getSizeInBytes(..). In
+ * our case is the max number of Samples in a single TimeSeries sent to 
the Sink. We are
+ * limiting the number of Samples in each TimeSeries to the max batch 
size, setting
+ * maxRecordSizeInBytes = maxBatchSizeInBytes.
+ */
+private static final Logger LOG = 
LoggerFactory.getLogger(PrometheusSinkWriter.class);
+
+private final SinkCounters counters;
+private final CloseableHttpAsyncClient asyncHttpClient;
+private final PrometheusRemoteWriteHttpRequestBuilder requestBuilder;
+
+public PrometheusSinkWriter(
+ElementConverter 
elementConverter,
+Sink.InitContext context,
+int maxInFlightRequests,
+int maxBufferedRequests,
+int maxBatchSizeInSamples,
+long maxTimeInBufferMS,
+String prometheusRemoteWriteUrl,
+CloseableHttpAsyncClient asyncHttpClient,
+SinkCounters counters,
+PrometheusRequestSigner requestSigner) {
+this(
+elementConverter,
+context,
+maxInFlightRequests,
+maxBufferedRequests,
+

Re: [PR] [FLINK-33914][ci] Adds basic Flink CI workflow [flink]

2024-01-26 Thread via GitHub


XComp commented on PR #23970:
URL: https://github.com/apache/flink/pull/23970#issuecomment-1912346703

   Deprecation warnings started to appear, which forced me to update the versions 
of `actions/checkout` and `actions/cache` to `v4`.
   





[jira] [Commented] (FLINK-33297) FLIP-366: Support standard YAML for FLINK configuration

2024-01-26 Thread Junrui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811331#comment-17811331
 ] 

Junrui Li commented on FLINK-33297:
---

[~lincoln.86xy] Not yet, sorry for the misleading information. I missed 
creating the JIRA for documenting this FLIP. I have now added that. Once the 
documentation is completed, I will close this JIRA.

> FLIP-366: Support standard YAML for FLINK configuration
> ---
>
> Key: FLINK-33297
> URL: https://issues.apache.org/jira/browse/FLINK-33297
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: 2.0-related, pull-request-available
> Fix For: 1.19.0
>
>
> Support standard YAML for FLINK configuration





[jira] [Created] (FLINK-34247) Document FLIP-366: Support standard YAML for FLINK configuration

2024-01-26 Thread Junrui Li (Jira)
Junrui Li created FLINK-34247:
-

 Summary: Document FLIP-366: Support standard YAML for FLINK 
configuration
 Key: FLINK-34247
 URL: https://issues.apache.org/jira/browse/FLINK-34247
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: Junrui Li
 Fix For: 1.19.0








[jira] [Created] (FLINK-34246) Allow only archive failed job to history server

2024-01-26 Thread Lim Qing Wei (Jira)
Lim Qing Wei created FLINK-34246:


 Summary: Allow only archive failed job to history server
 Key: FLINK-34246
 URL: https://issues.apache.org/jira/browse/FLINK-34246
 Project: Flink
  Issue Type: Improvement
  Components: Client / Job Submission
Reporter: Lim Qing Wei


Hi, I wonder if we can support archiving only failed jobs to the History Server.

The History Server is a great tool that allows us to check on previous jobs. We 
are using Flink batch jobs which can run many times throughout the week, and we 
only need to check a job on the History Server when it has failed.

It would be more efficient if we could choose to only store a subset of the data.

 





[jira] [Closed] (FLINK-33006) Add e2e test for Kubernetes Operator HA

2024-01-26 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-33006.
--
Resolution: Fixed

merged to main 6be277cd0c7421f4822c49296198f4a34b2cd721

> Add e2e test for Kubernetes Operator HA
> ---
>
> Key: FLINK-33006
> URL: https://issues.apache.org/jira/browse/FLINK-33006
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.8.0
>
>
> There is currently no proper test coverage for operator HA





Re: [PR] [FLINK-33006] add e2e for Flink operator HA [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


gyfora merged PR #756:
URL: https://github.com/apache/flink-kubernetes-operator/pull/756





Re: [PR] [FLINK-33006] add e2e for Flink operator HA [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


HuangZhenQiu commented on code in PR #756:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/756#discussion_r1467819326


##
e2e-tests/test_flink_operator_ha.sh:
##
@@ -0,0 +1,71 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+###
+
+# This script tests the operator HA:
+# 1. Deploy a new flink deployment and wait for job manager to come up
+# 2. Verify the operator log on existing leader
+# 3. Delete the leader operator pod
+# 4. Verify the new leader is different with the old one
+# 5. Check operator log for the flink deployment in the new leader
+
+SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
+source "${SCRIPT_DIR}/utils.sh"
+
+CLUSTER_ID="flink-example-statemachine"
+APPLICATION_YAML="${SCRIPT_DIR}/data/flinkdep-cr.yaml"
+TIMEOUT=300
+
+on_exit cleanup_and_exit "$APPLICATION_YAML" $TIMEOUT $CLUSTER_ID
+
+retry_times 5 30 "kubectl apply -f $APPLICATION_YAML" || exit 1
+
+wait_for_jobmanager_running $CLUSTER_ID $TIMEOUT
+jm_pod_name=$(get_jm_pod_name $CLUSTER_ID)
+
+wait_for_logs $jm_pod_name "Completed checkpoint [0-9]+ for job" ${TIMEOUT} || 
exit 1
+wait_for_status flinkdep/flink-example-statemachine 
'.status.jobManagerDeploymentStatus' READY ${TIMEOUT} || exit 1
+wait_for_status flinkdep/flink-example-statemachine '.status.jobStatus.state' 
RUNNING ${TIMEOUT} || exit 1
+
+job_id=$(kubectl logs $jm_pod_name -c flink-main-container | grep -E -o 'Job 
[a-z0-9]+ is submitted' | awk '{print $2}')

Review Comment:
   From my testing experience, we need to wait until the job is in RUNNING 
status before we can check the operator log for the message "Resource fully 
reconciled, nothing to do". Checking other messages is not stable and easily 
causes a timeout if we don't wait for the job to reach the running state.






Re: [PR] [FLINK-33006] add e2e for Flink operator HA [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


gyfora commented on code in PR #756:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/756#discussion_r1467821898


##
e2e-tests/test_flink_operator_ha.sh:
##
@@ -0,0 +1,71 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+###
+
+# This script tests the operator HA:
+# 1. Deploy a new flink deployment and wait for job manager to come up
+# 2. Verify the operator log on existing leader
+# 3. Delete the leader operator pod
+# 4. Verify the new leader is different with the old one
+# 5. Check operator log for the flink deployment in the new leader
+
+SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
+source "${SCRIPT_DIR}/utils.sh"
+
+CLUSTER_ID="flink-example-statemachine"
+APPLICATION_YAML="${SCRIPT_DIR}/data/flinkdep-cr.yaml"
+TIMEOUT=300
+
+on_exit cleanup_and_exit "$APPLICATION_YAML" $TIMEOUT $CLUSTER_ID
+
+retry_times 5 30 "kubectl apply -f $APPLICATION_YAML" || exit 1
+
+wait_for_jobmanager_running $CLUSTER_ID $TIMEOUT
+jm_pod_name=$(get_jm_pod_name $CLUSTER_ID)
+
+wait_for_logs $jm_pod_name "Completed checkpoint [0-9]+ for job" ${TIMEOUT} || 
exit 1
+wait_for_status flinkdep/flink-example-statemachine 
'.status.jobManagerDeploymentStatus' READY ${TIMEOUT} || exit 1
+wait_for_status flinkdep/flink-example-statemachine '.status.jobStatus.state' 
RUNNING ${TIMEOUT} || exit 1
+
+job_id=$(kubectl logs $jm_pod_name -c flink-main-container | grep -E -o 'Job 
[a-z0-9]+ is submitted' | awk '{print $2}')

Review Comment:
   ok!






Re: [PR] [FLINK-33006] add e2e for Flink operator HA [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


HuangZhenQiu commented on code in PR #756:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/756#discussion_r1467815393


##
.github/workflows/ci.yml:
##
@@ -113,12 +114,18 @@ jobs:
 test: test_autoscaler.sh
   - version: v1_15
 test: test_dynamic_config.sh
+  - version: v1_15
+test: test_flink_operator_ha.sh
   - version: v1_16
 test: test_autoscaler.sh
   - version: v1_16
 test: test_dynamic_config.sh
+  - version: v1_16
+test: test_flink_operator_ha.sh
   - version: v1_17
 test: test_dynamic_config.sh
+  - version: v1_17
+test: test_flink_operator_ha.sh

Review Comment:
   Yes, that is why I exclude them from v1_15 to v1_17






Re: [PR] [FLINK-33006] add e2e for Flink operator HA [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


HuangZhenQiu commented on code in PR #756:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/756#discussion_r1467819326


##
e2e-tests/test_flink_operator_ha.sh:
##
@@ -0,0 +1,71 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+###
+
+# This script tests operator HA:
+# 1. Deploy a new flink deployment and wait for the job manager to come up
+# 2. Verify the operator log on the existing leader
+# 3. Delete the leader operator pod
+# 4. Verify the new leader is different from the old one
+# 5. Check the operator log for the flink deployment on the new leader
+
+SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
+source "${SCRIPT_DIR}/utils.sh"
+
+CLUSTER_ID="flink-example-statemachine"
+APPLICATION_YAML="${SCRIPT_DIR}/data/flinkdep-cr.yaml"
+TIMEOUT=300
+
+on_exit cleanup_and_exit "$APPLICATION_YAML" $TIMEOUT $CLUSTER_ID
+
+retry_times 5 30 "kubectl apply -f $APPLICATION_YAML" || exit 1
+
+wait_for_jobmanager_running $CLUSTER_ID $TIMEOUT
+jm_pod_name=$(get_jm_pod_name $CLUSTER_ID)
+
+wait_for_logs $jm_pod_name "Completed checkpoint [0-9]+ for job" ${TIMEOUT} || exit 1
+wait_for_status flinkdep/flink-example-statemachine '.status.jobManagerDeploymentStatus' READY ${TIMEOUT} || exit 1
+wait_for_status flinkdep/flink-example-statemachine '.status.jobStatus.state' RUNNING ${TIMEOUT} || exit 1
+
+job_id=$(kubectl logs $jm_pod_name -c flink-main-container | grep -E -o 'Job [a-z0-9]+ is submitted' | awk '{print $2}')

Review Comment:
   From my testing experience, we need to wait until the job is in RUNNING status; only then can we check the operator log for the message "Resource fully reconciled, nothing to do". Checking for other messages is not stable and easily causes a timeout when we do not wait for the job to reach the running state.
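   
   A minimal sketch of that ordering, reusing the helpers sourced from utils.sh (the label selector used to locate the operator pod is an assumption, not taken from this PR):
   
   ```bash
   # 1) Block until the job itself reports RUNNING.
   wait_for_status flinkdep/flink-example-statemachine '.status.jobStatus.state' RUNNING ${TIMEOUT} || exit 1

   # 2) Hypothetical lookup of the current operator pod by label.
   operator_pod=$(kubectl get pods -l app.kubernetes.io/name=flink-kubernetes-operator \
     -o jsonpath='{.items[0].metadata.name}')

   # 3) Only now is this log line expected to show up reliably.
   wait_for_logs $operator_pod "Resource fully reconciled, nothing to do" ${TIMEOUT} || exit 1
   ```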



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33006] add e2e for Flink operator HA [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


gyfora commented on code in PR #756:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/756#discussion_r1467818628


##
.github/workflows/ci.yml:
##
@@ -113,12 +114,18 @@ jobs:
 test: test_autoscaler.sh
   - version: v1_15
 test: test_dynamic_config.sh
+  - version: v1_15
+test: test_flink_operator_ha.sh
   - version: v1_16
 test: test_autoscaler.sh
   - version: v1_16
 test: test_dynamic_config.sh
+  - version: v1_16
+test: test_flink_operator_ha.sh
   - version: v1_17
 test: test_dynamic_config.sh
+  - version: v1_17
+test: test_flink_operator_ha.sh

Review Comment:
   ah sorry, I thought these were includes. You are right
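   
   For reference, a generic sketch of the two matrix sections that caused the mix-up (versions and test names here are illustrative, not the actual ci.yml):
   
   ```yaml
   strategy:
     matrix:
       version: [v1_15, v1_16, v1_17, v1_18]
       test: [test_autoscaler.sh, test_flink_operator_ha.sh]
       # 'exclude' removes combinations from the cross product above,
       # so listing a version/test pair here skips that run ...
       exclude:
         - version: v1_15
           test: test_flink_operator_ha.sh
       # ... whereas 'include' would add extra combinations instead.
   ```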



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34115) TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811328#comment-17811328
 ] 

Matthias Pohl edited comment on FLINK-34115 at 1/26/24 3:44 PM:


1.18 (this one included the fix from above): 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56944=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11746]


was (Author: mapohl):
1.18: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56944=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11746]

> TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails
> --
>
> Key: FLINK-34115
> URL: https://issues.apache.org/jira/browse/FLINK-34115
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.18.2
>
>
> It failed twice in the same pipeline run:
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=de826397-1924-5900-0034-51895f69d4b7=f311e913-93a2-5a37-acab-4a63e1328f94=11613]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11963]
> {code:java}
>  Jan 14 01:20:01 01:20:01.949 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 29.07 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase
> Jan 14 01:20:01 01:20:01.949 [ERROR] 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate
>  -- Time elapsed: 0.518 s <<< FAILURE!
> Jan 14 01:20:01 org.opentest4j.AssertionFailedError: 
> Jan 14 01:20:01 
> Jan 14 01:20:01 expected: List((true,6,1), (false,6,1), (true,6,1), 
> (true,3,2), (false,6,1), (false,3,2), (true,6,1), (true,5,2), (false,6,1), 
> (false,5,2), (true,8,1), (true,6,2), (false,8,1), (false,6,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01  but was: List((true,3,1), (false,3,1), (true,5,1), 
> (true,3,2), (false,5,1), (false,3,2), (true,8,1), (true,5,2), (false,8,1), 
> (false,5,2), (true,8,1), (true,5,2), (false,8,1), (false,5,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Jan 14 01:20:01   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.checkRank$1(TableAggregateITCase.scala:122)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate(TableAggregateITCase.scala:69)
> Jan 14 01:20:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Jan 14 01:20:01   at 
> scala.collection.convert.Wrappers$IteratorWrapper.forEachRemaining(Wrappers.scala:26)
> Jan 14 01:20:01   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Jan 14 01:20:01   at 
> 

Re: [PR] [FLINK-33006] add e2e for Flink operator HA [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


HuangZhenQiu commented on code in PR #756:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/756#discussion_r1467815393


##
.github/workflows/ci.yml:
##
@@ -113,12 +114,18 @@ jobs:
 test: test_autoscaler.sh
   - version: v1_15
 test: test_dynamic_config.sh
+  - version: v1_15
+test: test_flink_operator_ha.sh
   - version: v1_16
 test: test_autoscaler.sh
   - version: v1_16
 test: test_dynamic_config.sh
+  - version: v1_16
+test: test_flink_operator_ha.sh
   - version: v1_17
 test: test_dynamic_config.sh
+  - version: v1_17
+test: test_flink_operator_ha.sh

Review Comment:
   Yes, that is why I exclude it from v1_15 to v1_17.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34206) CacheITCase.testRetryOnCorruptedClusterDataset(Path) failed

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811329#comment-17811329
 ] 

Matthias Pohl commented on FLINK-34206:
---

The following builds don't have the test disabled, yet. I'm adding them for 
documentation purposes:
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=b78d9d30-509a-5cea-1fef-db7abaa325ae=9364]
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942=logs=baf26b34-3c6a-54e8-f93f-cf269b32f802=8c9d126d-57d2-5a9e-a8c8-ff53f7b35cd9=9216]
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56948=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=8829]

> CacheITCase.testRetryOnCorruptedClusterDataset(Path) failed
> ---
>
> Key: FLINK-34206
> URL: https://issues.apache.org/jira/browse/FLINK-34206
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: xingbe
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56728=logs=a657ddbf-d986-5381-9649-342d9c92e7fb=dc085d4a-05c8-580e-06ab-21f5624dab16=8763
> {code}
> Jan 23 01:39:48 01:39:48.152 [ERROR] Tests run: 6, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 19.24 s <<< FAILURE! -- in 
> org.apache.flink.test.streaming.runtime.CacheITCase
> Jan 23 01:39:48 01:39:48.152 [ERROR] 
> org.apache.flink.test.streaming.runtime.CacheITCase.testRetryOnCorruptedClusterDataset(Path)
>  -- Time elapsed: 4.755 s <<< ERROR!
> Jan 23 01:39:48 org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
> Jan 23 01:39:48   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> Jan 23 01:39:48   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
> Jan 23 01:39:48   at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:646)
> Jan 23 01:39:48   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
> Jan 23 01:39:48   at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2179)
> Jan 23 01:39:48   at 
> org.apache.flink.runtime.rpc.pekko.PekkoInvocationHandler.lambda$invokeRpc$1(PekkoInvocationHandler.java:268)
> Jan 23 01:39:48   at 
> java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
> Jan 23 01:39:48   at 
> java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
> Jan 23 01:39:48   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
> Jan 23 01:39:48   at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2179)
> Jan 23 01:39:48   at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1287)
> Jan 23 01:39:48   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$1(ClassLoadingUtils.java:93)
> Jan 23 01:39:48   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> Jan 23 01:39:48   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> Jan 23 01:39:48   at 
> java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
> Jan 23 01:39:48   at 
> java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
> Jan 23 01:39:48   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
> Jan 23 01:39:48   at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2179)
> Jan 23 01:39:48   at 
> org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$1.onComplete(ScalaFutureUtils.java:47)
> Jan 23 01:39:48   at 
> org.apache.pekko.dispatch.OnComplete.internal(Future.scala:310)
> Jan 23 01:39:48   at 
> org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307)
> Jan 23 01:39:48   at 
> org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234)
> Jan 23 01:39:48   at 
> org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231)
> Jan 23 01:39:48   at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
> Jan 23 01:39:48   at 
> 

[jira] [Comment Edited] (FLINK-34115) TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811328#comment-17811328
 ] 

Matthias Pohl edited comment on FLINK-34115 at 1/26/24 3:44 PM:


1.18 (this one included the fix from above): 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56944=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11746]


was (Author: mapohl):
1.18 (this one included the fix from above): 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56944=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11746]

> TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails
> --
>
> Key: FLINK-34115
> URL: https://issues.apache.org/jira/browse/FLINK-34115
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.18.2
>
>
> It failed twice in the same pipeline run:
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=de826397-1924-5900-0034-51895f69d4b7=f311e913-93a2-5a37-acab-4a63e1328f94=11613]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11963]
> {code:java}
>  Jan 14 01:20:01 01:20:01.949 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 29.07 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase
> Jan 14 01:20:01 01:20:01.949 [ERROR] 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate
>  -- Time elapsed: 0.518 s <<< FAILURE!
> Jan 14 01:20:01 org.opentest4j.AssertionFailedError: 
> Jan 14 01:20:01 
> Jan 14 01:20:01 expected: List((true,6,1), (false,6,1), (true,6,1), 
> (true,3,2), (false,6,1), (false,3,2), (true,6,1), (true,5,2), (false,6,1), 
> (false,5,2), (true,8,1), (true,6,2), (false,8,1), (false,6,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01  but was: List((true,3,1), (false,3,1), (true,5,1), 
> (true,3,2), (false,5,1), (false,3,2), (true,8,1), (true,5,2), (false,8,1), 
> (false,5,2), (true,8,1), (true,5,2), (false,8,1), (false,5,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Jan 14 01:20:01   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.checkRank$1(TableAggregateITCase.scala:122)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate(TableAggregateITCase.scala:69)
> Jan 14 01:20:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Jan 14 01:20:01   at 
> scala.collection.convert.Wrappers$IteratorWrapper.forEachRemaining(Wrappers.scala:26)
> Jan 14 01:20:01   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Jan 14 01:20:01   at 
> 

[jira] [Comment Edited] (FLINK-34115) TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811328#comment-17811328
 ] 

Matthias Pohl edited comment on FLINK-34115 at 1/26/24 3:42 PM:


1.18: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56944=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11746]


was (Author: mapohl):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56944=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11746

> TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails
> --
>
> Key: FLINK-34115
> URL: https://issues.apache.org/jira/browse/FLINK-34115
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.18.2
>
>
> It failed twice in the same pipeline run:
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=de826397-1924-5900-0034-51895f69d4b7=f311e913-93a2-5a37-acab-4a63e1328f94=11613]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11963]
> {code:java}
>  Jan 14 01:20:01 01:20:01.949 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 29.07 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase
> Jan 14 01:20:01 01:20:01.949 [ERROR] 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate
>  -- Time elapsed: 0.518 s <<< FAILURE!
> Jan 14 01:20:01 org.opentest4j.AssertionFailedError: 
> Jan 14 01:20:01 
> Jan 14 01:20:01 expected: List((true,6,1), (false,6,1), (true,6,1), 
> (true,3,2), (false,6,1), (false,3,2), (true,6,1), (true,5,2), (false,6,1), 
> (false,5,2), (true,8,1), (true,6,2), (false,8,1), (false,6,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01  but was: List((true,3,1), (false,3,1), (true,5,1), 
> (true,3,2), (false,5,1), (false,3,2), (true,8,1), (true,5,2), (false,8,1), 
> (false,5,2), (true,8,1), (true,5,2), (false,8,1), (false,5,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Jan 14 01:20:01   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.checkRank$1(TableAggregateITCase.scala:122)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate(TableAggregateITCase.scala:69)
> Jan 14 01:20:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Jan 14 01:20:01   at 
> scala.collection.convert.Wrappers$IteratorWrapper.forEachRemaining(Wrappers.scala:26)
> Jan 14 01:20:01   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 14 01:20:01   at 
> 

[jira] [Commented] (FLINK-34115) TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811328#comment-17811328
 ] 

Matthias Pohl commented on FLINK-34115:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56944=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11746

> TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails
> --
>
> Key: FLINK-34115
> URL: https://issues.apache.org/jira/browse/FLINK-34115
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.18.2
>
>
> It failed twice in the same pipeline run:
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=de826397-1924-5900-0034-51895f69d4b7=f311e913-93a2-5a37-acab-4a63e1328f94=11613]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11963]
> {code:java}
>  Jan 14 01:20:01 01:20:01.949 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 29.07 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase
> Jan 14 01:20:01 01:20:01.949 [ERROR] 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate
>  -- Time elapsed: 0.518 s <<< FAILURE!
> Jan 14 01:20:01 org.opentest4j.AssertionFailedError: 
> Jan 14 01:20:01 
> Jan 14 01:20:01 expected: List((true,6,1), (false,6,1), (true,6,1), 
> (true,3,2), (false,6,1), (false,3,2), (true,6,1), (true,5,2), (false,6,1), 
> (false,5,2), (true,8,1), (true,6,2), (false,8,1), (false,6,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01  but was: List((true,3,1), (false,3,1), (true,5,1), 
> (true,3,2), (false,5,1), (false,3,2), (true,8,1), (true,5,2), (false,8,1), 
> (false,5,2), (true,8,1), (true,5,2), (false,8,1), (false,5,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Jan 14 01:20:01   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.checkRank$1(TableAggregateITCase.scala:122)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate(TableAggregateITCase.scala:69)
> Jan 14 01:20:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Jan 14 01:20:01   at 
> scala.collection.convert.Wrappers$IteratorWrapper.forEachRemaining(Wrappers.scala:26)
> Jan 14 01:20:01   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> Jan 14 01:20:01   at 
> 

[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811327#comment-17811327
 ] 

Matthias Pohl commented on FLINK-31472:
---

1.18: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56944=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd=10575

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> When running mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         

[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811326#comment-17811326
 ] 

Matthias Pohl commented on FLINK-31472:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd=10596

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> When running mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> 

[jira] [Created] (FLINK-34245) CassandraSinkTest.test_cassandra_sink fails under JDK17 and JDK21 due to InaccessibleObjectException

2024-01-26 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34245:
-

 Summary: CassandraSinkTest.test_cassandra_sink fails under JDK17 
and JDK21 due to InaccessibleObjectException
 Key: FLINK-34245
 URL: https://issues.apache.org/jira/browse/FLINK-34245
 Project: Flink
  Issue Type: Bug
  Components: API / Python, Connectors / Cassandra
Affects Versions: 1.19.0
Reporter: Matthias Pohl


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942=logs=b53e1644-5cb4-5a3b-5d48-f523f39bcf06=b68c9f5c-04c9-5c75-3862-a3a27aabbce3]

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942=logs=60960eae-6f09-579e-371e-29814bdd1adc=7a70c083-6a74-5348-5106-30a76c29d8fa=63680]
{code:java}
Jan 26 01:29:27 E   py4j.protocol.Py4JJavaError: An error 
occurred while calling 
z:org.apache.flink.python.util.PythonConfigUtil.configPythonOperator.
Jan 26 01:29:27 E   : 
java.lang.reflect.InaccessibleObjectException: Unable to make field final 
java.util.Map java.util.Collections$UnmodifiableMap.m accessible: module 
java.base does not "opens java.util" to unnamed module @17695df3
Jan 26 01:29:27 E   at 
java.base/java.lang.reflect.AccessibleObject.throwInaccessibleObjectException(AccessibleObject.java:391)
Jan 26 01:29:27 E   at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:367)
Jan 26 01:29:27 E   at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:315)
Jan 26 01:29:27 E   at 
java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:183)
Jan 26 01:29:27 E   at 
java.base/java.lang.reflect.Field.setAccessible(Field.java:177)
Jan 26 01:29:27 E   at 
org.apache.flink.python.util.PythonConfigUtil.registerPythonBroadcastTransformationTranslator(PythonConfigUtil.java:357)
Jan 26 01:29:27 E   at 
org.apache.flink.python.util.PythonConfigUtil.configPythonOperator(PythonConfigUtil.java:101)
Jan 26 01:29:27 E   at 
java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
Jan 26 01:29:27 E   at 
java.base/java.lang.reflect.Method.invoke(Method.java:580)
Jan 26 01:29:27 E   at 
org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
Jan 26 01:29:27 E   at 
org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
Jan 26 01:29:27 E   at 
org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
Jan 26 01:29:27 E   at 
org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
Jan 26 01:29:27 E   at 
org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
Jan 26 01:29:27 E   at 
org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
Jan 26 01:29:27 E   at 
java.base/java.lang.Thread.run(Thread.java:1583) {code}
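
For context, this is the standard JDK 17+ strong-encapsulation failure mode; a minimal standalone reproduction (illustrative, not Flink code) is shown below, and it goes away when the JVM is started with {{--add-opens java.base/java.util=ALL-UNNAMED}}:
{code:java}
import java.lang.reflect.Field;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class AddOpensCheck {
    public static void main(String[] args) throws Exception {
        Map<String, String> wrapped = Collections.unmodifiableMap(new HashMap<>());
        // The same field PythonConfigUtil reflects on in the stack trace above.
        Field m = wrapped.getClass().getDeclaredField("m");
        // trySetAccessible() returns false (instead of throwing
        // InaccessibleObjectException) when java.base does not open
        // java.util to the unnamed module.
        if (!m.trySetAccessible()) {
            System.err.println("Run with --add-opens java.base/java.util=ALL-UNNAMED");
        }
    }
}
{code}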



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34178][autoscaler] Fix the bug that observed scaling restart time is always great than `stabilization.interval` [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


mxm commented on PR #759:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/759#issuecomment-1912256491

   > Hi @1996fanrui , thanks for catching this, I did not realize that 
"history" only ever spans for one rescale cycle. 
   
   There is the metrics history and the scaling history. The former is only 
preserved for the current metric window (to tolerate restarts during the metric 
window). The latter is stored according to age / size.
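   
   As a schematic illustration of those two retention policies (not the operator's actual classes; the knobs are hypothetical):
   
   ```java
   import java.time.Duration;
   import java.time.Instant;
   import java.util.SortedMap;

   final class HistoryTrimSketch {
       // Scaling-history style eviction: bounded both by entry age and by count.
       static <V> void trim(SortedMap<Instant, V> history, Duration maxAge, int maxCount, Instant now) {
           history.headMap(now.minus(maxAge)).clear(); // age-based eviction
           while (history.size() > maxCount) {
               history.remove(history.firstKey());     // size-based eviction
           }
       }
   }
   ```
   The metrics history, by contrast, would simply be dropped once the current metric window completes.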


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34178][autoscaler] Fix the bug that observed scaling restart time is always great than `stabilization.interval` [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


mxm commented on code in PR #759:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/759#discussion_r1467799834


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/JobAutoScalerImpl.java:
##
@@ -160,14 +160,23 @@ private void runScalingLogic(Context ctx, AutoscalerFlinkMetrics autoscalerMetri
 var collectedMetrics = metricsCollector.updateMetrics(ctx, stateStore);
 var jobTopology = collectedMetrics.getJobTopology();
 
+var now = clock.instant();

Review Comment:
   >I just checked and, unfortunately, our assumption does not seem to hold true in that regard. The loop is not triggered when the job state changes to RUNNING. I tried setting a higher reconciliation interval and it is directly reflected in the recorded durations.
   
   There is no strict guarantee how often we are being called, but in my tests 
I saw 10-20 seconds due to cluster events arriving. In any case, the main issue 
here is not the reconciliation interval but that we are using the current 
instant instead of the job update time. Let's address this as proposed by Rui 
and we should be good to go. This feature might then be robust enough to be 
enabled out of the box.
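   
   A rough sketch of the intended measurement (method and parameter names are assumptions, not the actual autoscaler API):
   
   ```java
   import java.time.Duration;
   import java.time.Instant;

   final class RestartTimeSketch {
       // 'scalingTs' = when the rescale was triggered; 'jobRunningTs' = when the
       // job status last flipped to RUNNING. Both come from stored state rather
       // than from the reconcile loop's wall clock.
       static Duration observedRestartTime(Instant scalingTs, Instant jobRunningTs) {
           // Anchoring on the job update time instead of clock.instant() keeps the
           // measurement independent of how often reconciliation happens to run.
           return Duration.between(scalingTs, jobRunningTs);
       }
   }
   ```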



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34244] Upgrade Confluent Platform Avro Schema Registry [flink]

2024-01-26 Thread via GitHub


flinkbot commented on PR #24202:
URL: https://github.com/apache/flink/pull/24202#issuecomment-1912236603

   
   ## CI report:
   
   * 627b53ac7e8fa8a17102aa77b4e2de30e85a00b4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34229) Duplicate entry in InnerClasses attribute in class file FusionStreamOperator

2024-01-26 Thread xingbe (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811321#comment-17811321
 ] 

xingbe commented on FLINK-34229:


[~FrankZou] Sure, I have uploaded all the debug logs from CompileUtils for the 
failed task. Hope they can help you locate the issue.

> Duplicate entry in InnerClasses attribute in class file FusionStreamOperator
> 
>
> Key: FLINK-34229
> URL: https://issues.apache.org/jira/browse/FLINK-34229
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xingbe
>Priority: Major
> Attachments: image-2024-01-24-17-05-47-883.png, taskmanager_log.txt
>
>
> I noticed a runtime error happening in the 10TB TPC-DS (q35.sql) benchmarks in 
> 1.19; the problem did not happen in 1.18.0. This issue may have been 
> introduced recently. !image-2024-01-24-17-05-47-883.png|width=589,height=279!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34229) Duplicate entry in InnerClasses attribute in class file FusionStreamOperator

2024-01-26 Thread xingbe (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xingbe updated FLINK-34229:
---
Attachment: taskmanager_log.txt

> Duplicate entry in InnerClasses attribute in class file FusionStreamOperator
> 
>
> Key: FLINK-34229
> URL: https://issues.apache.org/jira/browse/FLINK-34229
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xingbe
>Priority: Major
> Attachments: image-2024-01-24-17-05-47-883.png, taskmanager_log.txt
>
>
> I noticed a runtime error happening in the 10TB TPC-DS (q35.sql) benchmarks in 
> 1.19; the problem did not happen in 1.18.0. This issue may have been 
> introduced recently. !image-2024-01-24-17-05-47-883.png|width=589,height=279!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34244) Upgrade Confluent Platform to latest compatible version

2024-01-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34244:
---
Labels: pull-request-available  (was: )

> Upgrade Confluent Platform to latest compatible version
> ---
>
> Key: FLINK-34244
> URL: https://issues.apache.org/jira/browse/FLINK-34244
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Affects Versions: kafka-3.1.0, 1.19.0
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> Flink uses Confluent Platform for its Confluent Avro Schema Registry 
> implementation, and we can update that to the latest version.
> It's also used by the Flink Kafka connector, and we should upgrade it to the 
> latest compatible version of the used Kafka Client (in this case, 7.4.x)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34244] Upgrade Confluent Platform Avro Schema Registry [flink]

2024-01-26 Thread via GitHub


MartijnVisser opened a new pull request, #24202:
URL: https://github.com/apache/flink/pull/24202

   ## What is the purpose of the change
   
   * Decrease tech debt 
   
   ## Brief change log
   
   * Update used Confluent Platform version for Avro Schema Registry format
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): yes
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34234] Apply ShadeOptionalChecker for flink-shaded [flink-shaded]

2024-01-26 Thread via GitHub


XComp commented on code in PR #136:
URL: https://github.com/apache/flink-shaded/pull/136#discussion_r1467771500


##
.github/workflows/ci.yml:
##
@@ -51,6 +52,14 @@ jobs:
 
   - name: Check licensing
 run: |
-  mvn ${MVN_COMMON_OPTIONS} exec:java@check-licensing -N \
+  mvn ${MVN_COMMON_OPTIONS} exec:java@check-license -N \
 -Dexec.args="${{ env.MVN_BUILD_OUTPUT_FILE }} $(pwd) ${{ env.MVN_VALIDATION_DIR }}" \
--Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
\ No newline at end of file
+-Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
+
+  - name: Check shade optional dependencies
+run: |
+mvn ${MVN_COMMON_OPTIONS} dependency:tree -Plicense-check | tee ${{ env.MVN_DEPENDENCY_PLUGIN_OUTPUT_FILE }}

Review Comment:
   btw. the `exec-maven-plugin` allows to set System properties (see 
[docs](https://www.mojohaus.org/exec-maven-plugin/usage.html#pom-configuration-1))
 which could be used to enable profiles implicitly (see 
[docs](https://maven.apache.org/guides/introduction/introduction-to-profiles.html#property)).
 :innocent: 
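   
   For reference, property-based activation looks like this (the property name is made up for illustration; the profile id mirrors the one discussed here, but the activation block is hypothetical):
   
   ```xml
   <profile>
     <id>license-check</id>
     <activation>
       <property>
         <!-- Activated whenever -Denable.license.check=true is passed. -->
         <name>enable.license.check</name>
         <value>true</value>
       </property>
     </activation>
   </profile>
   ```
   With that in place, `mvn dependency:tree -Denable.license.check=true` would run with the profile enabled without listing it via `-P`.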



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34234] Apply ShadeOptionalChecker for flink-shaded [flink-shaded]

2024-01-26 Thread via GitHub


XComp commented on code in PR #136:
URL: https://github.com/apache/flink-shaded/pull/136#discussion_r1467765454


##
.github/workflows/ci.yml:
##
@@ -51,6 +52,14 @@ jobs:
 
   - name: Check licensing
 run: |
-  mvn ${MVN_COMMON_OPTIONS} exec:java@check-licensing -N \
+  mvn ${MVN_COMMON_OPTIONS} exec:java@check-license -N \
 -Dexec.args="${{ env.MVN_BUILD_OUTPUT_FILE }} $(pwd) ${{ env.MVN_VALIDATION_DIR }}" \
--Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
\ No newline at end of file
+-Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
+
+  - name: Check shade optional dependencies
+run: |
+mvn ${MVN_COMMON_OPTIONS} dependency:tree -Plicense-check | tee ${{ env.MVN_DEPENDENCY_PLUGIN_OUTPUT_FILE }}

Review Comment:
   ```
   

Re: [PR] [FLINK-34234] Apply ShadeOptionalChecker for flink-shaded [flink-shaded]

2024-01-26 Thread via GitHub


snuyanzin commented on code in PR #136:
URL: https://github.com/apache/flink-shaded/pull/136#discussion_r1467762581


##
.github/workflows/ci.yml:
##
@@ -51,6 +52,14 @@ jobs:
 
   - name: Check licensing
 run: |
-  mvn ${MVN_COMMON_OPTIONS} exec:java@check-licensing -N \
+  mvn ${MVN_COMMON_OPTIONS} exec:java@check-license -N \
 -Dexec.args="${{ env.MVN_BUILD_OUTPUT_FILE }} $(pwd) ${{ env.MVN_VALIDATION_DIR }}" \
--Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
\ No newline at end of file
+-Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
+
+  - name: Check shade optional dependencies
+run: |
+mvn ${MVN_COMMON_OPTIONS} dependency:tree -Plicense-check | tee ${{ env.MVN_DEPENDENCY_PLUGIN_OUTPUT_FILE }}

Review Comment:
   may be we need to rename the profile itself... :thinking: 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34234] Apply ShadeOptionalChecker for flink-shaded [flink-shaded]

2024-01-26 Thread via GitHub


snuyanzin commented on code in PR #136:
URL: https://github.com/apache/flink-shaded/pull/136#discussion_r1467761466


##
.github/workflows/ci.yml:
##
@@ -51,6 +52,14 @@ jobs:
 
   - name: Check licensing
 run: |
-  mvn ${MVN_COMMON_OPTIONS} exec:java@check-licensing -N \
+  mvn ${MVN_COMMON_OPTIONS} exec:java@check-license -N \
 -Dexec.args="${{ env.MVN_BUILD_OUTPUT_FILE }} $(pwd) ${{ env.MVN_VALIDATION_DIR }}" \
--Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
\ No newline at end of file
+-Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
+
+  - name: Check shade optional dependencies
+run: |
+mvn ${MVN_COMMON_OPTIONS} dependency:tree -Plicense-check | tee ${{ env.MVN_DEPENDENCY_PLUGIN_OUTPUT_FILE }}

Review Comment:
   In fact it is...
   The reason is that the version suffix of several modules, like guava and zookeeper, is added only with this profile (e.g. [1]), and we need to know the module name together with its version suffix while parsing the output of `dependency:tree`.
   
   
https://github.com/apache/flink-shaded/blob/b4f9ed2dc85b3dcbb4bfc3bdb6866505bfae6893/flink-shaded-guava-32/pom.xml#L52-L59
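   
   A hypothetical illustration of that parsing constraint (the coordinate and suffix below are examples, not actual checker output):
   
   ```bash
   # Fail if the tree does not carry the suffixed version the checker expects.
   mvn dependency:tree -Plicense-check | tee /tmp/deps.txt
   grep -q 'org.apache.flink:flink-shaded-guava.*-18\.0' /tmp/deps.txt || {
     echo "suffixed coordinate missing - was the profile active?"; exit 1;
   }
   ```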



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34234] Apply ShadeOptionalChecker for flink-shaded [flink-shaded]

2024-01-26 Thread via GitHub


XComp commented on code in PR #136:
URL: https://github.com/apache/flink-shaded/pull/136#discussion_r1467749847


##
pom.xml:
##
@@ -65,6 +65,7 @@ under the License.
 4.1.100.Final
 2.15.3
 32.1.3-jre
+<flink.markBundledAsOptional>true</flink.markBundledAsOptional>

Review Comment:
   I see, those were different issues. But that still leaves the question open whether we want to enable the more recent shade plugin version for all modules. :thinking: 



##
.github/workflows/ci.yml:
##
@@ -51,6 +52,14 @@ jobs:
 
   - name: Check licensing
 run: |
-  mvn ${MVN_COMMON_OPTIONS} exec:java@check-licensing -N \
+  mvn ${MVN_COMMON_OPTIONS} exec:java@check-license -N \
 -Dexec.args="${{ env.MVN_BUILD_OUTPUT_FILE }} $(pwd) ${{ env.MVN_VALIDATION_DIR }}" \
--Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
\ No newline at end of file
+-Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
+
+  - name: Check shade optional dependencies
+run: |
+mvn ${MVN_COMMON_OPTIONS} dependency:tree -Plicense-check | tee ${{ env.MVN_DEPENDENCY_PLUGIN_OUTPUT_FILE }}

Review Comment:
   Sorry for not being clear on that one. I didn't mean moving it from the `Build` step into `Check shade optional dependencies`. Instead, you could add a new `Create Dependency Tree` step.



##
flink-shaded-zookeeper-parent/flink-shaded-zookeeper-38/pom.xml:
##
@@ -128,6 +131,7 @@ under the License.
 <groupId>org.apache.zookeeper</groupId>
 <artifactId>zookeeper</artifactId>
 <version>${zookeeper.version}</version>
+<optional>${flink.markBundledAsOptional}</optional>
 
 
 <groupId>io.netty</groupId>

Review Comment:
   Can you elaborate a bit here? It doesn't make sense, because dependencies from the `<dependencyManagement>` section would need to be explicitly added to a module pom to make the dependency available, and only then would we have to add the `<optional>` tag?!
   
   Looks like we have 4 locations in `flink-dist/pom.xml` and 
`flink-filesystems/flink-s3-fs-presto/pom.xml` where this also applies. 
:thinking: 



##
.github/workflows/ci.yml:
##
@@ -51,6 +52,14 @@ jobs:
 
   - name: Check licensing
 run: |
-  mvn ${MVN_COMMON_OPTIONS} exec:java@check-licensing -N \
+  mvn ${MVN_COMMON_OPTIONS} exec:java@check-license -N \
 -Dexec.args="${{ env.MVN_BUILD_OUTPUT_FILE }} $(pwd) ${{ env.MVN_VALIDATION_DIR }}" \
--Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
\ No newline at end of file
+-Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties
+
+  - name: Check shade optional dependencies
+run: |
+mvn ${MVN_COMMON_OPTIONS} dependency:tree -Plicense-check | tee 
${{ env.MVN_DEPENDENCY_PLUGIN_OUTPUT_FILE }}

Review Comment:
   ```suggestion
   mvn ${MVN_COMMON_OPTIONS} dependency:tree | tee ${{ 
env.MVN_DEPENDENCY_PLUGIN_OUTPUT_FILE }}
   ```
   The profile is not necessary for the `dependency:tree` IIUC.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-27891][Table] Add ARRAY_APPEND and ARRAY_PREPEND functions [flink]

2024-01-26 Thread via GitHub


snuyanzin commented on PR #19873:
URL: https://github.com/apache/flink/pull/19873#issuecomment-1912192870

   ah.. sorry, completely forgot about it
   yeah, I'm going to fix the conflicts and merge after the build is green


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34193) Remove usage of Flink-Shaded Jackson and Snakeyaml in flink-connector-kafka

2024-01-26 Thread Jinsui Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811307#comment-17811307
 ] 

Jinsui Chen commented on FLINK-34193:
-

Hi, [~martijnvisser]. Thank you for your response. I have one more question: 
JSONKeyValueDeserializationSchema, which depends on a specific version of 
Jackson, is annotated with '@PublicEvolving'. Is it required to remove this 
class for this refactoring?
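
A rough illustration of the refactoring direction (the relocated package prefix below follows flink-shaded's usual pattern and is shown only for contrast, not as the exact current import):

{code:java}
// Before: test code importing Flink's relocated Jackson classes, e.g.
// import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;

// After: declare jackson-databind directly in the connector pom and import it as-is.
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class DirectJacksonExample {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode record = mapper.createObjectNode();
        record.put("key", "k1");
        // Same JSON handling, but without flink-shaded on the classpath.
        System.out.println(mapper.writeValueAsString(record));
    }
}
{code}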

> Remove usage of Flink-Shaded Jackson and Snakeyaml in flink-connector-kafka
> ---
>
> Key: FLINK-34193
> URL: https://issues.apache.org/jira/browse/FLINK-34193
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka
>Reporter: Martijn Visser
>Priority: Blocker
>
> The Flink Kafka connector doesn't have a direct dependency in the POM on 
> flink-shaded, but it still uses the shaded versions of Jackson and SnakeYAML 
> in {{YamlFileMetaDataService.java}} and 
> {{KafkaRecordDeserializationSchemaTest}} 
> Those cause problems when trying to compile the Flink Kafka connector for 
> Flink 1.19, since these dependencies have been updated there. Since 
> connectors shouldn't rely on Flink-Shaded, we should refactor these 
> implementations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33006] add e2e for Flink operator HA [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


gyfora commented on code in PR #756:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/756#discussion_r1467714636


##
e2e-tests/test_flink_operator_ha.sh:
##
@@ -0,0 +1,71 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+###
+
+# This script tests the operator HA:
+# 1. Deploy a new flink deployment and wait for job manager to come up
+# 2. Verify the operator log on existing leader
+# 3. Delete the leader operator pod
+# 4. Verify the new leader is different from the old one
+# 5. Check operator log for the flink deployment in the new leader
+
+SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
+source "${SCRIPT_DIR}/utils.sh"
+
+CLUSTER_ID="flink-example-statemachine"
+APPLICATION_YAML="${SCRIPT_DIR}/data/flinkdep-cr.yaml"
+TIMEOUT=300
+
+on_exit cleanup_and_exit "$APPLICATION_YAML" $TIMEOUT $CLUSTER_ID
+
+retry_times 5 30 "kubectl apply -f $APPLICATION_YAML" || exit 1
+
+wait_for_jobmanager_running $CLUSTER_ID $TIMEOUT
+jm_pod_name=$(get_jm_pod_name $CLUSTER_ID)
+
+wait_for_logs $jm_pod_name "Completed checkpoint [0-9]+ for job" ${TIMEOUT} || 
exit 1
+wait_for_status flinkdep/flink-example-statemachine 
'.status.jobManagerDeploymentStatus' READY ${TIMEOUT} || exit 1
+wait_for_status flinkdep/flink-example-statemachine '.status.jobStatus.state' 
RUNNING ${TIMEOUT} || exit 1
+
+job_id=$(kubectl logs $jm_pod_name -c flink-main-container | grep -E -o 'Job 
[a-z0-9]+ is submitted' | awk '{print $2}')

Review Comment:
   Can we remove the job checks? They're not relevant for operator HA.



##
.github/workflows/ci.yml:
##
@@ -113,12 +114,18 @@ jobs:
 test: test_autoscaler.sh
   - version: v1_15
 test: test_dynamic_config.sh
+  - version: v1_15
+test: test_flink_operator_ha.sh
   - version: v1_16
 test: test_autoscaler.sh
   - version: v1_16
 test: test_dynamic_config.sh
+  - version: v1_16
+test: test_flink_operator_ha.sh
   - version: v1_17
 test: test_dynamic_config.sh
+  - version: v1_17
+test: test_flink_operator_ha.sh

Review Comment:
   We should only test one Flink version, as operator HA is not Flink-version 
dependent.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34200) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811298#comment-17811298
 ] 

Matthias Pohl commented on FLINK-34200:
---

I attached the [debug 
logs|https://issues.apache.org/jira/secure/attachment/13066285/FLINK-34200.failure.log.gz]
 for a failed local test run.

> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34200
> URL: https://issues.apache.org/jira/browse/FLINK-34200
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
> Attachments: FLINK-34200.failure.log.gz
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56601=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=8200]
> {code:java}
> Jan 19 02:31:53 02:31:53.954 [ERROR] Tests run: 32, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 1050 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.AutoRescalingITCase
> Jan 19 02:31:53 02:31:53.954 [ERROR] 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState[backend
>  = rocksdb, buffersPerChannel = 2] -- Time elapsed: 59.10 s <<< FAILURE!
> Jan 19 02:31:53 java.lang.AssertionError: expected:<[(0,8000), (0,32000), 
> (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), (0,1), 
> (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), (1,16000), 
> (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), (0,52000), 
> (0,6), (0,68000), (0,76000), (1,18000), (1,26000), (1,34000), (1,42000), 
> (1,58000), (0,6000), (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), 
> (0,7), (1,4000), (1,2), (1,36000), (1,44000)]> but was:<[(0,8000), 
> (0,32000), (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), 
> (0,1), (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), 
> (1,16000), (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), 
> (0,52000), (0,6), (0,68000), (0,76000), (0,1000), (0,25000), (0,33000), 
> (0,41000), (1,18000), (1,26000), (1,34000), (1,42000), (1,58000), (0,6000), 
> (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), (0,7), (1,4000), 
> (1,2), (1,36000), (1,44000)]>
> Jan 19 02:31:53   at org.junit.Assert.fail(Assert.java:89)
> Jan 19 02:31:53   at org.junit.Assert.failNotEquals(Assert.java:835)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:120)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:146)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingKeyedState(AutoRescalingITCase.java:296)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState(AutoRescalingITCase.java:196)
> Jan 19 02:31:53   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 19 02:31:53   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34200) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-26 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34200:
--
Attachment: FLINK-34200.failure.log.gz

> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34200
> URL: https://issues.apache.org/jira/browse/FLINK-34200
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
> Attachments: FLINK-34200.failure.log.gz
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56601=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=8200]
> {code:java}
> Jan 19 02:31:53 02:31:53.954 [ERROR] Tests run: 32, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 1050 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.AutoRescalingITCase
> Jan 19 02:31:53 02:31:53.954 [ERROR] 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState[backend
>  = rocksdb, buffersPerChannel = 2] -- Time elapsed: 59.10 s <<< FAILURE!
> Jan 19 02:31:53 java.lang.AssertionError: expected:<[(0,8000), (0,32000), 
> (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), (0,1), 
> (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), (1,16000), 
> (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), (0,52000), 
> (0,6), (0,68000), (0,76000), (1,18000), (1,26000), (1,34000), (1,42000), 
> (1,58000), (0,6000), (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), 
> (0,7), (1,4000), (1,2), (1,36000), (1,44000)]> but was:<[(0,8000), 
> (0,32000), (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), 
> (0,1), (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), 
> (1,16000), (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), 
> (0,52000), (0,6), (0,68000), (0,76000), (0,1000), (0,25000), (0,33000), 
> (0,41000), (1,18000), (1,26000), (1,34000), (1,42000), (1,58000), (0,6000), 
> (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), (0,7), (1,4000), 
> (1,2), (1,36000), (1,44000)]>
> Jan 19 02:31:53   at org.junit.Assert.fail(Assert.java:89)
> Jan 19 02:31:53   at org.junit.Assert.failNotEquals(Assert.java:835)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:120)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:146)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingKeyedState(AutoRescalingITCase.java:296)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState(AutoRescalingITCase.java:196)
> Jan 19 02:31:53   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 19 02:31:53   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33365) Missing filter condition in execution plan containing lookup join with mysql jdbc connector

2024-01-26 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811296#comment-17811296
 ] 

Sergey Nuyanzin commented on FLINK-33365:
-

Merged to 3.1.x as 
[6f318a5326b2d7aba554638b186bf4cc4e17511c|https://github.com/apache/flink-connector-jdbc/commit/6f318a5326b2d7aba554638b186bf4cc4e17511c]

> Missing filter condition in execution plan containing lookup join with mysql 
> jdbc connector
> ---
>
> Key: FLINK-33365
> URL: https://issues.apache.org/jira/browse/FLINK-33365
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.18.0, 1.17.1
> Environment: Flink 1.17.1 & Flink 1.18.0 with 
> flink-connector-jdbc-3.1.1-1.17.jar
>Reporter: macdoor615
>Assignee: david radley
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0, jdbc-3.1.2
>
> Attachments: flink-connector-jdbc-3.0.0-1.16.png, 
> flink-connector-jdbc-3.1.1-1.17.png
>
>
> create table in flink with sql-client.sh
> {code:java}
> CREATE TABLE default_catalog.default_database.a (
>   ip string, 
>   proctime as proctime()
> ) 
> WITH (
>   'connector' = 'datagen'
> );{code}
> create table in mysql
> {code:java}
> create table b (
>   ip varchar(20), 
>   type int
> );  {code}
>  
> Flink 1.17.1/ 1.18.0 and *flink-connector-jdbc-3.1.1-1.17.jar*
> execute in sql-client.sh 
> {code:java}
> explain SELECT * FROM default_catalog.default_database.a left join 
> bnpmp_mysql_test.gem_tmp.b FOR SYSTEM_TIME AS OF a.proctime on b.type = 0 and 
> a.ip = b.ip; {code}
> get the execution plan
> {code:java}
> ...
> == Optimized Execution Plan ==
> Calc(select=[ip, PROCTIME_MATERIALIZE(proctime) AS proctime, ip0, type])
> +- LookupJoin(table=[bnpmp_mysql_test.gem_tmp.b], joinType=[LeftOuterJoin], 
> lookup=[ip=ip], select=[ip, proctime, ip, CAST(0 AS INTEGER) AS type, CAST(ip 
> AS VARCHAR(2147483647)) AS ip0])
>    +- Calc(select=[ip, PROCTIME() AS proctime])
>       +- TableSourceScan(table=[[default_catalog, default_database, a]], 
> fields=[ip]){code}
>  
> execute the same SQL in sql-client with Flink 1.17.1/1.18.0 and 
> *flink-connector-jdbc-3.0.0-1.16.jar*
> get the execution plan
> {code:java}
> ...
> == Optimized Execution Plan ==
> Calc(select=[ip, PROCTIME_MATERIALIZE(proctime) AS proctime, ip0, type])
> +- LookupJoin(table=[bnpmp_mysql_test.gem_tmp.b], joinType=[LeftOuterJoin], 
> lookup=[type=0, ip=ip], where=[(type = 0)], select=[ip, proctime, ip, CAST(0 
> AS INTEGER) AS type, CAST(ip AS VARCHAR(2147483647)) AS ip0])
>    +- Calc(select=[ip, PROCTIME() AS proctime])
>       +- TableSourceScan(table=[[default_catalog, default_database, a]], 
> fields=[ip]) {code}
> with flink-connector-jdbc-3.1.1-1.17.jar, the condition is 
> *lookup=[ip=ip]*
> with flink-connector-jdbc-3.0.0-1.16.jar, the condition is 
> *lookup=[type=0, ip=ip], where=[(type = 0)]*
>  
> In our real-world production environment, this leads to incorrect data output
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33365) Missing filter condition in execution plan containing lookup join with mysql jdbc connector

2024-01-26 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33365:

Fix Version/s: jdbc-3.1.2

> Missing filter condition in execution plan containing lookup join with mysql 
> jdbc connector
> ---
>
> Key: FLINK-33365
> URL: https://issues.apache.org/jira/browse/FLINK-33365
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.18.0, 1.17.1
> Environment: Flink 1.17.1 & Flink 1.18.0 with 
> flink-connector-jdbc-3.1.1-1.17.jar
>Reporter: macdoor615
>Assignee: david radley
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0, jdbc-3.1.2
>
> Attachments: flink-connector-jdbc-3.0.0-1.16.png, 
> flink-connector-jdbc-3.1.1-1.17.png
>
>
> create table in flink with sql-client.sh
> {code:java}
> CREATE TABLE default_catalog.default_database.a (
>   ip string, 
>   proctime as proctime()
> ) 
> WITH (
>   'connector' = 'datagen'
> );{code}
> create table in mysql
> {code:java}
> create table b (
>   ip varchar(20), 
>   type int
> );  {code}
>  
> Flink 1.17.1/ 1.18.0 and *flink-connector-jdbc-3.1.1-1.17.jar*
> execute in sql-client.sh 
> {code:java}
> explain SELECT * FROM default_catalog.default_database.a left join 
> bnpmp_mysql_test.gem_tmp.b FOR SYSTEM_TIME AS OF a.proctime on b.type = 0 and 
> a.ip = b.ip; {code}
> get the execution plan
> {code:java}
> ...
> == Optimized Execution Plan ==
> Calc(select=[ip, PROCTIME_MATERIALIZE(proctime) AS proctime, ip0, type])
> +- LookupJoin(table=[bnpmp_mysql_test.gem_tmp.b], joinType=[LeftOuterJoin], 
> lookup=[ip=ip], select=[ip, proctime, ip, CAST(0 AS INTEGER) AS type, CAST(ip 
> AS VARCHAR(2147483647)) AS ip0])
>    +- Calc(select=[ip, PROCTIME() AS proctime])
>       +- TableSourceScan(table=[[default_catalog, default_database, a]], 
> fields=[ip]){code}
>  
> execute the same SQL in sql-client with Flink 1.17.1/1.18.0 and 
> *flink-connector-jdbc-3.0.0-1.16.jar*
> get the execution plan
> {code:java}
> ...
> == Optimized Execution Plan ==
> Calc(select=[ip, PROCTIME_MATERIALIZE(proctime) AS proctime, ip0, type])
> +- LookupJoin(table=[bnpmp_mysql_test.gem_tmp.b], joinType=[LeftOuterJoin], 
> lookup=[type=0, ip=ip], where=[(type = 0)], select=[ip, proctime, ip, CAST(0 
> AS INTEGER) AS type, CAST(ip AS VARCHAR(2147483647)) AS ip0])
>    +- Calc(select=[ip, PROCTIME() AS proctime])
>       +- TableSourceScan(table=[[default_catalog, default_database, a]], 
> fields=[ip]) {code}
> with flink-connector-jdbc-3.1.1-1.17.jar, the condition is 
> *lookup=[ip=ip]*
> with flink-connector-jdbc-3.0.0-1.16.jar, the condition is 
> *lookup=[type=0, ip=ip], where=[(type = 0)]*
>  
> In our real-world production environment, this leads to incorrect data output
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33365] include filters with Lookup joins [flink-connector-jdbc]

2024-01-26 Thread via GitHub


davidradl commented on PR #97:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/97#issuecomment-1912127835

   Thanks @snuyanzin :-)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33365][BP-3.1] include filters with Lookup joins [flink-connector-jdbc]

2024-01-26 Thread via GitHub


snuyanzin commented on PR #96:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/96#issuecomment-1912126016

   superseded by #97 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33365][BP-3.1] include filters with Lookup joins [flink-connector-jdbc]

2024-01-26 Thread via GitHub


snuyanzin closed pull request #96: [FLINK-33365][BP-3.1] include filters with 
Lookup joins 
URL: https://github.com/apache/flink-connector-jdbc/pull/96


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33365] include filters with Lookup joins [flink-connector-jdbc]

2024-01-26 Thread via GitHub


boring-cyborg[bot] commented on PR #97:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/97#issuecomment-1912124003

   Awesome work, congrats on your first merged pull request!
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33365] include filters with Lookup joins [flink-connector-jdbc]

2024-01-26 Thread via GitHub


snuyanzin merged PR #97:
URL: https://github.com/apache/flink-connector-jdbc/pull/97


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33365][BP-3.1] include filters with Lookup joins [flink-connector-jdbc]

2024-01-26 Thread via GitHub


davidradl commented on PR #96:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/96#issuecomment-1912118063

   @snuyanzin I did the same with 
https://github.com/apache/flink-connector-jdbc/pull/97. I cherry-picked and 
fixed up the conflict in one commit. I can close my PR if you are going forward 
with this one, or you could merge mine. Let me know which you would like to do. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Add jakarta.activation required after changes in Flink main repo [flink-connector-jdbc]

2024-01-26 Thread via GitHub


snuyanzin closed pull request #93: [hotfix] Add jakarta.activation required 
after changes in Flink main repo
URL: https://github.com/apache/flink-connector-jdbc/pull/93


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Add jakarta.activation required after changes in Flink main repo [flink-connector-jdbc]

2024-01-26 Thread via GitHub


snuyanzin commented on PR #93:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/93#issuecomment-1912104156

   closing since the issue was fixed upstream


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33365] include filters with Lookup joins [flink-connector-jdbc]

2024-01-26 Thread via GitHub


davidradl commented on PR #97:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/97#issuecomment-1912096605

   @snuyanzin as discussed, here is the PR for the backport.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34234] Apply ShadeOptionalChecker for flink-shaded [flink-shaded]

2024-01-26 Thread via GitHub


snuyanzin commented on code in PR #136:
URL: https://github.com/apache/flink-shaded/pull/136#discussion_r1467672006


##
pom.xml:
##
@@ -341,21 +342,41 @@ under the License.
 
 java
 
+
+
+
org.apache.flink.tools.ci.licensecheck.LicenseChecker
+
+
+
+check-shade
+
+none
+
+java
+
+
+
+
org.apache.flink.tools.ci.optional.ShadeOptionalChecker
+
 
 
 
-
org.apache.flink.tools.ci.licensecheck.LicenseChecker
 true
 
false
 
 
 
 org.apache.flink
 flink-ci-tools
-1.17-SNAPSHOT
+1.18-SNAPSHOT

Review Comment:
   I think so.
   I guess it could also go in its own hotfix commit



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34145) File source connector support dynamic source parallelism inference in batch jobs

2024-01-26 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-34145.
---
Fix Version/s: 1.19.0
   Resolution: Done

master/release-1.19:
11631cb59568df60d40933fb13c8433062ed9290

> File source connector support dynamic source parallelism inference in batch 
> jobs
> 
>
> Key: FLINK-34145
> URL: https://issues.apache.org/jira/browse/FLINK-34145
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.19.0
>Reporter: xingbe
>Assignee: xingbe
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> [FLIP-379|https://cwiki.apache.org/confluence/display/FLINK/FLIP-379%3A+Dynamic+source+parallelism+inference+for+batch+jobs]
>  has introduced support for dynamic source parallelism inference in batch 
> jobs, and we plan to give priority to enabling this feature for the file 
> source connector.
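
For a rough idea of what this enables, here is an unofficial sketch along the lines of the FLIP-379 interface (method and accessor names follow the FLIP and are simplified; the real file source implementation differs):

{code:java}
import org.apache.flink.api.connector.source.DynamicParallelismInference;

// Sketch only: a source reporting its own parallelism at runtime.
public class SketchSource implements DynamicParallelismInference {
    @Override
    public int inferParallelism(Context context) {
        int splitCount = 128; // e.g. obtained by enumerating file splits
        long bytesPerSplit = 64L << 20; // assumed average split size
        // Bounds supplied by the adaptive batch scheduler via the context.
        int upperBound = context.getParallelismInferenceUpperBound();
        long maxBytesPerTask = context.getDataVolumePerTask();
        int bySize = (int) Math.max(1L, splitCount * bytesPerSplit / maxBytesPerTask);
        return Math.min(upperBound, Math.min(splitCount, bySize));
    }
}
{code}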



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34145][connector/filesystem] support dynamic source parallelism inference in batch jobs [flink]

2024-01-26 Thread via GitHub


zhuzhurk closed pull request #24186: [FLINK-34145][connector/filesystem] 
support dynamic source parallelism inference in batch jobs
URL: https://github.com/apache/flink/pull/24186


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34234] Apply ShadeOptionalChecker for flink-shaded [flink-shaded]

2024-01-26 Thread via GitHub


XComp commented on code in PR #136:
URL: https://github.com/apache/flink-shaded/pull/136#discussion_r1467669315


##
pom.xml:
##
@@ -341,21 +342,41 @@ under the License.
 
 java
 
+
+
+
org.apache.flink.tools.ci.licensecheck.LicenseChecker
+
+
+
+check-shade
+
+none
+
+java
+
+
+
+
org.apache.flink.tools.ci.optional.ShadeOptionalChecker
+
 
 
 
-
org.apache.flink.tools.ci.licensecheck.LicenseChecker
 true
 
false
 
 
 
 org.apache.flink
 flink-ci-tools
-1.17-SNAPSHOT
+1.18-SNAPSHOT

Review Comment:
   I'm wondering whether we could go for the released version as well, to have 
clearer reproducibility in case of errors.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-34222) Get minibatch join operator involved

2024-01-26 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34222:
---

Assignee: Shuai Xu

> Get minibatch join operator involved
> 
>
> Key: FLINK-34222
> URL: https://issues.apache.org/jira/browse/FLINK-34222
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: Shuai Xu
>Assignee: Shuai Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Get the minibatch join operator involved, which includes both the plan and the 
> operator. Implement minibatch join end-to-end.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33264) Support source parallelism setting for DataGen connector

2024-01-26 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li resolved FLINK-33264.

Fix Version/s: 1.19.0
   Resolution: Fixed

Implemented via 862a7129d2730b4c70a21826a5b858fc541a4470 (master)

[~Zhanghao Chen] Thanks for the contribution!
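
For reference, usage would look roughly like the sketch below; the `scan.parallelism` option name is an assumption about what this change introduces, so double-check it against the merged docs:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DataGenParallelismExample {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // 'scan.parallelism' pins the source parallelism independently of the
        // job-wide default; 'number-of-rows' makes the source bounded.
        env.executeSql(
                "CREATE TABLE gen (id BIGINT) WITH ("
                        + " 'connector' = 'datagen',"
                        + " 'number-of-rows' = '10',"
                        + " 'scan.parallelism' = '4')");
        env.executeSql("SELECT * FROM gen").print();
    }
}
{code}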

> Support source parallelism setting for DataGen connector
> 
>
> Key: FLINK-33264
> URL: https://issues.apache.org/jira/browse/FLINK-33264
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Parent
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34222][table] End-to-end implementation of minibatch join [flink]

2024-01-26 Thread via GitHub


xishuaidelin commented on PR #24161:
URL: https://github.com/apache/flink/pull/24161#issuecomment-1912079018

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33264][table] Support source parallelism setting for DataGen connector [flink]

2024-01-26 Thread via GitHub


libenchao closed pull request #24133: [FLINK-33264][table] Support source 
parallelism setting for DataGen connector
URL: https://github.com/apache/flink/pull/24133


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34200) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811275#comment-17811275
 ] 

Matthias Pohl edited comment on FLINK-34200 at 1/26/24 1:32 PM:


Another local repeated test run for (rocksdb,buffersPerChannel=2) failed in the 
2nd test run:
The diff was {{(0,23000), (0,31000), [...] (0,19000), (0,35000)}} this time.

I hope that helps. Let me know if you need help with the test setup.


was (Author: mapohl):
Another local repeated test run for (rocksdb,buffersPerChannel=2) failed in the 
2nd test run:
The diff was {{(0,23000), (0,31000), [...] (0,19000), (0,35000)}} this time.

> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34200
> URL: https://issues.apache.org/jira/browse/FLINK-34200
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56601=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=8200]
> {code:java}
> Jan 19 02:31:53 02:31:53.954 [ERROR] Tests run: 32, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 1050 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.AutoRescalingITCase
> Jan 19 02:31:53 02:31:53.954 [ERROR] 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState[backend
>  = rocksdb, buffersPerChannel = 2] -- Time elapsed: 59.10 s <<< FAILURE!
> Jan 19 02:31:53 java.lang.AssertionError: expected:<[(0,8000), (0,32000), 
> (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), (0,1), 
> (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), (1,16000), 
> (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), (0,52000), 
> (0,6), (0,68000), (0,76000), (1,18000), (1,26000), (1,34000), (1,42000), 
> (1,58000), (0,6000), (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), 
> (0,7), (1,4000), (1,2), (1,36000), (1,44000)]> but was:<[(0,8000), 
> (0,32000), (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), 
> (0,1), (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), 
> (1,16000), (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), 
> (0,52000), (0,6), (0,68000), (0,76000), (0,1000), (0,25000), (0,33000), 
> (0,41000), (1,18000), (1,26000), (1,34000), (1,42000), (1,58000), (0,6000), 
> (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), (0,7), (1,4000), 
> (1,2), (1,36000), (1,44000)]>
> Jan 19 02:31:53   at org.junit.Assert.fail(Assert.java:89)
> Jan 19 02:31:53   at org.junit.Assert.failNotEquals(Assert.java:835)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:120)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:146)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingKeyedState(AutoRescalingITCase.java:296)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState(AutoRescalingITCase.java:196)
> Jan 19 02:31:53   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 19 02:31:53   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34200) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811275#comment-17811275
 ] 

Matthias Pohl commented on FLINK-34200:
---

Another local repeated test run for (rocksdb,buffersPerChannel=2) failed in the 
2nd test run:
The diff was {{ (0,23000), (0,31000), [...] (0,19000), (0,35000)}} this time.

> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34200
> URL: https://issues.apache.org/jira/browse/FLINK-34200
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56601=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=8200]
> {code:java}
> Jan 19 02:31:53 02:31:53.954 [ERROR] Tests run: 32, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 1050 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.AutoRescalingITCase
> Jan 19 02:31:53 02:31:53.954 [ERROR] 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState[backend
>  = rocksdb, buffersPerChannel = 2] -- Time elapsed: 59.10 s <<< FAILURE!
> Jan 19 02:31:53 java.lang.AssertionError: expected:<[(0,8000), (0,32000), 
> (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), (0,1), 
> (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), (1,16000), 
> (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), (0,52000), 
> (0,6), (0,68000), (0,76000), (1,18000), (1,26000), (1,34000), (1,42000), 
> (1,58000), (0,6000), (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), 
> (0,7), (1,4000), (1,2), (1,36000), (1,44000)]> but was:<[(0,8000), 
> (0,32000), (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), 
> (0,1), (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), 
> (1,16000), (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), 
> (0,52000), (0,6), (0,68000), (0,76000), (0,1000), (0,25000), (0,33000), 
> (0,41000), (1,18000), (1,26000), (1,34000), (1,42000), (1,58000), (0,6000), 
> (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), (0,7), (1,4000), 
> (1,2), (1,36000), (1,44000)]>
> Jan 19 02:31:53   at org.junit.Assert.fail(Assert.java:89)
> Jan 19 02:31:53   at org.junit.Assert.failNotEquals(Assert.java:835)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:120)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:146)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingKeyedState(AutoRescalingITCase.java:296)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState(AutoRescalingITCase.java:196)
> Jan 19 02:31:53   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 19 02:31:53   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34200) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811275#comment-17811275
 ] 

Matthias Pohl edited comment on FLINK-34200 at 1/26/24 1:30 PM:


Another local repeated test run for (rocksdb,buffersPerChannel=2) failed in the 
2nd test run:
The diff was {{(0,23000), (0,31000), [...] (0,19000), (0,35000)}} this time.


was (Author: mapohl):
Another local repeated test run for (rocksdb,buffersPerChannel=2) failed in the 
2nd test run:
The diff was {{ (0,23000), (0,31000), [...] (0,19000), (0,35000)}} this time.

> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34200
> URL: https://issues.apache.org/jira/browse/FLINK-34200
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56601=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=8200]
> {code:java}
> Jan 19 02:31:53 02:31:53.954 [ERROR] Tests run: 32, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 1050 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.AutoRescalingITCase
> Jan 19 02:31:53 02:31:53.954 [ERROR] 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState[backend
>  = rocksdb, buffersPerChannel = 2] -- Time elapsed: 59.10 s <<< FAILURE!
> Jan 19 02:31:53 java.lang.AssertionError: expected:<[(0,8000), (0,32000), 
> (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), (0,1), 
> (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), (1,16000), 
> (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), (0,52000), 
> (0,6), (0,68000), (0,76000), (1,18000), (1,26000), (1,34000), (1,42000), 
> (1,58000), (0,6000), (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), 
> (0,7), (1,4000), (1,2), (1,36000), (1,44000)]> but was:<[(0,8000), 
> (0,32000), (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), 
> (0,1), (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), 
> (1,16000), (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), 
> (0,52000), (0,6), (0,68000), (0,76000), (0,1000), (0,25000), (0,33000), 
> (0,41000), (1,18000), (1,26000), (1,34000), (1,42000), (1,58000), (0,6000), 
> (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), (0,7), (1,4000), 
> (1,2), (1,36000), (1,44000)]>
> Jan 19 02:31:53   at org.junit.Assert.fail(Assert.java:89)
> Jan 19 02:31:53   at org.junit.Assert.failNotEquals(Assert.java:835)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:120)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:146)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingKeyedState(AutoRescalingITCase.java:296)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState(AutoRescalingITCase.java:196)
> Jan 19 02:31:53   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 19 02:31:53   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34115) TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811273#comment-17811273
 ] 

Matthias Pohl commented on FLINK-34115:
---

{quote}
I think this build does not include 0a1d671d0c7c5912de2116dbf1d1d641cff72b95.
{quote}
Correct. I mentioned that above the link. :)

I added it for the sake of completeness when going through CI failures that 
happened before the merge.

> TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails
> --
>
> Key: FLINK-34115
> URL: https://issues.apache.org/jira/browse/FLINK-34115
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.18.2
>
>
> It failed twice in the same pipeline run:
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=de826397-1924-5900-0034-51895f69d4b7=f311e913-93a2-5a37-acab-4a63e1328f94=11613]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11963]
> {code:java}
>  Jan 14 01:20:01 01:20:01.949 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 29.07 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase
> Jan 14 01:20:01 01:20:01.949 [ERROR] 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate
>  -- Time elapsed: 0.518 s <<< FAILURE!
> Jan 14 01:20:01 org.opentest4j.AssertionFailedError: 
> Jan 14 01:20:01 
> Jan 14 01:20:01 expected: List((true,6,1), (false,6,1), (true,6,1), 
> (true,3,2), (false,6,1), (false,3,2), (true,6,1), (true,5,2), (false,6,1), 
> (false,5,2), (true,8,1), (true,6,2), (false,8,1), (false,6,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01  but was: List((true,3,1), (false,3,1), (true,5,1), 
> (true,3,2), (false,5,1), (false,3,2), (true,8,1), (true,5,2), (false,8,1), 
> (false,5,2), (true,8,1), (true,5,2), (false,8,1), (false,5,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Jan 14 01:20:01   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.checkRank$1(TableAggregateITCase.scala:122)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate(TableAggregateITCase.scala:69)
> Jan 14 01:20:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Jan 14 01:20:01   at 
> scala.collection.convert.Wrappers$IteratorWrapper.forEachRemaining(Wrappers.scala:26)
> Jan 14 01:20:01   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> 

[PR] [FLINK-33365] include filters with Lookup joins [flink-connector-jdbc]

2024-01-26 Thread via GitHub


davidradl opened a new pull request, #97:
URL: https://github.com/apache/flink-connector-jdbc/pull/97

   This closes apache/flink-connector-jdbc#79
   
   Creating as draft while testing


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34200) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811271#comment-17811271
 ] 

Matthias Pohl commented on FLINK-34200:
---

btw. I was able to reproduce the test instability locally after the third run 
(keep in mind that it appears to only fail for backend=rocksdb, 
buffersPerChannel=2):

{code}
java.lang.AssertionError: 
Expected :[(0,8000), (0,32000), (0,48000), (0,72000), (1,78000), (1,3), 
(1,54000), (0,2000), (0,1), (0,5), (0,66000), (0,74000), (0,82000), 
(1,8), (1,0), (1,16000), (1,24000), (1,4), (1,56000), (1,64000), 
(0,12000), (0,28000), (0,52000), (0 ...

Actual   :[(0,8000), (0,32000), (0,48000), (0,72000), (1,78000), (1,3), 
(1,54000), (0,2000), (0,1), (0,5), (0,66000), (0,74000), (0,82000), 
(1,8), (1,0), (1,16000), (1,24000), (1,4), (1,56000), (1,64000), 
(0,12000), (0,28000), (0,52000), (0 ...
{code}
The actual diff is again 4 events more than expected: {{(0,1000), (0,25000), 
(0,33000), (0,41000),}}


> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34200
> URL: https://issues.apache.org/jira/browse/FLINK-34200
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56601=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=8200]
> {code:java}
> Jan 19 02:31:53 02:31:53.954 [ERROR] Tests run: 32, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 1050 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.AutoRescalingITCase
> Jan 19 02:31:53 02:31:53.954 [ERROR] 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState[backend
>  = rocksdb, buffersPerChannel = 2] -- Time elapsed: 59.10 s <<< FAILURE!
> Jan 19 02:31:53 java.lang.AssertionError: expected:<[(0,8000), (0,32000), 
> (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), (0,1), 
> (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), (1,16000), 
> (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), (0,52000), 
> (0,6), (0,68000), (0,76000), (1,18000), (1,26000), (1,34000), (1,42000), 
> (1,58000), (0,6000), (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), 
> (0,7), (1,4000), (1,2), (1,36000), (1,44000)]> but was:<[(0,8000), 
> (0,32000), (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), 
> (0,1), (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), 
> (1,16000), (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), 
> (0,52000), (0,6), (0,68000), (0,76000), (0,1000), (0,25000), (0,33000), 
> (0,41000), (1,18000), (1,26000), (1,34000), (1,42000), (1,58000), (0,6000), 
> (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), (0,7), (1,4000), 
> (1,2), (1,36000), (1,44000)]>
> Jan 19 02:31:53   at org.junit.Assert.fail(Assert.java:89)
> Jan 19 02:31:53   at org.junit.Assert.failNotEquals(Assert.java:835)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:120)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:146)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingKeyedState(AutoRescalingITCase.java:296)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState(AutoRescalingITCase.java:196)
> Jan 19 02:31:53   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 19 02:31:53   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33297) FLIP-366: Support standard YAML for FLINK configuration

2024-01-26 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811270#comment-17811270
 ] 

lincoln lee commented on FLINK-33297:
-

[~JunRuiLi] Is all the work on FLIP-366 done? If so, we can close this Jira.

> FLIP-366: Support standard YAML for FLINK configuration
> ---
>
> Key: FLINK-33297
> URL: https://issues.apache.org/jira/browse/FLINK-33297
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: 2.0-related, pull-request-available
> Fix For: 1.19.0
>
>
> Support standard YAML for FLINK configuration



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34200) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-26 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811268#comment-17811268
 ] 

Matthias Pohl commented on FLINK-34200:
---

Not sure whether that helps, but from the three builds that we have documented 
here, it looks like we have unexpected lines being processed:
{code}
➜  FLINK-34200 diff <(sed 's/), (/\n/g' 56601.expected) <(sed s'/), (/\n/g' 
56601.actual) 
26a27,30
> 0,1000
> 0,25000
> 0,33000
> 0,41000
➜  FLINK-34200 diff <(sed 's/), (/\n/g' 56859.expected) <(sed s'/), (/\n/g' 
56859.actual)
13a14,15
> 0,23000
> 0,31000
38a41,42
> 0,19000
> 0,35000
➜  FLINK-34200 diff <(sed 's/), (/\n/g' 56740.expected) <(sed s'/), (/\n/g' 
56740.actual)
26a27,30
> 0,1000
> 0,25000
> 0,33000
> 0,41000
{code}

> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34200
> URL: https://issues.apache.org/jira/browse/FLINK-34200
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56601=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=8200]
> {code:java}
> Jan 19 02:31:53 02:31:53.954 [ERROR] Tests run: 32, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 1050 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.AutoRescalingITCase
> Jan 19 02:31:53 02:31:53.954 [ERROR] 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState[backend
>  = rocksdb, buffersPerChannel = 2] -- Time elapsed: 59.10 s <<< FAILURE!
> Jan 19 02:31:53 java.lang.AssertionError: expected:<[(0,8000), (0,32000), 
> (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), (0,1), 
> (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), (1,16000), 
> (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), (0,52000), 
> (0,6), (0,68000), (0,76000), (1,18000), (1,26000), (1,34000), (1,42000), 
> (1,58000), (0,6000), (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), 
> (0,7), (1,4000), (1,2), (1,36000), (1,44000)]> but was:<[(0,8000), 
> (0,32000), (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), 
> (0,1), (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), 
> (1,16000), (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), 
> (0,52000), (0,6), (0,68000), (0,76000), (0,1000), (0,25000), (0,33000), 
> (0,41000), (1,18000), (1,26000), (1,34000), (1,42000), (1,58000), (0,6000), 
> (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), (0,7), (1,4000), 
> (1,2), (1,36000), (1,44000)]>
> Jan 19 02:31:53   at org.junit.Assert.fail(Assert.java:89)
> Jan 19 02:31:53   at org.junit.Assert.failNotEquals(Assert.java:835)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:120)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:146)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingKeyedState(AutoRescalingITCase.java:296)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState(AutoRescalingITCase.java:196)
> Jan 19 02:31:53   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 19 02:31:53   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33365][BP-3.1] include filters with Lookup joins [flink-connector-jdbc]

2024-01-26 Thread via GitHub


snuyanzin commented on PR #96:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/96#issuecomment-1912058646

   @davidradl could you please have a look here
   I created a backport of #79 
   It was not a cherry-pick because of some inconsistent changes between the 
branches.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [BP-3.1][FLINK-33365] include filters with Lookup joins [flink-connector-jdbc]

2024-01-26 Thread via GitHub


snuyanzin opened a new pull request, #96:
URL: https://github.com/apache/flink-connector-jdbc/pull/96

   This is a backport of FLINK-33365.
   It also backports FLINK-33883 to simplify the backport of FLINK-33365.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Add Kafka 3.1.0 connector [flink-web]

2024-01-26 Thread via GitHub


MartijnVisser opened a new pull request, #718:
URL: https://github.com/apache/flink-web/pull/718

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34178][autoscaler] Fix the bug that observed scaling restart time is always greater than `stabilization.interval` [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


afedulov commented on code in PR #759:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/759#discussion_r1467614975


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/JobAutoScalerImpl.java:
##
@@ -160,14 +160,23 @@ private void runScalingLogic(Context ctx, 
AutoscalerFlinkMetrics autoscalerMetri
 var collectedMetrics = metricsCollector.updateMetrics(ctx, stateStore);
 var jobTopology = collectedMetrics.getJobTopology();
 
+var now = clock.instant();

Review Comment:
   @mxm I just checked and, unfortunately, our assumption does not seem to hold 
true in that regard. The loop is not triggered when the job state changes to 
`RUNNING`. I tried setting a higher reconciliation interval, and it is directly 
reflected in the recorded durations.
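   To make that skew concrete, here is a minimal, self-contained illustration 
(all timestamps and the interval below are made-up values, not measurements 
from this PR): if the restart duration is only sampled in the reconciliation 
loop, it is inflated by up to one reconciliation interval.
   
   ```java
   import java.time.Duration;
   import java.time.Instant;
   
   public class RestartTimeSkew {
       public static void main(String[] args) {
           // Hypothetical timeline: scaling is triggered, the job is back to
           // RUNNING 40s later, but the next reconciliation only fires at +60s.
           Instant scalingTriggered = Instant.parse("2024-01-26T10:00:00Z");
           Instant backToRunning = Instant.parse("2024-01-26T10:00:40Z");
           Instant nextReconciliation = Instant.parse("2024-01-26T10:01:00Z");
   
           Duration actual = Duration.between(scalingTriggered, backToRunning);
           Duration observed = Duration.between(scalingTriggered, nextReconciliation);
           System.out.println("actual restart time:   " + actual);   // PT40S
           System.out.println("observed restart time: " + observed); // PT1M
       }
   }
   ```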



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34178][autoscaler] Fix the bug that observed scaling restart time is always greater than `stabilization.interval` [flink-kubernetes-operator]

2024-01-26 Thread via GitHub


afedulov commented on code in PR #759:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/759#discussion_r1467622312


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/JobAutoScalerImpl.java:
##
@@ -160,14 +160,23 @@ private void runScalingLogic(Context ctx, 
AutoscalerFlinkMetrics autoscalerMetri
 var collectedMetrics = metricsCollector.updateMetrics(ctx, stateStore);
 var jobTopology = collectedMetrics.getJobTopology();
 
+var now = clock.instant();

Review Comment:
   @1996fanrui using `jobUpdateTs` looks like a good idea to me, and it also 
seems to be safe: `runRescaleLogic` is only executed when the job is in the 
`RUNNING` state, hence the last timestamp fetched by 
`ScalingMetricCollector.getJobUpdateTs()` for a rescaled job should be the 
correct one. We could add an extra filter for `RUNNING` on top of 
`JobStatus.timestamps` to make the implementation more robust against future 
refactorings.
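   A rough sketch of what such a filter could look like (the per-state 
timestamp map mirrors Flink's REST job details, but the type names and 
signatures here are assumptions, not the code from this PR):
   
   ```java
   import java.time.Instant;
   import java.util.Map;
   import java.util.Optional;
   
   class JobUpdateTsSketch {
       enum JobStatus { INITIALIZING, CREATED, RUNNING, RESTARTING }
   
       /** Last time the job entered RUNNING; empty if it never has. */
       static Optional<Instant> getRunningTs(Map<JobStatus, Long> timestamps) {
           return Optional.ofNullable(timestamps.get(JobStatus.RUNNING))
                   .filter(ts -> ts > 0)       // unset states may be reported as 0
                   .map(Instant::ofEpochMilli);
       }
   }
   ```
   
   Filtering explicitly on `RUNNING` also keeps the metric meaningful if more 
states are ever added to the timestamp map.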



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-30400) Stop bundling connector-base in externalized connectors

2024-01-26 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-30400:
---
Issue Type: Technical Debt  (was: Improvement)

> Stop bundling connector-base in externalized connectors
> ---
>
> Key: FLINK-30400
> URL: https://issues.apache.org/jira/browse/FLINK-30400
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Common
>Reporter: Chesnay Schepler
>Assignee: Hang Ruan
>Priority: Major
>  Labels: pull-request-available
> Fix For: elasticsearch-3.1.0, aws-connector-4.2.0, kafka-3.1.0, 
> rabbitmq-3.0.2, kafka-3.0.2
>
>
> Check that none of the externalized connectors bundle connector-base; if so 
> remove the bundling and schedule a new minor release.
> Bundling this module is highly problematic w.r.t. binary compatibility, since 
> bundled classes may rely on internal APIs.
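
One lightweight way to check a released connector for such bundling is to scan 
its jar for connector-base classes. A hedged sketch (the jar path and the 
package prefix `org/apache/flink/connector/base/` are assumptions about the 
layout, not part of the issue):

{code:java}
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.stream.Collectors;

public class BundleCheck {
    public static void main(String[] args) throws Exception {
        // Usage: java BundleCheck /path/to/flink-connector-xyz.jar
        try (JarFile jar = new JarFile(args[0])) {
            List<String> bundled =
                    jar.stream()
                            .map(JarEntry::getName)
                            .filter(n -> n.startsWith("org/apache/flink/connector/base/"))
                            .collect(Collectors.toList());
            bundled.forEach(n -> System.out.println("bundled: " + n));
            System.out.println(
                    bundled.isEmpty()
                            ? "OK: connector-base is not bundled"
                            : bundled.size() + " connector-base classes are bundled");
        }
    }
}
{code}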



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


  1   2   3   >