[jira] [Updated] (FLINK-31160) Support join/cogroup in BroadcastUtils.withBroadcastStream

2023-04-03 Thread Dong Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin updated FLINK-31160:
-
Fix Version/s: ml-2.3.0
   (was: ml-2.2.0)

> Support join/cogroup in BroadcastUtils.withBroadcastStream
> --
>
> Key: FLINK-31160
> URL: https://issues.apache.org/jira/browse/FLINK-31160
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / Machine Learning
>Affects Versions: ml-2.0.0, ml-2.1.0, ml-2.2.0
>Reporter: Zhipeng Zhang
>Assignee: Zhipeng Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: ml-2.3.0
>
>
> Currently BroadcastUtils#withBroadcastStream does not support cogroup/join, 
> since we restrict users to introducing only one extra operator in the 
> specified lambda function.
>  
> To support using join/cogroup in BroadcastUtils#withBroadcastStream, we would 
> like to relax the restriction such that users can introduce more than one 
> extra operator, but only the output operator can access the broadcast 
> variables.
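>
> For illustration, a sketch of what the relaxed restriction would allow 
> (method names follow the Flink ML API; the join itself is illustrative, 
> not the actual patch):
> {code:java}
> DataStream<Long> output =
>         BroadcastUtils.withBroadcastStream(
>                 Arrays.asList(input1, input2),
>                 Collections.singletonMap("bcModel", modelStream),
>                 inputs -> {
>                     DataStream<Long> in1 = (DataStream<Long>) inputs.get(0);
>                     DataStream<Long> in2 = (DataStream<Long>) inputs.get(1);
>                     // join/cogroup expands into more than one operator here;
>                     // only the final (output) operator may read "bcModel".
>                     return in1.join(in2)
>                             .where(x -> x)
>                             .equalTo(x -> x)
>                             .window(EndOfStreamWindows.get())
>                             .apply((JoinFunction<Long, Long, Long>) (a, b) -> a);
>                 });
> {code}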



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30933) Result of join inside iterationBody loses max watermark

2023-04-03 Thread Dong Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin updated FLINK-30933:
-
Fix Version/s: ml-2.3.0
   (was: ml-2.2.0)

> Result of join inside iterationBody loses max watermark
> ---
>
> Key: FLINK-30933
> URL: https://issues.apache.org/jira/browse/FLINK-30933
> Project: Flink
>  Issue Type: Bug
>  Components: Library / Machine Learning
>Affects Versions: ml-2.0.0, ml-2.1.0, ml-2.2.0
>Reporter: Zhipeng Zhang
>Assignee: Zhipeng Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: ml-2.3.0
>
>
> Currently, if we execute a join inside an iteration body, the following 
> program produces empty output, whereas the correct result should be a list 
> containing \{2}.
> {code:java}
> public class Test {
>     public static void main(String[] args) throws Exception {
>         StreamExecutionEnvironment env =
>                 StreamExecutionEnvironment.getExecutionEnvironment();
>         env.setParallelism(1);
>         DataStream<Tuple2<Long, Integer>> input1 =
>                 env.fromElements(Tuple2.of(1L, 1), Tuple2.of(2L, 2));
>         DataStream<Tuple2<Long, Long>> input2 =
>                 env.fromElements(Tuple2.of(1L, 2L), Tuple2.of(2L, 3L));
>         DataStream<Tuple2<Long, Long>> iterationJoin =
>                 Iterations.iterateBoundedStreamsUntilTermination(
>                                 DataStreamList.of(input1),
>                                 ReplayableDataStreamList.replay(input2),
>                                 IterationConfig.newBuilder()
>                                         .setOperatorLifeCycle(
>                                                 IterationConfig.OperatorLifeCycle.PER_ROUND)
>                                         .build(),
>                                 new MyIterationBody())
>                         .get(0);
>         DataStream<Long> left = iterationJoin.map(x -> x.f0);
>         DataStream<Long> right = iterationJoin.map(x -> x.f1);
>         DataStream<Long> result =
>                 left.join(right)
>                         .where(x -> x)
>                         .equalTo(x -> x)
>                         .window(EndOfStreamWindows.get())
>                         .apply((JoinFunction<Long, Long, Long>) (l1, l2) -> l1);
>         List<Long> collectedResult =
>                 IteratorUtils.toList(result.executeAndCollect());
>         List<Long> expectedResult = Arrays.asList(2L);
>         compareResultCollections(expectedResult, collectedResult, Long::compareTo);
>     }
>
>     private static class MyIterationBody implements IterationBody {
>         @Override
>         public IterationBodyResult process(
>                 DataStreamList variableStreams, DataStreamList dataStreams) {
>             DataStream<Tuple2<Long, Integer>> input1 = variableStreams.get(0);
>             DataStream<Tuple2<Long, Long>> input2 = dataStreams.get(0);
>             DataStream<Integer> terminationCriteria =
>                     input1.flatMap(new TerminateOnMaxIter(1));
>             DataStream<Tuple2<Long, Long>> res =
>                     input1.join(input2)
>                             .where(x -> x.f0)
>                             .equalTo(x -> x.f0)
>                             .window(EndOfStreamWindows.get())
>                             .apply(
>                                     new JoinFunction<
>                                             Tuple2<Long, Integer>,
>                                             Tuple2<Long, Long>,
>                                             Tuple2<Long, Long>>() {
>                                         @Override
>                                         public Tuple2<Long, Long> join(
>                                                 Tuple2<Long, Integer> longIntegerTuple2,
>                                                 Tuple2<Long, Long> longLongTuple2)
>                                                 throws Exception {
>                                             return longLongTuple2;
>                                         }
>                                     });
>             return new IterationBodyResult(
>                     DataStreamList.of(input1), DataStreamList.of(res),
>                     terminationCriteria);
>         }
>     }
> }
> {code}
>  
> There are two possible reasons:
>  * The timer in `HeadOperator` is not a daemon thread and does not exit 
> even after the Flink job finishes.
>  * The max watermark from the iteration body is lost.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-30715) FLIP-289: Support Online Inference

2023-04-03 Thread Dong Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin closed FLINK-30715.

Resolution: Fixed

> FLIP-289: Support Online Inference
> --
>
> Key: FLINK-30715
> URL: https://issues.apache.org/jira/browse/FLINK-30715
> Project: Flink
>  Issue Type: New Feature
>  Components: Library / Machine Learning
>Reporter: Jiang Xin
>Assignee: Jiang Xin
>Priority: Major
> Fix For: ml-2.2.0
>
>
> The FLIP design doc can be found at 
> [https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=240881268].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31707) Constant string cannot be used as input arguments of Pandas UDAF

2023-04-03 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-31707:

Fix Version/s: 1.16.2

> Constant string cannot be used as input arguments of Pandas UDAF
> 
>
> Key: FLINK-31707
> URL: https://issues.apache.org/jira/browse/FLINK-31707
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
>
> An exception like the following is thrown when constant strings are used in 
> a Pandas UDAF:
> {code}
> E   raise ValueError("field_type %s is not supported." % 
> field_type)
> E   ValueError: field_type type_name: CHAR
> E   char_info {
> E length: 3
> E   }
> Eis not supported.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31707) Constant string cannot be used as input arguments of Pandas UDAF

2023-04-03 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708188#comment-17708188
 ] 

Dian Fu edited comment on FLINK-31707 at 4/4/23 5:26 AM:
-

Fixed in:
- master via 7c6d8b0134cbcdc60d56b87d39ff2f28c310b1eb
- release-1.17 via 9c5ca0590806932e4e8f9d3f942f0a2a5442fe2d
- release-1.16 via 3291e4d6f9afff40e1e9718e23388610577de741


was (Author: dianfu):
Fixed in:
- master via 7c6d8b0134cbcdc60d56b87d39ff2f28c310b1eb
- release-1.17 via 9c5ca0590806932e4e8f9d3f942f0a2a5442fe2d

> Constant string cannot be used as input arguments of Pandas UDAF
> 
>
> Key: FLINK-31707
> URL: https://issues.apache.org/jira/browse/FLINK-31707
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.17.1
>
>
> An exception like the following is thrown when constant strings are used in 
> a Pandas UDAF:
> {code}
> E   raise ValueError("field_type %s is not supported." % 
> field_type)
> E   ValueError: field_type type_name: CHAR
> E   char_info {
> E length: 3
> E   }
> Eis not supported.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] liuyongvs commented on pull request #22324: [FLINK-31691][table] Add built-in MAP_FROM_ENTRIES function.

2023-04-03 Thread via GitHub


liuyongvs commented on PR #22324:
URL: https://github.com/apache/flink/pull/22324#issuecomment-1495357124

   Hi @snuyanzin, do you have time to review it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-31718) pulsar connector v3.0 branch's CI is not working properly

2023-04-03 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-31718:
--

 Summary: pulsar connector v3.0 branch's CI is not working properly
 Key: FLINK-31718
 URL: https://issues.apache.org/jira/browse/FLINK-31718
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Pulsar
Affects Versions: pulsar-3.0.1
Reporter: Weijie Guo
Assignee: Weijie Guo


After FLINK-30963, we no longer manually set {{flink_url}}, but it is still 
required by the Pulsar connector's own {{ci.yml}}, which causes CI to fail. 
The root of the problem is that the v3.0 branch does not use the {{ci.yml}} 
from {{flink-connector-shared-utils}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-pulsar] reswqa opened a new pull request, #39: [hotfix][Build] bump flink version to 1.16.1 for ci workflows.

2023-04-03 Thread via GitHub


reswqa opened a new pull request, #39:
URL: https://github.com/apache/flink-connector-pulsar/pull/39

   ## Purpose of the change
   
   *1.16.0 is no longer available on `dist.apache.org` after Flink 1.16.1 was released.*
   
   ## Brief change log
   
   - *Bump the Flink version to 1.16.1 for the CI workflows.*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Significant changes
   
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
   - If yes, how is this documented? (not applicable / docs / JavaDocs / 
not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-30486) Make the config documentation generator available to connector repos

2023-04-03 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-30486.
--
Resolution: Duplicate

> Make the config documentation generator available to connector repos
> 
>
> Key: FLINK-30486
> URL: https://issues.apache.org/jira/browse/FLINK-30486
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Common, Connectors / Pulsar, Documentation
>Reporter: Martijn Visser
>Priority: Major
>
> Most connectors can be externalized without any issues. This becomes 
> problematic when connectors provide configuration options, like Pulsar does. 
> As discussed in 
> https://github.com/apache/flink/pull/21501#discussion_r1046979593 we need to 
> make the config documentation generator available for connector repositories. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31545) FlinkConnection creates and manages statements

2023-04-03 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li closed FLINK-31545.
--
Fix Version/s: 1.18.0
   Resolution: Fixed

Fixed in 
https://github.com/apache/flink/commit/8c44d58c4c4aeb36c91d8b27f4128891970dc47d 

[~zjureel] Thanks for your contribution!

> FlinkConnection creates and manages statements
> --
>
> Key: FLINK-31545
> URL: https://issues.apache.org/jira/browse/FLINK-31545
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / JDBC
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> FlinkConnection creates an Executor for statements and manages the 
> statements it creates
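>
> A hedged usage sketch (the driver class and URL scheme are assumptions for 
> illustration): the connection creates an executor lazily and closes every 
> statement it handed out when the connection itself is closed.
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
>
> public class JdbcDriverSketch {
>     public static void main(String[] args) throws Exception {
>         try (Connection conn =
>                         DriverManager.getConnection("jdbc:flink://localhost:8083");
>                 Statement stmt = conn.createStatement();
>                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
>             while (rs.next()) {
>                 // FlinkConnection tracks stmt and closes it on conn.close().
>                 System.out.println(rs.getInt(1));
>             }
>         }
>     }
> }
> {code}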



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] libenchao closed pull request #22289: [FLINK-31545][jdbc-driver] Create executor in flink connection

2023-04-03 Thread via GitHub


libenchao closed pull request #22289: [FLINK-31545][jdbc-driver] Create 
executor in flink connection
URL: https://github.com/apache/flink/pull/22289


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-31704) Pulsar docs should be pulled from dedicated branch

2023-04-03 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708194#comment-17708194
 ] 

Weijie Guo commented on FLINK-31704:


Thanks [~dannycranmer], fair enough. I will create the 3.0.0-docs branch for 
the Pulsar connector and delete it once the next version is released.

> Pulsar docs should be pulled from dedicated branch
> --
>
> Key: FLINK-31704
> URL: https://issues.apache.org/jira/browse/FLINK-31704
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Pulsar, Documentation
>Reporter: Danny Cranmer
>Assignee: Weijie Guo
>Priority: Major
>
> Pulsar docs are pulled from the {{main}} 
> [branch|https://github.com/apache/flink/blob/release-1.17/docs/setup_docs.sh#L49].
>  This is dangerous for final versions since we may include features in the 
> docs that are not supported. Update Pulsar to pull from a dedicated branch or 
> tag.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31704) Pulsar docs should be pulled from dedicated branch

2023-04-03 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo reassigned FLINK-31704:
--

Assignee: Weijie Guo

> Pulsar docs should be pulled from dedicated branch
> --
>
> Key: FLINK-31704
> URL: https://issues.apache.org/jira/browse/FLINK-31704
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Pulsar, Documentation
>Reporter: Danny Cranmer
>Assignee: Weijie Guo
>Priority: Major
>
> Pulsar docs are pulled from the {{main}} 
> [branch|https://github.com/apache/flink/blob/release-1.17/docs/setup_docs.sh#L49].
>  This is dangerous for final versions since we may include features in the 
> docs that are not supported. Update Pulsar to pull from a dedicated branch or 
> tag.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-pulsar] reswqa opened a new pull request, #38: [hotfix] Fix connector artifact shortcode for document.

2023-04-03 Thread via GitHub


reswqa opened a new pull request, #38:
URL: https://github.com/apache/flink-connector-pulsar/pull/38

   ## Purpose of the change
   
   *Fix connector artifact shortcode for v3.0 branch document.*
   
   ## Brief change log
   
   - Using `connector_artifact` instead of `artifact`.
   
   ## Verifying this change
   
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Significant changes
   
   *(Please check any boxes [x] if the answer is "yes". You can first publish 
the PR and check them afterwards, for
   convenience.)*
   
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
   - If yes, how is this documented? (not applicable / docs / JavaDocs / 
not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27204) FileSystemJobResultStore should operator on the ioExecutor

2023-04-03 Thread Wencong Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708193#comment-17708193
 ] 

Wencong Liu commented on FLINK-27204:
-

OK, I will make the changes according to your suggestions.

> FileSystemJobResultStore should operator on the ioExecutor
> --
>
> Key: FLINK-27204
> URL: https://issues.apache.org/jira/browse/FLINK-27204
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.16.0
>Reporter: Matthias Pohl
>Assignee: Wencong Liu
>Priority: Major
>
> The {{JobResultStore}} interface is currently synchronous. For the 
> {{FileSystemJobResultStore}} this means that (possibly) IO-heavy operations 
> have to be explicitly moved to the ioExecutor within the Dispatcher.
> Instead, we could change the {{JobResultStore}} interface so that it 
> returns {{CompletableFuture}} instances. That would enable us to run 
> the {{FileSystemJobResultStore}} operations in the ioExecutor, which would be 
> set when initializing the {{FileSystemJobResultStore}}. This would move 
> the responsibility of deciding where to run the operation from the 
> {{Dispatcher}} into the {{JobResultStore}}.
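>
> A minimal sketch of that direction, with simplified stand-in types (the 
> real interface and method names may differ):
> {code:java}
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.Executor;
>
> record JobId(String value) {}
>
> record JobResultEntry(JobId jobId) {}
>
> interface AsyncJobResultStore {
>     CompletableFuture<Void> createDirtyResultAsync(JobResultEntry entry);
>
>     CompletableFuture<Boolean> hasJobResultEntryAsync(JobId jobId);
> }
>
> final class FsJobResultStore implements AsyncJobResultStore {
>     private final Executor ioExecutor;
>
>     FsJobResultStore(Executor ioExecutor) {
>         this.ioExecutor = ioExecutor;
>     }
>
>     @Override
>     public CompletableFuture<Void> createDirtyResultAsync(JobResultEntry e) {
>         // The store, not the Dispatcher, decides where the IO runs.
>         return CompletableFuture.runAsync(() -> writeEntry(e), ioExecutor);
>     }
>
>     @Override
>     public CompletableFuture<Boolean> hasJobResultEntryAsync(JobId id) {
>         return CompletableFuture.supplyAsync(() -> entryExists(id), ioExecutor);
>     }
>
>     private void writeEntry(JobResultEntry e) { /* IO-heavy file write */ }
>
>     private boolean entryExists(JobId id) { return false; /* stub */ }
> }
> {code}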



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31293) Request memory segment from LocalBufferPool may hanging forever.

2023-04-03 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708190#comment-17708190
 ] 

Weijie Guo edited comment on FLINK-31293 at 4/4/23 2:37 AM:


master(1.18) via fb6caee13710348a9b53284c2cabbdb2e7aa9739.
release-1.17 via 6a476bee5e452d1f172173ec018939c8a154886c.
release-1.16 via 9582727387d368d1b9e358aedb55c3f2eaae4371.



was (Author: weijie guo):
master(1.18) via fb6caee13710348a9b53284c2cabbdb2e7aa9739.
release-1.16 via
release-1.17 via 

> Request memory segment from LocalBufferPool may hanging forever.
> 
>
> Key: FLINK-31293
> URL: https://issues.apache.org/jira/browse/FLINK-31293
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
> Attachments: image-2023-03-02-12-23-50-572.png, 
> image-2023-03-02-12-28-48-437.png, image-2023-03-02-12-29-03-003.png
>
>
> In our TPC-DS tests, we found that under fierce competition for network 
> memory, some tasks may hang forever.
> From the thread dump information, we can see that the task is waiting for the 
> {{LocalBufferPool}} to become available. It is strange that other tasks have 
> already finished and released their network memory. Undoubtedly, this is 
> unexpected behavior, which implies that there must be a bug in the 
> {{LocalBufferPool}} or {{NetworkBufferPool}}.
> !image-2023-03-02-12-23-50-572.png|width=650,height=153!
> By dumping the heap memory, we found a strange phenomenon: there are 
> available buffers in the {{LocalBufferPool}}, but it is considered 
> unavailable. Another thing to note is that it holds an overdraft buffer.
> !image-2023-03-02-12-28-48-437.png|width=520,height=200!
> !image-2023-03-02-12-29-03-003.png|width=438,height=84!
> TL;DR: This problem is caused by a multi-thread race related to the 
> introduction of the overdraft buffer.
> Suppose we have two threads, called A and B. For simplicity, 
> {{LocalBufferPool}} is called {{LocalPool}} and {{NetworkBufferPool}} is 
> called {{GlobalPool}}.
> Thread A continuously requests buffers in a blocking fashion from the 
> {{LocalPool}}. Thread B continuously returns buffers to the {{GlobalPool}}.
>  # Thread A takes the last available buffer of the {{LocalPool}}, but the 
> {{GlobalPool}} does not have a buffer at this time, so a callback function 
> is registered with the {{GlobalPool}}.
>  # Thread B returns one buffer to the {{GlobalPool}}, but has not yet 
> triggered the callback.
>  # Thread A continues to request buffers. Because the 
> {{availableMemorySegments}} of the {{LocalPool}} is empty, it requests an 
> overdraft buffer instead. Since there is already a buffer in the 
> {{GlobalPool}}, it successfully gets one.
>  # Thread B triggers the callback. Since there is no buffer in the 
> {{GlobalPool}} now, the callback is re-registered.
>  # Thread A continues to request buffers. Because there is no buffer in the 
> {{GlobalPool}}, it blocks on {{CompletableFuture#get}}.
>  # Thread B returns another buffer and triggers the recently registered 
> callback. As a result, the {{LocalPool}} puts the buffer into 
> {{availableMemorySegments}}. Unfortunately, the current logic of the 
> {{shouldBeAvailable}} method is: if there is an overdraft buffer, the 
> {{LocalPool}} is considered unavailable.
>  # Thread A keeps blocking forever.
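>
> A toy model of the faulty availability check described in step 6 (field 
> names mirror the description above, not the real LocalBufferPool):
> {code:java}
> import java.util.ArrayDeque;
> import java.util.Deque;
>
> public class ToyLocalPool {
>     private final Deque<Object> availableMemorySegments = new ArrayDeque<>();
>     private int requestedOverdraftBuffers = 0;
>
>     // Buggy logic per the description: a held overdraft buffer makes the
>     // pool look unavailable even though a segment is actually available,
>     // so the blocked requester (thread A) is never woken up.
>     boolean shouldBeAvailable() {
>         return !availableMemorySegments.isEmpty()
>                 && requestedOverdraftBuffers == 0;
>     }
>
>     public static void main(String[] args) {
>         ToyLocalPool pool = new ToyLocalPool();
>         pool.availableMemorySegments.add(new Object()); // step 6: buffer returned
>         pool.requestedOverdraftBuffers = 1;             // step 3: overdraft held
>         System.out.println(pool.shouldBeAvailable());   // false -> blocked forever
>     }
> }
> {code}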



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31293) Request memory segment from LocalBufferPool may hanging forever.

2023-04-03 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-31293.
--
Fix Version/s: 1.16.2
   1.18.0
   1.17.1
   Resolution: Fixed

master(1.18) via fb6caee13710348a9b53284c2cabbdb2e7aa9739.
release-1.17 via 6a476bee5e452d1f172173ec018939c8a154886c.
release-1.16 via 9582727387d368d1b9e358aedb55c3f2eaae4371.

> Request memory segment from LocalBufferPool may hanging forever.
> 
>
> Key: FLINK-31293
> URL: https://issues.apache.org/jira/browse/FLINK-31293
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.17.0, 1.16.1
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
> Attachments: image-2023-03-02-12-23-50-572.png, 
> image-2023-03-02-12-28-48-437.png, image-2023-03-02-12-29-03-003.png
>
>
> In our TPC-DS tests, we found that under fierce competition for network 
> memory, some tasks may hang forever.
> From the thread dump information, we can see that the task is waiting for the 
> {{LocalBufferPool}} to become available. It is strange that other tasks have 
> already finished and released their network memory. Undoubtedly, this is 
> unexpected behavior, which implies that there must be a bug in the 
> {{LocalBufferPool}} or {{NetworkBufferPool}}.
> !image-2023-03-02-12-23-50-572.png|width=650,height=153!
> By dumping the heap memory, we found a strange phenomenon: there are 
> available buffers in the {{LocalBufferPool}}, but it is considered 
> unavailable. Another thing to note is that it holds an overdraft buffer.
> !image-2023-03-02-12-28-48-437.png|width=520,height=200!
> !image-2023-03-02-12-29-03-003.png|width=438,height=84!
> TL;DR: This problem is caused by a multi-thread race related to the 
> introduction of the overdraft buffer.
> Suppose we have two threads, called A and B. For simplicity, 
> {{LocalBufferPool}} is called {{LocalPool}} and {{NetworkBufferPool}} is 
> called {{GlobalPool}}.
> Thread A continuously requests buffers in a blocking fashion from the 
> {{LocalPool}}. Thread B continuously returns buffers to the {{GlobalPool}}.
>  # Thread A takes the last available buffer of the {{LocalPool}}, but the 
> {{GlobalPool}} does not have a buffer at this time, so a callback function 
> is registered with the {{GlobalPool}}.
>  # Thread B returns one buffer to the {{GlobalPool}}, but has not yet 
> triggered the callback.
>  # Thread A continues to request buffers. Because the 
> {{availableMemorySegments}} of the {{LocalPool}} is empty, it requests an 
> overdraft buffer instead. Since there is already a buffer in the 
> {{GlobalPool}}, it successfully gets one.
>  # Thread B triggers the callback. Since there is no buffer in the 
> {{GlobalPool}} now, the callback is re-registered.
>  # Thread A continues to request buffers. Because there is no buffer in the 
> {{GlobalPool}}, it blocks on {{CompletableFuture#get}}.
>  # Thread B returns another buffer and triggers the recently registered 
> callback. As a result, the {{LocalPool}} puts the buffer into 
> {{availableMemorySegments}}. Unfortunately, the current logic of the 
> {{shouldBeAvailable}} method is: if there is an overdraft buffer, the 
> {{LocalPool}} is considered unavailable.
>  # Thread A keeps blocking forever.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31293) Request memory segment from LocalBufferPool may hanging forever.

2023-04-03 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-31293:
---
Affects Version/s: 1.18.0

> Request memory segment from LocalBufferPool may hanging forever.
> 
>
> Key: FLINK-31293
> URL: https://issues.apache.org/jira/browse/FLINK-31293
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
> Attachments: image-2023-03-02-12-23-50-572.png, 
> image-2023-03-02-12-28-48-437.png, image-2023-03-02-12-29-03-003.png
>
>
> In our TPC-DS tests, we found that under fierce competition for network 
> memory, some tasks may hang forever.
> From the thread dump information, we can see that the task is waiting for the 
> {{LocalBufferPool}} to become available. It is strange that other tasks have 
> already finished and released their network memory. Undoubtedly, this is 
> unexpected behavior, which implies that there must be a bug in the 
> {{LocalBufferPool}} or {{NetworkBufferPool}}.
> !image-2023-03-02-12-23-50-572.png|width=650,height=153!
> By dumping the heap memory, we found a strange phenomenon: there are 
> available buffers in the {{LocalBufferPool}}, but it is considered 
> unavailable. Another thing to note is that it holds an overdraft buffer.
> !image-2023-03-02-12-28-48-437.png|width=520,height=200!
> !image-2023-03-02-12-29-03-003.png|width=438,height=84!
> TL;DR: This problem is caused by a multi-thread race related to the 
> introduction of the overdraft buffer.
> Suppose we have two threads, called A and B. For simplicity, 
> {{LocalBufferPool}} is called {{LocalPool}} and {{NetworkBufferPool}} is 
> called {{GlobalPool}}.
> Thread A continuously requests buffers in a blocking fashion from the 
> {{LocalPool}}. Thread B continuously returns buffers to the {{GlobalPool}}.
>  # Thread A takes the last available buffer of the {{LocalPool}}, but the 
> {{GlobalPool}} does not have a buffer at this time, so a callback function 
> is registered with the {{GlobalPool}}.
>  # Thread B returns one buffer to the {{GlobalPool}}, but has not yet 
> triggered the callback.
>  # Thread A continues to request buffers. Because the 
> {{availableMemorySegments}} of the {{LocalPool}} is empty, it requests an 
> overdraft buffer instead. Since there is already a buffer in the 
> {{GlobalPool}}, it successfully gets one.
>  # Thread B triggers the callback. Since there is no buffer in the 
> {{GlobalPool}} now, the callback is re-registered.
>  # Thread A continues to request buffers. Because there is no buffer in the 
> {{GlobalPool}}, it blocks on {{CompletableFuture#get}}.
>  # Thread B returns another buffer and triggers the recently registered 
> callback. As a result, the {{LocalPool}} puts the buffer into 
> {{availableMemorySegments}}. Unfortunately, the current logic of the 
> {{shouldBeAvailable}} method is: if there is an overdraft buffer, the 
> {{LocalPool}} is considered unavailable.
>  # Thread A keeps blocking forever.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31710) Remove and rely on the curator dependency that's provided by flink

2023-04-03 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708191#comment-17708191
 ] 

Ran Tao commented on FLINK-31710:
-

Got it. Thanks for your explanations.

> Remove  and rely on the curator dependency that's provided 
> by flink
> -
>
> Key: FLINK-31710
> URL: https://issues.apache.org/jira/browse/FLINK-31710
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Matthias Pohl
>Assignee: Ran Tao
>Priority: Major
>
> Currently, we're relying on a dedicated curator dependency in tests to use 
> the {{TestingZooKeeperServer}} (see [Flink's parent 
> pom|https://github.com/apache/flink/blob/97cff0768d05e4a7d0217ddc92fd9ea3c7fae2c2/pom.xml#L143]).
>  Besides that, we're using {{flink-shaded}} to provide the zookeeper and 
> curator dependency that is used in Flink's production code.
> The flaw of that approach is that we have to maintain two curator versions. 
> This Jira issue is about investigating whether we could just remove the 
> curator test dependency and rely on the {{flink-shaded}} curator sources.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] reswqa closed pull request #22084: [FLINK-31293][runtime] LocalBufferPool request overdraft buffer only when no available buffer and pool size is reached

2023-04-03 Thread via GitHub


reswqa closed pull request #22084: [FLINK-31293][runtime] LocalBufferPool 
request overdraft buffer only when no available buffer and pool size is reached
URL: https://github.com/apache/flink/pull/22084


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] reswqa merged pull request #22339: [BP-1.17][FLINK-31293] LocalBufferPool request overdraft buffer only when no available buffer and pool size is reached.

2023-04-03 Thread via GitHub


reswqa merged PR #22339:
URL: https://github.com/apache/flink/pull/22339


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] reswqa merged pull request #22340: [BP-1.16][FLINK-31293] LocalBufferPool request overdraft buffer only when no available buffer and pool size is reached.

2023-04-03 Thread via GitHub


reswqa merged PR #22340:
URL: https://github.com/apache/flink/pull/22340


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-31707) Constant string cannot be used as input arguments of Pandas UDAF

2023-04-03 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-31707.
---
Fix Version/s: 1.18.0
   1.17.1
   Resolution: Fixed

Fixed in:
- master via 7c6d8b0134cbcdc60d56b87d39ff2f28c310b1eb
- release-1.17 via 9c5ca0590806932e4e8f9d3f942f0a2a5442fe2d

> Constant string cannot be used as input arguments of Pandas UDAF
> 
>
> Key: FLINK-31707
> URL: https://issues.apache.org/jira/browse/FLINK-31707
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.17.1
>
>
> An exception like the following is thrown when constant strings are used in 
> a Pandas UDAF:
> {code}
> E   raise ValueError("field_type %s is not supported." % 
> field_type)
> E   ValueError: field_type type_name: CHAR
> E   char_info {
> E length: 3
> E   }
> Eis not supported.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] dianfu closed pull request #22332: [FLINK-31707][python] Fix Pandas UDAF to support to accept constant s…

2023-04-03 Thread via GitHub


dianfu closed pull request #22332: [FLINK-31707][python] Fix Pandas UDAF to 
support to accept constant s…
URL: https://github.com/apache/flink/pull/22332


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] luoyuxia closed pull request #22314: Fixed inaccurancy in Chinese doc of Execution Mode

2023-04-03 Thread via GitHub


luoyuxia closed pull request #22314: Fixed inaccurancy in Chinese doc of 
Execution Mode
URL: https://github.com/apache/flink/pull/22314


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] luoyuxia merged pull request #22326: [docs-zh] Fixed the inaccurancy in Chinese doc of ExecutionMode

2023-04-03 Thread via GitHub


luoyuxia merged PR #22326:
URL: https://github.com/apache/flink/pull/22326


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-04-03 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708187#comment-17708187
 ] 

luoyuxia edited comment on FLINK-31689 at 4/4/23 1:48 AM:
--

It's 
[here|https://github.com/apache/flink/blob/0915c9850d861165e283acc0f60545cd836f0567/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/stream/compact/CompactOperator.java#L114].

For this code line:
{code:java}
this.expiredFiles.putAll(this.expiredFilesState.get().iterator().next()); {code}
we can check whether this.expiredFilesState.get().iterator().hasNext(), and 
only then put.

You can modify this code line, build the file connector, and then try it 
again to see whether it works.
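
A self-contained toy illustration of the suggested guard (simplified 
stand-in types; the real state is a ListState read in 
CompactOperator#initializeState):
{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class CompactStateRestoreSketch {
    public static void main(String[] args) {
        // Empty restored state, as a new subtask sees it after rescaling.
        List<Map<Long, List<String>>> restoredState = new ArrayList<>();
        Map<Long, List<String>> expiredFiles = new HashMap<>();

        Iterator<Map<Long, List<String>>> it = restoredState.iterator();
        if (it.hasNext()) { // the missing check
            expiredFiles.putAll(it.next());
        }
        // Prints {} instead of throwing java.util.NoSuchElementException.
        System.out.println(expiredFiles);
    }
}
{code}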


was (Author: luoyuxia):
It's 
[here|https://github.com/apache/flink/blob/0915c9850d861165e283acc0f60545cd836f0567/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/stream/compact/CompactOperator.java#L114].

For this code line:
{code:java}
this.expiredFiles.putAll(this.expiredFilesState.get().iterator().next()); {code}
we can check whether this.expiredFilesState.get().iterator().hasNext(), and 
only then put.

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table 
> SQL. I have not tested with the DataStream API, though. You may refer to the 
> error below:
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproduction code here, but you may follow my 
> pseudo code in the attachment and the reproduction steps below:
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-04-03 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708187#comment-17708187
 ] 

luoyuxia commented on FLINK-31689:
--

It's 
[here|https://github.com/apache/flink/blob/0915c9850d861165e283acc0f60545cd836f0567/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/stream/compact/CompactOperator.java#L114].

For this code line:
{code:java}
this.expiredFiles.putAll(this.expiredFilesState.get().iterator().next()); {code}
we can check whether this.expiredFilesState.get().iterator().hasNext(), and 
only then put.

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the Filesystem sink with Table 
> SQL. I have not tested with the DataStream API, though. You may refer to the 
> error below:
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproduction code here, but you may follow my 
> pseudo code in the attachment and the reproduction steps below:
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> {color:#172b4d}6. Cancel job, change parallelism to 2, and resume job from 
> savepoint{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-pulsar] syhily commented on pull request #16: [BK-3.0][FLINK-30552][Connector/Pulsar] drop batch message size assertion, better set the cursor position.

2023-04-03 Thread via GitHub


syhily commented on PR #16:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/16#issuecomment-1495219182

   @MartijnVisser we may need a new 4.0 release.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31695) Calling bin/flink stop when flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in lib directory throws NoSuchMethodError

2023-04-03 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-31695:

Summary: Calling bin/flink stop when 
flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in lib directory throws 
NoSuchMethodError  (was: Calling bin/flink stop when 
flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in lib directory thwos 
NoSuchMethodError)

> Calling bin/flink stop when flink-shaded-hadoop-2-uber-2.8.3-10.0.jar is in 
> lib directory throws NoSuchMethodError
> --
>
> Key: FLINK-31695
> URL: https://issues.apache.org/jira/browse/FLINK-31695
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.17.0, 1.16.1, 1.15.4
>Reporter: Caizhi Weng
>Priority: Major
>
> To reproduce this bug, follow these steps:
> 1. Download 
> [flink-shaded-hadoop-2-uber-2.8.3-10.0.jar|https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop-2-uber/2.8.3-10.0]
>  and put it in the {{lib}} directory.
> 2. Run {{bin/flink stop}}
> The exception stack is
> {code}
> java.lang.NoSuchMethodError: 
> org.apache.commons.cli.CommandLine.hasOption(Lorg/apache/commons/cli/Option;)Z
>   at org.apache.flink.client.cli.StopOptions.<init>(StopOptions.java:53)
>   at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:539)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1102)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1165)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1165)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-pulsar] syhily commented on pull request #37: [FLINK-31676][Connector/Pulsar] Replace Shaded Guava from Flink with Shaded Guava from Pulsar

2023-04-03 Thread via GitHub


syhily commented on PR #37:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/37#issuecomment-1495218308

   `flink-shaded` is a transitive dependency. We can't prevent its use, but we 
can ask contributors to use Pulsar's shaded Guava when reviewing PRs.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] elphastori commented on a diff in pull request #22308: [FLINK-31518][Runtime / REST] Fix StandaloneHaServices#getClusterRestEndpointLeaderRetreiver to return correct rest port

2023-04-03 Thread via GitHub


elphastori commented on code in PR #22308:
URL: https://github.com/apache/flink/pull/22308#discussion_r1156594224


##
flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/nonha/standalone/StandaloneHaServices.java:
##
@@ -134,9 +136,13 @@ public LeaderElectionService getJobManagerLeaderElectionService(JobID jobID) {
     }
 
     @Override
-    public LeaderRetrievalService getClusterRestEndpointLeaderRetriever() {
+    public LeaderRetrievalService getClusterRestEndpointLeaderRetriever()
+            throws UnknownHostException {
         synchronized (lock) {
             checkNotShutdown();
+            String clusterRestEndpointAddress =
+                    HighAvailabilityServicesUtils.getWebMonitorAddress(
+                            configuration, AddressResolution.NO_ADDRESS_RESOLUTION);

Review Comment:
   Why have you moved the call to `getWebMonitorAddress` inside this method? Is 
it to have it after `checkNotShutdown()` or to have it in a synchronized block?
   
   Having the call inside the lock may result in reduced performance, and 
there may be no need for thread safety since it's only a read operation.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #561: [FLINK-30609] Add ephemeral storage to CRD

2023-04-03 Thread via GitHub


gyfora commented on code in PR #561:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/561#discussion_r115659


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java:
##
@@ -409,15 +419,44 @@ private static void setPodTemplate(
             effectiveConfig.setString(
                     podConfigOption,
                     createTempFile(
-                            mergePodTemplates(
-                                    basicPod,
-                                    appendPod,
-                                    effectiveConfig.get(
-                                            KubernetesOperatorConfigOptions
-                                                    .POD_TEMPLATE_MERGE_BY_NAME))));
+                            applyResourceToPodTemplate(
+                                    mergePodTemplates(
+                                            basicPod,
+                                            appendPod,
+                                            effectiveConfig.get(
+                                                    KubernetesOperatorConfigOptions
+                                                            .POD_TEMPLATE_MERGE_BY_NAME)),
+                                    resource)));
         }
     }
 
+    private static Pod applyResourceToPodTemplate(Pod podTemplate, Resource resource) {
+        if (resource != null
+                && !StringUtils.isNullOrWhitespaceOnly(resource.getEphemeralStorage())
+                && podTemplate != null
+                && podTemplate.getSpec() != null) {

Review Comment:
   This will ignore the ephemeralStorage setting if the user did not specify 
any podTemplate (or a spec in the podTemplate), which is probably incorrect.



##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java:
##
@@ -409,15 +419,44 @@ private static void setPodTemplate(
             effectiveConfig.setString(
                     podConfigOption,
                     createTempFile(
-                            mergePodTemplates(
-                                    basicPod,
-                                    appendPod,
-                                    effectiveConfig.get(
-                                            KubernetesOperatorConfigOptions
-                                                    .POD_TEMPLATE_MERGE_BY_NAME))));
+                            applyResourceToPodTemplate(
+                                    mergePodTemplates(
+                                            basicPod,
+                                            appendPod,
+                                            effectiveConfig.get(
+                                                    KubernetesOperatorConfigOptions
+                                                            .POD_TEMPLATE_MERGE_BY_NAME)),
+                                    resource)));
         }
     }
 
+    private static Pod applyResourceToPodTemplate(Pod podTemplate, Resource resource) {
+        if (resource != null
+                && !StringUtils.isNullOrWhitespaceOnly(resource.getEphemeralStorage())
+                && podTemplate != null
+                && podTemplate.getSpec() != null) {
+            for (Container container : podTemplate.getSpec().getContainers()) {

Review Comment:
   Should this apply to all containers or only to the main JM/TM container 
(`Constants.MAIN_CONTAINER_NAME`)? I think only to the main one.
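   
   A hedged sketch of that suggestion, using the fabric8 Kubernetes model 
("flink-main-container" is the value of `Constants.MAIN_CONTAINER_NAME`; null 
handling of the resource-requirements maps is omitted for brevity):
   
   import io.fabric8.kubernetes.api.model.Container;
   import io.fabric8.kubernetes.api.model.Pod;
   import io.fabric8.kubernetes.api.model.Quantity;
   
   final class EphemeralStorageSketch {
       static void applyEphemeralStorage(Pod podTemplate, String storage) {
           for (Container c : podTemplate.getSpec().getContainers()) {
               // Only touch the main JM/TM container, not sidecars.
               if ("flink-main-container".equals(c.getName())) {
                   Quantity q = new Quantity(storage);
                   c.getResources().getRequests().put("ephemeral-storage", q);
                   c.getResources().getLimits().put("ephemeral-storage", q);
               }
           }
       }
   }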



##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java:
##
@@ -409,15 +419,44 @@ private static void setPodTemplate(
             effectiveConfig.setString(
                     podConfigOption,
                     createTempFile(
-                            mergePodTemplates(
-                                    basicPod,
-                                    appendPod,
-                                    effectiveConfig.get(
-                                            KubernetesOperatorConfigOptions
-                                                    .POD_TEMPLATE_MERGE_BY_NAME))));
+                            applyResourceToPodTemplate(
+                                    mergePodTemplates(
+                                            basicPod,
+                                            appendPod,
+                                            effectiveConfig.get(
+                                                    KubernetesOperatorConfigOptions
+                                                            .POD_TEMPLATE_MERGE_BY_NAME)),
+                                    resource)));
         }
     }
 
+    private static Pod applyResourceToPodTemplate(Pod podTemplate, Resource resource) {
+        if (resource != null
+                && !StringUtils.isNullOrWhitespaceOnly(resource.getEphemeralStorage())

[GitHub] [flink-kubernetes-operator] morhidi commented on a diff in pull request #561: [FLINK-30609] Add ephemeral storage to CRD

2023-04-03 Thread via GitHub


morhidi commented on code in PR #561:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/561#discussion_r1156574710


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/validation/DefaultValidator.java:
##
@@ -366,6 +368,18 @@ private Optional<String> validateResources(String component, Resource resource)
             return Optional.of(component + " resource memory parse error: " + iae.getMessage());
         }
 
+        String storage = resource.getEphemeralStorage();

Review Comment:
   This will not catch misconfigured storage resources if memory is not defined.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-30609) Add ephemeral storage to CRD

2023-04-03 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708165#comment-17708165
 ] 

Zhenqiu Huang commented on FLINK-30609:
---

[~nfraison.datadog]
Please review the PR according to the requirements from your org. Thank you 
very much!

> Add ephemeral storage to CRD
> 
>
> Key: FLINK-30609
> URL: https://issues.apache.org/jira/browse/FLINK-30609
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available, starter
> Fix For: kubernetes-operator-1.5.0
>
>
> We should consider adding ephemeral storage to the existing [resource 
> specification|https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/reference/#resource]
>  in the CRD, next to {{cpu}} and {{memory}}:
> https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30609) Add ephemeral storage to CRD

2023-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-30609:
---
Labels: pull-request-available starter  (was: starter)

> Add ephemeral storage to CRD
> 
>
> Key: FLINK-30609
> URL: https://issues.apache.org/jira/browse/FLINK-30609
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available, starter
> Fix For: kubernetes-operator-1.5.0
>
>
> We should consider adding ephemeral storage to the existing [resource 
> specification|https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/reference/#resource]
>  in the CRD, next to {{cpu}} and {{memory}}:
> https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] HuangZhenQiu opened a new pull request, #561: [FLINK-30609] Add ephemeral storage to CRD

2023-04-03 Thread via GitHub


HuangZhenQiu opened a new pull request, #561:
URL: https://github.com/apache/flink-kubernetes-operator/pull/561

   ## What is the purpose of the change
   Add ephemeral storage to the CRD, so that users can apply ephemeral storage 
as a resource requirement to the JM and TM containers.
   
   ## Brief change log 
   
 - Add ephemeral storage to Resource of flink deployment spec
   
   ## Verifying this change
   This change added tests and can be verified as follows:
   
 - Changes are tested in unit test in FlinkConfigBuilderTest
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
(yes)
 - Core observer or reconciler logic that is regularly executed: ( no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] nateab commented on pull request #22313: [FLINK-31660][connector-kafka] fix kafka connector pom so ITCases run in IDE

2023-04-03 Thread via GitHub


nateab commented on PR #22313:
URL: https://github.com/apache/flink/pull/22313#issuecomment-1495083007

   I did not rebuild the modules on the command line; I clicked `Maven -> 
Reload all maven projects` in IntelliJ.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #22063: [FLINK-24909][Table SQL/Client] Add syntax highlighting

2023-04-03 Thread via GitHub


snuyanzin commented on code in PR #22063:
URL: https://github.com/apache/flink/pull/22063#discussion_r1156501755


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SqlClientSyntaxHighlighter.java:
##
@@ -0,0 +1,246 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+import org.apache.flink.table.api.SqlDialect;
+import org.apache.flink.table.api.config.TableConfigOptions;
+import org.apache.flink.table.client.gateway.Executor;
+
+import org.jline.reader.LineReader;
+import org.jline.reader.impl.DefaultHighlighter;
+import org.jline.utils.AttributedString;
+import org.jline.utils.AttributedStringBuilder;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Locale;
+import java.util.Properties;
+import java.util.Set;
+import java.util.function.BiConsumer;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.table.client.cli.CliClient.COLOR_SCHEMA_VAR;
+
+/** Sql Client syntax highlighter. */
+public class SqlClientSyntaxHighlighter extends DefaultHighlighter {
+private static final Set<String> FLINK_KEYWORD_SET;
+private static final Set<String> HIVE_KEYWORD_SET;
+
+static {
+try (InputStream is =
+
SqlClientSyntaxHighlighter.class.getResourceAsStream("/keywords.properties")) {
+Properties props = new Properties();
+props.load(is);
+FLINK_KEYWORD_SET =
+Collections.unmodifiableSet(
+
Arrays.stream(props.get("default").toString().split(";"))
+.collect(Collectors.toSet()));
+HIVE_KEYWORD_SET =
+Collections.unmodifiableSet(
+
Arrays.stream(props.get("hive").toString().split(";"))
+.collect(Collectors.toSet()));
+} catch (IOException e) {
+throw new RuntimeException(e);
+}
+}
+
+private final Executor executor;
+
+public SqlClientSyntaxHighlighter(Executor executor) {
+this.executor = executor;
+}
+
+@Override
+public AttributedString highlight(LineReader reader, String buffer) {
+
+final Object colorSchemeOrdinal = reader.getVariable(COLOR_SCHEMA_VAR);
+SyntaxHighlightStyle.BuiltInStyle style =
+SyntaxHighlightStyle.BuiltInStyle.fromOrdinal(
+colorSchemeOrdinal == null ? 0 : (Integer) 
colorSchemeOrdinal);
+if (style == SyntaxHighlightStyle.BuiltInStyle.DEFAULT) {
+return super.highlight(reader, buffer);
+}
+final SqlDialect dialect =
+executor.getSessionConfig()
+.get(TableConfigOptions.TABLE_SQL_DIALECT)
+.equalsIgnoreCase(SqlDialect.HIVE.toString())
+? SqlDialect.HIVE
+: SqlDialect.DEFAULT;
+return getHighlightedOutput(buffer, style.getHighlightStyle(), 
dialect);
+}
+
+static AttributedString getHighlightedOutput(

Review Comment:
   In general it shows the behavior, however there are some details like word 
splitting.
   In fact, if we are in the `DEFAULT` state and the current symbol is not part 
of a word (e.g. whitespace or some other character), then it's better to append 
it directly to the output. This way there is no need to care about whitespace or 
other non-word prefixes while checking against keywords.
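   
   For illustration, a minimal self-contained sketch of that buffering strategy 
(the class and method names are made up, and plain brackets stand in for the 
real JLine `AttributedStringBuilder` styling):
   ```java
   import java.util.Set;
   
   public class KeywordScanSketch {
       private static final Set<String> KEYWORDS = Set.of("SELECT", "FROM", "WHERE");
   
       public static String highlight(String buffer) {
           StringBuilder out = new StringBuilder();
           StringBuilder word = new StringBuilder();
           for (char c : buffer.toCharArray()) {
               if (Character.isLetterOrDigit(c) || c == '_') {
                   word.append(c); // could still be part of a keyword, keep buffering
               } else {
                   flush(word, out);
                   out.append(c); // non-word char: append directly, no keyword check
               }
           }
           flush(word, out);
           return out.toString();
       }
   
       private static void flush(StringBuilder word, StringBuilder out) {
           String w = word.toString();
           out.append(KEYWORDS.contains(w.toUpperCase()) ? "[" + w + "]" : w);
           word.setLength(0); // reset the buffer for the next word
       }
   
       public static void main(String[] args) {
           System.out.println(highlight("select id from t")); // prints: [select] id [from] t
       }
   }
   ```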



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #22063: [FLINK-24909][Table SQL/Client] Add syntax highlighting

2023-04-03 Thread via GitHub


snuyanzin commented on code in PR #22063:
URL: https://github.com/apache/flink/pull/22063#discussion_r1156496831


##
flink-table/flink-sql-client/src/main/resources/keywords.properties:
##
@@ -0,0 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to you under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Resources for the Apache Calcite project.
+# See wrapper class org.apache.calcite.runtime.CalciteResource.
+#
+default=ABS;ALL;ALLOCATE;ALLOW;ALTER;AND;ANY;ARE;ARRAY;ARRAY_AGG;ARRAY_CONCAT_AGG;ARRAY_MAX_CARDINALITY;AS;ASENSITIVE;ASYMMETRIC;AT;ATOMIC;AUTHORIZATION;AVG;BEGIN;BEGIN_FRAME;BEGIN_PARTITION;BETWEEN;BIGINT;BINARY;BIT;BLOB;BOOLEAN;BOTH;BY;BYTES;CALL;CALLED;CARDINALITY;CASCADED;CASE;CAST;CATALOGS;CEIL;CEILING;CHAR;CHARACTER;CHARACTER_LENGTH;CHAR_LENGTH;CHECK;CLASSIFIER;CLOB;CLOSE;COALESCE;COLLATE;COLLECT;COLUMN;COMMENT;COMMIT;CONDITION;CONNECT;CONSTRAINT;CONTAINS;CONVERT;CORR;CORRESPONDING;COUNT;COVAR_POP;COVAR_SAMP;CREATE;CROSS;CUBE;CUME_DIST;CURRENT;CURRENT_CATALOG;CURRENT_DATE;CURRENT_DEFAULT_TRANSFORM_GROUP;CURRENT_PATH;CURRENT_ROLE;CURRENT_ROW;CURRENT_SCHEMA;CURRENT_TIME;CURRENT_TIMESTAMP;CURRENT_TRANSFORM_GROUP_FOR_TYPE;CURRENT_USER;CURSOR;CYCLE;DATABASES;DATE;DAY;DEALLOCATE;DEC;DECIMAL;DECLARE;DEFAULT;DEFINE;DELETE;DENSE_RANK;DEREF;DESCRIBE;DETERMINISTIC;DISALLOW;DISCONNECT;DISTINCT;DOT;DOUBLE;DROP;DYNAMIC;EACH;ELEMENT;ELSE;EMPTY;END;END-EXEC;END_FRAME;END_PARTITION;EQUALS;ESCA
 
PE;EVERY;EXCEPT;EXEC;EXECUTE;EXISTS;EXP;EXPLAIN;EXTEND;EXTENDED;EXTERNAL;EXTRACT;FALSE;FETCH;FILTER;FIRST_VALUE;FLOAT;FLOOR;FOR;FOREIGN;FRAME_ROW;FREE;FROM;FULL;FUNCTION;FUNCTIONS;FUSION;GET;GLOBAL;GRANT;GROUP;GROUPING;GROUPS;GROUP_CONCAT;HAVING;HOLD;HOUR;IDENTITY;ILIKE;IMPORT;IN;INCLUDE;INDICATOR;INITIAL;INNER;INOUT;INSENSITIVE;INSERT;INT;INTEGER;INTERSECT;INTERSECTION;INTERVAL;INTO;IS;JOIN;JSON_ARRAY;JSON_ARRAYAGG;JSON_EXISTS;JSON_OBJECT;JSON_OBJECTAGG;JSON_QUERY;JSON_VALUE;LAG;LANGUAGE;LARGE;LAST_VALUE;LATERAL;LEAD;LEADING;LEFT;LIKE;LIKE_REGEX;LIMIT;LN;LOCAL;LOCALTIME;LOCALTIMESTAMP;LOWER;MATCH;MATCHES;MATCH_NUMBER;MATCH_RECOGNIZE;MAX;MEASURES;MEMBER;MERGE;METHOD;MIN;MINUS;MINUTE;MOD;MODIFIES;MODIFY;MODULE;MODULES;MONTH;MULTISET;NATIONAL;NATURAL;NCHAR;NCLOB;NEW;NEXT;NO;NONE;NORMALIZE;NOT;NTH_VALUE;NTILE;NULL;NULLIF;NUMERIC;OCCURRENCES_REGEX;OCTET_LENGTH;OF;OFFSET;OLD;OMIT;ON;ONE;ONLY;OPEN;OR;ORDER;OUT;OUTER;OVER;OVERLAPS;OVERLAY;PARAMETER;PARTITION;PATTERN;PER;PERCENT;PERCENTILE_
 
CONT;PERCENTILE_DISC;PERCENT_RANK;PERIOD;PERMUTE;PIVOT;PORTION;POSITION;POSITION_REGEX;POWER;PRECEDES;PRECISION;PREPARE;PREV;PRIMARY;PROCEDURE;RANGE;RANK;RAW;READS;REAL;RECURSIVE;REF;REFERENCES;REFERENCING;REGR_AVGX;REGR_AVGY;REGR_COUNT;REGR_INTERCEPT;REGR_R2;REGR_SLOPE;REGR_SXX;REGR_SXY;REGR_SYY;RELEASE;RENAME;RESET;RESULT;RETURN;RETURNS;REVOKE;RIGHT;RLIKE;ROLLBACK;ROLLUP;ROW;ROWS;ROW_NUMBER;RUNNING;SAVEPOINT;SCALA;SCOPE;SCROLL;SEARCH;SECOND;SEEK;SELECT;SENSITIVE;SEPARATOR;SESSION_USER;SET;SHOW;SIMILAR;SKIP;SMALLINT;SOME;SPECIFIC;SPECIFICTYPE;SQL;SQLEXCEPTION;SQLSTATE;SQLWARNING;SQRT;START;STATEMENT;STATIC;STDDEV_POP;STDDEV_SAMP;STREAM;STRING;STRING_AGG;SUBMULTISET;SUBSET;SUBSTRING;SUBSTRING_REGEX;SUCCEEDS;SUM;SYMMETRIC;SYSTEM;SYSTEM_TIME;SYSTEM_USER;TABLE;TABLES;TABLESAMPLE;THEN;TIME;TIMESTAMP;TIMESTAMP_LTZ;TIMEZONE_HOUR;TIMEZONE_MINUTE;TINYINT;TO;TRAILING;TRANSLATE;TRANSLATE_REGEX;TRANSLATION;TREAT;TRIGGER;TRIM;TRIM_ARRAY;TRUE;TRUNCATE;UESCAPE;UNION;UNIQUE;UNKNOWN;UNNEST;UNPIVOT;
 
UPDATE;UPPER;UPSERT;USE;USER;USING;VALUE;VALUES;VALUE_OF;VARBINARY;VARCHAR;VARYING;VAR_POP;VAR_SAMP;VERSIONING;VIEWS;WATERMARK;WATERMARKS;WHEN;WHENEVER;WHERE;WIDTH_BUCKET;WINDOW;WITH;WITHIN;WITHOUT;YEAR

Review Comment:
   Oops, sorry, looks like I was wrong here...
   `getJdbcKeywords()` does not give all the keywords...
   In fact it's a bit trickier... 
   To answer the question whether a given word is a keyword, there is 
`org.apache.calcite.sql.parser.SqlAbstractParserImpl.MetadataImpl#isReservedWord`,
 e.g. 
   ```java
   String candidate = ...;
   FlinkSqlParserImpl.FACTORY
   .getParser(new StringReader(""))
   .getMetadata()
   .isReservedWord(candidate);
   ```
   
   The problem is that under the hood it works like 
   ```java
   @Override
   public boolean isReservedWord(String token) {
 return 

[GitHub] [flink] snuyanzin commented on a diff in pull request #22063: [FLINK-24909][Table SQL/Client] Add syntax highlighting

2023-04-03 Thread via GitHub


snuyanzin commented on code in PR #22063:
URL: https://github.com/apache/flink/pull/22063#discussion_r1156488674


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SqlClientSyntaxHighlighter.java:
##
@@ -0,0 +1,257 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+import org.apache.flink.table.api.SqlDialect;
+import org.apache.flink.table.api.config.TableConfigOptions;
+import org.apache.flink.table.client.config.SqlClientOptions;
+import org.apache.flink.table.client.gateway.Executor;
+
+import org.jline.reader.LineReader;
+import org.jline.reader.impl.DefaultHighlighter;
+import org.jline.utils.AttributedString;
+import org.jline.utils.AttributedStringBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Locale;
+import java.util.Properties;
+import java.util.Set;
+import java.util.function.BiConsumer;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+/** Sql Client syntax highlighter. */
+public class SqlClientSyntaxHighlighter extends DefaultHighlighter {
+private static final Logger LOG = 
LoggerFactory.getLogger(SqlClientSyntaxHighlighter.class);
+private static Set<String> flinkKeywordSet;
+private static Set<Character> flinkKeywordCharacterSet;
+
+static {
+try (InputStream is =
+
SqlClientSyntaxHighlighter.class.getResourceAsStream("/keywords.properties")) {
+Properties props = new Properties();
+props.load(is);
+flinkKeywordSet =
+Collections.unmodifiableSet(
+
Arrays.stream(props.get("default").toString().split(";"))
+.collect(Collectors.toSet()));
+flinkKeywordCharacterSet =
+flinkKeywordSet.stream()
+.flatMap(t -> t.chars().mapToObj(c -> (char) c))
+.collect(Collectors.toSet());
+} catch (IOException e) {
+LOG.error("Exception: ", e);
+flinkKeywordSet = Collections.emptySet();
+}
+}
+
+private final Executor executor;
+
+public SqlClientSyntaxHighlighter(Executor executor) {
+this.executor = executor;
+}
+
+@Override
+public AttributedString highlight(LineReader reader, String buffer) {
+final SyntaxHighlightStyle.BuiltInStyle style =
+SyntaxHighlightStyle.BuiltInStyle.fromString(
+executor.getSessionConfig()
+
.get(SqlClientOptions.DISPLAY_DEFAULT_COLOR_SCHEMA));
+
+if (style == SyntaxHighlightStyle.BuiltInStyle.DEFAULT) {
+return super.highlight(reader, buffer);
+}
+final String dialectName =
+
executor.getSessionConfig().get(TableConfigOptions.TABLE_SQL_DIALECT);
+final SqlDialect dialect =
+SqlDialect.HIVE.name().equalsIgnoreCase(dialectName)
+? SqlDialect.HIVE
+: SqlDialect.DEFAULT;
+return getHighlightedOutput(buffer, style.getHighlightStyle(), 
dialect);
+}
+
+static AttributedString getHighlightedOutput(
+String buffer, SyntaxHighlightStyle style, SqlDialect dialect) {
+final AttributedStringBuilder highlightedOutput = new 
AttributedStringBuilder();
+State currentParseState = null;
+StringBuilder word = new StringBuilder();
+for (int i = 0; i < buffer.length(); i++) {
+final char currentChar = buffer.charAt(i);
+if (currentParseState == null) {
+currentParseState = State.computeStateAt(buffer, i, dialect);
+if (currentParseState == null) {
+if 
(!flinkKeywordCharacterSet.contains(Character.toUpperCase(currentChar))) {
+handleWord(word, highlightedOutput, currentParseState, 
style, true);
+

[GitHub] [flink] snuyanzin commented on a diff in pull request #22063: [FLINK-24909][Table SQL/Client] Add syntax highlighting

2023-04-03 Thread via GitHub


snuyanzin commented on code in PR #22063:
URL: https://github.com/apache/flink/pull/22063#discussion_r1156488186


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java:
##
@@ -307,6 +314,26 @@ private LineReader createLineReader(Terminal terminal, 
ExecutionMode mode) {
 terminal.writer().println(msg);
 LOG.warn(msg);
 }
+if (mode == ExecutionMode.INTERACTIVE_EXECUTION) {
+lineReader.setVariable(
+COLOR_SCHEMA_VAR,
+executor.getSessionConfig()

Review Comment:
   now only table config options are used



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #22063: [FLINK-24909][Table SQL/Client] Add syntax highlighting

2023-04-03 Thread via GitHub


snuyanzin commented on code in PR #22063:
URL: https://github.com/apache/flink/pull/22063#discussion_r1156487198


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SqlClientSyntaxHighlighter.java:
##
@@ -0,0 +1,257 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+import org.apache.flink.table.api.SqlDialect;
+import org.apache.flink.table.api.config.TableConfigOptions;
+import org.apache.flink.table.client.config.SqlClientOptions;
+import org.apache.flink.table.client.gateway.Executor;
+
+import org.jline.reader.LineReader;
+import org.jline.reader.impl.DefaultHighlighter;
+import org.jline.utils.AttributedString;
+import org.jline.utils.AttributedStringBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Locale;
+import java.util.Properties;
+import java.util.Set;
+import java.util.function.BiConsumer;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+/** Sql Client syntax highlighter. */
+public class SqlClientSyntaxHighlighter extends DefaultHighlighter {
+private static final Logger LOG = 
LoggerFactory.getLogger(SqlClientSyntaxHighlighter.class);
+private static Set<String> flinkKeywordSet;
+private static Set<Character> flinkKeywordCharacterSet;
+
+static {
+try (InputStream is =
+
SqlClientSyntaxHighlighter.class.getResourceAsStream("/keywords.properties")) {
+Properties props = new Properties();
+props.load(is);
+flinkKeywordSet =
+Collections.unmodifiableSet(
+
Arrays.stream(props.get("default").toString().split(";"))
+.collect(Collectors.toSet()));
+flinkKeywordCharacterSet =
+flinkKeywordSet.stream()
+.flatMap(t -> t.chars().mapToObj(c -> (char) c))
+.collect(Collectors.toSet());
+} catch (IOException e) {
+LOG.error("Exception: ", e);
+flinkKeywordSet = Collections.emptySet();
+}
+}
+
+private final Executor executor;
+
+public SqlClientSyntaxHighlighter(Executor executor) {
+this.executor = executor;
+}
+
+@Override
+public AttributedString highlight(LineReader reader, String buffer) {
+final SyntaxHighlightStyle.BuiltInStyle style =
+SyntaxHighlightStyle.BuiltInStyle.fromString(
+executor.getSessionConfig()
+
.get(SqlClientOptions.DISPLAY_DEFAULT_COLOR_SCHEMA));
+
+if (style == SyntaxHighlightStyle.BuiltInStyle.DEFAULT) {
+return super.highlight(reader, buffer);
+}
+final String dialectName =
+
executor.getSessionConfig().get(TableConfigOptions.TABLE_SQL_DIALECT);
+final SqlDialect dialect =
+SqlDialect.HIVE.name().equalsIgnoreCase(dialectName)
+? SqlDialect.HIVE
+: SqlDialect.DEFAULT;
+return getHighlightedOutput(buffer, style.getHighlightStyle(), 
dialect);
+}
+
+static AttributedString getHighlightedOutput(
+String buffer, SyntaxHighlightStyle style, SqlDialect dialect) {
+final AttributedStringBuilder highlightedOutput = new 
AttributedStringBuilder();
+State currentParseState = null;
+StringBuilder word = new StringBuilder();
+for (int i = 0; i < buffer.length(); i++) {
+final char currentChar = buffer.charAt(i);
+if (currentParseState == null) {
+currentParseState = State.computeStateAt(buffer, i, dialect);
+if (currentParseState == null) {
+if 
(!flinkKeywordCharacterSet.contains(Character.toUpperCase(currentChar))) {
+handleWord(word, highlightedOutput, currentParseState, 
style, true);
+

[GitHub] [flink] snuyanzin commented on a diff in pull request #22063: [FLINK-24909][Table SQL/Client] Add syntax highlighting

2023-04-03 Thread via GitHub


snuyanzin commented on code in PR #22063:
URL: https://github.com/apache/flink/pull/22063#discussion_r1156486556


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SqlClientSyntaxHighlighter.java:
##
@@ -0,0 +1,257 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+import org.apache.flink.table.api.SqlDialect;
+import org.apache.flink.table.api.config.TableConfigOptions;
+import org.apache.flink.table.client.config.SqlClientOptions;
+import org.apache.flink.table.client.gateway.Executor;
+
+import org.jline.reader.LineReader;
+import org.jline.reader.impl.DefaultHighlighter;
+import org.jline.utils.AttributedString;
+import org.jline.utils.AttributedStringBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Locale;
+import java.util.Properties;
+import java.util.Set;
+import java.util.function.BiConsumer;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+/** Sql Client syntax highlighter. */
+public class SqlClientSyntaxHighlighter extends DefaultHighlighter {
+private static final Logger LOG = 
LoggerFactory.getLogger(SqlClientSyntaxHighlighter.class);
+private static Set<String> flinkKeywordSet;
+private static Set<Character> flinkKeywordCharacterSet;
+
+static {
+try (InputStream is =
+
SqlClientSyntaxHighlighter.class.getResourceAsStream("/keywords.properties")) {
+Properties props = new Properties();
+props.load(is);
+flinkKeywordSet =
+Collections.unmodifiableSet(
+
Arrays.stream(props.get("default").toString().split(";"))
+.collect(Collectors.toSet()));
+flinkKeywordCharacterSet =
+flinkKeywordSet.stream()
+.flatMap(t -> t.chars().mapToObj(c -> (char) c))
+.collect(Collectors.toSet());
+} catch (IOException e) {
+LOG.error("Exception: ", e);
+flinkKeywordSet = Collections.emptySet();
+}
+}
+
+private final Executor executor;
+
+public SqlClientSyntaxHighlighter(Executor executor) {
+this.executor = executor;
+}
+
+@Override
+public AttributedString highlight(LineReader reader, String buffer) {
+final SyntaxHighlightStyle.BuiltInStyle style =
+SyntaxHighlightStyle.BuiltInStyle.fromString(
+executor.getSessionConfig()
+
.get(SqlClientOptions.DISPLAY_DEFAULT_COLOR_SCHEMA));
+
+if (style == SyntaxHighlightStyle.BuiltInStyle.DEFAULT) {
+return super.highlight(reader, buffer);
+}
+final String dialectName =
+
executor.getSessionConfig().get(TableConfigOptions.TABLE_SQL_DIALECT);
+final SqlDialect dialect =
+SqlDialect.HIVE.name().equalsIgnoreCase(dialectName)
+? SqlDialect.HIVE
+: SqlDialect.DEFAULT;
+return getHighlightedOutput(buffer, style.getHighlightStyle(), 
dialect);
+}
+
+static AttributedString getHighlightedOutput(
+String buffer, SyntaxHighlightStyle style, SqlDialect dialect) {
+final AttributedStringBuilder highlightedOutput = new 
AttributedStringBuilder();
+State currentParseState = null;
+StringBuilder word = new StringBuilder();
+for (int i = 0; i < buffer.length(); i++) {
+final char currentChar = buffer.charAt(i);
+if (currentParseState == null) {
+currentParseState = State.computeStateAt(buffer, i, dialect);
+if (currentParseState == null) {
+if 
(!flinkKeywordCharacterSet.contains(Character.toUpperCase(currentChar))) {
+handleWord(word, highlightedOutput, currentParseState, 
style, true);
+

[jira] [Commented] (FLINK-31121) KafkaSink should be able to catch and ignore exp via config on/off

2023-04-03 Thread Mason Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708143#comment-17708143
 ] 

Mason Chen commented on FLINK-31121:


[~zjureel] are you still planning to implement the fix? I would also like to 
use this feature.

> KafkaSink should be able to catch and ignore exp via config on/off
> --
>
> Key: FLINK-31121
> URL: https://issues.apache.org/jira/browse/FLINK-31121
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.17.0
>Reporter: Jing Ge
>Assignee: Fang Yong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> It is a common requirement for users to catch and ignore exceptions while 
> sinking events to a downstream system like Kafka. It would be convenient for 
> some use cases if Flink Sink could provide built-in functionality and a config 
> to turn it on and off, especially for cases where data consistency is not very 
> important or the stream contains dirty events. [1][2]
> First of all, consider doing it for KafkaSink. Long term, a common solution 
> that can be used by any connector would be even better.
>  
> [1][https://lists.apache.org/thread/wy31s8wb9qnskq29wn03kp608z4vrwv8]
> [2]https://stackoverflow.com/questions/52308911/how-to-handle-exceptions-in-kafka-sink
>  
>  
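
For illustration only, a rough sketch of the opt-in behavior being requested; 
the flag and the wrapper class below are hypothetical, and a real KafkaSink 
integration would also need to handle asynchronous producer callbacks and the 
configured delivery guarantee:

{code:java}
import java.util.function.Consumer;

// Hypothetical wrapper around a sink's write path: by default it rethrows;
// with the (made-up) "ignore errors" flag enabled it drops the dirty record.
public class IgnoreErrorsSinkSketch<T> {
    private final Consumer<T> delegate;       // the real write call, e.g. a producer send
    private final boolean ignoreWriteErrors;  // would come from a user-facing config option

    public IgnoreErrorsSinkSketch(Consumer<T> delegate, boolean ignoreWriteErrors) {
        this.delegate = delegate;
        this.ignoreWriteErrors = ignoreWriteErrors;
    }

    public void write(T element) {
        try {
            delegate.accept(element);
        } catch (RuntimeException e) {
            if (!ignoreWriteErrors) {
                throw e; // default behavior: fail the job, keeping guarantees intact
            }
            // opt-in behavior: skip the record; a metric counter would be useful here
        }
    }
}
{code}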



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31593) Update reference data for Migration Tests

2023-04-03 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708135#comment-17708135
 ] 

Roman Khachatryan commented on FLINK-31593:
---

Sorry for the late reply [~mapohl], I'll try to look into it tomorrow.

> Update reference data for Migration Tests
> -
>
> Key: FLINK-31593
> URL: https://issues.apache.org/jira/browse/FLINK-31593
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
> Attachments: 
> FLINK-31593.StatefulJobSavepointMigrationITCase.create_snapshot.log, 
> FLINK-31593.StatefulJobSavepointMigrationITCase.verify_snapshot.log
>
>
> # Update {{CURRENT_VERSION}} in {{TypeSerializerUpgradeTestBase}} with the new 
> version. This will likely fail some tests because snapshots are missing for 
> that version. Generate them, for example in 
> {{TypeSerializerUpgradeTestBase.}} 
> # (major/minor only) Update migration tests in master to cover migration 
> from the new version (search for usages of {{FlinkVersion}}):
>  ** AbstractOperatorRestoreTestBase
>  ** CEPMigrationTest
>  ** BucketingSinkMigrationTest
>  ** FlinkKafkaConsumerBaseMigrationTest
>  ** ContinuousFileProcessingMigrationTest
>  ** WindowOperatorMigrationTest
>  ** StatefulJobSavepointMigrationITCase
>  ** StatefulJobWBroadcastStateMigrationITCase



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] morhidi commented on pull request #560: [FLINK-31716] Event UID field is missing the first time that an event…

2023-04-03 Thread via GitHub


morhidi commented on PR #560:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/560#issuecomment-1494978500

   LGTM


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] morhidi commented on a diff in pull request #560: [FLINK-31716] Event UID field is missing the first time that an event…

2023-04-03 Thread via GitHub


morhidi commented on code in PR #560:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/560#discussion_r1156455769


##
flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/utils/EventUtilsTest.java:
##
@@ -52,14 +63,16 @@ public void testCreateOrReplaceEvent() {
 reason,
 message,
 EventRecorder.Component.Operator,
-e -> {}));
+consumer));

Review Comment:
   if we don't set a consumer here...



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] morhidi commented on a diff in pull request #560: [FLINK-31716] Event UID field is missing the first time that an event…

2023-04-03 Thread via GitHub


morhidi commented on code in PR #560:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/560#discussion_r1156452579


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/EventUtils.java:
##
@@ -104,8 +104,12 @@ public static boolean createOrUpdateEvent(
 .withNamespace(target.getMetadata().getNamespace())
 .endMetadata()
 .build();
-client.resource(event).createOrReplace();
+
+var ev = client.resource(event).createOrReplace();
+var metadata = event.getMetadata();

Review Comment:
   There's no need for this extra local env variable.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] rodmeneses commented on pull request #560: [FLINK-31716] Event UID field is missing the first time that an event…

2023-04-03 Thread via GitHub


rodmeneses commented on PR #560:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/560#issuecomment-1494970613

   cc: @morhidi @mbalassi 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31716) Event UID field is missing the first time that an event is consumed

2023-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31716:
---
Labels: pull-request-available  (was: )

> Event UID field is missing the first time that an event is consumed
> ---
>
> Key: FLINK-31716
> URL: https://issues.apache.org/jira/browse/FLINK-31716
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Rodrigo Meneses
>Assignee: Rodrigo Meneses
>Priority: Major
>  Labels: pull-request-available
>
> In `EventUtils.createOrUpdateEvent` we use a `Consumer` instance to 
> `accept` the underlying event that is being created or updated.
> The first time an event is created, we call 
> `client.resource(event).createOrReplace()` but discard the return 
> value of that method, and we return the `event` that we just created, 
> which has an empty UID field.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] rodmeneses opened a new pull request, #560: [FLINK-31716] Event UID field is missing the first time that an event…

2023-04-03 Thread via GitHub


rodmeneses opened a new pull request, #560:
URL: https://github.com/apache/flink-kubernetes-operator/pull/560

   … is consumed
   
   
   
   ## What is the purpose of the change
   Fixes a bug reported in https://issues.apache.org/jira/browse/FLINK-31716 
where the event being consumed for the first time didn't have a UID field.
   
   ## Brief change log
   
   - *Fixes a bug on `EventUtils` where the event consumed was missing the UID 
field*
   
   ## Verifying this change
   
   
   This change modified existing unit tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
no
 - Core observer or reconciler logic that is regularly executed: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature?  no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31717) Unit tests running with local kube config

2023-04-03 Thread Matyas Orhidi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matyas Orhidi updated FLINK-31717:
--
Fix Version/s: kubernetes-operator-1.5.0

> Unit tests running with local kube config
> -
>
> Key: FLINK-31717
> URL: https://issues.apache.org/jira/browse/FLINK-31717
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.4.0
>Reporter: Matyas Orhidi
>Priority: Critical
> Fix For: kubernetes-operator-1.5.0
>
>
> Some unit tests are using local kube environment. This can be dangerous when 
> pointing to sensitive clusters e.g. in prod.
> {quote}2023-04-03 12:32:53,956 i.f.k.c.Config [DEBUG] Found 
> for Kubernetes config at: [/Users//.kube/config].
> {quote}
> A misconfigured kube config environment revealed the issue:
> {quote}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 0.012 s <<< FAILURE! - in 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest
> [ERROR] 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK
>   Time elapsed: 0.008 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK(FlinkOperatorTest.java:63)
> [ERROR] 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig
>   Time elapsed: 0.004 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig(FlinkOperatorTest.java:108){quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31717) Unit tests running with local kube config

2023-04-03 Thread Matyas Orhidi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matyas Orhidi updated FLINK-31717:
--
Affects Version/s: kubernetes-operator-1.4.0

> Unit tests running with local kube config
> -
>
> Key: FLINK-31717
> URL: https://issues.apache.org/jira/browse/FLINK-31717
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.4.0
>Reporter: Matyas Orhidi
>Priority: Critical
>
> Some unit tests are using local kube environment. This can be dangerous when 
> pointing to sensitive clusters e.g. in prod.
> {quote}2023-04-03 12:32:53,956 i.f.k.c.Config [DEBUG] Found 
> for Kubernetes config at: [/Users//.kube/config].
> {quote}
> A misconfigured kube config environment revealed the issue:
> {quote}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 0.012 s <<< FAILURE! - in 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest
> [ERROR] 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK
>   Time elapsed: 0.008 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK(FlinkOperatorTest.java:63)
> [ERROR] 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig
>   Time elapsed: 0.004 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig(FlinkOperatorTest.java:108){quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31717) Unit tests running with local kube config

2023-04-03 Thread Matyas Orhidi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matyas Orhidi updated FLINK-31717:
--
Issue Type: Bug  (was: New Feature)

> Unit tests running with local kube config
> -
>
> Key: FLINK-31717
> URL: https://issues.apache.org/jira/browse/FLINK-31717
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Priority: Critical
>
> Some unit tests are using local kube environment. This can be dangerous when 
> pointing to sensitive clusters e.g. in prod.
> {quote}2023-04-03 12:32:53,956 i.f.k.c.Config [DEBUG] Found 
> for Kubernetes config at: [/Users//.kube/config].
> {quote}
> A misconfigured kube config environment revealed the issue:
> {quote}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 0.012 s <<< FAILURE! - in 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest
> [ERROR] 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK
>   Time elapsed: 0.008 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK(FlinkOperatorTest.java:63)
> [ERROR] 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig
>   Time elapsed: 0.004 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig(FlinkOperatorTest.java:108){quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31717) Unit tests running with local kube config

2023-04-03 Thread Matyas Orhidi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matyas Orhidi updated FLINK-31717:
--
Description: 
Some unit tests are using local kube environment. This can be dangerous when 
pointing to sensitive clusters e.g. in prod.

{quote}2023-04-03 12:32:53,956 i.f.k.c.Config [DEBUG] Found for 
Kubernetes config at: [/Users//.kube/config].
{quote}

A misconfigured kube config environment revealed the issue:

{quote}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 
0.012 s <<< FAILURE! - in org.apache.flink.kubernetes.operator.FlinkOperatorTest
[ERROR] 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK
  Time elapsed: 0.008 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK(FlinkOperatorTest.java:63)

[ERROR] 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig 
 Time elapsed: 0.004 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig(FlinkOperatorTest.java:108){quote}



  was:
Some unit tests are using local kube environment. This can be dangerous when 
pointing to sensitive clusters e.g. in prod.

{{2023-04-03 12:32:53,956 i.f.k.c.Config [DEBUG] Found for 
Kubernetes config at: [/Users//.kube/config].
}}
A misconfigured kube config environment revealed the issue:

{{[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.012 
s <<< FAILURE! - in org.apache.flink.kubernetes.operator.FlinkOperatorTest
[ERROR] 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK
  Time elapsed: 0.008 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK(FlinkOperatorTest.java:63)

[ERROR] 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig 
 Time elapsed: 0.004 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig(FlinkOperatorTest.java:108)
}}

move ~/.kube/config


> Unit tests running with local kube config
> -
>
> Key: FLINK-31717
> URL: https://issues.apache.org/jira/browse/FLINK-31717
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Priority: Critical
>
> Some unit tests are using local kube environment. This can be dangerous when 
> pointing to sensitive clusters e.g. in prod.
> {quote}2023-04-03 12:32:53,956 i.f.k.c.Config [DEBUG] Found 
> for Kubernetes config at: [/Users//.kube/config].
> {quote}
> A misconfigured kube config environment revealed the issue:
> {quote}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 0.012 s <<< FAILURE! - in 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest
> [ERROR] 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK
>   Time elapsed: 0.008 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK(FlinkOperatorTest.java:63)
> [ERROR] 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig
>   Time elapsed: 0.004 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig(FlinkOperatorTest.java:108){quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31717) Unit tests running with local kube config

2023-04-03 Thread Matyas Orhidi (Jira)
Matyas Orhidi created FLINK-31717:
-

 Summary: Unit tests running with local kube config
 Key: FLINK-31717
 URL: https://issues.apache.org/jira/browse/FLINK-31717
 Project: Flink
  Issue Type: New Feature
  Components: Kubernetes Operator
Reporter: Matyas Orhidi


Some unit tests are using local kube environment. This can be dangerous when 
pointing to sensitive clusters e.g. in prod.

{{2023-04-03 12:32:53,956 i.f.k.c.Config [DEBUG] Found for 
Kubernetes config at: [/Users//.kube/config].
}}
A misconfigured kube config environment revealed the issue:

{{[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.012 
s <<< FAILURE! - in org.apache.flink.kubernetes.operator.FlinkOperatorTest
[ERROR] 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK
  Time elapsed: 0.008 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testConfigurationPassedToJOSDK(FlinkOperatorTest.java:63)

[ERROR] 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig 
 Time elapsed: 0.004 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.flink.kubernetes.operator.FlinkOperatorTest.testLeaderElectionConfig(FlinkOperatorTest.java:108)
}}

move ~/.kube/config
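
One way to isolate such tests from the developer's kubeconfig, sketched with 
the fabric8 JUnit 5 mock server (this assumes the kubernetes-server-mock test 
dependency is available; the test class name is illustrative):

{code:java}
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.server.mock.EnableKubernetesMockClient;
import org.junit.jupiter.api.Test;

// Every test talks to an in-memory mock API server, so a misconfigured or
// production ~/.kube/config can never be picked up by accident.
@EnableKubernetesMockClient(crud = true)
class FlinkOperatorIsolatedTest {

    KubernetesClient client; // injected by the extension, points at the mock server

    @Test
    void operatorUsesInjectedClient() {
        // build the operator / services against `client` instead of relying on
        // automatic config discovery from the local environment
    }
}
{code}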



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] Samrat002 commented on pull request #22308: [FLINK-31518][Runtime / REST] Fix StandaloneHaServices#getClusterRestEndpointLeaderRetreiver to return correct rest port

2023-04-03 Thread via GitHub


Samrat002 commented on PR #22308:
URL: https://github.com/apache/flink/pull/22308#issuecomment-1494840097

   @huwh please review the changes in your free time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-31704) Pulsar docs should be pulled from dedicated branch

2023-04-03 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708079#comment-17708079
 ] 

Danny Cranmer commented on FLINK-31704:
---

[~Weijie Guo] it sounds like you are proposing something similar to the `main` 
branch in the gitflow model [1]. This would work but would require one stable 
branch per version line (3.x, 4.x, 5.x, etc.). It would not work for the case 
where we drop support for old Flink versions. For example, if we reference the 
stable branch 3.x in Flink 1.16, then once Flink 1.18 is released we would want 
to stop taking updates for the 1.16 branch.

We could also consider mutable tags for docs. For example, if Flink references 
the tag `v1.0.0-docs` then we could rewrite this tag as required in the 
connector repo. This has the same issue as above for old Flink versions.

For now I think being pragmatic and using the tag where possible, and a 
dedicated branch when not possible is a good compromise. For example, MongoDB 
now uses {{v1.0.0-docs}} branch and AWS connectors uses {{4.1.0-docs}} branch, 
both due to build issues :( 

[1] https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow

> Pulsar docs should be pulled from dedicated branch
> --
>
> Key: FLINK-31704
> URL: https://issues.apache.org/jira/browse/FLINK-31704
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Pulsar, Documentation
>Reporter: Danny Cranmer
>Priority: Major
>
> Pulsar docs are pulled from the {{main}} 
> [branch|https://github.com/apache/flink/blob/release-1.17/docs/setup_docs.sh#L49].
>  This is dangerous for final versions since we may include features in the 
> docs that are not supported. Update Pulsar to pull from a dedicated branch or 
> tag.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31716) Event UID field is missing the first time that an event is consumed

2023-04-03 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-31716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708076#comment-17708076
 ] 

Márton Balassi commented on FLINK-31716:


Good catch [~rodmeneses]

> Event UID field is missing the first time that an event is consumed
> ---
>
> Key: FLINK-31716
> URL: https://issues.apache.org/jira/browse/FLINK-31716
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Rodrigo Meneses
>Assignee: Rodrigo Meneses
>Priority: Major
>
> on `EventUtils.createOrUpdateEvent` we use a `Consumer` instance to 
> `accept` the underlying event that is being created or updated.
> The first time an event is created, we are calling 
> `client.resource(event).createOrReplace()` but we are discarding the return 
> value of such method, and we are returning the `event` that we just created, 
> which has an empty UID field.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31716) Event UID field is missing the first time that an event is consumed

2023-04-03 Thread Matyas Orhidi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matyas Orhidi reassigned FLINK-31716:
-

Assignee: Rodrigo Meneses

> Event UID field is missing the first time that an event is consumed
> ---
>
> Key: FLINK-31716
> URL: https://issues.apache.org/jira/browse/FLINK-31716
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Rodrigo Meneses
>Assignee: Rodrigo Meneses
>Priority: Major
>
> In `EventUtils.createOrUpdateEvent` we use a `Consumer` instance to 
> `accept` the underlying event that is being created or updated.
> The first time an event is created, we call 
> `client.resource(event).createOrReplace()` but discard the return 
> value of that method, and we return the `event` that we just created, 
> which has an empty UID field.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31716) Event UID field is missing the first time that an event is consumed

2023-04-03 Thread Rodrigo Meneses (Jira)
Rodrigo Meneses created FLINK-31716:
---

 Summary: Event UID field is missing the first time that an event 
is consumed
 Key: FLINK-31716
 URL: https://issues.apache.org/jira/browse/FLINK-31716
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Reporter: Rodrigo Meneses


In `EventUtils.createOrUpdateEvent` we use a `Consumer` instance to 
`accept` the underlying event that is being created or updated.

The first time an event is created, we call 
`client.resource(event).createOrReplace()` but discard the return 
value of that method, and we return the `event` that we just created, 
which has an empty UID field.
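
A minimal sketch of the fix this implies, assuming the fabric8 client API 
(the class and variable names here are illustrative, not the actual operator 
code):

{code:java}
import io.fabric8.kubernetes.api.model.Event;
import io.fabric8.kubernetes.client.KubernetesClient;

import java.util.function.Consumer;

class EventUidSketch {
    // Pass the server-returned object to the consumer: its metadata carries
    // the UID generated by the API server, unlike the locally built event.
    static Event createOrReplace(KubernetesClient client, Event event, Consumer<Event> consumer) {
        Event returned = client.resource(event).createOrReplace();
        Event result = returned != null ? returned : event; // defensive fallback
        consumer.accept(result);
        return result;
    }
}
{code}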



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] mateczagany commented on a diff in pull request #558: [FLINK-31303] Expose Flink application resource usage via metrics and status

2023-04-03 Thread via GitHub


mateczagany commented on code in PR #558:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/558#discussion_r1156270801


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java:
##
@@ -645,10 +641,19 @@ public Map<String, String> getClusterInfo(Configuration 
conf) throws Exception {
 dashboardConfiguration.getFlinkRevision());
 }
 
-// JobManager resource usage can be deduced from the CR
-var jmParameters =
-new KubernetesJobManagerParameters(
-conf, new 
KubernetesClusterClientFactory().getClusterSpecification(conf));
+clusterInfo.putAll(
+calculateClusterResourceMetrics(
+conf, 
getTaskManagersInfo(conf).getTaskManagerInfos().size()));
+
+return clusterInfo;
+}
+
+private HashMap<String, String> calculateClusterResourceMetrics(

Review Comment:
   You're right! I've also split the method into two separate methods in 
`FlinkUtils`, if that seems okay. This will result in some duplicated code, but 
I think it improves the code and makes it easier to re-use and test.
   
   I will add tests for the two new methods tomorrow.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] mateczagany commented on a diff in pull request #558: [FLINK-31303] Expose Flink application resource usage via metrics and status

2023-04-03 Thread via GitHub


mateczagany commented on code in PR #558:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/558#discussion_r1156268027


##
flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetricsTest.java:
##
@@ -187,6 +193,66 @@ public void testMetricsMultiNamespace() {
 }
 }
 
+@Test
+public void testResourceMetrics() {
+var namespace1 = "ns1";
+var namespace2 = "ns2";
+var deployment1 = TestUtils.buildApplicationCluster("deployment1", 
namespace1);
+var deployment2 = TestUtils.buildApplicationCluster("deployment2", 
namespace1);
+var deployment3 = TestUtils.buildApplicationCluster("deployment3", 
namespace2);
+
+deployment1
+.getStatus()
+.getClusterInfo()
+.putAll(
+Map.of(
+AbstractFlinkService.FIELD_NAME_TOTAL_CPU, "5",

Review Comment:
   I've added the tests, and also added a check that converts `Infinity` and 
`NaN` values to 0 instead.
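   
   A tiny sketch of that sanitization (the helper name is made up):
   ```java
   // Metrics backends generally cannot ingest NaN/Infinity, so clamp them to 0.
   static double sanitize(double value) {
       return (Double.isNaN(value) || Double.isInfinite(value)) ? 0.0 : value;
   }
   ```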



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-31697) OpenSearch nightly CI failure

2023-04-03 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-31697.
---
Resolution: Fixed

> OpenSearch nightly CI failure
> -
>
> Key: FLINK-31697
> URL: https://issues.apache.org/jira/browse/FLINK-31697
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Opensearch
>Reporter: Danny Cranmer
>Assignee: Andriy Redko
>Priority: Major
>  Labels: pull-request-available
>
> Investigate and fix the nightly CI failure. Example 
> [https://github.com/apache/flink-connector-opensearch/actions/runs/4585851921]
>  
>  
> {code:java}
> Error: Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) 
> on project flink-connector-opensearch: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
> org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' 
> failed to discover tests: 'java.lang.Object 
> com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(java.lang.Class)'
>  -> [Help 1]{code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31697) OpenSearch nightly CI failure

2023-04-03 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708049#comment-17708049
 ] 

Sergey Nuyanzin commented on FLINK-31697:
-

Merged to main as a0569724ae41ff663a6ffcb07385f223d03f202e

> OpenSearch nightly CI failure
> -
>
> Key: FLINK-31697
> URL: https://issues.apache.org/jira/browse/FLINK-31697
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Opensearch
>Reporter: Danny Cranmer
>Assignee: Andriy Redko
>Priority: Major
>  Labels: pull-request-available
>
> Investigate and fix the nightly CI failure. Example 
> [https://github.com/apache/flink-connector-opensearch/actions/runs/4585851921]
>  
>  
> {code:java}
> Error: Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) 
> on project flink-connector-opensearch: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
> org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' 
> failed to discover tests: 'java.lang.Object 
> com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(java.lang.Class)'
>  -> [Help 1]{code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-opensearch] snuyanzin merged pull request #15: [FLINK-31697] OpenSearch nightly CI failure

2023-04-03 Thread via GitHub


snuyanzin merged PR #15:
URL: https://github.com/apache/flink-connector-opensearch/pull/15


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-31715) Warning - 'An illegal reflective access operation has occurred'

2023-04-03 Thread Feroze Daud (Jira)
Feroze Daud created FLINK-31715:
---

 Summary: Warning - 'An illegal reflective access operation has 
occurred'
 Key: FLINK-31715
 URL: https://issues.apache.org/jira/browse/FLINK-31715
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.14.4
Reporter: Feroze Daud


I am seeing the following warnings when my app starts up.

 
{noformat}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.flink.api.java.ClosureCleaner 
(file:/Users/ferozed/.gradle/caches/modules-2/files-2.1/org.apache.flink/flink-core/1.14.4/1c397865a94743deb286c658384fae954a381df/flink-core-1.14.4.jar)
 to field java.lang.String.value
WARNING: Please consider reporting this to the maintainers of 
org.apache.flink.api.java.ClosureCleaner
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
09:51:49.626 [main] INFO  
com.zillow.clickstream.preprocess.utils.SchemaRegistryHelper  - Schema id = 9636
An illegal reflective access operation has occurred
Illegal reflective access by org.apache.flink.api.java.ClosureCleaner 
(file:/Users/ferozed/.gradle/caches/modules-2/files-2.1/org.apache.flink/flink-core/1.14.4/1c397865a94743deb286c658384fae954a381df/flink-core-1.14.4.jar)
 to field java.lang.String.value
Please consider reporting this to the maintainers of 
org.apache.flink.api.java.ClosureCleaner
Use --illegal-access=warn to enable warnings of further illegal reflective 
access operations
All illegal access operations will be denied in a future release
 {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-31703) Update Flink docs for AWS v4.1.0

2023-04-03 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer resolved FLINK-31703.
---
Resolution: Fixed

> Update Flink docs for AWS v4.1.0
> 
>
> Key: FLINK-31703
> URL: https://issues.apache.org/jira/browse/FLINK-31703
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / AWS, Documentation
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
>
> Update Flink docs for 1.16/1.17/1.18 to pull in the AWS connector docs for 
> v4.1.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31703) Update Flink docs for AWS v4.1.0

2023-04-03 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708036#comment-17708036
 ] 

Danny Cranmer commented on FLINK-31703:
---

Merged commit 
[{{2f3b701}}|https://github.com/apache/flink/commit/2f3b7016dcdcb1e21f817165091b5dffe9d19fa1]
 into apache:master
Merged commit 
[{{40ee201}}|https://github.com/apache/flink/commit/40ee2019c61736865c09732f431c545f09ffe665]
 into apache:release-1.17
Merged commit 
[{{95db617}}|https://github.com/apache/flink/commit/95db6179d509f122b6099118e9c3789d1e497c3d]
 into apache:release-1.16

> Update Flink docs for AWS v4.1.0
> 
>
> Key: FLINK-31703
> URL: https://issues.apache.org/jira/browse/FLINK-31703
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / AWS, Documentation
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
>
> Update Flink docs for 1.16/1.17/1.18 to pull in the AWS connector docs for 
> v4.1.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] dannycranmer merged pull request #22336: [FLINK-31703][Connectors/AWS][docs] Update AWS connector docs to v4.1.0 (1.16)

2023-04-03 Thread via GitHub


dannycranmer merged PR #22336:
URL: https://github.com/apache/flink/pull/22336


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-30609) Add ephemeral storage to CRD

2023-04-03 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708033#comment-17708033
 ] 

Zhenqiu Huang commented on FLINK-30609:
---

[~nfraison.datadog]
We are glad that you also have this need. I am actively working on the PR now. 
Please allow 1 or 2 more days. 

> Add ephemeral storage to CRD
> 
>
> Key: FLINK-30609
> URL: https://issues.apache.org/jira/browse/FLINK-30609
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: starter
> Fix For: kubernetes-operator-1.5.0
>
>
> We should consider adding ephemeral storage to the existing [resource 
> specification 
> |https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/reference/#resource]in
>  CRD, next to {{cpu}} and {{memory}}
> https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31689) Filesystem sink fails when parallelism of compactor operator changed

2023-04-03 Thread jirawech.s (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708030#comment-17708030
 ] 

jirawech.s commented on FLINK-31689:


[~luoyuxia] I see. So we could say that this is the normal behaviour of 
CompactOperator in the file sink for now. If we were to improve it, we would do 
so by implementing a state-compatible CompactOperator, right? Could you point 
me to the code/class I should check out? I am not so familiar with Flink 
development.

> Filesystem sink fails when parallelism of compactor operator changed
> 
>
> Key: FLINK-31689
> URL: https://issues.apache.org/jira/browse/FLINK-31689
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: jirawech.s
>Priority: Major
> Attachments: HelloFlinkHadoopSink.java
>
>
> I encountered this error when I tried to use the filesystem sink with Table 
> SQL. I have not tested with the DataStream API, though. You may refer to the 
> error below
> {code:java}
> // code placeholder
> java.util.NoSuchElementException
>   at java.util.ArrayList$Itr.next(ArrayList.java:864)
>   at 
> org.apache.flink.connector.file.table.stream.compact.CompactOperator.initializeState(CompactOperator.java:119)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:286)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:700)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:643)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
>   at java.lang.Thread.run(Thread.java:750) {code}
> I cannot attach the full reproducible code here, but you may follow my pseudo 
> code in the attachment and the reproduction steps below
> 1. Create Kafka source
> 2. Set state.savepoints.dir
> 3. Set Job parallelism to 1
> 4. Create FileSystem Sink
> 5. Run the job and trigger savepoint with API
> {noformat}
> curl -X POST localhost:8081/jobs/:jobId/savepoints -d '{"cancel-job": 
> false}'{noformat}
> 6. Cancel the job, change parallelism to 2, and resume the job from the 
> savepoint



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-gcp-pubsub] weifonghsia commented on pull request #5: PubSubSink doc example correction

2023-04-03 Thread via GitHub


weifonghsia commented on PR #5:
URL: 
https://github.com/apache/flink-connector-gcp-pubsub/pull/5#issuecomment-1494598849

   @MartijnVisser thanks for taking a look at the other ticket - I opened one 
here as well.
   
   I feel this is a trivial change that doesn't require a JIRA, but I can open 
one and update the PR if necessary. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] dannycranmer merged pull request #22334: [FLINK-31703][Connectors/AWS][docs] Update AWS connector docs to v4.1.0 (1.18)

2023-04-03 Thread via GitHub


dannycranmer merged PR #22334:
URL: https://github.com/apache/flink/pull/22334


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-30609) Add ephemeral storage to CRD

2023-04-03 Thread Nicolas Fraison (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708023#comment-17708023
 ] 

Nicolas Fraison commented on FLINK-30609:
-

We are looking into using this feature and have some time to implement it.
If it is fine by you, I can take it over.

> Add ephemeral storage to CRD
> 
>
> Key: FLINK-30609
> URL: https://issues.apache.org/jira/browse/FLINK-30609
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: starter
> Fix For: kubernetes-operator-1.5.0
>
>
> We should consider adding ephemeral storage to the existing [resource 
> specification 
> |https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/reference/#resource]in
>  CRD, next to {{cpu}} and {{memory}}
> https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-opensearch] snuyanzin commented on pull request #15: [FLINK-31697] OpenSearch nightly CI failure

2023-04-03 Thread via GitHub


snuyanzin commented on PR #15:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/15#issuecomment-1494573637

   Yeah, from the 1.16 point of view that probably makes sense; then with the 
1.17 update we can adjust the configuration a bit.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] XComp commented on pull request #21873: [FLINK-30921][ci] Adds mirrors instead of relying on a single source for Ubuntu packages

2023-04-03 Thread via GitHub


XComp commented on PR #21873:
URL: https://github.com/apache/flink/pull/21873#issuecomment-1494572066

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-kafka] tzulitai commented on pull request #20: [FLINK-29398][connector/kafka] Provide rack ID to Kafka Source to take advantage of Rack Awareness

2023-04-03 Thread via GitHub


tzulitai commented on PR #20:
URL: 
https://github.com/apache/flink-connector-kafka/pull/20#issuecomment-1494566992

   Thanks for the PR @jeremy-degroot. I'll try to find some time to review this 
over this week.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-31301) Unsupported nested columns in column list of insert statement

2023-04-03 Thread Aitozi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708014#comment-17708014
 ] 

Aitozi commented on FLINK-31301:


Hello [~lincoln.86xy], I'm interested in this ticket and have spent some time 
looking into it. I think we can support inserting partial nested columns by 
analysing the target row type to derive the RowType from the RelDataType. Then 
we can construct the target row as {{row(b, cast(null as int), row(cast(null as 
varchar), c))}} to make up the complete row (a toy illustration follows below).
WDYT?
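
For illustration, a toy, self-contained sketch of the padding idea; the row 
shapes and values here are hypothetical, and this is not planner code:

{code:java}
import java.util.Arrays;

public class PadNestedRow {
    public static void main(String[] args) {
        // Target shape: ROW<a INT, b ROW<b1 INT, b2 INT>, c ROW<c1 VARCHAR, c2 INT>, f INT>;
        // the user only specified a, b.b1, c.c2 and f in the column list.
        Object a = 1, b1 = 2, c2 = 3, f = 4;
        Object[] padded = {
            a,
            new Object[] {b1, null}, // b.b2 padded with a (typed) NULL
            new Object[] {null, c2}, // c.c1 padded with a (typed) NULL
            f
        };
        System.out.println(Arrays.deepToString(padded));
    }
}
{code}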

> Unsupported nested columns in column list of insert statement
> -
>
> Key: FLINK-31301
> URL: https://issues.apache.org/jira/browse/FLINK-31301
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.17.0, 1.16.1
>Reporter: lincoln lee
>Priority: Major
>
> Currently an error will be raised when use nested columns in column list of 
> insert statement, e.g.,
> {code:java}
> INSERT INTO nested_type_sink (a,b.b1,c.c2,f)
> SELECT a,b.b1,c.c2,f FROM nested_type_src
> {code}
>  
> {code:java}
> java.lang.AssertionError
>     at org.apache.calcite.sql.SqlIdentifier.getSimple(SqlIdentifier.java:333)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTargetField(SqlValidatorUtil.java:612)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter$.$anonfun$appendPartitionAndNullsProjects$3(PreValidateReWriter.scala:171)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at scala.collection.Iterator.foreach(Iterator.scala:937)
>     at scala.collection.Iterator.foreach$(Iterator.scala:937)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter$.appendPartitionAndNullsProjects(PreValidateReWriter.scala:164)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.rewriteInsert(PreValidateReWriter.scala:71)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:61)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:50)
>     at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:161)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:118)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:113)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:281)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
>     at 
> org.apache.flink.table.api.internal.StatementSetImpl.addInsertSql(StatementSetImpl.java:63)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-jdbc] hackergin opened a new pull request, #35: [hotfix][docs] Fix the broken postgresql driver download link

2023-04-03 Thread via GitHub


hackergin opened a new pull request, #35:
URL: https://github.com/apache/flink-connector-jdbc/pull/35

   Fix the broken postgresql driver download link


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-jdbc] boring-cyborg[bot] commented on pull request #35: [hotfix][docs] Fix the broken postgresql driver download link

2023-04-03 Thread via GitHub


boring-cyborg[bot] commented on PR #35:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/35#issuecomment-1494548531

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #22340: [BP-1.16][FLINK-31293] LocalBufferPool request overdraft buffer only when no available buffer and pool size is reached.

2023-04-03 Thread via GitHub


flinkbot commented on PR #22340:
URL: https://github.com/apache/flink/pull/22340#issuecomment-1494544634

   
   ## CI report:
   
   * 17fcaa05268b886f5d9d0e7edaa06268a80704f1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #22339: [BP-1.17][FLINK-31293] LocalBufferPool request overdraft buffer only when no available buffer and pool size is reached.

2023-04-03 Thread via GitHub


flinkbot commented on PR #22339:
URL: https://github.com/apache/flink/pull/22339#issuecomment-1494534822

   
   ## CI report:
   
   * 1731933baf9a53a0c5f17816bc5c92da53927e8e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #22338: [FLINK-31469] Allow setting JobResourceRequirements through REST API.

2023-04-03 Thread via GitHub


flinkbot commented on PR #22338:
URL: https://github.com/apache/flink/pull/22338#issuecomment-1494534638

   
   ## CI report:
   
   * 00e9899cf9ac913e9e39dc56675a77bc2f349755 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] reswqa opened a new pull request, #22340: [BP-1.16][FLINK-31293] LocalBufferPool request overdraft buffer only when no available buffer and pool size is reached.

2023-04-03 Thread via GitHub


reswqa opened a new pull request, #22340:
URL: https://github.com/apache/flink/pull/22340

   Backport FLINK-31293 to release-1.16.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] reswqa opened a new pull request, #22339: [BP-1.17][FLINK-31293] LocalBufferPool request overdraft buffer only when no available buffer and pool size is reached.

2023-04-03 Thread via GitHub


reswqa opened a new pull request, #22339:
URL: https://github.com/apache/flink/pull/22339

   Backport FLINK-31293 to release-1.17.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31469) Allow setting JobResourceRequirements through REST API

2023-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31469:
---
Labels: pull-request-available  (was: )

> Allow setting JobResourceRequirements through REST API
> --
>
> Key: FLINK-31469
> URL: https://issues.apache.org/jira/browse/FLINK-31469
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / REST
>Reporter: David Morávek
>Assignee: David Morávek
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] dmvk opened a new pull request, #22338: [FLINK-31469] Allow setting JobResourceRequirements through REST API.

2023-04-03 Thread via GitHub


dmvk opened a new pull request, #22338:
URL: https://github.com/apache/flink/pull/22338

   https://issues.apache.org/jira/browse/FLINK-31469
   
   This PR exposes JobResourceRequirements via REST API.
   
   New endpoints are going to be covered by the integration tests that will be 
introduced at https://issues.apache.org/jira/browse/FLINK-31470.
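
   For illustration, a hedged sketch of how a client might call the new 
endpoint; the URL path and the JSON body shape are assumptions based on this 
ticket, not the final API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SetJobResourceRequirements {
    public static void main(String[] args) throws Exception {
        // Hypothetical job and vertex ids, and hypothetical requirement bounds.
        String jobId = "feedcafedeadbeeffeedcafedeadbeef";
        String body = "{\"aaaabbbbccccddddaaaabbbbccccdddd\":"
                + "{\"parallelism\":{\"lowerBound\":1,\"upperBound\":4}}}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/jobs/" + jobId
                        + "/resource-requirements"))
                .header("Content-Type", "application/json")
                .method("PUT", HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```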
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] dannycranmer merged pull request #22333: [FLINK-31703][Connectors/AWS][docs] Update AWS connector docs to v4.1.0 (1.17)

2023-04-03 Thread via GitHub


dannycranmer merged PR #22333:
URL: https://github.com/apache/flink/pull/22333


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-31714) Conjars.org has died

2023-04-03 Thread Niels Basjes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes closed FLINK-31714.

Resolution: Duplicate

> Conjars.org has died
> 
>
> Key: FLINK-31714
> URL: https://issues.apache.org/jira/browse/FLINK-31714
> Project: Flink
>  Issue Type: Bug
>Reporter: Niels Basjes
>Priority: Blocker
>
> Recently conjars.org has died.
> The effect is that it is now *impossible* to build Flink on a clean machine.
> Chris Wensel has set up a read-only mirror https://conjars.wensel.net/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31469) Allow setting JobResourceRequirements through REST API

2023-04-03 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-31469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Morávek reassigned FLINK-31469:
-

Assignee: David Morávek  (was: Chesnay Schepler)

> Allow setting JobResourceRequirements through REST API
> --
>
> Key: FLINK-31469
> URL: https://issues.apache.org/jira/browse/FLINK-31469
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / REST
>Reporter: David Morávek
>Assignee: David Morávek
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31714) Conjars.org has died

2023-04-03 Thread Niels Basjes (Jira)
Niels Basjes created FLINK-31714:


 Summary: Conjars.org has died
 Key: FLINK-31714
 URL: https://issues.apache.org/jira/browse/FLINK-31714
 Project: Flink
  Issue Type: Bug
Reporter: Niels Basjes


Recently conjars.org has died.
The effect is that it is now *impossible* to build Flink on a clean machine.
Chris Wensel has set up a read-only mirror https://conjars.wensel.net/




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-opensearch] reta commented on pull request #15: [FLINK-31697] OpenSearch nightly CI failure

2023-04-03 Thread via GitHub


reta commented on PR #15:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/15#issuecomment-1494500244

   > doesn't it help?
   
   I was exploring bumping to 1.17 as well, but that seems to break the 1.16 
builds. It seems like relying on the managed ArchUnit version is safer (and 
simpler), wdyt?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-opensearch] snuyanzin commented on pull request #15: [FLINK-31697] OpenSearch nightly CI failure

2023-04-03 Thread via GitHub


snuyanzin commented on PR #15:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/15#issuecomment-1494488043

   There were breaking changes in 0.23.0.
   There is a PR making it compatible with ArchUnit 1.0.0:
https://github.com/apache/flink-connector-opensearch/pull/14
   Doesn't it help?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31697) OpenSearch nightly CI failure

2023-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31697:
---
Labels: pull-request-available  (was: )

> OpenSearch nightly CI failure
> -
>
> Key: FLINK-31697
> URL: https://issues.apache.org/jira/browse/FLINK-31697
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Opensearch
>Reporter: Danny Cranmer
>Assignee: Andriy Redko
>Priority: Major
>  Labels: pull-request-available
>
> Investigate and fix the nightly CI failure. Example 
> [https://github.com/apache/flink-connector-opensearch/actions/runs/4585851921]
>  
>  
> {code:java}
> Error: Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) 
> on project flink-connector-opensearch: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
> org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' 
> failed to discover tests: 'java.lang.Object 
> com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(java.lang.Class)'
>  -> [Help 1]{code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-opensearch] reta opened a new pull request, #15: [FLINK-31697] OpenSearch nightly CI failure

2023-04-03 Thread via GitHub


reta opened a new pull request, #15:
URL: https://github.com/apache/flink-connector-opensearch/pull/15

   There was an issue with the ArchUnit version: 1.16.1 uses 0.22.0 whereas 
1.17.0 uses 1.0.0. The versions are not compatible, so the suggestion is to 
drop explicit ArchUnit version management from the connector and rely on the 
`flink-architecture-tests-test` dependencies instead. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-30609) Add ephemeral storage to CRD

2023-04-03 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-30609:
--

Assignee: Zhenqiu Huang  (was: Prabcs)

> Add ephemeral storage to CRD
> 
>
> Key: FLINK-30609
> URL: https://issues.apache.org/jira/browse/FLINK-30609
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: starter
> Fix For: kubernetes-operator-1.5.0
>
>
> We should consider adding ephemeral storage to the existing [resource 
> specification 
> |https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/reference/#resource]in
>  CRD, next to {{cpu}} and {{memory}}
> https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31713) k8s operator should gather job version metrics

2023-04-03 Thread Jira
Márton Balassi created FLINK-31713:
--

 Summary: k8s operator should gather job version metrics
 Key: FLINK-31713
 URL: https://issues.apache.org/jira/browse/FLINK-31713
 Project: Flink
  Issue Type: New Feature
  Components: Kubernetes Operator, Runtime / Metrics
Affects Versions: kubernetes-operator-1.5.0
Reporter: Márton Balassi


Similarly to FLINK-31303, we should expose the number of times each Flink 
version is used in applications, on a per-namespace basis. Covering 
FlinkDeployments is sufficient imho (no need to dig into session jobs), as the 
main purpose is to gain visibility into the distribution of versions used and 
to nudge users along to upgrade. A rough sketch of the counting logic follows 
below.
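
A rough, self-contained sketch of the counting logic only; all names here are 
hypothetical, and the operator's actual metric registration is not shown:

{code:java}
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class VersionMetrics {
    record Deployment(String namespace, String flinkVersion) {}

    // namespace -> (flinkVersion -> number of FlinkDeployments using it)
    static Map<String, Map<String, Long>> countVersions(List<Deployment> deployments) {
        return deployments.stream()
                .collect(Collectors.groupingBy(
                        Deployment::namespace,
                        Collectors.groupingBy(
                                Deployment::flinkVersion, Collectors.counting())));
    }

    public static void main(String[] args) {
        System.out.println(countVersions(List.of(
                new Deployment("ns1", "v1_16"),
                new Deployment("ns1", "v1_16"),
                new Deployment("ns1", "v1_17"))));
    }
}
{code}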



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

