[jira] [Commented] (FLINK-26676) Set ClusterIP service type when watching specific namespaces

2022-03-26 Thread Aitozi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17512832#comment-17512832
 ] 

Aitozi commented on FLINK-26676:


Hi, what error will we get if we do not set it to ClusterIP? 

> Set ClusterIP service type when watching specific namespaces
> 
>
> Key: FLINK-26676
> URL: https://issues.apache.org/jira/browse/FLINK-26676
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
>
> As noted in this PR
> [https://github.com/apache/flink-kubernetes-operator/pull/42#issue-1159776739]
> Users get service account related error messages unless we set:
> {noformat}
> kubernetes.rest-service.exposed.type: ClusterIP{noformat}
> in cases where we are watching specific namespaces.
> We should configure this automatically unless overridden by the user in the
> flinkConfiguration for these cases.
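
For illustration only, the automatic defaulting described above could look roughly like the sketch below. The config key is the one quoted in this ticket; the class and method names are invented for the example and are not operator code:

{code:java}
import org.apache.flink.configuration.Configuration;

// Hypothetical helper (illustrative names): apply ClusterIP as the default
// rest service exposed type when only specific namespaces are watched,
// unless the user already set the option in flinkConfiguration.
public final class RestServiceTypeDefaulter {

    private static final String EXPOSED_TYPE_KEY = "kubernetes.rest-service.exposed.type";

    public static void applyDefault(Configuration flinkConfig, boolean watchingSpecificNamespaces) {
        if (watchingSpecificNamespaces && !flinkConfig.containsKey(EXPOSED_TYPE_KEY)) {
            flinkConfig.setString(EXPOSED_TYPE_KEY, "ClusterIP");
        }
    }
}
{code}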



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] hackergin commented on a change in pull request #19236: [FLINK-26712][table-planner] Metadata keys should not conflict with physical columns

2022-03-26 Thread GitBox


hackergin commented on a change in pull request #19236:
URL: https://github.com/apache/flink/pull/19236#discussion_r835849412



##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/TableScanTest.scala
##
@@ -99,6 +99,25 @@ class TableScanTest extends TableTestBase {
 util.verifyExecPlan("SELECT * FROM MetadataTable")
   }
 
+  @Test
+  def testDDLWithMetadataThatConflictsWithPhysicalColumn(): Unit = {

Review comment:
   Should the same test be done for the sink table as well? 
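
   For context, the conflict under discussion has roughly this shape; the sketch below is not taken from the PR, and the table name, column names, and the 'values' test connector are assumptions. The open question is whether the same DDL is also covered by a test when the table is used as a sink:

   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;

   public class MetadataKeyConflictSketch {
       public static void main(String[] args) {
           TableEnvironment tEnv =
                   TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

           // `timestamp` exists both as a physical column and as the key referenced by
           // the metadata column `ts`, i.e. the conflict FLINK-26712 validates against.
           tEnv.executeSql(
                   "CREATE TABLE SinkTable ("
                           + "  `timestamp` TIMESTAMP(3),"
                           + "  `ts` TIMESTAMP(3) METADATA FROM 'timestamp'"
                           + ") WITH ("
                           + "  'connector' = 'values'"
                           + ")");
       }
   }
   ```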




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Reopened] (FLINK-26789) RescaleCheckpointManuallyITCase.testCheckpointRescalingInKeyedState failed

2022-03-26 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan reopened FLINK-26789:
---

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=33776=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=5623]

 

The build above used 84723eea4e7ddae846092ca8bb0905a7b9d6dc6a, i.e. it included
c1bd957be3fe45df80602eab78f2980361df22cf.

 

Reopening

> RescaleCheckpointManuallyITCase.testCheckpointRescalingInKeyedState failed
> --
>
> Key: FLINK-26789
> URL: https://issues.apache.org/jira/browse/FLINK-26789
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.0
>Reporter: Matthias Pohl
>Assignee: Yanfei Lei
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=894=logs=0a15d512-44ac-5ba5-97ab-13a5d066c22c=9a028d19-6c4b-5a4e-d378-03fca149d0b1=5687]
>  failed due to 
> {{RescaleCheckpointManuallyITCase.testCheckpointRescalingInKeyedState}}:
> {code}
> Mar 21 17:05:32 [ERROR] 
> org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.testCheckpointRescalingInKeyedState
>   Time elapsed: 23.966 s  <<< FAILURE!
> Mar 21 17:05:32 java.lang.AssertionError: expected:<[(0,24000), (1,22500), 
> (0,34500), (1,33000), (0,21000), (0,45000), (2,31500), (2,42000), (1,6000), 
> (0,28500), (0,52500), (2,15000), (1,3000), (1,51000), (0,1500), (0,49500), 
> (2,12000), (2,6), (0,36000), (1,10500), (1,58500), (0,46500), (0,9000), 
> (0,57000), (2,19500), (2,43500), (1,7500), (1,55500), (2,3), (1,18000), 
> (0,54000), (2,40500), (1,4500), (0,16500), (2,27000), (1,39000), (2,13500), 
> (1,25500), (0,37500), (0,61500), (2,0), (2,48000)]> but was:<[(1,22500), 
> (1,33000), (0,21000), (2,18000), (1,6000), (0,20500), (0,52500), (0,15000), 
> (0,31000), (2,12000), (2,6), (0,36000), (1,58500), (1,10500), (0,46500), 
> (0,25000), (0,41000), (0,9000), (0,57000), (2,43500), (0,3), (1,4500), 
> (2,27000), (1,15000), (0,35000), (0,19000), (0,3000), (1,25500), (0,61500), 
> (2,48000), (2,0), (0,24000), (0,34500), (0,45000), (2,31500), (1,19500), 
> (2,1), (2,42000), (0,12500), (0,28500), (2,15000), (1,3000), (1,51000), 
> (0,23000), (0,49500), (0,1500), (0,33000), (0,1000), (2,19500), (1,7500), 
> (1,55500), (2,3), (1,18000), (0,6000), (0,38000), (0,54000), (2,40500), 
> (0,500), (0,16500), (1,39000), (1,7000), (0,11000), (2,13500), (0,37500)]>
> Mar 21 17:05:32   at org.junit.Assert.fail(Assert.java:89)
> Mar 21 17:05:32   at org.junit.Assert.failNotEquals(Assert.java:835)
> Mar 21 17:05:32   at org.junit.Assert.assertEquals(Assert.java:120)
> Mar 21 17:05:32   at org.junit.Assert.assertEquals(Assert.java:146)
> Mar 21 17:05:32   at 
> org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.restoreAndAssert(RescaleCheckpointManuallyITCase.java:218)
> Mar 21 17:05:32   at 
> org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.testCheckpointRescalingKeyedState(RescaleCheckpointManuallyITCase.java:122)
> Mar 21 17:05:32   at 
> org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.testCheckpointRescalingInKeyedState(RescaleCheckpointManuallyITCase.java:88)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #19235: [FLINK-26848][JDBC]write data when disable flush-interval and max-rows

2022-03-26 Thread GitBox


flinkbot edited a comment on pull request #19235:
URL: https://github.com/apache/flink/pull/19235#issuecomment-1077732821


   
   ## CI report:
   
   * 3ba06197d984e7dd7575b27cad17b208d23be500 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33778)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26873) Align the helm chart version with the flink operator

2022-03-26 Thread Aitozi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aitozi updated FLINK-26873:
---
Component/s: Kubernetes Operator

> Align the helm chart version with the flink operator
> 
>
> Key: FLINK-26873
> URL: https://issues.apache.org/jira/browse/FLINK-26873
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Aitozi
>Priority: Major
>
> Now the flink-operator helm chart version is 1.0.13. I think it should be
> aligned with the flink-operator version during a release.
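
Concretely, alignment would mean the chart's version (and appVersion) tracking the operator release instead of an independent 1.0.x counter. An illustrative Chart.yaml shape (field values and comments are examples, not the actual file):

{noformat}
apiVersion: v2
name: flink-kubernetes-operator
version: 1.0.0      # chart version, bumped together with the operator release
appVersion: 1.0.0   # operator version the chart ships
{noformat}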



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26873) Align the helm chart version with the flink operator

2022-03-26 Thread Aitozi (Jira)
Aitozi created FLINK-26873:
--

 Summary: Align the helm chart version with the flink operator
 Key: FLINK-26873
 URL: https://issues.apache.org/jira/browse/FLINK-26873
 Project: Flink
  Issue Type: Sub-task
Reporter: Aitozi


Now the flink-operator helm chart version is 1.0.13. I think it should be
aligned with the flink-operator version during a release.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26873) Align the helm chart version with the flink operator

2022-03-26 Thread Aitozi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17512748#comment-17512748
 ] 

Aitozi commented on FLINK-26873:


cc [~gyfora] 

> Align the helm chart version with the flink operator
> 
>
> Key: FLINK-26873
> URL: https://issues.apache.org/jira/browse/FLINK-26873
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Aitozi
>Priority: Major
>
> Now the flink-operator helm chart version is 1.0.13. I think it should be
> aligned with the flink-operator version during a release.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26871) Handle Session job spec change

2022-03-26 Thread Aitozi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17512747#comment-17512747
 ] 

Aitozi commented on FLINK-26871:


I will work on this 

> Handle Session job spec change 
> ---
>
> Key: FLINK-26871
> URL: https://issues.apache.org/jira/browse/FLINK-26871
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Aitozi
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Attachment: (was: image-2022-03-26-22-29-41-349.png)

> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-59-08-043.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
> !image-2022-03-26-22-59-08-043.png!
> which means the max/min network memory config must be equal.
> Is it a bug or some special purpose?
>  
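
For reference, the pair of options the reporter is asking about can be pinned explicitly; a configuration that satisfies the equality check sets both bounds to the same value, for example (the keys are the standard network memory options, the size is only a placeholder):

{noformat}
taskmanager.memory.network.min: 128mb
taskmanager.memory.network.max: 128mb
{noformat}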



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Description: 
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

!image-2022-03-26-22-59-08-043.png!

which means the max/min network memory config must be equal.

Is it a bug or some special purpose?

 

  was:
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

!image-2022-03-26-22-59-08-043.png!

which means the max/min network memory config must be equal.

Is it a bug or something else?

 


> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-29-41-349.png, 
> image-2022-03-26-22-59-08-043.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
> !image-2022-03-26-22-59-08-043.png!
> which means the max/min network memory config must be equal.
> Is it a bug or some special purpose?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Description: 
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

 

which means the max/min network memory config must be equal.

Is it a bug or something else?

 

  was:
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

!image-2022-03-26-22-31-04-214.png!

which means the max/min network memory config must be equal.

Is it a bug or something else?

 


> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-29-41-349.png, 
> image-2022-03-26-22-59-08-043.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
>  
> which means the max/min network memory config must be equal.
> Is it a bug or something else?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Description: 
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

!image-2022-03-26-22-59-08-043.png!

which means the max/min network memory config must be equal.

Is it a bug or something else?

 

  was:
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

 

which means the max/min network memory config must be equal.

Is it a bug or something else?

 


> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-29-41-349.png, 
> image-2022-03-26-22-59-08-043.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
> !image-2022-03-26-22-59-08-043.png!
> which means the max/min network memory config must be equal.
> Is it a bug or something else?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Attachment: image-2022-03-26-22-59-08-043.png

> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-29-41-349.png, 
> image-2022-03-26-22-59-08-043.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
>  
> which means the max/min network memory config must be equal.
> Is it a bug or something else?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Attachment: (was: image-2022-03-26-22-28-19-662.png)

> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-29-41-349.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
> !image-2022-03-26-22-31-04-214.png!
> which means the max/min network memory config must be equal.
> Is it a bug or something else?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Description: 
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

!image-2022-03-26-22-31-04-214.png!

which means the max/min network memory config must be equal.

Is it a bug or something else?

 

  was:
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

!image-2022-03-26-22-29-41-349.png!

which means the max/min network memory config must be equal.

Is it a bug or something else?

 


> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-29-41-349.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
> !image-2022-03-26-22-31-04-214.png!
> which means the max/min network memory config must be equal.
> Is it a bug or something else?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Attachment: image-2022-03-26-22-31-04-214.png

> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-29-41-349.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
> !image-2022-03-26-22-29-41-349.png!
> which means the max/min network memory config must be equal.
> Is it a bug or something else?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Attachment: (was: image-2022-03-26-22-31-04-214.png)

> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-29-41-349.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
> !image-2022-03-26-22-31-04-214.png!
> which means the max/min network memory config must be equal.
> Is it a bug or something else?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Description: 
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

!image-2022-03-26-22-29-41-349.png!

which means the max/min network memory config must be equal.

Is it a bug or something else?

 

  was:
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

!image-2022-03-26-22-29-41-349.png!

which means the max/min network memory config must be equal.

Is it a bug or else?

 


> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-28-19-662.png, 
> image-2022-03-26-22-29-41-349.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
> !image-2022-03-26-22-29-41-349.png!
> which means the max/min network memory config must be equal.
> Is it a bug or something else?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-ml] lindong28 commented on a change in pull request #70: [FLINK-26313] Add Transformer and Estimator of OnlineKMeans

2022-03-26 Thread GitBox


lindong28 commented on a change in pull request #70:
URL: https://github.com/apache/flink-ml/pull/70#discussion_r835771045



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/clustering/kmeans/OnlineKMeansModel.java
##
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.clustering.kmeans;
+
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.metrics.Gauge;
+import org.apache.flink.ml.api.Model;
+import org.apache.flink.ml.common.datastream.TableUtils;
+import org.apache.flink.ml.common.distance.DistanceMeasure;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.co.CoProcessFunction;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.commons.lang3.ArrayUtils;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * OnlineKMeansModel can be regarded as an advanced {@link KMeansModel} 
operator which can update
+ * model data in a streaming format, using the model data provided by {@link 
OnlineKMeans}.
+ */
+public class OnlineKMeansModel
+implements Model<OnlineKMeansModel>, 
KMeansModelParams<OnlineKMeansModel> {
+private final Map<Param<?>, Object> paramMap = new HashMap<>();
+private Table modelDataTable;
+
+public OnlineKMeansModel() {
+ParamUtils.initializeMapWithDefaultValues(paramMap, this);
+}
+
+@Override
+public OnlineKMeansModel setModelData(Table... inputs) {
+Preconditions.checkArgument(inputs.length == 1);
+modelDataTable = inputs[0];
+return this;
+}
+
+@Override
+public Table[] getModelData() {
+return new Table[] {modelDataTable};
+}
+
+@Override
+public Table[] transform(Table... inputs) {
+Preconditions.checkArgument(inputs.length == 1);
+
+StreamTableEnvironment tEnv =
+(StreamTableEnvironment) ((TableImpl) 
inputs[0]).getTableEnvironment();
+
+RowTypeInfo inputTypeInfo = 
TableUtils.getRowTypeInfo(inputs[0].getResolvedSchema());
+RowTypeInfo outputTypeInfo =
+new RowTypeInfo(
+ArrayUtils.addAll(inputTypeInfo.getFieldTypes(), 
Types.INT),
+ArrayUtils.addAll(inputTypeInfo.getFieldNames(), 
getPredictionCol()));
+
+DataStream<Row> predictionResult =
+KMeansModelData.getModelDataStream(modelDataTable)
+.broadcast()
+.connect(tEnv.toDataStream(inputs[0]))
+.process(
+new PredictLabelFunction(
+getFeaturesCol(),
+
DistanceMeasure.getInstance(getDistanceMeasure())),
+outputTypeInfo);
+
+return new Table[] {tEnv.fromDataStream(predictionResult)};
+}
+
+/** A utility function used for prediction. */
+private static class PredictLabelFunction extends 
CoProcessFunction<KMeansModelData, Row, Row> {
+private final String featuresCol;
+
+private final DistanceMeasure distanceMeasure;
+
+private DenseVector[] centroids;
+
+// TODO: replace this with a complete solution of reading first model 
data from unbounded
+// model data stream before processing the first predict data.
+private final List<Row> bufferedPoints = new ArrayList<>();

Review comment:
   The long term solution is to `read first model data from unbounded model 
data stream before 
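
   The TODO above refers to buffering predict records until the first model data element arrives and then flushing them. A standalone illustration of that pattern, in plain Java rather than Flink operator code, with every name invented for the example:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Generic sketch of "buffer until the first model snapshot arrives":
   // inputs seen before any model are parked and replayed on the first model update.
   public class BufferUntilModelReady<MODEL, IN, OUT> {

       public interface Scorer<M, I, O> {
           O score(M model, I input);
       }

       private final Scorer<MODEL, IN, OUT> scorer;
       private final List<IN> bufferedInputs = new ArrayList<>();
       private MODEL model;

       public BufferUntilModelReady(Scorer<MODEL, IN, OUT> scorer) {
           this.scorer = scorer;
       }

       /** Called for each element of the (broadcast) model stream. */
       public List<OUT> onModel(MODEL newModel) {
           this.model = newModel;
           List<OUT> flushed = new ArrayList<>();
           for (IN input : bufferedInputs) {
               flushed.add(scorer.score(model, input));
           }
           bufferedInputs.clear();
           return flushed;
       }

       /** Called for each element of the data stream; buffers until a model exists. */
       public List<OUT> onInput(IN input) {
           List<OUT> out = new ArrayList<>();
           if (model == null) {
               bufferedInputs.add(input);
           } else {
               out.add(scorer.score(model, input));
           }
           return out;
       }
   }
   ```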


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Description: 
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

!image-2022-03-26-22-29-41-349.png!

which means the max/min network memory config must be equal.

Is it a bug or else?

 

  was:
TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

!image-2022-03-26-22-28-19-662.png!

which means the max/min network memory config must be equal.

Is it a bug or else?

 


> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-28-19-662.png, 
> image-2022-03-26-22-29-41-349.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
> !image-2022-03-26-22-29-41-349.png!
> which means the max/min network memory config must be equal.
> Is it a bug or else?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-26872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴中勤 updated FLINK-26872:

Attachment: image-2022-03-26-22-29-41-349.png

> The network memory min (xx mb) and max (xx mb) mismatch
> ---
>
> Key: FLINK-26872
> URL: https://issues.apache.org/jira/browse/FLINK-26872
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.14.3, 1.14.4
>Reporter: 吴中勤
>Priority: Minor
> Attachments: image-2022-03-26-22-28-19-662.png, 
> image-2022-03-26-22-29-41-349.png
>
>
> TaskManager config: must network memory min and max be equal?
>  
> Case: run the Flink source code (Flink 1.14.3 on Win10).
> Result: StandaloneSessionEntrypoint runs ok, but
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.
> The related code is
> class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils
> line: 100
> !image-2022-03-26-22-28-19-662.png!
> which means the max/min network memory config must be equal.
> Is it a bug or else?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26872) The network memory min (xx mb) and max (xx mb) mismatch

2022-03-26 Thread Jira
吴中勤 created FLINK-26872:
---

 Summary: The network memory min (xx mb) and max (xx mb) mismatch
 Key: FLINK-26872
 URL: https://issues.apache.org/jira/browse/FLINK-26872
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Configuration
Affects Versions: 1.14.4, 1.14.3
Reporter: 吴中勤
 Attachments: image-2022-03-26-22-28-19-662.png

TaskManager config: must network memory min and max be equal?

 

Case: run the Flink source code (Flink 1.14.3 on Win10).

Result: StandaloneSessionEntrypoint runs ok, but
org.apache.flink.runtime.taskexecutor.TaskManagerRunner fails.

The related code is

class: org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils

line: 100

!image-2022-03-26-22-28-19-662.png!

which means the max/min network memory config must be equal.

Is it a bug or else?

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-ml] lindong28 commented on a change in pull request #70: [FLINK-26313] Add Transformer and Estimator of OnlineKMeans

2022-03-26 Thread GitBox


lindong28 commented on a change in pull request #70:
URL: https://github.com/apache/flink-ml/pull/70#discussion_r835764832



##
File path: 
flink-ml-lib/src/test/java/org/apache/flink/ml/util/InMemorySinkFunction.java
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.util;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
+import org.apache.flink.streaming.api.functions.sink.SinkFunction;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+
+/** A {@link SinkFunction} implementation that makes all collected records 
available for tests. */
+@SuppressWarnings({"unchecked", "rawtypes"})
+public class InMemorySinkFunction<T> extends RichSinkFunction<T> {
+private static final Map<UUID, BlockingQueue> queueMap = new 
ConcurrentHashMap<>();
+private final UUID id;
+private BlockingQueue<T> queue;
+
+public InMemorySinkFunction() {
+id = UUID.randomUUID();
+queue = new LinkedBlockingQueue();
+queueMap.put(id, queue);
+}
+
+@Override
+public void open(Configuration parameters) throws Exception {
+super.open(parameters);
+queue = queueMap.get(id);
+}
+
+@Override
+public void close() throws Exception {
+super.close();
+queueMap.remove(id);
+}
+
+@Override
+public void invoke(T value, Context context) {
+if (!queue.offer(value)) {
+throw new RuntimeException(
+"Failed to offer " + value + " to blocking queue " + id + 
".");
+}
+}
+
+public List<T> poll(int num) throws InterruptedException {
+List<T> result = new ArrayList<>();
+for (int i = 0; i < num; i++) {
+result.add(poll());
+}
+return result;
+}
+
+public T poll() throws InterruptedException {
+return poll(1, TimeUnit.MINUTES);
+}
+
+public T poll(long timeout, TimeUnit unit) throws InterruptedException {

Review comment:
   nits: would it be simpler to remove this method and move its content to 
`T poll()`?
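
   For illustration, the simplification being suggested could look like the trimmed-down standalone sketch below; the timeout handling and the exception message are assumptions, since the overload's body is not shown in this diff:

   ```java
   import java.util.ArrayList;
   import java.util.List;
   import java.util.UUID;
   import java.util.concurrent.BlockingQueue;
   import java.util.concurrent.LinkedBlockingQueue;
   import java.util.concurrent.TimeUnit;

   // Standalone sketch (not the test class itself): the (timeout, unit) overload is
   // gone and its logic lives directly in poll(), the only variant tests use.
   public class InMemoryQueueSketch<T> {

       private final UUID id = UUID.randomUUID();
       private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();

       public void add(T value) {
           queue.add(value);
       }

       public List<T> poll(int num) throws InterruptedException {
           List<T> result = new ArrayList<>();
           for (int i = 0; i < num; i++) {
               result.add(poll());
           }
           return result;
       }

       public T poll() throws InterruptedException {
           // Former poll(1, TimeUnit.MINUTES) call, with the overload inlined here.
           T value = queue.poll(1, TimeUnit.MINUTES);
           if (value == null) {
               throw new RuntimeException("Timed out polling blocking queue " + id + ".");
           }
           return value;
       }
   }
   ```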

##
File path: 
flink-ml-lib/src/test/java/org/apache/flink/ml/util/InMemorySourceFunction.java
##
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.util;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.streaming.api.functions.source.SourceFunction;
+import org.apache.flink.util.Preconditions;
+
+import java.util.Map;
+import java.util.Optional;
+import java.util.UUID;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.LinkedBlockingQueue;
+
+/** A {@link SourceFunction} implementation that can directly receive records 
from tests. */
+@SuppressWarnings({"unchecked", "rawtypes"})
+public class InMemorySourceFunction<T> extends RichSourceFunction<T> {
+private static final Map<UUID, BlockingQueue> queueMap = new 
ConcurrentHashMap<>();
+private final UUID id;
+private BlockingQueue<Optional<T>> queue;
+private volatile boolean isRunning = true;
+
+

[GitHub] [flink] flinkbot edited a comment on pull request #19235: [FLINK-26848][JDBC]write data when disable flush-interval and max-rows

2022-03-26 Thread GitBox


flinkbot edited a comment on pull request #19235:
URL: https://github.com/apache/flink/pull/19235#issuecomment-1077732821


   
   ## CI report:
   
   * fcca995c7a83260f47e5028c400ece1dc24b39f9 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33741)
 
   * 3ba06197d984e7dd7575b27cad17b208d23be500 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33778)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #19235: [FLINK-26848][JDBC]write data when disable flush-interval and max-rows

2022-03-26 Thread GitBox


flinkbot edited a comment on pull request #19235:
URL: https://github.com/apache/flink/pull/19235#issuecomment-1077732821


   
   ## CI report:
   
   * fcca995c7a83260f47e5028c400ece1dc24b39f9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33741)
 
   * 3ba06197d984e7dd7575b27cad17b208d23be500 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #19235: [FLINK-26848][JDBC]write data when disable flush-interval and max-rows

2022-03-26 Thread GitBox


flinkbot edited a comment on pull request #19235:
URL: https://github.com/apache/flink/pull/19235#issuecomment-1077732821


   
   ## CI report:
   
   * fcca995c7a83260f47e5028c400ece1dc24b39f9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33741)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #19235: [FLINK-26848][JDBC]write data when disable flush-interval and max-rows

2022-03-26 Thread GitBox


flinkbot edited a comment on pull request #19235:
URL: https://github.com/apache/flink/pull/19235#issuecomment-1077732821


   
   ## CI report:
   
   * fcca995c7a83260f47e5028c400ece1dc24b39f9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33741)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hehuiyuan edited a comment on pull request #19235: [FLINK-26848][JDBC]write data when disable flush-interval and max-rows

2022-03-26 Thread GitBox


hehuiyuan edited a comment on pull request #19235:
URL: https://github.com/apache/flink/pull/19235#issuecomment-1079697393


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hehuiyuan commented on pull request #19235: [FLINK-26848][JDBC]write data when disable flush-interval and max-rows

2022-03-26 Thread GitBox


hehuiyuan commented on pull request #19235:
URL: https://github.com/apache/flink/pull/19235#issuecomment-1079697393


   @flinkbot re run


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26871) Handle Session job spec change

2022-03-26 Thread Aitozi (Jira)
Aitozi created FLINK-26871:
--

 Summary: Handle Session job spec change 
 Key: FLINK-26871
 URL: https://issues.apache.org/jira/browse/FLINK-26871
 Project: Flink
  Issue Type: Sub-task
Reporter: Aitozi






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26870) Implement session job observer

2022-03-26 Thread Aitozi (Jira)
Aitozi created FLINK-26870:
--

 Summary: Implement session job observer
 Key: FLINK-26870
 URL: https://issues.apache.org/jira/browse/FLINK-26870
 Project: Flink
  Issue Type: Sub-task
Reporter: Aitozi






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-26864) Performance regression on 25.03.2022

2022-03-26 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17512723#comment-17512723
 ] 

Yuan Mei edited comment on FLINK-26864 at 3/26/22, 12:07 PM:
-

Is changelog enabled in the benchmark?

If not, the code path that adds the "on success/failure" action to the mailbox
should not take effect. Specifically, I mean FLINK-26592.


was (Author: ym):
Is changelog enabled in the benchmark?

If not, the code path that adds the "on success/failure" action to the mailbox
should not take effect. Specifically, I mean FLINK-26592.

 

 

> Performance regression on 25.03.2022
> 
>
> Key: FLINK-26864
> URL: https://issues.apache.org/jira/browse/FLINK-26864
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.16.0
>Reporter: Piotr Nowojski
>Assignee: Sebastian Mattheis
>Priority: Blocker
>
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=arrayKeyBy=on=on=off=2=200
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=remoteFilePartition=on=on=off=2=200
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=remoteSortPartition=on=on=off=2=200
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=tupleKeyBy=on=on=off=2=200



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-26864) Performance regression on 25.03.2022

2022-03-26 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17512723#comment-17512723
 ] 

Yuan Mei edited comment on FLINK-26864 at 3/26/22, 12:02 PM:
-

Is changelog enabled in the benchmark?

If not, the code path that adds the "on success/failure" action to the mailbox
should not take effect. Specifically, I mean FLINK-26592.

 

 


was (Author: ym):
Is changelog enabled in the benchmark?

If not, the code path that adds the "on success/failure" action to the mailbox
should not take effect.

I mean FLINK-26592.

 

 

> Performance regression on 25.03.2022
> 
>
> Key: FLINK-26864
> URL: https://issues.apache.org/jira/browse/FLINK-26864
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.16.0
>Reporter: Piotr Nowojski
>Assignee: Sebastian Mattheis
>Priority: Blocker
>
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=arrayKeyBy=on=on=off=2=200
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=remoteFilePartition=on=on=off=2=200
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=remoteSortPartition=on=on=off=2=200
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=tupleKeyBy=on=on=off=2=200



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26864) Performance regression on 25.03.2022

2022-03-26 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17512723#comment-17512723
 ] 

Yuan Mei commented on FLINK-26864:
--

Is changelog enabled in the benchmark?

If not, the code path that adds the "on success/failure" action to the mailbox
should not take effect.

I mean FLINK-26592.
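
For context, the changelog state backend is off unless the benchmark's configuration enables it explicitly, e.g. via the option below (shown only to make the question concrete):

{noformat}
state.backend.changelog.enabled: true
{noformat}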

 

 

> Performance regression on 25.03.2022
> 
>
> Key: FLINK-26864
> URL: https://issues.apache.org/jira/browse/FLINK-26864
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.16.0
>Reporter: Piotr Nowojski
>Assignee: Sebastian Mattheis
>Priority: Blocker
>
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=arrayKeyBy=on=on=off=2=200
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=remoteFilePartition=on=on=off=2=200
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=remoteSortPartition=on=on=off=2=200
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=tupleKeyBy=on=on=off=2=200



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26524) Elasticsearch (v5.3.3) sink end-to-end test

2022-03-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-26524:
---
Labels: stale-critical test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help
the community manage its development. I see this issue has been marked as
Critical but is unassigned, and neither it nor its Sub-Tasks have been
updated for 14 days. I have gone ahead and marked it "stale-critical". If this
ticket is critical, please either assign yourself or give an update.
Afterwards, please remove the label, or the issue will be deprioritized in
7 days.


> Elasticsearch (v5.3.3) sink end-to-end test
> ---
>
> Key: FLINK-26524
> URL: https://issues.apache.org/jira/browse/FLINK-26524
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.14.3
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: stale-critical, test-stability
>
> e2e test {{Elasticsearch (v5.3.3) sink end-to-end test}} failed in [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32627=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=070ff179-953e-5bda-71fa-d6599415701c=16598]
>  on {{release-1.14}} probably because of the following stacktrace showing up 
> in the logs:
> {code}
> Mar 07 15:40:41 2022-03-07 15:40:40,336 WARN  
> org.apache.flink.runtime.checkpoint.CheckpointFailureManager [] - Failed to 
> trigger checkpoint 1 for job 3a2fd4c6fb03d5b20929a6f2b7131d82. (0 consecutive 
> failed attempts so far)
> Mar 07 15:40:41 org.apache.flink.runtime.checkpoint.CheckpointException: 
> Checkpoint was declined (task is closing)
> Mar 07 15:40:41   at 
> org.apache.flink.runtime.messages.checkpoint.SerializedCheckpointException.unwrap(SerializedCheckpointException.java:51)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> Mar 07 15:40:41   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveDeclineMessage(CheckpointCoordinator.java:988)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> Mar 07 15:40:41   at 
> org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$declineCheckpoint$2(ExecutionGraphHandler.java:103)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> Mar 07 15:40:41   at 
> org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$processCheckpointCoordinatorMessage$3(ExecutionGraphHandler.java:119)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> Mar 07 15:40:41   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_322]
> Mar 07 15:40:41   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_322]
> Mar 07 15:40:41   at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
> Mar 07 15:40:41 Caused by: org.apache.flink.util.SerializedThrowable: Task 
> name with subtask : Source: Sequence Source (Deprecated) -> Flat Map -> Sink: 
> Unnamed (1/1)#0 Failure reason: Checkpoint was declined (task is closing)
> Mar 07 15:40:41   at 
> org.apache.flink.runtime.taskmanager.Task.declineCheckpoint(Task.java:1389) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> Mar 07 15:40:41   at 
> org.apache.flink.runtime.taskmanager.Task.declineCheckpoint(Task.java:1382) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> Mar 07 15:40:41   at 
> org.apache.flink.runtime.taskmanager.Task.triggerCheckpointBarrier(Task.java:1348)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> Mar 07 15:40:41   at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.triggerCheckpoint(TaskExecutor.java:956)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> Mar 07 15:40:41   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) ~[?:1.8.0_322]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25765) Kubernetes: flink's configmap and flink's actual config are out of sync

2022-03-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-25765:
---
Labels: stale-major usability  (was: usability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 60 days. I have gone ahead and added the "stale-major" label to the issue. If this 
ticket is Major, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Kubernetes: flink's configmap and flink's actual config are out of sync
> ---
>
> Key: FLINK-25765
> URL: https://issues.apache.org/jira/browse/FLINK-25765
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.15.0
>Reporter: Niklas Semmler
>Priority: Major
>  Labels: stale-major, usability
>
> For Kubernetes setups, Flink's configmap does not reflect the actual config.
> Causes
>  # Config values are overridden by the environment variables in the docker 
> image (see FLINK-25764)
>  # Flink reads the config on start-up, but does not subscribe to changes
>  # Changes to the config map do not lead to restarts of the flink cluster
> Effects
>  # Users cannot expect to understand Flink's config from the configmap
>  # TaskManager/JobManager started at different times may start with different 
> configs, if the user edits the configmap
> Related to FLINK-21383.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25859) Add documentation for DynamoDB Async Sink

2022-03-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-25859:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Add documentation for DynamoDB Async Sink
> -
>
> Key: FLINK-25859
> URL: https://issues.apache.org/jira/browse/FLINK-25859
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kinesis, Documentation
>Reporter: Yuri Gusev
>Assignee: Zichen Liu
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.16.0
>
>
> h2. Motivation
> FLINK-24229 _introduces a new sink for DynamoDB_
> *Scope:*
>  * Create documentation for the new connector
> h2. References
> More details to be found 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-17232) Rethink the implicit behavior to use the Service externalIP as the address of the Endpoint

2022-03-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-17232:
---
Labels: auto-unassigned stale-assigned  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Rethink the implicit behavior to use the Service externalIP as the address of 
> the Endpoint
> --
>
> Key: FLINK-17232
> URL: https://issues.apache.org/jira/browse/FLINK-17232
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Canbin Zheng
>Assignee: Aitozi
>Priority: Major
>  Labels: auto-unassigned, stale-assigned
>
> Currently, for the LB/NodePort type Service, if we found that the 
> {{LoadBalancer}} in the {{Service}} is null, we would use the externalIPs 
> configured in the external Service as the address of the Endpoint. Again, 
> this is another implicit toleration and may confuse the users.
> This ticket proposes to rethink the implicit toleration behaviour.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26386) CassandraConnectorITCase.testCassandraTableSink failed on azure

2022-03-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-26386:
---
Labels: stale-critical test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issues has been marked as 
Critical but is unassigned and neither itself nor its Sub-Tasks have been 
updated for 14 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be 
deprioritized.


> CassandraConnectorITCase.testCassandraTableSink failed on azure
> ---
>
> Key: FLINK-26386
> URL: https://issues.apache.org/jira/browse/FLINK-26386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.15.0, 1.13.6
>Reporter: Yun Gao
>Priority: Critical
>  Labels: stale-critical, test-stability
>
> {code:java}
> Feb 28 02:39:19 [ERROR] Tests run: 20, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 226.77 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> Feb 28 02:39:19 [ERROR] 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testCassandraTableSink
>   Time elapsed: 52.49 s  <<< ERROR!
> Feb 28 02:39:19 
> com.datastax.driver.core.exceptions.InvalidConfigurationInQueryException: 
> Keyspace flink doesn't exist
> Feb 28 02:39:19   at 
> com.datastax.driver.core.exceptions.InvalidConfigurationInQueryException.copy(InvalidConfigurationInQueryException.java:37)
> Feb 28 02:39:19   at 
> com.datastax.driver.core.exceptions.InvalidConfigurationInQueryException.copy(InvalidConfigurationInQueryException.java:27)
> Feb 28 02:39:19   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> Feb 28 02:39:19   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> Feb 28 02:39:19   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> Feb 28 02:39:19   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> Feb 28 02:39:19   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.createTable(CassandraConnectorITCase.java:391)
> Feb 28 02:39:19   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Feb 28 02:39:19   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 28 02:39:19   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 28 02:39:19   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 28 02:39:19   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Feb 28 02:39:19   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 28 02:39:19   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Feb 28 02:39:19   at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> Feb 28 02:39:19   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> Feb 28 02:39:19   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Feb 28 02:39:19   at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:192)
> Feb 28 02:39:19   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Feb 28 02:39:19   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Feb 28 02:39:19   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Feb 28 02:39:19   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Feb 28 02:39:19   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Feb 28 02:39:19   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Feb 28 02:39:19   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Feb 28 02:39:19   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Feb 28 02:39:19   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Feb 28 02:39:19   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Feb 28 02:39:19   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Feb 28 02:39:19   at 
> 

[jira] [Closed] (FLINK-26869) Querying job overview in the REST API fails

2022-03-26 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-26869.

Resolution: Duplicate

> Querying job overview in the REST API fails
> ---
>
> Key: FLINK-26869
> URL: https://issues.apache.org/jira/browse/FLINK-26869
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.14.3
>Reporter: Peihui He
>Priority: Major
>
> Hello,
> In my setup there are three servers in a standalone Flink cluster using 
> ZooKeeper HA services, running Flink 1.14.3. There is one TaskManager and one 
> JobManager on each server. ZooKeeper is running on all servers. All the 
> servers have just been started, and one simple job has been deployed.
> One of the JobManagers is the leading JobManager.
>  
> If I query (curl) the leading job manager with /v1/jobs/overview, the 
> response is correct.
> But if I query a JobManager that is not the leading JobManager, the HTTP 
> request fails and the following can be seen in the logs.
> {code:java}
> 2022-03-26 10:12:55,539 ERROR 
> org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler [] - Unhandled 
> exception.
> org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Failed to 
> serialize the result for RPC call : requestMultipleJobDetails.
>         at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.serializeRemoteResultAndVerifySize(AkkaRpcActor.java:417)
>  ~[?:?]
>         at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$sendAsyncResponse$2(AkkaRpcActor.java:373)
>  ~[?:?]
>         at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836) 
> ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 
> ~[?:1.8.0_261]
>         at 
> org.apache.flink.util.concurrent.FutureUtils$ResultConjunctFuture.handleCompletedFuture(FutureUtils.java:858)
>  ~[flink-dist_2.11-1.14.3.jar:1.14.3]
>         at 
> org.apache.flink.util.concurrent.FutureUtils$ResultConjunctFuture.lambda$new$0(FutureUtils.java:876)
>  ~[flink-dist_2.11-1.14.3.jar:1.14.3]
>         at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 
> ~[?:1.8.0_261]
>         at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258)
>  ~[?:?]
>         at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 
> ~[?:1.8.0_261]
>         at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389) 
> ~[flink-dist_2.11-1.14.3.jar:1.14.3]
>         at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
>  ~[?:?]
>         at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>  ~[?:?]
>         at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
>  ~[?:?]
>         at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 
> ~[?:1.8.0_261]
>         at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
>  ~[?:?]
>         at akka.dispatch.OnComplete.internal(Future.scala:300) ~[?:?]
>         at akka.dispatch.OnComplete.internal(Future.scala:297) ~[?:?]
>         at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224) ~[?:?]
>         at 

[jira] [Commented] (FLINK-26869) Querying job overview in the REST API fails

2022-03-26 Thread Peihui He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17512710#comment-17512710
 ] 

Peihui He commented on FLINK-26869:
---

When testing the serialization locally, it works fine:
{code:java}
import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.JobStatus;
import org.apache.flink.runtime.messages.webmonitor.JobDetails;
import org.apache.flink.runtime.messages.webmonitor.MultipleJobsDetails;
import org.apache.flink.util.InstantiationUtil;

import java.util.Collections;

public class MultipleJobsDetailsTest {

    public static void main(String[] args) {
        // Tasks per ExecutionState; index 3 (RUNNING) holds all 31 tasks.
        int[] verticesPerState = new int[] {0, 0, 0, 31, 0, 0, 0, 0, 0, 0};

        // Job details resembling what the cluster reports for the running job.
        final JobDetails running =
                new JobDetails(
                        new JobID(0, 0),
                        "running",
                        1648093977892L,
                        -1L,
                        185357815L,
                        JobStatus.RUNNING,
                        1648093991952L,
                        verticesPerState,
                        31);

        final MultipleJobsDetails expected =
                new MultipleJobsDetails(Collections.singleton(running));

        // Java serialization of the same payload succeeds locally.
        try {
            InstantiationUtil.serializeObject(expected);
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
 {code}

> Querying job overview in the REST API fails
> ---
>
> Key: FLINK-26869
> URL: https://issues.apache.org/jira/browse/FLINK-26869
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.14.3
>Reporter: Peihui He
>Priority: Major
>
> Hello,
> In my setup there are three servers in a standalone Flink cluster using 
> ZooKeeper HA services, running Flink 1.14.3. There is one TaskManager and one 
> JobManager on each server. ZooKeeper is running on all servers. All the 
> servers have just been started, and one simple job has been deployed.
> One of the JobManagers is the leading JobManager.
>  
> If I query (curl) the leading job manager with /v1/jobs/overview, the 
> response is correct.
> But if I query a JobManager that is not the leading JobManager, the HTTP 
> request fails and the following can be seen in the logs.
> {code:java}
> 2022-03-26 10:12:55,539 ERROR 
> org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler [] - Unhandled 
> exception.
> org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Failed to 
> serialize the result for RPC call : requestMultipleJobDetails.
>         at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.serializeRemoteResultAndVerifySize(AkkaRpcActor.java:417)
>  ~[?:?]
>         at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$sendAsyncResponse$2(AkkaRpcActor.java:373)
>  ~[?:?]
>         at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836) 
> ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 
> ~[?:1.8.0_261]
>         at 
> org.apache.flink.util.concurrent.FutureUtils$ResultConjunctFuture.handleCompletedFuture(FutureUtils.java:858)
>  ~[flink-dist_2.11-1.14.3.jar:1.14.3]
>         at 
> org.apache.flink.util.concurrent.FutureUtils$ResultConjunctFuture.lambda$new$0(FutureUtils.java:876)
>  ~[flink-dist_2.11-1.14.3.jar:1.14.3]
>         at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 
> ~[?:1.8.0_261]
>         at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258)
>  ~[?:?]
>         at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>  ~[?:1.8.0_261]
>         at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 
> ~[?:1.8.0_261]
>         at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389) 
> ~[flink-dist_2.11-1.14.3.jar:1.14.3]
>         at 
> 

[jira] [Created] (FLINK-26869) Querying job overview in the REST API fails

2022-03-26 Thread Peihui He (Jira)
Peihui He created FLINK-26869:
-

 Summary: Querying job overview in the REST API fails
 Key: FLINK-26869
 URL: https://issues.apache.org/jira/browse/FLINK-26869
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.14.3
Reporter: Peihui He


Hello,

In my setup there are three servers in a standalone Flink cluster using 
ZooKeeper HA services, running Flink 1.14.3. There is one TaskManager and one 
JobManager on each server. ZooKeeper is running on all servers. All the servers 
have just been started, and one simple job has been deployed.

One of the JobManagers is the leading JobManager.

 

If I query (curl) the leading job manager with /v1/jobs/overview, the response 
is correct.

But if I query a JobManager that is not the leading JobManager, the HTTP 
request fails and the following can be seen in the logs.
{code:java}
2022-03-26 10:12:55,539 ERROR 
org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler [] - Unhandled 
exception.
org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Failed to 
serialize the result for RPC call : requestMultipleJobDetails.
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.serializeRemoteResultAndVerifySize(AkkaRpcActor.java:417)
 ~[?:?]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$sendAsyncResponse$2(AkkaRpcActor.java:373)
 ~[?:?]
        at 
java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836) 
~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
 ~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 
~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 
~[?:1.8.0_261]
        at 
org.apache.flink.util.concurrent.FutureUtils$ResultConjunctFuture.handleCompletedFuture(FutureUtils.java:858)
 ~[flink-dist_2.11-1.14.3.jar:1.14.3]
        at 
org.apache.flink.util.concurrent.FutureUtils$ResultConjunctFuture.lambda$new$0(FutureUtils.java:876)
 ~[flink-dist_2.11-1.14.3.jar:1.14.3]
        at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
 ~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
 ~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 
~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 
~[?:1.8.0_261]
        at 
org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258)
 ~[?:?]
        at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
 ~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
 ~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 
~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 
~[?:1.8.0_261]
        at 
org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389) 
~[flink-dist_2.11-1.14.3.jar:1.14.3]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
 ~[?:?]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[?:?]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
 ~[?:?]
        at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
 ~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
 ~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 
~[?:1.8.0_261]
        at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) 
~[?:1.8.0_261]
        at 
org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
 ~[?:?]
        at akka.dispatch.OnComplete.internal(Future.scala:300) ~[?:?]
        at akka.dispatch.OnComplete.internal(Future.scala:297) ~[?:?]
        at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224) ~[?:?]
        at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221) ~[?:?]
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60) 
~[flink-dist_2.11-1.14.3.jar:1.14.3]
        at 
org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
 ~[?:?]
        at 
scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68) 
~[flink-dist_2.11-1.14.3.jar:1.14.3]
     

[jira] [Commented] (FLINK-26718) Limitations of flink+hive dimension table

2022-03-26 Thread kunghsu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17512700#comment-17512700
 ] 

kunghsu commented on FLINK-26718:
-

[~luoyuxia] thanks. I tried it with 'streaming-source.partition.include' = 'all'. 
It can load the 25 million rows, but the query is very slow (it takes about 10 
minutes), and I found that it uses RocksDB. Since my query scenario requires a 
full table scan, dimension tables are probably not a good fit for it.
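
For reference, here is a minimal sketch of the hive lookup-join pattern that 
'streaming-source.partition.include' = 'all' applies to. It is not taken from the 
original job: the catalog name, table names, columns, and hive-conf path are all 
assumptions, the orders table is assumed to have a proc_time PROCTIME() column, 
and API details may differ slightly on 1.12.
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveLookupJoinSketch {

    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inStreamingMode().build());

        // Register the Hive catalog that holds the dimension table (names/paths are made up).
        tEnv.registerCatalog("myhive", new HiveCatalog("myhive", "default", "/opt/hive-conf"));
        tEnv.useCatalog("myhive");

        // Processing-time lookup join against the whole hive table. With
        // 'streaming-source.partition.include' = 'all' the full table backs the
        // lookup, which is where the memory pressure described below comes from.
        tEnv.executeSql(
                "SELECT o.order_id, o.user_id, u.user_name "
                        + "FROM orders AS o "
                        + "JOIN user_dim /*+ OPTIONS("
                        + "'streaming-source.enable' = 'false', "
                        + "'streaming-source.partition.include' = 'all', "
                        + "'lookup.join.cache.ttl' = '12 h') */ "
                        + "FOR SYSTEM_TIME AS OF o.proc_time AS u "
                        + "ON o.user_id = u.user_id");
    }
}
{code}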

> Limitations of flink+hive dimension table
> -
>
> Key: FLINK-26718
> URL: https://issues.apache.org/jira/browse/FLINK-26718
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.7
>Reporter: kunghsu
>Priority: Major
>  Labels: HIVE
>
> Limitations of flink+hive dimension table
> The scenario I am involved in is a join relationship between the Kafka input 
> table and the Hive dimension table. The hive dimension table is some user 
> data, and the data is very large.
> When the data volume of the hive table is small (a few hundred rows), 
> everything is normal: the partition is automatically recognized and the 
> whole job runs fine.
> When the hive table reached about 1.3 million rows, the TaskManager began to 
> fail. It was very difficult to even look at the logs. I guess it blew up the 
> JVM memory when it tried to load the entire table into memory. You can see a 
> heartbeat timeout exception in the TaskManager, such as Heartbeat 
> TimeoutException. I even increased the parallelism, to no avail.
> Official website documentation: 
> [https://nightlies.apache.org/flink/flink-docs-release-1.12/dev/table/connectors/hive/hive_read_write.html#source-parallelism-inference]
> So I have a question: does flink+hive not support joining against large 
> tables yet?
> Is this solution unusable when the amount of data is too large?
>  
>  
>  
> As a rough estimate, how much memory do 25 million rows take up?
> Suppose a row is 1 KB; 25 million KB is about 25,000 MB, i.e. 25 GB.
> If the memory of the TM is set to 32 GB, can the problem be solved?
> That does not seem to work either, because only roughly 16 GB of that can be 
> allocated to the JVM.
> Assuming the official solution can support such a large amount, how should 
> the TM memory be set?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #19248: Update JobLeaderIdService.java

2022-03-26 Thread GitBox


flinkbot edited a comment on pull request #19248:
URL: https://github.com/apache/flink/pull/19248#issuecomment-1079581419


   
   ## CI report:
   
   * 282c6ca968612dc277ed663dd7dba0d0f63d11a2 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=33775)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-26075) Persist per-ExecNode configuration

2022-03-26 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-26075.

Fix Version/s: 1.15.0
   Resolution: Fixed

Fixed in master:

commit 84723eea4e7ddae846092ca8bb0905a7b9d6dc6a
[table-planner][test] Regenerate JSON plans

commit bc11450bbf2519924e354af7900174e7bd6c33e3
[table-planner] Persist node configuration to JSON plan

Fixed in 1.15:
commit ec8e43d584054d3b9dc7783a4646f51cd81bcc10
[table-planner][test] Regenerate JSON plans

commit 00c1439bc3cc277fc37e39d084c4f1bc14ecf5b7
[table-planner] Persist node configuration to JSON plan


> Persist per-ExecNode configuration
> --
>
> Key: FLINK-26075
> URL: https://issues.apache.org/jira/browse/FLINK-26075
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Timo Walther
>Assignee: Marios Trivyzas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Even though a compiled plan is static, some configuration options still 
> change the topology of an ExecNode. In general, we will request users to keep 
> the configuration constant between Flink versions. However, setting 
> configuration more fine-grained per-ExecNode is a frequently requested 
> feature. It can also allow us to set the parallelism more fine-grained in the 
> future.
> We need the following infrastructure for the mentioned use cases:
> - Every ExecNode can have a configuration
> - By default, the per-node configuration consists of the values from the global 
> configuration, using the keys from the ExecNodeMetadata annotation.
> - We persist the ExecNode configuration in the JSON plan.
> - If the persisted plan contains a configuration, the persisted configuration 
> is merged with the global configuration and has priority.
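
As a hedged illustration of the last point (plain Configuration objects only; the 
actual planner classes and the JSON plan format are intentionally not reproduced 
here, and the option key is chosen purely for illustration):
{code:java}
import org.apache.flink.configuration.Configuration;

public class NodeConfigPrecedenceSketch {

    public static void main(String[] args) {
        // Global (flink-conf / TableConfig) value.
        Configuration global = new Configuration();
        global.setString("table.exec.sink.not-null-enforcer", "ERROR");

        // Value restored from the persisted JSON plan for one ExecNode.
        Configuration persistedPerNode = new Configuration();
        persistedPerNode.setString("table.exec.sink.not-null-enforcer", "DROP");

        // Merge: start from the global configuration, then let the persisted
        // per-node values override it, i.e. they "have priority".
        Configuration effective = new Configuration();
        effective.addAll(global);
        effective.addAll(persistedPerNode);

        // Prints DROP, the per-node value.
        System.out.println(effective.getString("table.exec.sink.not-null-enforcer", "ERROR"));
    }
}
{code}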



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] twalthr merged pull request #19247: [FLINK-26075][table-api][table-planner] Persist and use node configuration (1.15)

2022-03-26 Thread GitBox


twalthr merged pull request #19247:
URL: https://github.com/apache/flink/pull/19247


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr closed pull request #19232: [FLINK-26075][table-api][table-planner] Persist and use node configuration

2022-03-26 Thread GitBox


twalthr closed pull request #19232:
URL: https://github.com/apache/flink/pull/19232


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on a change in pull request #18386: [FLINK-25684][table] Support enhanced `show databases` syntax

2022-03-26 Thread GitBox


RocMarshal commented on a change in pull request #18386:
URL: https://github.com/apache/flink/pull/18386#discussion_r835724413



##
File path: 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/dql/SqlShowDatabases.java
##
@@ -49,8 +70,44 @@ public SqlOperator getOperator() {
 return Collections.EMPTY_LIST;
 }
 
+public String getCatalogName() {
+return Objects.isNull(this.catalogName) ? null : 
catalogName.getSimple();
+}
+
+public boolean isNotLike() {
+return notLike;
+}
+
+public String getPreposition() {
+return preposition;
+}
+
+public String getLikeSqlPattern() {
+return Objects.isNull(this.likeLiteral) ? null : 
likeLiteral.getValueAs(String.class);
+}
+
+public SqlCharStringLiteral getLikeLiteral() {
+return likeLiteral;
+}
+
+public boolean isWithLike() {
+return Objects.nonNull(likeLiteral);
+}
+
 @Override
 public void unparse(SqlWriter writer, int leftPrec, int rightPrec) {
-writer.keyword("SHOW DATABASES");
+if (this.preposition == null) {
+writer.keyword("SHOW DATABASES");
+} else if (catalogName != null) {
+writer.keyword("SHOW DATABASES " + this.preposition);
+catalogName.unparse(writer, leftPrec, rightPrec);
+}
+if (likeLiteral != null) {
+if (notLike) {
+writer.keyword(String.format("NOT LIKE '%s'", 
getLikeSqlPattern()));

Review comment:
   Without the QUOTED_STRING after `LIKE`, the parser would raise an error like 
   `Was expecting: ...`.
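
   For context, a quick sketch of the syntax variants this change appears to cover, as read from the `unparse` method above. The authoritative grammar lives in the PR's parser templates; `cat1` and the `db%` pattern are made-up examples, and the catalog must already be registered.
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;

   public class ShowDatabasesSyntaxSketch {
       public static void main(String[] args) {
           TableEnvironment tEnv =
                   TableEnvironment.create(
                           EnvironmentSettings.newInstance().inStreamingMode().build());
           tEnv.executeSql("SHOW DATABASES").print();                       // existing form
           tEnv.executeSql("SHOW DATABASES FROM cat1").print();             // needs catalog 'cat1'
           tEnv.executeSql("SHOW DATABASES IN cat1 LIKE 'db%'").print();
           tEnv.executeSql("SHOW DATABASES FROM cat1 NOT LIKE 'db%'").print();
       }
   }
   ```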




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org