[jira] [Created] (FLINK-26938) HybridSource recovery from savepoint fails When flink parallelism is greater than the number of Kafka partitions

2022-03-30 Thread Jira
文报 created FLINK-26938:
--

 Summary: HybridSource recovery from savepoint fails When flink 
parallelism is greater than the number of Kafka partitions
 Key: FLINK-26938
 URL: https://issues.apache.org/jira/browse/FLINK-26938
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.14.0
 Environment: Flink 1.14.0
Reporter: 文报
 Attachments: image-2022-03-31-13-25-28-520.png

First test

Flink job before savepoint:
    flink parallelism = 16
    kafka partitions = 3
Flink job after savepoint:
case 1:
    flink parallelism = 16
    kafka partitions = 3

HybridSource recovery from savepoint fails
!image-2022-03-31-11-12-56-562.png!


case 2:
    flink parallelism = 3
    kafka partitions = 3
HybridSource recovery from savepoint successful

case 3:
    flink parallelism = 8
    kafka partitions = 3
HybridSource recovery from savepoint fails with the same NullPointerException:
Source for index=0 not available

case 4:
    flink parallelism = 4
    kafka partitions = 3
HybridSource recovery from savepoint fails with the same NullPointerException:
Source for index=0 not available

case 5:
    flink parallelism = 1
    kafka partitions = 3
HybridSource recovery from savepoint successful

Second test

Flink job before savepoint:
    flink parallelism = 3
    kafka partitions = 3
Flink job after savepoint:
case 1:
    flink parallelism = 3
    kafka partitions = 3
HybridSource recovery from savepoint successful

case 2:
    flink parallelism = 1
    kafka partitions = 3
HybridSource recovery from savepoint successful

case 3:
    flink parallelism = 4
    kafka partitions = 3
HybridSource recovery from savepoint fails with the same NullPointerException:
Source for index=0 not available

See the attached test code (HybridSourceTest) for the specific code.
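The failing cases above share one property: the Flink parallelism exceeds the Kafka partition count, so some source subtasks restore with no split at all. A standalone sketch (hypothetical illustration, not actual Flink code) of the assignment arithmetic:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration: distribute Kafka partition splits over N subtasks
// the way a round-robin enumerator would. Every subtask with index >= partition
// count receives no split, which matches the observed pattern that recovery
// fails exactly when parallelism > partitions.
public class SplitAssignmentSketch {
    static Map<Integer, List<Integer>> assign(int partitions, int parallelism) {
        Map<Integer, List<Integer>> bySubtask = new HashMap<>();
        for (int p = 0; p < partitions; p++) {
            bySubtask.computeIfAbsent(p % parallelism, k -> new ArrayList<>()).add(p);
        }
        return bySubtask;
    }

    static int emptySubtasks(int partitions, int parallelism) {
        return parallelism - assign(partitions, parallelism).size();
    }

    public static void main(String[] args) {
        // parallelism 3, partitions 3: every subtask holds a split -> recovery succeeds
        System.out.println(emptySubtasks(3, 3));  // 0
        // parallelism 16, partitions 3: 13 subtasks restore without any split state
        System.out.println(emptySubtasks(3, 16)); // 13
    }
}
```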

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26937) Introduce Leveled compression for LSM

2022-03-30 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-26937:


 Summary: Introduce Leveled compression for LSM
 Key: FLINK-26937
 URL: https://issues.apache.org/jira/browse/FLINK-26937
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Reporter: Jingsong Lee
 Fix For: table-store-0.1.0


Currently ORC uses ZLIB compression by default for all files. However, files at
level 0 will definitely be rewritten, so we can use different compression for
different levels.
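The idea can be sketched as a per-level codec policy (a minimal sketch under the assumption that a fast codec such as LZ4 is preferred for short-lived level-0 files; the names are illustrative, not Table Store API):

```java
// Hypothetical policy: level-0 files are rewritten by compaction soon after
// being written, so prefer encoding speed there; deeper, long-lived levels
// keep the high-ratio default (ZLIB).
public class LeveledCompressionSketch {
    static String codecForLevel(int level) {
        return level == 0 ? "LZ4" : "ZLIB";
    }

    public static void main(String[] args) {
        for (int level = 0; level <= 3; level++) {
            System.out.println("level " + level + " -> " + codecForLevel(level));
        }
    }
}
```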



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26936) Pushdown watermark to table store source

2022-03-30 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-26936:


 Summary: Pushdown watermark to table store source
 Key: FLINK-26936
 URL: https://issues.apache.org/jira/browse/FLINK-26936
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Reporter: Jingsong Lee
 Fix For: table-store-0.1.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26935) Sort records by sequence number when continuous reading

2022-03-30 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-26935:


 Summary: Sort records by sequence number when continuous reading
 Key: FLINK-26935
 URL: https://issues.apache.org/jira/browse/FLINK-26935
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Reporter: Jingsong Lee
 Fix For: table-store-0.1.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [VOTE] Apache Flink Kubernetes Operator Release 0.1.0, release candidate #3

2022-03-30 Thread Márton Balassi
+1 (binding)

Verified the following:

   - shasums
   - gpg signatures
   - source does not contain any binaries
   - built from source
   - deployed via helm after adding the distribution webserver endpoint as
   a helm registry
   - all relevant files have license headers


On Wed, Mar 30, 2022 at 4:39 PM Gyula Fóra  wrote:

> Hi everyone,
>
> Please review and vote on the release candidate #3 for the version 0.1.0 of
> Apache Flink Kubernetes Operator,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> **Release Overview**
>
> As an overview, the release consists of the following:
> a) Kubernetes Operator canonical source distribution (including the
> Dockerfile), to be deployed to the release repository at dist.apache.org
> b) Kubernetes Operator Helm Chart to be deployed to the release repository
> at dist.apache.org
> c) Maven artifacts to be deployed to the Maven Central Repository
> d) Docker image to be pushed to dockerhub
>
> **Staging Areas to Review**
>
> The staging areas containing the above mentioned artifacts are as follows,
> for your review:
> * All artifacts for a,b) can be found in the corresponding dev repository
> at dist.apache.org [1]
> * All artifacts for c) can be found at the Apache Nexus Repository [2]
> * The docker image is staged on github [7]
>
> All artifacts are signed with the key
> 0B4A34ADDFFA2BB54EB720B221F06303B87DAFF1 [3]
>
> Other links for your review:
> * JIRA release notes [4]
> * source code tag "release-0.1.0-rc3" [5]
> * PR to update the website Downloads page to include Kubernetes Operator
> links [6]
>
> **Vote Duration**
>
> The voting time will run for at least 72 hours.
> It is adopted by majority approval, with at least 3 PMC affirmative votes.
>
> **Note on Verification**
>
> You can follow the basic verification guide here
> <
> https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Kubernetes+Operator+Release
> >
> .
> Note that you don't need to verify everything yourself, but please make
> note of what you have tested together with your +- vote.
>
> Thanks,
> Gyula
>
> [1]
>
> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-0.1.0-rc3/
> [2]
> https://repository.apache.org/content/repositories/orgapacheflink-1492/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12351499
> [5]
> https://github.com/apache/flink-kubernetes-operator/tree/release-0.1.0-rc3
> [6] https://github.com/apache/flink-web/pull/519
> [7] ghcr.io/apache/flink-kubernetes-operator:2c166e3
>


[jira] [Created] (FLINK-26934) Add reference to operator meetup talk to docs

2022-03-30 Thread Jira
Márton Balassi created FLINK-26934:
--

 Summary: Add reference to operator meetup talk to docs
 Key: FLINK-26934
 URL: https://issues.apache.org/jira/browse/FLINK-26934
 Project: Flink
  Issue Type: Sub-task
Reporter: Márton Balassi
Assignee: Márton Balassi


Based on the community feedback, let us add [~matyas]'s talk recording to the
docs main page.

https://www.youtube.com/watch?v=bedzs9jBUfc=2121s



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-connector-elasticsearch] AHeise merged pull request #3: [hotfix] Change email/repository notifications to match with Flink Core

2022-03-30 Thread GitBox


AHeise merged pull request #3:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/3


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26933) FlinkKinesisConsumer incorrectly determines shards as newly discovered when tested against Kinesalite when consuming DynamoDB streams

2022-03-30 Thread Arian Rohani (Jira)
Arian Rohani created FLINK-26933:


 Summary: FlinkKinesisConsumer incorrectly determines shards as 
newly discovered when tested against Kinesalite when consuming DynamoDB streams
 Key: FLINK-26933
 URL: https://issues.apache.org/jira/browse/FLINK-26933
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis
Reporter: Arian Rohani


This ticket is related to https://issues.apache.org/jira/browse/FLINK-5075

Kinesalite (a mock implementation of Kinesis) does not take
exclusiveShardStartId into account when performing a DescribeStream operation.
This causes the FlinkKinesisConsumer to resubscribe to already-subscribed
shards and reconsume their records when consuming a DynamoDB stream. A fix was
implemented for Kinesis streams inside the listShards(...) method, but
this logic (see
[here|https://github.com/apache/flink/blame/b2ca390d478aa855eb0f2028d0ed965803a98af1/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/proxy/KinesisProxy.java#L568])
 is not executed when connecting to a DynamoDB stream.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [DISCUSS] FLIP-216 Decouple Hive connector with Flink planner

2022-03-30 Thread Francesco Guardiani
Sorry, I replied on the wrong thread, so I repost my answer here :)

As there was already a discussion in the doc, I'll just summarize my
opinions here on the proposed execution of this FLIP.

I think we should avoid exposing internal details (which I consider Calcite to
be part of) and instead reuse what we already have to define an AST from the
Table API, which is what I'll refer to in this mail as the Operation tree.

First of all, the reason I think this FLIP is not a good idea is that it
proposes to expose types outside of our control, hence an API we cannot control
and may realistically never be able to stabilize. A Calcite bump in the
table project is already pretty hard today, as shown by tasks like
https://github.com/apache/flink/pull/13577. This will make them even
harder. Essentially it will couple us to Calcite even more, and create a
different but still big maintenance/complexity burden we would like to get
rid of with this FLIP.

There are also some technical aspects that seem to me a bit overlooked here:

* What about Scala? Is flink-table-planner-spi going to be a Scala module
with the related suffix? I see you want to expose a couple of types
which we have implemented in Scala right now, and making this module
Scala-dependent makes it even more complicated to ship both the modules that
use it and flink-table-planner-loader.
* Are you sure exposing the Calcite interfaces is going to be enough? Don't
you also require some instance-specific methods, e.g.
FlinkTypeFactory#toLogicalType? What if at some point you need to expose
something like FlinkTypeFactory? How do you plan to support it and
stabilize it in the long term?

Now let me talk a bit about the Operation tree. For those who don't know what
it is, it's the pure Flink AST for defining DML, used for converting the
Table API DSL to an AST the planner can manipulate. Essentially, it's our
own version of the RelNode tree/RexNode tree. This Operation tree can already
be used by Hive, without any API changes on the Table side. You just need
a downcast of TableEnvironmentImpl to use getPlanner() and then use
Planner#translate, or alternatively you can add getPlanner to
TableEnvironmentInternal directly. From what I've seen of your use case,
and please correct me if I'm wrong, you can implement your SQL -> Operation
tree layer without substantial changes on either side.

The reason why I think this is a better idea, rather than exposing Calcite
and RelNodes directly, is:

* Aforementioned downsides of exposing Calcite
* It doesn't require a new API to get you started with it
* Doesn't add complexity on planner side, just removes it from the existing
coupling with hive
* Letting another project use the Operation tree will harden it, make it
more stable, and eventually lead to it becoming public

The last point in particular is extremely interesting for the future of the
project, as having a stable public Operation tree will allow people to
implement other relational APIs on top of Flink SQL, or manipulate
the AST to define new semantics, or even crazier things we can't think
of right now, leading to a broader and more diverse ecosystem. Which
is exactly what Hive is doing right now at the end of the day: defining a new
relational API on top of the Flink Table planner functionalities.

On Wed, Mar 30, 2022 at 4:45 PM 罗宇侠(莫辞)
 wrote:

> Hi, I would like to explain a bit more about the current discussion[1] on
> the ways to decouple the Hive connector from the Flink planner.
>
> The background: to support the Hive dialect, the Hive connector depends on
> the Flink planner, because the current implementation generates RelNodes and
> then delivers them to Flink.
> But this also brings much complexity and maintenance burden, so we want to
> decouple the Hive connector from the Flink planner.
>
> There are two ways to do that:
> 1. Make the Hive parser generate an Operation tree, just like the Table API
> currently does.
>
> 2. Introduce a slim module called table-planner-spi that provides the Calcite
> dependency and exposes limited public interfaces. Then, still generating
> Calcite RelNodes, the Hive connector will only require the slim module.
>
> The first way is the ideal one and we should go in that direction. But it
> will take much effort, since it requires rewriting all the code for the Hive
> dialect, and it's hard to do in one shot.
> And given we want to move the Hive connector out in 1.16, it's more practical
> to decouple first and then migrate to the Operation tree.
> So, FLIP-216[2] is for the second way. It explains the public
> interfaces to be exposed, all of which have been implemented by
> PlannerContext.
>
> [1]
> https://docs.google.com/document/d/1LMQ_mWfB_mkYkEBCUa2DgCO2YdtiZV7YRs2mpXyjdP4/edit?usp=sharing
> [2]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-216%3A+Decouple+Hive+connector+with+Flink+planner
>
> Best regards,
> Yuxia
> --
> From: 罗宇侠(莫辞)
> Date: 2022-03-25 20:41:15
> 

[jira] [Created] (FLINK-26932) TaskManager hung in cleanupAllocationBaseDirs not exit.

2022-03-30 Thread huweihua (Jira)
huweihua created FLINK-26932:


 Summary: TaskManager hung in cleanupAllocationBaseDirs not exit.
 Key: FLINK-26932
 URL: https://issues.apache.org/jira/browse/FLINK-26932
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Reporter: huweihua


The disk used by the TaskManager had a fatal error, and the TaskManager then
hung in cleanupAllocationBaseDirs, blocking the main thread.
 
So this TaskManager would not respond to the
cancelTask/disconnectResourceManager requests.
 
At the same time, the JobMaster already considers this TaskManager lost and
schedules its tasks to other TaskManagers.
 
This may leave some unexpected tasks running.
 
After checking the TaskManager log: the TM has already lost its connection with
the ResourceManager and keeps trying to re-register. The RegistrationTimeout
cannot take effect because the main thread of the TaskManager is hung.
 
I think there are two options to handle it:

1. Add a timeout for
TaskExecutorLocalStateStoreManager.cleanupAllocationBaseDirs. But I am afraid
some other methods could block the main thread too.

2. Move the registrationTimeout to another thread; we would need to deal with
the concurrency problem.
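Option 1 can be sketched with plain JDK primitives (an assumption about the shape of the fix, not the actual Flink change): run the cleanup on a separate executor and bound the wait from the main thread, so a hung disk cannot block the TaskManager's main thread forever.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CleanupWithTimeoutSketch {
    // Returns true if the cleanup finished within the timeout.
    static boolean runWithTimeout(Runnable cleanup, long timeoutMillis) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(cleanup);
        try {
            future.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            // Give up waiting; note the worker thread may stay stuck on
            // uninterruptible disk I/O, but the caller thread is freed.
            future.cancel(true);
            return false;
        } catch (InterruptedException | ExecutionException e) {
            return false;
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) {
        boolean fastOk = runWithTimeout(() -> {}, 1_000);
        boolean slowOk = runWithTimeout(() -> {
            try { Thread.sleep(5_000); } catch (InterruptedException ignored) {}
        }, 100);
        System.out.println(fastOk + " " + slowOk); // true false
    }
}
```

As the ticket notes, this only protects the caller; the worker thread itself can still be stuck, which is why Option 2 (moving the registration timeout off the main thread) is also on the table.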



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [DISCUSS] FLIP-216 Decouple Hive connector with Flink planner

2022-03-30 Thread 罗宇侠(莫辞)
Hi, I would like to explain a bit more about the current discussion[1] on the
ways to decouple the Hive connector from the Flink planner.

The background: to support the Hive dialect, the Hive connector depends on the
Flink planner, because the current implementation generates RelNodes and then
delivers them to Flink.
But this also brings much complexity and maintenance burden, so we want to
decouple the Hive connector from the Flink planner.

There are two ways to do that:
1. Make the Hive parser generate an Operation tree, just like the Table API
currently does.

2. Introduce a slim module called table-planner-spi that provides the Calcite
dependency and exposes limited public interfaces. Then, still generating Calcite
RelNodes, the Hive connector will only require the slim module.

The first way is the ideal one and we should go in that direction. But it will
take much effort, since it requires rewriting all the code for the Hive dialect,
and it's hard to do in one shot.
And given we want to move the Hive connector out in 1.16, it's more practical to
decouple first and then migrate to the Operation tree.
So, FLIP-216[2] is for the second way. It explains the public interfaces to
be exposed, all of which have been implemented by PlannerContext.

[1] 
https://docs.google.com/document/d/1LMQ_mWfB_mkYkEBCUa2DgCO2YdtiZV7YRs2mpXyjdP4/edit?usp=sharing
[2] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-216%3A+Decouple+Hive+connector+with+Flink+planner

Best regards,
Yuxia
--
From: 罗宇侠(莫辞)
Date: 2022-03-25 20:41:15
To: dev@flink.apache.org
Subject: [DISCUSS] FLIP-216 Decouple Hive connector with Flink planner

Hi, everyone

I would like to open a discussion about decoupling the Hive connector from the
Flink table planner. It's a follow-up to the Hive syntax discussion[1],
but focuses only on how to decouple the Hive connector. The original doc is
here[2], where you can see the detailed work and the heated discussion about
exposing the Calcite API versus reusing the Operation tree to decouple.
I have created FLIP-216: Decouple Hive connector with Flink planner[3] for it.

Thanks and looking forward to a lively discussion!

[1] https://lists.apache.org/thread/2w046dwl46tf2wy750gzmt0qrcz17z8t
[2] 
https://docs.google.com/document/d/1LMQ_mWfB_mkYkEBCUa2DgCO2YdtiZV7YRs2mpXyjdP4/edit?usp=sharing
[3] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-216%3A+Decouple+Hive+connector+with+Flink+planner

Best regards,
Yuxia


[VOTE] Apache Flink Kubernetes Operator Release 0.1.0, release candidate #3

2022-03-30 Thread Gyula Fóra
Hi everyone,

Please review and vote on the release candidate #3 for the version 0.1.0 of
Apache Flink Kubernetes Operator,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

**Release Overview**

As an overview, the release consists of the following:
a) Kubernetes Operator canonical source distribution (including the
Dockerfile), to be deployed to the release repository at dist.apache.org
b) Kubernetes Operator Helm Chart to be deployed to the release repository
at dist.apache.org
c) Maven artifacts to be deployed to the Maven Central Repository
d) Docker image to be pushed to dockerhub

**Staging Areas to Review**

The staging areas containing the above mentioned artifacts are as follows,
for your review:
* All artifacts for a,b) can be found in the corresponding dev repository
at dist.apache.org [1]
* All artifacts for c) can be found at the Apache Nexus Repository [2]
* The docker image is staged on github [7]

All artifacts are signed with the key
0B4A34ADDFFA2BB54EB720B221F06303B87DAFF1 [3]

Other links for your review:
* JIRA release notes [4]
* source code tag "release-0.1.0-rc3" [5]
* PR to update the website Downloads page to include Kubernetes Operator
links [6]

**Vote Duration**

The voting time will run for at least 72 hours.
It is adopted by majority approval, with at least 3 PMC affirmative votes.

**Note on Verification**

You can follow the basic verification guide here

.
Note that you don't need to verify everything yourself, but please make
note of what you have tested together with your +- vote.

Thanks,
Gyula

[1]
https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-0.1.0-rc3/
[2] https://repository.apache.org/content/repositories/orgapacheflink-1492/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12351499
[5]
https://github.com/apache/flink-kubernetes-operator/tree/release-0.1.0-rc3
[6] https://github.com/apache/flink-web/pull/519
[7] ghcr.io/apache/flink-kubernetes-operator:2c166e3


[GitHub] [flink-connector-elasticsearch] MartijnVisser opened a new pull request #3: [hotfix] Change email/repository notifications to match with Flink Core

2022-03-30 Thread GitBox


MartijnVisser opened a new pull request #3:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/3


   Matching https://github.com/apache/flink/blob/master/.asf.yaml


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-connector-elasticsearch] MartijnVisser commented on pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo

2022-03-30 Thread GitBox


MartijnVisser commented on pull request #2:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/2#issuecomment-1083167678


   > > @JingGe Thanks for this! Should we also be able to run `mvn clean
install`? I've tried that, but I'm getting some errors:
   > 
   > looks like a Java version issue with the `InaccessibleObjectException`. 
Which Java version did you use while running the script?
   
   ```
   java -version
   
   openjdk version "1.8.0_322"
   OpenJDK Runtime Environment (Temurin)(build 1.8.0_322-b06)
   OpenJDK 64-Bit Server VM (Temurin)(build 25.322-b06, mixed mode)
   ```
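A side note: `InaccessibleObjectException` can only be thrown on JDK 9+, so the failing build was likely picking up a newer JDK than the `java -version` output shows (for example via `JAVA_HOME` or Maven toolchains). A common workaround on newer JDKs is to open the offending package to unnamed modules; this is a generic JVM workaround, an assumption here rather than a confirmed fix for this repository:

```shell
# Check which JDK Maven actually uses (it may differ from `java -version`):
mvn -version

# Generic workaround on JDK 16+: open java.lang to reflection-heavy test code.
export MAVEN_OPTS="--add-opens java.base/java.lang=ALL-UNNAMED"
mvn clean install
```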


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-connector-elasticsearch] JingGe commented on pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo

2022-03-30 Thread GitBox


JingGe commented on pull request #2:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/2#issuecomment-1083166099


   > @JingGe Thanks for this! Should we also be able to run `mvn clean
install`? I've tried that, but I'm getting some errors:
   
   looks like a Java version issue with the `InaccessibleObjectException`. 
Which Java version did you use while running the script? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-connector-elasticsearch] MartijnVisser edited a comment on pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo

2022-03-30 Thread GitBox


MartijnVisser edited a comment on pull request #2:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/2#issuecomment-1083152212


   @JingGe Thanks for this! Should we also be able to run `mvn clean install`?
I've tried that, but I'm getting some errors:
   
   ```
   [INFO] Running 
org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase
   [INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
20.075 s - in 
org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase
   [INFO] Running 
org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase
   [ERROR] Tests run: 4, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 
14.291 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase
   [ERROR] 
org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase.testWritingDocumentsNoPrimaryKey
  Time elapsed: 2.69 s  <<< ERROR!
   java.lang.reflect.InaccessibleObjectException: Unable to make field private 
static final int java.lang.Class.ANNOTATION accessible: module java.base does 
not "opens java.lang" to unnamed module @1a27aae3
at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at 
java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at 
java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178)
at java.base/java.lang.reflect.Field.setAccessible(Field.java:172)
at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:106)
at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132)
at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132)
at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:69)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:2194)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1871)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1854)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createInput(StreamExecutionEnvironment.java:1753)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createInput(StreamExecutionEnvironment.java:1743)
at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:73)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:148)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:249)
at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecCalc.translateToPlanInternal(CommonExecCalc.java:94)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:148)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:249)
at 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:136)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:148)
at 
org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:79)
at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
at scala.collection.Iterator.foreach(Iterator.scala:937)
at scala.collection.Iterator.foreach$(Iterator.scala:937)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
at scala.collection.IterableLike.foreach(IterableLike.scala:70)
at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike.map(TraversableLike.scala:233)
at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:78)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:181)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1656)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:782)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:861)
at 

[GitHub] [flink-connector-elasticsearch] MartijnVisser commented on pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo

2022-03-30 Thread GitBox


MartijnVisser commented on pull request #2:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/2#issuecomment-1083152212


   @JingGe Thanks for this! Should we also be able to run `mvn clean install`?
I've tried that, but I'm getting some errors:
   
   ```
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-connector-elasticsearch] JingGe opened a new pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo

2022-03-30 Thread GitBox


JingGe opened a new pull request #2:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/2


   ## What is the purpose of the change
   
   **Attention**: this PR is still under construction with the following tasks:
   1. merge the change of #18634
   2. remove all code and tests related to the legacy SourceFunction/SinkFunction.
   
   With this PR, Elasticsearch connectors will be moved to the external repo.
   
   
   ## Brief change log
   
 -  create a new maven project and migrate most of the Flink maven pom.
 -  migrate `flink-connector-elasticsearch-base`,
`flink-connector-elasticsearch6`, `flink-connector-elasticsearch7`
 -  migrate the uber jars for SQL: `flink-sql-connector-elasticsearch6`,
`flink-sql-connector-elasticsearch7`
 -  fix some dependency bugs so that the project compiles and the tests pass
   
   ## Verifying this change
   
   It can be verified by checking that `mvn clean package` runs successfully.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26931) Pulsar sink's producer name should be unique

2022-03-30 Thread Yufan Sheng (Jira)
Yufan Sheng created FLINK-26931:
---

 Summary: Pulsar sink's producer name should be unique
 Key: FLINK-26931
 URL: https://issues.apache.org/jira/browse/FLINK-26931
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Pulsar
Affects Versions: 1.15.0, 1.16.0
Reporter: Yufan Sheng
 Fix For: 1.15.0


Pulsar's new sink interface doesn't make the producer name unique, which can
cause Pulsar to fail to consume messages.
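One way to guarantee uniqueness, sketched here as an assumption rather than the actual connector fix, is to derive each producer's name from the configured base name plus a per-subtask suffix, so parallel sink subtasks never collide on Pulsar's producer-name uniqueness check:

```java
import java.util.UUID;

// Hypothetical sketch (not Pulsar connector code): build a producer name that
// is unique per subtask and per producer instance.
public class UniqueProducerNameSketch {
    static String producerName(String base, int subtaskIndex) {
        return base + "-" + subtaskIndex + "-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        String a = producerName("flink-sink", 0);
        String b = producerName("flink-sink", 0);
        // Even two producers on the same subtask index get distinct names.
        System.out.println(a.equals(b)); // false
    }
}
```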



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Call for Presentations now open, ApacheCon North America 2022

2022-03-30 Thread Rich Bowen
[You are receiving this because you are subscribed to one or more user
or dev mailing list of an Apache Software Foundation project.]

ApacheCon draws participants at all levels to explore “Tomorrow’s
Technology Today” across 300+ Apache projects and their diverse
communities. ApacheCon showcases the latest developments in ubiquitous
Apache projects and emerging innovations through hands-on sessions,
keynotes, real-world case studies, trainings, hackathons, community
events, and more.

The Apache Software Foundation will be holding ApacheCon North America
2022 at the New Orleans Sheraton, October 3rd through 6th, 2022. The
Call for Presentations is now open, and will close at 00:01 UTC on May
23rd, 2022.

We are accepting presentation proposals for any topic that is related
to the Apache mission of producing free software for the public good.
This includes, but is not limited to:

Community
Big Data
Search
IoT
Cloud
Fintech
Pulsar
Tomcat

You can submit your session proposals starting today at
https://cfp.apachecon.com/

Rich Bowen, on behalf of the ApacheCon Planners
apachecon.com
@apachecon


[jira] [Created] (FLINK-26930) Rethink last-state upgrade implementation in flink-kubernetes-operator

2022-03-30 Thread Yang Wang (Jira)
Yang Wang created FLINK-26930:
-

 Summary: Rethink last-state upgrade implementation in 
flink-kubernetes-operator
 Key: FLINK-26930
 URL: https://issues.apache.org/jira/browse/FLINK-26930
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: Yang Wang


Following the discussion in FLINK-26916.

 

How does the last-state upgrade work now?

First, the Flink cluster is deleted directly with the HA ConfigMap retained. This 
leaves the job in a "SUSPENDED" state. Then flink-kubernetes-operator deploys a 
new Flink application with the same cluster-id so that it can recover from the 
latest checkpoint. Please note that before starting the application, the JobGraph 
is deleted from the HA ConfigMap. This ensures the newly changed job options take 
effect.

 

Some community devs are considering extending the JRS (JobResultStore) so that 
the stored job result contains a list of retained checkpoints. This of course 
implies that the cluster gets shut down / the job gets terminated properly 
(other cases should be reserved for fail-over scenarios only).

 

As soon as there is a straightforward way of accessing the last checkpoint, we 
should improve the current implementation.





[jira] [Created] (FLINK-26929) Introduce adaptive hash join for batch sql optimization

2022-03-30 Thread dalongliu (Jira)
dalongliu created FLINK-26929:
-

 Summary: Introduce adaptive hash join for batch sql optimization
 Key: FLINK-26929
 URL: https://issues.apache.org/jira/browse/FLINK-26929
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Runtime
Reporter: dalongliu
 Fix For: 1.16.0


We propose an adaptive hash join optimization for the batch join scenario, 
hoping to combine the advantages of sort-merge join and hash join according to 
the characteristics of the runtime data. The adaptive hash join first tries a 
hash join; if that fails, it falls back to a sort-merge join.
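The fallback strategy can be illustrated with a toy sketch (not Flink code; the memory budget is modeled as a simple row-count limit, and rows are `int[]{key, value}` pairs):

```java
import java.util.*;

// Toy illustration of the adaptive strategy described above: attempt an
// in-memory hash join first, and fall back to a sort-merge join when the
// build side exceeds a (simulated) memory budget.
class AdaptiveJoinSketch {
    static List<int[]> join(List<int[]> build, List<int[]> probe, int maxBuildRows) {
        try {
            return hashJoin(build, probe, maxBuildRows);
        } catch (IllegalStateException tooLarge) { // simulated hash-join failure
            return sortMergeJoin(build, probe);
        }
    }

    static List<int[]> hashJoin(List<int[]> build, List<int[]> probe, int maxBuildRows) {
        if (build.size() > maxBuildRows) {
            throw new IllegalStateException("build side too large for memory budget");
        }
        Map<Integer, List<int[]>> table = new HashMap<>();
        for (int[] row : build) {
            table.computeIfAbsent(row[0], k -> new ArrayList<>()).add(row);
        }
        List<int[]> out = new ArrayList<>();
        for (int[] p : probe) {
            for (int[] b : table.getOrDefault(p[0], List.of())) {
                out.add(new int[]{b[0], b[1], p[1]}); // key, buildValue, probeValue
            }
        }
        return out;
    }

    static List<int[]> sortMergeJoin(List<int[]> left, List<int[]> right) {
        left = new ArrayList<>(left);
        right = new ArrayList<>(right);
        left.sort(Comparator.comparingInt(r -> r[0]));
        right.sort(Comparator.comparingInt(r -> r[0]));
        List<int[]> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < left.size() && j < right.size()) {
            int l = left.get(i)[0], r = right.get(j)[0];
            if (l < r) i++;
            else if (l > r) j++;
            else {
                // emit all matching right rows for this left row's key
                int j2 = j;
                while (j2 < right.size() && right.get(j2)[0] == l) {
                    out.add(new int[]{l, left.get(i)[1], right.get(j2)[1]});
                    j2++;
                }
                i++;
            }
        }
        return out;
    }
}
```

The real optimization would of course decide per hash partition and spill to disk rather than re-running the whole join, but the control flow is the same: optimistic hash path, sort-merge as the safety net.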





[jira] [Created] (FLINK-26928) Remove unnecessary Docker network creation in Kafka connector tests

2022-03-30 Thread Qingsheng Ren (Jira)
Qingsheng Ren created FLINK-26928:
-

 Summary: Remove unnecessary Docker network creation in Kafka 
connector tests
 Key: FLINK-26928
 URL: https://issues.apache.org/jira/browse/FLINK-26928
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Affects Versions: 1.15.0, 1.16.0
Reporter: Qingsheng Ren
 Fix For: 1.15.0, 1.16.0


Currently each Kafka test class creates its own Docker network, which can 
exhaust network resources on the Docker host; tests fail once all IP addresses 
in Docker's address pool are occupied.





[jira] [Created] (FLINK-26927) Support restarting JobManager in FlinkContainers

2022-03-30 Thread Qingsheng Ren (Jira)
Qingsheng Ren created FLINK-26927:
-

 Summary: Support restarting JobManager in FlinkContainers
 Key: FLINK-26927
 URL: https://issues.apache.org/jira/browse/FLINK-26927
 Project: Flink
  Issue Type: Improvement
  Components: Test Infrastructure
Affects Versions: 1.16.0
Reporter: Qingsheng Ren
 Fix For: 1.16.0


Some HA-related test cases require restarting the JobManager during the test. 
Some improvements are required in FlinkContainers to support this feature.





Re: [VOTE] Apache Flink Kubernetes Operator Release 0.1.0, release candidate #2

2022-03-30 Thread Biao Geng
Thanks Marton,
I vote for consistency and `flink-kubernetes-operator` as well.
`flink-operator` has been used widely on Docker Hub and by other k8s operator
implementations, which may confuse users.

Best,
Biao

Gyula Fóra  wrote on Wed, Mar 30, 2022 at 16:26:

> Thanks Marton!
> I think we should aim for consistency and easy discoverability. Since we
> use the name `flink-kubernetes-operator` for the project and everywhere on
> the website to refer to it, I suggest adopting it as the name also.
>
> This should be a very simple change that we can easily do before the next
> rc if there are no objections.
>
> Gyula
>
> On Wed, Mar 30, 2022 at 10:10 AM Márton Balassi 
> wrote:
>
> > Hi team,
> >
> > I would like to ask for your input in naming the operator docker image
> and
> > helm chart. [1]
> >
> > For the sake of brevity when we started the Kubernetes Operator work we
> > named the docker image and the helm chart simply flink-operator, while
> the
> > git repository is named flink-kubernetes-operator. [2]
> > Now closing in on our preview release it makes sense to reconsider this,
> it
> > might be preferred to name all artifacts flink-kubernetes-operator for
> the
> > sake of consistency.
> > Currently docker images of our builds are available in the GitHub
> Registry
> > tagged with the short git commit hash and the last build of select
> branches
> > is tagged with the branch name:
> >
> > ghcr.io/apache/flink-operator:439bd41
> > ghcr.io/apache/flink-operator:main
> >
> > During the release process we plan to move the docker image to dockerhub
> > following the process established for Flink.
> > Currently the helm operator for the release candidate can be installed as
> > follows:
> >
> > helm repo add operator-rc2
> >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-0.1.0-rc2
> > helm install flink-operator operator-rc2/flink-operator
> >
> > So the helm chart itself is called flink-operator, but to follow the name
> > of the project it is packaged into
> flink-kubernetes-operator-.tgz.
> > Do you prefer flink-operator for brevity or flink-kubernetes-operator for
> > consistency? If we vote for changing the current setup we could get the
> > changes in for the next RC.
> >
> > [1] https://issues.apache.org/jira/browse/FLINK-26924
> > [2] https://github.com/apache/flink-kubernetes-operator
> > [3]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release#CreatingaFlinkRelease-PublishtheDockerfilesforthenewrelease
> >
> > On Wed, Mar 30, 2022 at 9:17 AM Gyula Fóra  wrote:
> >
> > > Hi Devs,
> > >
> > > I am cancelling the current vote as we have promoted
> > > https://issues.apache.org/jira/browse/FLINK-26916 as a blocker issue.
> > >
> > > Yang has found a fix that we can reasonably do in a short time frame,
> so
> > we
> > > will work on that today before creating the next RC (hopefully later
> > today)
> > >
> > > Thank you!
> > > Gyula
> > >
> > > On Tue, Mar 29, 2022 at 7:51 PM Gyula Fóra 
> wrote:
> > >
> > > > Hi Devs,
> > > >
> > > > I just want to give you heads up regarding a significant issue Matyas
> > > > found around the last-state upgrade mode:
> > > > https://issues.apache.org/jira/browse/FLINK-26916
> > > >
> > > > Long story short, the last-state mode does not actually upgrade the
> > job,
> > > > only the job/taskmanagers. The fix is non-trivial to say the least
> and
> > > > requires changes to the Flink Kubernetes HA implementations to be
> > > feasible.
> > > > Please see the jira ticket for details.
> > > >
> > > > I discussed this offline with Thomas Weise and we agreed to clearly
> > > > highlight this limitation with warnings in the documentation and the
> > > > release blogpost but not block the release on it as the fix will most
> > > > likely require Flink 1.15 if we can include the required changes
> there.
> > > >
> > > > Please continue testing the current RC :)
> > > >
> > > > Thank you all!
> > > > Gyula
> > > >
> > > >
> > > > On Tue, Mar 29, 2022 at 5:17 PM Chenya Zhang 
> > wrote:
> > > >
> > > >> +1 non-binding, thanks for the great efforts and look forward to the
> > > >> release!
> > > >>
> > > >> Chenya
> > > >>
> > > >> On Tue, Mar 29, 2022 at 2:10 AM Gyula Fóra 
> wrote:
> > > >>
> > > >> > Hi everyone,
> > > >> >
> > > >> > Please review and vote on the release candidate #2 for the version
> > > >> 0.1.0 of
> > > >> > Apache Flink Kubernetes Operator,
> > > >> > as follows:
> > > >> > [ ] +1, Approve the release
> > > >> > [ ] -1, Do not approve the release (please provide specific
> > comments)
> > > >> >
> > > >> > **Release Overview**
> > > >> >
> > > >> > As an overview, the release consists of the following:
> > > >> > a) Kubernetes Operator canonical source distribution (including
> the
> > > >> > Dockerfile), to be deployed to the release repository at
> > > >> dist.apache.org
> > > >> > b) Kubernetes Operator Helm Chart to be deployed to the release
> > > >> repository
> > > >> > at dist.apache.org
> > > >> > c) 

[jira] [Created] (FLINK-26926) Allow users to force upgrade even if savepoint is in progress

2022-03-30 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-26926:
--

 Summary: Allow users to force upgrade even if savepoint is in 
progress
 Key: FLINK-26926
 URL: https://issues.apache.org/jira/browse/FLINK-26926
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: Gyula Fora
 Fix For: kubernetes-operator-1.0.0


Currently all upgrades (regardless of upgrade mode) are delayed as long as 
there is a pending savepoint operation.

We should allow users to override this and execute the upgrade (thus 
potentially cancelling the savepoint) regardless of the savepoint status.





Re: [VOTE] Apache Flink Kubernetes Operator Release 0.1.0, release candidate #2

2022-03-30 Thread Gyula Fóra
Thanks Marton!
I think we should aim for consistency and easy discoverability. Since we
use the name `flink-kubernetes-operator` for the project and everywhere on
the website to refer to it, I suggest adopting it as the name also.

This should be a very simple change that we can easily do before the next
rc if there are no objections.

Gyula

On Wed, Mar 30, 2022 at 10:10 AM Márton Balassi 
wrote:

> Hi team,
>
> I would like to ask for your input in naming the operator docker image and
> helm chart. [1]
>
> For the sake of brevity when we started the Kubernetes Operator work we
> named the docker image and the helm chart simply flink-operator, while the
> git repository is named flink-kubernetes-operator. [2]
> Now closing in on our preview release it makes sense to reconsider this, it
> might be preferred to name all artifacts flink-kubernetes-operator for the
> sake of consistency.
> Currently docker images of our builds are available in the GitHub Registry
> tagged with the short git commit hash and the last build of select branches
> is tagged with the branch name:
>
> ghcr.io/apache/flink-operator:439bd41
> ghcr.io/apache/flink-operator:main
>
> During the release process we plan to move the docker image to dockerhub
> following the process established for Flink.
> Currently the helm operator for the release candidate can be installed as
> follows:
>
> helm repo add operator-rc2
>
> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-0.1.0-rc2
> helm install flink-operator operator-rc2/flink-operator
>
> So the helm chart itself is called flink-operator, but to follow the name
> of the project it is packaged into flink-kubernetes-operator-.tgz.
> Do you prefer flink-operator for brevity or flink-kubernetes-operator for
> consistency? If we vote for changing the current setup we could get the
> changes in for the next RC.
>
> [1] https://issues.apache.org/jira/browse/FLINK-26924
> [2] https://github.com/apache/flink-kubernetes-operator
> [3]
>
> https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release#CreatingaFlinkRelease-PublishtheDockerfilesforthenewrelease
>
> On Wed, Mar 30, 2022 at 9:17 AM Gyula Fóra  wrote:
>
> > Hi Devs,
> >
> > I am cancelling the current vote as we have promoted
> > https://issues.apache.org/jira/browse/FLINK-26916 as a blocker issue.
> >
> > Yang has found a fix that we can reasonably do in a short time frame, so
> we
> > will work on that today before creating the next RC (hopefully later
> today)
> >
> > Thank you!
> > Gyula
> >
> > On Tue, Mar 29, 2022 at 7:51 PM Gyula Fóra  wrote:
> >
> > > Hi Devs,
> > >
> > > I just want to give you heads up regarding a significant issue Matyas
> > > found around the last-state upgrade mode:
> > > https://issues.apache.org/jira/browse/FLINK-26916
> > >
> > > Long story short, the last-state mode does not actually upgrade the
> job,
> > > only the job/taskmanagers. The fix is non-trivial to say the least and
> > > requires changes to the Flink Kubernetes HA implementations to be
> > feasible.
> > > Please see the jira ticket for details.
> > >
> > > I discussed this offline with Thomas Weise and we agreed to clearly
> > > highlight this limitation with warnings in the documentation and the
> > > release blogpost but not block the release on it as the fix will most
> > > likely require Flink 1.15 if we can include the required changes there.
> > >
> > > Please continue testing the current RC :)
> > >
> > > Thank you all!
> > > Gyula
> > >
> > >
> > > On Tue, Mar 29, 2022 at 5:17 PM Chenya Zhang 
> wrote:
> > >
> > >> +1 non-binding, thanks for the great efforts and look forward to the
> > >> release!
> > >>
> > >> Chenya
> > >>
> > >> On Tue, Mar 29, 2022 at 2:10 AM Gyula Fóra  wrote:
> > >>
> > >> > Hi everyone,
> > >> >
> > >> > Please review and vote on the release candidate #2 for the version
> > >> 0.1.0 of
> > >> > Apache Flink Kubernetes Operator,
> > >> > as follows:
> > >> > [ ] +1, Approve the release
> > >> > [ ] -1, Do not approve the release (please provide specific
> comments)
> > >> >
> > >> > **Release Overview**
> > >> >
> > >> > As an overview, the release consists of the following:
> > >> > a) Kubernetes Operator canonical source distribution (including the
> > >> > Dockerfile), to be deployed to the release repository at
> > >> dist.apache.org
> > >> > b) Kubernetes Operator Helm Chart to be deployed to the release
> > >> repository
> > >> > at dist.apache.org
> > >> > c) Maven artifacts to be deployed to the Maven Central Repository
> > >> > d) Docker image to be pushed to dockerhub
> > >> >
> > >> > **Staging Areas to Review**
> > >> >
> > >> > The staging areas containing the above mentioned artifacts are as
> > >> follows,
> > >> > for your review:
> > >> > * All artifacts for a,b) can be found in the corresponding dev
> > >> repository
> > >> > at dist.apache.org [1]
> > >> > * All artifacts for c) can be found at the Apache Nexus Repository
> [2]
> > >> > * The docker image is 

[DISCUSS] assign SQL Table properties from environment variables

2022-03-30 Thread Teunissen, F.G.J. (Fred)
Hi devs,

Some SQL table properties contain sensitive data, such as passwords, that we do 
not want to expose to other users in the VVP UI. Also, having them in clear text 
in a SQL statement is not secure. For example,

CREATE TABLE Orders (
`user` BIGINT,
product STRING,
order_time TIMESTAMP(3)
) WITH (
'connector' = 'kafka',

'properties.bootstrap.servers' = 'kafka-host-1:9093,kafka-host-2:9093',
'properties.security.protocol' = 'SSL',
'properties.ssl.key.password' = 'should-be-a-secret',
'properties.ssl.keystore.location' = '/tmp/secrets/my-keystore.jks',
'properties.ssl.keystore.password' = 'should-also-be-a-secret',
'properties.ssl.truststore.location' = '/tmp/secrets/my-truststore.jks',
'properties.ssl.truststore.password' = 'should-again-be-a-secret',
'scan.startup.mode' = 'earliest-offset'
);

I would like to bring up for discussion a proposal to provide these secret 
values via environment variables, since those can be populated from a K8s 
ConfigMap or Secret.

For implementing SQL table properties, the ConfigOption class is used in 
connectors and formats. This class could be extended to check whether the 
config value contains certain tokens, like ‘${env-var-name}’. If it does, it 
would fetch the value of the environment variable and use it to replace the 
token in the config value.

The above SQL statement would then look like,

CREATE TABLE Orders (
`user` BIGINT,
product STRING,
order_time TIMESTAMP(3)
) WITH (
'connector' = 'kafka',

'properties.bootstrap.servers' = 'kafka-host-1:9093,kafka-host-2:9093',
'properties.security.protocol' = 'SSL',
'properties.ssl.key.password' = '${secret_kafka_ssl_key_password}',
'properties.ssl.keystore.location' = '/tmp/secrets/my-keystore.jks',
'properties.ssl.keystore.password' = 
'${secret_kafka_ssl_keystore_password}',
'properties.ssl.truststore.location' = '/tmp/secrets/my-truststore.jks',
'properties.ssl.truststore.password' = 
'${secret_kafka_ssl_truststore_password}',
'scan.startup.mode' = 'earliest-offset'
);

For the purpose of secrets I don’t think you need any complex processing of 
tokens, but perhaps there are other usages as well. For instance,

'properties.bootstrap.servers' = 
'kafka-${otap_env}-1:9093,kafka-${otap_env}-2:9093',

Because it is possible (though I think unlikely) that someone wants a literal 
property value like ‘${not-an-env-var}’, you need to be able to escape the ‘$’ 
token, as in ‘$${not-an-env-var}’. This also means that in theory the change 
could break compatibility.
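A minimal sketch of the proposed substitution, under stated assumptions: the token syntax and escape handling follow the examples above, the class name is hypothetical, and the environment is injected as a map for testability rather than read from `System.getenv()` — this is not an actual ConfigOption API:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of the proposed token substitution: "${name}" is
// replaced with a value from the environment, unresolvable tokens are left
// untouched, and "$${name}" escapes to a literal "${name}".
class EnvTokenResolver {
    // group(1) captures an optional second '$' (the escape marker),
    // group(2) captures the variable name inside the braces
    private static final Pattern TOKEN = Pattern.compile("\\$(\\$?)\\{([^}]+)\\}");

    static String resolve(String value, Map<String, String> env) {
        Matcher m = TOKEN.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            if (!m.group(1).isEmpty()) {
                // "$${name}" -> literal "${name}"
                m.appendReplacement(sb, Matcher.quoteReplacement("${" + m.group(2) + "}"));
            } else {
                // substitute if the variable is set, otherwise keep the token as-is
                String replacement = env.getOrDefault(m.group(2), m.group(0));
                m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
            }
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```

In the real implementation one would likely want to fail fast on an unresolvable token for required secrets instead of silently keeping it, but the escaping question Fred raises is visible even in this small form.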

Looking forward to your feedback!

Best,
Fred Teunissen



[jira] [Created] (FLINK-26925) loss scale in union situation

2022-03-30 Thread Spongebob (Jira)
Spongebob created FLINK-26925:
-

 Summary: loss scale in union situation
 Key: FLINK-26925
 URL: https://issues.apache.org/jira/browse/FLINK-26925
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Reporter: Spongebob


When I union two columns whose data types are decimal(38,4) and decimal(38,2), 
the result type is decimal(38,2). This causes a loss of scale in the result 
set. I think the final data type should be decimal(38,4).
{code:java}
TableEnvironment tableEnvironment = 
TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());

Table t1 = tableEnvironment.sqlQuery("select cast(1.23 as decimal(38,2)) as a");
Table t2 = tableEnvironment.sqlQuery("select cast(4.5678 as decimal(38,4)) as 
a");

tableEnvironment.createTemporaryView("t1", t1);
tableEnvironment.createTemporaryView("t2", t2);

tableEnvironment.executeSql("select a from t1 union all select a from 
t2").print();
tableEnvironment.sqlQuery("select a from t1 union all select a from 
t2").printSchema(); 


// output
+------------------------------------------+
|                                        a |
+------------------------------------------+
|                                     1.23 |
|                                     4.57 |
+------------------------------------------+
2 rows in set
(
  `a` DECIMAL(38, 2) NOT NULL
){code}
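For reference, the usual SQL rule for deriving a UNION result type over decimals (as implemented in systems such as Apache Calcite) keeps the larger scale plus enough integer digits to cover both sides, capped at the maximum precision. A small sketch of that rule (illustrative, not Flink's actual type-derivation code):

```java
// Sketch of the common UNION type-derivation rule for decimals:
//   scale     = max(s1, s2)
//   precision = min(38, max(p1 - s1, p2 - s2) + scale)
// Under this rule DECIMAL(38,4) UNION DECIMAL(38,2) yields DECIMAL(38,4),
// matching the expectation in this report.
class DecimalUnionType {
    static int[] unionType(int p1, int s1, int p2, int s2) {
        int scale = Math.max(s1, s2);
        int integerDigits = Math.max(p1 - s1, p2 - s2);
        int precision = Math.min(38, integerDigits + scale); // cap at max precision
        return new int[]{precision, scale};
    }
}
```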





Re: [VOTE] Apache Flink Kubernetes Operator Release 0.1.0, release candidate #2

2022-03-30 Thread Márton Balassi
Hi team,

I would like to ask for your input in naming the operator docker image and
helm chart. [1]

For the sake of brevity when we started the Kubernetes Operator work we
named the docker image and the helm chart simply flink-operator, while the
git repository is named flink-kubernetes-operator. [2]
Now closing in on our preview release it makes sense to reconsider this, it
might be preferred to name all artifacts flink-kubernetes-operator for the
sake of consistency.
Currently docker images of our builds are available in the GitHub Registry
tagged with the short git commit hash and the last build of select branches
is tagged with the branch name:

ghcr.io/apache/flink-operator:439bd41
ghcr.io/apache/flink-operator:main

During the release process we plan to move the docker image to dockerhub
following the process established for Flink.
Currently the helm operator for the release candidate can be installed as
follows:

helm repo add operator-rc2
https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-0.1.0-rc2
helm install flink-operator operator-rc2/flink-operator

So the helm chart itself is called flink-operator, but to follow the name
of the project it is packaged into flink-kubernetes-operator-.tgz.
Do you prefer flink-operator for brevity or flink-kubernetes-operator for
consistency? If we vote for changing the current setup we could get the
changes in for the next RC.

[1] https://issues.apache.org/jira/browse/FLINK-26924
[2] https://github.com/apache/flink-kubernetes-operator
[3]
https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release#CreatingaFlinkRelease-PublishtheDockerfilesforthenewrelease

On Wed, Mar 30, 2022 at 9:17 AM Gyula Fóra  wrote:

> Hi Devs,
>
> I am cancelling the current vote as we have promoted
> https://issues.apache.org/jira/browse/FLINK-26916 as a blocker issue.
>
> Yang has found a fix that we can reasonably do in a short time frame, so we
> will work on that today before creating the next RC (hopefully later today)
>
> Thank you!
> Gyula
>
> On Tue, Mar 29, 2022 at 7:51 PM Gyula Fóra  wrote:
>
> > Hi Devs,
> >
> > I just want to give you heads up regarding a significant issue Matyas
> > found around the last-state upgrade mode:
> > https://issues.apache.org/jira/browse/FLINK-26916
> >
> > Long story short, the last-state mode does not actually upgrade the job,
> > only the job/taskmanagers. The fix is non-trivial to say the least and
> > requires changes to the Flink Kubernetes HA implementations to be
> feasible.
> > Please see the jira ticket for details.
> >
> > I discussed this offline with Thomas Weise and we agreed to clearly
> > highlight this limitation with warnings in the documentation and the
> > release blogpost but not block the release on it as the fix will most
> > likely require Flink 1.15 if we can include the required changes there.
> >
> > Please continue testing the current RC :)
> >
> > Thank you all!
> > Gyula
> >
> >
> > On Tue, Mar 29, 2022 at 5:17 PM Chenya Zhang  wrote:
> >
> >> +1 non-binding, thanks for the great efforts and look forward to the
> >> release!
> >>
> >> Chenya
> >>
> >> On Tue, Mar 29, 2022 at 2:10 AM Gyula Fóra  wrote:
> >>
> >> > Hi everyone,
> >> >
> >> > Please review and vote on the release candidate #2 for the version
> >> 0.1.0 of
> >> > Apache Flink Kubernetes Operator,
> >> > as follows:
> >> > [ ] +1, Approve the release
> >> > [ ] -1, Do not approve the release (please provide specific comments)
> >> >
> >> > **Release Overview**
> >> >
> >> > As an overview, the release consists of the following:
> >> > a) Kubernetes Operator canonical source distribution (including the
> >> > Dockerfile), to be deployed to the release repository at
> >> dist.apache.org
> >> > b) Kubernetes Operator Helm Chart to be deployed to the release
> >> repository
> >> > at dist.apache.org
> >> > c) Maven artifacts to be deployed to the Maven Central Repository
> >> > d) Docker image to be pushed to dockerhub
> >> >
> >> > **Staging Areas to Review**
> >> >
> >> > The staging areas containing the above mentioned artifacts are as
> >> follows,
> >> > for your review:
> >> > * All artifacts for a,b) can be found in the corresponding dev
> >> repository
> >> > at dist.apache.org [1]
> >> > * All artifacts for c) can be found at the Apache Nexus Repository [2]
> >> > * The docker image is staged on github [7]
> >> >
> >> > All artifacts are signed with the
> >> > key 911F218F79ACEA8EB453799EEE325DDEBFED467D [3]
> >> >
> >> > Other links for your review:
> >> > * JIRA release notes [4]
> >> > * source code tag "release-0.1.0-rc2" [5]
> >> > * PR to update the website Downloads page to include Kubernetes
> Operator
> >> > links [6]
> >> >
> >> > **Vote Duration**
> >> >
> >> > The voting time will run for at least 72 hours.
> >> > It is adopted by majority approval, with at least 3 PMC affirmative
> >> votes.
> >> >
> >> > **Note for Functional Verification**
> >> >
> >> > You can use the dev repository as a helm 

[jira] [Created] (FLINK-26924) Decide on operator docker image name

2022-03-30 Thread Jira
Márton Balassi created FLINK-26924:
--

 Summary: Decide on operator docker image name
 Key: FLINK-26924
 URL: https://issues.apache.org/jira/browse/FLINK-26924
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Affects Versions: kubernetes-operator-0.1.0
Reporter: Márton Balassi
Assignee: Márton Balassi


For the sake of brevity when we started the Kubernetes Operator work we named 
the docker image and the helm chart simply flink-operator, while the git 
repository is named 
[flink-kubernetes-operator|https://github.com/apache/flink-kubernetes-operator].

Now closing in on our preview release it makes sense to reconsider this, it 
might be preferred to name all artifacts flink-kubernetes-operator for the sake 
of consistency.

Currently docker images of our builds are available in the GitHub Registry 
tagged with the short git commit hash and the last build of select branches is 
tagged with the branch name:
{code:java}
ghcr.io/apache/flink-operator:439bd41
ghcr.io/apache/flink-operator:main{code}
During the release process we plan to move the docker image to dockerhub 
following the process established for 
[Flink|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release#CreatingaFlinkRelease-PublishtheDockerfilesforthenewrelease].

Currently the helm operator for the release candidate can be installed as 
follows:
{noformat}
helm repo add operator-rc2
https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-0.1.0-rc2
helm install flink-operator operator-rc2/flink-operator{noformat}
So the helm chart itself is called flink-operator, but to follow the name of 
the project it is packaged into flink-kubernetes-operator-.tgz.

Do you prefer flink-operator for brevity or flink-kubernetes-operator for 
consistency?

 





[jira] [Created] (FLINK-26923) SavepointITCase.testStopWithSavepointFailsOverToSavepoint failed on azure

2022-03-30 Thread Yun Gao (Jira)
Yun Gao created FLINK-26923:
---

 Summary: SavepointITCase.testStopWithSavepointFailsOverToSavepoint 
 failed on azure
 Key: FLINK-26923
 URL: https://issues.apache.org/jira/browse/FLINK-26923
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.16.0
Reporter: Yun Gao



{code:java}
2022-03-29T05:50:18.9604940Z Mar 29 05:50:18 [ERROR] Tests run: 20, Failures: 
0, Errors: 1, Skipped: 1, Time elapsed: 28.306 s <<< FAILURE! - in 
org.apache.flink.test.checkpointing.SavepointITCase
2022-03-29T05:50:18.9609713Z Mar 29 05:50:18 [ERROR] 
org.apache.flink.test.checkpointing.SavepointITCase.testStopWithSavepointFailsOverToSavepoint
  Time elapsed: 3.363 s  <<< ERROR!
2022-03-29T05:50:18.9611347Z Mar 29 05:50:18 
org.apache.flink.util.FlinkException: Stop with savepoint operation could not 
be completed.
2022-03-29T05:50:18.9613057Z Mar 29 05:50:18at 
org.apache.flink.runtime.scheduler.adaptive.StopWithSavepoint.onLeave(StopWithSavepoint.java:124)
2022-03-29T05:50:18.9614629Z Mar 29 05:50:18at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.transitionToState(AdaptiveScheduler.java:1181)
2022-03-29T05:50:18.9616369Z Mar 29 05:50:18at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.goToRestarting(AdaptiveScheduler.java:858)
2022-03-29T05:50:18.9618273Z Mar 29 05:50:18at 
org.apache.flink.runtime.scheduler.adaptive.FailureResultUtil.restartOrFail(FailureResultUtil.java:28)
2022-03-29T05:50:18.9619815Z Mar 29 05:50:18at 
org.apache.flink.runtime.scheduler.adaptive.StopWithSavepoint.onFailure(StopWithSavepoint.java:149)
2022-03-29T05:50:18.9621464Z Mar 29 05:50:18at 
org.apache.flink.runtime.scheduler.adaptive.StateWithExecutionGraph.updateTaskExecutionState(StateWithExecutionGraph.java:367)
2022-03-29T05:50:18.9623122Z Mar 29 05:50:18at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.lambda$updateTaskExecutionState$4(AdaptiveScheduler.java:496)
2022-03-29T05:50:18.9624528Z Mar 29 05:50:18at 
org.apache.flink.runtime.scheduler.adaptive.State.tryCall(State.java:137)
2022-03-29T05:50:18.9626318Z Mar 29 05:50:18at 
org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.updateTaskExecutionState(AdaptiveScheduler.java:493)
2022-03-29T05:50:18.9627831Z Mar 29 05:50:18at 
org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
2022-03-29T05:50:18.9629329Z Mar 29 05:50:18at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
2022-03-29T05:50:18.9630643Z Mar 29 05:50:18at 
sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
2022-03-29T05:50:18.9632127Z Mar 29 05:50:18at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2022-03-29T05:50:18.9633394Z Mar 29 05:50:18at 
java.lang.reflect.Method.invoke(Method.java:498)
2022-03-29T05:50:18.9634943Z Mar 29 05:50:18at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
2022-03-29T05:50:18.9636737Z Mar 29 05:50:18at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
2022-03-29T05:50:18.9638234Z Mar 29 05:50:18at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
2022-03-29T05:50:18.9639920Z Mar 29 05:50:18at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
2022-03-29T05:50:18.9641506Z Mar 29 05:50:18at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
2022-03-29T05:50:18.9643007Z Mar 29 05:50:18at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
2022-03-29T05:50:18.9644379Z Mar 29 05:50:18at 
akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
2022-03-29T05:50:18.9645829Z Mar 29 05:50:18at 
akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
2022-03-29T05:50:18.9647316Z Mar 29 05:50:18at 
scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
2022-03-29T05:50:18.9648648Z Mar 29 05:50:18at 
scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
2022-03-29T05:50:18.9650044Z Mar 29 05:50:18at 
akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
2022-03-29T05:50:18.9651437Z Mar 29 05:50:18at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
2022-03-29T05:50:18.9652830Z Mar 29 05:50:18at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
2022-03-29T05:50:18.9654205Z Mar 29 05:50:18at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
2022-03-29T05:50:18.9655489Z Mar 29 05:50:18at 
akka.actor.Actor.aroundReceive(Actor.scala:537)
2022-03-29T05:50:18.9656976Z Mar 29 05:50:18at 

Re: [VOTE] Apache Flink Kubernetes Operator Release 0.1.0, release candidate #2

2022-03-30 Thread Gyula Fóra
Hi Devs,

I am cancelling the current vote as we have promoted
https://issues.apache.org/jira/browse/FLINK-26916 as a blocker issue.

Yang has found a fix that we can reasonably do in a short time frame, so we
will work on that today before creating the next RC (hopefully later today)

Thank you!
Gyula

On Tue, Mar 29, 2022 at 7:51 PM Gyula Fóra  wrote:

> Hi Devs,
>
> I just want to give you heads up regarding a significant issue Matyas
> found around the last-state upgrade mode:
> https://issues.apache.org/jira/browse/FLINK-26916
>
> Long story short, the last-state mode does not actually upgrade the job;
> it only restarts the job/taskmanagers. The fix is non-trivial to say the least and
> requires changes to the Flink Kubernetes HA implementations to be feasible.
> Please see the jira ticket for details.
>
> I discussed this offline with Thomas Weise and we agreed to clearly
> highlight this limitation with warnings in the documentation and the
> release blogpost but not block the release on it as the fix will most
> likely require Flink 1.15 if we can include the required changes there.
>
> Please continue testing the current RC :)
>
> Thank you all!
> Gyula
>
>
> On Tue, Mar 29, 2022 at 5:17 PM Chenya Zhang  wrote:
>
>> +1 non-binding, thanks for the great efforts and look forward to the
>> release!
>>
>> Chenya
>>
>> On Tue, Mar 29, 2022 at 2:10 AM Gyula Fóra  wrote:
>>
>> > Hi everyone,
>> >
>> > Please review and vote on the release candidate #2 for the version
>> 0.1.0 of
>> > Apache Flink Kubernetes Operator,
>> > as follows:
>> > [ ] +1, Approve the release
>> > [ ] -1, Do not approve the release (please provide specific comments)
>> >
>> > **Release Overview**
>> >
>> > As an overview, the release consists of the following:
>> > a) Kubernetes Operator canonical source distribution (including the
>> > Dockerfile), to be deployed to the release repository at
>> dist.apache.org
>> > b) Kubernetes Operator Helm Chart to be deployed to the release
>> repository
>> > at dist.apache.org
>> > c) Maven artifacts to be deployed to the Maven Central Repository
>> > d) Docker image to be pushed to dockerhub
>> >
>> > **Staging Areas to Review**
>> >
>> > The staging areas containing the above mentioned artifacts are as
>> follows,
>> > for your review:
>> > * All artifacts for a,b) can be found in the corresponding dev
>> repository
>> > at dist.apache.org [1]
>> > * All artifacts for c) can be found at the Apache Nexus Repository [2]
>> > * The docker image is staged on github [7]
>> >
>> > All artifacts are signed with the
>> > key 911F218F79ACEA8EB453799EEE325DDEBFED467D [3]
>> >
>> > Other links for your review:
>> > * JIRA release notes [4]
>> > * source code tag "release-0.1.0-rc2" [5]
>> > * PR to update the website Downloads page to include Kubernetes Operator
>> > links [6]
>> >
>> > **Vote Duration**
>> >
>> > The voting time will run for at least 72 hours.
>> > It is adopted by majority approval, with at least 3 PMC affirmative
>> votes.
>> >
>> > **Note for Functional Verification**
>> >
>> > You can use the dev repository as a helm repo to test the operator
>> > directly:
>> >
>> > helm repo add operator-rc2
>> >
>> >
>> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-0.1.0-rc2
>> > helm install flink-operator operator-rc2/flink-operator
>> >
>> > Thanks,
>> > Gyula
>> >
>> > [1]
>> >
>> >
>> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-0.1.0-rc2/
>> > [2]
>> > https://repository.apache.org/content/repositories/orgapacheflink-1491/
>> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> > [4]
>> >
>> >
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12351499
>> > [5]
>> >
>> https://github.com/apache/flink-kubernetes-operator/tree/release-0.1.0-rc2
>> > [6] https://github.com/apache/flink-web/pull/519
>> > [7] ghcr.io/apache/flink-operator:260df17
>> >
>>
>


[jira] [Created] (FLINK-26922) KafkaSinkITCase.testRecoveryWithExactlyOnceGuaranteeAndConcurrentCheckpoints failed on azure

2022-03-30 Thread Yun Gao (Jira)
Yun Gao created FLINK-26922:
---

 Summary: 
KafkaSinkITCase.testRecoveryWithExactlyOnceGuaranteeAndConcurrentCheckpoints 
failed on azure
 Key: FLINK-26922
 URL: https://issues.apache.org/jira/browse/FLINK-26922
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.15.0
Reporter: Yun Gao



{code:java}
2022-03-29T06:20:18.1760718Z Mar 29 06:20:18 [ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 116.912 s <<< FAILURE! - in org.apache.flink.connector.kafka.sink.KafkaSinkITCase
2022-03-29T06:20:18.1762540Z Mar 29 06:20:18 [ERROR] org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithExactlyOnceGuaranteeAndConcurrentCheckpoints  Time elapsed: 30.087 s  <<< FAILURE!
2022-03-29T06:20:18.1763909Z Mar 29 06:20:18 org.opentest4j.MultipleFailuresError: 
2022-03-29T06:20:18.1765302Z Mar 29 06:20:18 Multiple Failures (2 failures)
2022-03-29T06:20:18.1766544Z Mar 29 06:20:18    java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: The request timed out.
2022-03-29T06:20:18.1768163Z Mar 29 06:20:18    java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
2022-03-29T06:20:18.1769034Z Mar 29 06:20:18    at org.junit.vintage.engine.execution.TestRun.getStoredResultOrSuccessful(TestRun.java:196)
2022-03-29T06:20:18.1769850Z Mar 29 06:20:18    at org.junit.vintage.engine.execution.RunListenerAdapter.fireExecutionFinished(RunListenerAdapter.java:226)
2022-03-29T06:20:18.1770979Z Mar 29 06:20:18    at org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:192)
2022-03-29T06:20:18.1772244Z Mar 29 06:20:18    at org.junit.vintage.engine.execution.RunListenerAdapter.testFinished(RunListenerAdapter.java:79)
2022-03-29T06:20:18.1773131Z Mar 29 06:20:18    at org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87)
2022-03-29T06:20:18.1774252Z Mar 29 06:20:18    at org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225)
2022-03-29T06:20:18.1775004Z Mar 29 06:20:18    at org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
2022-03-29T06:20:18.1775857Z Mar 29 06:20:18    at org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222)
2022-03-29T06:20:18.1776826Z Mar 29 06:20:18    at org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38)
2022-03-29T06:20:18.124Z Mar 29 06:20:18    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372)
2022-03-29T06:20:18.1778617Z Mar 29 06:20:18    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
2022-03-29T06:20:18.1779395Z Mar 29 06:20:18    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
2022-03-29T06:20:18.1780054Z Mar 29 06:20:18    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
2022-03-29T06:20:18.1780688Z Mar 29 06:20:18    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
2022-03-29T06:20:18.1781340Z Mar 29 06:20:18    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
2022-03-29T06:20:18.1782014Z Mar 29 06:20:18    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
2022-03-29T06:20:18.1782867Z Mar 29 06:20:18    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
2022-03-29T06:20:18.1783647Z Mar 29 06:20:18    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2022-03-29T06:20:18.1784349Z Mar 29 06:20:18    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2022-03-29T06:20:18.1804236Z Mar 29 06:20:18    at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
2022-03-29T06:20:18.1805948Z Mar 29 06:20:18    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2022-03-29T06:20:18.1807077Z Mar 29 06:20:18    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2022-03-29T06:20:18.1808190Z Mar 29 06:20:18    at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
2022-03-29T06:20:18.1809730Z Mar 29 06:20:18    at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
2022-03-29T06:20:18.1810753Z Mar 29 06:20:18    at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
2022-03-29T06:20:18.1812091Z Mar 29 06:20:18    at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
2022-03-29T06:20:18.1813709Z Mar 29 06:20:18    at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
2022-03-29T06:20:18.1815050Z Mar 29 06:20:18    at 
{code}

[jira] [Created] (FLINK-26921) KafkaSinkE2ECase.testStartFromSavepoint failed on azure

2022-03-30 Thread Yun Gao (Jira)
Yun Gao created FLINK-26921:
---

 Summary: KafkaSinkE2ECase.testStartFromSavepoint failed on azure
 Key: FLINK-26921
 URL: https://issues.apache.org/jira/browse/FLINK-26921
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.15.0
Reporter: Yun Gao
 Fix For: 1.15.0



{code:java}
2022-03-29T08:08:40.8158780Z Mar 29 08:08:40 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 181.448 s <<< FAILURE! - in org.apache.flink.tests.util.kafka.KafkaSinkE2ECase
2022-03-29T08:08:40.8164304Z Mar 29 08:08:40 [ERROR] org.apache.flink.tests.util.kafka.KafkaSinkE2ECase.testStartFromSavepoint(TestEnvironment, DataStreamSinkExternalContext, CheckpointingMode)[1]  Time elapsed: 40.065 s  <<< ERROR!
2022-03-29T08:08:40.8166237Z Mar 29 08:08:40 java.util.concurrent.TimeoutException: Condition was not met in given timeout.
2022-03-29T08:08:40.8167431Z Mar 29 08:08:40    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:167)
2022-03-29T08:08:40.8168412Z Mar 29 08:08:40    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
2022-03-29T08:08:40.8169380Z Mar 29 08:08:40    at org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:137)
2022-03-29T08:08:40.8170349Z Mar 29 08:08:40    at org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus(CommonTestUtils.java:285)
2022-03-29T08:08:40.8171306Z Mar 29 08:08:40    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.killJob(SinkTestSuiteBase.java:565)
2022-03-29T08:08:40.8172594Z Mar 29 08:08:40    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.restartFromSavepoint(SinkTestSuiteBase.java:339)
2022-03-29T08:08:40.8173644Z Mar 29 08:08:40    at org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase.testStartFromSavepoint(SinkTestSuiteBase.java:184)
2022-03-29T08:08:40.8174543Z Mar 29 08:08:40    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2022-03-29T08:08:40.8175883Z Mar 29 08:08:40    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2022-03-29T08:08:40.8176693Z Mar 29 08:08:40    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2022-03-29T08:08:40.8177354Z Mar 29 08:08:40    at java.lang.reflect.Method.invoke(Method.java:498)
2022-03-29T08:08:40.8182704Z Mar 29 08:08:40    at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
2022-03-29T08:08:40.8184845Z Mar 29 08:08:40    at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
2022-03-29T08:08:40.8185816Z Mar 29 08:08:40    at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
2022-03-29T08:08:40.8186862Z Mar 29 08:08:40    at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
2022-03-29T08:08:40.8187625Z Mar 29 08:08:40    at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
2022-03-29T08:08:40.8188421Z Mar 29 08:08:40    at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
2022-03-29T08:08:40.8189279Z Mar 29 08:08:40    at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
2022-03-29T08:08:40.8190343Z Mar 29 08:08:40    at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
2022-03-29T08:08:40.8191193Z Mar 29 08:08:40    at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
2022-03-29T08:08:40.8192057Z Mar 29 08:08:40    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
2022-03-29T08:08:40.8192873Z Mar 29 08:08:40    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
2022-03-29T08:08:40.8193697Z Mar 29 08:08:40    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
2022-03-29T08:08:40.8194844Z Mar 29 08:08:40    at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
2022-03-29T08:08:40.8195781Z Mar 29 08:08:40    at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
2022-03-29T08:08:40.8196588Z Mar 29 08:08:40    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
2022-03-29T08:08:40.8197616Z Mar 29 08:08:40    at 
{code}

[jira] [Created] (FLINK-26920) It reported "The configured managed memory fraction for Python worker process must be within (0, 1], was: %s."

2022-03-30 Thread Dian Fu (Jira)
Dian Fu created FLINK-26920:
---

 Summary: It reported "The configured managed memory fraction for 
Python worker process must be within (0, 1], was: %s."
 Key: FLINK-26920
 URL: https://issues.apache.org/jira/browse/FLINK-26920
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.14.0, 1.13.0, 1.12.0, 1.15.0
Reporter: Dian Fu


For the following code:
{code}
import numpy as np
from pyflink.common import Row
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.functions import KeyedProcessFunction, RuntimeContext
from pyflink.table import StreamTableEnvironment
from sklearn import svm, datasets

env = StreamExecutionEnvironment.get_execution_environment()
t_env = StreamTableEnvironment.create(stream_execution_environment=env)

# Table source
t_env.execute_sql("""
CREATE TABLE my_source (
    a FLOAT,
    key STRING
) WITH (
    'connector' = 'datagen',
    'rows-per-second' = '1',
    'fields.a.min' = '4.3',
    'fields.a.max' = '7.9',
    'fields.key.length' = '10'
)
""")


def process_type():
    return Types.ROW_NAMED(
        ["a", "key"],
        [Types.FLOAT(), Types.STRING()]
    )


# Append-only datastream
ds = t_env.to_append_stream(
    t_env.from_path('my_source'),
    process_type())


class MyKeyedProcessFunction(KeyedProcessFunction):

    def open(self, runtime_context: RuntimeContext):
        clf = svm.SVC()
        X, y = datasets.load_iris(return_X_y=True)
        clf.fit(X, y)
        self.model = clf

    def process_element(self, value: Row, ctx: 'KeyedProcessFunction.Context'):
        # Query the round settlement log from redis by role_id + space
        features = np.array([value['a'], 3.5, 1.4, 0.2]).reshape(1, -1)
        predict = int(self.model.predict(features)[0])
        yield Row(predict=predict, role_id=value['key'])


ds = ds.key_by(lambda a: a['key'], key_type=Types.STRING()) \
    .process(
        MyKeyedProcessFunction(),
        output_type=Types.ROW_NAMED(
            ["hit", "role_id"],
            [Types.INT(), Types.STRING()]
        ))


# Use a table sink
t_env.execute_sql("""
CREATE TABLE my_sink (
  hit INT,
  role_id STRING
) WITH (
  'connector' = 'print'
)
""")

t_env.create_temporary_view("predict", ds)
t_env.execute_sql("""
INSERT INTO my_sink
SELECT * FROM predict
""").wait()
{code}

It reported the following exception:
{code}
Caused by: java.lang.IllegalArgumentException: The configured managed memory fraction for Python worker process must be within (0, 1], was: %s. It may be because the consumer type "Python" was missing or set to 0 for the config option "taskmanager.memory.managed.consumer-weights".0.0
    at org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:138)
    at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.open(BeamPythonFunctionRunner.java:233)
    at org.apache.flink.streaming.api.operators.python.AbstractExternalPythonFunctionOperator.open(AbstractExternalPythonFunctionOperator.java:56)
    at org.apache.flink.streaming.api.operators.python.AbstractOneInputPythonFunctionOperator.open(AbstractOneInputPythonFunctionOperator.java:116)
    at org.apache.flink.streaming.api.operators.python.PythonKeyedProcessOperator.open(PythonKeyedProcessOperator.java:121)
    at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:107)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:712)
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:688)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:655)
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:917)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
    at java.lang.Thread.run(Thread.java:748)
{code}
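As a side note, the literal "%s" and the trailing ".0.0" in the message suggest the fraction value (0.0) was appended to the message template rather than substituted into the placeholder. A minimal Python sketch of that symptom (the template text is taken from the log above; how the message was actually built in Flink is an assumption):

{code}
# Error template as it appears in the log; "%s" is a placeholder that
# should have been replaced by the actual fraction value.
template = ('The configured managed memory fraction for Python worker process '
            'must be within (0, 1], was: %s. It may be because the consumer type '
            '"Python" was missing or set to 0 for the config option '
            '"taskmanager.memory.managed.consumer-weights".')
fraction = 0.0

# Appending the value reproduces the garbled message from the log:
# "%s" stays literal and ".0.0" ends up at the end.
garbled = template + str(fraction)

# Substituting into the placeholder yields the intended message.
intended = template % fraction
{code}

As for a workaround, the error text itself points at "taskmanager.memory.managed.consumer-weights": giving the PYTHON consumer a non-zero weight there (e.g. OPERATOR:70,STATE_BACKEND:70,PYTHON:30 — illustrative values, not verified against this ticket) should avoid the zero fraction.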



--
This message was sent by Atlassian Jira
(v8.20.1#820001)