[GitHub] [flink] RocMarshal commented on pull request #17352: [FLINK-10230][table] Support 'SHOW CREATE VIEW' syntax to print the query of a view

2021-10-12 Thread GitBox


RocMarshal commented on pull request #17352:
URL: https://github.com/apache/flink/pull/17352#issuecomment-941932352


   @leonardBang Could you help merge it if there is nothing 
inappropriate? Thank you.
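
   For context, once this PR lands the new statement should be usable like any 
other SQL statement. A minimal hedged sketch (`my_view` is a placeholder view 
created here just for the example):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ShowCreateViewExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        tEnv.executeSql("CREATE VIEW my_view AS SELECT * FROM (VALUES (1)) AS t(id)");
        // Prints the CREATE VIEW statement that defines the view.
        tEnv.executeSql("SHOW CREATE VIEW my_view").print();
    }
}
{code}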


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24520) Building of WebUI failed on Ubuntu 20

2021-10-12 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-24520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17428011#comment-17428011
 ] 

Ingo Bürk commented on FLINK-24520:
---

What was the Maven command used here?

> Building of WebUI failed on Ubuntu 20
> -------------------------------------
>
> Key: FLINK-24520
> URL: https://issues.apache.org/jira/browse/FLINK-24520
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 1.13.3
>Reporter: Chesnay Schepler
>Priority: Major
>
> While preparing the 1.13.3 release I ran into an issue where the WebUI could 
> not be built via Maven.
> {code}
> [INFO] husky > setting up git hooks
> [INFO] HUSKY_SKIP_INSTALL environment variable is set to 'true', skipping Git 
> hooks installation.
> [ERROR] Aborted
> [ERROR] npm ERR! code ELIFECYCLE
> [ERROR] npm ERR! errno 134
> [ERROR] npm ERR! husky@1.3.1 install: `node husky install`
> [ERROR] npm ERR! Exit status 134
> [ERROR] npm ERR!
> [ERROR] npm ERR! Failed at the husky@1.3.1 install script.
> [ERROR] npm ERR! This is probably not a problem with npm. There is likely 
> additional logging output above.
> {code}
> However, manually invoking npm from the command-line worked just fine.
> Deleting the node_modules directory had no effect.
> We should investigate what was causing these issues.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17461: [FLINK-24503] [Deployment / Kubernetes] change kubernetes.rest-service.exposed.type defaults to ClusterIP

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17461:
URL: https://github.com/apache/flink/pull/17461#issuecomment-941929030


   
   ## CI report:
   
   * c1b70a0eca8014f62194ccc75001da67accfb0f5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25000)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-24503) Security: native kubernetes exposes REST service via LoadBalancer in default

2021-10-12 Thread LI Zhennan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17428010#comment-17428010
 ] 

LI Zhennan commented on FLINK-24503:


Hi [~xtsong],

I have submitted a PR: https://github.com/apache/flink/pull/17461

I am a first-time contributor to Flink. If I missed something, please let me 
know.

Thanks.

Best regards.

> Security: native kubernetes exposes REST service via LoadBalancer in default
> ----------------------------------------------------------------------------
>
> Key: FLINK-24503
> URL: https://issues.apache.org/jira/browse/FLINK-24503
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0, 1.13.1, 1.13.2
> Environment: Flink 1.13.2, native kubernetes
>Reporter: LI Zhennan
>Priority: Major
>  Labels: pull-request-available, security
>
> Hi,
>  
> Flink native k8s deployment exposes the REST service via LoadBalancer by default: 
> https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/config/#kubernetes-rest-service-exposed-type
> I propose to consider it a security issue.
> It is very likely for users to unconsciously expose their Flink REST service 
> to the public Internet, given they are deploying on a k8s cluster provided by 
> a cloud service like AWS or Google Cloud.
> Given access, anyone can browse and cancel Flink jobs via the REST service.
> Personally I noticed this issue after my staging deployment went online for 2 
> days.
> Here, I propose to alter the default value to `ClusterIP`, so that:
>  # the REST service is not exposed to the Internet accidentally;
>  # the developer can use `kubectl port-forward` to access the service by 
> default;
>  # the developer can still expose the REST service via LoadBalancer by 
> expressing it explicitly in `flink run-application` params.
> If it is okay, I would like to contribute the fix.
>  
> Thank you.
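
For illustration, a hedged sketch of the explicit opt-in the proposal preserves 
(assumes the `KubernetesConfigOptions` class this option is defined in; the CLI 
equivalent would be passing `-Dkubernetes.rest-service.exposed.type=LoadBalancer` 
to `flink run-application`):

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;

public class ExposedTypeExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Explicitly opt back into LoadBalancer once ClusterIP becomes the default.
        conf.set(
                KubernetesConfigOptions.REST_SERVICE_EXPOSED_TYPE,
                KubernetesConfigOptions.ServiceExposedType.LoadBalancer);
    }
}
{code}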





[GitHub] [flink] flinkbot commented on pull request #17461: [FLINK-24503] [Deployment / Kubernetes] change kubernetes.rest-service.exposed.type defaults to ClusterIP

2021-10-12 Thread GitBox


flinkbot commented on pull request #17461:
URL: https://github.com/apache/flink/pull/17461#issuecomment-941929462


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c1b70a0eca8014f62194ccc75001da67accfb0f5 (Wed Oct 13 
05:14:28 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-24503).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
   The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] flinkbot commented on pull request #17461: [FLINK-24503] [Deployment / Kubernetes] change kubernetes.rest-service.exposed.type defaults to ClusterIP

2021-10-12 Thread GitBox


flinkbot commented on pull request #17461:
URL: https://github.com/apache/flink/pull/17461#issuecomment-941929030


   
   ## CI report:
   
   * c1b70a0eca8014f62194ccc75001da67accfb0f5 UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17311: [FLINK-24318][table-planner]Casting a number to boolean has different results between 'select' fields and 'where' condition

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17311:
URL: https://github.com/apache/flink/pull/17311#issuecomment-921772935


   
   ## CI report:
   
   * c0a29893bdaa662e2fa2ce1bb4f0398440c72fb5 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24504)
 
   * d0b3ab929530f10e31b323190b234f3d7b79514d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24999)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-24503) Security: native kubernetes exposes REST service via LoadBalancer in default

2021-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24503:
---
Labels: pull-request-available security  (was: security)

> Security: native kubernetes exposes REST service via LoadBalancer in default
> ----------------------------------------------------------------------------
>
> Key: FLINK-24503
> URL: https://issues.apache.org/jira/browse/FLINK-24503
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0, 1.13.1, 1.13.2
> Environment: Flink 1.13.2, native kubernetes
>Reporter: LI Zhennan
>Priority: Major
>  Labels: pull-request-available, security
>
> Hi,
>  
> Flink native k8s deployment exposes the REST service via LoadBalancer by default: 
> https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/config/#kubernetes-rest-service-exposed-type
> I propose to consider it a security issue.
> It is very likely for users to unconsciously expose their Flink REST service 
> to the public Internet, given they are deploying on a k8s cluster provided by 
> a cloud service like AWS or Google Cloud.
> Given access, anyone can browse and cancel Flink jobs via the REST service.
> Personally I noticed this issue after my staging deployment went online for 2 
> days.
> Here, I propose to alter the default value to `ClusterIP`, so that:
>  # the REST service is not exposed to the Internet accidentally;
>  # the developer can use `kubectl port-forward` to access the service by 
> default;
>  # the developer can still expose the REST service via LoadBalancer by 
> expressing it explicitly in `flink run-application` params.
> If it is okay, I would like to contribute the fix.
>  
> Thank you.





[GitHub] [flink] nanmu42 opened a new pull request #17461: [FLINK-24503] [Deployment / Kubernetes] change kubernetes.rest-service.exposed.type defaults to ClusterIP

2021-10-12 Thread GitBox


nanmu42 opened a new pull request #17461:
URL: https://github.com/apache/flink/pull/17461


   
   
   ## What is the purpose of the change
   
   This patch aims to improve the security of Flink native Kubernetes 
deployments on vendor k8s clusters: the original 
`kubernetes.rest-service.exposed.type` default of `LoadBalancer` results in the 
REST API and web UI being exposed to the public Internet if the developer 
forgets or fails to override the default.
   
   This is an impact-limited breaking change that should be noted.
   
   ## Brief change log
   
   * The default of `kubernetes.rest-service.exposed.type` is now `ClusterIP` 
instead of `LoadBalancer`.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes, it is 
`org.apache.flink.kubernetes.configuration.KubernetesConfigOptions`
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: yes, it is Kubernetes.
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature?  no.
 - If yes, how is the feature documented? N/A
   
   There seems to be a doc generation mechanism relying on 
`src/main/java/org/apache/flink/kubernetes/configuration/KubernetesConfigOptions.java`,
 so I guess the generated docs are updated automatically. Please correct me if I 
missed something.
   
   Thank you.
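   
   For context, a hedged sketch of how such an option default is typically 
declared with Flink's `ConfigOptions` builder (shape assumed, not copied from 
the actual `KubernetesConfigOptions` source; the docs table is generated from 
fields like this):
   
{code:java}
import org.apache.flink.configuration.ConfigOption;
import static org.apache.flink.configuration.ConfigOptions.key;

// Assumed shape of the option definition after this change:
public static final ConfigOption<ServiceExposedType> REST_SERVICE_EXPOSED_TYPE =
        key("kubernetes.rest-service.exposed.type")
                .enumType(ServiceExposedType.class)
                .defaultValue(ServiceExposedType.ClusterIP)
                .withDescription("The exposed type of the REST service.");
{code}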
   






[GitHub] [flink] flinkbot edited a comment on pull request #17311: [FLINK-24318][table-planner]Casting a number to boolean has different results between 'select' fields and 'where' condition

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17311:
URL: https://github.com/apache/flink/pull/17311#issuecomment-921772935


   
   ## CI report:
   
   * c0a29893bdaa662e2fa2ce1bb4f0398440c72fb5 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24504)
 
   * d0b3ab929530f10e31b323190b234f3d7b79514d UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-23696) RMQSourceTest.testRedeliveredSessionIDsAck fails on azure

2021-10-12 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17428004#comment-17428004
 ] 

Yuan Mei commented on FLINK-23696:
--

Another case:

https://dev.azure.com/mymeiyuan/Flink/_build/results?buildId=342&view=logs&j=dafbab6d-4616-5d7b-ee37-3c54e4828fd7&t=e204f081-e6cd-5c04-4f4c-919639b63be9

> RMQSourceTest.testRedeliveredSessionIDsAck fails on azure
> ---------------------------------------------------------
>
> Key: FLINK-23696
> URL: https://issues.apache.org/jira/browse/FLINK-23696
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / RabbitMQ
>Affects Versions: 1.14.0, 1.12.5, 1.13.2, 1.15.0
>Reporter: Xintong Song
>Assignee: Fabian Paul
>Priority: Major
>  Labels: test-stability
> Fix For: 1.15.0, 1.14.1, 1.13.4
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21792&view=logs&j=e1276d0f-df12-55ec-86b5-c0ad597d83c9&t=906e9244-f3be-5604-1979-e767c8a6f6d9&l=13297
> {code}
> Aug 10 01:15:35 [ERROR] Tests run: 16, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 3.181 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.rabbitmq.RMQSourceTest
> Aug 10 01:15:35 [ERROR] 
> testRedeliveredSessionIDsAck(org.apache.flink.streaming.connectors.rabbitmq.RMQSourceTest)
>   Time elapsed: 0.269 s  <<< FAILURE!
> Aug 10 01:15:35 java.lang.AssertionError: expected:<25> but was:<27>
> Aug 10 01:15:35   at org.junit.Assert.fail(Assert.java:88)
> Aug 10 01:15:35   at org.junit.Assert.failNotEquals(Assert.java:834)
> Aug 10 01:15:35   at org.junit.Assert.assertEquals(Assert.java:645)
> Aug 10 01:15:35   at org.junit.Assert.assertEquals(Assert.java:631)
> Aug 10 01:15:35   at 
> org.apache.flink.streaming.connectors.rabbitmq.RMQSourceTest.testRedeliveredSessionIDsAck(RMQSourceTest.java:407)
> Aug 10 01:15:35   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 10 01:15:35   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 10 01:15:35   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 10 01:15:35   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 10 01:15:35   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Aug 10 01:15:35   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 10 01:15:35   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Aug 10 01:15:35   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 10 01:15:35   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Aug 10 01:15:35   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Aug 10 01:15:35   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
> Aug 10 01:15:35   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Aug 10 01:15:35   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Aug 10 01:15:35   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Aug 10 01:15:35   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Aug 10 01:15:35   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Aug 10 01:15:35   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Aug 10 01:15:35   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Aug 10 01:15:35   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Aug 10 01:15:35   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Aug 10 01:15:35   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Aug 10 01:15:35   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Aug 10 01:15:35   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Aug 10 01:15:35   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Aug 10 01:15:35   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Aug 10 01:15:35   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Aug 10 01:15:35   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> Aug 10 01:15:35   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> Aug 10 01:15:35   at 
> {code}

[GitHub] [flink] Myasuka commented on a change in pull request #17419: [FLINK-24460][rocksdb] Rocksdb Iterator Error Handling Improvement

2021-10-12 Thread GitBox


Myasuka commented on a change in pull request #17419:
URL: https://github.com/apache/flink/pull/17419#discussion_r727686247



##
File path: 
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksIteratorWrapper.java
##
@@ -111,7 +114,6 @@ public void status() {
 @Override
 public void refresh() throws RocksDBException {
 iterator.refresh();
-status();

Review comment:
   I think we'd better keep the `status()` check for the `refresh()` API here.
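   
   For illustration, a minimal sketch of keeping the check (assuming the 
wrapper's existing `status()` helper, which rethrows a non-OK iterator status 
as a `RocksDBException`):
   
{code:java}
@Override
public void refresh() throws RocksDBException {
    iterator.refresh();
    // Keep surfacing iterator errors after a refresh, as the other wrapped calls do.
    status();
}
{code}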








[jira] [Commented] (FLINK-24529) flink sql job cannot use custom job name

2021-10-12 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-24529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17428000#comment-17428000
 ] 

Ingo Bürk commented on FLINK-24529:
---

CC [~twalthr]

> flink sql job cannot use custom job name
> ----------------------------------------
>
> Key: FLINK-24529
> URL: https://issues.apache.org/jira/browse/FLINK-24529
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.14.0
>Reporter: simenliuxing
>Priority: Minor
> Fix For: 1.14.1
>
>
> In the 1.14.0 branch, I set the configuration of StreamTableEnvironment as 
> follows:
> {code:java}
> Configuration configuration = tEnv.getConfig().getConfiguration();
> configuration.setString(PipelineOptions.NAME, jobName);{code}
> The custom name is not displayed on the Flink dashboard; the default 
> insert-into_xxx name is shown instead. It is displayed correctly in the 
> 1.13.2 branch.
> I found that the job name is set at line 756 of the class TableEnvironmentImpl 
> in the source code of 1.13.2:
> {code:java}
> String jobName = getJobName("insert-into_" + 
> String.join(",",sinkIdentifierNames));
> private String getJobName(String defaultJobName) { 
>   return 
> tableConfig.getConfiguration().getString(PipelineOptions.NAME,defaultJobName);
>  
> } 
> {code}
> But in the 1.14.0 branch, there is no getJobName method. Is there another 
> way to set the name here, or should the method be added back?
>  
>  
>  
>  
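
For illustration, a hedged end-to-end sketch of what the reporter expects to 
work ("my-custom-job-name" and the table names are placeholders; 
`EnvironmentSettings.inStreamingMode()` is the 1.14-style factory):

{code:java}
import org.apache.flink.configuration.PipelineOptions;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class JobNameExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // In 1.13.2 this name is shown on the dashboard instead of "insert-into_...".
        tEnv.getConfig().getConfiguration().setString(PipelineOptions.NAME, "my-custom-job-name");
        // Assumes sink_table and source_table have been created beforehand.
        tEnv.executeSql("INSERT INTO sink_table SELECT * FROM source_table");
    }
}
{code}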





[jira] [Commented] (FLINK-24529) flink sql job cannot use custom job name

2021-10-12 Thread simenliuxing (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17427995#comment-17427995
 ] 

simenliuxing commented on FLINK-24529:
--

[~jark]

Hi Jark, can you help me look at this problem?
 

> flink sql job cannot use custom job name
> ----------------------------------------
>
> Key: FLINK-24529
> URL: https://issues.apache.org/jira/browse/FLINK-24529
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.14.0
>Reporter: simenliuxing
>Priority: Minor
> Fix For: 1.14.1
>
>
> In the 1.14.0 branch, I set the configuration of StreamTableEnvironment as 
> follows:
> {code:java}
> Configuration configuration = tEnv.getConfig().getConfiguration();
> configuration.setString(PipelineOptions.NAME, jobName);{code}
> The custom name is not displayed on the Flink dashboard; the default 
> insert-into_xxx name is shown instead. It is displayed correctly in the 
> 1.13.2 branch.
> I found that the job name is set at line 756 of the class TableEnvironmentImpl 
> in the source code of 1.13.2:
> {code:java}
> String jobName = getJobName("insert-into_" + 
> String.join(",",sinkIdentifierNames));
> private String getJobName(String defaultJobName) { 
>   return 
> tableConfig.getConfiguration().getString(PipelineOptions.NAME,defaultJobName);
>  
> } 
> {code}
> But in the 1.14.0 branch, there is no getJobName method. Is there another 
> way to set the name here, or should the method be added back?
>  
>  
>  
>  





[jira] [Updated] (FLINK-24529) flink sql job cannot use custom job name

2021-10-12 Thread simenliuxing (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

simenliuxing updated FLINK-24529:
-
Description: 
In the 1.14.0 branch, I set the configuration of StreamTableEnvironment as 
follows:
{code:java}
Configuration configuration = tEnv.getConfig().getConfiguration();
configuration.setString(PipelineOptions.NAME, jobName);{code}
The custom name is not displayed on the Flink dashboard; the default 
insert-into_xxx name is shown instead. It is displayed correctly in the 1.13.2 
branch.

I found that the job name is set at line 756 of the class TableEnvironmentImpl 
in the source code of 1.13.2:
{code:java}
String jobName = getJobName("insert-into_" + 
String.join(",",sinkIdentifierNames));

private String getJobName(String defaultJobName) { 
  return 
tableConfig.getConfiguration().getString(PipelineOptions.NAME,defaultJobName); 
} 
{code}
But in the 1.14.0 branch, there is no getJobName method. Is there another way 
to set the name here, or should the method be added back?

 

 

 

 

  was:
In the 1.14.0 branch, I set the configuration of StreamTableEnvironment as 
follows:
{code:java}
Configuration configuration = tEnv.getConfig().getConfiguration();
configuration.setString(PipelineOptions.NAME, jobName);{code}
The custom name is not displayed on the Flink dashboard; the default 
insert-into_xxx name is shown instead. It is displayed correctly in the 1.13.2 
branch.

I found that the job name is set at line 756 of the class TableEnvironmentImpl 
in the source code of 1.13.2:
{code:java}
String jobName = getJobName("insert-into_" + 
String.join(",",sinkIdentifierNames));
private String getJobName(String defaultJobName) { 
  return 
tableConfig.getConfiguration().getString(PipelineOptions.NAME,defaultJobName); 
} 
{code}
But in the 1.14.0 branch, there is no getJobName method. Is there another way 
to set the name here, or should the method be added back?

 

 

 

 


> flink sql job cannot use custom job name
> ----------------------------------------
>
> Key: FLINK-24529
> URL: https://issues.apache.org/jira/browse/FLINK-24529
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.14.0
>Reporter: simenliuxing
>Priority: Minor
> Fix For: 1.14.1
>
>
> In the 1.14.0 branch, I set the configuration of StreamTableEnvironment as 
> follows:
> {code:java}
> Configuration configuration = tEnv.getConfig().getConfiguration();
> configuration.setString(PipelineOptions.NAME, jobName);{code}
> The custom name is not displayed on the Flink dashboard; the default 
> insert-into_xxx name is shown instead. It is displayed correctly in the 
> 1.13.2 branch.
> I found that the job name is set at line 756 of the class TableEnvironmentImpl 
> in the source code of 1.13.2:
> {code:java}
> String jobName = getJobName("insert-into_" + 
> String.join(",",sinkIdentifierNames));
> private String getJobName(String defaultJobName) { 
>   return 
> tableConfig.getConfiguration().getString(PipelineOptions.NAME,defaultJobName);
>  
> } 
> {code}
> But in the 1.14.0 branch, there is no getJobName method. Is there another 
> way to set the name here, or should the method be added back?
>  
>  
>  
>  





[jira] [Created] (FLINK-24529) flink sql job cannot use custom job name

2021-10-12 Thread simenliuxing (Jira)
simenliuxing created FLINK-24529:


 Summary: flink sql job cannot use custom job name
 Key: FLINK-24529
 URL: https://issues.apache.org/jira/browse/FLINK-24529
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.14.0
Reporter: simenliuxing
 Fix For: 1.14.1


In the 1.14.0 branch, I set the configuration of StreamTableEnvironment as 
follows:
{code:java}
Configuration configuration = tEnv.getConfig().getConfiguration();
configuration.setString(PipelineOptions.NAME, jobName);{code}
The custom name is not displayed on the Flink dashboard; the default 
insert-into_xxx name is shown instead. It is displayed correctly in the 1.13.2 
branch.

I found that the job name is set at line 756 of the class TableEnvironmentImpl 
in the source code of 1.13.2:
{code:java}
String jobName = getJobName("insert-into_" + 
String.join(",",sinkIdentifierNames));
private String getJobName(String defaultJobName) { 
  return 
tableConfig.getConfiguration().getString(PipelineOptions.NAME,defaultJobName); 
} 
{code}
But in the 1.14.0 branch, there is no getJobName method. Is there another way 
to set the name here, or should the method be added back?

 

 

 

 





[jira] [Created] (FLINK-24528) Flink HBase Async Lookup throws NPE if rowkey is null

2021-10-12 Thread zhisheng (Jira)
zhisheng created FLINK-24528:


 Summary: Flink HBase Async Lookup throws NPE if rowkey is null
 Key: FLINK-24528
 URL: https://issues.apache.org/jira/browse/FLINK-24528
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HBase
Affects Versions: 1.13.0
Reporter: zhisheng


For an HBase table created via Flink SQL DDL with 'lookup.async' = 'true', a 
null rowkey may trigger an NPE:
{code:java}

2021-10-12 21:11:07,100 INFO  
org.apache.flink.connector.hbase2.source.HBaseRowDataAsyncLookupFunction [] - 
start close ...2021-10-12 21:11:07,100 INFO  
org.apache.flink.connector.hbase2.source.HBaseRowDataAsyncLookupFunction [] - 
start close ...2021-10-12 21:11:07,103 WARN  
org.apache.flink.runtime.taskmanager.Task                    [] - 
LookupJoin(table=[default_catalog.default_database.dim_user_guid_relation], 
joinType=[LeftOuterJoin], async=[true], lookup=[rowkey=userGuid], 
select=[userGuid, last_time, rowkey, cf]) -> Calc(select=[userGuid AS 
user_guid, cf.user_new_id AS user_new_id, last_time AS 
usr_pwtx_ectx_driver_last_seek_order_time, _UTF-16LE'prfl.usr' AS metric]) -> 
Sink: Sink(table=[default_catalog.default_database.print_table], 
fields=[user_guid, user_new_id, usr_pwtx_ectx_driver_last_seek_order_time, 
metric]) (1/1)#0 (06bf3d7b0c341101e070796e20f7e571) switched from RUNNING to 
FAILED.java.lang.NullPointerException: null at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.RawAsyncTableImpl.get(RawAsyncTableImpl.java:249)
 ~[flink-sql-connector-hbase-2.2_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncTableImpl.get(AsyncTableImpl.java:96)
 ~[flink-sql-connector-hbase-2.2_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.connector.hbase2.source.HBaseRowDataAsyncLookupFunction.fetchResult(HBaseRowDataAsyncLookupFunction.java:187)
 ~[flink-sql-connector-hbase-2.2_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.connector.hbase2.source.HBaseRowDataAsyncLookupFunction.eval(HBaseRowDataAsyncLookupFunction.java:174)
 ~[flink-sql-connector-hbase-2.2_2.11-1.12.0.jar:1.12.0] at 
LookupFunction$24.asyncInvoke(Unknown Source) ~[?:?] at 
org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinRunner.asyncInvoke(AsyncLookupJoinRunner.java:139)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinRunner.asyncInvoke(AsyncLookupJoinRunner.java:53)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.processElement(AsyncWaitOperator.java:195)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:193)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:179)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:152)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:67)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:372)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:186)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:575)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:539) 
~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722) 
[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.runtime.taskmanager.Task.run(Task.java:547) 
[flink-dist_2.11-1.12.0.jar:1.12.0] at java.lang.Thread.run(Thread.java:748) 
[?:1.8.0_201] Suppressed: java.lang.Exception: java.lang.NoClassDefFoundError: 
org/apache/flink/hbase/shaded/org/apache/commons/io/IOUtils at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runAndSuppressThrowable(StreamTask.java:723)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.runtime.tasks.StreamTask.cleanUpInvoke(StreamTask.java:643)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:552) 
~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722) 
[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.runtime.taskmanager.Task.run(Task.java:547) 
[flink-dist_2.11-1.12.0.jar:1.12.0] at java.lang.Thread.run(Thread.java:748) 
[?:1.8.0_201] Caused by: java.lang.NoClassDefFoundError: 
{code}
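
A hedged sketch of the kind of null guard that would avoid the NPE (method 
shape inferred from the stack trace above; the exact signature in 
HBaseRowDataAsyncLookupFunction is an assumption):

{code:java}
public void eval(CompletableFuture<Collection<RowData>> resultFuture, Object rowKey) {
    if (rowKey == null) {
        // A null rowkey can never match an HBase row: complete with an empty
        // result instead of handing a null Get to the HBase client (which NPEs).
        resultFuture.complete(Collections.emptyList());
        return;
    }
    fetchResult(resultFuture, 0, rowKey); // existing retry-aware lookup path
}
{code}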

[jira] [Created] (FLINK-24527) Add SqlUseDialect and UseDialectOperation

2021-10-12 Thread chouc (Jira)
chouc created FLINK-24527:
-

 Summary: Add SqlUseDialect and UseDialectOperation
 Key: FLINK-24527
 URL: https://issues.apache.org/jira/browse/FLINK-24527
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.14.0
Reporter: chouc
 Fix For: 1.14.1


Users currently have to call tabEnv.getConfig().setSqlDialect(SqlDialect.HIVE) 
to choose the HIVE dialect. It may be more convenient to support `use dialect 
hive` or `use dialect default`.
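
For illustration, the current API next to the proposed syntax (the SQL 
statement is hypothetical until this ticket is implemented):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableEnvironment;

public class DialectExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Today: switching the dialect requires the Java/Scala API.
        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
        // Proposed by this ticket (hypothetical, not yet supported):
        // tEnv.executeSql("USE DIALECT HIVE");
    }
}
{code}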





[GitHub] [flink-ml] yunfengzhou-hub commented on a change in pull request #18: [FLINK-24279] Support withBroadcast in DataStream by caching in static variables

2021-10-12 Thread GitBox


yunfengzhou-hub commented on a change in pull request #18:
URL: https://github.com/apache/flink-ml/pull/18#discussion_r727636497



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/broadcast/BroadcastContext.java
##
@@ -0,0 +1,123 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.broadcast;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class BroadcastContext {
+/**
+ * Store broadcast DataStreams in a Map. The key is (broadcastName, 
partitionId) and the value
+ * is (isBroadcastVariableReady, cacheList).
+ */
+private static Map<Tuple2<String, Integer>, Tuple2<Boolean, List<?>>> broadcastVariables =
+        new HashMap<>();
+/**
+ * We use lock because we want to enable `getBroadcastVariable(String)` in 
a TM with multiple
+ * slots here. Note that using ConcurrentHashMap is not enough since we 
need "contains and get
+ * in an atomic operation".
+ */
+private static ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+
+public static void putBroadcastVariable(
+        Tuple2<String, Integer> key, Tuple2<Boolean, List<?>> variable) {
+lock.writeLock().lock();
+try {
+broadcastVariables.put(key, variable);
+} finally {
+lock.writeLock().unlock();
+}
+}
+
+/**
+ * Get the cached list with the given key.
+ *
+ * @param key the (broadcastName, partitionId) key
+ * @param <T> element type of the cached list
+ * @return the cached list.
+ */
+public static <T> List<T> getBroadcastVariable(Tuple2<String, Integer> key) {
+lock.readLock().lock();
+List<?> result = null;
+try {
+result = broadcastVariables.get(key).f1;
+} finally {
+lock.readLock().unlock();
+}
+return (List<T>) result;
+}
+
+/**
+ * Get broadcast variables by name.
+ *
+ * @param name the broadcast variable name
+ * @param <T> element type of the cached list
+ * @return the cached list, or null if no partition has finished caching.
+ */
+public static <T> List<T> getBroadcastVariable(String name) {
+lock.readLock().lock();
+List<?> result = null;
+try {
+for (Tuple2<String, Integer> nameAndPartitionId : 
broadcastVariables.keySet()) {
+if (name.equals(nameAndPartitionId.f0) && 
isCacheFinished(nameAndPartitionId)) {
+result = broadcastVariables.get(nameAndPartitionId).f1;
+break;

Review comment:
   If the cached broadcast variables are the same regardless of 
partitionId, I personally think that it might be unnecessary to store 
partitionId as part of the key of `broadcastVariables`.
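   
   A minimal sketch of that suggestion (types taken from the diff above):
   
{code:java}
// If partitionId is redundant, the map could be keyed by broadcast name alone:
private static final Map<String, Tuple2<Boolean, List<?>>> broadcastVariables =
        new HashMap<>();
{code}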








[GitHub] [flink-ml] yunfengzhou-hub commented on a change in pull request #18: [FLINK-24279] Support withBroadcast in DataStream by caching in static variables

2021-10-12 Thread GitBox


yunfengzhou-hub commented on a change in pull request #18:
URL: https://github.com/apache/flink-ml/pull/18#discussion_r727635514



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/broadcast/operator/CacheStreamOperator.java
##
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.broadcast.operator;
+
+import org.apache.flink.api.common.state.ListState;
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.ml.common.broadcast.BroadcastContext;
+import org.apache.flink.runtime.state.StateInitializationContext;
+import org.apache.flink.runtime.state.StateSnapshotContext;
+import org.apache.flink.streaming.api.operators.AbstractInput;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperatorV2;
+import org.apache.flink.streaming.api.operators.BoundedMultiInput;
+import org.apache.flink.streaming.api.operators.Input;
+import org.apache.flink.streaming.api.operators.MultipleInputStreamOperator;
+import org.apache.flink.streaming.api.operators.StreamOperatorParameters;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+
+import org.apache.commons.collections.IteratorUtils;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.List;
+
+/** The operator that processes all broadcast inputs and stores them in {@link 
BroadcastContext}. */
+public class CacheStreamOperator<OUT> extends AbstractStreamOperatorV2<OUT>

Review comment:
   Could it be better to rename `CacheStreamOperator` to something like 
`BroadcastStreamOperator`? I find it a little bit hard to associate the current 
name with its functionality.








[GitHub] [flink-ml] yunfengzhou-hub commented on a change in pull request #18: [FLINK-24279] Support withBroadcast in DataStream by caching in static variables

2021-10-12 Thread GitBox


yunfengzhou-hub commented on a change in pull request #18:
URL: https://github.com/apache/flink-ml/pull/18#discussion_r727632118



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/broadcast/operator/OneInputBroadcastWrapperOperator.java
##
@@ -0,0 +1,170 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.broadcast.operator;
+
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.ml.iteration.datacache.nonkeyed.DataCacheReader;
+import org.apache.flink.ml.iteration.datacache.nonkeyed.DataCacheWriter;
+import org.apache.flink.ml.iteration.datacache.nonkeyed.Segment;
+import org.apache.flink.runtime.checkpoint.CheckpointOptions;
+import org.apache.flink.runtime.state.CheckpointStreamFactory;
+import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.OperatorSnapshotFutures;
+import org.apache.flink.streaming.api.operators.StreamOperatorFactory;
+import org.apache.flink.streaming.api.operators.StreamOperatorParameters;
+import org.apache.flink.streaming.api.operators.StreamTaskStateInitializer;
+import org.apache.flink.streaming.api.watermark.Watermark;
+import org.apache.flink.streaming.runtime.streamrecord.LatencyMarker;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.streaming.runtime.watermarkstatus.WatermarkStatus;
+
+import org.apache.commons.collections.IteratorUtils;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+/** Wrapper for WithBroadcastOneInputStreamOperator. */
+public class OneInputBroadcastWrapperOperator<IN, OUT>
+        extends AbstractBroadcastWrapperOperator<OUT, OneInputStreamOperator<IN, OUT>>
+        implements OneInputStreamOperator<IN, OUT> {
+
+private List<IN> cache;
+
+public OneInputBroadcastWrapperOperator(
+        StreamOperatorParameters<OUT> parameters,
+        StreamOperatorFactory<OUT> operatorFactory,
+        String[] broadcastStreamNames,
+        TypeInformation<?>[] inTypes,
+        boolean[] isBlocking) {
+super(parameters, operatorFactory, broadcastStreamNames, inTypes, 
isBlocking);
+this.cache = new ArrayList<>();
+}
+
+@Override
+public void processElement(StreamRecord<IN> streamRecord) throws Exception {
+if (isBlocking[0]) {
+if (areBroadcastVariablesReady()) {
+for (IN ele : cache) {
+wrappedOperator.processElement(new StreamRecord<>(ele));
+}
+cache.clear();
+wrappedOperator.processElement(streamRecord);
+
+} else {
+cache.add(streamRecord.getValue());

Review comment:
   I can see that this PR is trying to use this caching list to avoid filling 
up Flink's buffers, and the list is only stored in memory. I am worried that if 
the size of the cached records grows beyond the available memory, this solution 
might cause Java to throw an OutOfMemoryError and the Flink job to fail.
   
   Shall we add some mechanism like the following (see the sketch after this 
list) to avoid this problem?
   
   - store part of the cached records on disk to avoid excess usage of memory.
   - check the size of cached records stored in memory and handle possible 
exceptions.
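   
   A hedged sketch of what I mean (names and threshold are hypothetical; 
`spillToDisk` could for example be backed by the `DataCacheWriter` already 
imported in this file):
   
{code:java}
private static final int MAX_IN_MEMORY_RECORDS = 100_000; // assumed bound, tune as needed

private void cacheRecord(IN record) throws Exception {
    if (cache.size() < MAX_IN_MEMORY_RECORDS) {
        cache.add(record);
    } else {
        spillToDisk(record); // hypothetical helper writing to an on-disk segment
    }
}
{code}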








[jira] [Updated] (FLINK-24526) Combine all environment variables into a single section

2021-10-12 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-24526:
-
Description: 
In every release the first thing I always do is scour the release guide for all 
the environment variables that I need and copy them into a text file to ease the 
re-initialization of the release environment (say, after a restart).

We should just move all of them into a single code block in the release guide.

  was:
In every release the first thing I always do is scour the release guide for all 
the environment variables that I need and copy them into a text file to ease the 
re-initialization of the release environment (say, after a restart).

We should just move all of them into a single code block in the release guide
.


> Combine all environment variables into a single section
> -------------------------------------------------------
>
> Key: FLINK-24526
> URL: https://issues.apache.org/jira/browse/FLINK-24526
> Project: Flink
>  Issue Type: Sub-task
>  Components: Release System
>Reporter: Chesnay Schepler
>Priority: Major
>
> In every release the first thing I always do is scour the release guide for 
> all the environment variables that I need and copy them into a text file to 
> ease the re-initialization of the release environment (say, after a restart).
> We should just move all of them into a single code block in the release guide.





[jira] [Created] (FLINK-24526) Combine all environment variables into a single section

2021-10-12 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-24526:


 Summary: Combine all environment variables into a single section
 Key: FLINK-24526
 URL: https://issues.apache.org/jira/browse/FLINK-24526
 Project: Flink
  Issue Type: Sub-task
  Components: Release System
Reporter: Chesnay Schepler


In every release the first thing I always do is scour the release guide for all 
the environment variables that I need and copy them into a text file to ease the 
re-initialization of the release environment (say, after a restart).

We should just move all of them into a single code block in the documentation.





[jira] [Updated] (FLINK-24526) Combine all environment variables into a single section

2021-10-12 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-24526:
-
Description: 
In every release the first thing I always do is scour the release guide for all 
the environment variables that I need and copy them into a text file to ease the 
re-initialization of the release environment (say, after a restart).

We should just move all of them into a single code block in the release guide
.

  was:
In every release the first thing I always do is scour the release guide for all 
the environment variables that I need and copy them into a text file to ease the 
re-initialization of the release environment (say, after a restart).

We should just move all of them into a single code block in the documentation.


> Combine all environment variables into a single section
> -------------------------------------------------------
>
> Key: FLINK-24526
> URL: https://issues.apache.org/jira/browse/FLINK-24526
> Project: Flink
>  Issue Type: Sub-task
>  Components: Release System
>Reporter: Chesnay Schepler
>Priority: Major
>
> In every release the first thing I always do is scour the release guide for 
> all the environment variables that I need and copy them into a text file to 
> ease the re-initialization of the release environment (say, after a restart).
> We should just move all of them into a single code block in the release guide
> .





[jira] [Assigned] (FLINK-24517) Streamline Flink releases

2021-10-12 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-24517:


Assignee: (was: Chesnay Schepler)

> Streamline Flink releases
> -------------------------
>
> Key: FLINK-24517
> URL: https://issues.apache.org/jira/browse/FLINK-24517
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Release System
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.15.0
>
>
> Collection of changes that I'd like to make based on recent experiences with 
> the 1.13.3 release.





[jira] [Created] (FLINK-24525) Use different names for release branch and tags

2021-10-12 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-24525:


 Summary: Use different names for release branch and tags
 Key: FLINK-24525
 URL: https://issues.apache.org/jira/browse/FLINK-24525
 Project: Flink
  Issue Type: Sub-task
  Components: Release System
Reporter: Chesnay Schepler


As part of the release process we create a separate branch and tag that 
currently use the same name (e.g., release-1.13.3-rc1).
This makes it more complicated to push the branch/tag because you always need 
to tell git which of the two you are referencing.

We should use a different name for the branch to remedy that. It is only 
temporary anyway.





[jira] [Created] (FLINK-24524) Rework release guide and scripts to be used from the root directory

2021-10-12 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-24524:


 Summary: Rework release guide and scripts to be used from the root 
directory
 Key: FLINK-24524
 URL: https://issues.apache.org/jira/browse/FLINK-24524
 Project: Flink
  Issue Type: Sub-task
  Components: Release System
Reporter: Chesnay Schepler


The current release process is generally run from the tools directory. To 
document that, all commands in the release guide are prefixed with "tools $".

The result is that it is not possible to copy the scripts as is from the guide, 
because you always first have to remove that prefix.

We should just remove the requirement to run things from the tools directory.





[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL kafka connector fails to write to the topic with schema

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Description: 
 *Summary:*

Given a schema registered to a topic with a name and namespace,

when Flink SQL with the upsert-kafka connector writes to the topic,

it fails because the row it tries to produce is not compatible with the 
registered schema.

*Root cause:*

The upsert-kafka connector auto-generates a schema with the +*name `record` 
and no namespace*+. The schema below is generated by the connector. I'm 
expecting the connector to pull the schema from the subject and use a 
ConfluentAvroRowSerialization [which I believe does not exist today] to 
serialize using the schema from the subject.

Schema generated by the upsert-kafka connector, which uses AvroRowSerializer 
internally:

!image-2021-10-12-12-21-46-177.png|width=813,height=23!

Schema registered to the subject:

{code}
{
  "type" : "record",
  "name" : "SimpleCustomer",
  "namespace" : "com...example.model",
  "fields" : [ {
    "name" : "customerId",
    "type" : "string"
  }, {
    "name" : "age",
    "type" : "int",
    "default" : 0
  } ]
}
{code}

Table SQL with upsert-kafka connector:

!image-2021-10-12-12-18-53-016.png|width=351,height=176!

The name "record" is hardcoded:

!image-2021-10-12-12-21-02-008.png|width=464,height=138!

  was:
 *Summary:*

Given a schema registered to a topic with name and namespave

when Flink SQL with the upsert-kafka connector writes to the topic,

it fails because the row it tries to produce is not compatible with the 
registered schema.

*Root cause:*

The upsert-kafka connector auto-generates a schema with the +*name `record` 
and no namespace*+. The schema below is generated by the connector. I'm 
expecting the connector to pull the schema from the subject and use a 
ConfluentAvroRowSerialization [which I believe does not exist today] to 
serialize using the schema from the subject.

Schema generated by the upsert-kafka connector, which uses AvroRowSerializer 
internally:

!image-2021-10-12-12-21-46-177.png|width=813,height=23!

Schema registered to a subject:

{code}
{
  "type" : "record",
  "name" : "SimpleCustomer",
  "namespace" : "com...example.model",
  "fields" : [ {
    "name" : "customerId",
    "type" : "string"
  }, {
    "name" : "age",
    "type" : "int",
    "default" : 0
  } ]
}
{code}

Table SQL with upsert-kafka connector:

!image-2021-10-12-12-18-53-016.png|width=351,height=176!

The name "record" is hardcoded:

!image-2021-10-12-12-21-02-008.png|width=464,height=138!
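
For illustration, a hedged sketch of where the mismatch originates (assuming 
the format derives its writer schema from the SQL row type via Flink's 
`AvroSchemaConverter`, rather than fetching the schema registered under the 
subject; `rowType` is a placeholder for the sink's row type):

{code:java}
import org.apache.avro.Schema;
import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter;

// The generated record is named "record" with no namespace, regardless of the
// "SimpleCustomer" schema registered under the subject.
Schema generated = AvroSchemaConverter.convertToSchema(rowType);
{code}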


> Avro Confluent Registry SQL kafka connector  fails to write to the topic with 
> schema 
> -------------------------------------------------------------------------------------
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png, 
> image-2021-10-12-12-18-53-016.png, image-2021-10-12-12-19-37-227.png, 
> image-2021-10-12-12-21-02-008.png, image-2021-10-12-12-21-46-177.png
>
>
>  *Summary:*
> Given a schema registered to a topic with name and namespace
> when the 

[jira] [Created] (FLINK-24523) Extend GPG setup instructions

2021-10-12 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-24523:


 Summary: Extend GPG setup instructions
 Key: FLINK-24523
 URL: https://issues.apache.org/jira/browse/FLINK-24523
 Project: Flink
  Issue Type: Sub-task
  Components: Release System
Reporter: Chesnay Schepler


The current GPG setup instructions are insufficient as they do not mention that 
the password store duration should (at least temporarily) be increased to 
several hours.
As is, it is easily possible that the password is evicted during the build,
failing the release process with a timeout.
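
A minimal sketch of one way to do this, assuming gpg-agent caches the passphrase (the TTL values below are arbitrary examples, in seconds):

{code}
# ~/.gnupg/gpg-agent.conf -- keep the cached passphrase for up to 10 hours
default-cache-ttl 36000
max-cache-ttl 36000
{code}

Reload the agent afterwards with {{gpgconf --kill gpg-agent}}.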



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL kafka connector fails to write to the topic with schema

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Affects Version/s: 1.14.0

> Avro Confluent Registry SQL kafka connector  fails to write to the topic with 
> schema 
> -
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png, 
> image-2021-10-12-12-18-53-016.png, image-2021-10-12-12-19-37-227.png, 
> image-2021-10-12-12-21-02-008.png, image-2021-10-12-12-21-46-177.png
>
>
>  *Summary:*
> Given a schema registered to a topic with a name and namespace,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the row it tries to produce is not compatible with the
> registered schema.
>  
> *Root cause:*
> The upsert-kafka connector auto-generates a schema with the +*name `record`
> and no namespace*+. The schema below is generated by the connector. I expect
> the connector to pull the schema from the subject and use a
> ConfluentAvroRowSerialization (which I believe does not exist today) to
> serialize using the schema from the subject.
> Schema generated by the upsert-kafka connector, which uses
> AvroRowSerializer internally:
> !image-2021-10-12-12-21-46-177.png|width=813,height=23!
> Schema Registered to a subject:
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com...example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
> Table SQL with upsert-kafka connector
> !image-2021-10-12-12-18-53-016.png|width=351,height=176!
>  
> The name "record" hardcoded
> !image-2021-10-12-12-21-02-008.png|width=464,height=138!  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL kafka connector fails to write to the topic with schema

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Description: 
 *Summary:*

Given a schema registered to a topic with a name and namespace,

when the Flink SQL job with the upsert-kafka connector writes to the topic,

it fails because the row it tries to produce is not compatible with the
registered schema.

 

*Root cause:*

The upsert-kafka connector auto-generates a schema with the +*name `record`
and no namespace*+. The schema below is generated by the connector. I expect
the connector to pull the schema from the subject and use a
ConfluentAvroRowSerialization (which I believe does not exist today) to
serialize using the schema from the subject.

Schema generated by the upsert-kafka connector, which uses AvroRowSerializer
internally:

!image-2021-10-12-12-21-46-177.png|width=813,height=23!

Schema Registered to a subject:

{
  "type" : "record",
  "name" : "SimpleCustomer",
  "namespace" : "com...example.model",
  "fields" : [
    { "name" : "customerId", "type" : "string" },
    { "name" : "age", "type" : "int", "default" : 0 }
  ]
}

 

Table SQL with upsert-kafka connector

!image-2021-10-12-12-18-53-016.png|width=351,height=176!

 

The name "record" hardcoded

!image-2021-10-12-12-21-02-008.png|width=464,height=138!  

  was:
 

Given a schema registered to a topic with a name and namespace,

when the Flink SQL job with the upsert-kafka connector writes to the topic,

it fails because the row it tries to produce is not compatible with the
registered schema.

{
  "type" : "record",
  "name" : "SimpleCustomer",
  "namespace" : "com...example.model",
  "fields" : [
    { "name" : "customerId", "type" : "string" },
    { "name" : "age", "type" : "int", "default" : 0 }
  ]
}

 

 


> Avro Confluent Registry SQL kafka connector  fails to write to the topic with 
> schema 
> -
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png, 
> image-2021-10-12-12-18-53-016.png, image-2021-10-12-12-19-37-227.png, 
> image-2021-10-12-12-21-02-008.png, image-2021-10-12-12-21-46-177.png
>
>
>  *Summary:*
> Given a schema registered to a topic with a name and namespace,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the row it tries to produce is not compatible with the
> registered schema.
>  
> *Root cause:*
> The upsert-kafka connector auto-generates a schema with the +*name `record`
> and no namespace*+. The schema below is generated by the connector. I expect
> the connector to pull the schema from the subject and use a
> ConfluentAvroRowSerialization (which I believe does not exist today) to
> serialize using the schema from the subject.
> Schema generated by the upsert-kafka connector, which uses
> AvroRowSerializer internally:
> !image-2021-10-12-12-21-46-177.png|width=813,height=23!
> Schema Registered to a subject:
> {
>  

[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL kafka connector fails to write to the topic with schema

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Attachment: image-2021-10-12-12-21-46-177.png

> Avro Confluent Registry SQL kafka connector  fails to write to the topic with 
> schema 
> -
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png, 
> image-2021-10-12-12-18-53-016.png, image-2021-10-12-12-19-37-227.png, 
> image-2021-10-12-12-21-02-008.png, image-2021-10-12-12-21-46-177.png
>
>
>  
> Given a schema registered to a topic with a name and namespace,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the row it tries to produce is not compatible with the
> registered schema.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com...example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL kafka connector fails to write to the topic with schema

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Attachment: image-2021-10-12-12-21-02-008.png

> Avro Confluent Registry SQL kafka connector  fails to write to the topic with 
> schema 
> -
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png, 
> image-2021-10-12-12-18-53-016.png, image-2021-10-12-12-19-37-227.png, 
> image-2021-10-12-12-21-02-008.png, image-2021-10-12-12-21-46-177.png
>
>
>  
> Given a schema registered to a topic with a name and namespace,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the row it tries to produce is not compatible with the
> registered schema.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com...example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24522) Remove spaces from python wheel azure artifacts

2021-10-12 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-24522:


 Summary: Remove spaces from python wheel azure artifacts
 Key: FLINK-24522
 URL: https://issues.apache.org/jira/browse/FLINK-24522
 Project: Flink
  Issue Type: Sub-task
  Components: Release System
Reporter: Chesnay Schepler


The Python wheel artifacts downloaded from Azure contain spaces in their file
names. This makes scripting overly complicated (since we have to escape them).
Rename the files and adjust the release guide accordingly.
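
For illustration, this is the kind of escaping/renaming dance the spaces currently force on us (a sketch; the download layout is an assumption):

{code}
# Replace spaces in the downloaded wheel file names with underscores.
for f in ./*" "*.whl; do
  mv "$f" "${f// /_}"
done
{code}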



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL kafka connector fails to write to the topic with schema

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Attachment: image-2021-10-12-12-19-37-227.png

> Avro Confluent Registry SQL kafka connector  fails to write to the topic with 
> schema 
> -
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png, 
> image-2021-10-12-12-18-53-016.png, image-2021-10-12-12-19-37-227.png
>
>
>  
> Given a schema registered to a topic with a name and namespace,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the row it tries to produce is not compatible with the
> registered schema.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com...example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24521) Add release script for Python wheel step

2021-10-12 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-24521:


 Summary: Add release script for Python wheel step
 Key: FLINK-24521
 URL: https://issues.apache.org/jira/browse/FLINK-24521
 Project: Flink
  Issue Type: Sub-task
  Components: Release System
Reporter: Chesnay Schepler


Building the wheels for the Python API includes a large number of manual steps
that we could potentially automate (see the sketch after this list), including:
- creating the pipeline
- triggering the build
- waiting for the build to finish
- downloading the artifacts
- unpacking the artifacts to the desired location
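
A very rough sketch of what this could look like with the Azure DevOps CLI (the pipeline name, artifact name, and paths are placeholders, and the exact commands would need to be verified against our setup):

{code}
# Trigger the wheel build and download its artifacts once it finishes.
RUN_ID=$(az pipelines run --name python-wheels --query id -o tsv)
# ... poll the run until it reports "completed" ...
az pipelines runs artifact download \
  --run-id "$RUN_ID" --artifact-name wheels --path ./dist/
{code}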



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL kafka connector fails to write to the topic with schema

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Attachment: image-2021-10-12-12-18-53-016.png

> Avro Confluent Registry SQL kafka connector  fails to write to the topic with 
> schema 
> -
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png, 
> image-2021-10-12-12-18-53-016.png
>
>
>  
> Given a schema registered to a topic with a name and namespace,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the row it tries to produce is not compatible with the
> registered schema.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com...example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24520) Building of WebUI failed on Ubuntu 20

2021-10-12 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-24520:


 Summary: Building of WebUI failed on Ubuntu 20
 Key: FLINK-24520
 URL: https://issues.apache.org/jira/browse/FLINK-24520
 Project: Flink
  Issue Type: Sub-task
  Components: Build System
Affects Versions: 1.13.3
Reporter: Chesnay Schepler


While preparing the 1.13.3 release I ran into an issue where the WebUI could 
not be built via maven.

{code}
[INFO] husky > setting up git hooks
[INFO] HUSKY_SKIP_INSTALL environment variable is set to 'true', skipping Git 
hooks installation.
[ERROR] Aborted
[ERROR] npm ERR! code ELIFECYCLE
[ERROR] npm ERR! errno 134
[ERROR] npm ERR! husky@1.3.1 install: `node husky install`
[ERROR] npm ERR! Exit status 134
[ERROR] npm ERR!
[ERROR] npm ERR! Failed at the husky@1.3.1 install script.
[ERROR] npm ERR! This is probably not a problem with npm. There is likely 
additional logging output above.
{code}

However, manually invoking npm from the command-line worked just fine.

Deleting the node[_modules] directory had no effect.

We should investigate what was causing these issues.
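
As a possible workaround while this is being investigated, the WebUI build can be skipped entirely via the existing build flag (note this skips the npm build altogether, which may not be acceptable for a release build):

{code}
mvn clean install -DskipTests -Dskip.npm=true
{code}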



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL kafka connector fails to write to the topic with schema

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Description: 
 

Given a schema registered to a topic with a name and namespace,

when the Flink SQL job with the upsert-kafka connector writes to the topic,

it fails because the row it tries to produce is not compatible with the
registered schema.

{
  "type" : "record",
  "name" : "SimpleCustomer",
  "namespace" : "com...example.model",
  "fields" : [
    { "name" : "customerId", "type" : "string" },
    { "name" : "age", "type" : "int", "default" : 0 }
  ]
}

 

 

  was:
 

Given a schema registered to a topic with a default value of '0' for an int,

when the Flink SQL job with the upsert-kafka connector writes to the topic,

it fails because the registered schema is not compatible with the data
[schema] it is producing, where there is no default value ["default" : 0]
for the int.

{
  "type" : "record",
  "name" : "SimpleCustomer",
  "namespace" : "com...example.model",
  "fields" : [
    { "name" : "customerId", "type" : "string" },
    { "name" : "age", "type" : "int", "default" : 0 }
  ]
}

 

 


> Avro Confluent Registry SQL kafka connector  fails to write to the topic with 
> schema 
> -
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png
>
>
>  
> Given a schema registered to a topic with a name and namespace,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the row it tries to produce is not compatible with the
> registered schema.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com...example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24519) Add release script for uploading release to SVN

2021-10-12 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-24519:


 Summary: Add release script for uploading release to SVN
 Key: FLINK-24519
 URL: https://issues.apache.org/jira/browse/FLINK-24519
 Project: Flink
  Issue Type: Sub-task
  Components: Release System
Reporter: Chesnay Schepler


We can add a release script to subsume the {{Stage source and binary releases 
on dist.apache.org}} section of the release guide.
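
A rough sketch of the steps such a script would wrap, following the usual dist.apache.org flow (the version and paths below are placeholders):

{code}
# Stage source/binary artifacts under dist/dev for the release vote.
svn checkout https://dist.apache.org/repos/dist/dev/flink flink-dist-dev --depth=immediates
mkdir flink-dist-dev/flink-1.13.3-rc1
cp flink-*.tgz* flink-dist-dev/flink-1.13.3-rc1/
cd flink-dist-dev
svn add flink-1.13.3-rc1
svn commit -m "Add flink-1.13.3-rc1"
{code}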



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL not fails to write to the topic with schema with default value for int

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Description: 
 

Given a schema registered to a topic with a default value of '0' for an int,

when the Flink SQL job with the upsert-kafka connector writes to the topic,

it fails because the registered schema is not compatible with the data
[schema] it is producing, where there is no default value ["default" : 0]
for the int.

{
  "type" : "record",
  "name" : "SimpleCustomer",
  "namespace" : "com...example.model",
  "fields" : [
    { "name" : "customerId", "type" : "string" },
    { "name" : "age", "type" : "int", "default" : 0 }
  ]
}

 

 

  was:
 

Given a schema registered to a topic with a default value of '0' for an int,

when the Flink SQL job with the upsert-kafka connector writes to the topic,

it fails because the registered schema is not compatible with the data
[schema] it is producing, where there is no default value ["default" : 0]
for the int.

{
  "type" : "record",
  "name" : "SimpleCustomer",
  "namespace" : "com.nordstrom.nap.onehop.example.model",
  "fields" : [
    { "name" : "customerId", "type" : "string" },
    { "name" : "age", "type" : "int", "default" : 0 }
  ]
}

 

The full code is here [^KafkaTableTumbling.java]


> Avro Confluent Registry SQL not fails to write to the topic with schema with 
> default value for int
> --
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png
>
>
>  
> Given a schema registered to a topic with a default value of '0' for an int,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the registered schema is not compatible with the data
> [schema] it is producing, where there is no default value ["default" : 0]
> for the int.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com...example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default"

[jira] [Created] (FLINK-24518) Combine deployment of staging jars and binary release creation

2021-10-12 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-24518:


 Summary: Combine deployment of staging jars and binary release 
creation
 Key: FLINK-24518
 URL: https://issues.apache.org/jira/browse/FLINK-24518
 Project: Flink
  Issue Type: Sub-task
  Components: Release System
Reporter: Chesnay Schepler


The deployment of jars to repository.apache.org and the creation of binary 
releases are currently separate steps, that are each done once per Scala 
version.

However, Maven-wise, the deployment of jars already does everything needed for
the binary release. We'd just have to zip flink-dist and be done.

We should combine these steps to halve the number of required builds.
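
A sketch of the idea, assuming the usual flink-dist build layout (the paths and variables below are illustrative, not the actual release tooling):

{code}
# After `mvn deploy` has built everything for one Scala version,
# package the binary release directly from the existing build output.
cd flink-dist/target/flink-${VERSION}-bin
tar czf flink-${VERSION}-bin-scala_${SCALA_VERSION}.tgz flink-${VERSION}
{code}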



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL kafka connector fails to write to the topic with schema

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Summary: Avro Confluent Registry SQL kafka connector  fails to write to the 
topic with schema   (was: Avro Confluent Registry SQL not fails to write to the 
topic with schema with default value for int)

> Avro Confluent Registry SQL kafka connector  fails to write to the topic with 
> schema 
> -
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png
>
>
>  
> Given a schema registered to a topic with a default value of '0' for an int,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the registered schema is not compatible with the data
> [schema] it is producing, where there is no default value ["default" : 0]
> for the int.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com...example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL not fails to write to the topic with schema with default value for int

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Attachment: (was: KafkaTableTumbling.java)

> Avro Confluent Registry SQL not fails to write to the topic with schema with 
> default value for int
> --
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png
>
>
>  
> Given a schema registered to a topic with a default value of '0' for an int,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the registered schema is not compatible with the data
> [schema] it is producing, where there is no default value ["default" : 0]
> for the int.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com.nordstrom.nap.onehop.example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
> The full code is here [^KafkaTableTumbling.java]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL not fails to write to the topic with schema with default value for int

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Attachment: (was: image-2021-10-12-12-09-24-259.png)

> Avro Confluent Registry SQL not fails to write to the topic with schema with 
> default value for int
> --
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png
>
>
>  
> Given a schema registered to a topic with a default value of '0' for an int,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the registered schema is not compatible with the data
> [schema] it is producing, where there is no default value ["default" : 0]
> for the int.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com.nordstrom.nap.onehop.example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
> The full code is here [^KafkaTableTumbling.java]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL not fails to write to the topic with schema with default value for int

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Attachment: (was: image-2021-10-12-12-06-16-262.png)

> Avro Confluent Registry SQL not fails to write to the topic with schema with 
> default value for int
> --
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: Screen Shot 2021-10-12 at 10.38.55 AM.png, 
> image-2021-10-12-12-09-30-374.png, image-2021-10-12-12-11-04-664.png
>
>
>  
> Given a schema registered to a topic with a default value of '0' for an int,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the registered schema is not compatible with the data
> [schema] it is producing, where there is no default value ["default" : 0]
> for the int.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com.nordstrom.nap.onehop.example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
> The full code is here [^KafkaTableTumbling.java]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24517) Streamline Flink releases

2021-10-12 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-24517:


 Summary: Streamline Flink releases
 Key: FLINK-24517
 URL: https://issues.apache.org/jira/browse/FLINK-24517
 Project: Flink
  Issue Type: Technical Debt
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.15.0


Collection of changes that I'd like to make based on recent experiences with 
the 1.13.3 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24517) Streamline Flink releases

2021-10-12 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-24517:
-
Component/s: Release System

> Streamline Flink releases
> -
>
> Key: FLINK-24517
> URL: https://issues.apache.org/jira/browse/FLINK-24517
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Release System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.15.0
>
>
> Collection of changes that I'd like to make based on recent experiences with 
> the 1.13.3 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-24494) Avro Confluent Registry SQL not fails to write to the topic with schema with default value for int

2021-10-12 Thread Mahendran Ponnusamy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17427871#comment-17427871
 ] 

Mahendran Ponnusamy edited comment on FLINK-24494 at 10/12/21, 7:11 PM:


Thank you [~MartijnVisser], I figured out the issue is not with the default;
it is with the schema name that is generated by the connector.

The subject has the below schema registered:

  !image-2021-10-12-12-11-04-664.png|width=1135,height=17!

Whereas the connector generates the schema with the name `record` hardcoded
and no namespace. The schema below is generated by the connector. I expect
the connector to pull the schema from the subject and use a
ConfluentAvroRowSerialization (which I believe does not exist today) to
serialize using the schema from the subject.

!image-2021-10-12-12-09-30-374.png|width=586,height=14!

and it throws "Failed to serialize row"

 

!Screen Shot 2021-10-12 at 10.38.55 AM.png|width=501,height=142!

 

I believe I should close this bug and create a new one, if it is a bug.


was (Author: mahen):
Thank you [~MartijnVisser], I figured out the issue is not with the default;
it is with the schema name that is generated by the connector.

The subject has the below schema registered:
{quote}{"type":"record","name":"SimpleCustomer","namespace":"com....example.model","fields":[{"name":"customerId","type":"string"},{"name":"age","type":"int","default":0}]}{quote}
 

Whereas the connector generates the schema with the name `record` hardcoded
and no namespace. The schema below is generated by the connector. I expect
the connector to pull the schema from the subject and use a
ConfluentAvroRowSerialization (which I believe does not exist today) to
serialize using the schema from the subject.

 
{quote}{"type":"record",*"name":"record"*,"fields":[{"name":"customerId","type":"string"},{"name":"age","type":"int"}]}
{quote}
 

and it throws "Failed to serialize row"

 

!Screen Shot 2021-10-12 at 10.38.55 AM.png|width=501,height=142!

 

I believe I should close this bug and create a new one, if it is a bug.

> Avro Confluent Registry SQL not fails to write to the topic with schema with 
> default value for int
> --
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: KafkaTableTumbling.java, Screen Shot 2021-10-12 at 
> 10.38.55 AM.png, image-2021-10-12-12-06-16-262.png, 
> image-2021-10-12-12-09-24-259.png, image-2021-10-12-12-09-30-374.png, 
> image-2021-10-12-12-11-04-664.png
>
>
>  
> Given a schema registered to a topic with a default value of '0' for an int,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the registered schema is not compatible with the data
> [schema] it is producing, where there is no default value ["default" : 0]
> for the int.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com.nordstrom.nap.onehop.example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
> The full code is here [^KafkaTableTumbling.java]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-24494) Avro Confluent Registry SQL not fails to write to the topic with schema with default value for int

2021-10-12 Thread Mahendran Ponnusamy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17427871#comment-17427871
 ] 

Mahendran Ponnusamy edited comment on FLINK-24494 at 10/12/21, 7:04 PM:


Thank you [~MartijnVisser], I figured out the issue is not with the default;
it is with the schema name that is generated by the connector.

The subject has the below schema registered:
{quote}{"type":"record","name":"SimpleCustomer","namespace":"com....example.model","fields":[{"name":"customerId","type":"string"},{"name":"age","type":"int","default":0}]}{quote}
 

Whereas the connector generates the schema with the name `record` hardcoded
and no namespace. The schema below is generated by the connector. I expect
the connector to pull the schema from the subject and use a
ConfluentAvroRowSerialization (which I believe does not exist today) to
serialize using the schema from the subject.

 
{quote}{"type":"record",*"name":"record"*,"fields":[{"name":"customerId","type":"string"},{"name":"age","type":"int"}]}
{quote}
 

and it throws "Failed to serialize row"

 

!Screen Shot 2021-10-12 at 10.38.55 AM.png|width=501,height=142!

 

I believe I should close this bug and create a new one, if it is a bug.


was (Author: mahen):
Thank you [~MartijnVisser], I figured out the issue is not with the default;
it is with the schema name that is generated by the connector.

The subject has the below schema registered:
{quote}{"type":"record","name":"SimpleCustomer","namespace":"com.nordstrom.nap.onehop.example.model","fields":[{"name":"customerId","type":"string"},{"name":"age","type":"int","default":0}]}
{quote}
 

Whereas the connector generates the schema with the name `record` hardcoded
and no namespace. The schema below is generated by the connector. I expect
the connector to pull the schema from the subject and use a
ConfluentAvroRowSerialization (which I believe does not exist today) to
serialize using the schema from the subject.

 
{quote}{"type":"record",*"name":"record"*,"fields":[{"name":"customerId","type":"string"},{"name":"age","type":"int"}]}
{quote}
 

and it throws "Failed to serialize row"

 

!Screen Shot 2021-10-12 at 10.38.55 AM.png|width=501,height=142!

 

I believe I should close this bug and create a new one, if it is a bug.

> Avro Confluent Registry SQL not fails to write to the topic with schema with 
> default value for int
> --
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: KafkaTableTumbling.java, Screen Shot 2021-10-12 at 
> 10.38.55 AM.png
>
>
>  
> Given a schema registered to a topic with a default value of '0' for an int,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the registered schema is not compatible with the data
> [schema] it is producing, where there is no default value ["default" : 0]
> for the int.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com.nordstrom.nap.onehop.example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
> The full code is here [^KafkaTableTumbling.java]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24494) Avro Confluent Registry SQL not fails to write to the topic with schema with default value for int

2021-10-12 Thread Mahendran Ponnusamy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17427871#comment-17427871
 ] 

Mahendran Ponnusamy commented on FLINK-24494:
-

Thank you [~MartijnVisser], I figured out the issue is not with the default;
it is with the schema name that is generated by the connector.

The subject has the below schema registered:
{quote}{"type":"record","name":"SimpleCustomer","namespace":"com.nordstrom.nap.onehop.example.model","fields":[{"name":"customerId","type":"string"},{"name":"age","type":"int","default":0}]}
{quote}
 

Whereas the connector generates the schema with the name `record` hardcoded
and no namespace. The schema below is generated by the connector. I expect
the connector to pull the schema from the subject and use a
ConfluentAvroRowSerialization (which I believe does not exist today) to
serialize using the schema from the subject.

 
{quote}{"type":"record",*"name":"record"*,"fields":[{"name":"customerId","type":"string"},{"name":"age","type":"int"}]}
{quote}
 

and it throws "Failed to serialize row"

 

!Screen Shot 2021-10-12 at 10.38.55 AM.png|width=501,height=142!

 

I believe I should close this bug and create a new one, if it is a bug.

> Avro Confluent Registry SQL not fails to write to the topic with schema with 
> default value for int
> --
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: KafkaTableTumbling.java, Screen Shot 2021-10-12 at 
> 10.38.55 AM.png
>
>
>  
> Given a schema registered to a topic with a default value of '0' for an int,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the registered schema is not compatible with the data
> [schema] it is producing, where there is no default value ["default" : 0]
> for the int.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com.nordstrom.nap.onehop.example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
> The full code is here [^KafkaTableTumbling.java]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24494) Avro Confluent Registry SQL not fails to write to the topic with schema with default value for int

2021-10-12 Thread Mahendran Ponnusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahendran Ponnusamy updated FLINK-24494:

Attachment: Screen Shot 2021-10-12 at 10.38.55 AM.png

> Avro Confluent Registry SQL not fails to write to the topic with schema with 
> default value for int
> --
>
> Key: FLINK-24494
> URL: https://issues.apache.org/jira/browse/FLINK-24494
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.2
>Reporter: Mahendran Ponnusamy
>Priority: Critical
> Attachments: KafkaTableTumbling.java, Screen Shot 2021-10-12 at 
> 10.38.55 AM.png
>
>
>  
> Given a schema registered to a topic with a default value of '0' for an int,
> when the Flink SQL job with the upsert-kafka connector writes to the topic,
> it fails because the registered schema is not compatible with the data
> [schema] it is producing, where there is no default value ["default" : 0]
> for the int.
> {
>   "type" : "record",
>   "name" : "SimpleCustomer",
>   "namespace" : "com.nordstrom.nap.onehop.example.model",
>   "fields" : [
>     { "name" : "customerId", "type" : "string" },
>     { "name" : "age", "type" : "int", "default" : 0 }
>   ]
> }
>  
> The full code is here [^KafkaTableTumbling.java]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-24231) Buffer debloating microbenchmark for multiply gate

2021-10-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-24231.

  Assignee: Dawid Wysakowicz
Resolution: Implemented

Implemented in 325b1b0d94d00fb4f37152d89435e3485edf3004

> Buffer debloating microbenchmark for multiply gate
> --
>
> Key: FLINK-24231
> URL: https://issues.apache.org/jira/browse/FLINK-24231
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.14.0
>Reporter: Anton Kalashnikov
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> We need to extend the microbenchmark from
> https://issues.apache.org/jira/browse/FLINK-24230 with a scenario in which
> the different gates have:
> * different throughput
> * different record sizes



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-benchmarks] dawidwys merged pull request #35: [FLINK-24231] Benchmark for buffer debloating multiple gates

2021-10-12 Thread GitBox


dawidwys merged pull request #35:
URL: https://github.com/apache/flink-benchmarks/pull/35


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #17458: [hotfix][docs] add more information about how to speed up maven build.

2021-10-12 Thread GitBox


zentol commented on a change in pull request #17458:
URL: https://github.com/apache/flink/pull/17458#discussion_r727364094



##
File path: docs/content/docs/flinkDev/building.md
##
@@ -52,11 +52,19 @@ mvn clean install -DskipTests
 
 This instructs [Maven](http://maven.apache.org) (`mvn`) to first remove all 
existing builds (`clean`) and then create a new Flink binary (`install`).
 
-To speed up the build you can skip tests, QA plugins, and JavaDocs:
+To speed up the build you can:
+- use the build-in "fast" maven profile which will skip unit tests, 
integration tests, QA plugins, and JavaDocs.
+- active maven parallel build, e.g. "mvn -T 1C" means 1 thread per cpu core.
 
+The build script will be:
 ```bash
-mvn clean install -DskipTests -Dfast
+mvn clean install -DskipTests -Dfast -T 1C
 ```
+You can obviously control all skips manually too:
+```bash
+mvn clean install -DskipTests -Dskip.npm=true -Dcheckstyle.skip=true 
-Drat.skip=true -Dscalastyle.skip=true -Denforcer.skip=true 
-Dmaven.javadoc.skip=true -Djapicmp.skip=true -T 1C

Review comment:
   I don't think we should be documenting the "manual" way in the first 
place. People who don't know maven should just stick to the `fast` profile a) 
for readability and b) to have a hook that works across all current and future 
Flink versions (e.g., they automatically reap the benefits if we add more stuff 
to the profile).
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17441: [FLINK-24461][table] PrintUtils common now accepts only internal data types

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17441:
URL: https://github.com/apache/flink/pull/17441#issuecomment-938905036


   
   ## CI report:
   
   * 5a10760f62b70db58a953e0d8e304bafa56a053e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24994)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #17458: [hotfix][docs] add more information about how to speed up maven build.

2021-10-12 Thread GitBox


zentol commented on a change in pull request #17458:
URL: https://github.com/apache/flink/pull/17458#discussion_r727361542



##
File path: docs/content/docs/flinkDev/building.md
##
@@ -52,11 +52,19 @@ mvn clean install -DskipTests
 
 This instructs [Maven](http://maven.apache.org) (`mvn`) to first remove all 
existing builds (`clean`) and then create a new Flink binary (`install`).
 
-To speed up the build you can skip tests, QA plugins, and JavaDocs:
+To speed up the build you can:
+- use the build-in "fast" maven profile which will skip unit tests, 
integration tests, QA plugins, and JavaDocs.
+- active maven parallel build, e.g. "mvn -T 1C" means 1 thread per cpu core.
 
+The build script will be:
 ```bash
-mvn clean install -DskipTests -Dfast
+mvn clean install -DskipTests -Dfast -T 1C

Review comment:
   It's really not, because this documentation is aimed at people who _do not 
know maven_, and putting them into a situation where the build fails for no 
reason is just not acceptable.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #17458: [hotfix][docs] add more information about how to speed up maven build.

2021-10-12 Thread GitBox


zentol commented on a change in pull request #17458:
URL: https://github.com/apache/flink/pull/17458#discussion_r727359834



##
File path: docs/content/docs/flinkDev/building.md
##
@@ -52,11 +52,19 @@ mvn clean install -DskipTests
 
 This instructs [Maven](http://maven.apache.org) (`mvn`) to first remove all 
existing builds (`clean`) and then create a new Flink binary (`install`).
 
-To speed up the build you can skip tests, QA plugins, and JavaDocs:
+To speed up the build you can:
+- use the build-in "fast" maven profile which will skip unit tests, 
integration tests, QA plugins, and JavaDocs.
+- active maven parallel build, e.g. "mvn -T 1C" means 1 thread per cpu core.
 
+The build script will be:
 ```bash
-mvn clean install -DskipTests -Dfast
+mvn clean install -DskipTests -Dfast -T 1C
 ```
+You can obviously control all skips manually too:
+```bash
+mvn clean install -DskipTests -Dskip.npm=true -Dcheckstyle.skip=true 
-Drat.skip=true -Dscalastyle.skip=true -Denforcer.skip=true 
-Dmaven.javadoc.skip=true -Djapicmp.skip=true -T 1C

Review comment:
   yes that is fine.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-statefun] neoXfire opened a new pull request #275: Use JDK 8 Optional instead of shaded Optional class from Guava.

2021-10-12 Thread GitBox


neoXfire opened a new pull request #275:
URL: https://github.com/apache/flink-statefun/pull/275


   I got this error on a sample project with embedded Flink, using 
`org.apache.flink:statefun-flink-datastream:3.1.0` and 
`org.apache.flink:flink-clients_2.12:1.14.0`:
   
   ```
   Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/flink/shaded/guava18/com/google/common/base/Optional
at 
org.apache.flink.statefun.flink.datastream.StatefulFunctionDataStreamBuilder.build(StatefulFunctionDataStreamBuilder.java:147)
at org.example.Main.main(Main.java:50)
   Caused by: java.lang.ClassNotFoundException: 
org.apache.flink.shaded.guava18.com.google.common.base.Optional
at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at 
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 2 more
   ```
   I bumped the flink-clients version down to 
`org.apache.flink:flink-clients_2.12:1.13.2` and the error no longer occurs.
   
   I still think using the JDK `Optional` class is more straightforward and 
less error-prone than importing the shaded Guava class. 
   
   So this is my small PR.
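
   As an illustration (hypothetical field names, not the actual statefun 
code), this is the shape of the replacement: the JDK `Optional` needs no 
shaded Guava jar on the classpath.

   ```java
   import java.util.Optional;

   final class ConfigHolder {
       private final Optional<String> timeCharacteristic;

       ConfigHolder(String timeCharacteristic) {
           // Optional.ofNullable replaces Guava's Optional.fromNullable
           this.timeCharacteristic = Optional.ofNullable(timeCharacteristic);
       }

       String timeCharacteristicOr(String fallback) {
           // orElse replaces Guava's or(...)
           return timeCharacteristic.orElse(fallback);
       }
   }
   ```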


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24516) Modernize Maven Archetype

2021-10-12 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17427811#comment-17427811
 ] 

Seth Wiesman commented on FLINK-24516:
--

[~MartijnVisser] maybe you have some ideas about what this should look like? I 
think it should be a unified `DataStream` skeleton and potentially have a 
separate Table API quickstart.

> Modernize Maven Archetype
> -
>
> Key: FLINK-24516
> URL: https://issues.apache.org/jira/browse/FLINK-24516
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Seth Wiesman
>Priority: Major
>
> The maven archetypes used by many to start their first Flink application do 
> not reflect the project's current state. 
> Issues:
>  * They still bundle the DataSet API and recommend it for batch processing
>  * The JavaDoc recommends deprecated APIs
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 202b39e05a73f8644d7cc3b46ed3511052d35fd4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24993)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-24516) Modernize Maven Archetype

2021-10-12 Thread Seth Wiesman (Jira)
Seth Wiesman created FLINK-24516:


 Summary: Modernize Maven Archetype
 Key: FLINK-24516
 URL: https://issues.apache.org/jira/browse/FLINK-24516
 Project: Flink
  Issue Type: Improvement
  Components: Quickstarts
Reporter: Seth Wiesman


The maven archetypes used by many to start their first Flink application do not 
reflect the project's current state. 


Issues:
 * They still bundle the DataSet API and recommend it for batch processing
 * The JavaDoc recommends deprecated APIs

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17450: [FLINK-24418][table-planner] Add support for casting RAW to BINARY

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17450:
URL: https://github.com/apache/flink/pull/17450#issuecomment-939949291


   
   ## CI report:
   
   * e17cd221fe36ee6064cfff125b0a9b025c5310bd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24989)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 3edc8a26d8a9bff3a8878570d038d53e54c5a4ba Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24985)
 
   * 202b39e05a73f8644d7cc3b46ed3511052d35fd4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24993)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17217: [FLINK-24186][table-planner] Allow multiple rowtime attributes for collect() and print()

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17217:
URL: https://github.com/apache/flink/pull/17217#issuecomment-916047672


   
   ## CI report:
   
   * 578e56afa1aaa77e228151f13eba81ededa2b3d0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24988)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17459: [FLINK-24397] Remove TableSchema usages from Flink connectors

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17459:
URL: https://github.com/apache/flink/pull/17459#issuecomment-940996979


   
   ## CI report:
   
   * 69e676d0e426e4a9eb437b3bd4628f9c3cd3b7eb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24987)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17439: [FLINK-23271][table-planner] Disallow cast from decimal numerics to boolean

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17439:
URL: https://github.com/apache/flink/pull/17439#issuecomment-938696330


   
   ## CI report:
   
   * 2b821c46c8ac9285981b41272726fe87d7a03566 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24986)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys edited a comment on pull request #17440: [FLINK-24468][runtime] Wait for the channel activation before creating partition request client

2021-10-12 Thread GitBox


dawidwys edited a comment on pull request #17440:
URL: https://github.com/apache/flink/pull/17440#issuecomment-941031617


   I found the reason why the exception was swallowed. Please see: 
https://issues.apache.org/jira/browse/FLINK-24515.
   
   I think we should not swallow exceptions if debloating fails. Therefore I'd 
remove the `try/catch` block and replace the `submit` with `execute` in 
`org.apache.flink.streaming.runtime.tasks.StreamTask#scheduleBufferDebloater`:
   
   ```
   private void scheduleBufferDebloater() {
       // See https://issues.apache.org/jira/browse/FLINK-23560
       // If there are no input gates, there is no point of calculating the throughput
       // and running the debloater. At the same time, for SourceStreamTask using legacy
       // sources and checkpoint lock, enqueuing even a single mailbox action can cause
       // performance regression. This is especially visible in batch, with disabled
       // checkpointing and no processing time timers.
       if (getEnvironment().getAllInputGates().length == 0) {
           return;
       }
       systemTimerService.registerTimer(
               systemTimerService.getCurrentProcessingTime() + bufferDebloatPeriod,
               timestamp ->
                       mainMailboxExecutor.execute(
                               () -> {
                                   debloat();
                                   scheduleBufferDebloater();
                               },
                               "Buffer size recalculation"));
   }
   ```
   
   As for the `waitForActivation`, I need to think about it for a bit more.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24515) MailboxExecutor#submit swallows exceptions

2021-10-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-24515.

Fix Version/s: (was: 1.14.1)
   (was: 1.15.0)
   (was: 1.13.3)
   (was: 1.12.6)
   Resolution: Not A Problem

> MailboxExecutor#submit swallows exceptions
> --
>
> Key: FLINK-24515
> URL: https://issues.apache.org/jira/browse/FLINK-24515
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.14.0, 1.12.5, 1.13.2
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
>
> If a {{RunnableWithException}}/{{Callable}} is submitted via the 
> {{MailboxExecutor#submit}} any exceptions thrown from it will be swallowed.
> It is caused by the {{FutureTaskWithException}} implementation. The 
> {{FutureTask#run}} does not throw an exception, but it sets it as its 
> internal state. The exception will be thrown only when {{FutureTask#get()}} 
> is called.
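
A minimal standalone demo (plain JDK, no Flink classes) of the
{{FutureTask}} behaviour described above:

```java
import java.util.concurrent.FutureTask;

public class SwallowedExceptionDemo {
    public static void main(String[] args) throws Exception {
        FutureTask<Void> task = new FutureTask<>(() -> {
            throw new IllegalStateException("boom");
        });
        task.run(); // returns normally; the exception becomes internal state
        System.out.println("run() returned without throwing");
        task.get(); // only here is it rethrown, wrapped in ExecutionException
    }
}
```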



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24515) MailboxExecutor#submit swallows exceptions

2021-10-12 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17427775#comment-17427775
 ] 

Dawid Wysakowicz commented on FLINK-24515:
--

Hmm, I see. Yeah, I think it actually makes sense. Otherwise, if we e.g. cancel 
the {{Future}}, we would fail the mailbox as well. 

I guess then we should not use {{submit}}, but rather {{execute}}, for 
triggering the buffer debloating.

> MailboxExecutor#submit swallows exceptions
> --
>
> Key: FLINK-24515
> URL: https://issues.apache.org/jira/browse/FLINK-24515
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.14.0, 1.12.5, 1.13.2
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.6, 1.13.3, 1.15.0, 1.14.1
>
>
> If a {{RunnableWithException}}/{{Callable}} is submitted via the 
> {{MailboxExecutor#submit}} any exceptions thrown from it will be swallowed.
> It is caused by the {{FutureTaskWithException}} implementation. The 
> {{FutureTask#run}} does not throw an exception, but it sets it as its 
> internal state. The exception will be thrown only when {{FutureTask#get()}} 
> is called.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24515) MailboxExecutor#submit swallows exceptions

2021-10-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-24515:
-
Priority: Major  (was: Critical)

> MailboxExecutor#submit swallows exceptions
> --
>
> Key: FLINK-24515
> URL: https://issues.apache.org/jira/browse/FLINK-24515
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.14.0, 1.12.5, 1.13.2
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.6, 1.13.3, 1.15.0, 1.14.1
>
>
> If a {{RunnableWithException}}/{{Callable}} is submitted via the 
> {{MailboxExecutor#submit}} any exceptions thrown from it will be swallowed.
> It is caused by the {{FutureTaskWithException}} implementation. The 
> {{FutureTask#run}} does not throw an exception, but it sets it as its 
> internal state. The exception will be thrown only when {{FutureTask#get()}} 
> is called.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys closed pull request #17460: [FLINK-24515] MailboxExecutor#submit swallows exceptions

2021-10-12 Thread GitBox


dawidwys closed pull request #17460:
URL: https://github.com/apache/flink/pull/17460


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24515) MailboxExecutor#submit swallows exceptions

2021-10-12 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17427752#comment-17427752
 ] 

Piotr Nowojski commented on FLINK-24515:


I think it's an intentional "feature" and not a bug; it mimics the 
{{ExecutorService#submit(...)}} interface. I'm not sure, but I think I was even 
copying this behaviour from some other executor service in Flink? Anyway, for 
this reason we added {{execute()}} methods with {{RunnableWithException}} (I 
think that happened in the FLINK-14935 ticket?).
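
The same asymmetry can be seen with a plain {{ExecutorService}}; a minimal
standalone demo (JDK only):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SubmitVsExecute {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // submit(): the exception is captured in the returned Future and is
        // never seen unless someone calls Future#get().
        pool.submit((Runnable) () -> { throw new RuntimeException("swallowed"); });
        // execute(): the exception propagates to the worker thread's
        // uncaught exception handler and is printed to stderr.
        pool.execute(() -> { throw new RuntimeException("propagated"); });
        Thread.sleep(500); // give the worker time to print
        pool.shutdown();
    }
}
```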

> MailboxExecutor#submit swallows exceptions
> --
>
> Key: FLINK-24515
> URL: https://issues.apache.org/jira/browse/FLINK-24515
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.14.0, 1.12.5, 1.13.2
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.12.6, 1.13.3, 1.15.0, 1.14.1
>
>
> If a {{RunnableWithException}}/{{Callable}} is submitted via the 
> {{MailboxExecutor#submit}} any exceptions thrown from it will be swallowed.
> It is caused by the {{FutureTaskWithException}} implementation. The 
> {{FutureTask#run}} does not throw an exception, but it sets it as its 
> internal state. The exception will be thrown only when {{FutureTask#get()}} 
> is called.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17441: [FLINK-24461][table] PrintUtils common now accepts only internal data types

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17441:
URL: https://github.com/apache/flink/pull/17441#issuecomment-938905036


   
   ## CI report:
   
   * 0040bfe2d73e9b07cf348fd91a6d93149689bebb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24958)
 
   * 5a10760f62b70db58a953e0d8e304bafa56a053e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24994)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17441: [FLINK-24461][table] PrintUtils common now accepts only internal data types

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17441:
URL: https://github.com/apache/flink/pull/17441#issuecomment-938905036


   
   ## CI report:
   
   * 0040bfe2d73e9b07cf348fd91a6d93149689bebb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24958)
 
   * 5a10760f62b70db58a953e0d8e304bafa56a053e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 375a3b3fdbb7fb73a700a07e707f5521e2ad6ff6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24983)
 
   * 3edc8a26d8a9bff3a8878570d038d53e54c5a4ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24985)
 
   * 202b39e05a73f8644d7cc3b46ed3511052d35fd4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24993)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingGe commented on a change in pull request #17458: [hotfix][docs] add more information about how to speed up maven build.

2021-10-12 Thread GitBox


JingGe commented on a change in pull request #17458:
URL: https://github.com/apache/flink/pull/17458#discussion_r727177127



##
File path: docs/content/docs/flinkDev/building.md
##
@@ -52,11 +52,19 @@ mvn clean install -DskipTests
 
 This instructs [Maven](http://maven.apache.org) (`mvn`) to first remove all 
existing builds (`clean`) and then create a new Flink binary (`install`).
 
-To speed up the build you can skip tests, QA plugins, and JavaDocs:
+To speed up the build you can:
+- use the build-in "fast" maven profile which will skip unit tests, 
integration tests, QA plugins, and JavaDocs.
+- active maven parallel build, e.g. "mvn -T 1C" means 1 thread per cpu core.
 
+The build script will be:
 ```bash
-mvn clean install -DskipTests -Dfast
+mvn clean install -DskipTests -Dfast -T 1C
 ```
+You can obviously control all skips manually too:
+```bash
+mvn clean install -DskipTests -Dskip.npm=true -Dcheckstyle.skip=true 
-Drat.skip=true -Dscalastyle.skip=true -Denforcer.skip=true 
-Dmaven.javadoc.skip=true -Djapicmp.skip=true -T 1C

Review comment:
   The focus of this second option is on doing it "manually", without using a 
maven profile. I was not aware that the skip-webui-build profile is 
recommended, thanks for the info. The skip-webui-build profile is only defined 
in the web submodule; as far as I know, running the build like "mvn clean 
install -Pskip-webui-build" will apply it to all submodules. Is that fine?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 375a3b3fdbb7fb73a700a07e707f5521e2ad6ff6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24983)
 
   * 3edc8a26d8a9bff3a8878570d038d53e54c5a4ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24985)
 
   * 202b39e05a73f8644d7cc3b46ed3511052d35fd4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 375a3b3fdbb7fb73a700a07e707f5521e2ad6ff6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24983)
 
   * 3edc8a26d8a9bff3a8878570d038d53e54c5a4ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24985)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 375a3b3fdbb7fb73a700a07e707f5521e2ad6ff6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24983)
 
   * 3edc8a26d8a9bff3a8878570d038d53e54c5a4ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24985)
 
   * 202b39e05a73f8644d7cc3b46ed3511052d35fd4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingGe commented on a change in pull request #17458: [hotfix][docs] add more information about how to speed up maven build.

2021-10-12 Thread GitBox


JingGe commented on a change in pull request #17458:
URL: https://github.com/apache/flink/pull/17458#discussion_r727165480



##
File path: docs/content/docs/flinkDev/building.md
##
@@ -52,11 +52,19 @@ mvn clean install -DskipTests
 
 This instructs [Maven](http://maven.apache.org) (`mvn`) to first remove all 
existing builds (`clean`) and then create a new Flink binary (`install`).
 
-To speed up the build you can skip tests, QA plugins, and JavaDocs:
+To speed up the build you can:
+- use the build-in "fast" maven profile which will skip unit tests, 
integration tests, QA plugins, and JavaDocs.
+- active maven parallel build, e.g. "mvn -T 1C" means 1 thread per cpu core.
 
+The build script will be:
 ```bash
-mvn clean install -DskipTests -Dfast
+mvn clean install -DskipTests -Dfast -T 1C

Review comment:
   Thanks for the feedback. Compared to the ca. 50% performance improvement, 
I think it is acceptable to live with the issue.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 375a3b3fdbb7fb73a700a07e707f5521e2ad6ff6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24983)
 
   * 3edc8a26d8a9bff3a8878570d038d53e54c5a4ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24985)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingGe commented on a change in pull request #17458: [hotfix][docs] add more information about how to speed up maven build.

2021-10-12 Thread GitBox


JingGe commented on a change in pull request #17458:
URL: https://github.com/apache/flink/pull/17458#discussion_r727162524



##
File path: docs/content/docs/flinkDev/building.md
##
@@ -52,11 +52,19 @@ mvn clean install -DskipTests
 
 This instructs [Maven](http://maven.apache.org) (`mvn`) to first remove all 
existing builds (`clean`) and then create a new Flink binary (`install`).
 
-To speed up the build you can skip tests, QA plugins, and JavaDocs:
+To speed up the build you can:
+- use the build-in "fast" maven profile which will skip unit tests, 
integration tests, QA plugins, and JavaDocs.
+- active maven parallel build, e.g. "mvn -T 1C" means 1 thread per cpu core.
 
+The build script will be:
 ```bash
-mvn clean install -DskipTests -Dfast
+mvn clean install -DskipTests -Dfast -T 1C
 ```
+You can obviously control all skips manually too:

Review comment:
   Thanks for the hint.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24467) Set min and max buffer size even if the difference less than threshold

2021-10-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-24467.

Fix Version/s: 1.14.1
   1.15.0
   Resolution: Fixed

Implemented in:
* master
** 
747895f1bde53f358df9517e0313c679cb4151bc..58197bebb554df7df1dd3dc27cc3e71cc3ad5b49
* 1.14
** 
28690757444e63d67eb3cdcba0c4fb2f99d9d8e4..1cd2a749d6f7a48a2c0e901438c47d2510a628cf

> Set min and max buffer size even if the difference less than threshold
> --
>
> Key: FLINK-24467
> URL: https://issues.apache.org/jira/browse/FLINK-24467
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0
>Reporter: Anton Kalashnikov
>Assignee: Anton Kalashnikov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.14.1
>
>
> Right now, we apply a new buffer size only if it differs from the old buffer 
> size by more than the configured threshold. But if the old buffer size is 
> within that threshold of the max or min value, we can get stuck on it 
> forever. For example, if the old buffer size is 22k and the threshold is 
> 50%, the next value we could apply would have to be above 33k, but that is 
> impossible because the max value is 32k; so once the calculated buffer size 
> reaches 22k, it can never be increased again.
> The suggestion is to apply the change whenever the newly calculated value is 
> clamped to the min or max size and the old value was different.
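
A minimal sketch (hypothetical names, not the actual Flink code) of the
announcement rule proposed above:

```java
public class BufferAnnounceSketch {
    // Announce a newly calculated buffer size whenever it is clamped to the
    // min or max bound and differs from the old size, even if the relative
    // change stays below the threshold.
    static boolean shouldAnnounce(int oldSize, int newSize, int min, int max, double threshold) {
        int clamped = Math.max(min, Math.min(max, newSize));
        if (clamped != oldSize && (clamped == min || clamped == max)) {
            return true; // the stuck-at-bound case from the ticket
        }
        return Math.abs(clamped - oldSize) > oldSize * threshold;
    }

    public static void main(String[] args) {
        // The example from the description: old size 22k, calculated 33k,
        // max 32k, threshold 50%. The clamped diff (10k) is below the 11k
        // threshold, but we still announce because the max bound was hit.
        System.out.println(shouldAnnounce(22_000, 33_000, 1_024, 32_000, 0.5)); // true
    }
}
```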



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingGe commented on a change in pull request #17458: [hotfix][docs] add more information about how to speed up maven build.

2021-10-12 Thread GitBox


JingGe commented on a change in pull request #17458:
URL: https://github.com/apache/flink/pull/17458#discussion_r727162253



##
File path: docs/content/docs/flinkDev/building.md
##
@@ -52,11 +52,19 @@ mvn clean install -DskipTests
 
 This instructs [Maven](http://maven.apache.org) (`mvn`) to first remove all 
existing builds (`clean`) and then create a new Flink binary (`install`).
 
-To speed up the build you can skip tests, QA plugins, and JavaDocs:
+To speed up the build you can:
+- use the build-in "fast" maven profile which will skip unit tests, 
integration tests, QA plugins, and JavaDocs.

Review comment:
   Thanks, you are right, I will update the text.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17460: [FLINK-24515] MailboxExecutor#submit swallows exceptions

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17460:
URL: https://github.com/apache/flink/pull/17460#issuecomment-941032272


   
   ## CI report:
   
   * ce893d0fb77003619b23fc0b8108b858dafdb9db Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24990)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 375a3b3fdbb7fb73a700a07e707f5521e2ad6ff6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24983)
 
   * 3edc8a26d8a9bff3a8878570d038d53e54c5a4ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24985)
 
   * 202b39e05a73f8644d7cc3b46ed3511052d35fd4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys closed pull request #17437: [FLINK-24467][streaming] Announce the min and max buffer size despite last diff less than threshold

2021-10-12 Thread GitBox


dawidwys closed pull request #17437:
URL: https://github.com/apache/flink/pull/17437


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17460: [FLINK-24515] MailboxExecutor#submit swallows exceptions

2021-10-12 Thread GitBox


flinkbot commented on pull request #17460:
URL: https://github.com/apache/flink/pull/17460#issuecomment-941034024


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit ce893d0fb77003619b23fc0b8108b858dafdb9db (Tue Oct 12 
13:52:19 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17460: [FLINK-24515] MailboxExecutor#submit swallows exceptions

2021-10-12 Thread GitBox


flinkbot commented on pull request #17460:
URL: https://github.com/apache/flink/pull/17460#issuecomment-941032272


   
   ## CI report:
   
   * ce893d0fb77003619b23fc0b8108b858dafdb9db UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul commented on a change in pull request #17459: [FLINK-24397] Remove TableSchema usages from Flink connectors

2021-10-12 Thread GitBox


fapaul commented on a change in pull request #17459:
URL: https://github.com/apache/flink/pull/17459#discussion_r727157348



##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/table/JdbcLookupFunction.java
##
@@ -1,313 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.connector.jdbc.table;
-
-import org.apache.flink.annotation.VisibleForTesting;
-import org.apache.flink.api.common.typeinfo.TypeInformation;
-import org.apache.flink.api.java.typeutils.RowTypeInfo;
-import 
org.apache.flink.connector.jdbc.internal.connection.JdbcConnectionProvider;
-import 
org.apache.flink.connector.jdbc.internal.connection.SimpleJdbcConnectionProvider;
-import org.apache.flink.connector.jdbc.internal.options.JdbcConnectorOptions;
-import org.apache.flink.connector.jdbc.internal.options.JdbcLookupOptions;
-import 
org.apache.flink.connector.jdbc.statement.FieldNamedPreparedStatementImpl;
-import org.apache.flink.connector.jdbc.utils.JdbcTypeUtil;
-import org.apache.flink.connector.jdbc.utils.JdbcUtils;
-import org.apache.flink.table.functions.FunctionContext;
-import org.apache.flink.table.functions.TableFunction;
-import org.apache.flink.types.Row;
-
-import org.apache.flink.shaded.guava30.com.google.common.cache.Cache;
-import org.apache.flink.shaded.guava30.com.google.common.cache.CacheBuilder;
-
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.io.IOException;
-import java.sql.Connection;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.HashMap;
-import java.util.List;
-import java.util.concurrent.TimeUnit;
-
-import static 
org.apache.flink.connector.jdbc.utils.JdbcUtils.getFieldFromResultSet;
-import static org.apache.flink.util.Preconditions.checkArgument;
-import static org.apache.flink.util.Preconditions.checkNotNull;
-
-/**
- * A {@link TableFunction} to query fields from JDBC by keys. The query 
template like:
- *
- * 
- * SELECT c, d, e, f from T where a = ? and b = ?
- * 
- *
- * Support cache the result to avoid frequent accessing to remote 
databases. 1.The cacheMaxSize
- * is -1 means not use cache. 2.For real-time data, you need to set the TTL of 
cache.
- */
-public class JdbcLookupFunction extends TableFunction {

Review comment:
   Yes there is a `RowData` one 
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/table/JdbcRowDataLookupFunction.java




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 375a3b3fdbb7fb73a700a07e707f5521e2ad6ff6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24983)
 
   * 3edc8a26d8a9bff3a8878570d038d53e54c5a4ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24985)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys commented on pull request #17440: [FLINK-24468][runtime] Wait for the channel activation before creating partition request client

2021-10-12 Thread GitBox


dawidwys commented on pull request #17440:
URL: https://github.com/apache/flink/pull/17440#issuecomment-941031617


   I found the reason why the exception was swallowed. Please see: 
https://issues.apache.org/jira/browse/FLINK-24515.
   
   I think we should not swallow exceptions if debloating fails. Therefore I'd 
remove the `try/catch` block.
   
   As for the `waitForActivation`, I need to think about it for a bit more.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul commented on a change in pull request #17459: [FLINK-24397] Remove TableSchema usages from Flink connectors

2021-10-12 Thread GitBox


fapaul commented on a change in pull request #17459:
URL: https://github.com/apache/flink/pull/17459#discussion_r727156469



##
File path: 
flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/JdbcDataTypeTest.java
##
@@ -101,50 +101,46 @@
 createTestItem("postgresql", "ARRAY"),
 
 // Unsupported types throws errors.
-createTestItem(
-"derby", "BINARY", "The Derby dialect doesn't support 
type: BINARY(1)."),
 createTestItem(
 "derby",
-"VARBINARY(10)",
-"The Derby dialect doesn't support type: 
VARBINARY(10)."),
+"BINARY",
+"Unsupported conversion from data type 'BINARY(1)' 
(conversion class: [B) to type information. Only data types that originated 
from type information fully support a reverse conversion."),

Review comment:
   Unfortunately, I am not really sure what the correct behavior here is. 
The problem is that this complete test never used the new JDBC connector and 
always tested with the old one which is now removed. It appears that with the 
new connector the exception signatures have changed.
   
   Are you proposing to drop the test class completely?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24515) MailboxExecutor#submit swallows exceptions

2021-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24515:
---
Labels: pull-request-available  (was: )

> MailboxExecutor#submit swallows exceptions
> --
>
> Key: FLINK-24515
> URL: https://issues.apache.org/jira/browse/FLINK-24515
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.14.0, 1.12.5, 1.13.2
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.12.6, 1.13.3, 1.15.0, 1.14.1
>
>
> If a {{RunnableWithException}}/{{Callable}} is submitted via the 
> {{MailboxExecutor#submit}} any exceptions thrown from it will be swallowed.
> It is caused by the {{FutureTaskWithException}} implementation. The 
> {{FutureTask#run}} does not throw an exception, but it sets it as its 
> internal state. The exception will be thrown only when {{FutureTask#get()}} 
> is called.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys opened a new pull request #17460: [FLINK-24515] MailboxExecutor#submit swallows exceptions

2021-10-12 Thread GitBox


dawidwys opened a new pull request #17460:
URL: https://github.com/apache/flink/pull/17460


   ## What is the purpose of the change
   
   The PR fixes {{MailboxExecutor#submit}} so that exceptions are properly 
thrown from within the mailbox.
   
   ## Verifying this change
   
   All tests in `TaskMailboxProcessorTest` succeed, in particular 
`org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxProcessorTest#testSubmittingRunnableWithException`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (**yes** / no 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 375a3b3fdbb7fb73a700a07e707f5521e2ad6ff6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24983)
 
   * 3edc8a26d8a9bff3a8878570d038d53e54c5a4ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24985)
 
   * 202b39e05a73f8644d7cc3b46ed3511052d35fd4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 375a3b3fdbb7fb73a700a07e707f5521e2ad6ff6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24983)
 
   * 3edc8a26d8a9bff3a8878570d038d53e54c5a4ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24985)
 
   * d77c5fb7cef82d07e9976a4e1ebaff1e16311602 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul commented on a change in pull request #17459: [FLINK-24397] Remove TableSchema usages from Flink connectors

2021-10-12 Thread GitBox


fapaul commented on a change in pull request #17459:
URL: https://github.com/apache/flink/pull/17459#discussion_r727142712



##
File path: 
flink-connectors/flink-connector-hbase-base/src/main/java/org/apache/flink/connector/hbase/table/HBaseConnectorOptionsUtil.java
##
@@ -57,8 +58,8 @@
  * type columns in the schema. The PRIMARY KEY constraint is optional, if 
exist, the primary key
  * constraint must be defined on the single row key field.
  */
-public static void validatePrimaryKey(TableSchema schema) {
-HBaseTableSchema hbaseSchema = 
HBaseTableSchema.fromTableSchema(schema);
+public static void validatePrimaryKey(DataType dataType, Schema schema) {

Review comment:
   The method requires knowledge about the primary key. AFAIK `DataType` 
does not offer such information. The only other option here would be to pass a 
`ResolvedSchema` (not sure that works for all the callers)
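
   For reference, a small sketch (assuming the `flink-table-common` catalog 
API) of why `ResolvedSchema` carries what `DataType` cannot: the primary key 
is a constraint on the schema, not part of the type.

   ```java
   import org.apache.flink.table.catalog.ResolvedSchema;
   import org.apache.flink.table.catalog.UniqueConstraint;

   import java.util.Collections;
   import java.util.List;

   final class PrimaryKeyAccess {
       // Returns the primary key columns, or an empty list if none defined.
       static List<String> primaryKeyColumns(ResolvedSchema schema) {
           return schema.getPrimaryKey()
                   .map(UniqueConstraint::getColumns)
                   .orElse(Collections.emptyList());
       }
   }
   ```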




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-12 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 375a3b3fdbb7fb73a700a07e707f5521e2ad6ff6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24983)
 
   * 3edc8a26d8a9bff3a8878570d038d53e54c5a4ba Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24985)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



