[GitHub] [flink] hapihu commented on a change in pull request #17225: [FLINK-24221][doc]Translate "JAR Statements" page of "SQL" into Chinese

2021-09-12 Thread GitBox


hapihu commented on a change in pull request #17225:
URL: https://github.com/apache/flink/pull/17225#discussion_r707025605



##
File path: docs/content.zh/docs/dev/table/sql/jar.md
##
@@ -68,22 +70,22 @@ Flink SQL> REMOVE JAR '/path/hello.jar';
 ADD JAR '<path_to_filename>.jar'
 ```
 
-Currently it only supports to add the local jar into the session classloader.
+目前只支持将本地 jar 添加到会话类类加载器(session classloader)中。
 
 ## REMOVE JAR

Review comment:
   Sorry, I thought that because the title was in English, I didn't need to write the link tag.
   (The link tag is generated automatically when the title is in English.)
   I've added the link tag.
   Please help me review it again~ Thank you~
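
   For reference, the kind of link tag being discussed is presumably the explicit anchor used in the Chinese docs so that the original English heading anchors keep working, e.g.:

```markdown
<a name="remove-jar"></a>
## REMOVE JAR
```

   With such an anchor in place, existing cross-references like [`ADD JAR`](#add-jar) still resolve on the translated page.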




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on pull request #17023: [FLINK-24043][runtime] Reuse the code of 'check savepoint preconditions'.

2021-09-12 Thread GitBox


gaoyunhaii commented on pull request #17023:
URL: https://github.com/apache/flink/pull/17023#issuecomment-917864832


   Many thanks @RocMarshal for the updates! LGTM and will merge~


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu commented on pull request #17248: [FLINK-24260][python] Limit requests to 2.26.0 or above only for python 3.6+

2021-09-12 Thread GitBox


dianfu commented on pull request #17248:
URL: https://github.com/apache/flink/pull/17248#issuecomment-917864557


   Merging... It has passed in my personal azure pipeline: 
https://dev.azure.com/dianfu/Flink/_build/results?buildId=534=results


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24220) Translate "RESET Statements" page of "SQL" into Chinese

2021-09-12 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-24220.
---
Fix Version/s: 1.15
   Resolution: Fixed

Fixed in master: 33641f59eccc2f87d1fca784054f304a222fcfdd


> Translate "RESET Statements" page of "SQL" into Chinese
> ---
>
> Key: FLINK-24220
> URL: https://issues.apache.org/jira/browse/FLINK-24220
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: wuguihu
>Assignee: wuguihu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.15
>
>
> [https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/reset/]
> docs/content.zh/docs/dev/table/sql/reset.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #17224: [FLINK-24220][doc]Translate "RESET Statements" page of "SQL" into Chi…

2021-09-12 Thread GitBox


wuchong merged pull request #17224:
URL: https://github.com/apache/flink/pull/17224


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on a change in pull request #17225: [FLINK-24221][doc]Translate "JAR Statements" page of "SQL" into Chinese

2021-09-12 Thread GitBox


RocMarshal commented on a change in pull request #17225:
URL: https://github.com/apache/flink/pull/17225#discussion_r707016750



##
File path: docs/content.zh/docs/dev/table/sql/jar.md
##
@@ -68,22 +70,22 @@ Flink SQL> REMOVE JAR '/path/hello.jar';
 ADD JAR '<path_to_filename>.jar'
 ```
 
-Currently it only supports to add the local jar into the session classloader.
+目前只支持将本地 jar 添加到会话类类加载器(session classloader)中。
 
 ## REMOVE JAR
 
 ```sql
 REMOVE JAR '<path_to_filename>.jar'
 ```
 
-Currently it only supports to remove the jar that is added by the [`ADD 
JAR`](#add-jar) statements.
+目前只支持删除 [`ADD JAR`](#add-jar) 语句添加的 jar。
 
 ## SHOW JARS

Review comment:
   @hapihu link tag?

##
File path: docs/content.zh/docs/dev/table/sql/jar.md
##
@@ -68,22 +70,22 @@ Flink SQL> REMOVE JAR '/path/hello.jar';
 ADD JAR '<path_to_filename>.jar'
 ```
 
-Currently it only supports to add the local jar into the session classloader.
+目前只支持将本地 jar 添加到会话类类加载器(session classloader)中。
 
 ## REMOVE JAR

Review comment:
   link tag?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #16606:
URL: https://github.com/apache/flink/pull/16606#issuecomment-887431748


   
   ## CI report:
   
   * 264be5cc6a0485171413099e8b64b9e917d06e85 UNKNOWN
   * 1b7da8565a2ab9560f1aad65007930c91945087f UNKNOWN
   * f77f6bd12ea5a6b1cf8f698c8b36bfab394d627b UNKNOWN
   * 75dec43024d91b896d488a4c9e979d486228398a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23973)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17225: [FLINK-24221][doc]Translate "JAR Statements" page of "SQL" into Chinese

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #17225:
URL: https://github.com/apache/flink/pull/17225#issuecomment-916276718


   
   ## CI report:
   
   * c59ff823b92cebd91202add79f258683d2b1b347 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23963)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17198: [FLINK-24212][kerbernets] fix the problem that kerberos krb5.conf file is mounted as empty directory, not a expected file

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #17198:
URL: https://github.com/apache/flink/pull/17198#issuecomment-915217186


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * fe3037be48b5944e8a48b2b510c42612ad255c05 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23972)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23725) HadoopFsCommitter, file rename failure

2021-09-12 Thread todd (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413898#comment-17413898
 ] 

todd commented on FLINK-23725:
--

[~sewen]    Do you have time to read this question?

> HadoopFsCommitter, file rename failure
> --
>
> Key: FLINK-23725
> URL: https://issues.apache.org/jira/browse/FLINK-23725
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Connectors / Hadoop 
> Compatibility, FileSystems
>Affects Versions: 1.11.1, 1.12.1
>Reporter: todd
>Priority: Major
>
> When the HDFS file is committed, if the target part file already exists, the 
> rename only returns false and the failure is silently swallowed. Should an 
> exception be thrown indicating that the part file already exists, or at least 
> a related log message be printed?
>  
> ```
> org.apache.flink.runtime.fs.hdfs.HadoopRecoverableFsDataOutputStream.HadoopFsCommitter#commit
> public void commit() throws IOException {
>  final Path src = recoverable.tempFile();
>  final Path dest = recoverable.targetFile();
>  final long expectedLength = recoverable.offset();
>  try {
>      // rename() only returns true or false; a failed rename is not surfaced
>     fs.rename(src, dest);
>  } catch (IOException e) {
>  throw new IOException(
>  "Committing file by rename failed: " + src + " to " + dest, e);
>  }
> }
> ```
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #16606:
URL: https://github.com/apache/flink/pull/16606#issuecomment-887431748


   
   ## CI report:
   
   * 264be5cc6a0485171413099e8b64b9e917d06e85 UNKNOWN
   * 1b7da8565a2ab9560f1aad65007930c91945087f UNKNOWN
   * 3421b81c2502f61112bd131a7336c16e3dd30925 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23189)
 
   * f77f6bd12ea5a6b1cf8f698c8b36bfab394d627b UNKNOWN
   * 75dec43024d91b896d488a4c9e979d486228398a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17249: Release 1.10

2021-09-12 Thread GitBox


flinkbot commented on pull request #17249:
URL: https://github.com/apache/flink/pull/17249#issuecomment-917833353


   
   ## CI report:
   
   * 3d95391ca025266debd4149c33c005901a90b66e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17198: [FLINK-24212][kerbernets] fix the problem that kerberos krb5.conf file is mounted as empty directory, not a expected file

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #17198:
URL: https://github.com/apache/flink/pull/17198#issuecomment-915217186


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * fe3037be48b5944e8a48b2b510c42612ad255c05 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #16606:
URL: https://github.com/apache/flink/pull/16606#issuecomment-887431748


   
   ## CI report:
   
   * 264be5cc6a0485171413099e8b64b9e917d06e85 UNKNOWN
   * 1b7da8565a2ab9560f1aad65007930c91945087f UNKNOWN
   * 3421b81c2502f61112bd131a7336c16e3dd30925 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23189)
 
   * f77f6bd12ea5a6b1cf8f698c8b36bfab394d627b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lzshlzsh commented on pull request #17198: [FLINK-24212][kerbernets] fix the problem that kerberos krb5.conf file is mounted as empty directory, not a expected file

2021-09-12 Thread GitBox


lzshlzsh commented on pull request #17198:
URL: https://github.com/apache/flink/pull/17198#issuecomment-917826604


   @flinkbot run travis
   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17249: Release 1.10

2021-09-12 Thread GitBox


flinkbot commented on pull request #17249:
URL: https://github.com/apache/flink/pull/17249#issuecomment-917822333


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3d95391ca025266debd4149c33c005901a90b66e (Mon Sep 13 
04:09:15 UTC 2021)
   
   **Warnings:**
* **41 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24262) Remove unused ContainerOverlays

2021-09-12 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413893#comment-17413893
 ] 

Yangze Guo commented on FLINK-24262:


[~chesnay] Would you assign this to me?

> Remove unused ContainerOverlays
> ---
>
> Key: FLINK-24262
> URL: https://issues.apache.org/jira/browse/FLINK-24262
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Yangze Guo
>Priority: Major
> Fix For: 1.14.1
>
>
> In FLINK-23118, we dropped Mesos support. Thus, we also need to remove the unused 
> container overlays from our code base.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lzshlzsh commented on a change in pull request #17198: [FLINK-24212][kerbernets] fix the problem that kerberos krb5.conf file is mounted as empty directory, not a expected file

2021-09-12 Thread GitBox


lzshlzsh commented on a change in pull request #17198:
URL: https://github.com/apache/flink/pull/17198#discussion_r706985563



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/KerberosMountDecorator.java
##
@@ -107,7 +107,7 @@ public FlinkPod decorateFlinkPod(FlinkPod flinkPod) {
 .withItems(
 new KeyToPathBuilder()
 .withKey(krb5Conf.getName())
-.withPath(krb5Conf.getName())
+.withPath("krb5.conf")

Review comment:
   Thanks for your review. Added 
`org.apache.flink.kubernetes.utils.Constants.KERBEROS_KRB5CONF_FILE`.
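
   A minimal sketch of the resulting mount item, assuming the `Constants.KERBEROS_KRB5CONF_FILE` constant mentioned above (the class, method and variable names here are illustrative, not the actual ones in `KerberosMountDecorator`):

```java
import io.fabric8.kubernetes.api.model.KeyToPath;
import io.fabric8.kubernetes.api.model.KeyToPathBuilder;

import org.apache.flink.kubernetes.utils.Constants;

final class Krb5MountSketch {

    /**
     * Maps the krb5.conf entry of the ConfigMap to a file named "krb5.conf"
     * (Constants.KERBEROS_KRB5CONF_FILE) instead of a directory named after the
     * ConfigMap key.
     */
    static KeyToPath krb5ConfItem(String configMapKey) {
        return new KeyToPathBuilder()
                .withKey(configMapKey)
                .withPath(Constants.KERBEROS_KRB5CONF_FILE)
                .build();
    }
}
```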
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-24262) Remove unused ContainerOverlays

2021-09-12 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-24262:
--

 Summary: Remove unused ContainerOverlays
 Key: FLINK-24262
 URL: https://issues.apache.org/jira/browse/FLINK-24262
 Project: Flink
  Issue Type: Technical Debt
  Components: Runtime / Coordination
Affects Versions: 1.14.0
Reporter: Yangze Guo
 Fix For: 1.14.1


In FLINK-23118, we dropped Mesos support. Thus, we also need to remove the unused 
container overlays from our code base.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17208: [BP-1.14][FLINK-23773][connector/kafka] Mark empty splits as finished to cleanup states in SplitFetcher

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #17208:
URL: https://github.com/apache/flink/pull/17208#issuecomment-915763233


   
   ## CI report:
   
   * 5b9a5d1c9ddb5d53f4ef6558bb214b2a05300498 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23969)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23917)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413890#comment-17413890
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23955=logs=bdd9ea51-4de2-506a-d4d9-f3930e4d2355=dd50312f-73b5-56b5-c172-4d81d03e2ef1=22245

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=22829



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24261) KafkaSourceITCase.testMultipleSplits fails due to "Cannot create topic"

2021-09-12 Thread Xintong Song (Jira)
Xintong Song created FLINK-24261:


 Summary: KafkaSourceITCase.testMultipleSplits fails due to "Cannot 
create topic"
 Key: FLINK-24261
 URL: https://issues.apache.org/jira/browse/FLINK-24261
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.15.0
Reporter: Xintong Song
 Fix For: 1.15.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23955=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=7119

{code}
Sep 13 01:14:27 [ERROR] Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 180.412 s <<< FAILURE! - in 
org.apache.flink.connector.kafka.source.KafkaSourceITCase
Sep 13 01:14:27 [ERROR] testMultipleSplits{TestEnvironment, ExternalContext}[1] 
 Time elapsed: 120.244 s  <<< ERROR!
Sep 13 01:14:27 java.lang.RuntimeException: Cannot create topic 
'kafka-single-topic-7245292146378659602'
Sep 13 01:14:27 at 
org.apache.flink.connector.kafka.source.testutils.KafkaSingleTopicExternalContext.createTopic(KafkaSingleTopicExternalContext.java:100)
Sep 13 01:14:27 at 
org.apache.flink.connector.kafka.source.testutils.KafkaSingleTopicExternalContext.createSourceSplitDataWriter(KafkaSingleTopicExternalContext.java:142)
Sep 13 01:14:27 at 
org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.generateAndWriteTestData(SourceTestSuiteBase.java:301)
Sep 13 01:14:27 at 
org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testMultipleSplits(SourceTestSuiteBase.java:142)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lvchongyi opened a new pull request #17249: Release 1.10

2021-09-12 Thread GitBox


lvchongyi opened a new pull request #17249:
URL: https://github.com/apache/flink/pull/17249


   How to get the metric for the total job count?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Reopened] (FLINK-24137) Python tests fail with "Exception in thread read_grpc_client_inputs"

2021-09-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reopened FLINK-24137:
--

> Python tests fail with "Exception in thread read_grpc_client_inputs"
> 
>
> Key: FLINK-24137
> URL: https://issues.apache.org/jira/browse/FLINK-24137
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23443=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=24681
> {code}
> Sep 01 02:26:21 E   Caused by: java.lang.RuntimeException: 
> Failed to create stage bundle factory! INFO:root:Initializing python harness: 
> /__w/1/s/flink-python/pyflink/fn_execution/beam/beam_boot.py --id=1-1 
> --provision_endpoint=localhost:44544
> Sep 01 02:26:21 E   
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:566)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.open(BeamPythonFunctionRunner.java:255)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:131)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.api.operators.python.AbstractOneInputPythonFunctionOperator.open(AbstractOneInputPythonFunctionOperator.java:116)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.api.operators.python.PythonProcessOperator.open(PythonProcessOperator.java:59)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:110)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:691)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:667)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:639)
> Sep 01 02:26:21 E at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
> Sep 01 02:26:21 E at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> Sep 01 02:26:21 E at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
> Sep 01 02:26:21 E at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
> Sep 01 02:26:21 E at java.lang.Thread.run(Thread.java:748)
> Sep 01 02:26:21 E   Caused by: 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException:
>  java.lang.IllegalStateException: Process died with exit code 0
> Sep 01 02:26:21 E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
> Sep 01 02:26:21 E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
> Sep 01 02:26:21 E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
> Sep 01 02:26:21 E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
> Sep 01 02:26:21 E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
> Sep 01 02:26:21 E at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:451)
> Sep 01 02:26:21 E at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:436)
> Sep 01 02:26:21 E at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:303)
> Sep 01 02:26:21 E at 
> 

[jira] [Commented] (FLINK-24137) Python tests fail with "Exception in thread read_grpc_client_inputs"

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413889#comment-17413889
 ] 

Xintong Song commented on FLINK-24137:
--

[~dianfu]
It looks like the problem still exists.
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23955=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=23820

> Python tests fail with "Exception in thread read_grpc_client_inputs"
> 
>
> Key: FLINK-24137
> URL: https://issues.apache.org/jira/browse/FLINK-24137
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Assignee: Dian Fu
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23443=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=24681
> {code}
> Sep 01 02:26:21 E   Caused by: java.lang.RuntimeException: 
> Failed to create stage bundle factory! INFO:root:Initializing python harness: 
> /__w/1/s/flink-python/pyflink/fn_execution/beam/beam_boot.py --id=1-1 
> --provision_endpoint=localhost:44544
> Sep 01 02:26:21 E   
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:566)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.open(BeamPythonFunctionRunner.java:255)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:131)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.api.operators.python.AbstractOneInputPythonFunctionOperator.open(AbstractOneInputPythonFunctionOperator.java:116)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.api.operators.python.PythonProcessOperator.open(PythonProcessOperator.java:59)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:110)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:691)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:667)
> Sep 01 02:26:21 E at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:639)
> Sep 01 02:26:21 E at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
> Sep 01 02:26:21 E at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> Sep 01 02:26:21 E at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
> Sep 01 02:26:21 E at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
> Sep 01 02:26:21 E at java.lang.Thread.run(Thread.java:748)
> Sep 01 02:26:21 E   Caused by: 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException:
>  java.lang.IllegalStateException: Process died with exit code 0
> Sep 01 02:26:21 E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
> Sep 01 02:26:21 E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
> Sep 01 02:26:21 E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
> Sep 01 02:26:21 E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
> Sep 01 02:26:21 E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
> Sep 01 02:26:21 E at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:451)
> Sep 01 02:26:21 E at 
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:436)
> Sep 01 

[jira] [Assigned] (FLINK-23821) Test loopback mode to allow Python UDF worker and client reuse the same Python VM

2021-09-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reassigned FLINK-23821:


Assignee: Xintong Song

> Test loopback mode to allow Python UDF worker and client reuse the same 
> Python VM
> -
>
> Key: FLINK-23821
> URL: https://issues.apache.org/jira/browse/FLINK-23821
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> The newly introduced feature allows users to debug their python functions 
> directly in IDEs such as PyCharm.
> For the details of debugging, you can refer to 
> [doc|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/debugging/#local-debug]
>  and for the details of how to debug in PyCharm, you can refer to the 
> [doc|https://www.jetbrains.com/help/pycharm/debugging-your-first-python-application.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] RocMarshal commented on pull request #17221: [FLINK-24217][doc]Translate "LOAD Statements" page of "SQL" into Chinese

2021-09-12 Thread GitBox


RocMarshal commented on pull request #17221:
URL: https://github.com/apache/flink/pull/17221#issuecomment-917807010


   @wuchong @hapihu OK. I'll check it ASAP.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on pull request #17223: [FLINK-24219][doc]Translate "SET Statements" page of "SQL" into Chinese

2021-09-12 Thread GitBox


RocMarshal commented on pull request #17223:
URL: https://github.com/apache/flink/pull/17223#issuecomment-917806862


   @wuchong @hapihu OK. I'll check it ASAP.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on pull request #17222: [FLINK-24218][doc]Translate "UNLOAD Statements" page of "SQL" into Ch…

2021-09-12 Thread GitBox


RocMarshal commented on pull request #17222:
URL: https://github.com/apache/flink/pull/17222#issuecomment-917806952


   @wuchong @hapihu OK. I'll check it ASAP.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on pull request #17224: [FLINK-24220][doc]Translate "RESET Statements" page of "SQL" into Chi…

2021-09-12 Thread GitBox


RocMarshal commented on pull request #17224:
URL: https://github.com/apache/flink/pull/17224#issuecomment-917806795


   @wuchong @hapihu OK. I'll check it ASAP.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on pull request #17225: [FLINK-24221][doc]Translate "JAR Statements" page of "SQL" into Chinese

2021-09-12 Thread GitBox


RocMarshal commented on pull request #17225:
URL: https://github.com/apache/flink/pull/17225#issuecomment-917806732


   @wuchong @hapihu OK. I'll check it ASAP.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24244) Add logging about whether it's executed in loopback mode

2021-09-12 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-24244.
---
Resolution: Fixed

Merged to
- master via 9f1c2d83e5fb3b43782c26fbd065b8c90604d8b5
- release-1.14 via b3a97b122a88a768197c2943ad7e6135babb87c6

> Add logging about whether it's executed in loopback mode
> 
>
> Key: FLINK-24244
> URL: https://issues.apache.org/jira/browse/FLINK-24244
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Currently, it's unclear whether a job is running in loopback mode or process 
> mode; it would be great to add some logging to make this explicit. This would be 
> helpful for debugging, since it makes it clear whether a failed test was running 
> in loopback mode or process mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu closed pull request #17234: [FLINK-24244][python] Logging whether it's executed in loopback mode

2021-09-12 Thread GitBox


dianfu closed pull request #17234:
URL: https://github.com/apache/flink/pull/17234


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17248: [FLINK-24260][python] Limit requests to 2.26.0 or above only for python 3.6+

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #17248:
URL: https://github.com/apache/flink/pull/17248#issuecomment-917790409


   
   ## CI report:
   
   * 76108408f1e9fabcc84ddc6def9b8cde09e5e477 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23964)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17208: [BP-1.14][FLINK-23773][connector/kafka] Mark empty splits as finished to cleanup states in SplitFetcher

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #17208:
URL: https://github.com/apache/flink/pull/17208#issuecomment-915763233


   
   ## CI report:
   
   * 5b9a5d1c9ddb5d53f4ef6558bb214b2a05300498 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23917)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23969)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17225: [FLINK-24221][doc]Translate "JAR Statements" page of "SQL" into Chinese

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #17225:
URL: https://github.com/apache/flink/pull/17225#issuecomment-916276718


   
   ## CI report:
   
   * 1c3ff74b1ddb81ebeef27319fabca7a5f845809b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23873)
 
   * c59ff823b92cebd91202add79f258683d2b1b347 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23963)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-18880) Respect configurations defined in flink-conf.yaml and environment variables when executing in local mode

2021-09-12 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-18880.
---
Fix Version/s: (was: 1.11.5)
   1.13.3
   Resolution: Fixed

Fixed in:
- master via e3d716783a8421638095a3938e3a9d8f7911f4ba
- release-1.14 via ed022d21708626ded50b7d0e5b6a1aa79d8b59aa
- release-1.13 via 978d90715e1cddef816c2ddfb08ba3b27f38758c

> Respect configurations defined in flink-conf.yaml and environment variables 
> when executing in local mode
> 
>
> Key: FLINK-18880
> URL: https://issues.apache.org/jira/browse/FLINK-18880
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.14.0, 1.13.3
>
>
> Currently, the configurations defined in flink-conf.yaml and environment 
> variables are not respected when PyFlink jobs are executed in local mode. This 
> causes a few problems, including but not limited to the following:
> - Users cannot configure the heap memory used by the gateway server, which 
> may cause OOM issues in scenarios such as Table.to_pandas when the content of 
> the Table is big. 
> - There is no easy way for users to specify the Hadoop classpath



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18880) Respect configurations defined in flink-conf.yaml and environment variables when executing in local mode

2021-09-12 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-18880:
---

Assignee: Dian Fu

> Respect configurations defined in flink-conf.yaml and environment variables 
> when executing in local mode
> 
>
> Key: FLINK-18880
> URL: https://issues.apache.org/jira/browse/FLINK-18880
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.14.0, 1.11.5
>
>
> Currently, the configurations defined in flink-conf.yaml and environment 
> variables are not respected when PyFlink jobs are executed in local mode. This 
> causes a few problems, including but not limited to the following:
> - Users cannot configure the heap memory used by the gateway server, which 
> may cause OOM issues in scenarios such as Table.to_pandas when the content of 
> the Table is big. 
> - There is no easy way for users to specify the Hadoop classpath



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on pull request #17224: [FLINK-24220][doc]Translate "RESET Statements" page of "SQL" into Chi…

2021-09-12 Thread GitBox


wuchong commented on pull request #17224:
URL: https://github.com/apache/flink/pull/17224#issuecomment-917803380


   cc @RocMarshal , do you have time to help reviewing this? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #17225: [FLINK-24221][doc]Translate "JAR Statements" page of "SQL" into Chinese

2021-09-12 Thread GitBox


wuchong commented on pull request #17225:
URL: https://github.com/apache/flink/pull/17225#issuecomment-917803355


   cc @RocMarshal , do you have time to help reviewing this? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #17221: [FLINK-24217][doc]Translate "LOAD Statements" page of "SQL" into Chinese

2021-09-12 Thread GitBox


wuchong commented on pull request #17221:
URL: https://github.com/apache/flink/pull/17221#issuecomment-917803278


   cc @RocMarshal , do you have time to help reviewing this? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #17223: [FLINK-24219][doc]Translate "SET Statements" page of "SQL" into Chinese

2021-09-12 Thread GitBox


wuchong commented on pull request #17223:
URL: https://github.com/apache/flink/pull/17223#issuecomment-917803322


   cc @RocMarshal , do you have time to help reviewing this? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #17222: [FLINK-24218][doc]Translate "UNLOAD Statements" page of "SQL" into Ch…

2021-09-12 Thread GitBox


wuchong commented on pull request #17222:
URL: https://github.com/apache/flink/pull/17222#issuecomment-917803228


   cc @RocMarshal , do you have time to help reviewing this? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24254) Support configuration hints for table sources and sinks

2021-09-12 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413881#comment-17413881
 ] 

Jark Wu commented on FLINK-24254:
-

I would like to not mix them. {{OPTIONS}} is used for "table options". 


> Support configuration hints for table sources and sinks
> ---
>
> Key: FLINK-24254
> URL: https://issues.apache.org/jira/browse/FLINK-24254
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Timo Walther
>Priority: Major
>
> We currently offer config options that modify the behavior of sources and 
> sinks, such as:
> {code}
> table.exec.source.cdc-events-duplicate
> table.exec.sink.not-null-enforcer
> table.exec.sink.upsert-materialize
> {code}
> Instead of defining them globally, it should be possible to set them in a more 
> fine-grained way via hints. Either we reuse the {{OPTIONS(key=value, ...)}} hint, 
> or come up with a new category such as {{CONFIG(key=value, ...)}} so as not to mix 
> table factory options with configuration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23828) KafkaSourceITCase.testIdleReader fails on azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413880#comment-17413880
 ] 

Xintong Song commented on FLINK-23828:
--

Downgrading to Critical. No instances reported for 10 days.

> KafkaSourceITCase.testIdleReader fails on azure
> ---
>
> Key: FLINK-23828
> URL: https://issues.apache.org/jira/browse/FLINK-23828
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22284=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7355
> {code}
> Aug 16 14:25:00 [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 67.241 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase
> Aug 16 14:25:00 [ERROR] testIdleReader{TestEnvironment, ExternalContext}[1]  
> Time elapsed: 0.918 s  <<< FAILURE!
> Aug 16 14:25:00 java.lang.AssertionError: 
> Aug 16 14:25:00 
> Aug 16 14:25:00 Expected: Records consumed by Flink should be identical to 
> test data and preserve the order in multiple splits
> Aug 16 14:25:00  but: Unexpected record 'la3OaJDch7vuUXDmGOYf'
> Aug 16 14:25:00   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> Aug 16 14:25:00   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
> Aug 16 14:25:00   at 
> org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testIdleReader(SourceTestSuiteBase.java:193)
> Aug 16 14:25:00   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 16 14:25:00   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 16 14:25:00   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 16 14:25:00   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 16 14:25:00   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
> Aug 16 14:25:00   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65)
> Aug 16 14:25:00   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
> Aug 16 14:25:00 

[GitHub] [flink] dianfu closed pull request #17216: [FLINK-18880][python] Respect configurations defined in flink-conf.yaml and environment variables when executing in local mode

2021-09-12 Thread GitBox


dianfu closed pull request #17216:
URL: https://github.com/apache/flink/pull/17216


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-23828) KafkaSourceITCase.testIdleReader fails on azure

2021-09-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-23828:
-
Priority: Critical  (was: Blocker)

> KafkaSourceITCase.testIdleReader fails on azure
> ---
>
> Key: FLINK-23828
> URL: https://issues.apache.org/jira/browse/FLINK-23828
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22284=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7355
> {code}
> Aug 16 14:25:00 [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 67.241 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase
> Aug 16 14:25:00 [ERROR] testIdleReader{TestEnvironment, ExternalContext}[1]  
> Time elapsed: 0.918 s  <<< FAILURE!
> Aug 16 14:25:00 java.lang.AssertionError: 
> Aug 16 14:25:00 
> Aug 16 14:25:00 Expected: Records consumed by Flink should be identical to 
> test data and preserve the order in multiple splits
> Aug 16 14:25:00  but: Unexpected record 'la3OaJDch7vuUXDmGOYf'
> Aug 16 14:25:00   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> Aug 16 14:25:00   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
> Aug 16 14:25:00   at 
> org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testIdleReader(SourceTestSuiteBase.java:193)
> Aug 16 14:25:00   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 16 14:25:00   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 16 14:25:00   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 16 14:25:00   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 16 14:25:00   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
> Aug 16 14:25:00   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
> Aug 16 14:25:00   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65)
> Aug 16 14:25:00   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
> Aug 16 14:25:00   at 
> 

[GitHub] [flink] dianfu commented on a change in pull request #16798: [FLINK-23651][python] Support RabbitMQ in PyFlink

2021-09-12 Thread GitBox


dianfu commented on a change in pull request #16798:
URL: https://github.com/apache/flink/pull/16798#discussion_r706975452



##
File path: flink-connectors/flink-connector-rabbitmq/pom.xml
##
@@ -92,4 +92,31 @@ under the License.
 

 
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<id>shade-flink</id>
+						<configuration>
+							<artifactSet>
+								<includes>
+									<include>com.rabbitmq:amqp-client</include>
+								</includes>
+							</artifactSet>
+							<relocations>
+								<relocation>
+									<pattern>com.rabbitmq</pattern>
+									<shadedPattern>org.apache.flink.rabbitmq.shaded.com.rabbitmq</shadedPattern>
+								</relocation>
+							</relocations>
+						</configuration>
+					</execution>
+				</executions>
+			</plugin>
+		</plugins>
+	</build>
+

Review comment:
   This makes the RabbitMQ connector a fat jar. For a fat jar we usually need to do the 
following:
   - Create the fat jar in a separate module, e.g. 
flink-sql-connector-kinesis vs. flink-connector-kinesis
   - Add a NOTICE file which declares the bundled libraries and their 
licenses
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ commented on a change in pull request #17182: [release] Create 1.14 release-notes

2021-09-12 Thread GitBox


KarmaGYZ commented on a change in pull request #17182:
URL: https://github.com/apache/flink/pull/17182#discussion_r706972483



##
File path: docs/content.zh/release-notes/flink-1.14.md
##
@@ -0,0 +1,424 @@
+---
+title: "Release Notes - Flink 1.14"
+---
+
+
+# Release Notes - Flink 1.14
+
+These release notes discuss important aspects, such as configuration, 
behavior, or dependencies,
+that changed between Flink 1.13 and Flink 1.14. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.14.
+
+### DataStream API
+
+ Expose a consistent GlobalDataExchangeMode
+
+# [FLINK-23402](https://issues.apache.org/jira/browse/FLINK-23402)
+
+The default DataStream API shuffle mode for batch executions has been changed 
to blocking exchanges
+for all edges of the stream graph. A new option `execution.batch-shuffle-mode` 
allows to change it
+to pipelined behavior if necessary.
+
+ Allow @TypeInfo annotation on POJO field declarations
+
+# [FLINK-12141](https://issues.apache.org/jira/browse/FLINK-12141)
+
+`@TypeInfo` annotations can now also be used on POJO fields which, for 
example, can help to define
+custom serializers for third-party classes that can otherwise not be annotated 
themselves.
+
+### Table & SQL
+
+ Use pipeline name consistently across DataStream API and Table API
+
+# [FLINK-23646](https://issues.apache.org/jira/browse/FLINK-23646)
+
+The default job name for DataStream API programs in batch mode has changed 
from `"Flink Streaming Job"` to
+`"Flink Batch Job"`. A custom name can be set with config option 
`pipeline.name`.
+
+ Propagate unique keys for fromChangelogStream
+
+# [FLINK-24033](https://issues.apache.org/jira/browse/FLINK-24033)
+
+Compared to 1.13.2, `StreamTableEnvironment.fromChangelogStream` might produce 
a different stream
+because primary keys were not properly considered before.
+
+ Support new type inference for Table#flatMap
+
+# [FLINK-16769](https://issues.apache.org/jira/browse/FLINK-16769)
+
+`Table.flatMap()` supports the new type system now. Users are requested to 
upgrade their functions.
+
+ Add Scala implicit conversions for new API methods
+
+# [FLINK-22590](https://issues.apache.org/jira/browse/FLINK-22590)
+
+The Scala implicits that convert between DataStream API and Table API have 
been updated to the new
+methods of FLIP-136.
+
+The changes might require an update of pipelines that used `toTable` or 
implicit conversions from
+`Table` to `DataStream[Row]`.
+
+ Remove YAML environment file support in SQL Client
+
+# [FLINK-22540](https://issues.apache.org/jira/browse/FLINK-22540)
+
+The sql-client-defaults.yaml YAML file was deprecated in 1.13 release and now 
it is totally removed
+in this release. As an alternative, you can use the `-i` startup option to 
execute an initialization SQL
+file to setup the SQL Client session. The initialization SQL file can use 
Flink DDLs to
+define available catalogs, table sources and sinks, user-defined functions, 
and other properties
+required for execution and deployment.
+
+See more: 
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sqlclient/#initialize-session-using-sql-files
+
+ Remove the legacy planner code base
+
+# [FLINK-22864](https://issues.apache.org/jira/browse/FLINK-22864)
+
+The old Table/SQL planner has been removed. BatchTableEnvironment and DataSet 
API interop with Table
+API are not supported anymore. Use the unified TableEnvironment for batch and 
stream processing with
+the new planner or the DataStream API in batch execution mode.
+
+Users are encouraged to update their pipelines. Otherwise Flink 1.13 is the 
last version that offers
+the old functionality.
+
+ Remove "blink" suffix from table modules
+
+# [FLINK-22879](https://issues.apache.org/jira/browse/FLINK-22879)
+
+The following Maven modules have been renamed:
+* flink-table-planner-blink -> flink-table-planner
+* flink-table-runtime-blink -> flink-table-runtime
+* flink-table-uber-blink -> flink-table-uber
+
+It might be required to update job JAR dependencies. Note that
+flink-table-planner and flink-table-uber used to contain the legacy planner 
before Flink 1.14 and
+now contain the only officially supported planner (i.e. previously known as 
'Blink' planner).
+
+ Remove BatchTableEnvironment and related API classes
+
+# [FLINK-22877](https://issues.apache.org/jira/browse/FLINK-22877)
+
+Due to the removal of BatchTableEnvironment, BatchTableSource and 
BatchTableSink have been removed
+as well. Use DynamicTableSource and DynamicTableSink instead. They support the 
old InputFormat and
+OutputFormat interfaces as runtime providers if necessary.
+
+ Remove TableEnvironment#connect
+
+# [FLINK-23063](https://issues.apache.org/jira/browse/FLINK-23063)
+
+The deprecated `TableEnvironment#connect()` method has been removed. Use the
+new 

[jira] [Closed] (FLINK-21924) Fine Grained Resource Management Phase 1

2021-09-12 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo closed FLINK-21924.
--
Release Note: 
Flink now supports controlling the resource consumption of your workload at a 
finer granularity. This feature is currently only available in the DataStream API.

See more: 
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/finegrained_resource/
  Resolution: Resolved

> Fine Grained Resource Management Phase 1
> 
>
> Key: FLINK-21924
> URL: https://issues.apache.org/jira/browse/FLINK-21924
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.14.0
>
>
> This ticket serves as the umbrella of remaining tasks for delivering the fine 
> grained resource management feature to end users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23821) Test loopback mode to allow Python UDF worker and client reuse the same Python VM

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413873#comment-17413873
 ] 

Xintong Song commented on FLINK-23821:
--

I've reached out to [~Jiangang] offline.
Given that he's also working on another blocker FLINK-23969, we will try to 
find another person to help with this effort.

> Test loopback mode to allow Python UDF worker and client reuse the same 
> Python VM
> -
>
> Key: FLINK-23821
> URL: https://issues.apache.org/jira/browse/FLINK-23821
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Liu
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> The newly introduced feature allows users to debug their python functions 
> directly in IDEs such as PyCharm.
> For the details of debugging, you can refer to 
> [doc|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/debugging/#local-debug]
>  and for the details of how to debug in PyCharm, you can refer to the 
> [doc|https://www.jetbrains.com/help/pycharm/debugging-your-first-python-application.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-23821) Test loopback mode to allow Python UDF worker and client reuse the same Python VM

2021-09-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reassigned FLINK-23821:


Assignee: (was: Liu)

> Test loopback mode to allow Python UDF worker and client reuse the same 
> Python VM
> -
>
> Key: FLINK-23821
> URL: https://issues.apache.org/jira/browse/FLINK-23821
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> The newly introduced feature allows users to debug their python functions 
> directly in IDEs such as PyCharm.
> For the details of debugging, you can refer to 
> [doc|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/debugging/#local-debug]
>  and for the details of how to debug in PyCharm, you can refer to the 
> [doc|https://www.jetbrains.com/help/pycharm/debugging-your-first-python-application.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23821) Test loopback mode to allow Python UDF worker and client reuse the same Python VM

2021-09-12 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413872#comment-17413872
 ] 

Huang Xingbo commented on FLINK-23821:
--

Hi [~Jiangang], regarding the python_worker_execution_mode parameter: we do not 
intend to expose it to users, we only manipulate this value in the tests. As you 
can see, `test_stream_execution_environment.py` and `test_dependency.py` test the 
loopback mode, while the other tests use the process mode. The main value of 
loopback mode is that it allows users to debug their Python UDFs when running 
them locally, so as a tester you only need to verify that you can debug locally.
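
To make this concrete, the following is a minimal sketch of the kind of PyFlink job a 
tester could step through in PyCharm when running locally; the table data, the column 
name and the add_one function are illustrative assumptions, not part of this ticket:

{code:python}
from pyflink.table import DataTypes, EnvironmentSettings, TableEnvironment
from pyflink.table.udf import udf


# A toy Python UDF; set a breakpoint inside it and run this script from the IDE.
@udf(result_type=DataTypes.BIGINT())
def add_one(x):
    return x + 1


if __name__ == '__main__':
    # Local execution on an embedded mini cluster, no cluster submission involved.
    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
    table = t_env.from_elements([(1,), (2,), (3,)], ['x'])
    table.select(add_one(table.x)).execute().print()
{code}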


> Test loopback mode to allow Python UDF worker and client reuse the same 
> Python VM
> -
>
> Key: FLINK-23821
> URL: https://issues.apache.org/jira/browse/FLINK-23821
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Liu
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> The newly introduced feature allows users to debug their python functions 
> directly in IDEs such as PyCharm.
> For the details of debugging, you can refer to 
> [doc|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/debugging/#local-debug]
>  and for the details of how to debug in PyCharm, you can refer to the 
> [doc|https://www.jetbrains.com/help/pycharm/debugging-your-first-python-application.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] becketqin merged pull request #17243: [BP-1.13][FLINK-23773][connector/kafka] Mark empty splits as finished to cleanup states in SplitFetcher

2021-09-12 Thread GitBox


becketqin merged pull request #17243:
URL: https://github.com/apache/flink/pull/17243


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24065) Upgrade the TwoPhaseCommitSink to support empty transaction after finished

2021-09-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao closed FLINK-24065.
---
Resolution: Fixed

> Upgrade the TwoPhaseCommitSink to support empty transaction after finished
> --
>
> Key: FLINK-24065
> URL: https://issues.apache.org/jira/browse/FLINK-24065
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common, Runtime / Checkpointing
>Affects Versions: 1.14.0
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> In https://issues.apache.org/jira/browse/FLINK-23473, the TwoPhaseCommitSink 
> was changed to no longer create new transactions after finishing, so that no 
> transactions are left over after the job has finished. However, with the 
> current implementation of the TwoPhaseCommitSink, the transactions have to be 
> written into the state for each checkpoint, and the state does not support 
> null transactions yet, so a NullPointerException would occur in this case. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24065) Upgrade the TwoPhaseCommitSink to support empty transaction after finished

2021-09-12 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413871#comment-17413871
 ] 

Yun Gao commented on FLINK-24065:
-

Fix on master via fdf40d2e0efe2eed77ca9633121691c8d1e744cb
Fix on release-1.14 via 22e76173c43000ddc4ff5ec6372df8ef1acf2057

> Upgrade the TwoPhaseCommitSink to support empty transaction after finished
> --
>
> Key: FLINK-24065
> URL: https://issues.apache.org/jira/browse/FLINK-24065
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common, Runtime / Checkpointing
>Affects Versions: 1.14.0
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> In https://issues.apache.org/jira/browse/FLINK-23473, the TwoPhaseCommitSink 
> was changed to no longer create new transactions after finishing, so that no 
> transactions are left over after the job has finished. However, with the 
> current implementation of the TwoPhaseCommitSink, the transactions have to be 
> written into the state for each checkpoint, and the state does not support 
> null transactions yet, so a NullPointerException would occur in this case. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] becketqin commented on pull request #17208: [BP-1.14][FLINK-23773][connector/kafka] Mark empty splits as finished to cleanup states in SplitFetcher

2021-09-12 Thread GitBox


becketqin commented on pull request #17208:
URL: https://github.com/apache/flink/pull/17208#issuecomment-917790750


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17248: [FLINK-24260][python] Limit requests to 2.26.0 or above only for python 3.6+

2021-09-12 Thread GitBox


flinkbot commented on pull request #17248:
URL: https://github.com/apache/flink/pull/17248#issuecomment-917790409


   
   ## CI report:
   
   * 76108408f1e9fabcc84ddc6def9b8cde09e5e477 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii closed pull request #17231: (1.14) [FLINK-24065][connector] Upgrade the state of TwoPhaseCommitSink to support empty transaction after finished

2021-09-12 Thread GitBox


gaoyunhaii closed pull request #17231:
URL: https://github.com/apache/flink/pull/17231


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17225: [FLINK-24221][doc]Translate "JAR Statements" page of "SQL" into Chinese

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #17225:
URL: https://github.com/apache/flink/pull/17225#issuecomment-916276718


   
   ## CI report:
   
   * 1c3ff74b1ddb81ebeef27319fabca7a5f845809b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23873)
 
   * c59ff823b92cebd91202add79f258683d2b1b347 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii closed pull request #17120: [FLINK-24065][connector] Upgrade the state of TwoPhaseCommitSink to support empty transaction after finished

2021-09-12 Thread GitBox


gaoyunhaii closed pull request #17120:
URL: https://github.com/apache/flink/pull/17120


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24139) Push down more predicates through Join in stream mode

2021-09-12 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413870#comment-17413870
 ] 

godfrey he commented on FLINK-24139:


[~trushev] Thanks for reporting this improvement

> Push down more predicates through Join in stream mode
> -
>
> Key: FLINK-24139
> URL: https://issues.apache.org/jira/browse/FLINK-24139
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Alexander Trushev
>Priority: Minor
> Attachments: q13_after.json, q13_after.png, q13_after.txt, 
> q13_before.json, q13_before.png, q13_before.txt
>
>
> h3. Context
> Rule {{JoinDependentConditionDerivationRule}} was introduced in FLINK-12509. 
> This rule rewrites the join condition in such a way that more predicates can be 
> pushed down through the join. For example,
>  # Source A = [a0, a1, a2], source B = [b0, b1]
>  # {code:sql}select * from A join B on a0 = b0 where (a1 = 0 and b1 = 0) or 
> a2 = 0{code}
>  # {{JoinDependentConditionDerivationRule}} transforms condition ((a1 and b1) 
> or a2) to (((a1 and b1) or a2) and (a1 or a2))
>  # {{JoinConditionPushRule}} pushes (a1 or a2) to A source
> It is a good optimization that can lead to performance improvement of query 
> execution.
>  Currently, {{JoinDependentConditionDerivationRule}} is used only in batch 
> mode.
> h3. Proposal
> Enable {{JoinDependentConditionDerivationRule}} in stream mode.
> h3. Benefit
> Experiment based on [https://github.com/ververica/flink-sql-benchmark]
>  Cluster – 4 nodes each 2 slots
>  Dataset – tpcds_bin_orc_20
>  Before – 1.14.0-rc0
>  After – 1.14.0-rc0 + patched {{FlinkStreamProgram}} including 
> {{JoinDependentConditionDerivationRule}}
> ||TPC-DS 20 GB||Before||After||
> |q13 stream mode|83 s|8 s|
> Query plan, stream graph, dashboard visualization before and after the patch 
> are in the attachment
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #16798: [FLINK-23651][python] Support RabbitMQ in PyFlink

2021-09-12 Thread GitBox


flinkbot edited a comment on pull request #16798:
URL: https://github.com/apache/flink/pull/16798#issuecomment-897826601


   
   ## CI report:
   
   * 79547c7995e297da35a88c28517c2463c9c930c3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23959)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23821) Test loopback mode to allow Python UDF worker and client reuse the same Python VM

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413867#comment-17413867
 ] 

Xintong Song commented on FLINK-23821:
--

[~Jiangang], how are things going with this?

> Test loopback mode to allow Python UDF worker and client reuse the same 
> Python VM
> -
>
> Key: FLINK-23821
> URL: https://issues.apache.org/jira/browse/FLINK-23821
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Liu
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> The newly introduced feature allows users to debug their python functions 
> directly in IDEs such as PyCharm.
> For the details of debugging, you can refer to 
> [doc|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/python/debugging/#local-debug]
>  and for the details of how to debug in PyCharm, you can refer to the 
> [doc|https://www.jetbrains.com/help/pycharm/debugging-your-first-python-application.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23969) Test Pulsar source end 2 end

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413866#comment-17413866
 ] 

Xintong Song commented on FLINK-23969:
--

[~Jiangang], how are things going with this?

> Test Pulsar source end 2 end
> 
>
> Key: FLINK-23969
> URL: https://issues.apache.org/jira/browse/FLINK-23969
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Reporter: Arvid Heise
>Assignee: Liu
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> Write a test application using Pulsar Source and execute it in distributed 
> fashion. Check fault-tolerance by crashing and restarting a TM.
> Ideally, we test different subscription modes and sticky keys in particular.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ commented on a change in pull request #17198: [FLINK-24212][kerbernets] fix the problem that kerberos krb5.conf file is mounted as empty directory, not a expected file

2021-09-12 Thread GitBox


KarmaGYZ commented on a change in pull request #17198:
URL: https://github.com/apache/flink/pull/17198#discussion_r706958728



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/KerberosMountDecorator.java
##
@@ -107,7 +107,7 @@ public FlinkPod decorateFlinkPod(FlinkPod flinkPod) {
 .withItems(
 new KeyToPathBuilder()
 .withKey(krb5Conf.getName())
-.withPath(krb5Conf.getName())
+.withPath("krb5.conf")

Review comment:
   nit: it would be good to introduce a constant for it in 
`org.apache.flink.kubernetes.utils.Constants`.

##
File path: 
flink-kubernetes/src/test/java/org/apache/flink/kubernetes/kubeclient/decorators/KerberosMountDecoratorTest.java
##
@@ -108,4 +135,36 @@ public void testDecoratedFlinkContainer() {
 Constants.KERBEROS_KRB5CONF_MOUNT_DIR + "/krb5.conf",
 krb5ConfVolumeMount.getMountPath());
 }
+
+@Test
+public void testDecoratedFlinkPodVolumes() {
+final FlinkPod resultFlinkPod = 
kerberosMountDecorator.decorateFlinkPod(baseFlinkPod);
+List<Volume> volumes = 
resultFlinkPod.getPodWithoutMainContainer().getSpec().getVolumes();
+assertEquals(2, volumes.size());
+
+final Volume keytabVolume =
+volumes.stream()
+.filter(x -> 
x.getName().equals(Constants.KERBEROS_KEYTAB_VOLUME))
+.collect(Collectors.toList())
+.get(0);
+final Volume krb5ConfVolume =
+volumes.stream()
+.filter(x -> 
x.getName().equals(Constants.KERBEROS_KRB5CONF_VOLUME))
+.collect(Collectors.toList())
+.get(0);
+assertNotNull(keytabVolume.getSecret());
+assertEquals(
+kerberosMountDecorator.getKerberosKeytabSecretName(
+testingKubernetesParameters.getClusterId()),
+keytabVolume.getSecret().getSecretName());
+
+assertNotNull(krb5ConfVolume.getConfigMap());
+assertEquals(
+kerberosMountDecorator.getKerberosKrb5confConfigMapName(

Review comment:
   ```suggestion
   KerberosMountDecorator.getKerberosKrb5confConfigMapName(
   ```

##
File path: 
flink-kubernetes/src/test/java/org/apache/flink/kubernetes/kubeclient/decorators/KerberosMountDecoratorTest.java
##
@@ -108,4 +135,36 @@ public void testDecoratedFlinkContainer() {
 Constants.KERBEROS_KRB5CONF_MOUNT_DIR + "/krb5.conf",
 krb5ConfVolumeMount.getMountPath());
 }
+
+@Test
+public void testDecoratedFlinkPodVolumes() {
+final FlinkPod resultFlinkPod = 
kerberosMountDecorator.decorateFlinkPod(baseFlinkPod);
+List<Volume> volumes = 
resultFlinkPod.getPodWithoutMainContainer().getSpec().getVolumes();
+assertEquals(2, volumes.size());
+
+final Volume keytabVolume =
+volumes.stream()
+.filter(x -> 
x.getName().equals(Constants.KERBEROS_KEYTAB_VOLUME))
+.collect(Collectors.toList())
+.get(0);
+final Volume krb5ConfVolume =
+volumes.stream()
+.filter(x -> 
x.getName().equals(Constants.KERBEROS_KRB5CONF_VOLUME))
+.collect(Collectors.toList())
+.get(0);
+assertNotNull(keytabVolume.getSecret());
+assertEquals(
+kerberosMountDecorator.getKerberosKeytabSecretName(

Review comment:
   ```suggestion
   KerberosMountDecorator.getKerberosKeytabSecretName(
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on pull request #17120: [FLINK-24065][connector] Upgrade the state of TwoPhaseCommitSink to support empty transaction after finished

2021-09-12 Thread GitBox


gaoyunhaii commented on pull request #17120:
URL: https://github.com/apache/flink/pull/17120#issuecomment-917787550


   Very thanks @dawidwys for the review! will merge~


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23749) Testing Window Join

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413864#comment-17413864
 ] 

Xintong Song commented on FLINK-23749:
--

[~q977734161], how are things going?

> Testing Window Join
> ---
>
> Key: FLINK-23749
> URL: https://issues.apache.org/jira/browse/FLINK-23749
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: JING ZHANG
>Assignee: lixiaobao
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.14.0
>
>
> The window join requires that the join ON condition contains the window start 
> equality and the window end equality of the input tables. The 
> semantics of the window join are the same as the [DataStream window 
> join|https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/joining.html#window-join].
> {code:java}
> SELECT ...
> FROM L [LEFT|RIGHT|FULL OUTER] JOIN R -- L and R are relations applied 
> windowing TVF
> ON L.window_start = R.window_start AND L.window_end = R.window_end AND ...
> {code}
> In the future, we can also simplify the join ON clause to only include the 
> window start equality if the windowing TVF is {{TUMBLE}} or {{HOP}}. 
> Currently, the windowing TVFs of the left and right inputs must be the same. This 
> can be extended in the future, for example, tumbling windows joining sliding 
> windows with the same window size.
> Currently, Flink supports not only a Window Join that follows a [Window 
> Aggregation|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/queries/window-agg/], 
> but also a Window Join that follows a [Windowing 
> TVF|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/queries/window-tvf/].
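
As a concrete illustration of the required ON clause, the sketch below applies the same 
TUMBLE windowing TVF to both inputs and joins on the window start/end equality; the table 
names (LeftTable, RightTable), the columns and the 5-minute window size are assumptions 
for illustration only:

{code:python}
# Assumes t_env is an existing TableEnvironment and that LeftTable / RightTable
# are registered tables with an event-time attribute `row_time` that has a
# watermark defined on it.
t_env.execute_sql("""
    SELECT
        L.num AS l_num,
        R.num AS r_num,
        COALESCE(L.window_start, R.window_start) AS window_start,
        COALESCE(L.window_end, R.window_end) AS window_end
    FROM (
        SELECT * FROM TABLE(
            TUMBLE(TABLE LeftTable, DESCRIPTOR(row_time), INTERVAL '5' MINUTES))
    ) L
    FULL OUTER JOIN (
        SELECT * FROM TABLE(
            TUMBLE(TABLE RightTable, DESCRIPTOR(row_time), INTERVAL '5' MINUTES))
    ) R
    ON L.num = R.num
       AND L.window_start = R.window_start
       AND L.window_end = R.window_end
""").print()
{code}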



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #17248: [FLINK-24260][python] Limit requests to 2.26.0 or above only for python 3.6+

2021-09-12 Thread GitBox


flinkbot commented on pull request #17248:
URL: https://github.com/apache/flink/pull/17248#issuecomment-917786563


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 76108408f1e9fabcc84ddc6def9b8cde09e5e477 (Mon Sep 13 
02:28:51 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-24260).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24260) py35-cython fails due to could not find requests>=2.26.0

2021-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24260:
---
Labels: pull-request-available test-stability  (was: test-stability)

> py35-cython fails due to could not find requests>=2.26.0
> 
>
> Key: FLINK-24260
> URL: https://issues.apache.org/jira/browse/FLINK-24260
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.5
>Reporter: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.6
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23942=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3=21015
> {code}
> Collecting requests>=2.26.0 (from apache-flink==1.12.dev0)
>   Could not find a version that satisfies the requirement requests>=2.26.0 
> (from apache-flink==1.12.dev0) (from versions: 0.2.0, 0.2.1, 0.2.2, 0.2.3, 
> 0.2.4, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.4.0, 0.4.1, 0.5.0, 0.5.1, 0.6.0, 
> 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 
> 0.7.5, 0.7.6, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6, 0.8.7, 0.8.8, 
> 0.8.9, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.10.0, 0.10.1, 0.10.2, 0.10.3, 0.10.4, 
> 0.10.6, 0.10.7, 0.10.8, 0.11.1, 0.11.2, 0.12.0, 0.12.1, 0.13.0, 0.13.1, 
> 0.13.2, 0.13.3, 0.13.4, 0.13.5, 0.13.6, 0.13.7, 0.13.8, 0.13.9, 0.14.0, 
> 0.14.1, 0.14.2, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.1.0, 1.2.0, 1.2.1, 
> 1.2.2, 1.2.3, 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.2.1, 2.3.0, 2.4.0, 2.4.1, 2.4.2, 
> 2.4.3, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.6.0, 2.6.1, 2.6.2, 2.7.0, 2.8.0, 2.8.1, 
> 2.9.0, 2.9.1, 2.9.2, 2.10.0, 2.11.0, 2.11.1, 2.12.0, 2.12.1, 2.12.2, 2.12.3, 
> 2.12.4, 2.12.5, 2.13.0, 2.14.0, 2.14.1, 2.14.2, 2.15.1, 2.16.0, 2.16.1, 
> 2.16.2, 2.16.3, 2.16.4, 2.16.5, 2.17.0, 2.17.1, 2.17.2, 2.17.3, 2.18.0, 
> 2.18.1, 2.18.2, 2.18.3, 2.18.4, 2.19.0, 2.19.1, 2.20.0, 2.20.1, 2.21.0, 
> 2.22.0, 2.23.0, 2.24.0, 2.25.0, 2.25.1)
> No matching distribution found for requests>=2.26.0 (from 
> apache-flink==1.12.dev0)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu opened a new pull request #17248: [FLINK-24260][python] Limit requests to 2.26.0 or above only for python 3.6+

2021-09-12 Thread GitBox


dianfu opened a new pull request #17248:
URL: https://github.com/apache/flink/pull/17248


   ## What is the purpose of the change
   
   *This pull request limits the requests dependency to 2.26.0 or above only for Python 3.6+, 
since requests 2.26.0 is only available for Python 3.6+.*
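   
   For context, the usual way to express such a version-dependent requirement is a PEP 508 
environment marker. A sketch of what the relevant `install_requires` entries could look like 
(the exact entries and the fallback bound for older interpreters in flink-python's setup.py 
are assumptions):
   
   ```python
   # Hypothetical excerpt of the install_requires list in setup.py.
   install_requires = [
       # requests 2.26.0 dropped Python 3.5 support, so only require it on 3.6+.
       'requests>=2.26.0; python_version >= "3.6"',
       # Fallback bound for Python 3.5; the real constraint may differ.
       'requests>=2.25.1; python_version < "3.6"',
   ]
   ```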
   
   
   ## Verifying this change
   
   This change is a trivial rework without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23047) CassandraConnectorITCase.testCassandraBatchTupleFormat fails on azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413863#comment-17413863
 ] 

Xintong Song commented on FLINK-23047:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23956=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=13647

> CassandraConnectorITCase.testCassandraBatchTupleFormat fails on azure
> -
>
> Key: FLINK-23047
> URL: https://issues.apache.org/jira/browse/FLINK-23047
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0, 1.12.4, 1.13.2
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19176=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=13995
> {code}
> [ERROR] Tests run: 17, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 157.28 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> [ERROR] 
> testCassandraBatchTupleFormat(org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase)
>   Time elapsed: 12.052 s  <<< ERROR!
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: /127.0.0.1:9042 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: [/127.0.0.1] 
> Timed out waiting for server response))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
>   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.createTable(CassandraConnectorITCase.java:234)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> 

[GitHub] [flink] HuangXingBo commented on a change in pull request #17216: [FLINK-18880][python] Respect configurations defined in flink-conf.yaml and environment variables when executing in local mod

2021-09-12 Thread GitBox


HuangXingBo commented on a change in pull request #17216:
URL: https://github.com/apache/flink/pull/17216#discussion_r706960738



##
File path: flink-python/pyflink/pyflink_gateway_server.py
##
@@ -30,27 +30,41 @@
 
 from pyflink.find_flink_home import _find_flink_home, _find_flink_source_root
 
+KEY_ENV_LOG_DIR = "env.log.dir"
+KEY_ENV_YARN_CONF_DIR = "env.yarn.conf.dir"
+KEY_ENV_HADOOP_CONF_DIR = "env.hadoop.conf.dir"
+KEY_ENV_HBASE_CONF_DIR = "env.hbase.conf.dir"
+KEY_ENV_JAVA_HOME = "env.java.home"
+KEY_ENV_JAVA_OPTS = "env.java.opts"
+
 
 def on_windows():
 return platform.system() == "Windows"
 
 
-def find_java_executable():
-java_executable = "java.exe" if on_windows() else "java"
-flink_home = _find_flink_home()
-flink_conf_path = os.path.join(flink_home, "conf", "flink-conf.yaml")
-java_home = None
-
+def read_from_config(key, default_value, flink_conf_file):
+value = default_value

Review comment:
   ```suggestion
   def read_from_config(key, value, flink_conf_file):
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24260) py35-cython fails due to could not find requests>=2.26.0

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413862#comment-17413862
 ] 

Xintong Song commented on FLINK-24260:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23956=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3=21016

> py35-cython fails due to could not find requests>=2.26.0
> 
>
> Key: FLINK-24260
> URL: https://issues.apache.org/jira/browse/FLINK-24260
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.5
>Reporter: Xintong Song
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.6
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23942=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3=21015
> {code}
> Collecting requests>=2.26.0 (from apache-flink==1.12.dev0)
>   Could not find a version that satisfies the requirement requests>=2.26.0 
> (from apache-flink==1.12.dev0) (from versions: 0.2.0, 0.2.1, 0.2.2, 0.2.3, 
> 0.2.4, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.4.0, 0.4.1, 0.5.0, 0.5.1, 0.6.0, 
> 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 
> 0.7.5, 0.7.6, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6, 0.8.7, 0.8.8, 
> 0.8.9, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.10.0, 0.10.1, 0.10.2, 0.10.3, 0.10.4, 
> 0.10.6, 0.10.7, 0.10.8, 0.11.1, 0.11.2, 0.12.0, 0.12.1, 0.13.0, 0.13.1, 
> 0.13.2, 0.13.3, 0.13.4, 0.13.5, 0.13.6, 0.13.7, 0.13.8, 0.13.9, 0.14.0, 
> 0.14.1, 0.14.2, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.1.0, 1.2.0, 1.2.1, 
> 1.2.2, 1.2.3, 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.2.1, 2.3.0, 2.4.0, 2.4.1, 2.4.2, 
> 2.4.3, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.6.0, 2.6.1, 2.6.2, 2.7.0, 2.8.0, 2.8.1, 
> 2.9.0, 2.9.1, 2.9.2, 2.10.0, 2.11.0, 2.11.1, 2.12.0, 2.12.1, 2.12.2, 2.12.3, 
> 2.12.4, 2.12.5, 2.13.0, 2.14.0, 2.14.1, 2.14.2, 2.15.1, 2.16.0, 2.16.1, 
> 2.16.2, 2.16.3, 2.16.4, 2.16.5, 2.17.0, 2.17.1, 2.17.2, 2.17.3, 2.18.0, 
> 2.18.1, 2.18.2, 2.18.3, 2.18.4, 2.19.0, 2.19.1, 2.20.0, 2.20.1, 2.21.0, 
> 2.22.0, 2.23.0, 2.24.0, 2.25.0, 2.25.1)
> No matching distribution found for requests>=2.26.0 (from 
> apache-flink==1.12.dev0)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413861#comment-17413861
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23953=logs=bf5e383b-9fd3-5f02-ca1c-8f788e2e76d3=85189c57-d8a0-5c9c-b61d-fc05cfac62cf=23028

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=22829



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413858#comment-17413858
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23950=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a=22449

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=22829



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22626) KafkaITCase.testTimestamps fails on Azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413857#comment-17413857
 ] 

Xintong Song commented on FLINK-22626:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23951=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=f266c805-9429-58ed-2f9e-482e7b82f58b=6261

> KafkaITCase.testTimestamps fails on Azure
> -
>
> Key: FLINK-22626
> URL: https://issues.apache.org/jira/browse/FLINK-22626
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.3, 1.13.1
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17819=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=c1d93a6a-ba91-515d-3196-2ee8019fbda7=6708
> {code}
> Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
> reading field 'api_keys': Error reading array of size 131096, only 50 bytes 
> available
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:110)
>   at 
> org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:324)
>   at 
> org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:162)
>   at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:719)
>   at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:833)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:556)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
>   at 
> org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:368)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1926)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1894)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer.getAllPartitionsForTopics(KafkaPartitionDiscoverer.java:75)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.discoverPartitions(AbstractPartitionDiscoverer.java:133)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:577)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:428)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22626) KafkaITCase.testTimestamps fails on Azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17413856#comment-17413856
 ] 

Xintong Song commented on FLINK-22626:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23951=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19=6715

> KafkaITCase.testTimestamps fails on Azure
> -
>
> Key: FLINK-22626
> URL: https://issues.apache.org/jira/browse/FLINK-22626
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.3, 1.13.1
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-major, stale-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17819=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=c1d93a6a-ba91-515d-3196-2ee8019fbda7=6708
> {code}
> Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
> reading field 'api_keys': Error reading array of size 131096, only 50 bytes 
> available
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:110)
>   at 
> org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:324)
>   at 
> org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:162)
>   at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:719)
>   at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:833)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:556)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
>   at 
> org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:368)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1926)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1894)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer.getAllPartitionsForTopics(KafkaPartitionDiscoverer.java:75)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.discoverPartitions(AbstractPartitionDiscoverer.java:133)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:577)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:428)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22626) KafkaITCase.testTimestamps fails on Azure

2021-09-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22626:
-
Labels: auto-deprioritized-major test-stability  (was: 
auto-deprioritized-major stale-major test-stability)

> KafkaITCase.testTimestamps fails on Azure
> -
>
> Key: FLINK-22626
> URL: https://issues.apache.org/jira/browse/FLINK-22626
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.3, 1.13.1
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17819&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=c1d93a6a-ba91-515d-3196-2ee8019fbda7&l=6708
> {code}
> Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
> reading field 'api_keys': Error reading array of size 131096, only 50 bytes 
> available
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:110)
>   at 
> org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:324)
>   at 
> org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:162)
>   at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:719)
>   at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:833)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:556)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
>   at 
> org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:368)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1926)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1894)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer.getAllPartitionsForTopics(KafkaPartitionDiscoverer.java:75)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.discoverPartitions(AbstractPartitionDiscoverer.java:133)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:577)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:428)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413855#comment-17413855
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23951&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=4fad9527-b9a5-5015-1b70-8356e5c91490&l=21840

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=4fad9527-b9a5-5015-1b70-8356e5c91490&l=22829



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24260) py35-cython fails due to could not find requests>=2.26.0

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413854#comment-17413854
 ] 

Xintong Song commented on FLINK-24260:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23951&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3&l=21017

> py35-cython fails due to could not find requests>=2.26.0
> 
>
> Key: FLINK-24260
> URL: https://issues.apache.org/jira/browse/FLINK-24260
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.5
>Reporter: Xintong Song
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.6
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23942&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3&l=21015
> {code}
> Collecting requests>=2.26.0 (from apache-flink==1.12.dev0)
>   Could not find a version that satisfies the requirement requests>=2.26.0 
> (from apache-flink==1.12.dev0) (from versions: 0.2.0, 0.2.1, 0.2.2, 0.2.3, 
> 0.2.4, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.4.0, 0.4.1, 0.5.0, 0.5.1, 0.6.0, 
> 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 
> 0.7.5, 0.7.6, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6, 0.8.7, 0.8.8, 
> 0.8.9, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.10.0, 0.10.1, 0.10.2, 0.10.3, 0.10.4, 
> 0.10.6, 0.10.7, 0.10.8, 0.11.1, 0.11.2, 0.12.0, 0.12.1, 0.13.0, 0.13.1, 
> 0.13.2, 0.13.3, 0.13.4, 0.13.5, 0.13.6, 0.13.7, 0.13.8, 0.13.9, 0.14.0, 
> 0.14.1, 0.14.2, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.1.0, 1.2.0, 1.2.1, 
> 1.2.2, 1.2.3, 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.2.1, 2.3.0, 2.4.0, 2.4.1, 2.4.2, 
> 2.4.3, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.6.0, 2.6.1, 2.6.2, 2.7.0, 2.8.0, 2.8.1, 
> 2.9.0, 2.9.1, 2.9.2, 2.10.0, 2.11.0, 2.11.1, 2.12.0, 2.12.1, 2.12.2, 2.12.3, 
> 2.12.4, 2.12.5, 2.13.0, 2.14.0, 2.14.1, 2.14.2, 2.15.1, 2.16.0, 2.16.1, 
> 2.16.2, 2.16.3, 2.16.4, 2.16.5, 2.17.0, 2.17.1, 2.17.2, 2.17.3, 2.18.0, 
> 2.18.1, 2.18.2, 2.18.3, 2.18.4, 2.19.0, 2.19.1, 2.20.0, 2.20.1, 2.21.0, 
> 2.22.0, 2.23.0, 2.24.0, 2.25.0, 2.25.1)
> No matching distribution found for requests>=2.26.0 (from 
> apache-flink==1.12.dev0)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20928) KafkaSourceReaderTest.testOffsetCommitOnCheckpointComplete:189->pollUntil:270 » Timeout

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413852#comment-17413852
 ] 

Xintong Song commented on FLINK-20928:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23944&view=logs&j=c5612577-f1f7-5977-6ff6-7432788526f7&t=ffa8837a-b445-534e-cdf4-db364cf8235d&l=7017

> KafkaSourceReaderTest.testOffsetCommitOnCheckpointComplete:189->pollUntil:270 
> » Timeout
> ---
>
> Key: FLINK-20928
> URL: https://issues.apache.org/jira/browse/FLINK-20928
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Robert Metzger
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11861&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5
> {code}
> [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 93.992 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest
> [ERROR] 
> testOffsetCommitOnCheckpointComplete(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.086 s  <<< ERROR!
> java.util.concurrent.TimeoutException: The offset commit did not finish 
> before timeout.
>   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.pollUntil(KafkaSourceReaderTest.java:270)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testOffsetCommitOnCheckpointComplete(KafkaSourceReaderTest.java:189)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}
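
For readers unfamiliar with the helper named in the trace above: pollUntil delegates to the generic 
org.apache.flink.core.testutils.CommonTestUtils.waitUtil, which re-evaluates a condition until a 
deadline passes and otherwise fails the test with a TimeoutException. Below is a rough sketch of that 
poll-until pattern, written in Python purely for illustration — it is not the actual Java 
implementation, and the helper and callable names are made up:

{code}
# Illustrative poll-until helper; uses only the standard library.
import time


def poll_until(condition, timeout_seconds, error_message, interval_seconds=0.1):
    """Re-evaluate `condition` until it returns True or the deadline passes."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval_seconds)
    # Mirrors the test behaviour: give up with a descriptive timeout error.
    raise TimeoutError(error_message)


# Hypothetical usage, analogous to the failing assertion above
# (`offsets_committed` would be a callable supplied by the test):
# poll_until(offsets_committed, 60,
#            "The offset commit did not finish before timeout.")
{code}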



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24241) test_table_environment_api.py fail with NPE

2021-09-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-24241:
-
Priority: Critical  (was: Major)

> test_table_environment_api.py fail with NPE
> ---
>
> Key: FLINK-24241
> URL: https://issues.apache.org/jira/browse/FLINK-24241
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Table SQL / Planner
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0, 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23876&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=6bb545dd-772d-5d8c-f258-f5085fba3295&l=23263
> {code}
> Sep 10 03:03:39 E   py4j.protocol.Py4JJavaError: An error 
> occurred while calling o16211.execute.
> Sep 10 03:03:39 E   : java.lang.NullPointerException
> Sep 10 03:03:39 E at 
> java.util.Objects.requireNonNull(Objects.java:203)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.<init>(RelMetadataQuery.java:144)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.<init>(RelMetadataQuery.java:108)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.<init>(FlinkRelMetadataQuery.java:78)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.instance(FlinkRelMetadataQuery.java:59)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:39)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:38)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.RelOptCluster.getMetadataQuery(RelOptCluster.java:178)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMdUtil.clearCache(RelMdUtil.java:965)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.buildFinalPlan(HepPlanner.java:942)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.buildFinalPlan(HepPlanner.java:939)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:194)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.Iterator$class.foreach(Iterator.scala:891)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> Sep 10 03:03:39 E at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.immutable.Range.foreach(Range.scala:160)
> 

[jira] [Commented] (FLINK-24241) test_table_environment_api.py fail with NPE

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413851#comment-17413851
 ] 

Xintong Song commented on FLINK-24241:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23944&view=logs&j=e92ecf6d-e207-5a42-7ff7-528ff0c5b259&t=40fc352e-9b4c-5fd8-363f-628f24b01ec2&l=24292

> test_table_environment_api.py fail with NPE
> ---
>
> Key: FLINK-24241
> URL: https://issues.apache.org/jira/browse/FLINK-24241
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23876&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=6bb545dd-772d-5d8c-f258-f5085fba3295&l=23263
> {code}
> Sep 10 03:03:39 E   py4j.protocol.Py4JJavaError: An error 
> occurred while calling o16211.execute.
> Sep 10 03:03:39 E   : java.lang.NullPointerException
> Sep 10 03:03:39 E at 
> java.util.Objects.requireNonNull(Objects.java:203)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.<init>(RelMetadataQuery.java:144)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.<init>(RelMetadataQuery.java:108)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.<init>(FlinkRelMetadataQuery.java:78)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.instance(FlinkRelMetadataQuery.java:59)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:39)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:38)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.RelOptCluster.getMetadataQuery(RelOptCluster.java:178)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMdUtil.clearCache(RelMdUtil.java:965)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.buildFinalPlan(HepPlanner.java:942)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.buildFinalPlan(HepPlanner.java:939)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:194)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.Iterator$class.foreach(Iterator.scala:891)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> Sep 10 03:03:39 E at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> 

[jira] [Updated] (FLINK-24241) test_table_environment_api.py fail with NPE

2021-09-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-24241:
-
Fix Version/s: 1.14.0

> test_table_environment_api.py fail with NPE
> ---
>
> Key: FLINK-24241
> URL: https://issues.apache.org/jira/browse/FLINK-24241
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Table SQL / Planner
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.0, 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23876&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=6bb545dd-772d-5d8c-f258-f5085fba3295&l=23263
> {code}
> Sep 10 03:03:39 E   py4j.protocol.Py4JJavaError: An error 
> occurred while calling o16211.execute.
> Sep 10 03:03:39 E   : java.lang.NullPointerException
> Sep 10 03:03:39 E at 
> java.util.Objects.requireNonNull(Objects.java:203)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.<init>(RelMetadataQuery.java:144)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.<init>(RelMetadataQuery.java:108)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.<init>(FlinkRelMetadataQuery.java:78)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.instance(FlinkRelMetadataQuery.java:59)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:39)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:38)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.RelOptCluster.getMetadataQuery(RelOptCluster.java:178)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMdUtil.clearCache(RelMdUtil.java:965)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.buildFinalPlan(HepPlanner.java:942)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.buildFinalPlan(HepPlanner.java:939)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:194)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.Iterator$class.foreach(Iterator.scala:891)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> Sep 10 03:03:39 E at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.immutable.Range.foreach(Range.scala:160)
> Sep 10 

[jira] [Updated] (FLINK-24241) test_table_environment_api.py fail with NPE

2021-09-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-24241:
-
Affects Version/s: 1.14.0

> test_table_environment_api.py fail with NPE
> ---
>
> Key: FLINK-24241
> URL: https://issues.apache.org/jira/browse/FLINK-24241
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Table SQL / Planner
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23876&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=6bb545dd-772d-5d8c-f258-f5085fba3295&l=23263
> {code}
> Sep 10 03:03:39 E   py4j.protocol.Py4JJavaError: An error 
> occurred while calling o16211.execute.
> Sep 10 03:03:39 E   : java.lang.NullPointerException
> Sep 10 03:03:39 E at 
> java.util.Objects.requireNonNull(Objects.java:203)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.<init>(RelMetadataQuery.java:144)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.<init>(RelMetadataQuery.java:108)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.<init>(FlinkRelMetadataQuery.java:78)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.instance(FlinkRelMetadataQuery.java:59)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:39)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:38)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.RelOptCluster.getMetadataQuery(RelOptCluster.java:178)
> Sep 10 03:03:39 E at 
> org.apache.calcite.rel.metadata.RelMdUtil.clearCache(RelMdUtil.java:965)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.buildFinalPlan(HepPlanner.java:942)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.buildFinalPlan(HepPlanner.java:939)
> Sep 10 03:03:39 E at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:194)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.Iterator$class.foreach(Iterator.scala:891)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> Sep 10 03:03:39 E at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
> Sep 10 03:03:39 E at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
> Sep 10 03:03:39 E at 
> scala.collection.immutable.Range.foreach(Range.scala:160)
> Sep 10 03:03:39 E 

[jira] [Commented] (FLINK-22889) JdbcExactlyOnceSinkE2eTest.testInsert hangs on azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413850#comment-17413850
 ] 

Xintong Song commented on FLINK-22889:
--

Instance on 1.13:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23943&view=logs&j=e9af9cde-9a65-5281-a58e-2c8511d36983&t=b6c4efed-9c7d-55ea-03a9-9bd7d5b08e4c&l=13447

> JdbcExactlyOnceSinkE2eTest.testInsert hangs on azure
> 
>
> Key: FLINK-22889
> URL: https://issues.apache.org/jira/browse/FLINK-22889
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Dawid Wysakowicz
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18690&view=logs&j=ba53eb01-1462-56a3-8e98-0dd97fbcaab5&t=bfbc6239-57a0-5db0-63f3-41551b4f7d51&l=16658



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24260) py35-cython fails due to could not find requests>=2.26.0

2021-09-12 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413849#comment-17413849
 ] 

Dian Fu commented on FLINK-24260:
-

[~xtsong] Thanks. I will fix it ASAP

> py35-cython fails due to could not find requests>=2.26.0
> 
>
> Key: FLINK-24260
> URL: https://issues.apache.org/jira/browse/FLINK-24260
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.5
>Reporter: Xintong Song
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.6
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23942&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3&l=21015
> {code}
> Collecting requests>=2.26.0 (from apache-flink==1.12.dev0)
>   Could not find a version that satisfies the requirement requests>=2.26.0 
> (from apache-flink==1.12.dev0) (from versions: 0.2.0, 0.2.1, 0.2.2, 0.2.3, 
> 0.2.4, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.4.0, 0.4.1, 0.5.0, 0.5.1, 0.6.0, 
> 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 
> 0.7.5, 0.7.6, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6, 0.8.7, 0.8.8, 
> 0.8.9, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.10.0, 0.10.1, 0.10.2, 0.10.3, 0.10.4, 
> 0.10.6, 0.10.7, 0.10.8, 0.11.1, 0.11.2, 0.12.0, 0.12.1, 0.13.0, 0.13.1, 
> 0.13.2, 0.13.3, 0.13.4, 0.13.5, 0.13.6, 0.13.7, 0.13.8, 0.13.9, 0.14.0, 
> 0.14.1, 0.14.2, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.1.0, 1.2.0, 1.2.1, 
> 1.2.2, 1.2.3, 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.2.1, 2.3.0, 2.4.0, 2.4.1, 2.4.2, 
> 2.4.3, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.6.0, 2.6.1, 2.6.2, 2.7.0, 2.8.0, 2.8.1, 
> 2.9.0, 2.9.1, 2.9.2, 2.10.0, 2.11.0, 2.11.1, 2.12.0, 2.12.1, 2.12.2, 2.12.3, 
> 2.12.4, 2.12.5, 2.13.0, 2.14.0, 2.14.1, 2.14.2, 2.15.1, 2.16.0, 2.16.1, 
> 2.16.2, 2.16.3, 2.16.4, 2.16.5, 2.17.0, 2.17.1, 2.17.2, 2.17.3, 2.18.0, 
> 2.18.1, 2.18.2, 2.18.3, 2.18.4, 2.19.0, 2.19.1, 2.20.0, 2.20.1, 2.21.0, 
> 2.22.0, 2.23.0, 2.24.0, 2.25.0, 2.25.1)
> No matching distribution found for requests>=2.26.0 (from 
> apache-flink==1.12.dev0)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413848#comment-17413848
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23941&view=logs&j=ff2e2ea5-07e3-5521-7b04-a4fc3ad765e9&t=1ec6382b-bafe-5817-63ae-eda7d4be718e&l=23723

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=4fad9527-b9a5-5015-1b70-8356e5c91490&l=22829



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] cuibo01 closed pull request #17247: Support customized jdbc calalog

2021-09-12 Thread GitBox


cuibo01 closed pull request #17247:
URL: https://github.com/apache/flink/pull/17247


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] cuibo01 opened a new pull request #17247: Support customized jdbc calalog

2021-09-12 Thread GitBox


cuibo01 opened a new pull request #17247:
URL: https://github.com/apache/flink/pull/17247


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24260) py35-cython fails due to could not find requests>=2.26.0

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413843#comment-17413843
 ] 

Xintong Song commented on FLINK-24260:
--

cc [~dianfu]

> py35-cython fails due to could not find requests>=2.26.0
> 
>
> Key: FLINK-24260
> URL: https://issues.apache.org/jira/browse/FLINK-24260
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.5
>Reporter: Xintong Song
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.6
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23942&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3&l=21015
> {code}
> Collecting requests>=2.26.0 (from apache-flink==1.12.dev0)
>   Could not find a version that satisfies the requirement requests>=2.26.0 
> (from apache-flink==1.12.dev0) (from versions: 0.2.0, 0.2.1, 0.2.2, 0.2.3, 
> 0.2.4, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.4.0, 0.4.1, 0.5.0, 0.5.1, 0.6.0, 
> 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 
> 0.7.5, 0.7.6, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6, 0.8.7, 0.8.8, 
> 0.8.9, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.10.0, 0.10.1, 0.10.2, 0.10.3, 0.10.4, 
> 0.10.6, 0.10.7, 0.10.8, 0.11.1, 0.11.2, 0.12.0, 0.12.1, 0.13.0, 0.13.1, 
> 0.13.2, 0.13.3, 0.13.4, 0.13.5, 0.13.6, 0.13.7, 0.13.8, 0.13.9, 0.14.0, 
> 0.14.1, 0.14.2, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.1.0, 1.2.0, 1.2.1, 
> 1.2.2, 1.2.3, 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.2.1, 2.3.0, 2.4.0, 2.4.1, 2.4.2, 
> 2.4.3, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.6.0, 2.6.1, 2.6.2, 2.7.0, 2.8.0, 2.8.1, 
> 2.9.0, 2.9.1, 2.9.2, 2.10.0, 2.11.0, 2.11.1, 2.12.0, 2.12.1, 2.12.2, 2.12.3, 
> 2.12.4, 2.12.5, 2.13.0, 2.14.0, 2.14.1, 2.14.2, 2.15.1, 2.16.0, 2.16.1, 
> 2.16.2, 2.16.3, 2.16.4, 2.16.5, 2.17.0, 2.17.1, 2.17.2, 2.17.3, 2.18.0, 
> 2.18.1, 2.18.2, 2.18.3, 2.18.4, 2.19.0, 2.19.1, 2.20.0, 2.20.1, 2.21.0, 
> 2.22.0, 2.23.0, 2.24.0, 2.25.0, 2.25.1)
> No matching distribution found for requests>=2.26.0 (from 
> apache-flink==1.12.dev0)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24260) py35-cython fails due to could not find requests>=2.26.0

2021-09-12 Thread Xintong Song (Jira)
Xintong Song created FLINK-24260:


 Summary: py35-cython fails due to could not find requests>=2.26.0
 Key: FLINK-24260
 URL: https://issues.apache.org/jira/browse/FLINK-24260
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.12.5
Reporter: Xintong Song
 Fix For: 1.12.6


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23942&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3&l=21015

{code}
Collecting requests>=2.26.0 (from apache-flink==1.12.dev0)
  Could not find a version that satisfies the requirement requests>=2.26.0 
(from apache-flink==1.12.dev0) (from versions: 0.2.0, 0.2.1, 0.2.2, 0.2.3, 
0.2.4, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.4.0, 0.4.1, 0.5.0, 0.5.1, 0.6.0, 
0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 
0.7.5, 0.7.6, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6, 0.8.7, 0.8.8, 
0.8.9, 0.9.0, 0.9.1, 0.9.2, 0.9.3, 0.10.0, 0.10.1, 0.10.2, 0.10.3, 0.10.4, 
0.10.6, 0.10.7, 0.10.8, 0.11.1, 0.11.2, 0.12.0, 0.12.1, 0.13.0, 0.13.1, 0.13.2, 
0.13.3, 0.13.4, 0.13.5, 0.13.6, 0.13.7, 0.13.8, 0.13.9, 0.14.0, 0.14.1, 0.14.2, 
1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.1.0, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 2.0.0, 
2.0.1, 2.1.0, 2.2.0, 2.2.1, 2.3.0, 2.4.0, 2.4.1, 2.4.2, 2.4.3, 2.5.0, 2.5.1, 
2.5.2, 2.5.3, 2.6.0, 2.6.1, 2.6.2, 2.7.0, 2.8.0, 2.8.1, 2.9.0, 2.9.1, 2.9.2, 
2.10.0, 2.11.0, 2.11.1, 2.12.0, 2.12.1, 2.12.2, 2.12.3, 2.12.4, 2.12.5, 2.13.0, 
2.14.0, 2.14.1, 2.14.2, 2.15.1, 2.16.0, 2.16.1, 2.16.2, 2.16.3, 2.16.4, 2.16.5, 
2.17.0, 2.17.1, 2.17.2, 2.17.3, 2.18.0, 2.18.1, 2.18.2, 2.18.3, 2.18.4, 2.19.0, 
2.19.1, 2.20.0, 2.20.1, 2.21.0, 2.22.0, 2.23.0, 2.24.0, 2.25.0, 2.25.1)
No matching distribution found for requests>=2.26.0 (from 
apache-flink==1.12.dev0)
{code}
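
For context, the log above shows that this interpreter can only resolve requests up to 2.25.1, so a 
hard requirement of requests>=2.26.0 cannot be satisfied on Python 3.5. The usual way to express such 
a constraint is a PEP 508 environment marker that applies the newer requirement only to interpreters 
it is published for. A minimal, hypothetical sketch follows — the package name, version, and fallback 
bound are illustrative and not taken from the real apache-flink setup.py:

{code}
# sketch_setup.py - illustrative only; demonstrates conditional
# dependency pins via PEP 508 environment markers.
from setuptools import setup

setup(
    name="example-package",   # placeholder project name
    version="0.0.1",
    install_requires=[
        # newer requests only where a compatible release actually exists
        'requests>=2.26.0; python_version >= "3.6"',
        # older interpreters stay on the last compatible release line
        'requests>=2.20.0,<2.26.0; python_version < "3.6"',
    ],
)
{code}

With markers like these, a Python 3.5 environment resolves an older requests release instead of 
failing the install, while newer interpreters still pick up 2.26.0 or above.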



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413840#comment-17413840
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23942&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=4fad9527-b9a5-5015-1b70-8356e5c91490&l=21839

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=4fad9527-b9a5-5015-1b70-8356e5c91490&l=22829



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22366) HiveSinkCompactionITCase fails on azure

2021-09-12 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413839#comment-17413839
 ] 

Xintong Song commented on FLINK-22366:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23925&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=62110053-334f-5295-a0ab-80dd7e2babbf&l=22304

> HiveSinkCompactionITCase fails on azure
> ---
>
> Key: FLINK-22366
> URL: https://issues.apache.org/jira/browse/FLINK-22366
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.13.0, 1.12.5
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16818&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23420
> {code}
>  [ERROR] testNonPartition[format = 
> sequencefile](org.apache.flink.connectors.hive.HiveSinkCompactionITCase)  
> Time elapsed: 4.999 s  <<< FAILURE!
> Apr 19 22:25:10 java.lang.AssertionError: expected:<[+I[0, 0, 0], +I[0, 0, 
> 0], +I[1, 1, 1], +I[1, 1, 1], +I[2, 2, 2], +I[2, 2, 2], +I[3, 3, 3], +I[3, 3, 
> 3], +I[4, 4, 4], +I[4, 4, 4], +I[5, 5, 5], +I[5, 5, 5], +I[6, 6, 6], +I[6, 6, 
> 6], +I[7, 7, 7], +I[7, 7, 7], +I[8, 8, 8], +I[8, 8, 8], +I[9, 9, 9], +I[9, 9, 
> 9], +I[10, 0, 0], +I[10, 0, 0], +I[11, 1, 1], +I[11, 1, 1], +I[12, 2, 2], 
> +I[12, 2, 2], +I[13, 3, 3], +I[13, 3, 3], +I[14, 4, 4], +I[14, 4, 4], +I[15, 
> 5, 5], +I[15, 5, 5], +I[16, 6, 6], +I[16, 6, 6], +I[17, 7, 7], +I[17, 7, 7], 
> +I[18, 8, 8], +I[18, 8, 8], +I[19, 9, 9], +I[19, 9, 9], +I[20, 0, 0], +I[20, 
> 0, 0], +I[21, 1, 1], +I[21, 1, 1], +I[22, 2, 2], +I[22, 2, 2], +I[23, 3, 3], 
> +I[23, 3, 3], +I[24, 4, 4], +I[24, 4, 4], +I[25, 5, 5], +I[25, 5, 5], +I[26, 
> 6, 6], +I[26, 6, 6], +I[27, 7, 7], +I[27, 7, 7], +I[28, 8, 8], +I[28, 8, 8], 
> +I[29, 9, 9], +I[29, 9, 9], +I[30, 0, 0], +I[30, 0, 0], +I[31, 1, 1], +I[31, 
> 1, 1], +I[32, 2, 2], +I[32, 2, 2], +I[33, 3, 3], +I[33, 3, 3], +I[34, 4, 4], 
> +I[34, 4, 4], +I[35, 5, 5], +I[35, 5, 5], +I[36, 6, 6], +I[36, 6, 6], +I[37, 
> 7, 7], +I[37, 7, 7], +I[38, 8, 8], +I[38, 8, 8], +I[39, 9, 9], +I[39, 9, 9], 
> +I[40, 0, 0], +I[40, 0, 0], +I[41, 1, 1], +I[41, 1, 1], +I[42, 2, 2], +I[42, 
> 2, 2], +I[43, 3, 3], +I[43, 3, 3], +I[44, 4, 4], +I[44, 4, 4], +I[45, 5, 5], 
> +I[45, 5, 5], +I[46, 6, 6], +I[46, 6, 6], +I[47, 7, 7], +I[47, 7, 7], +I[48, 
> 8, 8], +I[48, 8, 8], +I[49, 9, 9], +I[49, 9, 9], +I[50, 0, 0], +I[50, 0, 0], 
> +I[51, 1, 1], +I[51, 1, 1], +I[52, 2, 2], +I[52, 2, 2], +I[53, 3, 3], +I[53, 
> 3, 3], +I[54, 4, 4], +I[54, 4, 4], +I[55, 5, 5], +I[55, 5, 5], +I[56, 6, 6], 
> +I[56, 6, 6], +I[57, 7, 7], +I[57, 7, 7], +I[58, 8, 8], +I[58, 8, 8], +I[59, 
> 9, 9], +I[59, 9, 9], +I[60, 0, 0], +I[60, 0, 0], +I[61, 1, 1], +I[61, 1, 1], 
> +I[62, 2, 2], +I[62, 2, 2], +I[63, 3, 3], +I[63, 3, 3], +I[64, 4, 4], +I[64, 
> 4, 4], +I[65, 5, 5], +I[65, 5, 5], +I[66, 6, 6], +I[66, 6, 6], +I[67, 7, 7], 
> +I[67, 7, 7], +I[68, 8, 8], +I[68, 8, 8], +I[69, 9, 9], +I[69, 9, 9], +I[70, 
> 0, 0], +I[70, 0, 0], +I[71, 1, 1], +I[71, 1, 1], +I[72, 2, 2], +I[72, 2, 2], 
> +I[73, 3, 3], +I[73, 3, 3], +I[74, 4, 4], +I[74, 4, 4], +I[75, 5, 5], +I[75, 
> 5, 5], +I[76, 6, 6], +I[76, 6, 6], +I[77, 7, 7], +I[77, 7, 7], +I[78, 8, 8], 
> +I[78, 8, 8], +I[79, 9, 9], +I[79, 9, 9], +I[80, 0, 0], +I[80, 0, 0], +I[81, 
> 1, 1], +I[81, 1, 1], +I[82, 2, 2], +I[82, 2, 2], +I[83, 3, 3], +I[83, 3, 3], 
> +I[84, 4, 4], +I[84, 4, 4], +I[85, 5, 5], +I[85, 5, 5], +I[86, 6, 6], +I[86, 
> 6, 6], +I[87, 7, 7], +I[87, 7, 7], +I[88, 8, 8], +I[88, 8, 8], +I[89, 9, 9], 
> +I[89, 9, 9], +I[90, 0, 0], +I[90, 0, 0], +I[91, 1, 1], +I[91, 1, 1], +I[92, 
> 2, 2], +I[92, 2, 2], +I[93, 3, 3], +I[93, 3, 3], +I[94, 4, 4], +I[94, 4, 4], 
> +I[95, 5, 5], +I[95, 5, 5], +I[96, 6, 6], +I[96, 6, 6], +I[97, 7, 7], +I[97, 
> 7, 7], +I[98, 8, 8], +I[98, 8, 8], +I[99, 9, 9], +I[99, 9, 9]]> but 
> was:<[+I[0, 0, 0], +I[1, 1, 1], +I[2, 2, 2], +I[3, 3, 3], +I[4, 4, 4], +I[5, 
> 5, 5], +I[6, 6, 6], +I[7, 7, 7], +I[8, 8, 8], +I[9, 9, 9], +I[10, 0, 0], 
> +I[11, 1, 1], +I[12, 2, 2], +I[13, 3, 3], +I[14, 4, 4], +I[15, 5, 5], +I[16, 
> 6, 6], +I[17, 7, 7], +I[18, 8, 8], +I[19, 9, 9], +I[20, 0, 0], +I[21, 1, 1], 
> +I[22, 2, 2], +I[23, 3, 3], +I[24, 4, 4], +I[25, 5, 5], +I[26, 6, 6], +I[27, 
> 7, 7], +I[28, 8, 8], +I[29, 9, 9], +I[30, 0, 0], +I[31, 1, 1], +I[32, 2, 2], 
> +I[33, 3, 3], +I[34, 4, 4], +I[35, 5, 5], +I[36, 6, 6], +I[37, 7, 7], +I[38, 
> 8, 8], +I[39, 9, 9], +I[40, 0, 0], +I[41, 1, 1], +I[42, 2, 2], +I[43, 3, 3], 
> +I[44, 4, 4], +I[45, 5, 5], +I[46, 6, 6], +I[47, 7, 7], +I[48, 8, 8], +I[49, 
> 9, 9], +I[50, 0, 0], +I[51, 1, 1], +I[52, 2, 2], +I[53, 3, 3], +I[54, 4, 4], 
> +I[55, 5, 5], +I[56, 

[jira] [Updated] (FLINK-22366) HiveSinkCompactionITCase fails on azure

2021-09-12 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22366:
-
Affects Version/s: 1.12.5

> HiveSinkCompactionITCase fails on azure
> ---
>
> Key: FLINK-22366
> URL: https://issues.apache.org/jira/browse/FLINK-22366
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.13.0, 1.12.5
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16818&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23420
> {code}
>  [ERROR] testNonPartition[format = 
> sequencefile](org.apache.flink.connectors.hive.HiveSinkCompactionITCase)  
> Time elapsed: 4.999 s  <<< FAILURE!
> Apr 19 22:25:10 java.lang.AssertionError: expected:<[+I[0, 0, 0], +I[0, 0, 
> 0], +I[1, 1, 1], +I[1, 1, 1], +I[2, 2, 2], +I[2, 2, 2], +I[3, 3, 3], +I[3, 3, 
> 3], +I[4, 4, 4], +I[4, 4, 4], +I[5, 5, 5], +I[5, 5, 5], +I[6, 6, 6], +I[6, 6, 
> 6], +I[7, 7, 7], +I[7, 7, 7], +I[8, 8, 8], +I[8, 8, 8], +I[9, 9, 9], +I[9, 9, 
> 9], +I[10, 0, 0], +I[10, 0, 0], +I[11, 1, 1], +I[11, 1, 1], +I[12, 2, 2], 
> +I[12, 2, 2], +I[13, 3, 3], +I[13, 3, 3], +I[14, 4, 4], +I[14, 4, 4], +I[15, 
> 5, 5], +I[15, 5, 5], +I[16, 6, 6], +I[16, 6, 6], +I[17, 7, 7], +I[17, 7, 7], 
> +I[18, 8, 8], +I[18, 8, 8], +I[19, 9, 9], +I[19, 9, 9], +I[20, 0, 0], +I[20, 
> 0, 0], +I[21, 1, 1], +I[21, 1, 1], +I[22, 2, 2], +I[22, 2, 2], +I[23, 3, 3], 
> +I[23, 3, 3], +I[24, 4, 4], +I[24, 4, 4], +I[25, 5, 5], +I[25, 5, 5], +I[26, 
> 6, 6], +I[26, 6, 6], +I[27, 7, 7], +I[27, 7, 7], +I[28, 8, 8], +I[28, 8, 8], 
> +I[29, 9, 9], +I[29, 9, 9], +I[30, 0, 0], +I[30, 0, 0], +I[31, 1, 1], +I[31, 
> 1, 1], +I[32, 2, 2], +I[32, 2, 2], +I[33, 3, 3], +I[33, 3, 3], +I[34, 4, 4], 
> +I[34, 4, 4], +I[35, 5, 5], +I[35, 5, 5], +I[36, 6, 6], +I[36, 6, 6], +I[37, 
> 7, 7], +I[37, 7, 7], +I[38, 8, 8], +I[38, 8, 8], +I[39, 9, 9], +I[39, 9, 9], 
> +I[40, 0, 0], +I[40, 0, 0], +I[41, 1, 1], +I[41, 1, 1], +I[42, 2, 2], +I[42, 
> 2, 2], +I[43, 3, 3], +I[43, 3, 3], +I[44, 4, 4], +I[44, 4, 4], +I[45, 5, 5], 
> +I[45, 5, 5], +I[46, 6, 6], +I[46, 6, 6], +I[47, 7, 7], +I[47, 7, 7], +I[48, 
> 8, 8], +I[48, 8, 8], +I[49, 9, 9], +I[49, 9, 9], +I[50, 0, 0], +I[50, 0, 0], 
> +I[51, 1, 1], +I[51, 1, 1], +I[52, 2, 2], +I[52, 2, 2], +I[53, 3, 3], +I[53, 
> 3, 3], +I[54, 4, 4], +I[54, 4, 4], +I[55, 5, 5], +I[55, 5, 5], +I[56, 6, 6], 
> +I[56, 6, 6], +I[57, 7, 7], +I[57, 7, 7], +I[58, 8, 8], +I[58, 8, 8], +I[59, 
> 9, 9], +I[59, 9, 9], +I[60, 0, 0], +I[60, 0, 0], +I[61, 1, 1], +I[61, 1, 1], 
> +I[62, 2, 2], +I[62, 2, 2], +I[63, 3, 3], +I[63, 3, 3], +I[64, 4, 4], +I[64, 
> 4, 4], +I[65, 5, 5], +I[65, 5, 5], +I[66, 6, 6], +I[66, 6, 6], +I[67, 7, 7], 
> +I[67, 7, 7], +I[68, 8, 8], +I[68, 8, 8], +I[69, 9, 9], +I[69, 9, 9], +I[70, 
> 0, 0], +I[70, 0, 0], +I[71, 1, 1], +I[71, 1, 1], +I[72, 2, 2], +I[72, 2, 2], 
> +I[73, 3, 3], +I[73, 3, 3], +I[74, 4, 4], +I[74, 4, 4], +I[75, 5, 5], +I[75, 
> 5, 5], +I[76, 6, 6], +I[76, 6, 6], +I[77, 7, 7], +I[77, 7, 7], +I[78, 8, 8], 
> +I[78, 8, 8], +I[79, 9, 9], +I[79, 9, 9], +I[80, 0, 0], +I[80, 0, 0], +I[81, 
> 1, 1], +I[81, 1, 1], +I[82, 2, 2], +I[82, 2, 2], +I[83, 3, 3], +I[83, 3, 3], 
> +I[84, 4, 4], +I[84, 4, 4], +I[85, 5, 5], +I[85, 5, 5], +I[86, 6, 6], +I[86, 
> 6, 6], +I[87, 7, 7], +I[87, 7, 7], +I[88, 8, 8], +I[88, 8, 8], +I[89, 9, 9], 
> +I[89, 9, 9], +I[90, 0, 0], +I[90, 0, 0], +I[91, 1, 1], +I[91, 1, 1], +I[92, 
> 2, 2], +I[92, 2, 2], +I[93, 3, 3], +I[93, 3, 3], +I[94, 4, 4], +I[94, 4, 4], 
> +I[95, 5, 5], +I[95, 5, 5], +I[96, 6, 6], +I[96, 6, 6], +I[97, 7, 7], +I[97, 
> 7, 7], +I[98, 8, 8], +I[98, 8, 8], +I[99, 9, 9], +I[99, 9, 9]]> but 
> was:<[+I[0, 0, 0], +I[1, 1, 1], +I[2, 2, 2], +I[3, 3, 3], +I[4, 4, 4], +I[5, 
> 5, 5], +I[6, 6, 6], +I[7, 7, 7], +I[8, 8, 8], +I[9, 9, 9], +I[10, 0, 0], 
> +I[11, 1, 1], +I[12, 2, 2], +I[13, 3, 3], +I[14, 4, 4], +I[15, 5, 5], +I[16, 
> 6, 6], +I[17, 7, 7], +I[18, 8, 8], +I[19, 9, 9], +I[20, 0, 0], +I[21, 1, 1], 
> +I[22, 2, 2], +I[23, 3, 3], +I[24, 4, 4], +I[25, 5, 5], +I[26, 6, 6], +I[27, 
> 7, 7], +I[28, 8, 8], +I[29, 9, 9], +I[30, 0, 0], +I[31, 1, 1], +I[32, 2, 2], 
> +I[33, 3, 3], +I[34, 4, 4], +I[35, 5, 5], +I[36, 6, 6], +I[37, 7, 7], +I[38, 
> 8, 8], +I[39, 9, 9], +I[40, 0, 0], +I[41, 1, 1], +I[42, 2, 2], +I[43, 3, 3], 
> +I[44, 4, 4], +I[45, 5, 5], +I[46, 6, 6], +I[47, 7, 7], +I[48, 8, 8], +I[49, 
> 9, 9], +I[50, 0, 0], +I[51, 1, 1], +I[52, 2, 2], +I[53, 3, 3], +I[54, 4, 4], 
> +I[55, 5, 5], +I[56, 6, 6], +I[57, 7, 7], +I[58, 8, 8], +I[59, 9, 9], +I[60, 
> 0, 0], +I[61, 1, 1], +I[62, 2, 2], +I[63, 3, 3], +I[64, 4, 4], +I[65, 5, 5], 
> +I[66, 6, 6], +I[67, 7, 7], 

[jira] [Commented] (FLINK-24221) Translate "JAR Statements" page of "SQL" into Chinese

2021-09-12 Thread wuguihu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413838#comment-17413838
 ] 

wuguihu commented on FLINK-24221:
-

Hi, [~jark]  :D
Excuse me for taking up your time.
I have finished this ticket. Would you please review it for me?
Thank you very much!

> Translate "JAR Statements" page of "SQL" into Chinese
> -
>
> Key: FLINK-24221
> URL: https://issues.apache.org/jira/browse/FLINK-24221
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: wuguihu
>Assignee: wuguihu
>Priority: Minor
>  Labels: pull-request-available
>
> [https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/jar/]
> docs/content.zh/docs/dev/table/sql/jar.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24220) Translate "RESET Statements" page of "SQL" into Chinese

2021-09-12 Thread wuguihu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413837#comment-17413837
 ] 

wuguihu commented on FLINK-24220:
-

Hi, [~jark]  :D
Excuse me for taking up your time.
I have finished this ticket. Would you please review it for me?
Thank you very much!

> Translate "RESET Statements" page of "SQL" into Chinese
> ---
>
> Key: FLINK-24220
> URL: https://issues.apache.org/jira/browse/FLINK-24220
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: wuguihu
>Assignee: wuguihu
>Priority: Minor
>  Labels: pull-request-available
>
> [https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/reset/]
> docs/content.zh/docs/dev/table/sql/reset.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24218) Translate "UNLOAD Statements" page of "SQL" into Chinese

2021-09-12 Thread wuguihu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413835#comment-17413835
 ] 

wuguihu commented on FLINK-24218:
-

Hi, [~jark]  :D
Excuse me for taking up your time.
I have finished this ticket. Would you please review it for me?
Thank you very much!

> Translate "UNLOAD Statements" page of "SQL" into Chinese
> 
>
> Key: FLINK-24218
> URL: https://issues.apache.org/jira/browse/FLINK-24218
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: wuguihu
>Assignee: wuguihu
>Priority: Minor
>  Labels: pull-request-available
>
> [https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/unload/]
> docs/content.zh/docs/dev/table/sql/unload.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24219) Translate "SET Statements" page of "SQL" into Chinese

2021-09-12 Thread wuguihu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17413836#comment-17413836
 ] 

wuguihu commented on FLINK-24219:
-

Hi, [~jark]  :D
Excuse me for taking up your time.
I have finished this ticket. Would you please review it for me?
Thank you very much!

> Translate "SET Statements" page of "SQL" into Chinese
> -
>
> Key: FLINK-24219
> URL: https://issues.apache.org/jira/browse/FLINK-24219
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: wuguihu
>Assignee: wuguihu
>Priority: Minor
>  Labels: pull-request-available
>
> [https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/set/]
> docs/content.zh/docs/dev/table/sql/set.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

