[GitHub] [flink] tianon commented on pull request #14630: [FLINK-20915][docker] Move docker entrypoint to distribution

2021-01-15 Thread GitBox


tianon commented on pull request #14630:
URL: https://github.com/apache/flink/pull/14630#issuecomment-761165429


   I agree that the Docker image being a complete black box is a bad thing, 
which is exactly why I'm confused by this PR.  Let me try to rephrase what I'm 
suggesting:
   
   Take a look at the `docker-entrypoint.sh` as it stands today, and identify 
what problems it solves that are _not_ currently solved by the existing `flink` 
wrapper script(s) that are shipped with the upstream Flink distribution.
   
   Determine what makes sense to add to the Flink distribution to help with 
those use cases that are currently not being serviced, and use the integration 
of those changes into `docker-entrypoint.sh` as a "barometer" of sorts for 
whether you're succeeding at decreasing the amount of above-and-beyond 
scripting necessary to accomplish these core Flink use cases (because at the 
end of the day, the length of `docker-entrypoint.sh` is demonstrating that 
there are gaps in how Flink is expected to be run).
   
   I am not suggesting that the code in the current entrypoint script should be 
simply moved elsewhere, because that doesn't actually solve the underlying 
problem that the way the Flink distribution officially expects to be run and 
the way it's being run have diverged.
   
   As a concrete example, if someone downloads the binary distribution of Flink 
and tries to run it on a VM, are they expected to set an appropriate 
`CLASSPATH` variable?  Why or why not?  How can we improve the experience for 
them in such a way that it also improves the experience for Docker?  
(Alternatively, if they are not, why does the Docker image need to, and how can 
we bring those two usages into a more symbiotic alignment?)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14594: [FLINK-20551][docs] Make SQL documentation Blink only

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14594:
URL: https://github.com/apache/flink/pull/14594#issuecomment-756890853


   
   ## CI report:
   
   * 2062801ba381ee60fedb9c8ce07b6000dd485ab7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11824)
 
   * bdfba9d2127fd0f843c3d57a953decec82eb15ff UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14619: [FLINK-20862][table] Support creating DataType from TypeInformation

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14619:
URL: https://github.com/apache/flink/pull/14619#issuecomment-758635914


   
   ## CI report:
   
   * 421fbf52f2ec18801b430ddeec0be5332a56788f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12129)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14594: [FLINK-20551][docs] Make SQL documentation Blink only

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14594:
URL: https://github.com/apache/flink/pull/14594#issuecomment-756890853


   
   ## CI report:
   
   * bdfba9d2127fd0f843c3d57a953decec82eb15ff Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12131)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14610: [FLINK-20909][table-planner-blink] Fix deduplicate mini-batch interval infer

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14610:
URL: https://github.com/apache/flink/pull/14610#issuecomment-758033846


   
   ## CI report:
   
   * 384aa1d095bc02c376ec2a0af43501a19e2a81ef Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12123)
 
   * 0482293fe8ca9fdd51defaa6db8e86b35dac1c5f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12135)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-20996) Using a cryptographically weak Pseudo Random Number Generator (PRNG)

2021-01-15 Thread Ya Xiao (Jira)
Ya Xiao created FLINK-20996:
---

 Summary: Using a cryptographically weak Pseudo Random Number 
Generator (PRNG)
 Key: FLINK-20996
 URL: https://issues.apache.org/jira/browse/FLINK-20996
 Project: Flink
  Issue Type: Improvement
Reporter: Ya Xiao


We are a security research team at Virginia Tech conducting an empirical study 
on the usefulness of existing security vulnerability detection tools. The 
following is a vulnerability reported by some of these tools. We would greatly 
appreciate any feedback on it.

*Vulnerability Description:*

{color:#172b4d}In file 
flink/flink-end-to-end-tests/flink-stream-state-ttl-test/src/main/java/org/apache/flink/streaming/tests/verify/AbstractTtlStateVerifier.java{color},
 use java.util.Random instead of java.security.SecureRandom at Line 39.

*Security Impact:*

java.util.Random is not cryptographically strong and may expose sensitive 
information to certain types of attacks when used in a security context.

*Useful Resources*:

[https://cwe.mitre.org/data/definitions/338.html]

*Solution we suggest:*

Replace it with java.security.SecureRandom

*Please share with us your opinions/comments if there is any:*

Is the bug report helpful?
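For context, the suggested change is a one-line swap: SecureRandom extends java.util.Random, so it is a drop-in replacement at existing call sites. Whether it is warranted depends on whether the random value is security-sensitive (in a test verifier such as AbstractTtlStateVerifier it likely is not). A minimal sketch with hypothetical names:

```java
import java.security.SecureRandom;
import java.util.Random;

public class RandomFix {
    // java.util.Random is predictable from its seed; SecureRandom draws from
    // an OS entropy source and is suitable for security-sensitive values.
    static String token(Random rng, int bytes) {
        byte[] buf = new byte[bytes];
        rng.nextBytes(buf);
        StringBuilder sb = new StringBuilder();
        for (byte b : buf) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        // Drop-in replacement: SecureRandom extends java.util.Random,
        // so existing call sites keep compiling unchanged.
        Random weak = new Random(42);        // reproducible -- fine for tests
        Random strong = new SecureRandom();  // unpredictable -- for security contexts
        System.out.println(token(strong, 16));
    }
}
```

Note that SecureRandom can be slower than java.util.Random, which is one reason test code often keeps the weaker generator deliberately.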



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2021-01-15 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17266490#comment-17266490
 ] 

Huang Xingbo commented on FLINK-20329:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12124&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20]

 

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #14666: [FLINK-17085][kubernetes] Remove FlinkKubeClient.handleException

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14666:
URL: https://github.com/apache/flink/pull/14666#issuecomment-760926129


   
   ## CI report:
   
   * 8dc1342fec505ced26932fae17a394b456b2dac2 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12130)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-19519) Add support for port range in taskmanager.data.port

2021-01-15 Thread Truong Duc Kien (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17266472#comment-17266472
 ] 

Truong Duc Kien commented on FLINK-19519:
-

Sure. I'll do it.

> Add support for port range in taskmanager.data.port
> ---
>
> Key: FLINK-19519
> URL: https://issues.apache.org/jira/browse/FLINK-19519
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration, Runtime / Coordination
>Affects Versions: 1.11.2
>Reporter: Truong Duc Kien
>Priority: Minor
>
> Flink should add support for port range in {{taskmanager.data.port}}
> This is very helpful when running in restrictive network environments, and 
> port ranges are already available for many other port settings.
>  
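To make the request concrete: range support for a port option usually means walking the configured range and binding the first free port. A minimal self-contained sketch (hypothetical helper, not Flink's actual implementation):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortRangeBind {
    // Walk an inclusive low..high range and bind the first free port,
    // mirroring how Flink's other port options accept ranges.
    static ServerSocket bindFirstFree(int low, int high) throws IOException {
        for (int port = low; port <= high; port++) {
            try {
                return new ServerSocket(port);
            } catch (IOException portTaken) {
                // port already in use; try the next candidate
            }
        }
        throw new IOException("No free port in range " + low + "-" + high);
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket s = bindFirstFree(50100, 50200)) {
            System.out.println("bound " + s.getLocalPort());
        }
    }
}
```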





[jira] [Updated] (FLINK-20996) Using a cryptographically weak Pseudo Random Number Generator (PRNG)

2021-01-15 Thread Ya Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ya Xiao updated FLINK-20996:

Description: 
We are a security research team at Virginia Tech. We are doing an empirical 
study about the usefulness of the existing security vulnerability detection 
tools. The following is a reported vulnerability by certain tools. We'll so 
appreciate it if you can give any feedback on it.

*Vulnerability Description:*

{color:#172b4d}In file 
{color}[flink/flink-end-to-end-tests/flink-stream-state-ttl-test/src/main/java/org/apache/flink/streaming/tests/verify/AbstractTtlStateVerifier.java,|https://github.com/apache/flink/blob/97bfd049951f8d52a2e0aed14265074c4255ead0/flink-end-to-end-tests/flink-stream-state-ttl-test/src/main/java/org/apache/flink/streaming/tests/verify/AbstractTtlStateVerifier.java]
 use java.util.Random instead of java.security.SecureRandom at Line 39.

*Security Impact:*

Java.util.Random is not cryptographically strong and may expose sensitive 
information to certain types of attacks when used in a security context.

*Useful Resources*:

[https://cwe.mitre.org/data/definitions/338.html]

*Solution we suggest:*

Replace it with SecureRandom

*Please share with us your opinions/comments if there is any:*

Is the bug report helpful?

  was:
We are a security research team at Virginia Tech. We are doing an empirical 
study about the usefulness of the existing security vulnerability detection 
tools. The following is a reported vulnerability by certain tools. We'll so 
appreciate it if you can give any feedback on it.

*Vulnerability Description:*

{color:#172b4d}In file 
flink/flink-end-to-end-tests/flink-stream-state-ttl-test/src/main/java/org/apache/flink/streaming/tests/verify/AbstractTtlStateVerifier.java{color},
 use java.util.Random instead of java.security.SecureRandom at Line 39.

*Security Impact:*

Java.util.Random is not cryptographically strong and may expose sensitive 
information to certain types of attacks when used in a security context.

*Useful Resources*:

[https://cwe.mitre.org/data/definitions/338.html]

*Solution we suggest:*

Replace it with SecureRandom

*Please share with us your opinions/comments if there is any:*

Is the bug report helpful?


> Using a cryptographically weak Pseudo Random Number Generator (PRNG)
> 
>
> Key: FLINK-20996
> URL: https://issues.apache.org/jira/browse/FLINK-20996
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ya Xiao
>Priority: Major
>
> We are a security research team at Virginia Tech. We are doing an empirical 
> study about the usefulness of the existing security vulnerability detection 
> tools. The following is a reported vulnerability by certain tools. We'll so 
> appreciate it if you can give any feedback on it.
> *Vulnerability Description:*
> {color:#172b4d}In file 
> {color}[flink/flink-end-to-end-tests/flink-stream-state-ttl-test/src/main/java/org/apache/flink/streaming/tests/verify/AbstractTtlStateVerifier.java,|https://github.com/apache/flink/blob/97bfd049951f8d52a2e0aed14265074c4255ead0/flink-end-to-end-tests/flink-stream-state-ttl-test/src/main/java/org/apache/flink/streaming/tests/verify/AbstractTtlStateVerifier.java]
>  use java.util.Random instead of java.security.SecureRandom at Line 39.
> *Security Impact:*
> Java.util.Random is not cryptographically strong and may expose sensitive 
> information to certain types of attacks when used in a security context.
> *Useful Resources*:
> [https://cwe.mitre.org/data/definitions/338.html]
> *Solution we suggest:*
> Replace it with SecureRandom
> *Please share with us your opinions/comments if there is any:*
> Is the bug report helpful?





[jira] [Commented] (FLINK-20990) Service account property ignored for Kubernetes Standalone deployment

2021-01-15 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17266513#comment-17266513
 ] 

Yang Wang commented on FLINK-20990:
---

Thanks for sharing the result. I believe it could also help others.

> Service account property ignored for Kubernetes Standalone deployment
> -
>
> Key: FLINK-20990
> URL: https://issues.apache.org/jira/browse/FLINK-20990
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: Damian G
>Priority: Major
>
> We're using Kubernetes Standalone solution to deploy Flink on Kubernetes 
> cluster. We created helm chart resources with following documentation: 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/resource-providers/standalone/kubernetes.html]
> The problem is that on 'production' environment the default service account 
> is restricted from creating configmaps. I added 
> _kubernetes.jobmanager.service-account_ property to flink-conf.yml to use 
> different service account, but the error still says that the 'default' 
> service account has no permission to create config maps. I'm trying to 
> reproduce this on my local Kubernetes cluster, so:
> I'm creating ClusterRoleBinding for ClusterRole 'view' and assign it to 
> 'flink-sa' service account in order to check if the creation of configmaps is 
> now impossible
> In flink-conf.yaml I'm adding property 
> _kubernetes.jobmanager.service-account: flink-sa_
> The cluster still creates configmaps and works correctly - meaning it doesn't 
> use read-only service account I provided for it.
> Therefore I cannot change service account that Flink is using on 'production' 
> environment - it will always use the default one.
> Shouldn't the option to configure which service account Flink deployment is 
> using work for both Native Kubernetes deployment and Standalone Kubernetes 
> deployment?





[GitHub] [flink] flinkbot edited a comment on pull request #14610: [FLINK-20909][table-planner-blink] Fix deduplicate mini-batch interval infer

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14610:
URL: https://github.com/apache/flink/pull/14610#issuecomment-758033846


   
   ## CI report:
   
   * 384aa1d095bc02c376ec2a0af43501a19e2a81ef Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12123)
 
   * 0482293fe8ca9fdd51defaa6db8e86b35dac1c5f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-20972) TwoPhaseCommitSinkFunction Output a large amount of EventData

2021-01-15 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17265793#comment-17265793
 ] 

Yun Gao commented on FLINK-20972:
-

The core issue, it seems to me, is that TwoPhaseCommitSinkFunction is a common 
base class, and different implementations might have different requirements on 
the log. Some users may indeed want to output the content of the handle in some 
way, and I think we'd better not prevent them from doing so. Keeping toString() 
therefore seems a proper way to achieve this.
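To illustrate the point: an implementation that wants to keep toString() informative without flooding the log can summarize the handle's payload instead of dumping it. A minimal sketch with hypothetical names:

```java
import java.util.Arrays;
import java.util.List;

public class BoundedTransaction {
    // Hypothetical payload carried by the transaction handle.
    final List<String> pendingRows;

    BoundedTransaction(List<String> rows) {
        this.pendingRows = rows;
    }

    // Summarize instead of dumping every buffered row, so that
    // TwoPhaseCommitSinkFunction's LOG.info of the holder stays small.
    @Override
    public String toString() {
        return "BoundedTransaction{pendingRows=" + pendingRows.size() + " rows}";
    }

    public static void main(String[] args) {
        BoundedTransaction t =
                new BoundedTransaction(Arrays.asList("row1", "row2", "row3"));
        System.out.println(t);  // prints the row count, not the row contents
    }
}
```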

> TwoPhaseCommitSinkFunction Output a large amount of EventData
> -
>
> Key: FLINK-20972
> URL: https://issues.apache.org/jira/browse/FLINK-20972
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.12.0
> Environment: flink 1.4.0 +
>Reporter: huajiewang
>Priority: Minor
>  Labels: easyfix, pull-request-available
> Attachments: 1610682498960.jpg, 1610682603148.jpg, 
> Jdbc2PCSinkFunction.scala
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> In TwoPhaseCommitSinkFunction, a large amount of event data may be output 
> via log.info, which can cause an I/O bottleneck and waste disk space.
>  
> My code is in the attachment; a large amount of event data appears in the 
> log output by Flink, e.g.:
> {code:java}
> Jdbc2PCSinkFunction 1/1 - checkpoint 4 complete, committing transaction 
> TransactionHolde {handle=Transaction(b420c880a951403984f231dd7e33597b, 
> ListBuffer(insert into table(field1,field2) value ('11','22') ... ... ), 
> transactionStartTime=1610426158532} from checkpoint 4{code}
> in TwoPhaseCommitSinkFunction about LOG.info code is as follows:
> !1610682498960.jpg|width=838,height=630!
> {code:java}
> LOG.info(
> "{} - checkpoint {} complete, committing transaction {} from 
> checkpoint {}",
> name(),
> checkpointId,
> pendingTransaction,
> pendingTransactionCheckpointId); {code}
> This invokes pendingTransaction's toString method (pendingTransaction is a 
> TransactionHolder instance).
> TransactionHolder's toString method is:
> !1610682603148.jpg|width=859,height=327!
> {code:java}
> @Override
> public String toString() {
> return "TransactionHolder{"
> + "handle="
> +  handle
> + ", transactionStartTime="
> + transactionStartTime
> + '}';
> }{code}
> handle is the concrete realization of my Transaction. My Transaction contains 
> a List-typed field used to receive data; as a result, all of this data is 
> printed out via log.info.
>   
>   
>  





[GitHub] [flink] twalthr opened a new pull request #14660: [FLINK-20883][table-api-java] Support calling SQL expressions in Table API

2021-01-15 Thread GitBox


twalthr opened a new pull request #14660:
URL: https://github.com/apache/flink/pull/14660


   ## What is the purpose of the change
   
   This supports calling SQL expressions in Table API. It introduces `callSql` 
which can be used as a regular scalar expression.
   
   ## Brief change log
   
   - Introduces `callSql` in all APIs
   - Updates several locations and visitors to deal with expressions coming 
from the planner
   
   ## Verifying this change
   
   This change added tests and can be verified as follows: `MiscFunctionITCase`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? docs
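A usage sketch, assuming callSql is exposed as a static method in org.apache.flink.table.api.Expressions as described above; details may differ from the merged API:

```java
import static org.apache.flink.table.api.Expressions.callSql;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class CallSqlExample {
    public static void main(String[] args) {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.newInstance().build());
        Table names = env.fromValues("alice", "bob").as("name");
        // Embed a SQL scalar expression inside a Table API pipeline.
        Table upper = names.select(callSql("UPPER(name)"));
        upper.execute().print();
    }
}
```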
   







[GitHub] [flink] rmetzger merged pull request #14450: [FLINK-19971][hbase] Fix HBase connector dependencies

2021-01-15 Thread GitBox


rmetzger merged pull request #14450:
URL: https://github.com/apache/flink/pull/14450


   







[jira] [Assigned] (FLINK-19971) flink-connector-hbase-base depends on hbase-server, instead of hbase-client

2021-01-15 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-19971:
--

Assignee: Miklos Gergely

> flink-connector-hbase-base depends on hbase-server, instead of hbase-client
> ---
>
> Key: FLINK-19971
> URL: https://issues.apache.org/jira/browse/FLINK-19971
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Reporter: Robert Metzger
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available
>
> flink-connector-hbase-base depends on hbase-server, instead of hbase-client 
> and hbase-common.
> I believe hbase-server is only needed for starting a Minicluster and 
> therefore should be in the test scope.
> This was introduced by FLINK-1928.





[jira] [Commented] (FLINK-20833) Expose pluggable interface for exception analysis and metrics reporting in Execution Graph

2021-01-15 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17265795#comment-17265795
 ] 

Zhenqiu Huang commented on FLINK-20833:
---

[~rmetzger]
Thanks for these suggestions. 
1) I think the name ExceptionListener is more reasonable.
2) Yes, the implementation can be loaded via a service provider; as long as the 
implementation is on Flink's classpath, it can be loaded.
3) I prefer to use Flink's metrics system.

I did a PoC based on what we agreed. Please review it:
https://github.com/HuangZhenQiu/flink/commit/903c7746217c0cb91a2eff15a72de873ad48a5e7
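To make the discussed interface concrete, a minimal self-contained sketch of such a listener (hypothetical names, not the actual PoC interface):

```java
import java.net.ConnectException;

public class ExceptionClassifierDemo {
    // Hypothetical plugin interface: classify a job failure so a platform
    // can emit per-category reliability metrics.
    interface ExceptionListener {
        String classify(Throwable failure);
    }

    static class SimpleClassifier implements ExceptionListener {
        @Override
        public String classify(Throwable failure) {
            // Walk the cause chain so wrapped exceptions are classified too.
            for (Throwable t = failure; t != null; t = t.getCause()) {
                if (t instanceof ConnectException) return "networking";
                if (t instanceof OutOfMemoryError) return "resources";
            }
            return "user-code"; // default bucket
        }
    }

    public static void main(String[] args) {
        ExceptionListener listener = new SimpleClassifier();
        Throwable wrapped =
                new RuntimeException("task failed", new ConnectException("refused"));
        System.out.println(listener.classify(wrapped));  // -> networking
    }
}
```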








> Expose pluggable interface for  exception analysis and metrics reporting in 
> Execution Graph
> ---
>
> Key: FLINK-20833
> URL: https://issues.apache.org/jira/browse/FLINK-20833
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Zhenqiu Huang
>Priority: Minor
>
> Platform users of Apache Flink usually want to classify the failure reason 
> (for example: user code, networking, dependencies, etc.) for Flink jobs and 
> emit metrics for those analyzed results, so that the platform can provide an 
> accurate measure of system reliability by distinguishing failures caused by 
> user logic from system issues.





[jira] [Updated] (FLINK-19971) flink-connector-hbase-base depends on hbase-server, instead of hbase-client

2021-01-15 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-19971:
---
Fix Version/s: 1.13.0

> flink-connector-hbase-base depends on hbase-server, instead of hbase-client
> ---
>
> Key: FLINK-19971
> URL: https://issues.apache.org/jira/browse/FLINK-19971
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Reporter: Robert Metzger
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> flink-connector-hbase-base depends on hbase-server, instead of hbase-client 
> and hbase-common.
> I believe hbase-server is only needed for starting a Minicluster and 
> therefore should be in the test scope.
> This was introduced by FLINK-1928.





[jira] [Closed] (FLINK-19971) flink-connector-hbase-base depends on hbase-server, instead of hbase-client

2021-01-15 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-19971.
--
Resolution: Fixed

Merged to master in 
https://github.com/apache/flink/commit/7fe1bca22a2093113ff268ef6e33d89cea1c4d48

Thanks for addressing this!

> flink-connector-hbase-base depends on hbase-server, instead of hbase-client
> ---
>
> Key: FLINK-19971
> URL: https://issues.apache.org/jira/browse/FLINK-19971
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Reporter: Robert Metzger
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> flink-connector-hbase-base depends on hbase-server, instead of hbase-client 
> and hbase-common.
> I believe hbase-server is only needed for starting a Minicluster and 
> therefore should be in the test scope.
> This was introduced by FLINK-1928.





[GitHub] [flink] flinkbot commented on pull request #14660: [FLINK-20883][table-api-java] Support calling SQL expressions in Table API

2021-01-15 Thread GitBox


flinkbot commented on pull request #14660:
URL: https://github.com/apache/flink/pull/14660#issuecomment-760725035


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 08a9fff38a4698b216fbc9cd5c25c7875db0e827 (Fri Jan 15 
08:04:39 UTC 2021)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14621: [FLINK-20933][python] Config Python Operator Use Managed Memory In Python DataStream

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14621:
URL: https://github.com/apache/flink/pull/14621#issuecomment-75873


   
   ## CI report:
   
   * dab22fa1d5e18201415322cbd928767223258e16 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12026)
 
   * e0aa9de7e35bc1ac79a3d938454a9b5368445c69 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14645: [FLINK-20967][docs] HBase properties 'properties.*' function Description

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14645:
URL: https://github.com/apache/flink/pull/14645#issuecomment-760128893


   
   ## CI report:
   
   * 130e09ba6abaec1b647f89fed3aee7bb364977da Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12099)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14640: [FLINK-20937][Table API/SQL]Drop table can fails, when i create a tab…

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14640:
URL: https://github.com/apache/flink/pull/14640#issuecomment-760055828


   
   ## CI report:
   
   * 17ac8f48c293e42e7c5e63c1724112048fc57827 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12087)
 
   * f913de48135dfcd55733736432a234707959975a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Reopened] (FLINK-20906) Update copyright year to 2021 for NOTICE files

2021-01-15 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reopened FLINK-20906:
--

Reopen for 1.10 branch.

> Update copyright year to 2021 for NOTICE files
> --
>
> Key: FLINK-20906
> URL: https://issues.apache.org/jira/browse/FLINK-20906
> Project: Flink
>  Issue Type: Task
>  Components: Release System
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.3, 1.13.0, 1.11.4, 1.12.1
>
>






[jira] [Updated] (FLINK-20906) Update copyright year to 2021 for NOTICE files

2021-01-15 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-20906:
-
Fix Version/s: 1.11.3

> Update copyright year to 2021 for NOTICE files
> --
>
> Key: FLINK-20906
> URL: https://issues.apache.org/jira/browse/FLINK-20906
> Project: Flink
>  Issue Type: Task
>  Components: Release System
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.3, 1.13.0, 1.11.4, 1.12.1
>
>






[jira] [Commented] (FLINK-6042) Display last n exceptions/causes for job restarts in Web UI

2021-01-15 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265799#comment-17265799
 ] 

Matthias commented on FLINK-6042:
-

{quote}Thanks for the proposal [~mapohl]. I have a few comments

1) When doing restarts with the new scheduler, then we will recreate the 
{{ExecutionGraph}}. Hence, exposing these error infos on the 
{{AccessExecutionGraph}} might not work.
{quote}
My plan was to not expose the {{ErrorInfo}} through {{ExecutionGraph}} but 
through {{ArchivedExecutionGraph}} which is a kind of serializable copy of 
{{ExecutionGraph}} and also implements {{AccessExecutionGraph}}. Restarting the 
{{ExecutionGraph}} shouldn't harm the {{ErrorInfo}} collection as it is held in 
the {{SchedulerNG}} implementation. The {{ErrorInfo}} collection would be used 
as an additional parameter when instantiating {{ArchivedExecutionGraph}} in 
[SchedulerBase.requestJob()|https://github.com/apache/flink/blob/ac968b83675e64712b4d35dbc166e09808c2156b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L800].

{quote}2) {{UpdateSchedulerNgOnInternalFailuresListener}} will only be called 
if an exception on the JM occurs. If there is a normal task failure, then we 
will call {{updateTaskExecutionState}}
{quote}
Thanks for the hint: {{SchedulerNG.updateTaskExecutionState(..)}} should work 
as well. This would then collect all the {{Exception}}s thrown in the different 
tasks. We could then utilize the {{JobStatusListener}} interface to identify 
the {{Exception}} that causes the job to restart and group all previously 
collected {{ErrorInfo}}s (that were not already assigned to a different root 
cause) under this root cause.

{quote}3) It would be great to group the exceptions wrt to their restart cycles 
in the web UI. So seeing the root causes for a restart and then being able to 
expand the view to see the task failures for this specific restart would be 
awesome.
{quote}
The {{ErrorInfo}} groups mentioned in 2) could be then returned through the 
newly introduced access method described in 1) and forwarded by the 
{{JobExceptionsHandler}} to the web UI.
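The grouping proposed above (collect task failures, then assign them to the root cause of each restart cycle, keeping only a bounded history) can be sketched roughly as follows. This is only an illustrative sketch; the class and method names ({{FailureCollector}}, {{ExceptionHistoryEntry}}, {{onRestart}}) are hypothetical and not Flink's actual API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch: collect per-task failures and group them under the
// root cause that triggered a restart, keeping only the last n entries.
class FailureCollector {
    static class ExceptionHistoryEntry {
        final Throwable rootCause;
        final List<Throwable> concurrentFailures = new ArrayList<>();

        ExceptionHistoryEntry(Throwable rootCause) {
            this.rootCause = rootCause;
        }
    }

    private final int maxEntries;
    private final Deque<ExceptionHistoryEntry> history = new ArrayDeque<>();
    private final List<Throwable> unassigned = new ArrayList<>();

    FailureCollector(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    // Called for every task failure (cf. updateTaskExecutionState).
    void onTaskFailure(Throwable failure) {
        unassigned.add(failure);
    }

    // Called when the job actually restarts (cf. JobStatusListener): all
    // failures collected since the last restart are grouped under the root
    // cause of this restart cycle.
    void onRestart(Throwable rootCause) {
        ExceptionHistoryEntry entry = new ExceptionHistoryEntry(rootCause);
        entry.concurrentFailures.addAll(unassigned);
        unassigned.clear();
        if (history.size() == maxEntries) {
            history.removeFirst(); // evict oldest, like an EvictingBoundedList
        }
        history.addLast(entry);
    }

    List<ExceptionHistoryEntry> snapshot() {
        return new ArrayList<>(history);
    }
}
```

A snapshot of this bounded history is what could be handed to {{ArchivedExecutionGraph}} and served through the {{JobExceptionsHandler}}.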

> Display last n exceptions/causes for job restarts in Web UI
> ---
>
> Key: FLINK-6042
> URL: https://issues.apache.org/jira/browse/FLINK-6042
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination, Runtime / Web Frontend
>Affects Versions: 1.3.0
>Reporter: Till Rohrmann
>Assignee: Matthias
>Priority: Major
>
> Users requested that it would be nice to see the last {{n}} exceptions 
> causing a job restart in the Web UI. This will help to more easily debug and 
> operate a job.
> We could store the root causes for failures similar to how prior executions 
> are stored in the {{ExecutionVertex}} using the {{EvictingBoundedList}} and 
> then serve this information via the Web UI.





[jira] [Comment Edited] (FLINK-20833) Expose pluggable interface for exception analysis and metrics reporting in Execution Graph

2021-01-15 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265795#comment-17265795
 ] 

Zhenqiu Huang edited comment on FLINK-20833 at 1/15/21, 8:07 AM:
-

[~rmetzger]
Thanks for these suggestions. 
1) I think the name of ExceptionListener is more reasonable. 
2) Yes, the implementation can be loaded via the service provider mechanism. As 
long as the implementation is on Flink's classpath, it can be loaded.
3) I prefer to use Flink's metrics system.

I did a poc on the agreement we have. Please review it. If we agree on the 
basic interface, I will further add test cases to enhance the PR.
https://github.com/HuangZhenQiu/flink/commit/903c7746217c0cb91a2eff15a72de873ad48a5e7
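The service-provider loading mentioned in 2) could look roughly like the sketch below. The {{ExceptionListener}} name follows the comment above, but the interface shape and the registry class are illustrative assumptions, not Flink's actual plugin API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Hypothetical listener interface: implementations classify a failure
// (e.g. "user code" vs. "networking") so metrics can be emitted per class.
interface ExceptionListener {
    String classify(Throwable failure);
}

class ExceptionListenerRegistry {
    // Discovers implementations on the classpath via the standard
    // META-INF/services provider-configuration mechanism.
    static List<ExceptionListener> load() {
        List<ExceptionListener> listeners = new ArrayList<>();
        for (ExceptionListener l : ServiceLoader.load(ExceptionListener.class)) {
            listeners.add(l);
        }
        if (listeners.isEmpty()) {
            // Fall back to a trivial classifier when no provider is present.
            listeners.add(failure -> failure instanceof java.net.SocketException
                    ? "networking" : "unknown");
        }
        return listeners;
    }
}
```

With this approach, a platform team only needs to drop a jar containing an implementation and its provider-configuration file onto the classpath; no Flink code change is required.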









was (Author: zhenqiuhuang):
[~rmetzger]
Thanks for these suggestions. 
1) I think the name of ExceptionListener is more reasonable. 
2) Yes, the implementation can be loaded in service provider. As long as the 
implementation is in the flink's classpath, it can be loaded.
3) I prefer to use Flink's metrics system.

I did a poc on the agreement we have. Please review it.
https://github.com/HuangZhenQiu/flink/commit/903c7746217c0cb91a2eff15a72de873ad48a5e7








> Expose pluggable interface for  exception analysis and metrics reporting in 
> Execution Graph
> ---
>
> Key: FLINK-20833
> URL: https://issues.apache.org/jira/browse/FLINK-20833
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Zhenqiu Huang
>Priority: Minor
>
> For platform users of Apache flink, people usually want to classify the 
> failure reason( for example user code, networking, dependencies and etc) for 
> Flink jobs and emit metrics for those analyzed results. So that platform can 
> provide an accurate value for system reliability by distinguishing the 
> failure due to user logic from the system issues. 





[GitHub] [flink] xintongsong opened a new pull request #14661: [FLINK-20906][legal] Update copyright year to 2021 for NOTICE files.

2021-01-15 Thread GitBox


xintongsong opened a new pull request #14661:
URL: https://github.com/apache/flink/pull/14661


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[jira] [Comment Edited] (FLINK-6042) Display last n exceptions/causes for job restarts in Web UI

2021-01-15 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265799#comment-17265799
 ] 

Matthias edited comment on FLINK-6042 at 1/15/21, 8:08 AM:
---

{quote}Thanks for the proposal [~mapohl]. I have a few comments

1) When doing restarts with the new scheduler, then we will recreate the 
{{ExecutionGraph}}. Hence, exposing these error infos on the 
{{AccessExecutionGraph}} might not work.
{quote}
My plan was to not expose the {{ErrorInfo}} through {{ExecutionGraph}} but 
through {{ArchivedExecutionGraph}} which is a kind of serializable copy of 
{{ExecutionGraph}} and also implements {{AccessExecutionGraph}}. Restarting the 
{{ExecutionGraph}} shouldn't harm the {{ErrorInfo}} collection as it is held in 
the {{SchedulerNG}} implementation. The {{ErrorInfo}} collection would be used 
as an additional parameter when instantiating {{ArchivedExecutionGraph}} in 
[SchedulerBase.requestJob()|https://github.com/apache/flink/blob/ac968b83675e64712b4d35dbc166e09808c2156b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L800].

{quote}2) {{UpdateSchedulerNgOnInternalFailuresListener}} will only be called 
if an exception on the JM occurs. If there is a normal task failure, then we 
will call {{updateTaskExecutionState}}
{quote}
Thanks for the hint: {{SchedulerNG.updateTaskExecutionState(..)}} should work 
as well. This would then collect all the {{Exception}}s thrown in the different 
tasks. We could then utilize the {{JobStatusListener}} interface to identify 
the {{Exception}} that causes the job to restart and group all previously 
collected {{ErrorInfo}}s (that were not already assigned to a different root 
cause) under this root cause. Here, I'm not 100% sure, yet, whether triggering 
the consolidation of the {{ErrorInfo}}s works that easily.

{quote}3) It would be great to group the exceptions wrt to their restart cycles 
in the web UI. So seeing the root causes for a restart and then being able to 
expand the view to see the task failures for this specific restart would be 
awesome.
{quote}
The {{ErrorInfo}} groups mentioned in 2) could be then returned through the 
newly introduced access method described in 1) and forwarded by the 
{{JobExceptionsHandler}} to the web UI.


was (Author: mapohl):
{quote}Thanks for the proposal [~mapohl]. I have a few comments

1) When doing restarts with the new scheduler, then we will recreate the 
{{ExecutionGraph}}. Hence, exposing these error infos on the 
{{AccessExecutionGraph}} might not work.
{quote}
My plan was to not expose the {{ErrorInfo}} through {{ExecutionGraph}} but 
through {{ArchivedExecutionGraph}} which is a kind of serializable copy of 
{{ExecutionGraph}} and also implements {{AccessExecutionGraph}}. Restarting the 
{{ExecutionGraph}} shouldn't harm the {{ErrorInfo}} collection as it is held in 
the {{SchedulerNG}} implementation. The {{ErrorInfo}} collection would be used 
as an additional parameter when instantiating {{ArchivedExecutionGraph}} in 
[SchedulerBase.requestJob()|https://github.com/apache/flink/blob/ac968b83675e64712b4d35dbc166e09808c2156b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L800].

{quote}2) {{UpdateSchedulerNgOnInternalFailuresListener}} will only be called 
if an exception on the JM occurs. If there is a normal task failure, then we 
will call {{updateTaskExecutionState}}
{quote}
Thanks for the hint: {{SchedulerNG.updateTaskExecutionState(..)}} should work 
as well. This would then collect all the {{Exception}}s thrown in the different 
tasks. We could then utilize the {{JobStatusListener}} interface to identify 
the {{Exception}} that causes the job to restart and group all previously 
collected {{ErrorInfo}}s (that were not already assigned to a different root 
cause) under this root cause.

{quote}3) It would be great to group the exceptions wrt to their restart cycles 
in the web UI. So seeing the root causes for a restart and then being able to 
expand the view to see the task failures for this specific restart would be 
awesome.
{quote}
The {{ErrorInfo}} groups mentioned in 2) could be then returned through the 
newly introduced access method described in 1) and forwarded by the 
{{JobExceptionsHandler}} to the web UI.

> Display last n exceptions/causes for job restarts in Web UI
> ---
>
> Key: FLINK-6042
> URL: https://issues.apache.org/jira/browse/FLINK-6042
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination, Runtime / Web Frontend
>Affects Versions: 1.3.0
>Reporter: Till Rohrmann
>Assignee: Matthias
>Priority: Major
>
> Users requested that it would be nice to see the last {{n}} exceptions 
> causing a job restart in the Web UI. This will help to more easily debug and 
> operate a job.

[jira] [Comment Edited] (FLINK-6042) Display last n exceptions/causes for job restarts in Web UI

2021-01-15 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265799#comment-17265799
 ] 

Matthias edited comment on FLINK-6042 at 1/15/21, 8:09 AM:
---

{quote}Thanks for the proposal [~mapohl]. I have a few comments

1) When doing restarts with the new scheduler, then we will recreate the 
{{ExecutionGraph}}. Hence, exposing these error infos on the 
{{AccessExecutionGraph}} might not work.
{quote}
My plan was to not expose the {{ErrorInfo}} through {{ExecutionGraph}} but 
through {{ArchivedExecutionGraph}} which is a kind of serializable copy of 
{{ExecutionGraph}} and also implements {{AccessExecutionGraph}}. Restarting the 
{{ExecutionGraph}} shouldn't harm the {{ErrorInfo}} collection as it is held in 
the {{SchedulerNG}} implementation. The {{ErrorInfo}} collection would be used 
as an additional parameter when instantiating {{ArchivedExecutionGraph}} in 
[SchedulerBase.requestJob()|https://github.com/apache/flink/blob/ac968b83675e64712b4d35dbc166e09808c2156b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L800].
{quote}2) {{UpdateSchedulerNgOnInternalFailuresListener}} will only be called 
if an exception on the JM occurs. If there is a normal task failure, then we 
will call {{updateTaskExecutionState}}
{quote}
Thanks for the hint: {{SchedulerNG.updateTaskExecutionState(..)}} should work 
as well. This would then collect all the {{Exceptions}} thrown in the different 
tasks. We could then utilize the {{JobStatusListener}} interface to identify 
the {{Exception}} that causes the job to restart and group all previously 
collected {{ErrorInfo}}s (that were not already assigned to a different root 
cause) under this root cause. Here, I'm not 100% sure, yet, whether triggering 
the consolidation of the {{ErrorInfo}}s works that easily.
{quote}3) It would be great to group the exceptions wrt to their restart cycles 
in the web UI. So seeing the root causes for a restart and then being able to 
expand the view to see the task failures for this specific restart would be 
awesome.
{quote}
The {{ErrorInfo}} groups mentioned in 2) could be then returned through the 
newly introduced access method described in 1) and forwarded by the 
{{JobExceptionsHandler}} to the web UI.


was (Author: mapohl):
{quote}Thanks for the proposal [~mapohl]. I have a few comments

1) When doing restarts with the new scheduler, then we will recreate the 
{{ExecutionGraph}}. Hence, exposing these error infos on the 
{{AccessExecutionGraph}} might not work.
{quote}
My plan was to not expose the {{ErrorInfo}} through {{ExecutionGraph}} but 
through {{ArchivedExecutionGraph}} which is a kind of serializable copy of 
{{ExecutionGraph}} and also implements {{AccessExecutionGraph}}. Restarting the 
{{ExecutionGraph}} shouldn't harm the {{ErrorInfo}} collection as it is held in 
the {{SchedulerNG}} implementation. The {{ErrorInfo}} collection would be used 
as an additional parameter when instantiating {{ArchivedExecutionGraph}} in 
[SchedulerBase.requestJob()|https://github.com/apache/flink/blob/ac968b83675e64712b4d35dbc166e09808c2156b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L800].

{quote}2) {{UpdateSchedulerNgOnInternalFailuresListener}} will only be called 
if an exception on the JM occurs. If there is a normal task failure, then we 
will call {{updateTaskExecutionState}}
{quote}
Thanks for the hint: {{SchedulerNG.updateTaskExecutionState(..)}} should work 
as well. This would then collect all the {{Exceptions}} thrown in the different 
tasks. We could then utilize the {{JobStatusListener}} interface to identify 
the {{Exception}} that causes the job to restart and group all previously 
collected {{ErrorInfo}}s (that were not already assigned to a different root 
cause) under this root cause. Here, I'm not 100% sure, yet, whether triggering 
the consolidation of the {{ErrorInfo}}s works that easily.

{quote}3) It would be great to group the exceptions wrt to their restart cycles 
in the web UI. So seeing the root causes for a restart and then being able to 
expand the view to see the task failures for this specific restart would be 
awesome.
{quote}
The {{ErrorInfo}} groups mentioned in 2) could be then returned through the 
newly introduced access method described in 1) and forwarded by the 
{{JobExceptionsHandler}} to the web UI.

> Display last n exceptions/causes for job restarts in Web UI
> ---
>
> Key: FLINK-6042
> URL: https://issues.apache.org/jira/browse/FLINK-6042
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination, Runtime / Web Frontend
>Affects Versions: 1.3.0
>Reporter: Till Rohrmann
>Assignee: Matthias
>Priority: Major
>
> Users requested that it would be nice to see the last {{n}} exceptions 
> causing a job restart in the Web UI. This will help to more easily debug and 
> operate a job.

[jira] [Comment Edited] (FLINK-6042) Display last n exceptions/causes for job restarts in Web UI

2021-01-15 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265799#comment-17265799
 ] 

Matthias edited comment on FLINK-6042 at 1/15/21, 8:09 AM:
---

{quote}Thanks for the proposal [~mapohl]. I have a few comments

1) When doing restarts with the new scheduler, then we will recreate the 
{{ExecutionGraph}}. Hence, exposing these error infos on the 
{{AccessExecutionGraph}} might not work.
{quote}
My plan was to not expose the {{ErrorInfo}} through {{ExecutionGraph}} but 
through {{ArchivedExecutionGraph}} which is a kind of serializable copy of 
{{ExecutionGraph}} and also implements {{AccessExecutionGraph}}. Restarting the 
{{ExecutionGraph}} shouldn't harm the {{ErrorInfo}} collection as it is held in 
the {{SchedulerNG}} implementation. The {{ErrorInfo}} collection would be used 
as an additional parameter when instantiating {{ArchivedExecutionGraph}} in 
[SchedulerBase.requestJob()|https://github.com/apache/flink/blob/ac968b83675e64712b4d35dbc166e09808c2156b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L800].

{quote}2) {{UpdateSchedulerNgOnInternalFailuresListener}} will only be called 
if an exception on the JM occurs. If there is a normal task failure, then we 
will call {{updateTaskExecutionState}}
{quote}
Thanks for the hint: {{SchedulerNG.updateTaskExecutionState(..)}} should work 
as well. This would then collect all the {{Exceptions}} thrown in the different 
tasks. We could then utilize the {{JobStatusListener}} interface to identify 
the {{Exception}} that causes the job to restart and group all previously 
collected {{ErrorInfo}}s (that were not already assigned to a different root 
cause) under this root cause. Here, I'm not 100% sure, yet, whether triggering 
the consolidation of the {{ErrorInfo}}s works that easily.

{quote}3) It would be great to group the exceptions wrt to their restart cycles 
in the web UI. So seeing the root causes for a restart and then being able to 
expand the view to see the task failures for this specific restart would be 
awesome.
{quote}
The {{ErrorInfo}} groups mentioned in 2) could be then returned through the 
newly introduced access method described in 1) and forwarded by the 
{{JobExceptionsHandler}} to the web UI.


was (Author: mapohl):
{quote}Thanks for the proposal [~mapohl]. I have a few comments

1) When doing restarts with the new scheduler, then we will recreate the 
{{ExecutionGraph}}. Hence, exposing these error infos on the 
{{AccessExecutionGraph}} might not work.
{quote}
My plan was to not expose the {{ErrorInfo}} through {{ExecutionGraph}} but 
through {{ArchivedExecutionGraph}} which is a kind of serializable copy of 
{{ExecutionGraph}} and also implements {{AccessExecutionGraph}}. Restarting the 
{{ExecutionGraph}} shouldn't harm the {{ErrorInfo}} collection as it is held in 
the {{SchedulerNG}} implementation. The {{ErrorInfo}} collection would be used 
as an additional parameter when instantiating {{ArchivedExecutionGraph}} in 
[SchedulerBase.requestJob()|https://github.com/apache/flink/blob/ac968b83675e64712b4d35dbc166e09808c2156b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L800].

{quote}2) {{UpdateSchedulerNgOnInternalFailuresListener}} will only be called 
if an exception on the JM occurs. If there is a normal task failure, then we 
will call {{updateTaskExecutionState}}
{quote}
Thanks for the hint: {{SchedulerNG.updateTaskExecutionState(..)}} should work 
as well. This would then collect all the {{Exception}}s thrown in the different 
tasks. We could then utilize the {{JobStatusListener}} interface to identify 
the {{Exception}} that causes the job to restart and group all previously 
collected {{ErrorInfo}}s (that were not already assigned to a different root 
cause) under this root cause. Here, I'm not 100% sure, yet, whether triggering 
the consolidation of the {{ErrorInfo}}s works that easily.

{quote}3) It would be great to group the exceptions wrt to their restart cycles 
in the web UI. So seeing the root causes for a restart and then being able to 
expand the view to see the task failures for this specific restart would be 
awesome.
{quote}
The {{ErrorInfo}} groups mentioned in 2) could be then returned through the 
newly introduced access method described in 1) and forwarded by the 
{{JobExceptionsHandler}} to the web UI.

> Display last n exceptions/causes for job restarts in Web UI
> ---
>
> Key: FLINK-6042
> URL: https://issues.apache.org/jira/browse/FLINK-6042
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination, Runtime / Web Frontend
>Affects Versions: 1.3.0
>Reporter: Till Rohrmann
>Assignee: Matthias
>Priority: Major
>
> Users requested that it would be nice to see the last {{n}} exceptions 
> causing a job restart in the Web UI. This will help to more easily debug and 
> operate a job.

[GitHub] [flink] flinkbot commented on pull request #14661: [FLINK-20906][legal] Update copyright year to 2021 for NOTICE files.

2021-01-15 Thread GitBox


flinkbot commented on pull request #14661:
URL: https://github.com/apache/flink/pull/14661#issuecomment-760732637


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b3861a8b686190d10f6847e29474b3ef23f3b46c (Fri Jan 15 
08:10:32 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Commented] (FLINK-18280) Kotlin adapters for Flink types?

2021-01-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265803#comment-17265803
 ] 

Robert Metzger commented on FLINK-18280:


Cool! Thanks for sharing.

I would recommend you to submit this to https://flink-packages.org/ as well, 
for increased visibility! 

> Kotlin adapters for Flink types?
> 
>
> Key: FLINK-18280
> URL: https://issues.apache.org/jira/browse/FLINK-18280
> Project: Flink
>  Issue Type: Wish
>  Components: API / DataStream
>Affects Versions: 1.10.1
>Reporter: Marshall Pierce
>Priority: Minor
>
> Currently, using a Kotlin lambda for, say, a {{KeySelector}} doesn't work – 
> it needs to be an {{object}} expression for the runtime type detection to 
> work. At my day job we have started building up a handful of wrappers, like 
> this one for {{KeySelector}}:
> {code}
> inline fun <T, K> keySelector(crossinline block: (T) -> K): KeySelector<T, K> {
>     return object : KeySelector<T, K> {
>         override fun getKey(value: T): K {
>             return block(value)
>         }
>     }
> }
> {code}
> Usage looks like: {code}keySelector { it.fooId }{code}
> Surely not the only way to solve that problem, but it's been working smoothly 
> for us so far.
> Is there any interested in shipping these sorts of extensions as part of the 
> Flink project so users don't need to write them? It could be a wholly 
> separate artifact (or rather multiple artifacts, as there would probably be 
> one for flink core, one for flink streaming, etc). 





[jira] [Commented] (FLINK-20972) TwoPhaseCommitSinkFunction Output a large amount of EventData

2021-01-15 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265802#comment-17265802
 ] 

Yun Gao commented on FLINK-20972:
-

And one more point, perhaps unrelated to this issue: to implement an 
exactly-once sink to a database, an ordinary JDBC transaction is not enough, 
since a JDBC transaction is generally aborted when its connection closes. 
Thus, once there is a failover, the connection to the database is closed and 
all pre-committed JDBC transactions are lost. Implementing a JDBC exactly-once 
sink might require XA transactions (and the level of support in each database 
needs to be checked). More information is available in [this 
discussion|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-JDBC-exactly-once-sink-td36424.html#a36431]
 and [this PR|https://github.com/apache/flink/pull/10847/files].
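The failure mode described above can be sketched without a real database. The class below is purely illustrative (it is not Flink's or any JDBC driver's API): "prepared" state is persisted outside the connection, so a commit issued after a reconnect still succeeds, which is the property the XA prepare/commit split provides and a plain JDBC transaction lacks.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: why an exactly-once sink needs prepare state that
// survives connection loss (as XA provides), unlike a plain JDBC transaction.
public class PreparedTxStore {
    // Prepared transactions live in durable storage, not in the connection.
    private final Map<String, String> prepared = new HashMap<>();

    // Phase 1: pre-commit persists the pending work under a transaction id.
    public void prepare(String txId, String statement) {
        prepared.put(txId, statement);
    }

    // Simulate a failover: the connection dies, but prepared state remains.
    public void connectionLost() {
        // A plain JDBC transaction would be rolled back here and the
        // pre-committed work lost; the prepared map deliberately survives.
    }

    // Phase 2: after recovery, the coordinator commits by transaction id.
    public String commit(String txId) {
        return prepared.remove(txId);
    }

    public static void main(String[] args) {
        PreparedTxStore store = new PreparedTxStore();
        store.prepare("tx-1", "insert into t(a) values (1)");
        store.connectionLost();
        // The pre-committed statement is still committable after the failover.
        System.out.println(store.commit("tx-1"));
    }
}
```

A real implementation would use `javax.transaction.xa.XAResource` against an `XADataSource`; this sketch only shows the recovery property being discussed.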

> TwoPhaseCommitSinkFunction Output a large amount of EventData
> -
>
> Key: FLINK-20972
> URL: https://issues.apache.org/jira/browse/FLINK-20972
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.12.0
> Environment: flink 1.4.0 +
>Reporter: huajiewang
>Priority: Minor
>  Labels: easyfix, pull-request-available
> Attachments: 1610682498960.jpg, 1610682603148.jpg, 
> Jdbc2PCSinkFunction.scala
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> In TwoPhaseCommitSinkFunction, a large amount of event data may be written 
> to the log (log.info), which can cause an IO bottleneck and waste disk space.
>  
>  My code is in the attachment; a large amount of event data appears in the 
> log output by Flink, e.g.: 
> {code:java}
> Jdbc2PCSinkFunction 1/1 - checkpoint 4 complete, committing transaction 
> TransactionHolder{handle=Transaction(b420c880a951403984f231dd7e33597b, 
> ListBuffer(insert into table(field1,field2) value ('11','22') ... ... ), 
> transactionStartTime=1610426158532} from checkpoint 4{code}
> The LOG.info call in TwoPhaseCommitSinkFunction is as follows:
> !1610682498960.jpg|width=838,height=630!
> {code:java}
> LOG.info(
> "{} - checkpoint {} complete, committing transaction {} from 
> checkpoint {}",
> name(),
> checkpointId,
> pendingTransaction,
> pendingTransactionCheckpointId); {code}
> This invokes pendingTransaction's toString() method (pendingTransaction is a 
> TransactionHolder instance). 
> The TransactionHolder toString() method is:
> !1610682603148.jpg|width=859,height=327!
> {code:java}
> @Override
> public String toString() {
> return "TransactionHolder{"
> + "handle="
> +  handle
> + ", transactionStartTime="
> + transactionStartTime
> + '}';
> }{code}
>  handle is the concrete implementation of my Transaction. My Transaction has 
> a field of List type, which is used to collect data; as a result, all of 
> that data is printed out (log.info).
>   
>   
>  
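The problem the reporter describes can be avoided entirely on the user side: have the transaction's toString() summarize the buffered data instead of printing every record. A hypothetical sketch (class and field names are illustrative, not Flink API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical user transaction: toString() reports only the batch size,
// so TwoPhaseCommitSinkFunction's LOG.info no longer dumps every record.
public class JdbcTransaction {
    private final String txId;
    private final List<String> bufferedStatements = new ArrayList<>();

    public JdbcTransaction(String txId) {
        this.txId = txId;
    }

    public void add(String statement) {
        bufferedStatements.add(statement);
    }

    @Override
    public String toString() {
        // Summarize instead of printing the whole buffer.
        return "JdbcTransaction{txId=" + txId
                + ", bufferedStatements=" + bufferedStatements.size() + "}";
    }

    public static void main(String[] args) {
        JdbcTransaction tx = new JdbcTransaction("b420c880");
        tx.add("insert into table(field1,field2) value ('11','22')");
        System.out.println(tx);
    }
}
```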



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xiaoHoly commented on a change in pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode, and

2021-01-15 Thread GitBox


xiaoHoly commented on a change in pull request #14531:
URL: https://github.com/apache/flink/pull/14531#discussion_r558004634



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSource.java
##
@@ -108,6 +108,15 @@
 return new KafkaSourceBuilder<>();
 }
 
+/**
+ * Returns the props for the Kafka Source.
+ *
+ * @return props for the Kafka Source.
+ */
+public Properties getProps() {

Review comment:
I think getProps() and getBoundedness() in KafkaSource are similar; we 
need to get the props from the KafkaSource object.
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xiaoHoly commented on a change in pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode, and

2021-01-15 Thread GitBox


xiaoHoly commented on a change in pull request #14531:
URL: https://github.com/apache/flink/pull/14531#discussion_r558006111



##
File path: 
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/KafkaSourceITCase.java
##
@@ -82,6 +82,57 @@ public void testBasicRead() throws Exception {
 executeAndVerify(env, stream);
 }
 
+@Test
+public void testParitionDiscoverySetting() throws Exception {

Review comment:
   It sounds pretty good





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14621: [FLINK-20933][python] Config Python Operator Use Managed Memory In Python DataStream

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14621:
URL: https://github.com/apache/flink/pull/14621#issuecomment-75873


   
   ## CI report:
   
   * dab22fa1d5e18201415322cbd928767223258e16 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12026)
 
   * e0aa9de7e35bc1ac79a3d938454a9b5368445c69 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12100)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14640: [FLINK-20937][Table API/SQL]Drop table can fails, when i create a tab…

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14640:
URL: https://github.com/apache/flink/pull/14640#issuecomment-760055828


   
   ## CI report:
   
   * 17ac8f48c293e42e7c5e63c1724112048fc57827 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12087)
 
   * f913de48135dfcd55733736432a234707959975a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12102)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14652: [FLINK-20953][canal][json] Option 'canal-json.database.include' and 'canal-json.table.include' support regular expression

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14652:
URL: https://github.com/apache/flink/pull/14652#issuecomment-760296852


   
   ## CI report:
   
   * 72601d907bc02eb3b74499a36f15f713406d911c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12088)
 
   * 307e51f7f0aec480cd035bee12764f8ec10be2d7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14660: [FLINK-20883][table-api-java] Support calling SQL expressions in Table API

2021-01-15 Thread GitBox


flinkbot commented on pull request #14660:
URL: https://github.com/apache/flink/pull/14660#issuecomment-760748649


   
   ## CI report:
   
   * 08a9fff38a4698b216fbc9cd5c25c7875db0e827 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14661: [FLINK-20906][legal] Update copyright year to 2021 for NOTICE files.

2021-01-15 Thread GitBox


flinkbot commented on pull request #14661:
URL: https://github.com/apache/flink/pull/14661#issuecomment-760748810


   
   ## CI report:
   
   * b3861a8b686190d10f6847e29474b3ef23f3b46c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong merged pull request #14645: [FLINK-20967][docs] HBase properties 'properties.*' function Description

2021-01-15 Thread GitBox


wuchong merged pull request #14645:
URL: https://github.com/apache/flink/pull/14645


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr commented on a change in pull request #14594: [FLINK-20551][docs] Make SQL documentation Blink only

2021-01-15 Thread GitBox


twalthr commented on a change in pull request #14594:
URL: https://github.com/apache/flink/pull/14594#discussion_r558006245



##
File path: docs/dev/table/common.md
##
@@ -116,173 +106,119 @@ table_result...
 
 
 
-**Note:** Table API and SQL queries can be easily integrated with and embedded 
into DataStream or DataSet programs. Have a look at the [Integration with 
DataStream and DataSet API](#integration-with-datastream-and-dataset-api) 
section to learn how DataStreams and DataSets can be converted into Tables and 
vice versa.
+**Note:** Table API and SQL queries can be easily integrated with and embedded 
into DataStream programs.
+Have a look at the [Integration with DataStream](#integration-with-datastream) 
section to learn how DataStreams can be converted into Tables and vice versa.
 
 {% top %}
 
 Create a TableEnvironment
 -
 
-The `TableEnvironment` is a central concept of the Table API and SQL 
integration. It is responsible for:
+The `TableEnvironment` is the entrypoint for Table API and SQL integration and 
is responsible for:
 
 * Registering a `Table` in the internal catalog
 * Registering catalogs
 * Loading pluggable modules
 * Executing SQL queries
 * Registering a user-defined (scalar, table, or aggregation) function
-* Converting a `DataStream` or `DataSet` into a `Table`
-* Holding a reference to an `ExecutionEnvironment` or 
`StreamExecutionEnvironment`
-
-A `Table` is always bound to a specific `TableEnvironment`. It is not possible 
to combine tables of different TableEnvironments in the same query, e.g., to 
join or union them.
-
-A `TableEnvironment` is created by calling the static 
`BatchTableEnvironment.create()` or `StreamTableEnvironment.create()` method 
with a `StreamExecutionEnvironment` or an `ExecutionEnvironment` and an 
optional `TableConfig`. The `TableConfig` can be used to configure the 
`TableEnvironment` or to customize the query optimization and translation 
process (see [Query Optimization](#query-optimization)).
+* Converting a `DataStream` into a `Table`
+* Holding a reference to a `StreamExecutionEnvironment`
 
-Make sure to choose the specific planner 
`BatchTableEnvironment`/`StreamTableEnvironment` that matches your programming 
language.
-
-If both planner jars are on the classpath (the default behavior), you should 
explicitly set which planner to use in the current program.
+A `Table` is always bound to a specific `TableEnvironment`.
+It is not possible to combine tables of different TableEnvironments in the 
same query, e.g., to join or union them.
+A `TableEnvironment` is created by calling the static 
`TableEnvironment.create()` method.
 
 
 
 {% highlight java %}
-
-// **
-// FLINK STREAMING QUERY
-// **
-import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.table.api.EnvironmentSettings;
-import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.TableEnvironment;
 
-EnvironmentSettings fsSettings = 
EnvironmentSettings.newInstance().useOldPlanner().inStreamingMode().build();
-StreamExecutionEnvironment fsEnv = 
StreamExecutionEnvironment.getExecutionEnvironment();
-StreamTableEnvironment fsTableEnv = StreamTableEnvironment.create(fsEnv, 
fsSettings);
-// or TableEnvironment fsTableEnv = TableEnvironment.create(fsSettings);
+EnvironmentSettings settings = EnvironmentSettings
+.newInstance()
+.inStreamingMode()
+//.inBatchMode()
+.build();
 
-// **
-// FLINK BATCH QUERY
-// **
-import org.apache.flink.api.java.ExecutionEnvironment;
-import org.apache.flink.table.api.bridge.java.BatchTableEnvironment;
+TableEnvironment tEnv = TableEnvironment.create(settings);
+{% endhighlight %}
+
+
+{% highlight scala %}
+import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}
 
-ExecutionEnvironment fbEnv = ExecutionEnvironment.getExecutionEnvironment();
-BatchTableEnvironment fbTableEnv = BatchTableEnvironment.create(fbEnv);
+val settings = EnvironmentSettings
+.newInstance()
+.inStreamingMode()
+//.inBatchMode()
+.build()
 
-// **
-// BLINK STREAMING QUERY
-// **
-import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
-import org.apache.flink.table.api.EnvironmentSettings;
-import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+val tEnv = TableEnvironment.create(settings)
+{% endhighlight %}
+
+
+{% highlight python %}
+# ***
+# Streaming QUERY
+# ***
 
-StreamExecutionEnvironment bsEnv = 
StreamExecutionEnvironment.getExecutionEnvironment();
-EnvironmentSettings bsSettings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
-StreamTableEnvironment bsTableEnv = StreamTableEnvironment.create(bsEnv, 
bsSettings);
-// or TableEnvironment bsTableEnv = 

[GitHub] [flink] dixingxing0 commented on pull request #14634: [FLINK-20913][hive]Improve HiveConf creation

2021-01-15 Thread GitBox


dixingxing0 commented on pull request #14634:
URL: https://github.com/apache/flink/pull/14634#issuecomment-760749497


   > Thanks @dixingxing0 for working on this. I've left some comments.
   Thanks for the comments, I will address them.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20833) Expose pluggable interface for exception analysis and metrics reporting in Execution Graph

2021-01-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265814#comment-17265814
 ] 

Robert Metzger commented on FLINK-20833:


Thanks a lot for providing a PoC! This makes the discussion a lot easier!

Do you know if there's already a metric for the number of exceptions, and the 
time since the last exception?
If not, it might make sense to add this as a default listener implementation?

Secondly, we are currently working on adding another scheduler. Once that is 
implemented, not all schedulers will support the ExceptionListener. I'm 
wondering whether we should move the initialization to another location (into 
the JobMaster, and then pass the listener into the scheduler factory?)

Discovering this feature will be very difficult, because of the ServiceLoader. 
Let's make sure we add this to the documentation.

Lastly, I guess we can use Flink's 
{{PluginUtils.createPluginManagerFromRootFolder(flinkConfig)}}, to use the 
Plugin mechanism. This will create a separate classloader per 
{{ExceptionListener}}, avoiding dependency conflicts with Flink's classpath (I 
haven't used this myself, but from a quick look, this seems easy to use).
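Since discoverability via ServiceLoader is a concern raised above, a minimal sketch of the mechanism may help. The `ExceptionListener` interface here is a stand-in named after the one discussed in FLINK-20833, not an existing Flink class:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Stand-in listener SPI, named after the interface discussed in FLINK-20833.
interface ExceptionListener {
    void onException(Throwable t);
}

public class ListenerDiscovery {
    public static List<ExceptionListener> discover() {
        List<ExceptionListener> found = new ArrayList<>();
        // ServiceLoader scans META-INF/services/<interface-name> files on the
        // classpath; each line in such a file names an implementation class.
        for (ExceptionListener listener : ServiceLoader.load(ExceptionListener.class)) {
            found.add(listener);
        }
        return found;
    }

    public static void main(String[] args) {
        // With no META-INF/services registration on the classpath, discovery
        // is empty -- which is why the feature is hard to find undocumented.
        System.out.println("listeners found: " + discover().size());
    }
}
```

This also illustrates the documentation point: nothing is discovered unless the user knows to ship the `META-INF/services` registration file.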

> Expose pluggable interface for  exception analysis and metrics reporting in 
> Execution Graph
> ---
>
> Key: FLINK-20833
> URL: https://issues.apache.org/jira/browse/FLINK-20833
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Zhenqiu Huang
>Priority: Minor
>
> Platform users of Apache Flink usually want to classify the failure reason 
> (for example user code, networking, dependencies, etc.) for Flink jobs and 
> emit metrics for the analyzed results, so that the platform can provide an 
> accurate measure of system reliability by distinguishing failures due to 
> user logic from system issues. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dixingxing0 edited a comment on pull request #14634: [FLINK-20913][hive]Improve HiveConf creation

2021-01-15 Thread GitBox


dixingxing0 edited a comment on pull request #14634:
URL: https://github.com/apache/flink/pull/14634#issuecomment-760749497


   > Thanks @dixingxing0 for working on this. I've left some comments.
   
   Thanks for the comments, I'm addressing them.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dixingxing0 edited a comment on pull request #14634: [FLINK-20913][hive]Improve HiveConf creation

2021-01-15 Thread GitBox


dixingxing0 edited a comment on pull request #14634:
URL: https://github.com/apache/flink/pull/14634#issuecomment-760749497


   > Thanks @dixingxing0 for working on this. I've left some comments.
   Thanks for the comments, I'm addressing them.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-20967) Add documentation for the new introduced 'properties.*' option in HBase connector

2021-01-15 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-20967.
---
Resolution: Fixed

Fixed in master: ee1453cd96cfc0b9633677ff2f514bd7b9feffdb

> Add documentation for the new introduced 'properties.*' option in HBase 
> connector
> -
>
> Key: FLINK-20967
> URL: https://issues.apache.org/jira/browse/FLINK-20967
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / HBase, Documentation, Table SQL / Ecosystem
>Affects Versions: 1.13.0
>Reporter: WeiNan Zhao
>Assignee: WeiNan Zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> The HBase connector (versions 1.4, 2.2) can use 'properties.*' to add 
> configuration; documentation needs to be added.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20886) Add the option to get a threaddump on checkpoint timeouts

2021-01-15 Thread Nico Kruber (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-20886:

Labels: usability  (was: )

> Add the option to get a threaddump on checkpoint timeouts
> -
>
> Key: FLINK-20886
> URL: https://issues.apache.org/jira/browse/FLINK-20886
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Reporter: Nico Kruber
>Priority: Major
>  Labels: usability
>
> For debugging checkpoint timeouts, I was thinking about the following 
> addition to Flink:
> When a checkpoint times out and the async thread is still running, create a 
> thread dump [1] and either add this to the checkpoint stats, log it, or write 
> it out.
> This may help identify where the checkpoint is stuck (maybe a lock, could 
> also be in a third party lib like the FS connectors,...). It would give us 
> some insights into what the thread is currently doing.
> Limiting the scope of the threads would be nice but may not be possible in 
> the general case since additional threads (spawned by the FS connector lib, 
> or otherwise connected) may interact with the async thread(s) by e.g. going 
> through the same locks. Maybe we can reduce the thread dumps to all async 
> threads of the failed checkpoint + all threads that interact with it, e.g. 
> via locks?
> I'm also not sure whether the ability to have thread dumps or not should be 
> user-configurable (Could it contain sensitive information from other jobs if 
> you run a session cluster? Is that even relevant since we don't give 
> isolation guarantees anyway?). If it is configurable, it should be on by 
> default.
> [1] https://crunchify.com/how-to-generate-java-thread-dump-programmatically/
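The programmatic thread dump referenced in [1] can be done with the JDK's `ThreadMXBean`; whether Flink would log it, attach it to checkpoint stats, or write it out is the open design question. A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal programmatic thread dump via the JDK's ThreadMXBean.
public class ThreadDumper {
    public static String dump() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        // Arguments (true, true): include locked monitors and locked
        // ownable synchronizers where the JVM supports it.
        for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
            sb.append(info.toString());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Each ThreadInfo line carries the thread name, id, and state,
        // plus the stack trace and lock information.
        System.out.println(dump());
    }
}
```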



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20972) TwoPhaseCommitSinkFunction Output a large amount of EventData

2021-01-15 Thread huajiewang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265817#comment-17265817
 ] 

huajiewang commented on FLINK-20972:


[~gaoyunhaii] What you said is reasonable, but in my opinion this is a 
notification message output by the Flink framework itself, used to tell the 
user which checkpoint completed. However, there are currently no requirements 
or restrictions on the transaction type, so the user can define it freely, 
and a little carelessness will cause this problem unless the user is very 
familiar with the processing logic of this code. To effectively avoid this 
problem, if the user's transaction class is to participate in the output when 
a checkpoint completes, Flink could define an interface type (Transaction) 
for the user to implement. Therefore, I think this is an issue for 
Flink

> TwoPhaseCommitSinkFunction Output a large amount of EventData
> -
>
> Key: FLINK-20972
> URL: https://issues.apache.org/jira/browse/FLINK-20972
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.12.0
> Environment: flink 1.4.0 +
>Reporter: huajiewang
>Priority: Minor
>  Labels: easyfix, pull-request-available
> Attachments: 1610682498960.jpg, 1610682603148.jpg, 
> Jdbc2PCSinkFunction.scala
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> In TwoPhaseCommitSinkFunction, a large amount of event data may be written 
> to the log (log.info), which can cause an IO bottleneck and waste disk space.
>  
>  My code is in the attachment; a large amount of event data appears in the 
> log output by Flink, e.g.: 
> {code:java}
> Jdbc2PCSinkFunction 1/1 - checkpoint 4 complete, committing transaction 
> TransactionHolder{handle=Transaction(b420c880a951403984f231dd7e33597b, 
> ListBuffer(insert into table(field1,field2) value ('11','22') ... ... ), 
> transactionStartTime=1610426158532} from checkpoint 4{code}
> The LOG.info call in TwoPhaseCommitSinkFunction is as follows:
> !1610682498960.jpg|width=838,height=630!
> {code:java}
> LOG.info(
> "{} - checkpoint {} complete, committing transaction {} from 
> checkpoint {}",
> name(),
> checkpointId,
> pendingTransaction,
> pendingTransactionCheckpointId); {code}
> This invokes pendingTransaction's toString() method (pendingTransaction is a 
> TransactionHolder instance). 
> The TransactionHolder toString() method is:
> !1610682603148.jpg|width=859,height=327!
> {code:java}
> @Override
> public String toString() {
> return "TransactionHolder{"
> + "handle="
> +  handle
> + ", transactionStartTime="
> + transactionStartTime
> + '}';
> }{code}
>  handle is the concrete implementation of my Transaction. My Transaction has 
> a field of List type, which is used to collect data; as a result, all of 
> that data is printed out (log.info).
>   
>   
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-20728) Pulsar SinkFunction

2021-01-15 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-20728:
-

Assignee: Jianyun Zhao

> Pulsar SinkFunction
> ---
>
> Key: FLINK-20728
> URL: https://issues.apache.org/jira/browse/FLINK-20728
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Affects Versions: 1.13.0
>Reporter: Jianyun Zhao
>Assignee: Jianyun Zhao
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20728) Pulsar SinkFunction

2021-01-15 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-20728:
--
Fix Version/s: 1.13.0

> Pulsar SinkFunction
> ---
>
> Key: FLINK-20728
> URL: https://issues.apache.org/jira/browse/FLINK-20728
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Affects Versions: 1.13.0
>Reporter: Jianyun Zhao
>Assignee: Jianyun Zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-20727) Pulsar SourceFunction

2021-01-15 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-20727:
-

Assignee: Jianyun Zhao

> Pulsar SourceFunction
> -
>
> Key: FLINK-20727
> URL: https://issues.apache.org/jira/browse/FLINK-20727
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Affects Versions: 1.13.0
>Reporter: Jianyun Zhao
>Assignee: Jianyun Zhao
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20726) Introduce Pulsar connector

2021-01-15 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-20726:
--
Fix Version/s: 1.13.0

> Introduce Pulsar connector
> --
>
> Key: FLINK-20726
> URL: https://issues.apache.org/jira/browse/FLINK-20726
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Affects Versions: 1.13.0
>Reporter: Jianyun Zhao
>Priority: Major
> Fix For: 1.13.0
>
>
> Pulsar is an important player in messaging middleware, and it is essential 
> for Flink to support Pulsar.
> Our existing code is maintained at 
> [streamnative/pulsar-flink|https://github.com/streamnative/pulsar-flink]; 
> next we will split it into several PRs to merge back into the community.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20727) Pulsar SourceFunction

2021-01-15 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-20727:
--
Fix Version/s: 1.13.0

> Pulsar SourceFunction
> -
>
> Key: FLINK-20727
> URL: https://issues.apache.org/jira/browse/FLINK-20727
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Affects Versions: 1.13.0
>Reporter: Jianyun Zhao
>Assignee: Jianyun Zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-20726) Introduce Pulsar connector

2021-01-15 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-20726:
-

Assignee: Jianyun Zhao

> Introduce Pulsar connector
> --
>
> Key: FLINK-20726
> URL: https://issues.apache.org/jira/browse/FLINK-20726
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Affects Versions: 1.13.0
>Reporter: Jianyun Zhao
>Assignee: Jianyun Zhao
>Priority: Major
> Fix For: 1.13.0
>
>
> Pulsar is an important player in messaging middleware, and it is essential 
> for Flink to support Pulsar.
> Our existing code is maintained at 
> [streamnative/pulsar-flink|https://github.com/streamnative/pulsar-flink]; 
> next we will split it into several PRs to merge back into the community.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20972) TwoPhaseCommitSinkFunction Output a large amount of EventData

2021-01-15 Thread huajiewang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265817#comment-17265817
 ] 

huajiewang edited comment on FLINK-20972 at 1/15/21, 8:39 AM:
--

[~gaoyunhaii]
My code is just an example. What you said makes sense, but in my opinion this 
is a notification message output by the Flink framework to tell the user 
which checkpoint completed. However, there are currently no requirements or 
restrictions on the Transaction type, so users can define it freely, and a 
little carelessness will cause this problem unless the user is very familiar 
with the processing logic of this code. To effectively avoid this problem, if 
the Flink output at checkpoint completion requires user participation, Flink 
could define an interface type (Transaction) and let users implement it. So I 
think this is an issue for Flink


was (Author: benjobs):
[~gaoyunhaii] What you said is reasonable, but in my opinion, this is a 
notification message output from the internal Flink framework, which is used to 
tell the user which batch of checkpoint completed. However, at present, there 
are no requirements and restrictions for the transaction type, so that the user 
can freely define it. A little carelessness will cause this problem, unless the 
user is very familiar with the processing logic of this code, In order to 
effectively avoid this problem, regarding the transaction class, if the user is 
required to participate in the output information when the checkpoint is 
completed, the Flink can completely define an interface type (Transaction) for 
the user to implement the interface. Therefore, I think this is the issue of 
Flink

> TwoPhaseCommitSinkFunction Output a large amount of EventData
> -
>
> Key: FLINK-20972
> URL: https://issues.apache.org/jira/browse/FLINK-20972
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.12.0
> Environment: flink 1.4.0 +
>Reporter: huajiewang
>Priority: Minor
>  Labels: easyfix, pull-request-available
> Attachments: 1610682498960.jpg, 1610682603148.jpg, 
> Jdbc2PCSinkFunction.scala
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> In TwoPhaseCommitSinkFunction, a large amount of event data may be written 
> to the log (log.info), which can cause an IO bottleneck and waste disk space.
>  
>  My code is in the attachment; a large amount of event data appears in the 
> log output by Flink, e.g.: 
> {code:java}
> Jdbc2PCSinkFunction 1/1 - checkpoint 4 complete, committing transaction 
> TransactionHolder{handle=Transaction(b420c880a951403984f231dd7e33597b, 
> ListBuffer(insert into table(field1,field2) value ('11','22') ... ... ), 
> transactionStartTime=1610426158532} from checkpoint 4{code}
> in TwoPhaseCommitSinkFunction about LOG.info code is as follows:
> !1610682498960.jpg|width=838,height=630!
> {code:java}
> LOG.info(
> "{} - checkpoint {} complete, committing transaction {} from 
> checkpoint {}",
> name(),
> checkpointId,
> pendingTransaction,
> pendingTransactionCheckpointId); {code}
> This invokes pendingTransaction's toString method (pendingTransaction is a 
> TransactionHolder instance). 
> TransactionHolder's toString method is:
> !1610682603148.jpg|width=859,height=327!
> {code:java}
> @Override
> public String toString() {
> return "TransactionHolder{"
> + "handle="
> +  handle
> + ", transactionStartTime="
> + transactionStartTime
> + '}';
> }{code}
>  The handle is my concrete Transaction implementation. My Transaction contains 
> a List-typed field used to collect data; as a result, all of that data is 
> printed out via log.info.
>   
>   
>  
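A minimal sketch of the usual workaround: keep the buffered data out of toString() and log only a summary. `JdbcTransaction` and its fields are illustrative names, not the reporter's attached code or Flink's classes.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a transaction handle whose toString() summarizes the
// buffered statements instead of dumping them, so TransactionHolder's
// "checkpoint complete" log line stays small.
public class JdbcTransaction {
    private final String transactionId;
    private final List<String> statements = new ArrayList<>();

    public JdbcTransaction(String transactionId) {
        this.transactionId = transactionId;
    }

    public void add(String sql) {
        statements.add(sql);
    }

    @Override
    public String toString() {
        // Log only the id and a count, never the buffered event data.
        return "JdbcTransaction{id=" + transactionId
                + ", bufferedStatements=" + statements.size() + "}";
    }

    public static void main(String[] args) {
        JdbcTransaction t = new JdbcTransaction("b420c880");
        t.add("insert into table(field1,field2) values ('11','22')");
        System.out.println(t); // prints JdbcTransaction{id=b420c880, bufferedStatements=1}
    }
}
```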



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xintongsong closed pull request #14661: [FLINK-20906][legal] Update copyright year to 2021 for NOTICE files.

2021-01-15 Thread GitBox


xintongsong closed pull request #14661:
URL: https://github.com/apache/flink/pull/14661


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xintongsong commented on pull request #14661: [FLINK-20906][legal] Update copyright year to 2021 for NOTICE files.

2021-01-15 Thread GitBox


xintongsong commented on pull request #14661:
URL: https://github.com/apache/flink/pull/14661#issuecomment-760755365


   Closed via 6e38231d62d3d623c59b1cbac1135cf3b03d4031







[jira] [Commented] (FLINK-20726) Introduce Pulsar connector

2021-01-15 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265821#comment-17265821
 ] 

Till Rohrmann commented on FLINK-20726:
---

Hi [~Jianyun Zhao], I've given you access rights to edit the FLIP. You can 
update it now.

> Introduce Pulsar connector
> --
>
> Key: FLINK-20726
> URL: https://issues.apache.org/jira/browse/FLINK-20726
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Affects Versions: 1.13.0
>Reporter: Jianyun Zhao
>Assignee: Jianyun Zhao
>Priority: Major
> Fix For: 1.13.0
>
>
> Pulsar is an important player in messaging middleware, and it is essential 
> for Flink to support Pulsar.
> Our existing code is maintained at 
> [streamnative/pulsar-flink|https://github.com/streamnative/pulsar-flink]; 
> next we will split it into several PRs and merge them back into the community.
>  





[GitHub] [flink] xiaoHoly commented on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions

2021-01-15 Thread GitBox


xiaoHoly commented on pull request #14616:
URL: https://github.com/apache/flink/pull/14616#issuecomment-760755775


   It looks like the build was successful.







[jira] [Closed] (FLINK-20906) Update copyright year to 2021 for NOTICE files

2021-01-15 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-20906.

Resolution: Fixed

Merged via
* master (1.13): 67d167ccd45046fc5ed222ac1f1e3ba5e6ec434b
* release-1.12: 87739e3788e0d420ec68cd065a975ad554e00dd3
* release-1.11: 8b7b17ba6a9f9a0443ef2b8a8fd824f682874fd5
* release-1.10: 6e38231d62d3d623c59b1cbac1135cf3b03d4031

Fixed for `flink-shaded` via
* master: 022d3f84965399189ba32c84ef196e8a2ab127ce

> Update copyright year to 2021 for NOTICE files
> --
>
> Key: FLINK-20906
> URL: https://issues.apache.org/jira/browse/FLINK-20906
> Project: Flink
>  Issue Type: Task
>  Components: Release System
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.11.4, 1.12.1, 1.11.3
>
>






[jira] [Updated] (FLINK-20906) Update copyright year to 2021 for NOTICE files

2021-01-15 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-20906:
-
Fix Version/s: (was: 1.11.3)
   1.10.3

> Update copyright year to 2021 for NOTICE files
> --
>
> Key: FLINK-20906
> URL: https://issues.apache.org/jira/browse/FLINK-20906
> Project: Flink
>  Issue Type: Task
>  Components: Release System
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.3, 1.13.0, 1.11.4, 1.12.1
>
>






[GitHub] [flink] Myasuka opened a new pull request #14662: [FLINK-20675][checkpointing] Ensure asynchronous checkpoint failure could fail the job by default

2021-01-15 Thread GitBox


Myasuka opened a new pull request #14662:
URL: https://github.com/apache/flink/pull/14662


   ## What is the purpose of the change
   
   Compared with https://github.com/apache/flink/pull/14656, this PR add 
another commit to refactor interfaces to decline checkpoint with 
`CheckpointException` instead of previous `Throwable`.
   
   Currently, no matter how many times the async phase of a checkpoint fails, the 
job will not trigger failover. This PR ensures the mechanism of failing the job by 
default when the async phase of a checkpoint fails, which was broken in FLINK-12364.
   
   ## Brief change log
   
  - Let `CHECKPOINT_ASYNC_EXCEPTION` also be treated as a reason to fail the job.
  - Ensure checkpoints are declined with a specific `CheckpointException` instead 
of a plain `Throwable`.
  - If the task is not running, never decline the checkpoint on the task side, to 
avoid another unexpected failover.
  - Refactor interfaces to decline checkpoints with `CheckpointException` 
instead of `Throwable`.
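The default behaviour this PR restores can be modelled with a toy failure counter. This is not Flink's real `CheckpointFailureManager`; the class and field names below are illustrative only.

```java
// Hypothetical model: a failure manager counts consecutive checkpoint
// failures -- with this change, async-phase failures included -- and fails
// the job once the tolerable number is exceeded (default tolerance: 0).
public class CheckpointFailureSketch {
    private final int tolerableFailures;
    int continuousFailures;
    boolean jobFailed;

    CheckpointFailureSketch(int tolerableFailures) {
        this.tolerableFailures = tolerableFailures;
    }

    void onCheckpointFailure() {
        // Async-phase exceptions now also reach this path.
        continuousFailures++;
        if (continuousFailures > tolerableFailures) {
            jobFailed = true;
        }
    }

    void onCheckpointSuccess() {
        // A successful checkpoint resets the consecutive-failure count.
        continuousFailures = 0;
    }

    public static void main(String[] args) {
        CheckpointFailureSketch m = new CheckpointFailureSketch(0);
        m.onCheckpointFailure();
        System.out.println(m.jobFailed); // prints true
    }
}
```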
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
  - Added 
`CheckpointFailureManagerITCase#testAsyncCheckpointFailureTriggerJobFailed` to 
ensure the job fails once an async checkpoint failure occurs.
 
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: **yes**
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   







[jira] [Updated] (FLINK-20986) GenericTypeInfo equality issue

2021-01-15 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-20986:
---
Component/s: (was: Runtime / REST)

> GenericTypeInfo equality issue
> --
>
> Key: FLINK-20986
> URL: https://issues.apache.org/jira/browse/FLINK-20986
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: Tianshi Zhu
>Priority: Major
>  Labels: pull-request-available
>
> When trying to use Flink REST api to run a job that uses Flink table api with 
> blink planner, we encountered an issue about `Incompatible types of 
> expression and result type.` from 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateResultExpression$1.apply(ExprCodeGenerator.scala:311).
>  This issue only happens after the first request has been handled 
> successfully.
>  
> After digging, we found that there are two static caches used inside calcite's
> RelDataTypeFactoryImpl (
> https://github.com/apache/calcite/blob/d9a81b88ad561e7e4cedae93e805e0d7a53a7f1a/core/src/main/java/org/apache/calcite/rel/type/RelDataTypeFactoryImpl.java#L352-L376
> ), which will remember the types they have seen. The `canonize` method is 
> called from FlinkTypeFactory 
> https://github.com/apache/flink/blob/89f9dcd70dc3a1433055e17775b2b2a2c796ca94/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/calcite/FlinkTypeFactory.scala#L292
>  
> This causes a problem for us because we have seen 
> GenericTypeInfo instances containing different Class objects in two different 
> REST requests that do not compare equal, because 
> [https://github.com/apache/flink/blob/89f9dcd70dc3a1433055e17775b2b2a2c796ca94/flink-core/src/main/java/org/apache/flink/api/java/typeutils/GenericTypeInfo.java#L124]
>  uses object equality. After `canonize`, the GenericTypeInfo for subsequent 
> REST requests is replaced by the GenericTypeInfo of the first REST request, 
> which is cached in RelDataTypeFactoryImpl. This leads to the incompatible 
> type error mentioned above.
>  
> I want to propose using class name for equality comparison inside 
> GenericTypeInfo, and change hashCode method accordingly.
>  
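The proposal above can be sketched as follows. This is a minimal model, not Flink's real `GenericTypeInfo` (which carries more state); it only shows equality by class name rather than by `Class` identity.

```java
// Hypothetical sketch of the proposed change: compare the wrapped classes
// by name instead of by Class identity, so instances loaded by different
// classloaders (e.g. across REST requests) still compare equal.
public class GenericTypeInfoSketch {
    private final Class<?> typeClass;

    public GenericTypeInfoSketch(Class<?> typeClass) {
        this.typeClass = typeClass;
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof GenericTypeInfoSketch)) {
            return false;
        }
        GenericTypeInfoSketch other = (GenericTypeInfoSketch) obj;
        // Class-name equality survives the class being re-loaded.
        return typeClass.getName().equals(other.typeClass.getName());
    }

    @Override
    public int hashCode() {
        // Must stay consistent with equals: hash the name, not the Class.
        return typeClass.getName().hashCode();
    }

    public static void main(String[] args) {
        GenericTypeInfoSketch a = new GenericTypeInfoSketch(String.class);
        GenericTypeInfoSketch b = new GenericTypeInfoSketch(String.class);
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // prints true
    }
}
```

One caveat of this design choice: two genuinely different classes that happen to share a fully qualified name (loaded from different jars) would now compare equal, which is usually acceptable for this use case but worth noting.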





[GitHub] [flink] flinkbot edited a comment on pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14616:
URL: https://github.com/apache/flink/pull/14616#issuecomment-758526377


   
   ## CI report:
   
   * 448c026a402e045e050f405daf934a8a7c880c9d UNKNOWN
   * 043d57501a1c07d476e7b9490faa066eb947a98f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12090)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14627: [FLINK-20946][python] Optimize Python ValueState Implementation In PyFlink

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14627:
URL: https://github.com/apache/flink/pull/14627#issuecomment-759272918


   
   ## CI report:
   
   * 293b8ca01400d911925460a29cc8d2bfb41a8522 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12093)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14637: [FLINK-20949][table-planner-blink] Separate the implementation of sink nodes

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14637:
URL: https://github.com/apache/flink/pull/14637#issuecomment-759980085


   
   ## CI report:
   
   * 640270670c3ce146147fa7f0fd0f7305d8aa Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12094)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14656: [FLINK-20675][checkpointing] Ensure asynchronous checkpoint failure could fail the job by default

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14656:
URL: https://github.com/apache/flink/pull/14656#issuecomment-760627740


   
   ## CI report:
   
   * 6cb6c9f2ed48920003bc7c51a785b154eed31ff7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12091)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14652: [FLINK-20953][canal][json] Option 'canal-json.database.include' and 'canal-json.table.include' support regular expression

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14652:
URL: https://github.com/apache/flink/pull/14652#issuecomment-760296852


   
   ## CI report:
   
   * 72601d907bc02eb3b74499a36f15f713406d911c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12088)
 
   * 307e51f7f0aec480cd035bee12764f8ec10be2d7 UNKNOWN
   * f3d72bd76a56f59b9c5cde33d4722f5f67ef815d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14660: [FLINK-20883][table-api-java] Support calling SQL expressions in Table API

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14660:
URL: https://github.com/apache/flink/pull/14660#issuecomment-760748649


   
   ## CI report:
   
   * 08a9fff38a4698b216fbc9cd5c25c7875db0e827 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12104)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #14662: [FLINK-20675][checkpointing] Ensure asynchronous checkpoint failure could fail the job by default

2021-01-15 Thread GitBox


flinkbot commented on pull request #14662:
URL: https://github.com/apache/flink/pull/14662#issuecomment-760759671


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 83d11e2d31f495d56a0566c95f5d1045ae5ef0c3 (Fri Jan 15 
08:49:47 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Commented] (FLINK-20959) How to close Apache Flink REST API

2021-01-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265826#comment-17265826
 ] 

Robert Metzger commented on FLINK-20959:


I believe Chesnay posted the wrong Jira ID. I guess he meant:  
https://issues.apache.org/jira/browse/FLINK-20875

What we generally recommend is securing access to the REST API: restrict who 
can access it, since not everyone in a company should be allowed to access the 
REST API.
You could, for example, run Flink in a cluster that is in a (virtual) private 
network where only a few people have access, or set up a firewall restricting 
access to Flink's ports.

If you need to control who can access Flink, you can run Flink's REST API 
behind a reverse proxy (for example nginx).

> How to close Apache Flink REST API
> --
>
> Key: FLINK-20959
> URL: https://issues.apache.org/jira/browse/FLINK-20959
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.10.2
>Reporter: wuchangwen
>Priority: Major
> Fix For: 1.10.2
>
>
> Apache Flink 1.10.2 has  CVE-2020-17518 vulnerability in the REST API. Now 
> that I want to turn off the REST API service, how should I set up the 
> configuration file?





[jira] [Updated] (FLINK-16478) add restApi to modify loglevel

2021-01-15 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-16478:
--
Priority: Major  (was: Minor)

> add restApi to modify loglevel 
> ---
>
> Key: FLINK-16478
> URL: https://issues.apache.org/jira/browse/FLINK-16478
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: xiaodao
>Priority: Major
>
> Sometimes we need to change the log level to get more information to resolve 
> a bug. Currently, we have to stop the job, modify conf/log4j.properties, and 
> resubmit it. I think it would be better to add a REST API to modify the log level.





[jira] [Commented] (FLINK-20944) Launching in application mode requesting a ClusterIP rest service type results in an Exception

2021-01-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265831#comment-17265831
 ] 

Robert Metzger commented on FLINK-20944:


[~fly_in_gis] are you taking care of this issue?

> Launching in application mode requesting a ClusterIP rest service type 
> results in an Exception
> --
>
> Key: FLINK-20944
> URL: https://issues.apache.org/jira/browse/FLINK-20944
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
> Environment: Kubernetes 1.17
> Flink 1.12
> Running ./bin/flink from an Ubuntu 18.04 host
>Reporter: Jason Brome
>Priority: Critical
> Fix For: 1.13.0, 1.12.2
>
>
> Run a Flink job in Kubernetes in application mode, specifying 
> kubernetes.rest-service.exposed.type=ClusterIP, results in the job being 
> started, however the call to ./bin/flink throws an UnknownHostException 
> Exception on the client.
> Command line:
> {{./bin/flink run-application --target kubernetes-application 
> -Dkubernetes.cluster-id=myjob-qa 
> -Dkubernetes.container.image=_SOME_REDACTED_PATH/somrepo/someimage_ 
> -Dkubernetes.service-account=flink-service-account 
> -Dkubernetes.namespace=myjob-qa 
> -Dkubernetes.rest-service.exposed.type=ClusterIP local:///opt/flink}}
> {{/usrlib/my-job.jar}}
> Output:
> 2021-01-12 20:29:19,047 INFO 
> org.apache.flink.kubernetes.utils.KubernetesUtils [] - Kubernetes deployment 
> requires a fixed port. Configuration blob.server.port will be set to 6124
> 2021-01-12 20:29:19,048 INFO 
> org.apache.flink.kubernetes.utils.KubernetesUtils [] - Kubernetes deployment 
> requires a fixed port. Configuration taskmanager.rpc.port will be set to 6122
> 2021-01-12 20:29:20,369 ERROR 
> org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient [] - A 
> Kubernetes exception occurred.
> java.net.UnknownHostException: myjob-qa-rest.myjob-qa: Name or service not 
> known
>  at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[?:1.8.0_275]
>  at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) 
> ~[?:1.8.0_275]
>  at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) 
> ~[?:1.8.0_275]
>  at java.net.InetAddress.getAllByName0(InetAddress.java:1277) ~[?:1.8.0_275]
>  at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[?:1.8.0_275]
>  at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[?:1.8.0_275]
>  at java.net.InetAddress.getByName(InetAddress.java:1077) ~[?:1.8.0_275]
>  at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.getWebMonitorAddress(HighAvailabilityServicesUtils.java:193)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>  at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.lambda$createClusterClientProvider$0(KubernetesClusterDescriptor.java:114)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>  at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:185)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>  at 
> org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:64)
>  ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>  at 
> org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:207) 
> ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>  at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:974) 
> ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1047) 
> ~[flink-dist_2.12-1.12.0.jar:1.12.0]
>  at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
>  [flink-dist_2.12-1.12.0.jar:1.12.0]
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1047) 
> [flink-dist_2.12-1.12.0.jar:1.12.0]
> 
>  The program finished with the following exception:
> java.lang.RuntimeException: 
> org.apache.flink.client.deployment.ClusterRetrieveException: Could not create 
> the RestClusterClient.
>  at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.lambda$createClusterClientProvider$0(KubernetesClusterDescriptor.java:118)
>  at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:185)
>  at 
> org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:64)
>  at 
> org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:207)
>  at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:974)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1047)
>  at 
> 

[GitHub] [flink] wenlong88 opened a new pull request #14663: [FLINK-20925][table-planner-blink] Separate implementation of StreamExecLookupJoin and BatchExecLookupJoin

2021-01-15 Thread GitBox


wenlong88 opened a new pull request #14663:
URL: https://github.com/apache/flink/pull/14663


   ## What is the purpose of the change
   Separate the implementation of StreamExecLookupJoin and BatchExecLookupJoin.
   
   
   ## Brief change log
   Introduce StreamPhysicalLookupJoin and BatchPhysicalLookupJoin, and make 
their exec nodes extend only ExecNode.
   
   
   ## Verifying this change
   
   This change is a refactoring rework covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   







[jira] [Updated] (FLINK-20925) Separate the implementation of StreamExecLookup and BatchExecLookup

2021-01-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20925:
---
Labels: pull-request-available  (was: )

> Separate the implementation of StreamExecLookup and BatchExecLookup
> ---
>
> Key: FLINK-20925
> URL: https://issues.apache.org/jira/browse/FLINK-20925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>






[GitHub] [flink] wuchong merged pull request #14616: [FLINK-20879][table-api] Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions

2021-01-15 Thread GitBox


wuchong merged pull request #14616:
URL: https://github.com/apache/flink/pull/14616


   







[jira] [Closed] (FLINK-20879) Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions

2021-01-15 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-20879.
---
Resolution: Fixed

Fixed in master: bfb8d6866f5a19a12a1a7e46ff0fe615110a7fd6

> Use MemorySize type instead of String type for memory ConfigOption in 
> ExecutionConfigOptions
> 
>
> Key: FLINK-20879
> URL: https://issues.apache.org/jira/browse/FLINK-20879
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: godfrey he
>Assignee: jiawen xiao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> Currently, There are memory ConfigOptions in ExecutionConfigOptions such as 
> {{table.exec.resource.external-buffer-memory}}, 
> {{table.exec.resource.hash-agg.memory}}. They are all {{String}} type now. 
> While when we need to get the memory size value, the String value should be 
> converted to {{MemorySize}} type and then getting bytes value. Code likes:
> {code:java}
> val memoryBytes = MemorySize.parse(config.getConfiguration.getString(
>   ExecutionConfigOptions.TABLE_EXEC_RESOURCE_HASH_AGG_MEMORY)).getBytes
> {code}
> The above code can be simplified if we change the {{ConfigOption}} type from 
> {{String}} to {{MemorySize}} type. Many runtime {{ConfigOption}} s also use 
> {{MemorySize}} type to define memory config. So I suggest we use 
> {{MemorySize}} type instead of {{String}} type for memory {{ConfigOption}} in 
> {{ExecutionConfigOptions}}.
> Note: this is an incompatible change.
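The simplification described above can be modelled with a toy stand-in. `MemorySizeModel` below is not Flink's real `MemorySize` (which supports many more units); it only illustrates why a typed option removes the per-call-site parse step.

```java
// Hypothetical model: with a ConfigOption<MemorySize>, the caller receives
// an already-parsed value, so code like
//   MemorySize.parse(config.getString(option)).getBytes()
// collapses to config.get(option).getBytes(). This toy parser handles only
// the "<n>kb"/"<n>mb"/plain-bytes forms for illustration.
public class MemorySizeModel {
    private final long bytes;

    private MemorySizeModel(long bytes) {
        this.bytes = bytes;
    }

    public static MemorySizeModel parse(String text) {
        String t = text.trim().toLowerCase();
        if (t.endsWith("mb")) {
            return new MemorySizeModel(
                    Long.parseLong(t.substring(0, t.length() - 2).trim()) << 20);
        } else if (t.endsWith("kb")) {
            return new MemorySizeModel(
                    Long.parseLong(t.substring(0, t.length() - 2).trim()) << 10);
        }
        return new MemorySizeModel(Long.parseLong(t));
    }

    public long getBytes() {
        return bytes;
    }

    public static void main(String[] args) {
        // Before the change: every call site parses the String itself.
        long bytes = MemorySizeModel.parse("128 mb").getBytes();
        System.out.println(bytes); // prints 134217728
    }
}
```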





[GitHub] [flink] flinkbot commented on pull request #14663: [FLINK-20925][table-planner-blink] Separate implementation of StreamExecLookupJoin and BatchExecLookupJoin

2021-01-15 Thread GitBox


flinkbot commented on pull request #14663:
URL: https://github.com/apache/flink/pull/14663#issuecomment-760764364


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1bac0fae97e00f2c2f5b2d9b0ad7f1ae97845804 (Fri Jan 15 
08:59:49 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lingya commented on a change in pull request #14271: [FLINK-12130][clients] Apply command line options to configuration be…

2021-01-15 Thread GitBox


lingya commented on a change in pull request #14271:
URL: https://github.com/apache/flink/pull/14271#discussion_r558080030



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java
##
@@ -630,66 +715,75 @@ public CommandLine getCommandLine(final Options 
commandOptions, final String[] a
}
 
/**
-* Executes the SAVEPOINT action.
-*
-* @param args Command line arguments for the savepoint action.
+* The 'savepoint' action.
 */
-   protected void savepoint(String[] args) throws Exception {
-   LOG.info("Running 'savepoint' command.");
+   protected class ActionSavepoint implements 
CommandAction {
 
-   final Options commandOptions = 
CliFrontendParser.getSavepointCommandOptions();
+   @Override
+   public SavepointOptions parse(String[] args) throws 
CliArgsException {
+   final Options commandOptions = 
CliFrontendParser.getSavepointCommandOptions();
 
-   final Options commandLineOptions = 
CliFrontendParser.mergeOptions(commandOptions, customCommandLineOptions);
+   final Options commandLineOptions = 
CliFrontendParser.mergeOptions(commandOptions, customCommandLineOptions);
 
-   final CommandLine commandLine = 
CliFrontendParser.parse(commandLineOptions, args, false);
+   final CommandLine commandLine = 
CliFrontendParser.parse(commandLineOptions, args, false);
 
-   final SavepointOptions savepointOptions = new 
SavepointOptions(commandLine);
+   return new SavepointOptions(commandLine);
+   }
 
-   // evaluate help flag
-   if (savepointOptions.isPrintHelp()) {
-   
CliFrontendParser.printHelpForSavepoint(customCommandLines);
-   return;
+   @Override
+   public Configuration getConfiguration(
+   SavepointOptions options,
+   CustomCommandLine activeCommandLine) throws Exception {
+   return getEffectiveConfiguration(activeCommandLine, 
options.getCommandLine());
}
 
-   final CustomCommandLine activeCommandLine = 
validateAndGetActiveCommandLine(commandLine);
+   @Override
+   public int runAction(
+   SavepointOptions options,
+   Configuration configuration) throws Exception {
+   LOG.info("Running 'savepoint' command.");
 
-   if (savepointOptions.isDispose()) {
-   runClusterAction(
-   activeCommandLine,
-   commandLine,
-   clusterClient -> 
disposeSavepoint(clusterClient, savepointOptions.getSavepointPath()));
-   } else {
-   String[] cleanedArgs = savepointOptions.getArgs();
+   if (options.isDispose()) {
+   runClusterAction(
+   configuration,
+   clusterClient -> 
disposeSavepoint(clusterClient, options.getSavepointPath()));
+   } else {

Review comment:
   @jiasheng55 @V1ncentzzZ Any progress on this?









[GitHub] [flink] flinkbot edited a comment on pull request #14652: [FLINK-20953][canal][json] Option 'canal-json.database.include' and 'canal-json.table.include' support regular expression

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14652:
URL: https://github.com/apache/flink/pull/14652#issuecomment-760296852


   
   ## CI report:
   
   * 72601d907bc02eb3b74499a36f15f713406d911c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12088)
 
   * 307e51f7f0aec480cd035bee12764f8ec10be2d7 UNKNOWN
   * f3d72bd76a56f59b9c5cde33d4722f5f67ef815d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12103)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #14662: [FLINK-20675][checkpointing] Ensure asynchronous checkpoint failure could fail the job by default

2021-01-15 Thread GitBox


flinkbot commented on pull request #14662:
URL: https://github.com/apache/flink/pull/14662#issuecomment-760769285


   
   ## CI report:
   
   * 83d11e2d31f495d56a0566c95f5d1045ae5ef0c3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] godfreyhe opened a new pull request #14664: [FLINK-20989][table-planner-blink] Functions in ExplodeFunctionUtil should handle null data to avoid NPE

2021-01-15 Thread GitBox


godfreyhe opened a new pull request #14664:
URL: https://github.com/apache/flink/pull/14664


   
   ## What is the purpose of the change
   
   *The following test case will encounter NPE:
   
   val t = tEnv.fromValues(
 DataTypes.ROW(
   DataTypes.FIELD("a", DataTypes.INT()),
   DataTypes.FIELD("b", DataTypes.ARRAY(DataTypes.STRING()))
 ),
 row(1, Array("aa", "bb", "cc")),
 row(2, null),
 row(3, Array("dd"))
   )
   tEnv.registerTable("T", t)
   tEnv.executeSql("SELECT a, s FROM T, UNNEST(T.b) as A (s)").print()
   
   The reason is functions in ExplodeFunctionUtil do not handle null data, this 
pr aims to fix the bug.*
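   The null guard the PR describes can be sketched as follows. This is an 
illustrative stand-in, not the actual `ExplodeFunctionUtil` code: the class and 
method names (`NullSafeExplode`, `explode`) are made up, but the behavior shown 
is the one the fix needs, i.e. a null array must produce no rows instead of 
being dereferenced:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an explode-style function with the null guard
// described above; names are illustrative, not Flink's actual API.
public class NullSafeExplode {

    // Returns one output row per array element; a null array yields no rows.
    static List<String> explode(String[] values) {
        List<String> rows = new ArrayList<>();
        if (values == null) {
            return rows; // guard: emit nothing instead of throwing NPE
        }
        for (String v : values) {
            rows.add(v);
        }
        return rows;
    }

    public static void main(String[] args) {
        System.out.println(explode(new String[] {"aa", "bb", "cc"}).size()); // 3
        System.out.println(explode(null).size());                            // 0
    }
}
```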
   
   
   ## Brief change log
   
 - *Handle null data in ExplodeFunctionUtil*
   
   
   ## Verifying this change
   
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added ExplodeFunctionUtilTest to verify each function*
 - *Extended UnnestITCase to verify the bug via e2e*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   







[GitHub] [flink] flinkbot commented on pull request #14663: [FLINK-20925][table-planner-blink] Separate implemention of StreamExecLookupJoin and BatchExecLookupJoin

2021-01-15 Thread GitBox


flinkbot commented on pull request #14663:
URL: https://github.com/apache/flink/pull/14663#issuecomment-760769392


   
   ## CI report:
   
   * 1bac0fae97e00f2c2f5b2d9b0ad7f1ae97845804 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-20989) Functions in ExplodeFunctionUtil should handle null data to avoid NPE

2021-01-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20989:
---
Labels: pull-request-available  (was: )

> Functions in ExplodeFunctionUtil should handle null data to avoid NPE
> -
>
> Key: FLINK-20989
> URL: https://issues.apache.org/jira/browse/FLINK-20989
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.3
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.4
>
>
> The following test case will encounter NPE:
> {code:scala}
> val t = tEnv.fromValues(
>   DataTypes.ROW(
> DataTypes.FIELD("a", DataTypes.INT()),
> DataTypes.FIELD("b", DataTypes.ARRAY(DataTypes.STRING()))
>   ),
>   row(1, Array("aa", "bb", "cc")),
>   row(2, null),
>   row(3, Array("dd"))
> )
> tEnv.registerTable("T", t)
> tEnv.executeSql("SELECT a, s FROM T, UNNEST(T.b) as A (s)").print()
> {code}
> Exception is 
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:192)
>   at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:192)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> org.apache.flink.table.planner.plan.utils.ObjectExplodeTableFunc.eval(ExplodeFunctionUtil.scala:34)
> {code}
> The reason is functions in ExplodeFunctionUtil do not handle null data





[jira] [Updated] (FLINK-20989) Functions in ExplodeFunctionUtil should handle null data to avoid NPE

2021-01-15 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-20989:
---
Description: 
The following test case will encounter NPE:

{code:scala}
val t = tEnv.fromValues(
  DataTypes.ROW(
DataTypes.FIELD("a", DataTypes.INT()),
DataTypes.FIELD("b", DataTypes.ARRAY(DataTypes.STRING()))
  ),
  row(1, Array("aa", "bb", "cc")),
  row(2, null),
  row(3, Array("dd"))
)
tEnv.registerTable("T", t)

tEnv.executeSql("SELECT a, s FROM T, UNNEST(T.b) as A (s)").print()
{code}

Exception is 

{code:java}
Caused by: java.lang.NullPointerException
at 
scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:192)
at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:192)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
org.apache.flink.table.planner.plan.utils.ObjectExplodeTableFunc.eval(ExplodeFunctionUtil.scala:34)
{code}

The reason is functions in ExplodeFunctionUtil do not handle null data. Since 
1.12, the bug is fixed.


  was:
The following test case will encounter NPE:

{code:scala}
val t = tEnv.fromValues(
  DataTypes.ROW(
DataTypes.FIELD("a", DataTypes.INT()),
DataTypes.FIELD("b", DataTypes.ARRAY(DataTypes.STRING()))
  ),
  row(1, Array("aa", "bb", "cc")),
  row(2, null),
  row(3, Array("dd"))
)
tEnv.registerTable("T", t)

tEnv.executeSql("SELECT a, s FROM T, UNNEST(T.b) as A (s)").print()
{code}

Exception is 

{code:java}
Caused by: java.lang.NullPointerException
at 
scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:192)
at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:192)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
org.apache.flink.table.planner.plan.utils.ObjectExplodeTableFunc.eval(ExplodeFunctionUtil.scala:34)
{code}

The reason is functions in ExplodeFunctionUtil do not handle null data



> Functions in ExplodeFunctionUtil should handle null data to avoid NPE
> -
>
> Key: FLINK-20989
> URL: https://issues.apache.org/jira/browse/FLINK-20989
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.3
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.4
>
>
> The following test case will encounter NPE:
> {code:scala}
> val t = tEnv.fromValues(
>   DataTypes.ROW(
> DataTypes.FIELD("a", DataTypes.INT()),
> DataTypes.FIELD("b", DataTypes.ARRAY(DataTypes.STRING()))
>   ),
>   row(1, Array("aa", "bb", "cc")),
>   row(2, null),
>   row(3, Array("dd"))
> )
> tEnv.registerTable("T", t)
> tEnv.executeSql("SELECT a, s FROM T, UNNEST(T.b) as A (s)").print()
> {code}
> Exception is 
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:192)
>   at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:192)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> org.apache.flink.table.planner.plan.utils.ObjectExplodeTableFunc.eval(ExplodeFunctionUtil.scala:34)
> {code}
> The reason is functions in ExplodeFunctionUtil do not handle null data. Since 
> 1.12, the bug is fixed.





[GitHub] [flink] rmetzger merged pull request #14650: [FLINK-20927][yarn] Update configuration option in YarnConfigOptions class

2021-01-15 Thread GitBox


rmetzger merged pull request #14650:
URL: https://github.com/apache/flink/pull/14650


   







[GitHub] [flink] flinkbot commented on pull request #14664: [FLINK-20989][table-planner-blink] Functions in ExplodeFunctionUtil should handle null data to avoid NPE

2021-01-15 Thread GitBox


flinkbot commented on pull request #14664:
URL: https://github.com/apache/flink/pull/14664#issuecomment-760770600


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit ebfe3612fb61d00749d4be184c7362beb52c15f0 (Fri Jan 15 
09:12:10 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Commented] (FLINK-20927) Update configuration option in YarnConfigOptions class

2021-01-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265847#comment-17265847
 ] 

Robert Metzger commented on FLINK-20927:


Merged to master in 
https://github.com/apache/flink/commit/49f8af05cb1abe236aa96dff91d511ccdeed3673

> Update configuration option in YarnConfigOptions class
> --
>
> Key: FLINK-20927
> URL: https://issues.apache.org/jira/browse/FLINK-20927
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Ruguo Yu
>Assignee: Ruguo Yu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.13.0
>
> Attachments: image-2021-01-11-18-36-20-723.png, 
> image-2021-01-11-18-36-44-811.png
>
>
> There are many configuration options in the _YarnConfigOptions_ class that are 
> built using a not-recommended method, mainly the following:
> 1. Use the deprecated method 'defaultValue' directly instead of specifying 
> the data type.
> !image-2021-01-11-18-36-20-723.png|width=842,height=90!
> 2. Use String instead of Enum types.
> !image-2021-01-11-18-36-44-811.png|width=1128,height=139!
>  
> Therefore, for the above problem, we can update it with the new API, which 
> can also be consistent with KubernetesConfigOptions.
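The typed-builder pattern the issue recommends can be imitated in a 
self-contained way. The sketch below is not Flink's actual `ConfigOptions` API; 
it is a minimal stand-in showing why declaring the value type first 
(`stringType()`/`enumType()`) is preferable to an untyped `defaultValue(...)`: 
the compiler pins the option's type before a default is supplied:

```java
// Minimal imitation of a typed option builder (illustrative only, not
// Flink's ConfigOptions): the type is fixed by stringType()/enumType()
// before defaultValue(...) can be called.
public class TypedOptions {

    public static final class KeyBuilder {
        private final String key;
        KeyBuilder(String key) { this.key = key; }
        public TypedBuilder<String> stringType() { return new TypedBuilder<>(key); }
        public <E extends Enum<E>> TypedBuilder<E> enumType(Class<E> cls) {
            return new TypedBuilder<>(key);
        }
    }

    public static final class TypedBuilder<T> {
        private final String key;
        TypedBuilder(String key) { this.key = key; }
        public Option<T> defaultValue(T value) { return new Option<>(key, value); }
    }

    public static final class Option<T> {
        public final String key;
        public final T defaultValue;
        Option(String key, T defaultValue) {
            this.key = key;
            this.defaultValue = defaultValue;
        }
    }

    public static KeyBuilder key(String key) { return new KeyBuilder(key); }

    public enum Mode { EAGER, LAZY }

    public static void main(String[] args) {
        Option<String> s = key("example.name").stringType().defaultValue("fallback");
        Option<Mode> m = key("example.mode").enumType(Mode.class).defaultValue(Mode.EAGER);
        System.out.println(s.key + "=" + s.defaultValue + ", " + m.key + "=" + m.defaultValue);
    }
}
```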





[jira] [Updated] (FLINK-20989) Functions in ExplodeFunctionUtil should handle null data to avoid NPE

2021-01-15 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-20989:
---
Description: 
The following test case will encounter NPE:

{code:scala}
val t = tEnv.fromValues(
  DataTypes.ROW(
DataTypes.FIELD("a", DataTypes.INT()),
DataTypes.FIELD("b", DataTypes.ARRAY(DataTypes.STRING()))
  ),
  row(1, Array("aa", "bb", "cc")),
  row(2, null),
  row(3, Array("dd"))
)
tEnv.registerTable("T", t)

tEnv.executeSql("SELECT a, s FROM T, UNNEST(T.b) as A (s)").print()
{code}

Exception is 

{code:java}
Caused by: java.lang.NullPointerException
at 
scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:192)
at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:192)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
org.apache.flink.table.planner.plan.utils.ObjectExplodeTableFunc.eval(ExplodeFunctionUtil.scala:34)
{code}

The reason is functions in ExplodeFunctionUtil do not handle null data. Since 
1.12, the bug is fixed, see https://issues.apache.org/jira/browse/FLINK-18528


  was:
The following test case will encounter NPE:

{code:scala}
val t = tEnv.fromValues(
  DataTypes.ROW(
DataTypes.FIELD("a", DataTypes.INT()),
DataTypes.FIELD("b", DataTypes.ARRAY(DataTypes.STRING()))
  ),
  row(1, Array("aa", "bb", "cc")),
  row(2, null),
  row(3, Array("dd"))
)
tEnv.registerTable("T", t)

tEnv.executeSql("SELECT a, s FROM T, UNNEST(T.b) as A (s)").print()
{code}

Exception is 

{code:java}
Caused by: java.lang.NullPointerException
at 
scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:192)
at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:192)
at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at 
org.apache.flink.table.planner.plan.utils.ObjectExplodeTableFunc.eval(ExplodeFunctionUtil.scala:34)
{code}

The reason is functions in ExplodeFunctionUtil do not handle null data. Since 
1.12, the bug is fixed.



> Functions in ExplodeFunctionUtil should handle null data to avoid NPE
> -
>
> Key: FLINK-20989
> URL: https://issues.apache.org/jira/browse/FLINK-20989
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.3
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.4
>
>
> The following test case will encounter NPE:
> {code:scala}
> val t = tEnv.fromValues(
>   DataTypes.ROW(
> DataTypes.FIELD("a", DataTypes.INT()),
> DataTypes.FIELD("b", DataTypes.ARRAY(DataTypes.STRING()))
>   ),
>   row(1, Array("aa", "bb", "cc")),
>   row(2, null),
>   row(3, Array("dd"))
> )
> tEnv.registerTable("T", t)
> tEnv.executeSql("SELECT a, s FROM T, UNNEST(T.b) as A (s)").print()
> {code}
> Exception is 
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:192)
>   at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:192)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> org.apache.flink.table.planner.plan.utils.ObjectExplodeTableFunc.eval(ExplodeFunctionUtil.scala:34)
> {code}
> The reason is functions in ExplodeFunctionUtil do not handle null data. Since 
> 1.12, the bug is fixed, see https://issues.apache.org/jira/browse/FLINK-18528





[jira] [Closed] (FLINK-20927) Update configuration option in YarnConfigOptions class

2021-01-15 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-20927.
--
Resolution: Fixed

> Update configuration option in YarnConfigOptions class
> --
>
> Key: FLINK-20927
> URL: https://issues.apache.org/jira/browse/FLINK-20927
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Ruguo Yu
>Assignee: Ruguo Yu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.13.0
>
> Attachments: image-2021-01-11-18-36-20-723.png, 
> image-2021-01-11-18-36-44-811.png
>
>
> There are many configuration options in the _YarnConfigOptions_ class that are 
> built using a not-recommended method, mainly the following:
> 1. Use the deprecated method 'defaultValue' directly instead of specifying 
> the data type.
> !image-2021-01-11-18-36-20-723.png|width=842,height=90!
> 2. Use String instead of Enum types.
> !image-2021-01-11-18-36-44-811.png|width=1128,height=139!
>  
> Therefore, for the above problem, we can update it with the new API, which 
> can also be consistent with KubernetesConfigOptions.





[GitHub] [flink] godfreyhe commented on pull request #14664: [FLINK-20989][table-planner-blink] Functions in ExplodeFunctionUtil should handle null data to avoid NPE

2021-01-15 Thread GitBox


godfreyhe commented on pull request #14664:
URL: https://github.com/apache/flink/pull/14664#issuecomment-760770939


   cc @wuchong 







[jira] [Commented] (FLINK-6042) Display last n exceptions/causes for job restarts in Web UI

2021-01-15 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17265851#comment-17265851
 ] 

Till Rohrmann commented on FLINK-6042:
--

Any signature change to {{AccessExecutionGraph}} needed to let 
{{ArchivedExecutionGraph}} return more {{ErrorInfos}} would also affect the 
{{ExecutionGraph}}. Hence, it might be easier to introduce a {{JobInformation}} 
object which is sent to the REST handlers. This object could contain the 
{{AccessExecutionGraph}} and additional information (e.g. the exception 
history).

Collecting and grouping the exceptions might not be trivial. Keep in mind 
that not every recovery is a global recovery and there can potentially be 
concurrent recovery actions happening if we are running an embarrassingly 
parallel job. Conceptually, I guess that one should start collecting the 
exceptions whenever a failure occurs. This will then either trigger a recovery 
or failing the job. If it is a recovery, then one would collect all exceptions 
for the set of tasks to be restarted until one has restarted this set. If the 
failure leads to a job failure, then we cancel the job and collect all 
exceptions until the job has been canceled.

Since there can be concurrent recoveries happening, I am not sure whether the 
{{JobStatusListener}} is sufficient because you wouldn't notice the second 
recovery operation if you are already in the {{JobStatus.RESTARTING}}.
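The "last n exceptions" retention behavior discussed above (similar in spirit 
to Flink's {{EvictingBoundedList}}) can be sketched with a small bounded 
structure. This is an illustrative stand-in, not the actual Flink class; the 
name and the use of plain strings for failure causes are assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative bounded history: keeps only the most recent n entries,
// evicting the oldest one when capacity is reached. Not Flink's
// EvictingBoundedList, just a minimal sketch of the retention idea.
public class BoundedExceptionHistory {

    private final int capacity;
    private final Deque<String> entries = new ArrayDeque<>();

    public BoundedExceptionHistory(int capacity) {
        this.capacity = capacity;
    }

    // Record a failure cause, evicting the oldest entry once full.
    public void add(String cause) {
        if (entries.size() == capacity) {
            entries.removeFirst();
        }
        entries.addLast(cause);
    }

    public int size() { return entries.size(); }

    public String oldest() { return entries.peekFirst(); }
}
```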

> Display last n exceptions/causes for job restarts in Web UI
> ---
>
> Key: FLINK-6042
> URL: https://issues.apache.org/jira/browse/FLINK-6042
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination, Runtime / Web Frontend
>Affects Versions: 1.3.0
>Reporter: Till Rohrmann
>Assignee: Matthias
>Priority: Major
>
> Users requested that it would be nice to see the last {{n}} exceptions 
> causing a job restart in the Web UI. This will help to more easily debug and 
> operate a job.
> We could store the root causes for failures similar to how prior executions 
> are stored in the {{ExecutionVertex}} using the {{EvictingBoundedList}} and 
> then serve this information via the Web UI.





[GitHub] [flink] xiaoHoly commented on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode, and disable for

2021-01-15 Thread GitBox


xiaoHoly commented on pull request #14531:
URL: https://github.com/apache/flink/pull/14531#issuecomment-760774148


   Hi @becketqin, I have completed the addition of the unit test; please take a 
look when you have a spare cycle.







[jira] [Assigned] (FLINK-20659) YARNSessionCapacitySchedulerITCase.perJobYarnClusterOffHeap test failed with NPE

2021-01-15 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias reassigned FLINK-20659:


Assignee: Matthias

> YARNSessionCapacitySchedulerITCase.perJobYarnClusterOffHeap test failed with 
> NPE
> 
>
> Key: FLINK-20659
> URL: https://issues.apache.org/jira/browse/FLINK-20659
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.11.0, 1.12.0, 1.13.0
>Reporter: Huang Xingbo
>Assignee: Matthias
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10989=logs=fc5181b0-e452-5c8f-68de-1097947f6483=6b04ca5f-0b52-511d-19c9-52bf0d9fbdfa]
> {code:java}
> 2020-12-17T22:57:58.1994352Z Test 
> perJobYarnClusterOffHeap(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase)
>  failed with:
> 2020-12-17T22:57:58.1994893Z java.lang.NullPointerException: 
> java.lang.NullPointerException
> 2020-12-17T22:57:58.1995439Z  at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:128)
> 2020-12-17T22:57:58.1996185Z  at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getApplicationResourceUsageReport(RMAppAttemptImpl.java:900)
> 2020-12-17T22:57:58.1996919Z  at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.createAndGetApplicationReport(RMAppImpl.java:660)
> 2020-12-17T22:57:58.1997526Z  at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:930)
> 2020-12-17T22:57:58.1998193Z  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:273)
> 2020-12-17T22:57:58.1998960Z  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:507)
> 2020-12-17T22:57:58.1999876Z  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> 2020-12-17T22:57:58.2000346Z  at 
> org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> 2020-12-17T22:57:58.2000744Z  at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
> 2020-12-17T22:57:58.2001532Z  at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
> 2020-12-17T22:57:58.2001915Z  at 
> java.security.AccessController.doPrivileged(Native Method)
> 2020-12-17T22:57:58.2002286Z  at 
> javax.security.auth.Subject.doAs(Subject.java:422)
> 2020-12-17T22:57:58.2002734Z  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> 2020-12-17T22:57:58.2003185Z  at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)
> 2020-12-17T22:57:58.2003447Z 
> 2020-12-17T22:57:58.2003708Z  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 2020-12-17T22:57:58.2004233Z  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> 2020-12-17T22:57:58.2004810Z  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 2020-12-17T22:57:58.2005468Z  at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> 2020-12-17T22:57:58.2005907Z  at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
> 2020-12-17T22:57:58.2006387Z  at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
> 2020-12-17T22:57:58.2006920Z  at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
> 2020-12-17T22:57:58.2007515Z  at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplications(ApplicationClientProtocolPBClientImpl.java:291)
> 2020-12-17T22:57:58.2008082Z  at 
> sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
> 2020-12-17T22:57:58.2008518Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-12-17T22:57:58.2008964Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-12-17T22:57:58.2009430Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
> 2020-12-17T22:57:58.2010002Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
> 2020-12-17T22:57:58.2010554Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
> 2020-12-17T22:57:58.2011301Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> 2020-12-17T22:57:58.2011857Z  at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #14530: [FLINK-20348][kafka] Make "schema-registry.subject" optional for Kafka sink with avro-confluent format

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14530:
URL: https://github.com/apache/flink/pull/14530#issuecomment-752828495


   
   ## CI report:
   
   * eaa0445691254f544c231ec0d2d55519af277b33 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12025)
   * fc88c9bf8bb2cb2c92695441dffcc1be5a21b1c1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" should be enabled by default for unbounded mode, and disab

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14531:
URL: https://github.com/apache/flink/pull/14531#issuecomment-752828536


   
   ## CI report:
   
   * 638cd231fef2a3a5b866cf1d9e07884ead445c07 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12014)
   * 18ccb0196ac6025aff2aab2102ee6f0403e0d921 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14621: [FLINK-20933][python] Config Python Operator Use Managed Memory In Python DataStream

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14621:
URL: https://github.com/apache/flink/pull/14621#issuecomment-75873


   
   ## CI report:
   
   * e0aa9de7e35bc1ac79a3d938454a9b5368445c69 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12100)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14662: [FLINK-20675][checkpointing] Ensure asynchronous checkpoint failure could fail the job by default

2021-01-15 Thread GitBox


flinkbot edited a comment on pull request #14662:
URL: https://github.com/apache/flink/pull/14662#issuecomment-760769285


   
   ## CI report:
   
   * 83d11e2d31f495d56a0566c95f5d1045ae5ef0c3 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12107)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






