[GitHub] [flink] flinkbot edited a comment on issue #9688: [FLINK-13056][runtime] Introduce FastRestartPipelinedRegionStrategy

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9688: [FLINK-13056][runtime] Introduce 
FastRestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9688#issuecomment-531737820
 
 
   
   ## CI report:
   
   * 23278b65f27154ae066841334f0d0b06885e713a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/127786107)
   * 5831d972c3f2a68398438015352e5a5e0a8de9da : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129102026)
   * 5c1518ef1f35d14862d371c16dab863c26f22fd8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129227917)
   * ef600d0ba1e6144def12001f086a784b810b62a0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133934484)
   * bea922da7880fb3e882f680f5fc9e0124c0add4f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133938989)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14370) KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink fails on Travis

2019-10-28 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated FLINK-14370:
-
Fix Version/s: 1.8.2
   1.9.2
   1.10.0

> KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink
>  fails on Travis
> ---
>
> Key: FLINK-14370
> URL: https://issues.apache.org/jira/browse/FLINK-14370
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.8.2, 1.10.0, 1.9.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The 
> {{KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink}}
>  fails on Travis with
> {code}
> Test 
> testOneToOneAtLeastOnceRegularSink(org.apache.flink.streaming.connectors.kafka.KafkaProducerAtLeastOnceITCase)
>  failed with:
> java.lang.AssertionError: Job should fail!
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnce(KafkaProducerTestBase.java:280)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink(KafkaProducerTestBase.java:206)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> https://api.travis-ci.com/v3/job/244297223/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-14370) KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink fails on Travis

2019-10-28 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin closed FLINK-14370.

Release Note: The fix has been pushed to master, release-1.8, and 
release-1.9. Closing the Jira issue.
  Resolution: Fixed

> KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink
>  fails on Travis
> ---
>
> Key: FLINK-14370
> URL: https://issues.apache.org/jira/browse/FLINK-14370
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: pull-request-available, test-stability
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The 
> {{KafkaProducerAtLeastOnceITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink}}
>  fails on Travis with
> {code}
> Test 
> testOneToOneAtLeastOnceRegularSink(org.apache.flink.streaming.connectors.kafka.KafkaProducerAtLeastOnceITCase)
>  failed with:
> java.lang.AssertionError: Job should fail!
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnce(KafkaProducerTestBase.java:280)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink(KafkaProducerTestBase.java:206)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> https://api.travis-ci.com/v3/job/244297223/log.txt





[jira] [Commented] (FLINK-14135) Introduce vectorized orc InputFormat for blink runtime

2019-10-28 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961696#comment-16961696
 ] 

Jark Wu commented on FLINK-14135:
-

I assigned this ticket to you, [~lzljs3620320].

> Introduce vectorized orc InputFormat for blink runtime
> ---
>
> Key: FLINK-14135
> URL: https://issues.apache.org/jira/browse/FLINK-14135
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / ORC
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>
> VectorizedOrcInputFormat is introduced to read ORC data in batches.
> When returning each row, instead of eagerly retrieving every field, we use 
> BaseRow's abstraction to return a columnar, row-like view.
> This greatly improves downstream filtering scenarios, because there is no 
> need to access redundant fields of filtered-out rows.
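The batch-plus-cursor idea described above can be sketched as a minimal, hypothetical illustration of a columnar row-like view. The classes `ColumnarBatch` and `ColumnarRow` and the method `sumFiltered` are invented for this sketch and are not Flink API:

```java
// A columnar batch stores data column-wise; a "row" is just a cursor into
// the batch, so fields of rows that a filter rejects are never read.
public class ColumnarRowDemo {

    static class ColumnarBatch {
        final long[] ids;        // column 0
        final double[] amounts;  // column 1
        ColumnarBatch(long[] ids, double[] amounts) {
            this.ids = ids;
            this.amounts = amounts;
        }
    }

    // Row-like view: holds only a reference to the batch and a row index.
    static class ColumnarRow {
        final ColumnarBatch batch;
        int rowId;
        ColumnarRow(ColumnarBatch batch) { this.batch = batch; }
        long getId() { return batch.ids[rowId]; }
        double getAmount() { return batch.amounts[rowId]; }
    }

    // Sum the amounts of rows passing a filter on id. Amounts of rejected
    // rows are never materialized or copied.
    static double sumFiltered(ColumnarBatch batch, long minId) {
        ColumnarRow row = new ColumnarRow(batch);
        double sum = 0;
        for (int i = 0; i < batch.ids.length; i++) {
            row.rowId = i;                // advance the cursor, no copy
            if (row.getId() >= minId) {
                sum += row.getAmount();   // field accessed only when needed
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        ColumnarBatch batch = new ColumnarBatch(
            new long[] {1, 2, 3}, new double[] {10.0, 20.0, 30.0});
        System.out.println(sumFiltered(batch, 2)); // 50.0
    }
}
```

The design point is that advancing a per-row cursor means a filter that rejects a row never touches the other columns, so no per-field materialization cost is paid for filtered-out data.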





[jira] [Assigned] (FLINK-14549) Bring more detail by using logicalType rather than conversionClass in exception msg

2019-10-28 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-14549:
---

Assignee: Leonard Xu

> Bring more detail by using logicalType rather than conversionClass  in 
> exception msg
> 
>
> Key: FLINK-14549
> URL: https://issues.apache.org/jira/browse/FLINK-14549
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.1
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Minor
> Fix For: 1.10.0
>
>
> We use the DataType's conversionClass name when validating the query 
> result's field types against the sink table schema, which is not precise 
> when the DataType has variable parameters such as DECIMAL(p, s) or 
> TIMESTAMP(p).
> {code}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> Field types of query result and registered TableSink 
> `default_catalog`.`default_database`.`q2_sinkTable` do not match.
> Query result schema: [d_week_seq1: Long, EXPR$1: BigDecimal, EXPR$2: 
> BigDecimal, EXPR$3: BigDecimal]
> TableSink schema:    [d_week_seq1: Long, EXPR$1: BigDecimal, EXPR$2: 
> BigDecimal, EXPR$3: BigDecimal]
>   at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateSink(TableSinkUtils.scala:68)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:179)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:178)
> {code}
>  
>  
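A minimal, hypothetical sketch of why the conversion class is less informative than the logical type: two distinct SQL types such as DECIMAL(10, 2) and DECIMAL(5, 0) both convert to java.math.BigDecimal, so a message built from conversion classes alone cannot reveal the mismatch. The class `DecimalType` and both message helpers below are invented for illustration and are not Flink API:

```java
import java.math.BigDecimal;

// Mini-model of the issue: different logical types share one conversion class.
public class LogicalTypeMessageDemo {

    // Simplified stand-in for a logical type with variable parameters.
    static class DecimalType {
        final int precision;
        final int scale;
        DecimalType(int precision, int scale) {
            this.precision = precision;
            this.scale = scale;
        }
        Class<?> conversionClass() { return BigDecimal.class; }
        @Override public String toString() {
            return "DECIMAL(" + precision + ", " + scale + ")";
        }
    }

    // A message built only from conversion class names: both sides look alike.
    static String conversionClassMessage(DecimalType query, DecimalType sink) {
        return query.conversionClass().getSimpleName() + " vs "
                + sink.conversionClass().getSimpleName();
    }

    // A message built from the logical types: the parameters are visible.
    static String logicalTypeMessage(DecimalType query, DecimalType sink) {
        return query + " vs " + sink;
    }

    public static void main(String[] args) {
        DecimalType queryField = new DecimalType(10, 2);
        DecimalType sinkField = new DecimalType(5, 0);
        System.out.println(conversionClassMessage(queryField, sinkField));
        // BigDecimal vs BigDecimal -- looks like a match, but is not
        System.out.println(logicalTypeMessage(queryField, sinkField));
        // DECIMAL(10, 2) vs DECIMAL(5, 0) -- the mismatch is visible
    }
}
```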





[jira] [Assigned] (FLINK-14135) Introduce vectorized orc InputFormat for blink runtime

2019-10-28 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-14135:
---

Assignee: Jingsong Lee

> Introduce vectorized orc InputFormat for blink runtime
> ---
>
> Key: FLINK-14135
> URL: https://issues.apache.org/jira/browse/FLINK-14135
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / ORC
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>
> VectorizedOrcInputFormat is introduced to read ORC data in batches.
> When returning each row, instead of eagerly retrieving every field, we use 
> BaseRow's abstraction to return a columnar, row-like view.
> This greatly improves downstream filtering scenarios, because there is no 
> need to access redundant fields of filtered-out rows.





[GitHub] [flink] flinkbot edited a comment on issue #10000: [FLINK-14398][SQL/Legacy Planner]Further split input unboxing code into separate methods

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #10000: [FLINK-14398][SQL/Legacy 
Planner]Further split input unboxing code into separate methods
URL: https://github.com/apache/flink/pull/10000#issuecomment-546552903
 
 
   
   ## CI report:
   
   * 8c5fd072d4a8665c5d89a9b920b9c699baabed00 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133646189)
   * e0e89676bd1a7cf67f028076397bac109698c2bf : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement 
KubernetesSessionClusterEntrypoint.
URL: https://github.com/apache/flink/pull/9986#issuecomment-545891630
 
 
   
   ## CI report:
   
   * 593bf42620faf09c1accbd692494646194e3d574 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133371151)
   * bb9fbd1d51a478793f63ae8b6d6e92b6a5a53775 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133940616)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9984: [FLINK-9495][kubernetes] Implement ResourceManager for Kubernetes.

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9984: [FLINK-9495][kubernetes] Implement 
ResourceManager for Kubernetes.
URL: https://github.com/apache/flink/pull/9984#issuecomment-545881191
 
 
   
   ## CI report:
   
   * 802ebf37e3d932169f9826b40df483bb5e9ac064 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133366796)
   * f16938ce2fb38ae216def737d14643b94d6083a1 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133940607)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9688: [FLINK-13056][runtime] Introduce FastRestartPipelinedRegionStrategy

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9688: [FLINK-13056][runtime] Introduce 
FastRestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9688#issuecomment-531737820
 
 
   
   ## CI report:
   
   * 23278b65f27154ae066841334f0d0b06885e713a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/127786107)
   * 5831d972c3f2a68398438015352e5a5e0a8de9da : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129102026)
   * 5c1518ef1f35d14862d371c16dab863c26f22fd8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129227917)
   * ef600d0ba1e6144def12001f086a784b810b62a0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133934484)
   * bea922da7880fb3e882f680f5fc9e0124c0add4f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133938989)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement 
KubernetesSessionClusterEntrypoint.
URL: https://github.com/apache/flink/pull/9986#issuecomment-545891630
 
 
   
   ## CI report:
   
   * 593bf42620faf09c1accbd692494646194e3d574 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133371151)
   * bb9fbd1d51a478793f63ae8b6d6e92b6a5a53775 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9984: [FLINK-9495][kubernetes] Implement ResourceManager for Kubernetes.

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9984: [FLINK-9495][kubernetes] Implement 
ResourceManager for Kubernetes.
URL: https://github.com/apache/flink/pull/9984#issuecomment-545881191
 
 
   
   ## CI report:
   
   * 802ebf37e3d932169f9826b40df483bb5e9ac064 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133366796)
   * f16938ce2fb38ae216def737d14643b94d6083a1 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Commented] (FLINK-10435) Client sporadically hangs after Ctrl + C

2019-10-28 Thread Zili Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961671#comment-16961671
 ] 

Zili Chen commented on FLINK-10435:
---

I think the fix provided by [~fly_in_gis] is valid. Another question is which 
versions we should backport it to. IMO the fix versions should include 1.9.2 
and 1.10.0. What do you think?

> Client sporadically hangs after Ctrl + C
> 
>
> Key: FLINK-10435
> URL: https://issues.apache.org/jira/browse/FLINK-10435
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client, Deployment / YARN
>Affects Versions: 1.5.5, 1.6.2, 1.7.0, 1.9.1
>Reporter: Gary Yao
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When submitting a YARN job cluster in attached mode, the client hangs 
> indefinitely if Ctrl + C is pressed at the right time. One can recover from 
> this by sending SIGKILL.
> *Command to submit job*
> {code}
> HADOOP_CLASSPATH=`hadoop classpath` bin/flink run -m yarn-cluster 
> examples/streaming/WordCount.jar
> {code}
>  
> *Output/Stacktrace*
> {code}
> [hadoop@ip-172-31-45-22 flink-1.5.4]$ HADOOP_CLASSPATH=`hadoop classpath` 
> bin/flink run -m yarn-cluster examples/streaming/WordCount.jar
> Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR was set.
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/home/hadoop/flink-1.5.4/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 2018-09-26 12:01:04,241 INFO  org.apache.hadoop.yarn.client.RMProxy   
>   - Connecting to ResourceManager at 
> ip-172-31-45-22.eu-central-1.compute.internal/172.31.45.22:8032
> 2018-09-26 12:01:04,386 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli   
>   - No path for the flink jar passed. Using the location of class 
> org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2018-09-26 12:01:04,386 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli   
>   - No path for the flink jar passed. Using the location of class 
> org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2018-09-26 12:01:04,402 WARN  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Neither the 
> HADOOP_CONF_DIR nor the YARN_CONF_DIR environment variable is set. The Flink 
> YARN Client needs one of these to be set to properly load the Hadoop 
> configuration for accessing YARN.
> 2018-09-26 12:01:04,598 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Cluster 
> specification: ClusterSpecification{masterMemoryMB=1024, 
> taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}
> 2018-09-26 12:01:04,972 WARN  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - The 
> configuration directory ('/home/hadoop/flink-1.5.4/conf') contains both LOG4J 
> and Logback configuration files. Please delete or rename one of them.
> 2018-09-26 12:01:07,857 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Submitting 
> application master application_1537944258063_0017
> 2018-09-26 12:01:07,913 INFO  
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted 
> application application_1537944258063_0017
> 2018-09-26 12:01:07,913 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Waiting for 
> the cluster to be allocated
> 2018-09-26 12:01:07,916 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Deploying 
> cluster, current state ACCEPTED
> ^C2018-09-26 12:01:08,851 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Cancelling 
> deployment from Deployment Failure Hook
> 2018-09-26 12:01:08,854 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Killing YARN 
> application
> 
>  The program finished with the following exception:
> org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't 
> deploy Yarn session cluster
>   at 
> org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:410)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:258)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:214)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1025)
>   at 
> 

[GitHub] [flink] flinkbot edited a comment on issue #9688: [FLINK-13056][runtime] Introduce FastRestartPipelinedRegionStrategy

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9688: [FLINK-13056][runtime] Introduce 
FastRestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9688#issuecomment-531737820
 
 
   
   ## CI report:
   
   * 23278b65f27154ae066841334f0d0b06885e713a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/127786107)
   * 5831d972c3f2a68398438015352e5a5e0a8de9da : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129102026)
   * 5c1518ef1f35d14862d371c16dab863c26f22fd8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129227917)
   * ef600d0ba1e6144def12001f086a784b810b62a0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133934484)
   * bea922da7880fb3e882f680f5fc9e0124c0add4f : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] wangyang0918 commented on a change in pull request #9986: [FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.

2019-10-28 Thread GitBox
wangyang0918 commented on a change in pull request #9986: 
[FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.
URL: https://github.com/apache/flink/pull/9986#discussion_r339889492
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesSessionClusterEntrypoint.java
 ##
 @@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.entrypoint;
 
 Review comment:
   It is my bad. I have pushed the correct branch.




[jira] [Commented] (FLINK-14135) Introduce vectorized orc InputFormat for blink runtime

2019-10-28 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961663#comment-16961663
 ] 

Jingsong Lee commented on FLINK-14135:
--

[~jark] Can you assign this ticket to me?

> Introduce vectorized orc InputFormat for blink runtime
> ---
>
> Key: FLINK-14135
> URL: https://issues.apache.org/jira/browse/FLINK-14135
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / ORC
>Reporter: Jingsong Lee
>Priority: Major
>
> VectorizedOrcInputFormat is introduced to read ORC data in batches.
> When returning each row, instead of eagerly retrieving every field, we use 
> BaseRow's abstraction to return a columnar, row-like view.
> This greatly improves downstream filtering scenarios, because there is no 
> need to access redundant fields of filtered-out rows.





[GitHub] [flink] flinkbot edited a comment on issue #10013: [FLINK-13869][table-planner-blink][hive] Hive functions can not work in blink planner stream mode

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #10013: 
[FLINK-13869][table-planner-blink][hive] Hive functions can not work in blink 
planner stream mode
URL: https://github.com/apache/flink/pull/10013#issuecomment-546868067
 
 
   
   ## CI report:
   
   * 8c3d1a58bd358493197697e07b9d6542bf29e59a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133783896)
   * 2f8a9e6c8b7ff64e9373b35037cc72099dbcfcbe : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133932334)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] haodang commented on issue #10000: [FLINK-14398][SQL/Legacy Planner]Further split input unboxing code into separate methods

2019-10-28 Thread GitBox
haodang commented on issue #10000: [FLINK-14398][SQL/Legacy Planner]Further 
split input unboxing code into separate methods
URL: https://github.com/apache/flink/pull/10000#issuecomment-547247064
 
 
   Thank you very much @wuchong for the suggestions!  I'll add in the test.




[GitHub] [flink] flinkbot edited a comment on issue #9688: [FLINK-13056][runtime] Introduce FastRestartPipelinedRegionStrategy

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9688: [FLINK-13056][runtime] Introduce 
FastRestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9688#issuecomment-531737820
 
 
   
   ## CI report:
   
   * 23278b65f27154ae066841334f0d0b06885e713a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/127786107)
   * 5831d972c3f2a68398438015352e5a5e0a8de9da : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129102026)
   * 5c1518ef1f35d14862d371c16dab863c26f22fd8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129227917)
   * ef600d0ba1e6144def12001f086a784b810b62a0 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133934484)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Comment Edited] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-28 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961652#comment-16961652
 ] 

shengjk1 edited comment on FLINK-7289 at 10/29/19 3:55 AM:
---

Thanks [~yunta] and [~mikekap]. I have solved it. As Mike Kaplinskiy said, I 
put the jar, which only includes BackendOptions.class, on the Flink boot 
classpath, not on my application code classpath.


was (Author: shengjk1):
Thanks [~yunta] and [~mikekap]. I have solved it. As Mike Kaplinskiy said, i 
put the jar  which only include BackendOptions.class  in the flink boot 
classpath - not in my application code classpath .

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.
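The "interplay of several configuration parameters" mentioned above can be made concrete with a back-of-envelope estimate. The sketch below is illustrative only: the formula (memtables per column family plus a shared block cache) is a common rule of thumb, not Flink's or RocksDB's actual accounting, and the class and parameter names are made up for this example.

```java
// Rough RocksDB memory budget: memtables per column family plus the shared
// block cache. This deliberately ignores index/filter blocks, iterators, and
// pinned blocks, so the real footprint can be higher.
public class RocksDbMemoryEstimate {

    static long estimateBytes(long writeBufferSize,
                              int maxWriteBufferNumber,
                              int numColumnFamilies,
                              long blockCacheSize) {
        long memtables = writeBufferSize * maxWriteBufferNumber * (long) numColumnFamilies;
        return memtables + blockCacheSize;
    }

    public static void main(String[] args) {
        // 64 MiB write buffers, 2 per column family, 4 column families
        // (one per registered state), plus a 256 MiB block cache.
        long estimate = estimateBytes(64L << 20, 2, 4, 256L << 20);
        System.out.println(estimate / (1 << 20) + " MiB");  // prints "768 MiB"
    }
}
```

Even this simplified arithmetic shows why a single "memory limit" knob is hard: every term multiplies across column families, and one process may host several RocksDB instances.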



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-28 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961652#comment-16961652
 ] 

shengjk1 commented on FLINK-7289:
-

Thanks [~yunta] and [~mikekap]. I have solved it. As Mike Kaplinskiy said, I 
put the jar, which only includes BackendOptions.class, on the Flink boot 
classpath, not on my application code classpath.






[jira] [Assigned] (FLINK-10435) Client sporadically hangs after Ctrl + C

2019-10-28 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-10435:
-

Assignee: Yang Wang

> Client sporadically hangs after Ctrl + C
> 
>
> Key: FLINK-10435
> URL: https://issues.apache.org/jira/browse/FLINK-10435
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client, Deployment / YARN
>Affects Versions: 1.5.5, 1.6.2, 1.7.0, 1.9.1
>Reporter: Gary Yao
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When submitting a YARN job cluster in attached mode, the client hangs 
> indefinitely if Ctrl + C is pressed at the right time. One can recover from 
> this by sending SIGKILL.
> *Command to submit job*
> {code}
> HADOOP_CLASSPATH=`hadoop classpath` bin/flink run -m yarn-cluster 
> examples/streaming/WordCount.jar
> {code}
>  
> *Output/Stacktrace*
> {code}
> [hadoop@ip-172-31-45-22 flink-1.5.4]$ HADOOP_CLASSPATH=`hadoop classpath` 
> bin/flink run -m yarn-cluster examples/streaming/WordCount.jar
> Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR was set.
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/home/hadoop/flink-1.5.4/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 2018-09-26 12:01:04,241 INFO  org.apache.hadoop.yarn.client.RMProxy   
>   - Connecting to ResourceManager at 
> ip-172-31-45-22.eu-central-1.compute.internal/172.31.45.22:8032
> 2018-09-26 12:01:04,386 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli   
>   - No path for the flink jar passed. Using the location of class 
> org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2018-09-26 12:01:04,386 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli   
>   - No path for the flink jar passed. Using the location of class 
> org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2018-09-26 12:01:04,402 WARN  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Neither the 
> HADOOP_CONF_DIR nor the YARN_CONF_DIR environment variable is set. The Flink 
> YARN Client needs one of these to be set to properly load the Hadoop 
> configuration for accessing YARN.
> 2018-09-26 12:01:04,598 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Cluster 
> specification: ClusterSpecification{masterMemoryMB=1024, 
> taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}
> 2018-09-26 12:01:04,972 WARN  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - The 
> configuration directory ('/home/hadoop/flink-1.5.4/conf') contains both LOG4J 
> and Logback configuration files. Please delete or rename one of them.
> 2018-09-26 12:01:07,857 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Submitting 
> application master application_1537944258063_0017
> 2018-09-26 12:01:07,913 INFO  
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted 
> application application_1537944258063_0017
> 2018-09-26 12:01:07,913 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Waiting for 
> the cluster to be allocated
> 2018-09-26 12:01:07,916 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Deploying 
> cluster, current state ACCEPTED
> ^C2018-09-26 12:01:08,851 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Cancelling 
> deployment from Deployment Failure Hook
> 2018-09-26 12:01:08,854 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Killing YARN 
> application
> 
>  The program finished with the following exception:
> org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't 
> deploy Yarn session cluster
>   at 
> org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:410)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:258)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:214)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1025)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1101)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 

[GitHub] [flink] haodang commented on a change in pull request #10000: [FLINK-14398][SQL/Legacy Planner]Further split input unboxing code into separate methods

2019-10-28 Thread GitBox
haodang commented on a change in pull request #10000: [FLINK-14398][SQL/Legacy 
Planner]Further split input unboxing code into separate methods
URL: https://github.com/apache/flink/pull/10000#discussion_r339884830
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/codegen/CollectorCodeGenerator.scala
 ##
 @@ -73,19 +73,13 @@ class CollectorCodeGenerator(
 val input1TypeClass = boxedTypeTermForTypeInfo(input1)
 val input2TypeClass = boxedTypeTermForTypeInfo(collectedType)
 
-// declaration in case of code splits
-val recordMember = if (hasCodeSplits) {
-  s"private $input2TypeClass $input2Term;"
-} else {
-  ""
-}
+val inputDecleration =
+  List(s"private $input1TypeClass $input1Term;",
+   s"private $input2TypeClass $input2Term;")
 
-// assignment in case of code splits
-val recordAssignment = if (hasCodeSplits) {
-  s"$input2Term" // use member
-} else {
-  s"$input2TypeClass $input2Term" // local variable
-}
+val inputAssignment =
+  List(s"$input1Term = ($input1TypeClass) getInput();",
+   s"$input2Term = ($input2TypeClass) record;")
 
 Review comment:
Any particular reason you'd prefer to write it out directly inside the class 
code? I was thinking that grouping them into `inputAssignment` and 
`inputDeclaration` might make the generated class code look more succinct and 
organized, and thus easier to read.
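For context on the member-versus-local trade-off being discussed: once generated code is split into helper methods, a local variable in the entry method is no longer visible across the split boundary, so inputs have to be promoted to fields. The sketch below is a hand-written illustration of that pattern, not the planner's actual generated code; all names are made up.

```java
// Hand-written analogue of split generated code: input1/record are member
// fields (the `inputDeclaration` in the PR), assigned once at the entry
// point (the `inputAssignment`), so split-out methods can still read them.
class SplitCollectorSketch {
    private String input1;  // a local here would not survive the code split
    private String record;

    String collect(String in, String rec) {
        this.input1 = in;
        this.record = rec;
        return splitMethod0() + splitMethod1();
    }

    private String splitMethod0() { return input1.toUpperCase(); }
    private String splitMethod1() { return record.toLowerCase(); }
}
```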




[GitHub] [flink] TisonKun commented on a change in pull request #10010: [FLINK-10435][yarn]Client sporadically hangs after Ctrl + C

2019-10-28 Thread GitBox
TisonKun commented on a change in pull request #10010: 
[FLINK-10435][yarn]Client sporadically hangs after Ctrl + C
URL: https://github.com/apache/flink/pull/10010#discussion_r339884112
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
 ##
 @@ -1241,7 +1241,6 @@ private void failSessionDuringDeployment(YarnClient 
yarnClient, YarnClientApplic
// call (we don't know if the application has been 
deployed when the error occured).
LOG.debug("Error while killing YARN application", e);
}
-   yarnClient.stop();
 
 Review comment:
   Thanks for your clarification. Sounds reasonable.




[GitHub] [flink] haodang removed a comment on issue #10000: [FLINK-14398][SQL/Legacy Planner]Further split input unboxing code into separate methods

2019-10-28 Thread GitBox
haodang removed a comment on issue #10000: [FLINK-14398][SQL/Legacy 
Planner]Further split input unboxing code into separate methods
URL: https://github.com/apache/flink/pull/10000#issuecomment-547243583
 
 
   Thank you very much for the suggestions @wuchong!  They all sound good and 
I'll incorporate them tomorrow.




[GitHub] [flink] haodang commented on issue #10000: [FLINK-14398][SQL/Legacy Planner]Further split input unboxing code into separate methods

2019-10-28 Thread GitBox
haodang commented on issue #10000: [FLINK-14398][SQL/Legacy Planner]Further 
split input unboxing code into separate methods
URL: https://github.com/apache/flink/pull/10000#issuecomment-547243583
 
 
   Thank you very much for the suggestions @wuchong!  They all sound good and 
I'll incorporate them tomorrow.




[jira] [Commented] (FLINK-11835) ZooKeeperLeaderElectionITCase.testJobExecutionOnClusterWithLeaderChange failed

2019-10-28 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961644#comment-16961644
 ] 

Yun Tang commented on FLINK-11835:
--

Another instance, but with a different stack trace:

[https://api.travis-ci.org/v3/job/603576430/log.txt]

 
{code:java}
Test 
testJobExecutionOnClusterWithLeaderChange(org.apache.flink.test.runtime.leaderelection.ZooKeeperLeaderElectionITCase)
 failed with:
java.util.concurrent.ExecutionException: 
org.apache.flink.runtime.jobmaster.JobNotFinishedException: The job 
(34dfd9f8b7e3d6db11c0b00231555349) has been not been finished.
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at 
org.apache.flink.test.runtime.leaderelection.ZooKeeperLeaderElectionITCase.testJobExecutionOnClusterWithLeaderChange(ZooKeeperLeaderElectionITCase.java:135)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.apache.flink.runtime.jobmaster.JobNotFinishedException: The job 
(34dfd9f8b7e3d6db11c0b00231555349) has been not been finished.
at 
org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.jobFinishedByOther(JobManagerRunnerImpl.java:247)
at 
org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.jobAlreadyDone(JobManagerRunnerImpl.java:348)
at 
org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.lambda$verifyJobSchedulingStatusAndStartJobManager$3(JobManagerRunnerImpl.java:309)
at 
java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:981)
at 
java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2124)
at 
org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.verifyJobSchedulingStatusAndStartJobManager(JobManagerRunnerImpl.java:306)
at 
org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.lambda$grantLeadership$2(JobManagerRunnerImpl.java:295)
at 
java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:981)
at 
java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2124)
at 
org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.grantLeadership(JobManagerRunnerImpl.java:292)
at 
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService.isLeader(ZooKeeperLeaderElectionService.java:236)
at 

[jira] [Comment Edited] (FLINK-14544) Resuming Externalized Checkpoint after terminal failure (file, async) end-to-end test fails on travis

2019-10-28 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961638#comment-16961638
 ] 

Congxian Qiu(klion26) edited comment on FLINK-14544 at 10/29/19 3:36 AM:
-

From the log, I found that the {{MailboxStateException}} is thrown after the 
{{Artificial failure}} in the second job.
{code:java}
2019-10-25 20:46:09,345 INFO  org.apache.flink.runtime.taskmanager.Task 
- FailureMapper (1/1) (466747dfea13738afd021da649dc53f4) switched 
from RUNNING to FAILED.
java.lang.Exception: Artificial failure.
at 
org.apache.flink.streaming.tests.FailureMapper.map(FailureMapper.java:59)
at 
org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:280)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:152)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:423)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:696)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:521)
at java.lang.Thread.run(Thread.java:748)
2019-10-25 20:46:09,361 ERROR org.apache.flink.runtime.taskmanager.Task 
- Error while canceling task FailureMapper (1/1).
java.util.concurrent.RejectedExecutionException: 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: Mailbox 
is in state CLOSED, but is required to be in state OPEN for put operations.
at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:75)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.sendPriorityLetter(MailboxProcessor.java:176)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.allActionsCompleted(MailboxProcessor.java:172)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.cancel(StreamTask.java:540)
at 
org.apache.flink.runtime.taskmanager.Task.cancelInvokable(Task.java:1191)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:764)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:521)
at java.lang.Thread.run(Thread.java:748)
Caused by: 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: Mailbox 
is in state CLOSED, but is required to be in state OPEN for put operations.
at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkPutStateConditions(TaskMailboxImpl.java:199)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.putHeadInternal(TaskMailboxImpl.java:141)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.putFirst(TaskMailboxImpl.java:131)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:73)
... 7 more
{code}
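The "Mailbox is in state CLOSED" failure above comes from a put-state check in the task mailbox. The sketch below reproduces the shape of that check with simplified stand-in names; it is not the actual `TaskMailboxImpl` code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal mailbox: once close() has run, any further put is rejected with
// the same kind of message seen in the log above.
class MailboxSketch {
    enum State { OPEN, QUIESCED, CLOSED }

    private State state = State.OPEN;
    private final Deque<Runnable> queue = new ArrayDeque<>();

    synchronized void putFirst(Runnable letter) {
        if (state != State.OPEN) {  // analogue of checkPutStateConditions
            throw new IllegalStateException("Mailbox is in state " + state
                + ", but is required to be in state OPEN for put operations.");
        }
        queue.addFirst(letter);
    }

    synchronized void close() {
        state = State.CLOSED;
        queue.clear();
    }
}
```

This makes the race visible: a task that fails closes its mailbox, and a concurrent cancel path that still tries to enqueue a priority letter hits the rejected put.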
 

So from the current analysis, the test `Resuming Externalized Checkpoint after 
terminal failure (file, async)` completes a checkpoint in job 1, restores from 
the checkpoint completed in job 1, and completes more checkpoints in job 2, but 
the log contains a {{MailboxStateException}}, so we see the test failed.
 * the first job completes checkpoints 
 ** {{2019-10-25 20:46:08,614 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed 
checkpoint 2 for job 2d7c274d4561078c592df0bbb1dfad52 (156791 bytes in 367 
ms).}}
 * trigger the artificial exception
 * restore from the checkpoint completed by the previous job
 ** 2019-10-25 20:46:13,358 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Starting job 
824b849f432dcffdeb0d18ab6b1f7d6c from savepoint 
[file:///home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-50753031208/externalized-chckpt-e2e-backend-dir/2d7c274d4561078c592df0bbb1dfad52/chk-2]
 ()
 2019-10-25 20:46:13,378 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Reset the 
checkpoint ID of job 824b849f432dcffdeb0d18ab6b1f7d6c to 3.
 2019-10-25 20:46:13,378 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Restoring job 
824b849f432dcffdeb0d18ab6b1f7d6c from latest valid checkpoint: Checkpoint 2 @ 0 
for 

[GitHub] [flink] wangyang0918 commented on a change in pull request #10010: [FLINK-10435][yarn]Client sporadically hangs after Ctrl + C

2019-10-28 Thread GitBox
wangyang0918 commented on a change in pull request #10010: 
[FLINK-10435][yarn]Client sporadically hangs after Ctrl + C
URL: https://github.com/apache/flink/pull/10010#discussion_r339882058
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
 ##
 @@ -1241,7 +1241,6 @@ private void failSessionDuringDeployment(YarnClient 
yarnClient, YarnClientApplic
// call (we don't know if the application has been 
deployed when the error occured).
LOG.debug("Error while killing YARN application", e);
}
-   yarnClient.stop();
 
 Review comment:
   In L525 and L511, it will throw a `YarnDeploymentException`, and then the 
yarn client will be stopped in 
`CliFrontend->runProgram->clusterDescriptor.close()`. We should not stop the 
yarn client twice.
   
   We create the yarn client outside, and I think we should always use 
`clusterDescriptor.close()` to stop the yarn client.
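The "should not stop the yarn client twice" concern can be enforced with a close-once guard in the descriptor, so individual failure sites never call stop() themselves. This is a sketch under the assumption that a single owner closes the client; `YarnClientLike` is a stand-in interface, not the Hadoop API.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// The descriptor owns the client and stops it exactly once, no matter how
// many failure paths end up calling close().
class DescriptorSketch implements AutoCloseable {
    interface YarnClientLike { void stop(); }

    private final YarnClientLike yarnClient;
    private final AtomicBoolean closed = new AtomicBoolean(false);

    DescriptorSketch(YarnClientLike yarnClient) {
        this.yarnClient = yarnClient;
    }

    @Override
    public void close() {
        if (closed.compareAndSet(false, true)) {  // only the first close() wins
            yarnClient.stop();
        }
    }
}
```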




[jira] [Commented] (FLINK-14544) Resuming Externalized Checkpoint after terminal failure (file, async) end-to-end test fails on travis

2019-10-28 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961638#comment-16961638
 ] 

Congxian Qiu(klion26) commented on FLINK-14544:
---

From the log, I found that the {{MailboxStateException}} is thrown after the 
{{Artificial failure}} in the second job.
 

So currently, the conclusion is that the test `Resuming Externalized 
Checkpoint after terminal failure (file, async)` completes a checkpoint in 
job 1, restores from that checkpoint, and completes more checkpoints in job 2, 
but the log contains a {{MailboxStateException}}, so we see the test failed.
 * the first job completes checkpoints  {{2019-10-25 20:46:08,614 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed 
checkpoint 2 for job 2d7c274d4561078c592df0bbb1dfad52 (156791 bytes in 367 
ms).}}
 * trigger the artificial exception
 * restore from the checkpoint completed by the previous job
 ** 2019-10-25 20:46:13,358 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Starting job 
824b849f432dcffdeb0d18ab6b1f7d6c from savepoint 
file:///home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-50753031208/externalized-chckpt-e2e-backend-dir/2d7c274d4561078c592df0bbb1dfad52/chk-2
 ()
2019-10-25 20:46:13,378 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Reset the 
checkpoint ID of job 824b849f432dcffdeb0d18ab6b1f7d6c to 3.
2019-10-25 20:46:13,378 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Restoring job 
824b849f432dcffdeb0d18ab6b1f7d6c from latest valid checkpoint: Checkpoint 2 @ 0 
for 824b849f432dcffdeb0d18ab6b1f7d6c.
 * complete more new checkpoints in the new job
 

[jira] [Updated] (FLINK-14550) can't use proctime attribute when register datastream for table and exist nested fields

2019-10-28 Thread hehuiyuan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hehuiyuan updated FLINK-14550:
--
Description: 
*_The data schema:_*

{code:json}
{
  "type": "record",
  "name": "GenericUser",
  "namespace": "org.apache.flink.formats.avro.generated",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "favorite_number", "type": ["int", "null"]},
    {"name": "favorite_color", "type": ["string", "null"]},
    {"name": "type_long_test", "type": ["long", "null"]},
    {"name": "type_double_test", "type": "double"},
    {"name": "type_null_test", "type": ["null", "string"]},
    {"name": "type_bool_test", "type": ["boolean"]},
    {"name": "type_array_string", "type": {"type": "array", "items": "string"}},
    {"name": "type_array_boolean", "type": {"type": "array", "items": "boolean"}},
    {"name": "type_nullable_array", "type": ["null", {"type": "array", "items": "string"}], "default": null},
    {"name": "type_enum", "type": {"type": "enum", "name": "Colors", "symbols": ["RED", "GREEN", "BLUE"]}},
    {"name": "type_map", "type": {"type": "map", "values": "long"}},
    {"name": "type_fixed", "type": ["null", {"type": "fixed", "name": "Fixed16", "size": 16}], "size": 16},
    {"name": "type_union", "type": ["null", "boolean", "long", "double"]},
    {"name": "type_nested", "type": ["null", {"type": "record", "name": "Address", "fields": [
      {"name": "num", "type": "int"},
      {"name": "street", "type": "string"},
      {"name": "city", "type": "string"},
      {"name": "state", "type": "string"},
      {"name": "zip", "type": "string"}
    ]}]},
    {"name": "type_bytes", "type": "bytes"},
    {"name": "type_date", "type": {"type": "int", "logicalType": "date"}},
    {"name": "type_time_millis", "type": {"type": "int", "logicalType": "time-millis"}},
    {"name": "type_time_micros", "type": {"type": "int", "logicalType": "time-micros"}},
    {"name": "type_timestamp_millis", "type": {"type": "long", "logicalType": "timestamp-millis"}},
    {"name": "type_timestamp_micros", "type": {"type": "long", "logicalType": "timestamp-micros"}},
    {"name": "type_decimal_bytes", "type": {"type": "bytes", "logicalType": "decimal", "precision": 4, "scale": 2}},
    {"name": "type_decimal_fixed", "type": {"type": "fixed", "name": "Fixed2", "size": 2, "logicalType": "decimal", "precision": 4, "scale": 2}}
  ]
}
{code}

 

*_The code:_*

tableEnv.registerDataStream("source_table",deserilizeDataStreamRows,"type_nested$Address$street,userActionTime.proctime");

 

_*The error is as follows:*_

Exception in thread "main" org.apache.flink.table.api.TableException: The
proctime attribute can only be appended to the table schema and not replace an
existing field. Please move 'userActionTime' to the end of the schema.
	at org.apache.flink.table.api.StreamTableEnvironment.org$apache$flink$table$api$StreamTableEnvironment$$extractProctime$1(StreamTableEnvironment.scala:649)
	at org.apache.flink.table.api.StreamTableEnvironment$$anonfun$validateAndExtractTimeAttributes$1.apply(StreamTableEnvironment.scala:676)
	at org.apache.flink.table.api.StreamTableEnvironment$$anonfun$validateAndExtractTimeAttributes$1.apply(StreamTableEnvironment.scala:668)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.apache.flink.table.api.StreamTableEnvironment.validateAndExtractTimeAttributes(StreamTableEnvironment.scala:668)
	at org.apache.flink.table.api.StreamTableEnvironment.registerDataStreamInternal(StreamTableEnvironment.scala:549)
	at org.apache.flink.table.api.java.StreamTableEnvironment.registerDataStream(StreamTableEnvironment.scala:136)
	at com.jd.flink.sql.demo.validate.schema.avro.AvroQuickStartMain.main(AvroQuickStartMain.java:145)

 

 

This code works:

tableEnv.registerDataStream("source_table",deserilizeDataStreamRows,"type_nested$Address$street");

 

This code works:

tableEnv.registerDataStream("source_table",deserilizeDataStreamRows,"type_nested,userActionTime.proctime");
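For reference, a minimal Python sketch of the append-only rule this exception enforces. The real check lives in `StreamTableEnvironment.validateAndExtractTimeAttributes` (per the stack trace above); this sketch is not Flink's code, just the rule it states: a `.proctime` attribute must introduce a new field name after all existing fields.

```python
PROCTIME = ".proctime"

def validate_proctime(stream_fields, expression):
    # Sketch of the rule: a proctime attribute may only APPEND a new
    # field at the END of the schema, never replace an existing field.
    requested = [f.strip() for f in expression.split(",")]
    for i, field in enumerate(requested):
        if field.endswith(PROCTIME):
            name = field[:-len(PROCTIME)]
            if name in stream_fields or i != len(requested) - 1:
                raise ValueError(
                    "The proctime attribute can only be appended to the table "
                    "schema and not replace an existing field. Please move "
                    "'%s' to the end of the schema." % name)
    return requested

# Appending a new field at the end is accepted:
print(validate_proctime(["type_nested"], "type_nested,userActionTime.proctime"))
# ['type_nested', 'userActionTime.proctime']
```

The report above suggests the real check misfires when the expression also contains `$`-flattened nested names such as `type_nested$Address$street`, even though `userActionTime` is new and last.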

 

 

 


[GitHub] [flink] flinkbot edited a comment on issue #9688: [FLINK-13056][runtime] Introduce FastRestartPipelinedRegionStrategy

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9688: [FLINK-13056][runtime] Introduce 
FastRestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9688#issuecomment-531737820
 
 
   
   ## CI report:
   
   * 23278b65f27154ae066841334f0d0b06885e713a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/127786107)
   * 5831d972c3f2a68398438015352e5a5e0a8de9da : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129102026)
   * 5c1518ef1f35d14862d371c16dab863c26f22fd8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129227917)
   * ef600d0ba1e6144def12001f086a784b810b62a0 : UNKNOWN
   
   
   




[jira] [Created] (FLINK-14550) can't use proctime attribute when register datastream for table and exist nested fields

2019-10-28 Thread hehuiyuan (Jira)
hehuiyuan created FLINK-14550:
-

 Summary: can't use proctime attribute when register datastream for 
table and exist nested fields
 Key: FLINK-14550
 URL: https://issues.apache.org/jira/browse/FLINK-14550
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: hehuiyuan


*_The data schema:_*

 

final String schemaString = ...; // one concatenated literal that builds this Avro schema:

{
  "type": "record",
  "name": "GenericUser",
  "namespace": "org.apache.flink.formats.avro.generated",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "favorite_number", "type": ["int", "null"]},
    {"name": "favorite_color", "type": ["string", "null"]},
    {"name": "type_long_test", "type": ["long", "null"]},
    {"name": "type_double_test", "type": "double"},
    {"name": "type_null_test", "type": ["null", "string"]},
    {"name": "type_bool_test", "type": ["boolean"]},
    {"name": "type_array_string", "type": {"type": "array", "items": "string"}},
    {"name": "type_array_boolean", "type": {"type": "array", "items": "boolean"}},
    {"name": "type_nullable_array", "type": ["null", {"type": "array", "items": "string"}], "default": null},
    {"name": "type_enum", "type": {"type": "enum", "name": "Colors", "symbols": ["RED", "GREEN", "BLUE"]}},
    {"name": "type_map", "type": {"type": "map", "values": "long"}},
    {"name": "type_fixed", "type": ["null", {"type": "fixed", "name": "Fixed16", "size": 16}], "size": 16},
    {"name": "type_union", "type": ["null", "boolean", "long", "double"]},
    {"name": "type_nested", "type": ["null", {"type": "record", "name": "Address", "fields": [
      {"name": "num", "type": "int"},
      {"name": "street", "type": "string"},
      {"name": "city", "type": "string"},
      {"name": "state", "type": "string"},
      {"name": "zip", "type": "string"}
    ]}]},
    {"name": "type_bytes", "type": "bytes"},
    {"name": "type_date", "type": {"type": "int", "logicalType": "date"}},
    {"name": "type_time_millis", "type": {"type": "int", "logicalType": "time-millis"}},
    {"name": "type_time_micros", "type": {"type": "int", "logicalType": "time-micros"}},
    {"name": "type_timestamp_millis", "type": {"type": "long", "logicalType": "timestamp-millis"}},
    {"name": "type_timestamp_micros", "type": {"type": "long", "logicalType": "timestamp-micros"}},
    {"name": "type_decimal_bytes", "type": {"type": "bytes", "logicalType": "decimal", "precision": 4, "scale": 2}},
    {"name": "type_decimal_fixed", "type": {"type": "fixed", "name": "Fixed2", "size": 2, "logicalType": "decimal", "precision": 4, "scale": 2}}
  ]
}
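The reference `type_nested$Address$street` used below walks into the nested `Address` record of this schema. A Python sketch of that `$`-path resolution (illustrative only; this is not Flink's field-flattening code):

```python
import json

# A minimal excerpt of the schema above (only the nested record is needed here).
nested_field = json.loads("""
{"name": "type_nested", "type": ["null",
  {"type": "record", "name": "Address", "fields": [
    {"name": "num",    "type": "int"},
    {"name": "street", "type": "string"},
    {"name": "city",   "type": "string"},
    {"name": "state",  "type": "string"},
    {"name": "zip",    "type": "string"}]}]}
""")

def resolve(field, path):
    # Follow a '$'-separated reference like 'type_nested$Address$street'
    # into a nullable record type.
    top, record_name, member = path.split("$")
    assert field["name"] == top
    record = next(t for t in field["type"] if isinstance(t, dict))  # skip "null"
    assert record["name"] == record_name
    return next(f for f in record["fields"] if f["name"] == member)

print(resolve(nested_field, "type_nested$Address$street"))
# {'name': 'street', 'type': 'string'}
```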

 

*_The code:_*

tableEnv.registerDataStream("source_table",deserilizeDataStreamRows,"type_nested$Address$street,userActionTime.proctime");

 

_*The error is as follows:*_

Exception in thread "main" org.apache.flink.table.api.TableException: The
proctime attribute can only be appended to the table schema and not replace an
existing field. Please move 'userActionTime' to the end of the schema.
	at org.apache.flink.table.api.StreamTableEnvironment.org$apache$flink$table$api$StreamTableEnvironment$$extractProctime$1(StreamTableEnvironment.scala:649)
	at org.apache.flink.table.api.StreamTableEnvironment$$anonfun$validateAndExtractTimeAttributes$1.apply(StreamTableEnvironment.scala:676)
	at org.apache.flink.table.api.StreamTableEnvironment$$anonfun$validateAndExtractTimeAttributes$1.apply(StreamTableEnvironment.scala:668)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.apache.flink.table.api.StreamTableEnvironment.validateAndExtractTimeAttributes(StreamTableEnvironment.scala:668)
	at org.apache.flink.table.api.StreamTableEnvironment.registerDataStreamInternal(StreamTableEnvironment.scala:549)
	at org.apache.flink.table.api.java.StreamTableEnvironment.registerDataStream(StreamTableEnvironment.scala:136)
	at com.jd.flink.sql.demo.validate.schema.avro.AvroQuickStartMain.main(AvroQuickStartMain.java:145)

 

 

This code works:

tableEnv.registerDataStream("source_table",deserilizeDataStreamRows,"type_nested$Address$street");

 

This code works:

tableEnv.registerDataStream("source_table",deserilizeDataStreamRows,"type_nested,userActionTime.proctime");

 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10013: [FLINK-13869][table-planner-blink][hive] Hive functions can not work in blink planner stream mode

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #10013: 
[FLINK-13869][table-planner-blink][hive] Hive functions can not work in blink 
planner stream mode
URL: https://github.com/apache/flink/pull/10013#issuecomment-546868067
 
 
   
   ## CI report:
   
   * 8c3d1a58bd358493197697e07b9d6542bf29e59a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133783896)
   * 2f8a9e6c8b7ff64e9373b35037cc72099dbcfcbe : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133932334)
   
   
   




[jira] [Commented] (FLINK-14424) Create tpc-ds end to end test to support all tpc-ds queries

2019-10-28 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961625#comment-16961625
 ] 

Leonard Xu commented on FLINK-14424:


[~wind_ljy] Yes, just like what tpch has done.

>  Create tpc-ds end to end test to support all tpc-ds queries
> 
>
> Key: FLINK-14424
> URL: https://issues.apache.org/jira/browse/FLINK-14424
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.10.0
>
>






[GitHub] [flink] zhuzhurk edited a comment on issue #9688: [FLINK-13056][runtime] Introduce FastRestartPipelinedRegionStrategy

2019-10-28 Thread GitBox
zhuzhurk edited a comment on issue #9688: [FLINK-13056][runtime] Introduce 
FastRestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9688#issuecomment-532262379
 
 
   > How do you configure the `FastRestartPipelinedRegionStrategy`?
   
   The PR is updated to enable configuring the new strategy.
   A new valid value "region-fast" is added for config 
"jobmanager.execution.failover-strategy". 




[jira] [Commented] (FLINK-14126) Elasticsearch Xpack Machine Learning doesn't support ARM

2019-10-28 Thread wangxiyuan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961620#comment-16961620
 ] 

wangxiyuan commented on FLINK-14126:


Error log:

org.elasticsearch.bootstrap.StartupException: ElasticsearchException[X-Pack is 
not supported and Machine Learning is not available for [linux-aarch64]; you 
can use the other X-Pack features (unsupported) by setting xpack.ml.enabled: 
false in elasticsearch.yml]
at 
org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:140) 
~[elasticsearch-6.3.1.jar:6.3.1]
at 
org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:127) 
~[elasticsearch-6.3.1.jar:6.3.1]
at 
org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
 ~[elasticsearch-6.3.1.jar:6.3.1]
at 
org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) 
~[elasticsearch-cli-6.3.1.jar:6.3.1]
at org.elasticsearch.cli.Command.main(Command.java:90) 
~[elasticsearch-cli-6.3.1.jar:6.3.1]
at 
org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) 
~[elasticsearch-6.3.1.jar:6.3.1]
at 
org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:86) 
~[elasticsearch-6.3.1.jar:6.3.1]


> Elasticsearch Xpack Machine Learning doesn't support ARM
> 
>
> Key: FLINK-14126
> URL: https://issues.apache.org/jira/browse/FLINK-14126
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.9.0
>Reporter: wangxiyuan
>Priority: Minor
> Fix For: 2.0.0
>
>
> The Elasticsearch X-Pack Machine Learning feature is enabled by default if the
> version is >= 6.0, but it doesn't support the ARM architecture, so Elasticsearch
> fails to start in some e2e tests.
> We should disable the ML feature in this case on ARM.





[GitHub] [flink] lirui-apache commented on a change in pull request #9995: [FLINK-14526][hive] Support Hive version 1.1.0 and 1.1.1

2019-10-28 Thread GitBox
lirui-apache commented on a change in pull request #9995: [FLINK-14526][hive] 
Support Hive version 1.1.0 and 1.1.1
URL: https://github.com/apache/flink/pull/9995#discussion_r339872502
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/client/HiveShim.java
 ##
 @@ -33,16 +36,18 @@
 import org.apache.hadoop.hive.metastore.api.UnknownDBException;
 import org.apache.hadoop.hive.ql.udf.generic.SimpleGenericUDAFParameterInfo;
 import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
+import org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo;
 import org.apache.thrift.TException;
 
 import java.io.IOException;
+import java.io.Serializable;
 import java.util.List;
 import java.util.Map;
 
 /**
  * A shim layer to support different versions of Hive.
  */
-public interface HiveShim {
+public interface HiveShim extends Serializable {
 
 Review comment:
   Tests pass before this PR only because we don't have enough test coverage to 
expose the issue. Before this PR, HiveShim is only used to get object 
inspectors for DATE and TIMESTAMP columns, while with the PR, HiveShim is 
needed for all columns. So we need the fix to pass the tests.




[jira] [Commented] (FLINK-14549) Bring more detail by using logicalType rather than conversionClass in exception msg

2019-10-28 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961612#comment-16961612
 ] 

Jark Wu commented on FLINK-14549:
-

+1 to do this. Do you want to do this? [~Leonard Xu]

> Bring more detail by using logicalType rather than conversionClass  in 
> exception msg
> 
>
> Key: FLINK-14549
> URL: https://issues.apache.org/jira/browse/FLINK-14549
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.1
>Reporter: Leonard Xu
>Priority: Minor
> Fix For: 1.10.0
>
>
> We use the DataType's conversionClass name when validating the query result's
> field types against the registered sink table schema, which is not precise when
> the DataType has type parameters, like DECIMAL(p, s) and TIMESTAMP(p).
> Exception in thread "main" org.apache.flink.table.api.ValidationException: Field types of query result and registered TableSink `default_catalog`.`default_database`.`q2_sinkTable` do not match.
> Query result schema: [d_week_seq1: Long, EXPR$1: BigDecimal, EXPR$2: BigDecimal, EXPR$3: BigDecimal]
> TableSink schema:    [d_week_seq1: Long, EXPR$1: BigDecimal, EXPR$2: BigDecimal, EXPR$3: BigDecimal]
> 	at org.apache.flink.table.planner.sinks.TableSinkUtils$.validateSink(TableSinkUtils.scala:68)
> 	at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:179)
> 	at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:178)
>  





[GitHub] [flink] lirui-apache commented on issue #9927: [FLINK-14397][hive] Failed to run Hive UDTF with array arguments

2019-10-28 Thread GitBox
lirui-apache commented on issue #9927: [FLINK-14397][hive] Failed to run Hive 
UDTF with array arguments
URL: https://github.com/apache/flink/pull/9927#issuecomment-547229352
 
 
   @bowenli86 Thanks for the review and merge :)




[jira] [Commented] (FLINK-14537) Improve influxdb reporter performance for kafka source/sink

2019-10-28 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961608#comment-16961608
 ] 

Yun Tang commented on FLINK-14537:
--

[~ouyangwuli] Why do you use the InfluxDB CLI rather than Grafana to query
metrics? More importantly, you did not actually describe your solution for
improving the performance of the InfluxDB reporter.

> Improve influxdb reporter performance for kafka source/sink
> ---
>
> Key: FLINK-14537
> URL: https://issues.apache.org/jira/browse/FLINK-14537
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Affects Versions: 1.9.1
>Reporter: ouyangwulin
>Priority: Minor
> Fix For: 1.10.0, 1.9.2, 1.11.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> In our production environment, most of our data sources are Kafka. The InfluxDB
> reporter uses the Kafka topic and partition to create InfluxDB measurements,
> which produces 13531 measurements in InfluxDB. This makes it hard to find the
> measurements we want when querying metric data, and it hurts performance.
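One common fix for this kind of cardinality blow-up, assuming nothing about the reporter's final design, is to put the Kafka topic and partition into InfluxDB tags on a single measurement rather than into the measurement name itself. A Python sketch of the two line-protocol shapes (the metric names are illustrative):

```python
def measurement_per_partition(topic, partition, lag):
    # Shape described in the issue: a distinct measurement per topic and
    # partition, so N topics x M partitions yields N*M measurements.
    return f"taskmanager_job_task_operator_{topic}_{partition} lag={lag}"

def single_measurement_with_tags(topic, partition, lag):
    # Alternative: one measurement; topic/partition become tags, which
    # InfluxDB indexes without multiplying the measurement count.
    return f"kafka_consumer,topic={topic},partition={partition} lag={lag}"

print(single_measurement_with_tags("orders", 3, 42))
# kafka_consumer,topic=orders,partition=3 lag=42
```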





[GitHub] [flink] TisonKun commented on issue #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-28 Thread GitBox
TisonKun commented on issue #9974: [FLINK-14501][FLINK-14502] Decouple 
ClusterDescriptor/ClusterSpecification from CommandLine
URL: https://github.com/apache/flink/pull/9974#issuecomment-547228392
 
 
   Thanks for your clarification. I see the concern.




[GitHub] [flink] TisonKun edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-28 Thread GitBox
TisonKun edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple 
ClusterDescriptor/ClusterSpecification from CommandLine
URL: https://github.com/apache/flink/pull/9974#issuecomment-547228392
 
 
   Thanks for your clarification. I see the concern.




[jira] [Updated] (FLINK-14374) Enable RegionFailoverITCase to pass with scheduler NG

2019-10-28 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-14374:

Description: 
RegionFailoverITCase currently fails with scheduler NG.
The failure cause is that it's using {{FailingRestartStrategy}} which is not 
supported in scheduler NG.

However, the usage of {{FailingRestartStrategy}} does not seem necessary. It
verifies a special case (see FLINK-13452) of the legacy scheduler which is
unlikely to happen in the future.

I'd propose to drop the usage of {{FailingRestartStrategy}} in
{{RegionFailoverITCase}} to make {{RegionFailoverITCase}} a simple test for
streaming jobs on region failover.

  was:
RegionFailoverITCase currently fails with scheduler NG.
The failure cause is that it's using {{FailingRestartStrategy}} which is not 
supported in scheduler NG.

However, the usage of  {{FailingRestartStrategy}} seems not to be necessary. 
It's for verifying a special case of legacy scheduler which are less likely to 
happen in the future. The issue is fixed in FLINK-13452.

I'd propose to drop to the usage of {{FailingRestartStrategy}} in 
{{RegionFailoverITCase}} to make {{RegionFailoverITCase}} a simple test for 
streaming job on region failover.


> Enable RegionFailoverITCase to pass with scheduler NG
> -
>
> Key: FLINK-14374
> URL: https://issues.apache.org/jira/browse/FLINK-14374
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Priority: Major
> Fix For: 1.10.0
>
>
> RegionFailoverITCase currently fails with scheduler NG.
> The failure cause is that it's using {{FailingRestartStrategy}} which is not 
> supported in scheduler NG.
> However, the usage of {{FailingRestartStrategy}} does not seem necessary. It
> verifies a special case (see FLINK-13452) of the legacy scheduler which is
> unlikely to happen in the future.
> I'd propose to drop the usage of {{FailingRestartStrategy}} in
> {{RegionFailoverITCase}} to make {{RegionFailoverITCase}} a simple test for
> streaming jobs on region failover.





[jira] [Issue Comment Deleted] (FLINK-14424) Create tpc-ds end to end test to support all tpc-ds queries

2019-10-28 Thread Jiayi Liao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Liao updated FLINK-14424:
---
Comment: was deleted

(was: A quick ask, are you going to create a new tpc-ds module and put it into 
flink-end-to-end-tests? Is this your plan?)

>  Create tpc-ds end to end test to support all tpc-ds queries
> 
>
> Key: FLINK-14424
> URL: https://issues.apache.org/jira/browse/FLINK-14424
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.10.0
>
>






[GitHub] [flink] TisonKun commented on issue #10010: [FLINK-10435][yarn]Client sporadically hangs after Ctrl + C

2019-10-28 Thread GitBox
TisonKun commented on issue #10010: [FLINK-10435][yarn]Client sporadically 
hangs after Ctrl + C
URL: https://github.com/apache/flink/pull/10010#issuecomment-547225702
 
 
   > > Thanks for opening this pull request @wangyang0918 ! Generally it looks 
good. I have one question:
   > > What if `YarnClient#start` or `YarnClient#stop` throws an exception? Do 
you take them into consideration?
   > 
   > @TisonKun Thanks for your comments.
   > 
   > The shutdown hook is a best-effort cleanup, and the YARN client only throws
   > `RuntimeException`. Also, when we try to start the YARN client in the
   > shutdown hook, another YARN client of `YarnClusterDescriptor` is already
   > running, so it should start and stop successfully.
   
   Makes sense. I have one more question above.
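The best-effort contract described in the quote can be sketched independently of YARN: wrap the client calls, swallow runtime failures, and never let them block process exit. The client class below is a stand-in, not Flink's `YarnClient`:

```python
import atexit

class FakeClient:
    # Stand-in for a YARN client; start/stop may throw at runtime.
    def start(self):
        pass

    def stop(self):
        raise RuntimeError("simulated stop failure")

def best_effort_cleanup(client):
    # Best effort: any runtime failure is logged and swallowed so the
    # shutdown hook itself can never crash or hang the process exit.
    try:
        client.start()
        # ... kill the deployed application here ...
        client.stop()
        return True
    except RuntimeError as e:
        print(f"Cleanup failed, ignoring: {e}")
        return False

client = FakeClient()
atexit.register(best_effort_cleanup, client)  # registered like a JVM shutdown hook
print(best_effort_cleanup(client))  # prints False: the stop failure was swallowed
```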




[GitHub] [flink] TisonKun commented on a change in pull request #10010: [FLINK-10435][yarn]Client sporadically hangs after Ctrl + C

2019-10-28 Thread GitBox
TisonKun commented on a change in pull request #10010: 
[FLINK-10435][yarn]Client sporadically hangs after Ctrl + C
URL: https://github.com/apache/flink/pull/10010#discussion_r339868479
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
 ##
 @@ -1241,7 +1241,6 @@ private void failSessionDuringDeployment(YarnClient 
yarnClient, YarnClientApplic
// call (we don't know if the application has been 
deployed when the error occured).
LOG.debug("Error while killing YARN application", e);
}
-   yarnClient.stop();
 
 Review comment:
   I can see other callers of `failSessionDuringDeployment`, such as L525 and
L511. Why do you remove this line? IIUC, we now don't stop the yarnClient in
these two cases.




[jira] [Commented] (FLINK-14537) Improve influxdb reporter performance for kafka source/sink

2019-10-28 Thread ouyangwulin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961599#comment-16961599
 ] 

ouyangwulin commented on FLINK-14537:
-

[~wind_ljy] I always use the InfluxDB CLI, with `show measurements`, to look up
measurements. This command now takes 2 minutes, and it's difficult to find the
measurement I want.

[~yunta] Some Kafka metrics need to be monitored, like 'records_lag_max', etc.,
and I think https://issues.apache.org/jira/browse/FLINK-13418 is necessary.

> Improve influxdb reporter performance for kafka source/sink
> ---
>
> Key: FLINK-14537
> URL: https://issues.apache.org/jira/browse/FLINK-14537
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Affects Versions: 1.9.1
>Reporter: ouyangwulin
>Priority: Minor
> Fix For: 1.10.0, 1.9.2, 1.11.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> In our production environment, our data sources are mostly Kafka sources, and 
> the InfluxDB reporter uses the Kafka topic and partition to create InfluxDB 
> measurements. This results in 13531 measurements in InfluxDB, so it is hard 
> to find the measurements whose metric data we want, and it hurts performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-12668) Introduce fromParallelElements for generating DataStreamSource

2019-10-28 Thread Jiayi Liao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Liao closed FLINK-12668.
--
Fix Version/s: (was: 1.10.0)
   Resolution: Invalid

> Introduce fromParallelElements for generating DataStreamSource
> --
>
> Key: FLINK-12668
> URL: https://issues.apache.org/jira/browse/FLINK-12668
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.8.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>
> We already have a fromElements function in StreamExecutionEnvironment to 
> generate a non-parallel DataStreamSource. We should introduce a similar 
> fromParallelElements function because:
> 1. The current implementations of ParallelSourceFunction are mostly bound to 
> external resources such as Kafka sources, and we need a more lightweight parallel 
> source function that can be created easily. The SplittableIterator is too 
> heavyweight, by the way.
> 2. It is very useful when we want to verify or test something in a parallel 
> processing environment.
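A hypothetical fromParallelElements could split a fixed element list across 
subtasks without touching any external resource. The core splitting logic can 
be sketched in plain Java — the method name here is illustrative, not the 
proposed API:

```java
import java.util.ArrayList;
import java.util.List;

public class ParallelElementsSketch {
    // Returns the slice of `elements` that subtask `subtaskIndex` (out of
    // `parallelism` subtasks) would emit under a round-robin assignment; this
    // is the kind of splitting a fromParallelElements source could use.
    static <T> List<T> elementsForSubtask(List<T> elements, int subtaskIndex, int parallelism) {
        List<T> slice = new ArrayList<>();
        for (int i = subtaskIndex; i < elements.size(); i += parallelism) {
            slice.add(elements.get(i));
        }
        return slice;
    }
}
```

Each subtask would only iterate its own slice, so no coordination or external 
storage is needed.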





[jira] [Closed] (FLINK-13854) Support Aggregating in Join and CoGroup

2019-10-28 Thread Jiayi Liao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Liao closed FLINK-13854.
--
Resolution: Invalid

> Support Aggregating in Join and CoGroup
> ---
>
> Key: FLINK-13854
> URL: https://issues.apache.org/jira/browse/FLINK-13854
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Affects Versions: 1.9.0
>Reporter: Jiayi Liao
>Priority: Major
>
> In WindowedStream we can use windowStream.aggregate(AggregateFunction, 
> WindowFunction) to aggregate input records in real time.
> I think we should support a similar API in JoinedStreams and CoGroupedStreams, 
> because storing the full record log in the state backend is very costly, 
> especially when we don't need the individual records.
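The state-size argument behind this request can be illustrated in plain Java: 
an AggregateFunction-style fold keeps one accumulator per key, while the 
buffering approach retains the whole record log. This is a sketch of the 
trade-off, not Flink's actual join implementation:

```java
import java.util.List;

public class IncrementalAggSketch {
    // Incremental path: the state is a single accumulator, no matter how many
    // records arrive -- this is what an AggregateFunction enables.
    static long sumIncrementally(List<Long> records) {
        long accumulator = 0L; // O(1) state
        for (long record : records) {
            accumulator += record; // fold each record in as it arrives
        }
        return accumulator;
    }

    // Buffering path: every record must be retained until the window fires,
    // i.e. O(n) state in the backend -- the cost the issue describes.
    static long sumFromBuffer(List<Long> bufferedLog) {
        long total = 0L;
        for (long record : bufferedLog) {
            total += record;
        }
        return total;
    }
}
```

Both paths produce the same result, but the incremental one never needs the 
individual records.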





[GitHub] [flink] flinkbot edited a comment on issue #10021: [hotfix][typo] Rename withNewBucketAssignerAnd[Rolling]Policy

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #10021: [hotfix][typo] Rename 
withNewBucketAssignerAnd[Rolling]Policy
URL: https://github.com/apache/flink/pull/10021#issuecomment-547177459
 
 
   
   ## CI report:
   
   * d345162a7060b40731d4210eb17b0062be4a69ea : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133914289)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple 
ClusterDescriptor/ClusterSpecification from CommandLine
URL: https://github.com/apache/flink/pull/9974#issuecomment-545348099
 
 
   
   ## CI report:
   
   * 07d3f11426dbdcc4fd961641da0a573914926aca : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133151822)
   * 01501b0226a44884eb321a7fe9c2647d50c7dff6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133792232)
   * 2eba494890f7aa5056ce6ddf8ddcd804e4ca07e3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133892001)
   * a311ce5f48cc3aa438fc87f2561c2d25bca73cc2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133916747)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Closed] (FLINK-14397) Failed to run Hive UDTF with array arguments

2019-10-28 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-14397.

Fix Version/s: 1.10.0
   Resolution: Fixed

master: e18320b76047af4e15297e3e89b6c46ef3dae9bf

> Failed to run Hive UDTF with array arguments
> 
>
> Key: FLINK-14397
> URL: https://issues.apache.org/jira/browse/FLINK-14397
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Tried to call 
> {{org.apache.hadoop.hive.contrib.udtf.example.GenericUDTFExplode2}} (in 
> hive-contrib) with query:  "{{select x,y from foo, lateral 
> table(hiveudtf(arr)) as T(x,y)}}". Failed with exception:
> {noformat}
> java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to 
> [Ljava.lang.Integer;
> {noformat}
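The reported cast failure is reproducible in plain Java: an array whose runtime 
type is `Object[]` can never be cast to `Integer[]`, even when every element is 
an Integer — the elements have to be copied into a typed array instead. A small 
demo (not the actual Hive connector fix):

```java
import java.util.Arrays;

public class ArrayCastDemo {
    // Reproduces the reported failure: casting an Object[] instance to
    // Integer[] throws ClassCastException at runtime.
    static boolean castFails(Object[] values) {
        try {
            Integer[] typed = (Integer[]) values; // throws for a genuine Object[]
            return typed == null;                 // unreachable in that case
        } catch (ClassCastException e) {
            return true;
        }
    }

    // One possible remedy: copy the elements into an array whose runtime type
    // really is Integer[] instead of casting.
    static Integer[] copyToTyped(Object[] values) {
        return Arrays.copyOf(values, values.length, Integer[].class);
    }
}
```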





[GitHub] [flink] bowenli86 commented on issue #9927: [FLINK-14397][hive] Failed to run Hive UDTF with array arguments

2019-10-28 Thread GitBox
bowenli86 commented on issue #9927: [FLINK-14397][hive] Failed to run Hive UDTF 
with array arguments
URL: https://github.com/apache/flink/pull/9927#issuecomment-547192410
 
 
   @lirui-apache thanks for reminding me. merged now




[jira] [Updated] (FLINK-14397) Failed to run Hive UDTF with array arguments

2019-10-28 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-14397:
-
Affects Version/s: 1.10.0
   1.9.0

> Failed to run Hive UDTF with array arguments
> 
>
> Key: FLINK-14397
> URL: https://issues.apache.org/jira/browse/FLINK-14397
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Tried to call 
> {{org.apache.hadoop.hive.contrib.udtf.example.GenericUDTFExplode2}} (in 
> hive-contrib) with query:  "{{select x,y from foo, lateral 
> table(hiveudtf(arr)) as T(x,y)}}". Failed with exception:
> {noformat}
> java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to 
> [Ljava.lang.Integer;
> {noformat}





[GitHub] [flink] asfgit closed pull request #9927: [FLINK-14397][hive] Failed to run Hive UDTF with array arguments

2019-10-28 Thread GitBox
asfgit closed pull request #9927: [FLINK-14397][hive] Failed to run Hive UDTF 
with array arguments
URL: https://github.com/apache/flink/pull/9927
 
 
   




[GitHub] [flink] bowenli86 commented on a change in pull request #9962: [FLINK-14218][table] support precise function reference in FunctionCatalog

2019-10-28 Thread GitBox
bowenli86 commented on a change in pull request #9962: [FLINK-14218][table] 
support precise function reference in FunctionCatalog
URL: https://github.com/apache/flink/pull/9962#discussion_r339838103
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/FunctionIdentifier.java
 ##
 @@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.functions;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.util.StringUtils;
+
+import java.io.Serializable;
+import java.util.Objects;
+import java.util.Optional;
+
+import static org.apache.flink.table.utils.EncodingUtils.escapeIdentifier;
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Identifies a temporary function with function name or a catalog function 
with a fully qualified identifier.
 
 Review comment:
   good catch. typo, should be
   
   ```suggestion
* Identifies a system function with function name or a catalog function 
with a fully qualified identifier.
   ```




[GitHub] [flink] flinkbot edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple 
ClusterDescriptor/ClusterSpecification from CommandLine
URL: https://github.com/apache/flink/pull/9974#issuecomment-545348099
 
 
   
   ## CI report:
   
   * 07d3f11426dbdcc4fd961641da0a573914926aca : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133151822)
   * 01501b0226a44884eb321a7fe9c2647d50c7dff6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133792232)
   * 2eba494890f7aa5056ce6ddf8ddcd804e4ca07e3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133892001)
   * a311ce5f48cc3aa438fc87f2561c2d25bca73cc2 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10021: [hotfix][typo] Rename withNewBucketAssignerAnd[Rolling]Policy

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #10021: [hotfix][typo] Rename 
withNewBucketAssignerAnd[Rolling]Policy
URL: https://github.com/apache/flink/pull/10021#issuecomment-547177459
 
 
   
   ## CI report:
   
   * d345162a7060b40731d4210eb17b0062be4a69ea : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133914289)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] kl0u commented on issue #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-28 Thread GitBox
kl0u commented on issue #9974: [FLINK-14501][FLINK-14502] Decouple 
ClusterDescriptor/ClusterSpecification from CommandLine
URL: https://github.com/apache/flink/pull/9974#issuecomment-547181779
 
 
   Thanks for the review @TisonKun ! 
   
   I integrated your comments, apart from the one about the change to the 
`DefaultClusterClientServiceLoader`. The reason is that a (wrapped) exception 
can be thrown in the iterator's `.next()`, which we need to catch while still 
continuing the iteration. To see this "in action", feel free to compile (with 
and without your change) and run the following end-to-end test:
   
   ```
   FLINK_DIR=build-target flink-end-to-end-tests/run-single-test.sh 
"flink-end-to-end-tests/test-scripts/test_state_migration.sh"
   ```
   
   I will merge as soon as Travis gives the green light.
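   The behaviour described above — `.next()` throwing a wrapped error that must 
be caught without aborting the scan — is a standard `java.util.ServiceLoader` 
pattern. A hedged stdlib sketch, where `ClientFactory` is an illustrative 
service type rather than Flink's actual one:

   ```java
   import java.util.ArrayList;
   import java.util.Iterator;
   import java.util.List;
   import java.util.ServiceConfigurationError;
   import java.util.ServiceLoader;

   public class ServiceScanSketch {
       // Illustrative service type with no registered providers; the loop
       // below therefore completes without yielding anything.
       interface ClientFactory {}

       // Collects every provider that loads cleanly, skipping any whose
       // instantiation throws instead of letting one failure abort the scan.
       static List<ClientFactory> loadAll() {
           List<ClientFactory> found = new ArrayList<>();
           Iterator<ClientFactory> it = ServiceLoader.load(ClientFactory.class).iterator();
           while (it.hasNext()) {
               try {
                   found.add(it.next()); // may throw a (wrapped) ServiceConfigurationError
               } catch (ServiceConfigurationError e) {
                   // skip this provider and keep iterating over the remaining ones
               }
           }
           return found;
       }
   }
   ```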




[GitHub] [flink] flinkbot commented on issue #10021: [hotfix][typo] Rename withNewBucketAssignerAnd[Rolling]Policy

2019-10-28 Thread GitBox
flinkbot commented on issue #10021: [hotfix][typo] Rename 
withNewBucketAssignerAnd[Rolling]Policy
URL: https://github.com/apache/flink/pull/10021#issuecomment-547177459
 
 
   
   ## CI report:
   
   * d345162a7060b40731d4210eb17b0062be4a69ea : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10019: [hotfix][docs] REST label in SSL figure was rotated out of view

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #10019: [hotfix][docs] REST label in SSL 
figure was rotated out of view
URL: https://github.com/apache/flink/pull/10019#issuecomment-546974174
 
 
   
   ## CI report:
   
   * b45e285f780c93fee41118c45bdad54812909a99 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133832770)
   * 40559852f42707557b7a834d31a593bf58d2faec : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133904396)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot commented on issue #10021: [hotfix][typo] Rename withNewBucketAssignerAnd[Rolling]Policy

2019-10-28 Thread GitBox
flinkbot commented on issue #10021: [hotfix][typo] Rename 
withNewBucketAssignerAnd[Rolling]Policy
URL: https://github.com/apache/flink/pull/10021#issuecomment-547174745
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit d345162a7060b40731d4210eb17b0062be4a69ea (Mon Oct 28 
22:34:58 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] nryanov opened a new pull request #10021: [hotfix][typo] Rename withNewBucketAssignerAnd[Rolling]Policy

2019-10-28 Thread GitBox
nryanov opened a new pull request #10021: [hotfix][typo] Rename 
withNewBucketAssignerAnd[Rolling]Policy
URL: https://github.com/apache/flink/pull/10021
 
 
   ## What is the purpose of the change
   There are two existing methods in StreamingFileSink, withRollingPolicy and 
withBucketAssigner, which can be used separately. There is also a method 
withNewBucketAssignerAndPolicy, which can be used as a replacement for the two 
methods mentioned above.
   It seems that this method should be named 
withNewBucketAssignerAndRollingPolicy.
   
   ## Brief change log
   Rename method withNewBucketAssignerAndPolicy to 
withNewBucketAssignerAnd**Rolling**Policy
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes (?)
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no




[GitHub] [flink] flinkbot edited a comment on issue #9960: [FLINK-14476] Extend PartitionTracker to support promotions

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9960: [FLINK-14476] Extend PartitionTracker 
to support promotions
URL: https://github.com/apache/flink/pull/9960#issuecomment-544551169
 
 
   
   ## CI report:
   
   * 2cf1a8e029da523cdf2df9504f4d6f44da31ceb1 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132835876)
   * 35cbb21bf39d6df7bd83b87877df73c2509127e4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133714533)
   * 06642ae1fc6e90eb5e8025bded5f1f0bf076ff8c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133717161)
   * 915b728ac3509bac047e82fb778b9a8a587d7250 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133900367)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10019: [hotfix][docs] REST label in SSL figure was rotated out of view

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #10019: [hotfix][docs] REST label in SSL 
figure was rotated out of view
URL: https://github.com/apache/flink/pull/10019#issuecomment-546974174
 
 
   
   ## CI report:
   
   * b45e285f780c93fee41118c45bdad54812909a99 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133832770)
   * 40559852f42707557b7a834d31a593bf58d2faec : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133904396)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple 
ClusterDescriptor/ClusterSpecification from CommandLine
URL: https://github.com/apache/flink/pull/9974#issuecomment-545348099
 
 
   
   ## CI report:
   
   * 07d3f11426dbdcc4fd961641da0a573914926aca : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133151822)
   * 01501b0226a44884eb321a7fe9c2647d50c7dff6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133792232)
   * 2eba494890f7aa5056ce6ddf8ddcd804e4ca07e3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133892001)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9960: [FLINK-14476] Extend PartitionTracker to support promotions

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9960: [FLINK-14476] Extend PartitionTracker 
to support promotions
URL: https://github.com/apache/flink/pull/9960#issuecomment-544551169
 
 
   
   ## CI report:
   
   * 2cf1a8e029da523cdf2df9504f4d6f44da31ceb1 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132835876)
   * 35cbb21bf39d6df7bd83b87877df73c2509127e4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133714533)
   * 06642ae1fc6e90eb5e8025bded5f1f0bf076ff8c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133717161)
   * 915b728ac3509bac047e82fb778b9a8a587d7250 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133900367)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10019: [hotfix][docs] REST label in SSL figure was rotated out of view

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #10019: [hotfix][docs] REST label in SSL 
figure was rotated out of view
URL: https://github.com/apache/flink/pull/10019#issuecomment-546974174
 
 
   
   ## CI report:
   
   * b45e285f780c93fee41118c45bdad54812909a99 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133832770)
   * 40559852f42707557b7a834d31a593bf58d2faec : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9960: [FLINK-14476] Extend PartitionTracker to support promotions

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9960: [FLINK-14476] Extend PartitionTracker 
to support promotions
URL: https://github.com/apache/flink/pull/9960#issuecomment-544551169
 
 
   
   ## CI report:
   
   * 2cf1a8e029da523cdf2df9504f4d6f44da31ceb1 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132835876)
   * 35cbb21bf39d6df7bd83b87877df73c2509127e4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133714533)
   * 06642ae1fc6e90eb5e8025bded5f1f0bf076ff8c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133717161)
   * 915b728ac3509bac047e82fb778b9a8a587d7250 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] alpinegizmo commented on a change in pull request #10019: [hotfix][docs] REST label in SSL figure was rotated out of view

2019-10-28 Thread GitBox
alpinegizmo commented on a change in pull request #10019: [hotfix][docs] REST 
label in SSL figure was rotated out of view
URL: https://github.com/apache/flink/pull/10019#discussion_r339782523
 
 

 ##
 File path: docs/fig/ssl_internal_external.svg
 ##
 @@ -1,22 +1,4 @@
 

[GitHub] [flink] zentol commented on a change in pull request #10019: [hotfix][docs] REST label in SSL figure was rotated out of view

2019-10-28 Thread GitBox
zentol commented on a change in pull request #10019: [hotfix][docs] REST label 
in SSL figure was rotated out of view
URL: https://github.com/apache/flink/pull/10019#discussion_r339778653
 
 

 ##
 File path: docs/fig/ssl_internal_external.svg
 ##
 @@ -1,22 +1,4 @@
 

[GitHub] [flink] flinkbot edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple 
ClusterDescriptor/ClusterSpecification from CommandLine
URL: https://github.com/apache/flink/pull/9974#issuecomment-545348099
 
 
   
   ## CI report:
   
   * 07d3f11426dbdcc4fd961641da0a573914926aca : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133151822)
   * 01501b0226a44884eb321a7fe9c2647d50c7dff6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133792232)
   * 2eba494890f7aa5056ce6ddf8ddcd804e4ca07e3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133892001)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9974: [FLINK-14501][FLINK-14502] Decouple 
ClusterDescriptor/ClusterSpecification from CommandLine
URL: https://github.com/apache/flink/pull/9974#issuecomment-545348099
 
 
   
   ## CI report:
   
   * 07d3f11426dbdcc4fd961641da0a573914926aca : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133151822)
   * 01501b0226a44884eb321a7fe9c2647d50c7dff6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133792232)
   * 2eba494890f7aa5056ce6ddf8ddcd804e4ca07e3 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9920: [FLINK-14389][runtime] Restore task state before restarting tasks in DefaultScheduler

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9920: [FLINK-14389][runtime] Restore task 
state before restarting tasks in DefaultScheduler
URL: https://github.com/apache/flink/pull/9920#issuecomment-542997479
 
 
   
   ## CI report:
   
   * c7e058ec893217ef5c8c3cf3ae5e893a7ccf4b8f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132271237)
   * 234c7e1d480adda4a90e3d7e5bd04ee58ba33152 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133366849)
   * 9c2f710538a4773acbdc768bd9a10976f010cac6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133647868)
   * 49342d06b8d52c67ac9f1761c13bf12d26dbce90 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133649345)
   * b28fb132bd418c0c36756c6f53f4d59683ab7831 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133721701)
   * 080699d17f74d291b97e10b18f41d95f67844aad : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133821315)
   * e2942f200186291e37ee4a41b8234591ba2ea476 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133869955)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9920: [FLINK-14389][runtime] Restore task state before restarting tasks in DefaultScheduler

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #9920: [FLINK-14389][runtime] Restore task 
state before restarting tasks in DefaultScheduler
URL: https://github.com/apache/flink/pull/9920#issuecomment-542997479
 
 
   
   ## CI report:
   
   * c7e058ec893217ef5c8c3cf3ae5e893a7ccf4b8f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132271237)
   * 234c7e1d480adda4a90e3d7e5bd04ee58ba33152 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133366849)
   * 9c2f710538a4773acbdc768bd9a10976f010cac6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133647868)
   * 49342d06b8d52c67ac9f1761c13bf12d26dbce90 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133649345)
   * b28fb132bd418c0c36756c6f53f4d59683ab7831 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133721701)
   * 080699d17f74d291b97e10b18f41d95f67844aad : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133821315)
   * e2942f200186291e37ee4a41b8234591ba2ea476 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133869955)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] xuefuz commented on a change in pull request #9962: [FLINK-14218][table] support precise function reference in FunctionCatalog

2019-10-28 Thread GitBox
xuefuz commented on a change in pull request #9962: [FLINK-14218][table] 
support precise function reference in FunctionCatalog
URL: https://github.com/apache/flink/pull/9962#discussion_r339711107
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/FunctionIdentifier.java
 ##
 @@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.functions;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.util.StringUtils;
+
+import java.io.Serializable;
+import java.util.Objects;
+import java.util.Optional;
+
+import static org.apache.flink.table.utils.EncodingUtils.escapeIdentifier;
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Identifies a temporary or catalog function. An identifier must be fully 
qualified.
+ * Function catalog is responsible for resolving an identifier to a function.
+ */
+@PublicEvolving
+public final class FunctionIdentifier implements Serializable {
+
+   private final ObjectIdentifier objectIdentifier;
+
+   private final String functionName;
+
+   public static FunctionIdentifier of(ObjectIdentifier oi){
+   return new FunctionIdentifier(oi);
+   }
+
+   public static FunctionIdentifier of(String functionName){
+   return new FunctionIdentifier(functionName);
+   }
+
+   private FunctionIdentifier(ObjectIdentifier objectIdentifier){
+   this.objectIdentifier = checkNotNull(objectIdentifier, "Object 
identifier cannot be null");
+   functionName = null;
+   }
+
+   private FunctionIdentifier(String functionName){
+   checkArgument(!StringUtils.isNullOrWhitespaceOnly(functionName),
+   "function name cannot be null or empty string");
+   this.functionName = functionName;
+   this.objectIdentifier = null;
+   }
+
+   public Optional<ObjectIdentifier> getIdentifier(){
+   return Optional.ofNullable(objectIdentifier);
+   }
+
+   public Optional<String> getSimpleName(){
+   return Optional.ofNullable(functionName);
+   }
+
+   /**
+* Returns a string that fully serializes this instance. The serialized 
string can be used for
+* transmitting or persisting an object identifier.
+*/
+   public String asSerializableString() {
 
 Review comment:
   Ok.
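For context on the review thread, here is a minimal standalone sketch of the either/or pattern the diff above introduces: exactly one of the two fields is non-null, and callers probe both accessors via `Optional`. The class and method names below are illustrative stand-ins, not the actual Flink types.

```java
import java.util.Optional;

// Sketch of the FunctionIdentifier pattern: one field holds a fully
// qualified identifier, the other a simple name; only one is ever set.
final class EitherIdentifier {
    private final String objectIdentifier; // e.g. "catalog.db.func"
    private final String functionName;     // e.g. "func"

    private EitherIdentifier(String objectIdentifier, String functionName) {
        this.objectIdentifier = objectIdentifier;
        this.functionName = functionName;
    }

    static EitherIdentifier ofQualified(String oi) {
        return new EitherIdentifier(oi, null);
    }

    static EitherIdentifier ofSimple(String name) {
        return new EitherIdentifier(null, name);
    }

    Optional<String> getIdentifier() {
        return Optional.ofNullable(objectIdentifier);
    }

    Optional<String> getSimpleName() {
        return Optional.ofNullable(functionName);
    }
}

public class EitherIdentifierDemo {
    public static void main(String[] args) {
        EitherIdentifier precise = EitherIdentifier.ofQualified("cat.db.f");
        EitherIdentifier simple = EitherIdentifier.ofSimple("f");
        System.out.println(precise.getIdentifier().isPresent()); // true
        System.out.println(simple.getIdentifier().isPresent());  // false
    }
}
```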




[GitHub] [flink] xuefuz commented on a change in pull request #9995: [FLINK-14526][hive] Support Hive version 1.1.0 and 1.1.1

2019-10-28 Thread GitBox
xuefuz commented on a change in pull request #9995: [FLINK-14526][hive] Support 
Hive version 1.1.0 and 1.1.1
URL: https://github.com/apache/flink/pull/9995#discussion_r339701308
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/client/HiveShim.java
 ##
 @@ -33,16 +36,18 @@
 import org.apache.hadoop.hive.metastore.api.UnknownDBException;
 import org.apache.hadoop.hive.ql.udf.generic.SimpleGenericUDAFParameterInfo;
 import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
+import org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo;
 import org.apache.thrift.TException;
 
 import java.io.IOException;
+import java.io.Serializable;
 import java.util.List;
 import java.util.Map;
 
 /**
  * A shim layer to support different versions of Hive.
  */
-public interface HiveShim {
+public interface HiveShim extends Serializable {
 
 Review comment:
   Does the test fail before this PR, since HiveShim was not serializable before? If we need this fix to pass the test, we can put it in the same PR to reduce the overhead.




[GitHub] [flink] zhuzhurk commented on a change in pull request #9920: [FLINK-14389][runtime] Restore task state before restarting tasks in DefaultScheduler

2019-10-28 Thread GitBox
zhuzhurk commented on a change in pull request #9920: [FLINK-14389][runtime] 
Restore task state before restarting tasks in DefaultScheduler
URL: https://github.com/apache/flink/pull/9920#discussion_r339683817
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultScheduler.java
 ##
 @@ -212,6 +212,13 @@ private Runnable restartTasks(final 
Set executionVertexV
 

resetForNewExecutionIfInTerminalState(verticesToRestart);
 
+   try {
+   restoreState(verticesToRestart);
 
 Review comment:
   done.
   I'd like to keep the restore block directly in `DefaultScheduler#restartTasks` so that we can return early and skip `schedulingStrategy.restartTasks` if an error happens during restore. Otherwise we would need to change `resetForNewExecutions` to have a return value.
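The early-return flow argued for above can be sketched in isolation (hypothetical names; this is not the actual DefaultScheduler code):

```java
// Sketch: try to restore state first, and return early so the scheduling
// strategy is never asked to restart tasks when the restore fails.
public class RestartFlowDemo {

    static void restoreState(boolean fail) throws Exception {
        if (fail) {
            throw new Exception("state restore failed");
        }
    }

    /** Returns which action was taken, mirroring the early-return pattern. */
    static String restartTasks(boolean failRestore) {
        try {
            restoreState(failRestore);
        } catch (Exception e) {
            // handleGlobalFailure(e) would go here in the real scheduler.
            return "globalFailure";
        }
        // schedulingStrategy.restartTasks(...) only runs if restore succeeded.
        return "restarted";
    }

    public static void main(String[] args) {
        System.out.println(restartTasks(false)); // restarted
        System.out.println(restartTasks(true));  // globalFailure
    }
}
```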




[GitHub] [flink] zhuzhurk commented on a change in pull request #9920: [FLINK-14389][runtime] Restore task state before restarting tasks in DefaultScheduler

2019-10-28 Thread GitBox
zhuzhurk commented on a change in pull request #9920: [FLINK-14389][runtime] 
Restore task state before restarting tasks in DefaultScheduler
URL: https://github.com/apache/flink/pull/9920#discussion_r339682440
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/DefaultSchedulerTest.java
 ##
 @@ -388,6 +394,127 @@ public void vertexIsNotAffectedByOutdatedDeployment() {
assertThat(sv1.getState(), 
is(equalTo(ExecutionState.SCHEDULED)));
}
 
+   @Test
+   public void abortPendingCheckpointsWhenRestartingTasks() {
+   final JobGraph jobGraph = singleNonParallelJobVertexJobGraph();
+   enableCheckpointing(jobGraph);
+
+   final DefaultScheduler scheduler = 
createSchedulerAndStartScheduling(jobGraph);
+
+   final ArchivedExecutionVertex onlyExecutionVertex = 
Iterables.getOnlyElement(scheduler.requestJob().getAllExecutionVertices());
+   final ExecutionAttemptID attemptId = 
onlyExecutionVertex.getCurrentExecutionAttempt().getAttemptId();
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.RUNNING));
+
+   final CheckpointCoordinator checkpointCoordinator = 
getCheckpointCoordinator(scheduler);
+
+   
checkpointCoordinator.triggerCheckpoint(System.currentTimeMillis(),  false);
+   
assertThat(checkpointCoordinator.getNumberOfPendingCheckpoints(), 
is(equalTo(1)));
+
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.FAILED));
+   taskRestartExecutor.triggerScheduledTasks();
+   
assertThat(checkpointCoordinator.getNumberOfPendingCheckpoints(), 
is(equalTo(0)));
+   }
+
+   @Test
+   public void restoreStateWhenRestartingTasks() throws Exception {
+   final JobGraph jobGraph = singleNonParallelJobVertexJobGraph();
+   enableCheckpointing(jobGraph);
+
+   final DefaultScheduler scheduler = 
createSchedulerAndStartScheduling(jobGraph);
+
+   final ArchivedExecutionVertex onlyExecutionVertex = 
Iterables.getOnlyElement(scheduler.requestJob().getAllExecutionVertices());
+   final ExecutionAttemptID attemptId = 
onlyExecutionVertex.getCurrentExecutionAttempt().getAttemptId();
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.RUNNING));
+
+   final CheckpointCoordinator checkpointCoordinator = 
getCheckpointCoordinator(scheduler);
+
+   // register a stateful master hook to help verify state restore
+   final TestMasterHook masterHook = new 
TestMasterHook("testHook");
+   checkpointCoordinator.addMasterHook(masterHook);
+
+   // complete one checkpoint for state restore
+   
checkpointCoordinator.triggerCheckpoint(System.currentTimeMillis(),  false);
+   acknowledgeCheckpoint(checkpointCoordinator, 
jobGraph.getJobID(), attemptId);
+
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.FAILED));
+   taskRestartExecutor.triggerScheduledTasks();
+   assertThat(masterHook.getRestoreCount(), is(equalTo(1)));
+   }
+
+   @Test
+   public void failGlobalWhenRestoringStateFails() throws Exception {
+   final JobGraph jobGraph = singleNonParallelJobVertexJobGraph();
+   final JobVertex onlyJobVertex = getOnlyJobVertex(jobGraph);
+   enableCheckpointing(jobGraph);
+
+   final DefaultScheduler scheduler = 
createSchedulerAndStartScheduling(jobGraph);
+
+   final ArchivedExecutionVertex onlyExecutionVertex = 
Iterables.getOnlyElement(scheduler.requestJob().getAllExecutionVertices());
+   final ExecutionAttemptID attemptId = 
onlyExecutionVertex.getCurrentExecutionAttempt().getAttemptId();
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.RUNNING));
+
+   final CheckpointCoordinator checkpointCoordinator = 
getCheckpointCoordinator(scheduler);
+
+   // register a master hook to fail state restore
+   final TestMasterHook masterHook = new 
TestMasterHook("testHook");
+   masterHook.enableFailOnRestore();
+   checkpointCoordinator.addMasterHook(masterHook);
+
+   // complete one checkpoint for state restore
+   
checkpointCoordinator.triggerCheckpoint(System.currentTimeMillis(),  false);
+   acknowledgeCheckpoint(checkpointCoordinator, 
jobGraph.getJobID(), attemptId);
+
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.FAILED));
+   

[GitHub] [flink] zhuzhurk commented on a change in pull request #9920: [FLINK-14389][runtime] Restore task state before restarting tasks in DefaultScheduler

2019-10-28 Thread GitBox
zhuzhurk commented on a change in pull request #9920: [FLINK-14389][runtime] 
Restore task state before restarting tasks in DefaultScheduler
URL: https://github.com/apache/flink/pull/9920#discussion_r339682365
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/DefaultSchedulerTest.java
 ##
 @@ -388,6 +394,127 @@ public void vertexIsNotAffectedByOutdatedDeployment() {
assertThat(sv1.getState(), 
is(equalTo(ExecutionState.SCHEDULED)));
}
 
+   @Test
+   public void abortPendingCheckpointsWhenRestartingTasks() {
+   final JobGraph jobGraph = singleNonParallelJobVertexJobGraph();
+   enableCheckpointing(jobGraph);
+
+   final DefaultScheduler scheduler = 
createSchedulerAndStartScheduling(jobGraph);
+
+   final ArchivedExecutionVertex onlyExecutionVertex = 
Iterables.getOnlyElement(scheduler.requestJob().getAllExecutionVertices());
+   final ExecutionAttemptID attemptId = 
onlyExecutionVertex.getCurrentExecutionAttempt().getAttemptId();
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.RUNNING));
+
+   final CheckpointCoordinator checkpointCoordinator = 
getCheckpointCoordinator(scheduler);
+
+   
checkpointCoordinator.triggerCheckpoint(System.currentTimeMillis(),  false);
+   
assertThat(checkpointCoordinator.getNumberOfPendingCheckpoints(), 
is(equalTo(1)));
+
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.FAILED));
+   taskRestartExecutor.triggerScheduledTasks();
+   
assertThat(checkpointCoordinator.getNumberOfPendingCheckpoints(), 
is(equalTo(0)));
+   }
+
+   @Test
+   public void restoreStateWhenRestartingTasks() throws Exception {
+   final JobGraph jobGraph = singleNonParallelJobVertexJobGraph();
+   enableCheckpointing(jobGraph);
+
+   final DefaultScheduler scheduler = 
createSchedulerAndStartScheduling(jobGraph);
+
+   final ArchivedExecutionVertex onlyExecutionVertex = 
Iterables.getOnlyElement(scheduler.requestJob().getAllExecutionVertices());
+   final ExecutionAttemptID attemptId = 
onlyExecutionVertex.getCurrentExecutionAttempt().getAttemptId();
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.RUNNING));
+
+   final CheckpointCoordinator checkpointCoordinator = 
getCheckpointCoordinator(scheduler);
+
+   // register a stateful master hook to help verify state restore
+   final TestMasterHook masterHook = new 
TestMasterHook("testHook");
+   checkpointCoordinator.addMasterHook(masterHook);
+
+   // complete one checkpoint for state restore
+   
checkpointCoordinator.triggerCheckpoint(System.currentTimeMillis(),  false);
+   acknowledgeCheckpoint(checkpointCoordinator, 
jobGraph.getJobID(), attemptId);
+
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.FAILED));
+   taskRestartExecutor.triggerScheduledTasks();
+   assertThat(masterHook.getRestoreCount(), is(equalTo(1)));
+   }
+
+   @Test
+   public void failGlobalWhenRestoringStateFails() throws Exception {
+   final JobGraph jobGraph = singleNonParallelJobVertexJobGraph();
+   final JobVertex onlyJobVertex = getOnlyJobVertex(jobGraph);
+   enableCheckpointing(jobGraph);
+
+   final DefaultScheduler scheduler = 
createSchedulerAndStartScheduling(jobGraph);
+
+   final ArchivedExecutionVertex onlyExecutionVertex = 
Iterables.getOnlyElement(scheduler.requestJob().getAllExecutionVertices());
+   final ExecutionAttemptID attemptId = 
onlyExecutionVertex.getCurrentExecutionAttempt().getAttemptId();
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.RUNNING));
+
+   final CheckpointCoordinator checkpointCoordinator = 
getCheckpointCoordinator(scheduler);
+
+   // register a master hook to fail state restore
+   final TestMasterHook masterHook = new 
TestMasterHook("testHook");
+   masterHook.enableFailOnRestore();
+   checkpointCoordinator.addMasterHook(masterHook);
+
+   // complete one checkpoint for state restore
+   
checkpointCoordinator.triggerCheckpoint(System.currentTimeMillis(),  false);
+   acknowledgeCheckpoint(checkpointCoordinator, 
jobGraph.getJobID(), attemptId);
+
+   scheduler.updateTaskExecutionState(new 
TaskExecutionState(jobGraph.getJobID(), attemptId, ExecutionState.FAILED));
+   

[GitHub] [flink] TisonKun commented on a change in pull request #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-28 Thread GitBox
TisonKun commented on a change in pull request #9974: 
[FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from 
CommandLine
URL: https://github.com/apache/flink/pull/9974#discussion_r339674656
 
 

 ##
 File path: 
flink-clients/src/test/java/org/apache/flink/client/cli/CliFrontendRunTest.java
 ##
 @@ -145,12 +147,23 @@ public void testParallelismWithOverflow() throws 
Exception {
// 

 
public static void verifyCliFrontend(
-   AbstractCustomCommandLine<?> cli,
+   AbstractCustomCommandLine cli,
String[] parameters,
int expectedParallelism,
boolean isDetached) throws Exception {
RunTestingCliFrontend testFrontend =
-   new RunTestingCliFrontend(cli, expectedParallelism, 
isDetached);
+   new RunTestingCliFrontend(new 
DefaultClusterClientServiceLoader(), cli, expectedParallelism, isDetached);
+   testFrontend.run(parameters); // verifies the expected values 
(see below)
+   }
+
+   public static void verifyCliFrontend(
 
 Review comment:
   This utility is only used in `CliFrontendRunWithYarnTest`. Please move it there.




[GitHub] [flink] TisonKun commented on a change in pull request #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-28 Thread GitBox
TisonKun commented on a change in pull request #9974: 
[FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from 
CommandLine
URL: https://github.com/apache/flink/pull/9974#discussion_r339676856
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
 ##
 @@ -674,6 +672,15 @@ private void checkYarnQueues(YarnClient yarnClient) {
}
}
 
+   private boolean containsFileWithEnding(final Set<File> files, final String suffix) {
 
 Review comment:
   Unused anywhere.




[jira] [Commented] (FLINK-14537) Improve influxdb reporter performance for kafka source/sink

2019-10-28 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961243#comment-16961243
 ] 

Yun Tang commented on FLINK-14537:
--

I think the main cause is the many unnecessary tags, which are still under discussion in https://issues.apache.org/jira/browse/FLINK-13418.

 

If your environment suffers from too many Kafka metrics, you could also pass 
[{{flink.disable-metrics}}|https://github.com/apache/flink/blob/96640cad3d770756cb6e70c73b25bd4269065775/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java#L106]
 into the consumer properties to disable reporting of Kafka metrics via Flink.
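A minimal sketch of that workaround, assuming only that the connector reads the `flink.disable-metrics` key from the consumer properties (as the linked source shows); the broker address and group id below are placeholders, and constructing the actual FlinkKafkaConsumer is omitted to keep the example self-contained:

```java
import java.util.Properties;

// Sketch: consumer properties that tell the Flink Kafka connector not to
// register Kafka's own metrics with Flink's metric system.
public class DisableKafkaMetricsDemo {

    static Properties consumerProperties() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-consumers");            // placeholder
        // Key recognized by FlinkKafkaConsumerBase (see link above).
        props.setProperty("flink.disable-metrics", "true");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
            consumerProperties().getProperty("flink.disable-metrics")); // true
    }
}
```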

> Improve influxdb reporter performance for kafka source/sink
> ---
>
> Key: FLINK-14537
> URL: https://issues.apache.org/jira/browse/FLINK-14537
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Affects Versions: 1.9.1
>Reporter: ouyangwulin
>Priority: Minor
> Fix For: 1.10.0, 1.9.2, 1.11.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> In our production environment, our data sources are mostly Kafka sources, and
> the InfluxDB reporter uses the Kafka topic and partition to create InfluxDB
> measurements. This produces 13531 measurements in InfluxDB, which makes it
> hard to find the measurements we want when querying metric data and hurts
> performance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14544) Resuming Externalized Checkpoint after terminal failure (file, async) end-to-end test fails on travis

2019-10-28 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961208#comment-16961208
 ] 

Congxian Qiu(klion26) edited comment on FLINK-14544 at 10/28/19 4:47 PM:
-

From the given log, if I understand correctly, this test failed because the log contains a {{MailboxStateException}}. We can filter out the {{MailboxStateException}} before counting the exceptions, and we need to figure out why the {{MailboxStateException}} is thrown.

 
{code:java}
Running externalized checkpoints test, with ORIGINAL_DOP=2 NEW_DOP=2 and 
STATE_BACKEND_TYPE=file STATE_BACKEND_FILE_ASYNC=true STATE_BACKEND_ROCKSDB_INCREMENTAL=false SIMULATE_FAILURE=true ...^M
19106 Waiting for job (2d7c274d4561078c592df0bbb1dfad52) to reach terminal 
state FAILED ...^M
19107 Job (2d7c274d4561078c592df0bbb1dfad52) reached terminal state FAILED^M
19108 Restoring job with externalized checkpoint at 
/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-50753031208/extern
  alized-chckpt-e2e-backend-dir/2d7c274d4561078c592df0bbb1dfad52/chk-2 ...^M
19109 Job (824b849f432dcffdeb0d18ab6b1f7d6c) is running.^M
19110 Checking for errors...^M
19111 Found error in log files:^M
..
25083 2019-10-25 20:46:09,361 ERROR org.apache.flink.runtime.taskmanager.Task   
  - Error while canceling task FailureMapper (1/1).^M
25084 java.util.concurrent.RejectedExecutionException: 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: Mailbox 
is in state CLOSED, but   is required to be in state OPEN for put 
operations.^M
25085 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:75)^M
25086 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.sendPriorityLetter(MailboxProcessor.java:176)^M
25087 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.allActionsCompleted(MailboxProcessor.java:172)^M
25088 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.cancel(StreamTask.java:540)^M
25089 at 
org.apache.flink.runtime.taskmanager.Task.cancelInvokable(Task.java:1191)^M
25090 at 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:764)^M
25091 at org.apache.flink.runtime.taskmanager.Task.run(Task.java:521)^M
25092 at java.lang.Thread.run(Thread.java:748)^M
25093 Caused by: 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: Mailbox 
is in state CLOSED, but is required to be in state OPEN for put operations.^M
25094 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkPutStateConditions(TaskMailboxImpl.java:199)^M
25095 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.putHeadInternal(TaskMailboxImpl.java:141)^M
25096 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.putFirst(TaskMailboxImpl.java:131)^M
25097 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:73)^M
25098 ... 7 more^M
{code}


was (Author: klion26):
From the given log, if I understand correctly, this test failed because the log contains a {{MailboxStateException}}. We can filter out the {{MailboxStateException}} before counting the exceptions.

 
{code:java}
Running externalized checkpoints test, with ORIGINAL_DOP=2 NEW_DOP=2 and 
STATE_BACKEND_TYPE=file STATE_BACKEND_FILE_ASYNC=true STATE_BACKEND_ROCKSDB_INCREMENTAL=false SIMULATE_FAILURE=true ...^M
19106 Waiting for job (2d7c274d4561078c592df0bbb1dfad52) to reach terminal 
state FAILED ...^M
19107 Job (2d7c274d4561078c592df0bbb1dfad52) reached terminal state FAILED^M
19108 Restoring job with externalized checkpoint at 
/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-50753031208/extern
  alized-chckpt-e2e-backend-dir/2d7c274d4561078c592df0bbb1dfad52/chk-2 ...^M
19109 Job (824b849f432dcffdeb0d18ab6b1f7d6c) is running.^M
19110 Checking for errors...^M
19111 Found error in log files:^M
..
25083 2019-10-25 20:46:09,361 ERROR org.apache.flink.runtime.taskmanager.Task   
  - Error while canceling task FailureMapper (1/1).^M
25084 java.util.concurrent.RejectedExecutionException: 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: Mailbox 
is in state CLOSED, but   is required to be in state OPEN for put 
operations.^M
25085 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:75)^M
25086 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.sendPriorityLetter(MailboxProcessor.java:176)^M
25087 at 

[GitHub] [flink] TisonKun commented on issue #9974: [FLINK-14501][FLINK-14502] Decouple ClusterDescriptor/ClusterSpecification from CommandLine

2019-10-28 Thread GitBox
TisonKun commented on issue #9974: [FLINK-14501][FLINK-14502] Decouple 
ClusterDescriptor/ClusterSpecification from CommandLine
URL: https://github.com/apache/flink/pull/9974#issuecomment-547033503
 
 
   Thanks for your update @kl0u! Sorry for the late reply. I will give it a review now.




[GitHub] [flink] KurtYoung commented on a change in pull request #9971: [FLINK-14490][table] Add methods for interacting with temporary objects

2019-10-28 Thread GitBox
KurtYoung commented on a change in pull request #9971: [FLINK-14490][table] Add 
methods for interacting with temporary objects
URL: https://github.com/apache/flink/pull/9971#discussion_r339658621
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/delegation/Parser.java
 ##
 @@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.delegation;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.catalog.UnresolvedIdentifier;
+import org.apache.flink.table.operations.Operation;
+import org.apache.flink.table.operations.QueryOperation;
+
+import java.util.List;
+
+/**
+ * Provides methods for parsing SQL objects from a SQL string.
+ */
+@Internal
+public interface Parser {
+
+   /**
+* Entry point for parsing SQL queries expressed as a String.
+*
+* Note:If the created {@link Operation} is a {@link 
QueryOperation}
+* it must be in a form that will be understood by the
+* {@link Planner#translate(List)} method.
+*
+* The produced Operation trees should already be validated.
+*
+* @param statement the SQL statement to evaluate
+* @return parsed queries as trees of relational {@link Operation}s
+* @throws org.apache.flink.table.api.SqlParserException when failed to 
parse the statement
+*/
+   List<Operation> parse(String statement);
+
+   /**
+* Entry point for parsing SQL identifiers expressed as a String.
+*
+* @param identifier the SQL identifier to parse
+* @return parsed identifier
+* @throws org.apache.flink.table.api.SqlParserException when failed to 
parse the identifier
+*/
+   UnresolvedIdentifier parseIdentifier(String identifier);
 
 Review comment:
   After some more thought, I think this API is also reasonable. My first impression was that a Parser normally has to deal with a whole SQL string, and table/function identifiers are embedded in that string. But I missed other scenarios: when people register or drop a table/function, the user provides the identifier directly.
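The register/drop scenario can be illustrated with a toy sketch (not Flink's SQL parser): the user hands over only an identifier string such as "cat.db.func", not a whole SQL statement, so a dedicated `parseIdentifier` entry point is useful. A real parser must also handle quoting and escaping, which this sketch does not.

```java
import java.util.Arrays;
import java.util.List;

// Toy identifier parser: splits a dotted identifier into its path parts.
public class IdentifierParseDemo {

    static List<String> parseIdentifier(String identifier) {
        // split on literal dots; no quoting/escaping support in this sketch
        return Arrays.asList(identifier.split("\\."));
    }

    public static void main(String[] args) {
        System.out.println(parseIdentifier("cat.db.func")); // [cat, db, func]
    }
}
```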




[jira] [Commented] (FLINK-14544) Resuming Externalized Checkpoint after terminal failure (file, async) end-to-end test fails on travis

2019-10-28 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961208#comment-16961208
 ] 

Congxian Qiu(klion26) commented on FLINK-14544:
---

From the given log, if I understand correctly, this test failed because the log contains a {{MailboxStateException}}. We can filter out the {{MailboxStateException}} before counting the exceptions.

 
{code:java}
Running externalized checkpoints test, with ORIGINAL_DOP=2 NEW_DOP=2 and 
STATE_BACKEND_TYPE=file STATE_BACKEND_FILE_ASYNC=true STATE_BACKEND_ROCKSDB_INCREMENTAL=false SIMULATE_FAILURE=true ...^M
19106 Waiting for job (2d7c274d4561078c592df0bbb1dfad52) to reach terminal 
state FAILED ...^M
19107 Job (2d7c274d4561078c592df0bbb1dfad52) reached terminal state FAILED^M
19108 Restoring job with externalized checkpoint at 
/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-50753031208/extern
  alized-chckpt-e2e-backend-dir/2d7c274d4561078c592df0bbb1dfad52/chk-2 ...^M
19109 Job (824b849f432dcffdeb0d18ab6b1f7d6c) is running.^M
19110 Checking for errors...^M
19111 Found error in log files:^M
..
25083 2019-10-25 20:46:09,361 ERROR org.apache.flink.runtime.taskmanager.Task   
  - Error while canceling task FailureMapper (1/1).^M
25084 java.util.concurrent.RejectedExecutionException: 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: Mailbox 
is in state CLOSED, but is required to be in state OPEN for put operations.^M
25085 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:75)^M
25086 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.sendPriorityLetter(MailboxProcessor.java:176)^M
25087 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.allActionsCompleted(MailboxProcessor.java:172)^M
25088 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.cancel(StreamTask.java:540)^M
25089 at 
org.apache.flink.runtime.taskmanager.Task.cancelInvokable(Task.java:1191)^M
25090 at 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:764)^M
25091 at org.apache.flink.runtime.taskmanager.Task.run(Task.java:521)^M
25092 at java.lang.Thread.run(Thread.java:748)^M
25093 Caused by: 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: Mailbox 
is in state CLOSED, but is required to be in state OPEN for put operations.^M
25094 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkPutStateConditions(TaskMailboxImpl.java:199)^M
25095 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.putHeadInternal(TaskMailboxImpl.java:141)^M
25096 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.putFirst(TaskMailboxImpl.java:131)^M
25097 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:73)^M
25098 ... 7 more^M
{code}
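The filtering proposed above could be sketched as follows. This is a hypothetical illustration in Java, not the actual log-checking script from the Flink end-to-end test suite; the class name, the `countRelevantErrors` helper, and the whitelist contents are all assumptions.

```java
import java.util.Arrays;
import java.util.List;

public class LogErrorFilter {

    // Hypothetical whitelist: exception patterns that are expected during an
    // induced failure and should not count as test errors, mirroring the
    // proposal to ignore MailboxStateException before counting.
    private static final List<String> IGNORED_PATTERNS =
            Arrays.asList("MailboxStateException");

    // Counts log lines that look like errors, excluding whitelisted patterns.
    static long countRelevantErrors(List<String> logLines) {
        return logLines.stream()
                .filter(line -> line.contains("ERROR") || line.contains("Exception"))
                .filter(line -> IGNORED_PATTERNS.stream().noneMatch(line::contains))
                .count();
    }

    public static void main(String[] args) {
        List<String> log = Arrays.asList(
                "INFO  Job reached terminal state FAILED",
                "ERROR Task - Error while canceling task FailureMapper (1/1).",
                "RejectedExecutionException: ... MailboxStateException: Mailbox is in state CLOSED");
        // The MailboxStateException line is filtered out; only the ERROR line counts.
        System.out.println(countRelevantErrors(log)); // prints 1
    }
}
```

The real check would additionally have to decide whether the `ERROR` line that wraps the whitelisted exception should also be ignored.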

> Resuming Externalized Checkpoint after terminal failure (file, async) 
> end-to-end test fails on travis
> -
>
> Key: FLINK-14544
> URL: https://issues.apache.org/jira/browse/FLINK-14544
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Priority: Blocker
>  Labels: test-stability
>
> From the log we can see the error message below, and then the job was 
> terminated because it exceeded the maximum log length. 
> {code}
> 2019-10-25 20:46:09,361 ERROR org.apache.flink.runtime.taskmanager.Task   
>   - Error while canceling task FailureMapper (1/1).
> java.util.concurrent.RejectedExecutionException: 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: 
> Mailbox is in state CLOSED, but is required to be in state OPEN for put 
> operations.
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:75)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.sendPriorityLetter(MailboxProcessor.java:176)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.allActionsCompleted(MailboxProcessor.java:172)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.cancel(StreamTask.java:540)
>   at 
> org.apache.flink.runtime.taskmanager.Task.cancelInvokable(Task.java:1191)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:764)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:521)
>

[jira] [Comment Edited] (FLINK-14544) Resuming Externalized Checkpoint after terminal failure (file, async) end-to-end test fails on travis

2019-10-28 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961208#comment-16961208
 ] 

Congxian Qiu(klion26) edited comment on FLINK-14544 at 10/28/19 4:14 PM:
-

From the given log, if I understand right, this test failed because the log 
contains a {{MailboxStateException}}. We can filter out the 
{{MailboxStateException}} before counting the exceptions.

 
{code:java}
Running externalized checkpoints test, with ORIGINAL_DOP=2 NEW_DOP=2 and 
STATE_BACKEND_TYPE=file STATE_BACKEND_FILE_ASYNC=true STATE_BACKEND_ROCKSDB_INCREMENTAL=false SIMULATE_FAILURE=true ...^M
19106 Waiting for job (2d7c274d4561078c592df0bbb1dfad52) to reach terminal 
state FAILED ...^M
19107 Job (2d7c274d4561078c592df0bbb1dfad52) reached terminal state FAILED^M
19108 Restoring job with externalized checkpoint at 
/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-50753031208/externalized-chckpt-e2e-backend-dir/2d7c274d4561078c592df0bbb1dfad52/chk-2 ...^M
19109 Job (824b849f432dcffdeb0d18ab6b1f7d6c) is running.^M
19110 Checking for errors...^M
19111 Found error in log files:^M
..
25083 2019-10-25 20:46:09,361 ERROR org.apache.flink.runtime.taskmanager.Task   
  - Error while canceling task FailureMapper (1/1).^M
25084 java.util.concurrent.RejectedExecutionException: 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: Mailbox 
is in state CLOSED, but is required to be in state OPEN for put operations.^M
25085 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:75)^M
25086 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.sendPriorityLetter(MailboxProcessor.java:176)^M
25087 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.allActionsCompleted(MailboxProcessor.java:172)^M
25088 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.cancel(StreamTask.java:540)^M
25089 at 
org.apache.flink.runtime.taskmanager.Task.cancelInvokable(Task.java:1191)^M
25090 at 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:764)^M
25091 at org.apache.flink.runtime.taskmanager.Task.run(Task.java:521)^M
25092 at java.lang.Thread.run(Thread.java:748)^M
25093 Caused by: 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: Mailbox 
is in state CLOSED, but is required to be in state OPEN for put operations.^M
25094 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkPutStateConditions(TaskMailboxImpl.java:199)^M
25095 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.putHeadInternal(TaskMailboxImpl.java:141)^M
25096 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.putFirst(TaskMailboxImpl.java:131)^M
25097 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:73)^M
25098 ... 7 more^M
{code}


was (Author: klion26):
>From the given log,  If I understand right, this test failed because the log 
>contains exception of {{MailboxStateException. we can filter out the 
>"}}{{MailboxStateException" before counting the exception counts.}}{{}}

 
{code:java}
Running externalized checkpoints test, with ORIGINAL_DOP=2 NEW_DOP=2 and 
STATE_BACKEND_TYPE=file STATE_BACKEND_FILE_ASYNC=true STATE_BACKEND_ROCKSDB_INC 
 REMENTAL=false SIMULATE_FAILURE=true ...^M
19106 Waiting for job (2d7c274d4561078c592df0bbb1dfad52) to reach terminal 
state FAILED ...^M
19107 Job (2d7c274d4561078c592df0bbb1dfad52) reached terminal state FAILED^M
19108 Restoring job with externalized checkpoint at 
/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-50753031208/extern
  alized-chckpt-e2e-backend-dir/2d7c274d4561078c592df0bbb1dfad52/chk-2 ...^M
19109 Job (824b849f432dcffdeb0d18ab6b1f7d6c) is running.^M
19110 Checking for errors...^M
19111 Found error in log files:^M
..
25083 2019-10-25 20:46:09,361 ERROR org.apache.flink.runtime.taskmanager.Task   
  - Error while canceling task FailureMapper (1/1).^M
25084 java.util.concurrent.RejectedExecutionException: 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxStateException: Mailbox 
is in state CLOSED, but   is required to be in state OPEN for put 
operations.^M
25085 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxExecutorImpl.executeFirst(MailboxExecutorImpl.java:75)^M
25086 at 
org.apache.flink.streaming.runtime.tasks.mailbox.execution.MailboxProcessor.sendPriorityLetter(MailboxProcessor.java:176)^M
25087 at 

[GitHub] [flink] GJL commented on issue #10014: [FLINK-12526] Remove STATE_UPDATER in ExecutionGraph

2019-10-28 Thread GitBox
GJL commented on issue #10014: [FLINK-12526] Remove STATE_UPDATER in 
ExecutionGraph
URL: https://github.com/apache/flink/pull/10014#issuecomment-547021122
 
 
   @yanghua I will take a look tomorrow.




[GitHub] [flink] GJL commented on a change in pull request #9920: [FLINK-14389][runtime] Restore task state before restarting tasks in DefaultScheduler

2019-10-28 Thread GitBox
GJL commented on a change in pull request #9920: [FLINK-14389][runtime] Restore 
task state before restarting tasks in DefaultScheduler
URL: https://github.com/apache/flink/pull/9920#discussion_r339654176
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultScheduler.java
 ##
 @@ -212,6 +212,13 @@ private Runnable restartTasks(final 
Set executionVertexV
 

resetForNewExecutionIfInTerminalState(verticesToRestart);
 
+   try {
+   restoreState(verticesToRestart);
 
 Review comment:
   It would be better to check whether all vertices are in a terminal state.
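The reviewer's suggestion could look roughly like the guard below. This is a simplified, hypothetical sketch: the enum is a stand-in for Flink's real `ExecutionState`, and the method and class names are assumptions, not the code actually added to `DefaultScheduler`.

```java
import java.util.Arrays;
import java.util.List;

public class RestartPrecondition {

    // Simplified stand-in for Flink's ExecutionState enum.
    enum ExecutionState {
        CREATED, RUNNING, CANCELING, CANCELED, FAILED, FINISHED;

        boolean isTerminal() {
            return this == CANCELED || this == FAILED || this == FINISHED;
        }
    }

    // Hypothetical guard: only reset executions and restore task state once
    // every vertex to be restarted has reached a terminal state.
    static void checkAllVerticesInTerminalState(List<ExecutionState> states) {
        for (ExecutionState state : states) {
            if (!state.isTerminal()) {
                throw new IllegalStateException(
                        "Cannot reset/restore: vertex still in non-terminal state " + state);
            }
        }
    }

    public static void main(String[] args) {
        // All vertices terminal: the guard passes silently.
        checkAllVerticesInTerminalState(
                Arrays.asList(ExecutionState.CANCELED, ExecutionState.FAILED));
        System.out.println("precondition satisfied");
    }
}
```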




[GitHub] [flink] flinkbot edited a comment on issue #10020: [FLINK-12147] [metrics] Update influxdb-java to remove writes of invalid values to InfluxDB

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #10020: [FLINK-12147] [metrics] Update 
influxdb-java to remove writes of invalid values to InfluxDB
URL: https://github.com/apache/flink/pull/10020#issuecomment-546974232
 
 
   
   ## CI report:
   
   * 02beb6ed94e7b27fd3ff9242a0669138828a55cb : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133832814)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10016: [FLINK-14547][table-planner-blink] Fix UDF cannot be in the join condition in blink planner

2019-10-28 Thread GitBox
flinkbot edited a comment on issue #10016: [FLINK-14547][table-planner-blink] 
Fix UDF cannot be in the join condition in blink planner
URL: https://github.com/apache/flink/pull/10016#issuecomment-546907883
 
 
   
   ## CI report:
   
   * 0af01dfef17a8ebcafd92137f87a96023258e124 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133799812)
   * 4214827738bb890398bd27417c10217771b70906 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133832718)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] yanghua commented on issue #10014: [FLINK-12526] Remove STATE_UPDATER in ExecutionGraph

2019-10-28 Thread GitBox
yanghua commented on issue #10014: [FLINK-12526] Remove STATE_UPDATER in 
ExecutionGraph
URL: https://github.com/apache/flink/pull/10014#issuecomment-547016198
 
 
   @GJL Travis is green now. If anything needs to be changed, please let me 
know, thanks~




[GitHub] [flink] dawidwys commented on a change in pull request #9971: [FLINK-14490][table] Add methods for interacting with temporary objects

2019-10-28 Thread GitBox
dawidwys commented on a change in pull request #9971: [FLINK-14490][table] Add 
methods for interacting with temporary objects
URL: https://github.com/apache/flink/pull/9971#discussion_r339647504
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/delegation/Parser.java
 ##
 @@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.delegation;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.catalog.UnresolvedIdentifier;
+import org.apache.flink.table.operations.Operation;
+import org.apache.flink.table.operations.QueryOperation;
+
+import java.util.List;
+
+/**
+ * Provides methods for parsing SQL objects from a SQL string.
+ */
+@Internal
+public interface Parser {
+
+   /**
+* Entry point for parsing SQL queries expressed as a String.
+*
+* Note: If the created {@link Operation} is a {@link QueryOperation}
+* it must be in a form that will be understood by the
+* {@link Planner#translate(List)} method.
+*
+* The produced Operation trees should already be validated.
+*
+* @param statement the SQL statement to evaluate
+* @return parsed queries as trees of relational {@link Operation}s
+* @throws org.apache.flink.table.api.SqlParserException when failed to 
parse the statement
+*/
+   List<Operation> parse(String statement);
+
+   /**
+* Entry point for parsing SQL identifiers expressed as a String.
+*
+* @param identifier the SQL identifier to parse
+* @return parsed identifier
+* @throws org.apache.flink.table.api.SqlParserException when failed to 
parse the identifier
+*/
+   UnresolvedIdentifier parseIdentifier(String identifier);
 
 Review comment:
   Why do you think it's not intuitive for a parser to create an 
UnresolvedIdentifier? 
   
   The alternative would be to return a `String[]`, but I think it's cleaner to 
return a proper object rather than a raw string array. This way the parts 
already align with catalog/database/object.
   
   BTW Calcite behaves similarly. `SqlParser` creates a `SqlIdentifier`.
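The design choice being discussed can be sketched as a small value object. This is a minimal, hypothetical stand-in, not Flink's actual `UnresolvedIdentifier` class: the nested class, its `of(...)` factory, and the accessor names are assumptions for illustration.

```java
import java.util.Optional;

// A typed identifier object, as opposed to returning a bare String[]: up to
// three parts align positionally with catalog/database/object.
public class UnresolvedIdentifierExample {

    static final class UnresolvedIdentifier {
        private final String catalogName;   // may be null (unqualified)
        private final String databaseName;  // may be null (unqualified)
        private final String objectName;

        private UnresolvedIdentifier(String catalogName, String databaseName, String objectName) {
            this.catalogName = catalogName;
            this.databaseName = databaseName;
            this.objectName = objectName;
        }

        // Factory interpreting 1-3 identifier parts, rightmost part is the object.
        static UnresolvedIdentifier of(String... parts) {
            switch (parts.length) {
                case 1: return new UnresolvedIdentifier(null, null, parts[0]);
                case 2: return new UnresolvedIdentifier(null, parts[0], parts[1]);
                case 3: return new UnresolvedIdentifier(parts[0], parts[1], parts[2]);
                default: throw new IllegalArgumentException("expected 1 to 3 identifier parts");
            }
        }

        Optional<String> getCatalogName() { return Optional.ofNullable(catalogName); }
        Optional<String> getDatabaseName() { return Optional.ofNullable(databaseName); }
        String getObjectName() { return objectName; }
    }

    public static void main(String[] args) {
        // A parser would split "cat.db.t" and hand the parts to the factory.
        UnresolvedIdentifier id = UnresolvedIdentifier.of("cat.db.t".split("\\."));
        System.out.println(id.getCatalogName().get() + "/"
                + id.getDatabaseName().get() + "/" + id.getObjectName()); // prints cat/db/t
    }
}
```

Compared with a `String[]`, the typed object makes the catalog/database/object alignment explicit at the call site instead of relying on array positions.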








[GitHub] [flink] ifndef-SleePy commented on issue #9853: [FLINK-13904][checkpointing] Avoid competition of checkpoint triggering

2019-10-28 Thread GitBox
ifndef-SleePy commented on issue #9853: [FLINK-13904][checkpointing] Avoid 
competition of checkpoint triggering
URL: https://github.com/apache/flink/pull/9853#issuecomment-547015390
 
 
   @pnowojski, wow! The issue has been open for over 4 years. It really 
surprises me.
   Thanks for the detailed response. I feel like I understand GitHub better 
and more deeply now, haha.




[GitHub] [flink] ifndef-SleePy commented on issue #9885: [FLINK-14344][checkpointing] Snapshots master hook state asynchronously

2019-10-28 Thread GitBox
ifndef-SleePy commented on issue #9885: [FLINK-14344][checkpointing] Snapshots 
master hook state asynchronously
URL: https://github.com/apache/flink/pull/9885#issuecomment-547011917
 
 
   @pnowojski, thanks for the reminder. Will rebase onto master later.
   
   Regarding the dependencies of the subtasks:
   this PR blocks https://issues.apache.org/jira/browse/FLINK-13905.
   However, https://issues.apache.org/jira/browse/FLINK-13848 is quite 
independent. It should be applied at the very end of the refactoring. All the IO 
operations and locks should be taken care of before executing the logic in 
the main thread.




  1   2   3   4   >