[jira] [Commented] (FLINK-14742) Unstable tests TaskExecutorTest#testSubmitTaskBeforeAcceptSlot

2020-01-16 Thread Biao Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17017779#comment-17017779
 ] 

Biao Liu commented on FLINK-14742:
--

Such a subtle case, given that it couldn't be reproduced :)
I had checked the test case but didn't find the clue. Nice work [~kkl0u]!
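
For context, the dump below shows the main thread parked in an unbounded {{CompletableFuture#get()}}. A minimal sketch (not the actual test code) of how a bounded wait turns such a hang into a diagnosable timeout instead of a stuck build:

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedWaitSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the acknowledgement the test blocks on; it is never
        // completed, just like in the hanging run.
        CompletableFuture<Void> ack = new CompletableFuture<>();
        try {
            // ack.get() would park the main thread forever, as in the dump
            // below; a bounded get() surfaces the hang as a TimeoutException.
            ack.get(10, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            System.err.println("Gave up waiting for the acknowledgement: " + e);
        }
    }
}
{code}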

> Unstable tests TaskExecutorTest#testSubmitTaskBeforeAcceptSlot
> --
>
> Key: FLINK-14742
> URL: https://issues.apache.org/jira/browse/FLINK-14742
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Zili Chen
>Assignee: Kostas Kloudas
>Priority: Critical
> Fix For: 1.10.0
>
>
> deadlock.
> {code}
> "main" #1 prio=5 os_prio=0 tid=0x7f1f8800b800 nid=0x356 waiting on 
> condition [0x7f1f8e65c000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x86e9e9c0> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>   at 
> org.apache.flink.runtime.taskexecutor.TaskExecutorTest.testSubmitTaskBeforeAcceptSlot(TaskExecutorTest.java:1108)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {code}
> full log https://api.travis-ci.org/v3/job/611275566/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15611) KafkaITCase.testOneToOneSources fails on Travis

2020-01-16 Thread Biao Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17017767#comment-17017767
 ] 

Biao Liu commented on FLINK-15611:
--

{quote}
...
Caused by: java.lang.Exception: Received a duplicate: 4924
at 
org.apache.flink.streaming.connectors.kafka.testutils.ValidatingExactlyOnceSink.invoke(ValidatingExactlyOnceSink.java:57)
at 
org.apache.flink.streaming.connectors.kafka.testutils.ValidatingExactlyOnceSink.invoke(ValidatingExactlyOnceSink.java:36)
at 
org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
at 
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:170)
at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
at java.lang.Thread.run(Thread.java:748)
{quote}

This looks like a serious issue. The exactly-once semantics here seem to be 
broken.
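
For reference, a simplified sketch of the duplicate check the stack trace above runs through. This is modeled only on the thrown message; the real {{ValidatingExactlyOnceSink}} also snapshots its state in checkpoints, so this is not the actual implementation:

{code:java}
import java.util.BitSet;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

// Simplified duplicate-detecting sink: remembers every value it has seen and
// fails on a repeat, which is the condition behind "Received a duplicate: 4924".
public class DuplicateCheckingSink implements SinkFunction<Integer> {
    private final BitSet seen = new BitSet();

    @Override
    public void invoke(Integer value, Context context) throws Exception {
        if (seen.get(value)) {
            throw new Exception("Received a duplicate: " + value);
        }
        seen.set(value);
    }
}
{code}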

> KafkaITCase.testOneToOneSources fails on Travis
> ---
>
> Key: FLINK-15611
> URL: https://issues.apache.org/jira/browse/FLINK-15611
> Project: Flink
>  Issue Type: Bug
>Reporter: Yangze Guo
>Priority: Critical
> Fix For: 1.10.0
>
>
> The test {{KafkaITCase.testOneToOneSources}} failed on Travis.
> {code:java}
> 03:15:02,019 INFO  
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl  - 
> Deleting topic scale-down-before-first-checkpoint
> 03:15:02,037 INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase  - 
> 
> Test 
> testScaleDownBeforeFirstCheckpoint(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>  successfully run.
> 
> 03:15:02,038 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:02,038 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- Shut down KafkaTestBase 
> 03:15:02,038 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:25,728 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:25,728 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- KafkaTestBase finished
> 03:15:25,728 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:25.731 [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 245.845 s - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 03:15:26.099 [INFO] 
> 03:15:26.099 [INFO] Results:
> 03:15:26.099 [INFO] 
> 03:15:26.099 [ERROR] Failures: 
> 03:15:26.099 [ERROR]   
> KafkaITCase.testOneToOneSources:97->KafkaConsumerTestBase.runOneToOneExactlyOnceTest:862
>  Test failed: Job execution failed.
> {code}
> https://api.travis-ci.com/v3/job/276124537/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10879: [FLINK-15595][table] Remove CoreModule in ModuleManager

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10879: [FLINK-15595][table] Remove 
CoreModule in ModuleManager
URL: https://github.com/apache/flink/pull/10879#issuecomment-575502614
 
 
   
   ## CI report:
   
   * dea387f141fd1d91c2082e39d9a10bab4890747d Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144889679) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4421)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10880: [FLINK-15469][SQL] Update UpsertStreamTableSink and RetractStreamTabl…

2020-01-16 Thread GitBox
flinkbot commented on issue #10880: [FLINK-15469][SQL] Update 
UpsertStreamTableSink and RetractStreamTabl…
URL: https://github.com/apache/flink/pull/10880#issuecomment-575512352
 
 
   
   ## CI report:
   
   * e0f8cdbe764bc1f9687e862467088de1ce260780 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10878: [FLINK-15599][table] SQL client requires both legacy and blink planner to be on the classpath

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10878: [FLINK-15599][table] SQL client 
requires both legacy and blink planner to be on the classpath
URL: https://github.com/apache/flink/pull/10878#issuecomment-575493814
 
 
   
   ## CI report:
   
   * 6ce81e21780883796248af5c87d2ec5f1dc5e0ef UNKNOWN
   * 47ab573d50ed82018632ed1106b9deb79e83d820 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144866957) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4419)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10877: [FLINK-15602][table-planner-blink] Padding TIMESTAMP type to respect …

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10877: [FLINK-15602][table-planner-blink] 
Padding TIMESTAMP type to respect …
URL: https://github.com/apache/flink/pull/10877#issuecomment-575493649
 
 
   
   ## CI report:
   
   * c53d150308c6f08fc6c2d9bd6dd84b36e2021ef1 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144866938) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4418)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10332: [FLINK-13905][checkpointing] Separate checkpoint triggering into several asynchronous stages

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10332: [FLINK-13905][checkpointing] 
Separate checkpoint triggering into several asynchronous stages
URL: https://github.com/apache/flink/pull/10332#issuecomment-559012596
 
 
   
   ## CI report:
   
   * 3198d4f64415b0d257733b7d214dece64c392081 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/138375484) 
   * 22c0207087c7e3fee9c2472470cf59f0a4cc3a2a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/138433557) 
   * 0579ee0ad7bf99f51230813ae9ca3a189d7e475d Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140628079) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3481)
 
   * f12b35af0ac084469f5c41556bd11f0b191265ba Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143878207) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4254)
 
   * 879511efb0b0c15efe58d6728e073f366023c384 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/18410) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4350)
 
   * 91a493276a7658eb9cf7bb98969572a467cfeaa7 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144728716) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4397)
 
   * 43ca6bce736528c74742d2dc3c50ca0b43a6645e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144866416) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4417)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10832: [FLINK-14163][runtime]Enforce synchronous registration of Execution#producedPartitions

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10832: [FLINK-14163][runtime]Enforce 
synchronous registration of Execution#producedPartitions
URL: https://github.com/apache/flink/pull/10832#issuecomment-573276729
 
 
   
   ## CI report:
   
   * 5eb1599945f1bee35c342f762cc25684013d2d83 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143984465) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4264)
 
   * 73579bbc6b556fe42e55e21581c5994f84480843 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144033919) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4267)
 
   * e303d84258f1e63edd45867f6c4e89e181307ee5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144299177) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4325)
 
   * 426112c59a86ec127040178efda1085231f1988f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144326556) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4334)
 
   * fe731e4a039fba606be83a9535841c157a048d58 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144458002) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4355)
 
   * 1278b4488cb8bc05addaf359458ed4bae6c1967b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144670693) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4386)
 
   * 3eba7e89df2c0b5ad6ba098aaabbab42be37df2f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144675097) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4388)
 
   * 66afb2a54d5da715b3d59bdf58a4de12af833c2f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144866402) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4416)
 
   * f2d45fd25663092d9a11e0798900fb886ed46ba3 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144889655) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4420)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15469) Update UpsertStreamTableSink and RetractStreamTableSink and related interface to new type system

2020-01-16 Thread Zhenghua Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17017755#comment-17017755
 ] 

Zhenghua Gao commented on FLINK-15469:
--

After an initial POC, getTypeClass is not needed, because the Upsert/Retract 
stream table sinks always use Java *Tuple2*. We also don't need any changes to 
*getOutputType*/*getConsumedDataType*, because the codegen can use 
*getRecordDataType* directly.
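
For illustration, a minimal sketch (the class and field names are hypothetical, not from the PR) of what a sink could declare once *getRecordDataType* exists, carrying precision and scale that TypeInformation cannot express:

{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

// Hypothetical sink fragment: with getRecordDataType, the requested record
// type can carry precision and scale, e.g. TIMESTAMP(3) and DECIMAL(10, 2).
public class PreciseSinkSketch {
    public DataType getRecordDataType() {
        return DataTypes.ROW(
            DataTypes.FIELD("ts", DataTypes.TIMESTAMP(3)),
            DataTypes.FIELD("amount", DataTypes.DECIMAL(10, 2)));
    }
}
{code}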
 

> Update UpsertStreamTableSink and RetractStreamTableSink and related interface 
> to new type system
> 
>
> Key: FLINK-15469
> URL: https://issues.apache.org/jira/browse/FLINK-15469
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently *UpsertStreamTableSink* can only return TypeInformation of the 
> requested record, which can't support types with precision and scale, e.g. 
> TIMESTAMP(p), DECIMAL(p,s).
> The proposal is to deprecate the *getRecordType* API and add a 
> *getRecordDataType* API instead to return the data type of the requested 
> record.
> {code:java}
> /**
>  * Returns the requested record type.
>  *
>  * @deprecated This method will be removed in future versions. It's
>  * recommended to use {@link #getRecordDataType()} instead.
>  */
> @Deprecated
> TypeInformation<T> getRecordType();
> 
> /**
>  * Returns the requested record data type.
>  */
> DataType getRecordDataType();
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-15469) Update UpsertStreamTableSink and RetractStreamTableSink and related interface to new type system

2020-01-16 Thread Zhenghua Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17014828#comment-17014828
 ] 

Zhenghua Gao edited comment on FLINK-15469 at 1/17/20 7:12 AM:
---

Hi [~lzljs3620320], after rethinking the whole thing, we should bring the 
physical data types of the sink and the type class (Java Tuple2 or Scala 
Tuple2) to the planner so that the Blink planner can handle the precision 
correctly. So here is the proposal:
 # deprecate getRecordType and introduce getRecordDataType instead
 # -remove getOutputType and introduce getConsumedDataType, which returns ROW-
 # -introduce getTypeClass interface, which returns the type class for codegen-

What do you think? I will file a PR soon if this works.


was (Author: docete):
Hi [~lzljs3620320], after rethinking the whole thing, we should bring the 
physical data types of the sink and the type class (Java Tuple2 or Scala 
Tuple2) to the planner so that the Blink planner can handle the precision 
correctly. So here is the proposal:
 # deprecate getRecordType and introduce getRecordDataType instead
 # remove getOutputType and introduce getConsumedDataType, which returns ROW
 # introduce getTypeClass interface, which returns the type class for codegen

What do you think? I will file a PR soon if this works.

> Update UpsertStreamTableSink and RetractStreamTableSink and related interface 
> to new type system
> 
>
> Key: FLINK-15469
> URL: https://issues.apache.org/jira/browse/FLINK-15469
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently *UpsertStreamTableSink* can only return TypeInformation of the 
> requested record, which can't support types with precision and scale, e.g. 
> TIMESTAMP(p), DECIMAL(p,s).
> The proposal is to deprecate the *getRecordType* API and add a 
> *getRecordDataType* API instead to return the data type of the requested 
> record.
> {code:java}
> /**
>  * Returns the requested record type.
>  *
>  * @deprecated This method will be removed in future versions. It's
>  * recommended to use {@link #getRecordDataType()} instead.
>  */
> @Deprecated
> TypeInformation<T> getRecordType();
> 
> /**
>  * Returns the requested record data type.
>  */
> DataType getRecordDataType();
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10878: [FLINK-15599][table] SQL client requires both legacy and blink planner to be on the classpath

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10878: [FLINK-15599][table] SQL client 
requires both legacy and blink planner to be on the classpath
URL: https://github.com/apache/flink/pull/10878#issuecomment-575493814
 
 
   
   ## CI report:
   
   * 6ce81e21780883796248af5c87d2ec5f1dc5e0ef UNKNOWN
   * 47ab573d50ed82018632ed1106b9deb79e83d820 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10879: [FLINK-15595][table] Remove CoreModule in ModuleManager

2020-01-16 Thread GitBox
flinkbot commented on issue #10879: [FLINK-15595][table] Remove CoreModule in 
ModuleManager
URL: https://github.com/apache/flink/pull/10879#issuecomment-575502614
 
 
   
   ## CI report:
   
   * dea387f141fd1d91c2082e39d9a10bab4890747d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10877: [FLINK-15602][table-planner-blink] Padding TIMESTAMP type to respect …

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10877: [FLINK-15602][table-planner-blink] 
Padding TIMESTAMP type to respect …
URL: https://github.com/apache/flink/pull/10877#issuecomment-575493649
 
 
   
   ## CI report:
   
   * c53d150308c6f08fc6c2d9bd6dd84b36e2021ef1 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144866938) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4418)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15469) Update UpsertStreamTableSink and RetractStreamTableSink and related interface to new type system

2020-01-16 Thread Zhenghua Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenghua Gao updated FLINK-15469:
-
Summary: Update UpsertStreamTableSink and RetractStreamTableSink and 
related interface to new type system  (was: UpsertStreamTableSink should 
support new type system)

> Update UpsertStreamTableSink and RetractStreamTableSink and related interface 
> to new type system
> 
>
> Key: FLINK-15469
> URL: https://issues.apache.org/jira/browse/FLINK-15469
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently *UpsertStreamTableSink* can only return TypeInformation of the 
> requested record, which can't support types with precision and scale, e.g. 
> TIMESTAMP(p), DECIMAL(p,s).
> The proposal is to deprecate the *getRecordType* API and add a 
> *getRecordDataType* API instead to return the data type of the requested 
> record.
> {code:java}
> /**
>  * Returns the requested record type.
>  *
>  * @deprecated This method will be removed in future versions. It's
>  * recommended to use {@link #getRecordDataType()} instead.
>  */
> @Deprecated
> TypeInformation<T> getRecordType();
> 
> /**
>  * Returns the requested record data type.
>  */
> DataType getRecordDataType();
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15573) Let Flink SQL PlannerExpressionParserImpl#FieldReference use Unicode as its default charset

2020-01-16 Thread Lsw_aka_laplace (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lsw_aka_laplace updated FLINK-15573:

Description: 
 UPDATE:

  Flink now uses Calcite as its SQL planner. Calcite currently supports only 
the ISO-8859-1 charset, and the charset cannot be configured. But even so, from 
my perspective, we still need to change the charset of 
PlannerExpressionParserImpl#fieldReference, because JavaIdentifier cannot meet 
our needs either.

  Regarding the implementation, PlannerExpressionParserImpl uses the native 
Scala parser combinators, which read and consume `scala.Char` (or just regard 
it as the Java char type). Caring only about the char type is enough for us, 
which means the implementation does not have to deal with the charset problem 
at all, leading to *a simple and backwards compatible solution*.

  The implementation is almost the same as the code below indicates. I have 
actually made this change in my company-specific branch and deployed it. It 
works well~

Now I am talking about the `PlannerExpressionParserImpl`.

    For now the fieldReference charset is JavaIdentifier; why not change it to 
UnicodeIdentifier?

    Currently my team actually runs into this problem. For instance, data from 
ES always contains an `@timestamp` field, which JavaIdentifier cannot accept. 
So what we did is simply let the fieldReference charset use Unicode:

{code:scala}
lazy val extensionIdent: Parser[String] = ( "" ~> // handle whitespace
  rep1(
    acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
    elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
  ) ^^ (_.mkString)
)

lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
  (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }
{code}

It is simple but really makes sense~

MySQL supports Unicode; see the picture below, with a field called `@@`:

!image-2020-01-15-21-49-19-373.png!

Looking forward to any opinions.

 

  was:
 UPDATE:

  Flink now uses Calcite as its SQL planner. Calcite currently supports only 
the ISO-8859-1 charset, and the charset cannot be configured. But even so, from 
my perspective, we still need to change the charset of 
PlannerExpressionParserImpl#fieldReference, because JavaIdentifier cannot meet 
our needs either.

  Regarding the implementation, PlannerExpressionParserImpl uses the native 
Scala parser combinators, which consume `scala.Char` (or just regard it as the 
Java char type). Caring only about the char type is enough for us, which means 
the implementation does not have to deal with the charset problem at all, 
leading to *a simple and backwards compatible solution*.

  The implementation is almost the same as the code below indicates. I have 
actually made this change in my company-specific branch and deployed it. It 
works well~

Now I am talking about the `PlannerExpressionParserImpl`.

    For now the fieldReference charset is JavaIdentifier; why not change it to 
UnicodeIdentifier?

    Currently my team actually runs into this problem. For instance, data from 
ES always contains an `@timestamp` field, which JavaIdentifier cannot accept. 
So what we did is simply let the fieldReference charset use Unicode:

{code:scala}
lazy val extensionIdent: Parser[String] = ( "" ~> // handle whitespace
  rep1(
    acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
    elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
  ) ^^ (_.mkString)
)

lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
  (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }
{code}

It is simple but really makes sense~

MySQL supports Unicode; see the picture below, with a field called `@@`:

!image-2020-01-15-21-49-19-373.png!

Looking forward to any opinions.

 


> Let Flink SQL PlannerExpressionParserImpl#FieldReference use Unicode as its 
> default charset
> -
>
> Key: FLINK-15573
> URL: https://issues.apache.org/jira/browse/FLINK-15573
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Lsw_aka_laplace
>Priority: Minor
> Attachments: image-2020-01-15-21-49-19-373.png
>
>
>  UPDATE:
>   Flink now uses Calcite as its SQL planner. Calcite currently supports only 
> the ISO-8859-1 charset, and the charset cannot be configured. But even so, 
> from my perspective, we still need to change the charset of 
> PlannerExpressionParserImpl#fieldReference, because JavaIdentifier cannot 
> meet our needs either.
>   Regarding the implementation, 

[GitHub] [flink] flinkbot edited a comment on issue #10332: [FLINK-13905][checkpointing] Separate checkpoint triggering into several asynchronous stages

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10332: [FLINK-13905][checkpointing] 
Separate checkpoint triggering into several asynchronous stages
URL: https://github.com/apache/flink/pull/10332#issuecomment-559012596
 
 
   
   ## CI report:
   
   * 3198d4f64415b0d257733b7d214dece64c392081 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/138375484) 
   * 22c0207087c7e3fee9c2472470cf59f0a4cc3a2a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/138433557) 
   * 0579ee0ad7bf99f51230813ae9ca3a189d7e475d Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140628079) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3481)
 
   * f12b35af0ac084469f5c41556bd11f0b191265ba Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143878207) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4254)
 
   * 879511efb0b0c15efe58d6728e073f366023c384 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/18410) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4350)
 
   * 91a493276a7658eb9cf7bb98969572a467cfeaa7 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144728716) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4397)
 
   * 43ca6bce736528c74742d2dc3c50ca0b43a6645e Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144866416) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4417)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15573) Let Flink SQL PlannerExpressionParserImpl#FieldReference use Unicode as its default charset

2020-01-16 Thread Lsw_aka_laplace (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lsw_aka_laplace updated FLINK-15573:

Description: 
 UPDATE:

  Flink now uses Calcite as its SQL planner. Calcite currently supports only 
the ISO-8859-1 charset, and the charset cannot be configured. But even so, from 
my perspective, we still need to change the charset of 
PlannerExpressionParserImpl#fieldReference, because JavaIdentifier cannot meet 
our needs either.

  Regarding the implementation, PlannerExpressionParserImpl uses the native 
Scala parser combinators, which consume `scala.Char` (or just regard it as the 
Java char type). Caring only about the char type is enough for us, which means 
the implementation does not have to deal with the charset problem at all, 
leading to *a simple and backwards compatible solution*.

  The implementation is almost the same as the code below indicates. I have 
actually made this change in my company-specific branch and deployed it. It 
works well~

Now I am talking about the `PlannerExpressionParserImpl`.

    For now the fieldReference charset is JavaIdentifier; why not change it to 
UnicodeIdentifier?

    Currently my team actually runs into this problem. For instance, data from 
ES always contains an `@timestamp` field, which JavaIdentifier cannot accept. 
So what we did is simply let the fieldReference charset use Unicode:

{code:scala}
lazy val extensionIdent: Parser[String] = ( "" ~> // handle whitespace
  rep1(
    acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
    elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
  ) ^^ (_.mkString)
)

lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
  (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }
{code}

It is simple but really makes sense~

MySQL supports Unicode; see the picture below, with a field called `@@`:

!image-2020-01-15-21-49-19-373.png!

Looking forward to any opinions.

 

  was:
Now I am talking about the `PlannerExpressionParserImpl`.

    For now the fieldReference charset is JavaIdentifier; why not change it to 
UnicodeIdentifier?

    Currently my team actually runs into this problem. For instance, data from 
ES always contains an `@timestamp` field, which JavaIdentifier cannot accept. 
So what we did is simply let the fieldReference charset use Unicode:

{code:scala}
lazy val extensionIdent: Parser[String] = ( "" ~> // handle whitespace
  rep1(
    acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
    elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
  ) ^^ (_.mkString)
)

lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
  (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }
{code}

It is simple but really makes sense~

MySQL supports Unicode; see the picture below, with a field called `@@`:

!image-2020-01-15-21-49-19-373.png!

Looking forward to any opinions.

 


> Let Flink SQL PlannerExpressionParserImpl#FieldReference use Unicode as its 
> default charset
> -
>
> Key: FLINK-15573
> URL: https://issues.apache.org/jira/browse/FLINK-15573
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Lsw_aka_laplace
>Priority: Minor
> Attachments: image-2020-01-15-21-49-19-373.png
>
>
>  UPDATE:
>   Flink now uses Calcite as its SQL planner. Calcite currently supports only 
> the ISO-8859-1 charset, and the charset cannot be configured. But even so, 
> from my perspective, we still need to change the charset of 
> PlannerExpressionParserImpl#fieldReference, because JavaIdentifier cannot 
> meet our needs either.
>   Regarding the implementation, PlannerExpressionParserImpl uses the native 
> Scala parser combinators, which consume `scala.Char` (or just regard it as 
> the Java char type). Caring only about the char type is enough for us, which 
> means the implementation does not have to deal with the charset problem at 
> all, leading to *a simple and backwards compatible solution*.
>   The implementation is almost the same as the code below indicates. I have 
> actually made this change in my company-specific branch and deployed it. It 
> works well~
> Now I am talking about the `PlannerExpressionParserImpl`.
>     For now the fieldReference charset is JavaIdentifier; why not change it 
> to UnicodeIdentifier?
>     Currently my team actually runs into this problem. For instance, data 
> from ES always contains an `@timestamp` field, which JavaIdentifier cannot 
> accept. So what we did is simply 

[GitHub] [flink] flinkbot commented on issue #10880: [FLINK-15469][SQL] Update UpsertStreamTableSink and RetractStreamTabl…

2020-01-16 Thread GitBox
flinkbot commented on issue #10880: [FLINK-15469][SQL] Update 
UpsertStreamTableSink and RetractStreamTabl…
URL: https://github.com/apache/flink/pull/10880#issuecomment-575502134
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e0f8cdbe764bc1f9687e862467088de1ce260780 (Fri Jan 17 
07:04:36 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15469).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10832: [FLINK-14163][runtime]Enforce synchronous registration of Execution#producedPartitions

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10832: [FLINK-14163][runtime]Enforce 
synchronous registration of Execution#producedPartitions
URL: https://github.com/apache/flink/pull/10832#issuecomment-573276729
 
 
   
   ## CI report:
   
   * 5eb1599945f1bee35c342f762cc25684013d2d83 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143984465) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4264)
 
   * 73579bbc6b556fe42e55e21581c5994f84480843 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144033919) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4267)
 
   * e303d84258f1e63edd45867f6c4e89e181307ee5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144299177) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4325)
 
   * 426112c59a86ec127040178efda1085231f1988f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144326556) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4334)
 
   * fe731e4a039fba606be83a9535841c157a048d58 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144458002) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4355)
 
   * 1278b4488cb8bc05addaf359458ed4bae6c1967b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144670693) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4386)
 
   * 3eba7e89df2c0b5ad6ba098aaabbab42be37df2f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144675097) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4388)
 
   * 66afb2a54d5da715b3d59bdf58a4de12af833c2f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144866402) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4416)
 
   * f2d45fd25663092d9a11e0798900fb886ed46ba3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10836: [FLINK-11589][Security] Support security module and context discovery via ServiceLoader

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10836: [FLINK-11589][Security] Support 
security module and context discovery via ServiceLoader
URL: https://github.com/apache/flink/pull/10836#issuecomment-573438212
 
 
   
   ## CI report:
   
   * 0e53d682360fe30462917c820c9aa866caa957b5 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144062568) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4270)
 
   * c5927b76a270ceaf3ae6442826e582d97487c52d Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144863310) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4415)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15469) UpsertStreamTableSink should support new type system

2020-01-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15469:
---
Labels: pull-request-available  (was: )

> UpsertStreamTableSink should support new type system
> 
>
> Key: FLINK-15469
> URL: https://issues.apache.org/jira/browse/FLINK-15469
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently *UpsertStreamTableSink* can only return TypeInformation of the 
> requested record, which can't support types with precision and scale, e.g. 
> TIMESTAMP(p), DECIMAL(p,s).
> The proposal is to deprecate the *getRecordType* API and add a 
> *getRecordDataType* API instead to return the data type of the requested 
> record.
> {code:java}
> /**
>  * Returns the requested record type.
>  *
>  * @deprecated This method will be removed in future versions. It's
>  * recommended to use {@link #getRecordDataType()} instead.
>  */
> @Deprecated
> TypeInformation<T> getRecordType();
> 
> /**
>  * Returns the requested record data type.
>  */
> DataType getRecordDataType();
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] docete opened a new pull request #10880: [FLINK-15469][SQL] Update UpsertStreamTableSink and RetractStreamTabl…

2020-01-16 Thread GitBox
docete opened a new pull request #10880: [FLINK-15469][SQL] Update 
UpsertStreamTableSink and RetractStreamTabl…
URL: https://github.com/apache/flink/pull/10880
 
 
   …eSink and related interface to new type system
   
   
   ## What is the purpose of the change
   
   Currently UpsertStreamTableSink can only return TypeInformation of the 
requested record, which can't support types with precision and scale, e.g. 
TIMESTAMP(p), DECIMAL(p,s).
   This PR deprecates the getRecordType API and adds a getRecordDataType 
API instead to return the data type of the requested record. 
   
   ## Brief change log
   
   - e0f8cdb Update UpsertStreamTableSink and RetractStreamTableSink and 
related interface to new type system
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10879: [FLINK-15595][table] Remove CoreModule in ModuleManager

2020-01-16 Thread GitBox
flinkbot commented on issue #10879: [FLINK-15595][table] Remove CoreModule in 
ModuleManager
URL: https://github.com/apache/flink/pull/10879#issuecomment-575498643
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1ef1f079f2efe84d67a9e8b17921fc5ed984bbf5 (Fri Jan 17 
06:51:08 UTC 2020)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15595).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15595) Entirely implement resolution order as FLIP-68 concept

2020-01-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15595:
---
Labels: pull-request-available  (was: )

> Entirely implement resolution order as FLIP-68 concept
> --
>
> Key: FLINK-15595
> URL: https://issues.apache.org/jira/browse/FLINK-15595
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> First of all, the implementation is problematic. CoreModule returns 
> BuiltinFunctionDefinition, which cannot be resolved in 
> FunctionCatalogOperatorTable, so it falls back to FlinkSqlOperatorTable.
> Second, the set of functions defined by CoreModule is seriously incomplete; 
> compared with FunctionCatalogOperatorTable, it contains far fewer. As a 
> result, some functions get their priority from CoreModule while others are 
> resolved only after all modules. This is confusing and is not what we want 
> to define in FLIP-68.
> We should:
>  * Resolve BuiltinFunctionDefinition correctly in 
> FunctionCatalogOperatorTable.
>  * Make CoreModule contain all functions in FlinkSqlOperatorTable; a simple 
> way could be to provide a Calcite wrapper that wraps all the functions.
>  * Ensure PlannerContext.getBuiltinSqlOperatorTable does not contain 
> FlinkSqlOperatorTable; we should use a single FunctionCatalogOperatorTable. 
> Otherwise, there will be a lot of confusion.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi opened a new pull request #10879: [FLINK-15595][table] Remove CoreModule in ModuleManager

2020-01-16 Thread GitBox
JingsongLi opened a new pull request #10879: [FLINK-15595][table] Remove 
CoreModule in ModuleManager
URL: https://github.com/apache/flink/pull/10879
 
 
   
   ## What is the purpose of the change
   
   First of all, the implementation is problematic. CoreModule returns 
BuiltinFunctionDefinition, which cannot be resolved in 
FunctionCatalogOperatorTable, so it falls back to FlinkSqlOperatorTable.
   
   Second, the set of functions defined by CoreModule is seriously incomplete; 
compared with FunctionCatalogOperatorTable, it contains far fewer. As a result, 
some functions get their priority from CoreModule while others are resolved 
only after all modules. This is confusing, which is not what we want.
   
   ## Brief change log
   
   Drop the core module for now. The CoreModule class itself is implemented 
correctly, but the underlying layers are not ready yet.
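   
   For illustration, a sketch of the module resolution order under discussion (assuming the FLIP-68 module API shipped in 1.10; this is not code from this PR):
   
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   
   public class ModuleOrderSketch {
       public static void main(String[] args) {
           TableEnvironment tEnv = TableEnvironment.create(
               EnvironmentSettings.newInstance().useBlinkPlanner().build());
           // "core" is loaded by default, and functions resolve in module
           // loading order, so whatever CoreModule exposes shadows the
           // definitions of any module loaded after it.
           for (String module : tEnv.listModules()) {
               System.out.println(module); // prints: core
           }
       }
   }
   ```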
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] curcur commented on issue #10832: [FLINK-14163][runtime]Enforce synchronous registration of Execution#producedPartitions

2020-01-16 Thread GitBox
curcur commented on issue #10832: [FLINK-14163][runtime]Enforce synchronous 
registration of Execution#producedPartitions
URL: https://github.com/apache/flink/pull/10832#issuecomment-575495502
 
 
   Thanks for reviewing the code @zentol! Here is the updated version. Please 
let me know what you think.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10877: [FLINK-15602][table-planner-blink] Padding TIMESTAMP type to respect …

2020-01-16 Thread GitBox
flinkbot commented on issue #10877: [FLINK-15602][table-planner-blink] Padding 
TIMESTAMP type to respect …
URL: https://github.com/apache/flink/pull/10877#issuecomment-575493649
 
 
   
   ## CI report:
   
   * c53d150308c6f08fc6c2d9bd6dd84b36e2021ef1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10878: [FLINK-15599][table] SQL client requires both legacy and blink planner to be on the classpath

2020-01-16 Thread GitBox
flinkbot commented on issue #10878: [FLINK-15599][table] SQL client requires 
both legacy and blink planner to be on the classpath
URL: https://github.com/apache/flink/pull/10878#issuecomment-575493814
 
 
   
   ## CI report:
   
   * 6ce81e21780883796248af5c87d2ec5f1dc5e0ef UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10836: [FLINK-11589][Security] Support security module and context discovery via ServiceLoader

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10836: [FLINK-11589][Security] Support 
security module and context discovery via ServiceLoader
URL: https://github.com/apache/flink/pull/10836#issuecomment-573438212
 
 
   
   ## CI report:
   
   * 0e53d682360fe30462917c820c9aa866caa957b5 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144062568) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4270)
 
   * c5927b76a270ceaf3ae6442826e582d97487c52d Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144863310) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4415)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10832: [FLINK-14163][runtime]Enforce synchronous registration of Execution#producedPartitions

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10832: [FLINK-14163][runtime]Enforce 
synchronous registration of Execution#producedPartitions
URL: https://github.com/apache/flink/pull/10832#issuecomment-573276729
 
 
   
   ## CI report:
   
   * 5eb1599945f1bee35c342f762cc25684013d2d83 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143984465) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4264)
 
   * 73579bbc6b556fe42e55e21581c5994f84480843 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144033919) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4267)
 
   * e303d84258f1e63edd45867f6c4e89e181307ee5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144299177) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4325)
 
   * 426112c59a86ec127040178efda1085231f1988f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144326556) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4334)
 
   * fe731e4a039fba606be83a9535841c157a048d58 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144458002) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4355)
 
   * 1278b4488cb8bc05addaf359458ed4bae6c1967b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144670693) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4386)
 
   * 3eba7e89df2c0b5ad6ba098aaabbab42be37df2f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144675097) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4388)
 
   * 66afb2a54d5da715b3d59bdf58a4de12af833c2f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10332: [FLINK-13905][checkpointing] Separate checkpoint triggering into several asynchronous stages

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10332: [FLINK-13905][checkpointing] 
Separate checkpoint triggering into several asynchronous stages
URL: https://github.com/apache/flink/pull/10332#issuecomment-559012596
 
 
   
   ## CI report:
   
   * 3198d4f64415b0d257733b7d214dece64c392081 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/138375484) 
   * 22c0207087c7e3fee9c2472470cf59f0a4cc3a2a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/138433557) 
   * 0579ee0ad7bf99f51230813ae9ca3a189d7e475d Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140628079) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3481)
 
   * f12b35af0ac084469f5c41556bd11f0b191265ba Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143878207) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4254)
 
   * 879511efb0b0c15efe58d6728e073f366023c384 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/18410) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4350)
 
   * 91a493276a7658eb9cf7bb98969572a467cfeaa7 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144728716) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4397)
 
   * 43ca6bce736528c74742d2dc3c50ca0b43a6645e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15625) flink sql multiple statements syntactic validation supports

2020-01-16 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017746#comment-17017746
 ] 

Jingsong Lee commented on FLINK-15625:
--

[~jackylau] [~1026688210] thanks for the investigation. Can you provide an IT 
case to explain this issue? That would help us understand it.

> flink sql multiple statements syntactic validation supports
> --
>
> Key: FLINK-15625
> URL: https://issues.apache.org/jira/browse/FLINK-15625
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> We know that the blink parser (since blink's first commits) and the calcite 
> parser both support multiple statements now, with statement-by-statement 
> syntactic validation done by calcite. However, calcite validates the 
> statements one by one and does not check references to table names created 
> by previous statements, so we only discover such SQL errors when we submit 
> the Flink application. 
> I think this is eagerly needed by users. We hope the Flink community will 
> support it. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10878: [FLINK-15599][table] SQL client requires both legacy and blink planner to be on the classpath

2020-01-16 Thread GitBox
flinkbot commented on issue #10878: [FLINK-15599][table] SQL client requires 
both legacy and blink planner to be on the classpath
URL: https://github.com/apache/flink/pull/10878#issuecomment-575482343
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit d7aafde7cefee21e4b332517654c96171bdf48ef (Fri Jan 17 
06:21:25 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15599) SQL client requires both legacy and blink planner to be on the classpath

2020-01-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15599:
---
Labels: pull-request-available  (was: )

> SQL client requires both legacy and blink planner to be on the classpath
> 
>
> Key: FLINK-15599
> URL: https://issues.apache.org/jira/browse/FLINK-15599
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: Dawid Wysakowicz
>Assignee: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> The SQL client directly uses some of the internal classes of the legacy 
> planner, thus it does not work with only the blink planner on the classpath.
> The internal class that's being used is 
> {{org.apache.flink.table.functions.FunctionService}}
> This dependency was introduced in FLINK-13195



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10877: [FLINK-15602][table-planner-blink] Padding TIMESTAMP type to respect …

2020-01-16 Thread GitBox
flinkbot commented on issue #10877: [FLINK-15602][table-planner-blink] Padding 
TIMESTAMP type to respect …
URL: https://github.com/apache/flink/pull/10877#issuecomment-575481916
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c53d150308c6f08fc6c2d9bd6dd84b36e2021ef1 (Fri Jan 17 
06:19:25 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] curcur commented on a change in pull request #10832: [FLINK-14163][runtime]Enforce synchronous registration of Execution#producedPartitions

2020-01-16 Thread GitBox
curcur commented on a change in pull request #10832: 
[FLINK-14163][runtime]Enforce synchronous registration of 
Execution#producedPartitions
URL: https://github.com/apache/flink/pull/10832#discussion_r367783969
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/ExecutionTest.java
 ##
 @@ -540,6 +551,46 @@ public void testSlotReleaseAtomicallyReleasesExecution() 
throws Exception {
});
}
 
+   /**
+* Tests that producedPartitions are registered synchronously under an 
asynchronous interface.
+*/
+   @Test(expected = IllegalStateException.class)
 
 Review comment:
   by `try/catch`
   
   I guess you mean something like this?
   ```
   boolean incompletePartitionRegistrationFuture = false;
   try {
Execution.registerProducedPartitions(sourceVertex, new 
LocalTaskManagerLocation(), new ExecutionAttemptID(), false);
   } catch (IllegalStateException e) {
incompletePartitionRegistrationFuture = true;
   }
   
   assertTrue(incompletePartitionRegistrationFuture);
   ```
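   
   For reference, a more compact variant of the same check, as a sketch that 
assumes JUnit 4.13+ (which provides `Assert.assertThrows`) on the test 
classpath:
   ```
   import static org.junit.Assert.assertThrows;
   
   // Same intent as the try/catch above: the call must fail with an
   // IllegalStateException when partition registration did not complete.
   assertThrows(IllegalStateException.class, () ->
       Execution.registerProducedPartitions(
           sourceVertex,
           new LocalTaskManagerLocation(),
           new ExecutionAttemptID(),
           false));
   ```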


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi opened a new pull request #10878: [FLINK-15599][table] SQL client requires both legacy and blink planner to be on the classpath

2020-01-16 Thread GitBox
JingsongLi opened a new pull request #10878: [FLINK-15599][table] SQL client 
requires both legacy and blink planner to be on the classpath
URL: https://github.com/apache/flink/pull/10878
 
 
   
   ## What is the purpose of the change
   
   The SQL client directly uses some of the internal classes of the legacy 
planner, thus it does not work with only the blink planner on the classpath.
   The internal class that's being used is 
org.apache.flink.table.functions.FunctionService
   This dependency was introduced in FLINK-13195
   
   ## Brief change log
   
   - Port FunctionService and FunctionServiceTest to table-common
   - Check explicitly for an instance of Blink's executor via PlannerFactory 
(a sketch of this check follows below)
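   
   A minimal sketch of the kind of explicit check described above, assuming a 
hypothetical helper (the class and package names here are illustrative, not 
the actual Flink internals):
   ```
   // Hypothetical sketch: detect the blink planner by inspecting the executor
   // instance, instead of compiling against legacy-planner classes.
   public final class PlannerChecks {

       /** Returns true if the given executor appears to come from the blink planner. */
       public static boolean isBlinkExecutor(Object executor) {
           // Matching on the class name avoids a hard compile-time dependency
           // on the planner module (package name assumed for illustration).
           return executor != null
               && executor.getClass().getName()
                   .startsWith("org.apache.flink.table.planner.delegation");
       }
   }
   ```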
   
   ## Verifying this change
   
   Manual testing
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15602) Blink planner does not respect the precision when casting timestamp to varchar

2020-01-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15602:
---
Labels: pull-request-available  (was: )

> Blink planner does not respect the precision when casting timestamp to varchar
> --
>
> Key: FLINK-15602
> URL: https://issues.apache.org/jira/browse/FLINK-15602
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Dawid Wysakowicz
>Assignee: Zhenghua Gao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> According to SQL 2011 Part 2 Section 6.13 General Rules 11) d)
> {quote}
> If SD is a datetime data type or an interval data type then let Y be the 
> shortest character string that
> conforms to the definition of <literal> in Subclause 5.3, “<literal>”, and 
> such that the interpreted value
> of Y is SV and the interpreted precision of Y is the precision of SD.
> {quote}
> That means:
> {code}
> select cast(cast(TO_TIMESTAMP('2014-07-02 06:14:00', '-MM-DD HH24:mm:SS') 
> as TIMESTAMP(0)) as VARCHAR(256)) from ...;
> // should produce
> // 2014-07-02 06:14:00
> select cast(cast(TO_TIMESTAMP('2014-07-02 06:14:00', '-MM-DD HH24:mm:SS') 
> as TIMESTAMP(3)) as VARCHAR(256)) from ...;
> // should produce
> // 2014-07-02 06:14:00.000
> select cast(cast(TO_TIMESTAMP('2014-07-02 06:14:00', '-MM-DD HH24:mm:SS') 
> as TIMESTAMP(9)) as VARCHAR(256)) from ...;
> // should produce
> // 2014-07-02 06:14:00.000000000
> {code}
> One possible solution would be to propagate the precision in 
> {{org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens#localTimeToStringCode}}.
>  If I am not mistaken this problem was introduced in [FLINK-14599]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] docete opened a new pull request #10877: [FLINK-15602][table-planner-blink] Padding TIMESTAMP type to respect …

2020-01-16 Thread GitBox
docete opened a new pull request #10877: [FLINK-15602][table-planner-blink] 
Padding TIMESTAMP type to respect …
URL: https://github.com/apache/flink/pull/10877
 
 
   …the precision when casting timestamp to varchar
   
   ## What is the purpose of the change
   
   According to SQL 2011 Part 2 Section 6.13 General Rules 11) d)
   
   > If SD is a datetime data type or an interval data type then let Y be the 
shortest character string that
   > conforms to the definition of <literal> in Subclause 5.3, “<literal>”, and 
such that the interpreted value of Y is SV and the interpreted precision of Y 
is the precision of SD.
   
   This PR pads the TIMESTAMP value to respect the precision when casting 
timestamp to varchar; a short illustration of the padding rule follows below.
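   
   As an illustration of that rule, a self-contained sketch (not the planner's 
actual codegen):
   ```
   import java.time.LocalDateTime;

   // Sketch: render a timestamp with exactly `precision` fractional digits,
   // zero-padded, per SQL:2011 Part 2 Section 6.13 General Rules 11) d).
   public final class TimestampPadding {

       public static String format(LocalDateTime ts, int precision) {
           String base = String.format("%04d-%02d-%02d %02d:%02d:%02d",
               ts.getYear(), ts.getMonthValue(), ts.getDayOfMonth(),
               ts.getHour(), ts.getMinute(), ts.getSecond());
           if (precision == 0) {
               return base;
           }
           // Nanoseconds rendered to 9 digits, then cut to the requested precision.
           String nanos = String.format("%09d", ts.getNano());
           return base + "." + nanos.substring(0, precision);
       }

       public static void main(String[] args) {
           LocalDateTime ts = LocalDateTime.of(2014, 7, 2, 6, 14, 0);
           System.out.println(format(ts, 0)); // 2014-07-02 06:14:00
           System.out.println(format(ts, 3)); // 2014-07-02 06:14:00.000
           System.out.println(format(ts, 9)); // 2014-07-02 06:14:00.000000000
       }
   }
   ```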
   
   ## Brief change log
   
   - c53d150 Padding TIMESTAMP type to respect precision
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as *TempralTypeTests*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15625) flink sql multiple statements syntactic validation supports

2020-01-16 Thread wgcn (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017738#comment-17017738
 ] 

wgcn commented on FLINK-15625:
--

It's indeed a good improvement, and it seems not too complex to realize. I 
would like to try it; a sketch of the desired behaviour follows below.
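
For context, the behaviour being asked for is roughly the following 
(a hypothetical sketch; `Statement`, `referencedTables`, and `createdTables` 
stand in for the real parser output):
{code}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of whole-script validation: statements are checked in
// order, and each CREATE TABLE makes its name visible to later statements.
final class ScriptValidator {

    static void validateScript(List<Statement> statements) {
        Set<String> knownTables = new HashSet<>();
        for (Statement stmt : statements) {
            for (String table : stmt.referencedTables()) {
                if (!knownTables.contains(table)) {
                    throw new IllegalStateException(
                        "Unknown table '" + table + "' referenced before creation");
                }
            }
            knownTables.addAll(stmt.createdTables());
        }
    }

    // Stand-in for a parsed SQL statement.
    interface Statement {
        Set<String> referencedTables();
        Set<String> createdTables();
    }
}
{code}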

> flink sql multiple statements syntactic validation supports
> --
>
> Key: FLINK-15625
> URL: https://issues.apache.org/jira/browse/FLINK-15625
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> We know that the blink parser (since blink's first commits) and the calcite 
> parser both support multiple statements now, with statement-by-statement 
> syntactic validation done by calcite. However, calcite validates the 
> statements one by one and does not check references to table names created 
> by previous statements, so we only discover such SQL errors when we submit 
> the Flink application. 
> I think this is eagerly needed by users. We hope the Flink community will 
> support it. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] ifndef-SleePy commented on issue #10332: [FLINK-13905][checkpointing] Separate checkpoint triggering into several asynchronous stages

2020-01-16 Thread GitBox
ifndef-SleePy commented on issue #10332: [FLINK-13905][checkpointing] Separate 
checkpoint triggering into several asynchronous stages
URL: https://github.com/apache/flink/pull/10332#issuecomment-575479252
 
 
   @pnowojski, I have addressed the explicit comments. Regarding the async 
calls in the timer thread, I think we could leave them alone here, because 
that part is intermediate; if everything goes well, we could replace the timer 
thread soon. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15623) Building flink-python with maven profile docs-and-source fails

2020-01-16 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-15623.
---
Resolution: Fixed

> Building flink-python with maven profile docs-and-source fails
> ---
>
> Key: FLINK-15623
> URL: https://issues.apache.org/jira/browse/FLINK-15623
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
> Environment: rev: 91d96abe5f42bd088a326870b4885d79611fccb5
>Reporter: Gary Yao
>Assignee: sunjincheng
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *Description*
> Building flink-python with maven profile {{docs-and-source}} fails due to 
> checkstyle violations. 
> *How to reproduce*
> Running
> {noformat}
> mvn clean install -pl flink-python -Pdocs-and-source -DskipTests 
> -DretryFailedDeploymentCount=10
> {noformat}
> should fail with the following error
> {noformat}
> [...]
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8343] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8344] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8345] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8346] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8347] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8348] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8349] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8350] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 18.046 s
> [INFO] Finished at: 2020-01-16T16:44:01+00:00
> [INFO] Final Memory: 158M/2826M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (validate) on 
> project flink-python_2.11: You have 7603 Checkstyle violations. -> [Help 1]
> [ERROR]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15623) Building flink-python with maven profile docs-and-source fails

2020-01-16 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017736#comment-17017736
 ] 

Hequn Cheng commented on FLINK-15623:
-

Fixed 
in 1.11.0 via 9575b91880e976923f4a0fc30bd9fe76e0f7e401
in 1.10.0 via 66b0d815a449e340ee7eeb645d446dda42471ed5 

> Building flink-python with maven profile docs-and-source fails
> ---
>
> Key: FLINK-15623
> URL: https://issues.apache.org/jira/browse/FLINK-15623
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
> Environment: rev: 91d96abe5f42bd088a326870b4885d79611fccb5
>Reporter: Gary Yao
>Assignee: sunjincheng
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *Description*
> Building flink-python with maven profile {{docs-and-source}} fails due to 
> checkstyle violations. 
> *How to reproduce*
> Running
> {noformat}
> mvn clean install -pl flink-python -Pdocs-and-source -DskipTests 
> -DretryFailedDeploymentCount=10
> {noformat}
> should fail with the following error
> {noformat}
> [...]
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8343] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8344] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8345] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8346] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8347] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8348] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8349] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8350] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 18.046 s
> [INFO] Finished at: 2020-01-16T16:44:01+00:00
> [INFO] Final Memory: 158M/2826M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (validate) on 
> project flink-python_2.11: You have 7603 Checkstyle violations. -> [Help 1]
> [ERROR]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hequn8128 closed pull request #10876: [FLINK-15623][build] Exclude code style check of generation code in P…

2020-01-16 Thread GitBox
hequn8128 closed pull request #10876: [FLINK-15623][build] Exclude code style 
check of generation code in P…
URL: https://github.com/apache/flink/pull/10876
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10876: [FLINK-15623][build] Exclude code style check of generation code in P…

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10876: [FLINK-15623][build] Exclude code 
style check of generation code in P…
URL: https://github.com/apache/flink/pull/10876#issuecomment-575453799
 
 
   
   ## CI report:
   
   * 92c609248116ac31a5e3c7cb00a81d2a70845f06 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144854219) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4414)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10836: [FLINK-11589][Security] Support security module and context discovery via ServiceLoader

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10836: [FLINK-11589][Security] Support 
security module and context discovery via ServiceLoader
URL: https://github.com/apache/flink/pull/10836#issuecomment-573438212
 
 
   
   ## CI report:
   
   * 0e53d682360fe30462917c820c9aa866caa957b5 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144062568) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4270)
 
   * c5927b76a270ceaf3ae6442826e582d97487c52d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] ifndef-SleePy commented on a change in pull request #10332: [FLINK-13905][checkpointing] Separate checkpoint triggering into several asynchronous stages

2020-01-16 Thread GitBox
ifndef-SleePy commented on a change in pull request #10332: 
[FLINK-13905][checkpointing] Separate checkpoint triggering into several 
asynchronous stages
URL: https://github.com/apache/flink/pull/10332#discussion_r367778946
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
 ##
 @@ -576,85 +648,181 @@ public boolean isShutdown() {
checkpoint.setStatsCallback(callback);
}
 
-   // schedule the timer that will clean up the expired checkpoints
-   final Runnable canceller = () -> {
-   synchronized (lock) {
-   // only do the work if the checkpoint is not 
discarded anyways
-   // note that checkpoint completion discards the 
pending checkpoint object
-   if (!checkpoint.isDiscarded()) {
-   LOG.info("Checkpoint {} of job {} 
expired before completing.", checkpointID, job);
-
-   abortPendingCheckpoint(
-   checkpoint,
-   new 
CheckpointException(CheckpointFailureReason.CHECKPOINT_EXPIRED));
-   }
-   }
-   };
+   synchronized (lock) {
 
-   try {
-   // re-acquire the coordinator-wide lock
-   synchronized (lock) {
-   preCheckBeforeTriggeringCheckpoint(isPeriodic, 
props.forceCheckpoint());
+   pendingCheckpoints.put(checkpointID, checkpoint);
 
-   LOG.info("Triggering checkpoint {} @ {} for job 
{}.", checkpointID, timestamp, job);
+   ScheduledFuture cancellerHandle = timer.schedule(
+   new CheckpointCanceller(checkpoint),
+   checkpointTimeout, TimeUnit.MILLISECONDS);
 
-   pendingCheckpoints.put(checkpointID, 
checkpoint);
+   if (!checkpoint.setCancellerHandle(cancellerHandle)) {
+   // checkpoint is already disposed!
+   cancellerHandle.cancel(false);
+   }
+   }
 
-   ScheduledFuture cancellerHandle = 
timer.schedule(
-   canceller,
-   checkpointTimeout, 
TimeUnit.MILLISECONDS);
+   LOG.info("Triggering checkpoint {} @ {} for job {}.", 
checkpointID, timestamp, job);
+   return checkpoint;
+   }
 
-   if 
(!checkpoint.setCancellerHandle(cancellerHandle)) {
-   // checkpoint is already disposed!
-   cancellerHandle.cancel(false);
-   }
+   /**
+* Snapshot master hook states asynchronously.
+*
+* @param checkpoint the pending checkpoint
+* @return the future represents master hook states are finished or not
+*/
+   private CompletableFuture snapshotMasterState(PendingCheckpoint 
checkpoint) {
+   if (masterHooks.isEmpty()) {
+   return CompletableFuture.completedFuture(null);
+   }
 
-   // TODO, asynchronously snapshots master hook 
without waiting here
-   for (MasterTriggerRestoreHook masterHook : 
masterHooks.values()) {
-   final MasterState masterState =
-   
MasterHooks.triggerHook(masterHook, checkpointID, timestamp, executor)
-   .get(checkpointTimeout, 
TimeUnit.MILLISECONDS);
-   
checkpoint.acknowledgeMasterState(masterHook.getIdentifier(), masterState);
-   }
-   
Preconditions.checkState(checkpoint.areMasterStatesFullyAcknowledged());
-   }
-   // end of lock scope
+   final long checkpointID = checkpoint.getCheckpointId();
+   final long timestamp = checkpoint.getCheckpointTimestamp();
 
-   final CheckpointOptions checkpointOptions = new 
CheckpointOptions(
-   props.getCheckpointType(),
-   
checkpointStorageLocation.getLocationReference());
+   final CompletableFuture masterStateCompletableFuture = 
new CompletableFuture<>();
+   for (MasterTriggerRestoreHook masterHook : 
masterHooks.values()) {
+   MasterHooks
+   

[GitHub] [flink] ifndef-SleePy commented on a change in pull request #10332: [FLINK-13905][checkpointing] Separate checkpoint triggering into several asynchronous stages

2020-01-16 Thread GitBox
ifndef-SleePy commented on a change in pull request #10332: 
[FLINK-13905][checkpointing] Separate checkpoint triggering into several 
asynchronous stages
URL: https://github.com/apache/flink/pull/10332#discussion_r367778123
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
 ##
 @@ -485,76 +481,151 @@ public boolean isShutdown() {
CheckpointProperties props,
@Nullable String externalSavepointLocation,
boolean isPeriodic,
-   boolean advanceToEndOfTime) throws CheckpointException {
+   boolean advanceToEndOfTime) {
 
if (advanceToEndOfTime && !(props.isSynchronous() && 
props.isSavepoint())) {
-   throw new IllegalArgumentException("Only synchronous 
savepoints are allowed to advance the watermark to MAX.");
+   return FutureUtils.completedExceptionally(new 
IllegalArgumentException(
+   "Only synchronous savepoints are allowed to 
advance the watermark to MAX."));
}
 
-   // make some eager pre-checks
+   final CompletableFuture 
onCompletionPromise =
+   new CompletableFuture<>();
synchronized (lock) {
-   preCheckBeforeTriggeringCheckpoint(isPeriodic, 
props.forceCheckpoint());
-   }
-
-   // check if all tasks that we need to trigger are running.
-   // if not, abort the checkpoint
-   Execution[] executions = new Execution[tasksToTrigger.length];
-   for (int i = 0; i < tasksToTrigger.length; i++) {
-   Execution ee = 
tasksToTrigger[i].getCurrentExecutionAttempt();
-   if (ee == null) {
-   LOG.info("Checkpoint triggering task {} of job 
{} is not being executed at the moment. Aborting checkpoint.",
-   
tasksToTrigger[i].getTaskNameWithSubtaskIndex(),
-   job);
-   throw new 
CheckpointException(CheckpointFailureReason.NOT_ALL_REQUIRED_TASKS_RUNNING);
-   } else if (ee.getState() == ExecutionState.RUNNING) {
-   executions[i] = ee;
-   } else {
-   LOG.info("Checkpoint triggering task {} of job 
{} is not in state {} but {} instead. Aborting checkpoint.",
-   
tasksToTrigger[i].getTaskNameWithSubtaskIndex(),
-   job,
-   ExecutionState.RUNNING,
-   ee.getState());
-   throw new 
CheckpointException(CheckpointFailureReason.NOT_ALL_REQUIRED_TASKS_RUNNING);
+   if (isTriggering || !triggerRequestQueue.isEmpty()) {
+   // we can't trigger checkpoint directly if 
there is a trigger request being processed
+   // or queued
+   triggerRequestQueue.add(new 
CheckpointTriggerRequest(
+   timestamp,
+   props,
+   externalSavepointLocation,
+   isPeriodic,
+   advanceToEndOfTime,
+   onCompletionPromise));
+   return onCompletionPromise;
}
}
+   startTriggeringCheckpoint(
+   timestamp,
+   props,
+   externalSavepointLocation,
+   isPeriodic,
+   advanceToEndOfTime,
+   onCompletionPromise);
+   return onCompletionPromise;
+   }
 
-   // next, check if all tasks that need to acknowledge the 
checkpoint are running.
-   // if not, abort the checkpoint
-   Map ackTasks = new 
HashMap<>(tasksToWaitFor.length);
+   private void startTriggeringCheckpoint(
+   long timestamp,
+   CheckpointProperties props,
+   @Nullable String externalSavepointLocation,
+   boolean isPeriodic,
+   boolean advanceToEndOfTime,
+   CompletableFuture onCompletionPromise) {
 
-   for (ExecutionVertex ev : tasksToWaitFor) {
-   Execution ee = ev.getCurrentExecutionAttempt();
-   if (ee != null) {
-   ackTasks.put(ee.getAttemptId(), ev);
-   } else {
-   LOG.info("Checkpoint 

[GitHub] [flink] ifndef-SleePy commented on a change in pull request #10332: [FLINK-13905][checkpointing] Separate checkpoint triggering into several asynchronous stages

2020-01-16 Thread GitBox
ifndef-SleePy commented on a change in pull request #10332: 
[FLINK-13905][checkpointing] Separate checkpoint triggering into several 
asynchronous stages
URL: https://github.com/apache/flink/pull/10332#discussion_r367774356
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
 ##
 @@ -485,76 +481,151 @@ public boolean isShutdown() {
CheckpointProperties props,
@Nullable String externalSavepointLocation,
boolean isPeriodic,
-   boolean advanceToEndOfTime) throws CheckpointException {
+   boolean advanceToEndOfTime) {
 
if (advanceToEndOfTime && !(props.isSynchronous() && 
props.isSavepoint())) {
-   throw new IllegalArgumentException("Only synchronous 
savepoints are allowed to advance the watermark to MAX.");
+   return FutureUtils.completedExceptionally(new 
IllegalArgumentException(
+   "Only synchronous savepoints are allowed to 
advance the watermark to MAX."));
}
 
-   // make some eager pre-checks
+   final CompletableFuture 
onCompletionPromise =
+   new CompletableFuture<>();
synchronized (lock) {
-   preCheckBeforeTriggeringCheckpoint(isPeriodic, 
props.forceCheckpoint());
-   }
-
-   // check if all tasks that we need to trigger are running.
-   // if not, abort the checkpoint
-   Execution[] executions = new Execution[tasksToTrigger.length];
-   for (int i = 0; i < tasksToTrigger.length; i++) {
-   Execution ee = 
tasksToTrigger[i].getCurrentExecutionAttempt();
-   if (ee == null) {
-   LOG.info("Checkpoint triggering task {} of job 
{} is not being executed at the moment. Aborting checkpoint.",
-   
tasksToTrigger[i].getTaskNameWithSubtaskIndex(),
-   job);
-   throw new 
CheckpointException(CheckpointFailureReason.NOT_ALL_REQUIRED_TASKS_RUNNING);
-   } else if (ee.getState() == ExecutionState.RUNNING) {
-   executions[i] = ee;
-   } else {
-   LOG.info("Checkpoint triggering task {} of job 
{} is not in state {} but {} instead. Aborting checkpoint.",
-   
tasksToTrigger[i].getTaskNameWithSubtaskIndex(),
-   job,
-   ExecutionState.RUNNING,
-   ee.getState());
-   throw new 
CheckpointException(CheckpointFailureReason.NOT_ALL_REQUIRED_TASKS_RUNNING);
+   if (isTriggering || !triggerRequestQueue.isEmpty()) {
 
 Review comment:
   `triggerRequestQueue` might be accessed in the main thread, through 
`shutdown` and `stopCheckpointScheduler`; a condensed sketch of the 
queue-guarded triggering pattern follows below.
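   
   (Sketch assumptions: names are simplified relative to the real 
CheckpointCoordinator, and error handling is omitted.)
   ```
   import java.util.ArrayDeque;
   import java.util.Queue;
   import java.util.concurrent.CompletableFuture;

   // Sketch: if a trigger is already in flight, later requests are queued and
   // drained one at a time when the current trigger finishes.
   final class TriggerQueueSketch {
       private final Object lock = new Object();
       private final Queue<Runnable> triggerRequestQueue = new ArrayDeque<>();
       private boolean isTriggering;

       CompletableFuture<Void> triggerCheckpoint() {
           CompletableFuture<Void> onCompletion = new CompletableFuture<>();
           synchronized (lock) {
               if (isTriggering || !triggerRequestQueue.isEmpty()) {
                   // A trigger request is being processed or queued already.
                   triggerRequestQueue.add(() -> startTriggering(onCompletion));
                   return onCompletion;
               }
               isTriggering = true;
           }
           startTriggering(onCompletion);
           return onCompletion;
       }

       private void startTriggering(CompletableFuture<Void> onCompletion) {
           // ... the asynchronous triggering stages would run here ...
           onCompletion.complete(null);
           synchronized (lock) {
               isTriggering = false;
               Runnable next = triggerRequestQueue.poll();
               if (next != null) {
                   isTriggering = true;
                   next.run(); // drain the next queued request
               }
           }
       }
   }
   ```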


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15592) Streaming sql throws hive exception when it doesn't use any hive table

2020-01-16 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017715#comment-17017715
 ] 

Rui Li commented on FLINK-15592:


OK, [~jark] please assign this to me. Thanks.

> Streaming sql throws hive exception when it doesn't use any hive table
> -
>
> Key: FLINK-15592
> URL: https://issues.apache.org/jira/browse/FLINK-15592
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Jeff Zhang
>Priority: Critical
> Fix For: 1.10.0
>
>
> I use the following streaming SQL to query a kafka table whose metadata is 
> stored in the hive metastore via HiveCatalog. But it throws a hive-related 
> exception, which is very confusing.
> SQL
> {code}
> SELECT *
> FROM (
>SELECT *,
>  ROW_NUMBER() OVER(
>ORDER BY event_ts) AS rownum
>FROM source_kafka)
> WHERE rownum <= 10
> {code}
> Exception
> {code}
> Caused by: org.apache.flink.table.api.ValidationException: SQL validation 
> failed. java.lang.reflect.InvocationTargetException
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:130)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:105)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:127)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:66)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:464)
>   at 
> org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:103)
>   ... 13 more
> Caused by: java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
>   at 
> org.apache.flink.table.planner.functions.utils.HiveFunctionUtils.invokeGetResultType(HiveFunctionUtils.java:77)
>   at 
> org.apache.flink.table.planner.functions.utils.HiveAggSqlFunction.lambda$createReturnTypeInference$0(HiveAggSqlFunction.java:82)
>   at 
> org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:470)
>   at 
> org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:437)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:303)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:219)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5600)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5587)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1691)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1676)
>   at 
> org.apache.calcite.sql.SqlCallBinding.getOperandType(SqlCallBinding.java:237)
>   at 
> org.apache.calcite.sql.type.OrdinalReturnTypeInference.inferReturnType(OrdinalReturnTypeInference.java:40)
>   at 
> org.apache.calcite.sql.type.SqlTypeTransformCascade.inferReturnType(SqlTypeTransformCascade.java:54)
>   at 
> org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:470)
>   at 
> org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:437)
>   at 
> org.apache.calcite.sql.SqlOverOperator.deriveType(SqlOverOperator.java:86)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5600)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5587)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1691)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1676)
>   at 
> org.apache.calcite.sql.SqlAsOperator.deriveType(SqlAsOperator.java:133)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5600)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5587)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1691)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1676)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:479)
>   at 
> 

[GitHub] [flink] flinkbot edited a comment on issue #10876: [FLINK-15623][build] Exclude code style check of generation code in P…

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10876: [FLINK-15623][build] Exclude code 
style check of generation code in P…
URL: https://github.com/apache/flink/pull/10876#issuecomment-575453799
 
 
   
   ## CI report:
   
   * 92c609248116ac31a5e3c7cb00a81d2a70845f06 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144854219) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4414)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15592) Streaming sql throws hive exception when it doesn't use any hive table

2020-01-16 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017698#comment-17017698
 ] 

Jingsong Lee commented on FLINK-15592:
--

Increased priority, since we don't have a good solution in FLINK-15595

> Streaming sql throws hive exception when it doesn't use any hive table
> -
>
> Key: FLINK-15592
> URL: https://issues.apache.org/jira/browse/FLINK-15592
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Jeff Zhang
>Priority: Critical
> Fix For: 1.10.0
>
>
> I use the following streaming SQL to query a kafka table whose metadata is 
> stored in the hive metastore via HiveCatalog. But it throws a hive-related 
> exception, which is very confusing.
> SQL
> {code}
> SELECT *
> FROM (
>SELECT *,
>  ROW_NUMBER() OVER(
>ORDER BY event_ts) AS rownum
>FROM source_kafka)
> WHERE rownum <= 10
> {code}
> Exception
> {code}
> Caused by: org.apache.flink.table.api.ValidationException: SQL validation 
> failed. java.lang.reflect.InvocationTargetException
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:130)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:105)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:127)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:66)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:464)
>   at 
> org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:103)
>   ... 13 more
> Caused by: java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
>   at 
> org.apache.flink.table.planner.functions.utils.HiveFunctionUtils.invokeGetResultType(HiveFunctionUtils.java:77)
>   at 
> org.apache.flink.table.planner.functions.utils.HiveAggSqlFunction.lambda$createReturnTypeInference$0(HiveAggSqlFunction.java:82)
>   at 
> org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:470)
>   at 
> org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:437)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:303)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:219)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5600)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5587)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1691)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1676)
>   at 
> org.apache.calcite.sql.SqlCallBinding.getOperandType(SqlCallBinding.java:237)
>   at 
> org.apache.calcite.sql.type.OrdinalReturnTypeInference.inferReturnType(OrdinalReturnTypeInference.java:40)
>   at 
> org.apache.calcite.sql.type.SqlTypeTransformCascade.inferReturnType(SqlTypeTransformCascade.java:54)
>   at 
> org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:470)
>   at 
> org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:437)
>   at 
> org.apache.calcite.sql.SqlOverOperator.deriveType(SqlOverOperator.java:86)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5600)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5587)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1691)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1676)
>   at 
> org.apache.calcite.sql.SqlAsOperator.deriveType(SqlAsOperator.java:133)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5600)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5587)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1691)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1676)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:479)
>   at 
> 

[jira] [Updated] (FLINK-15592) Streaming sql throws hive exception when it doesn't use any hive table

2020-01-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15592:
-
Priority: Critical  (was: Major)

> Streaming sql throws hive exception when it doesn't use any hive table
> -
>
> Key: FLINK-15592
> URL: https://issues.apache.org/jira/browse/FLINK-15592
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Jeff Zhang
>Priority: Critical
> Fix For: 1.10.0
>
>
> I use the following streaming SQL to query a kafka table whose metadata is 
> stored in the hive metastore via HiveCatalog. But it throws a hive-related 
> exception, which is very confusing.
> SQL
> {code}
> SELECT *
> FROM (
>SELECT *,

[jira] [Commented] (FLINK-15592) Streaming sql throw hive exception when it doesn't use any hive table

2020-01-16 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017697#comment-17017697
 ] 

Jingsong Lee commented on FLINK-15592:
--

[~lirui] Do you want to add a blacklist to the Hive function module and take 
this ticket?

> Streaming sql throw hive exception when it doesn't use any hive table
> -
>
> Key: FLINK-15592
> URL: https://issues.apache.org/jira/browse/FLINK-15592
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Jeff Zhang
>Priority: Major
> Fix For: 1.10.0
>
>
> I use the following streaming SQL to query a Kafka table whose metadata is 
> stored in the Hive metastore via HiveCatalog. But it throws a Hive-related 
> exception, which is very confusing.
> SQL
> {code}
> SELECT *
> FROM (
>SELECT *,
>  ROW_NUMBER() OVER(
>ORDER BY event_ts) AS rownum
>FROM source_kafka)
> WHERE rownum <= 10
> {code}
> Exception
> {code}
> Caused by: org.apache.flink.table.api.ValidationException: SQL validation 
> failed. java.lang.reflect.InvocationTargetException
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:130)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:105)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:127)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:66)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:464)
>   at 
> org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:103)
>   ... 13 more
> Caused by: java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
>   at 
> org.apache.flink.table.planner.functions.utils.HiveFunctionUtils.invokeGetResultType(HiveFunctionUtils.java:77)
>   at 
> org.apache.flink.table.planner.functions.utils.HiveAggSqlFunction.lambda$createReturnTypeInference$0(HiveAggSqlFunction.java:82)
>   at 
> org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:470)
>   at 
> org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:437)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:303)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:219)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5600)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5587)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1691)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1676)
>   at 
> org.apache.calcite.sql.SqlCallBinding.getOperandType(SqlCallBinding.java:237)
>   at 
> org.apache.calcite.sql.type.OrdinalReturnTypeInference.inferReturnType(OrdinalReturnTypeInference.java:40)
>   at 
> org.apache.calcite.sql.type.SqlTypeTransformCascade.inferReturnType(SqlTypeTransformCascade.java:54)
>   at 
> org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:470)
>   at 
> org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:437)
>   at 
> org.apache.calcite.sql.SqlOverOperator.deriveType(SqlOverOperator.java:86)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5600)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5587)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1691)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1676)
>   at 
> org.apache.calcite.sql.SqlAsOperator.deriveType(SqlAsOperator.java:133)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5600)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5587)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1691)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1676)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:479)
>   at 
> 
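
For readers trying to reproduce this, a minimal sketch of the reported setup (not code from the ticket): a Blink-planner streaming TableEnvironment backed by a HiveCatalog, running the TOP-N query above. The catalog name, Hive conf dir, and Hive version below are placeholders, and the Kafka-backed table source_kafka is assumed to already exist in the catalog.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveCatalogStreamingRepro {

	public static void main(String[] args) {
		EnvironmentSettings settings = EnvironmentSettings.newInstance()
			.useBlinkPlanner()
			.inStreamingMode()
			.build();
		TableEnvironment tEnv = TableEnvironment.create(settings);

		// Placeholder catalog parameters; only the fact that a HiveCatalog
		// is the current catalog matters for this report.
		HiveCatalog hive = new HiveCatalog("myhive", "default", "/path/to/hive-conf", "2.3.4");
		tEnv.registerCatalog("myhive", hive);
		tEnv.useCatalog("myhive");

		// The query touches only the Kafka-backed table, yet validation
		// reportedly fails inside Hive function return-type inference.
		tEnv.sqlQuery(
			"SELECT * FROM ("
				+ " SELECT *, ROW_NUMBER() OVER (ORDER BY event_ts) AS rownum"
				+ " FROM source_kafka)"
				+ " WHERE rownum <= 10");
	}
}
{code}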

[jira] [Issue Comment Deleted] (FLINK-15599) SQL client requires both legacy and blink planner to be on the classpath

2020-01-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15599:
-
Comment: was deleted

(was: Thanks [~dwysakowicz], yes, we could use 
{{environment.getExecution().isBatchPlanner()}} instead.)

> SQL client requires both legacy and blink planner to be on the classpath
> 
>
> Key: FLINK-15599
> URL: https://issues.apache.org/jira/browse/FLINK-15599
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: Dawid Wysakowicz
>Assignee: Jingsong Lee
>Priority: Critical
> Fix For: 1.10.0
>
>
> The SQL client directly uses some of the internal classes of the legacy 
> planner, thus it does not work with only the Blink planner on the classpath.
> The internal class being used is 
> {{org.apache.flink.table.functions.FunctionService}}.
> This dependency was introduced in FLINK-13195



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10876: [FLINK-15623][build] Exclude code style check of generation code in P…

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10876: [FLINK-15623][build] Exclude code 
style check of generation code in P…
URL: https://github.com/apache/flink/pull/10876#issuecomment-575453799
 
 
   
   ## CI report:
   
   * 92c609248116ac31a5e3c7cb00a81d2a70845f06 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144854219) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4414)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] curcur commented on a change in pull request #10832: [FLINK-14163][runtime]Enforce synchronous registration of Execution#producedPartitions

2020-01-16 Thread GitBox
curcur commented on a change in pull request #10832: 
[FLINK-14163][runtime]Enforce synchronous registration of 
Execution#producedPartitions
URL: https://github.com/apache/flink/pull/10832#discussion_r367759810
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java
 ##
 @@ -605,6 +606,26 @@ public void setInitialState(@Nullable 
JobManagerTaskRestore taskRestore) {
});
}
 
+   /**
+* Register producedPartitions to {@link ShuffleMaster}
+*
+* HACK: Please notice that this method simulates asynchronous 
registration in a synchronous way
+* by making sure the returned {@link CompletableFuture} from {@link 
ShuffleMaster#registerPartitionWithProducer}
+* is done immediately.
+*
+* {@link Execution#producedPartitions} are registered through an 
asynchronous interface
+* {@link ShuffleMaster#registerPartitionWithProducer} to {@link 
ShuffleMaster}, however they are not always
+* accessed through callbacks. So, it is possible that {@link 
Execution#producedPartitions}
+* have not been available yet when accessed (in {@link 
Execution#deploy} for example).
+*
+* Since the only implementation of {@link ShuffleMaster} is {@link 
NettyShuffleMaster},
+* which indeed registers producedPartition in a synchronous way, hence 
this method enforces
 
 Review comment:
   good catch :-)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
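
To make the enforcement described in the javadoc above concrete, here is a sketch (not the PR's actual code; names are simplified) of rejecting ShuffleMaster implementations whose registration future is not completed immediately:

```java
import java.util.concurrent.CompletableFuture;

import org.apache.flink.runtime.shuffle.PartitionDescriptor;
import org.apache.flink.runtime.shuffle.ProducerDescriptor;
import org.apache.flink.runtime.shuffle.ShuffleDescriptor;
import org.apache.flink.runtime.shuffle.ShuffleMaster;
import org.apache.flink.util.Preconditions;

final class SyncRegistrationSketch {

	// Accepts the registration only if the ShuffleMaster completed the
	// returned future immediately, i.e. registered synchronously.
	static ShuffleDescriptor registerSynchronously(
			ShuffleMaster<?> shuffleMaster,
			PartitionDescriptor partition,
			ProducerDescriptor producer) {
		CompletableFuture<? extends ShuffleDescriptor> future =
			shuffleMaster.registerPartitionWithProducer(partition, producer);
		Preconditions.checkState(
			future.isDone(),
			"ShuffleMaster#registerPartitionWithProducer must return a completed future.");
		return future.getNow(null); // safe: verified above that the future is done
	}
}
```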


[GitHub] [flink] flinkbot commented on issue #10876: [FLINK-15623][build] Exclude code style check of generation code in P…

2020-01-16 Thread GitBox
flinkbot commented on issue #10876: [FLINK-15623][build] Exclude code style 
check of generation code in P…
URL: https://github.com/apache/flink/pull/10876#issuecomment-575453799
 
 
   
   ## CI report:
   
   * 92c609248116ac31a5e3c7cb00a81d2a70845f06 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15582) Enable batch scheduling tests in LegacySchedulerBatchSchedulingTest for DefaultScheduler as well

2020-01-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-15582:

Parent: FLINK-15626
Issue Type: Sub-task  (was: Improvement)

> Enable batch scheduling tests in LegacySchedulerBatchSchedulingTest for 
> DefaultScheduler as well
> 
>
> Key: FLINK-15582
> URL: https://issues.apache.org/jira/browse/FLINK-15582
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{testSchedulingOfJobWithFewerSlotsThanParallelism}} is a common case, but it 
> is only tested with the legacy scheduler in 
> {{LegacySchedulerBatchSchedulingTest}} at the moment.
> We should enable it for the DefaultScheduler as well. 
> This also allows us to safely remove {{LegacySchedulerBatchSchedulingTest}} 
> when we remove the LegacyScheduler and related components without 
> losing test coverage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
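
A generic JUnit 4 pattern for running one scheduling test against both schedulers (a sketch with hypothetical names, not the actual Flink test; the two values correspond to the jobmanager.scheduler option in 1.10):

{code:java}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class BatchSchedulingParameterizedTest {

	// Values of the jobmanager.scheduler option in 1.10.
	@Parameterized.Parameters(name = "scheduler = {0}")
	public static Collection<String> schedulers() {
		return Arrays.asList("legacy", "ng");
	}

	private final String schedulerType;

	public BatchSchedulingParameterizedTest(String schedulerType) {
		this.schedulerType = schedulerType;
	}

	@Test
	public void testSchedulingOfJobWithFewerSlotsThanParallelism() throws Exception {
		// Hypothetical body: build a batch job whose parallelism exceeds the
		// number of slots, run it with the configured scheduler, and assert
		// that it eventually finishes.
	}
}
{code}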


[jira] [Updated] (FLINK-15626) Remove legacy scheduler

2020-01-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-15626:

Priority: Critical  (was: Blocker)

> Remove legacy scheduler
> ---
>
> Key: FLINK-15626
> URL: https://issues.apache.org/jira/browse/FLINK-15626
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Zhu Zhu
>Priority: Critical
>  Labels: Umbrella
> Fix For: 1.11.0
>
>
> This umbrella ticket tracks the tickets for removing the legacy scheduler 
> and related components, so that we can have a much cleaner scheduler 
> framework which significantly simplifies further development work on job 
> scheduling.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15626) Remove legacy scheduler

2020-01-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-15626:

Priority: Blocker  (was: Major)

> Remove legacy scheduler
> ---
>
> Key: FLINK-15626
> URL: https://issues.apache.org/jira/browse/FLINK-15626
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Zhu Zhu
>Priority: Blocker
>  Labels: Umbrella
> Fix For: 1.11.0
>
>
> This umbrella ticket tracks the tickets for removing the legacy scheduler 
> and related components, so that we can have a much cleaner scheduler 
> framework which significantly simplifies further development work on job 
> scheduling.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10875: [FLINK-15089][connectors]Pulsar sink

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10875: [FLINK-15089][connectors]Pulsar sink
URL: https://github.com/apache/flink/pull/10875#issuecomment-575441751
 
 
   
   ## CI report:
   
   * 781af18f6dccfef18beb2411ef0838f22ee5f1e5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144850096) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4413)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15626) Remove legacy scheduler

2020-01-16 Thread Zhu Zhu (Jira)
Zhu Zhu created FLINK-15626:
---

 Summary: Remove legacy scheduler
 Key: FLINK-15626
 URL: https://issues.apache.org/jira/browse/FLINK-15626
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / Coordination
Affects Versions: 1.11.0
Reporter: Zhu Zhu
 Fix For: 1.11.0


This umbrella ticket tracks the tickets for removing the legacy scheduler and 
related components, so that we can have a much cleaner scheduler framework 
which significantly simplifies further development work on job scheduling.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10876: [FLINK-15623][build] Exclude code style check of generation code in P…

2020-01-16 Thread GitBox
flinkbot commented on issue #10876: [FLINK-15623][build] Exclude code style 
check of generation code in P…
URL: https://github.com/apache/flink/pull/10876#issuecomment-575447215
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 92c609248116ac31a5e3c7cb00a81d2a70845f06 (Fri Jan 17 
03:22:42 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15625) flink sql multiple statements syntatic validation supports

2020-01-16 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-15625:
-
Description: 
we know that the blink parser (since blink's first commits) and the calcite 
parser both support multiple statements now, and multiple-statement syntactic 
validation is done by calcite, which validates sql statements one by one; it 
does not validate table names and other references from previous statements, 
so we only learn about such sql errors when we submit the flink applications. 

I think this is eagerly needed by users. We hope the flink community will support it 

  was:
we know that flink supports multiple-statement syntactic validation via 
calcite, which validates sql statements one by one; it does not validate table 
names and other references from previous statements, so we only learn about 
such sql errors when we submit the flink applications. 

I think this is eagerly needed by users. We hope the flink community will support it 


> flink sql multiple statements syntatic validation supports
> --
>
> Key: FLINK-15625
> URL: https://issues.apache.org/jira/browse/FLINK-15625
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Reporter: jackylau
>Priority: Major
> Fix For: 1.10.0
>
>
> we know that the blink parser (since blink's first commits) and the calcite 
> parser both support multiple statements now, and multiple-statement syntactic 
> validation is done by calcite, which validates sql statements one by one; it 
> does not validate table names and other references from previous statements, 
> so we only learn about such sql errors when we submit the flink applications. 
> I think this is eagerly needed by users. We hope the flink community will 
> support it 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15623) Buildling flink-python with maven profile docs-and-source fails

2020-01-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15623:
---
Labels: pull-request-available  (was: )

> Buildling flink-python with maven profile docs-and-source fails
> ---
>
> Key: FLINK-15623
> URL: https://issues.apache.org/jira/browse/FLINK-15623
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
> Environment: rev: 91d96abe5f42bd088a326870b4885d79611fccb5
>Reporter: Gary Yao
>Assignee: sunjincheng
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> *Description*
> Building flink-python with maven profile {{docs-and-source}} fails due to 
> checkstyle violations. 
> *How to reproduce*
> Running
> {noformat}
> mvn clean install -pl flink-python -Pdocs-and-source -DskipTests 
> -DretryFailedDeploymentCount=10
> {noformat}
> should fail with the following error
> {noformat}
> [...]
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8343] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8344] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8345] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8346] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8347] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8348] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8349] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8350] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 18.046 s
> [INFO] Finished at: 2020-01-16T16:44:01+00:00
> [INFO] Final Memory: 158M/2826M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (validate) on 
> project flink-python_2.11: You have 7603 Checkstyle violations. -> [Help 1]
> [ERROR]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] sunjincheng121 opened a new pull request #10876: [FLINK-15623][build] Exclude code style check of generation code in P…

2020-01-16 Thread GitBox
sunjincheng121 opened a new pull request #10876: [FLINK-15623][build] Exclude 
code style check of generation code in P…
URL: https://github.com/apache/flink/pull/10876
 
 
   
   ## What is the purpose of the change
   
   *This PR excludes the code style check for generated code in PyFlink.*
   
   
   ## Brief change log
 - Exclude the code style check for generated code via `suppressions.xml`.
   
   ## Verifying this change
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15625) flink sql multiple statements syntatic validation supports

2020-01-16 Thread jackylau (Jira)
jackylau created FLINK-15625:


 Summary: flink sql multiple statements syntatic validation supports
 Key: FLINK-15625
 URL: https://issues.apache.org/jira/browse/FLINK-15625
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Legacy Planner
Reporter: jackylau
 Fix For: 1.10.0


we know that flink supports multiple-statement syntactic validation via 
calcite, which validates sql statements one by one; it does not validate table 
names and other references from previous statements, so we only learn about 
such sql errors when we submit the flink applications. 

I think this is eagerly needed by users. We hope the flink community will support it 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
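
For context, a sketch of what per-statement parsing looks like with Calcite (assuming Calcite 1.18+, which provides parseStmtList; the script is illustrative). Each statement parses in isolation, and nothing checks cross-statement references, which is the gap this ticket describes:

{code:java}
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.SqlNodeList;
import org.apache.calcite.sql.parser.SqlParseException;
import org.apache.calcite.sql.parser.SqlParser;

public class MultiStatementParseSketch {

	public static void main(String[] args) throws SqlParseException {
		String script = "SELECT 1; SELECT a FROM t";
		SqlNodeList statements = SqlParser.create(script).parseStmtList();
		for (SqlNode stmt : statements) {
			// Each statement is parsed in isolation; nothing here checks
			// whether table `t` actually exists, so such errors only surface
			// later, at validation or job submission time.
			System.out.println(stmt.getKind() + ": " + stmt);
		}
	}
}
{code}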


[jira] [Commented] (FLINK-15610) How to achieve the udf that the number of return column is uncertain

2020-01-16 Thread hehuiyuan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017677#comment-17017677
 ] 

hehuiyuan commented on FLINK-15610:
---

Thx

> How to achieve the udf that the number of return column is uncertain 
> -
>
> Key: FLINK-15610
> URL: https://issues.apache.org/jira/browse/FLINK-15610
> Project: Flink
>  Issue Type: Wish
>  Components: Table SQL / API
>Reporter: hehuiyuan
>Priority: Major
>
> For 
> example:[https://help.aliyun.com/knowledge_detail/98948.html?spm=a2c4g.11186631.2.3.21b81761QhpBte]
>  
> {code:java}
> SELECT c1, c2 
> FROM T1, lateral table(MULTI_KEYVALUE(str, split1, split2, key1, key2)) 
> as T(c1, c2)
> SELECT c1, c2, c3 
> FROM T1, lateral table(MULTI_KEYVALUE(str, split1, split2, key1, key2, key3)) 
> as T(c1, c2, c3)
> {code}
> For Tablefunction:
> {code:java}
> public TypeInformation getResultType() {
>   return Types.ROW(Types.STRING(),Types.STRING());
> }
> {code}
> The return type of `getResultType` is `TypeInformation`; I want to 
> make the size of the returned row not fixed.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
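
One workaround often suggested for this kind of request (a sketch, not an official recommendation): make the arity a constructor parameter, so each registered instance of the function has a fixed but configurable row width. The class name and extraction logic below are hypothetical:

{code:java}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

public class MultiKeyValue extends TableFunction<Row> {

	private final int arity;

	public MultiKeyValue(int arity) {
		this.arity = arity;
	}

	public void eval(String str, String split1, String split2, String... keys) {
		Row row = new Row(arity);
		for (int i = 0; i < arity && i < keys.length; i++) {
			row.setField(i, extract(str, split1, split2, keys[i]));
		}
		collect(row);
	}

	// Hypothetical key-value extraction; assumes the separators are not
	// regex metacharacters.
	private String extract(String str, String split1, String split2, String key) {
		for (String pair : str.split(split1)) {
			String[] kv = pair.split(split2, 2);
			if (kv.length == 2 && kv[0].equals(key)) {
				return kv[1];
			}
		}
		return null;
	}

	@Override
	public TypeInformation<Row> getResultType() {
		// The row width is fixed per instance but configurable at registration.
		TypeInformation<?>[] fields = new TypeInformation<?>[arity];
		for (int i = 0; i < arity; i++) {
			fields[i] = Types.STRING;
		}
		return Types.ROW(fields);
	}
}
{code}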


[GitHub] [flink] flinkbot commented on issue #10875: [FLINK-15089][connectors]Pulsar sink

2020-01-16 Thread GitBox
flinkbot commented on issue #10875: [FLINK-15089][connectors]Pulsar sink
URL: https://github.com/apache/flink/pull/10875#issuecomment-575441751
 
 
   
   ## CI report:
   
   * 781af18f6dccfef18beb2411ef0838f22ee5f1e5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] tankilo commented on issue #8649: [FLINK-12770] [elasticsearch] Make number of bulk concurrentRequests configurable

2020-01-16 Thread GitBox
tankilo commented on issue #8649: [FLINK-12770] [elasticsearch] Make number of 
bulk concurrentRequests configurable
URL: https://github.com/apache/flink/pull/8649#issuecomment-575441837
 
 
   Needs attention


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] curcur commented on a change in pull request #10832: [FLINK-14163][runtime]Enforce synchronous registration of Execution#producedPartitions

2020-01-16 Thread GitBox
curcur commented on a change in pull request #10832: 
[FLINK-14163][runtime]Enforce synchronous registration of 
Execution#producedPartitions
URL: https://github.com/apache/flink/pull/10832#discussion_r367744319
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/ExecutionTest.java
 ##
 @@ -540,6 +551,46 @@ public void testSlotReleaseAtomicallyReleasesExecution() 
throws Exception {
});
}
 
+   /**
+* Tests that producedPartitions are registered synchronously under an 
asynchronous interface.
 
 Review comment:
   How about this:
   
   Tests that asynchronous registrations (incomplete futures returned by {@link 
ShuffleMaster#registerPartitionWithProducer}) are rejected.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15623) Buildling flink-python with maven profile docs-and-source fails

2020-01-16 Thread sunjincheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017675#comment-17017675
 ] 

sunjincheng commented on FLINK-15623:
-

Agree with [~chesnay]'s analysis. The goal "source:jar" of maven-source-plugin 
invokes the phase 
[generate-sources|https://maven.apache.org/plugins/maven-source-plugin/jar-mojo.html].
 It first executes the earlier phase "validate", during which 
maven-checkstyle-plugin runs. At that point the sources have already been 
generated, so the checkstyle check fails. I think it makes sense to exclude the 
generated sources from the checkstyle check.

> Buildling flink-python with maven profile docs-and-source fails
> ---
>
> Key: FLINK-15623
> URL: https://issues.apache.org/jira/browse/FLINK-15623
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
> Environment: rev: 91d96abe5f42bd088a326870b4885d79611fccb5
>Reporter: Gary Yao
>Assignee: sunjincheng
>Priority: Blocker
> Fix For: 1.10.0
>
>
> *Description*
> Building flink-python with maven profile {{docs-and-source}} fails due to 
> checkstyle violations. 
> *How to reproduce*
> Running
> {noformat}
> mvn clean install -pl flink-python -Pdocs-and-source -DskipTests 
> -DretryFailedDeploymentCount=10
> {noformat}
> should fail with the following error
> {noformat}
> [...]
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8343] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8344] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8345] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8346] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8347] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8348] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8349] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8350] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 18.046 s
> [INFO] Finished at: 2020-01-16T16:44:01+00:00
> [INFO] Final Memory: 158M/2826M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (validate) on 
> project flink-python_2.11: You have 7603 Checkstyle violations. -> [Help 1]
> [ERROR]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15602) Blink planner does not respect the precision when casting timestamp to varchar

2020-01-16 Thread Zhenghua Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017666#comment-17017666
 ] 

Zhenghua Gao commented on FLINK-15602:
--

[~twalthr] AFAIK we have padded the DECIMAL type and intervals in the Blink planner.

 

> Blink planner does not respect the precision when casting timestamp to varchar
> --
>
> Key: FLINK-15602
> URL: https://issues.apache.org/jira/browse/FLINK-15602
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Dawid Wysakowicz
>Assignee: Zhenghua Gao
>Priority: Blocker
> Fix For: 1.10.0
>
>
> According to SQL 2011 Part 2 Section 6.13 General Rules 11) d)
> {quote}
> If SD is a datetime data type or an interval data type then let Y be the 
> shortest character string that
> conforms to the definition of  in Subclause 5.3, “”, and 
> such that the interpreted value
> of Y is SV and the interpreted precision of Y is the precision of SD.
> {quote}
> That means:
> {code}
> select cast(cast(TO_TIMESTAMP('2014-07-02 06:14:00', '-MM-DD HH24:mm:SS') 
> as TIMESTAMP(0)) as VARCHAR(256)) from ...;
> // should produce
> // 2014-07-02 06:14:00
> select cast(cast(TO_TIMESTAMP('2014-07-02 06:14:00', '-MM-DD HH24:mm:SS') 
> as TIMESTAMP(3)) as VARCHAR(256)) from ...;
> // should produce
> // 2014-07-02 06:14:00.000
> select cast(cast(TO_TIMESTAMP('2014-07-02 06:14:00', '-MM-DD HH24:mm:SS') 
> as TIMESTAMP(9)) as VARCHAR(256)) from ...;
> // should produce
> // 2014-07-02 06:14:00.0
> {code}
> One possible solution would be to propagate the precision in 
> {{org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens#localTimeToStringCode}}.
>  If I am not mistaken this problem was introduced in [FLINK-14599]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
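
To make the quoted SQL:2011 rule concrete, a self-contained java.time sketch (illustrative only, not Flink's actual codegen) that pads the fractional seconds to the declared precision:

{code:java}
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;

public class TimestampPrecisionDemo {

	// Pads the fractional seconds to exactly `precision` digits, as the
	// quoted rule requires; precision 0 omits the fraction entirely.
	static String format(LocalDateTime ts, int precision) {
		DateTimeFormatterBuilder builder = new DateTimeFormatterBuilder()
			.appendPattern("yyyy-MM-dd HH:mm:ss");
		if (precision > 0) {
			builder.appendFraction(ChronoField.NANO_OF_SECOND, precision, precision, true);
		}
		return ts.format(builder.toFormatter());
	}

	public static void main(String[] args) {
		LocalDateTime ts = LocalDateTime.of(2014, 7, 2, 6, 14, 0);
		System.out.println(format(ts, 0)); // 2014-07-02 06:14:00
		System.out.println(format(ts, 3)); // 2014-07-02 06:14:00.000
		System.out.println(format(ts, 9)); // 2014-07-02 06:14:00.000000000
	}
}
{code}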


[GitHub] [flink] curcur commented on a change in pull request #10832: [FLINK-14163][runtime]Enforce synchronous registration of Execution#producedPartitions

2020-01-16 Thread GitBox
curcur commented on a change in pull request #10832: 
[FLINK-14163][runtime]Enforce synchronous registration of 
Execution#producedPartitions
URL: https://github.com/apache/flink/pull/10832#discussion_r367744181
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/ExecutionTest.java
 ##
 @@ -540,6 +551,46 @@ public void testSlotReleaseAtomicallyReleasesExecution() 
throws Exception {
});
}
 
+   /**
+* Tests that producedPartitions are registered synchronously under an 
asynchronous interface.
 
 Review comment:
   How about this:
   
   Tests that asynchronous registrations (incomplete futures returned by {@link 
ShuffleMaster#registerPartitionWithProducer}) are rejected.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10455: [FLINK-15089][connectors] Puslar catalog

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10455: [FLINK-15089][connectors] Puslar 
catalog
URL: https://github.com/apache/flink/pull/10455#issuecomment-562462991
 
 
   
   ## CI report:
   
   * 206bb09e47ff4f36514a93c96f7cadc30f62cd4b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/139653117) 
   * 72bbd5f4e9f3966d5ac31eb54a5547ea1c099768 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144846602) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4412)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10875: [FLINK-15089][connectors]Pulsar sink

2020-01-16 Thread GitBox
flinkbot commented on issue #10875: [FLINK-15089][connectors]Pulsar sink
URL: https://github.com/apache/flink/pull/10875#issuecomment-575436352
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 781af18f6dccfef18beb2411ef0838f22ee5f1e5 (Fri Jan 17 
02:27:39 UTC 2020)
   
   **Warnings:**
* **2 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15089).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] yjshen opened a new pull request #10875: [FLINK-15089][connectors]Pulsar sink

2020-01-16 Thread GitBox
yjshen opened a new pull request #10875: [FLINK-15089][connectors]Pulsar sink
URL: https://github.com/apache/flink/pull/10875
 
 
   ## What is the purpose of the change
   
   This PR implements Pulsar sink which is part of FLIP-72, and based on #10455 
.
   
   ## Brief change log
   
  - `FlinkPulsarSinkBase` for the common structure of a sink that supports 
at-least-once delivery.
  -  `FlinkPulsarRowSink` for the row sink and `FlinkPulsarSink` for the POJO sink.
  - `CachedPulsarClient` to enable sharing a PulsarClient among tasks in the 
same JVM. PulsarClient holds all the resources that talk to Pulsar and can 
be shared by multiple producers.
   
   ## Verifying this change
   
   This change is already covered by existing tests, `FlinkPulsarSinkTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes )
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15552) parameters --library and --jar doesn't work for DDL in sqlClient

2020-01-16 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017658#comment-17017658
 ] 

Jingsong Lee commented on FLINK-15552:
--

Hi [~jark], I see, we are already calling {{wrapClassLoader}} in {{sqlQuery}}, 
right.

But I am talking about the other invocations: as we discussed in FLINK-15509, 
we are considering validating/creating the source/sink in {{sqlUpdate(DDL)}}. 
We can do more there to ensure safety.

> parameters --library and --jar doesn't work for DDL in sqlClient
> 
>
> Key: FLINK-15552
> URL: https://issues.apache.org/jira/browse/FLINK-15552
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client, Table SQL / Runtime
>Reporter: Terry Wang
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> How to Reproduce:
> first, I start a SQL client, using `-l` to point to a Kafka connector 
> directory.
> `
>  bin/sql-client.sh embedded -l /xx/connectors/kafka/
> `
> Then, I create a Kafka table like the following 
> `
> Flink SQL> CREATE TABLE MyUserTable (
> >   content String
> > ) WITH (
> >   'connector.type' = 'kafka',
> >   'connector.version' = 'universal',
> >   'connector.topic' = 'test',
> >   'connector.properties.zookeeper.connect' = 'localhost:2181',
> >   'connector.properties.bootstrap.servers' = 'localhost:9092',
> >   'connector.properties.group.id' = 'testGroup',
> >   'connector.startup-mode' = 'earliest-offset',
> >   'format.type' = 'csv'
> >  );
> [INFO] Table has been created.
> `
> Then I select from the just-created table and an exception is thrown: 
> `
> Flink SQL> select * from MyUserTable;
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.TableSourceFactory' in
> the classpath.
> Reason: Required context properties mismatch.
> The matching candidates:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> Mismatched properties:
> 'connector.type' expects 'filesystem', but is 'kafka'
> The following properties are requested:
> connector.properties.bootstrap.servers=localhost:9092
> connector.properties.group.id=testGroup
> connector.properties.zookeeper.connect=localhost:2181
> connector.startup-mode=earliest-offset
> connector.topic=test
> connector.type=kafka
> connector.version=universal
> format.type=csv
> schema.0.data-type=VARCHAR(2147483647)
> schema.0.name=content
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> `
> Potential Reasons:
> Now we use `TableFactoryUtil#findAndCreateTableSource` to convert a 
> CatalogTable to a TableSource, but when calling `TableFactoryService.find` we 
> don't pass the current classLoader to this method, so the default loader will 
> be the bootstrap classloader, which cannot find our factory.
> I verified on my machine; it is indeed caused by this behavior.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
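
A sketch of the classloader fallback under discussion (mirroring the fix proposed in the corresponding PR review; simplified, showing only the default-loader branch):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.ServiceLoader;

import org.apache.flink.table.factories.TableFactory;

public final class FactoryDiscoverySketch {

	// Falls back to the thread context class loader, which the SQL client
	// points at the jars passed via --jar/--library; the ServiceLoader
	// default cannot see those jars.
	static List<TableFactory> discoverFactories(Optional<ClassLoader> classLoader) {
		ClassLoader cl = classLoader.orElse(Thread.currentThread().getContextClassLoader());
		List<TableFactory> result = new ArrayList<>();
		ServiceLoader.load(TableFactory.class, cl)
			.iterator()
			.forEachRemaining(result::add);
		return result;
	}
}
{code}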


[jira] [Commented] (FLINK-5763) Make savepoints self-contained and relocatable

2020-01-16 Thread Jiayi Liao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-5763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017654#comment-17017654
 ] 

Jiayi Liao commented on FLINK-5763:
---

[~sewen] +1 on this.

Will it be a breaking change in 1.11? Or should we create something like 
SavepointV3 to ensure backwards compatibility?

> Make savepoints self-contained and relocatable
> --
>
> Key: FLINK-5763
> URL: https://issues.apache.org/jira/browse/FLINK-5763
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Ufuk Celebi
>Priority: Critical
>  Labels: usability
> Fix For: 1.11.0
>
>
> After a user has triggered a savepoint, a single savepoint file will be 
> returned as a handle to the savepoint. A savepoint to {{}} creates a 
> savepoint file like {{/savepoint-}}.
> This file contains the metadata of the corresponding checkpoint, but not the 
> actual program state. While this works well for short term management 
> (pause-and-resume a job), it makes it hard to manage savepoints over longer 
> periods of time.
> h4. Problems
> h5. Scattered Checkpoint Files
> For file system based checkpoints (FsStateBackend, RocksDBStateBackend) this 
> results in the savepoint referencing files from the checkpoint directory 
> (usually different than ). For users, it is virtually impossible to 
> tell which checkpoint files belong to a savepoint and which are lingering 
> around. This can easily lead to accidentally invalidating a savepoint by 
> deleting checkpoint files.
> h5. Savepoints Not Relocatable
> Even if a user is able to figure out which checkpoint files belong to a 
> savepoint, moving these files will invalidate the savepoint as well, because 
> the metadata file references absolute file paths.
> h5. Forced to Use CLI for Disposal
> Because of the scattered files, the user is in practice forced to use Flink’s 
> CLI to dispose a savepoint. This should be possible to handle in the scope of 
> the user’s environment via a file system delete operation.
> h4. Proposal
> In order to solve the described problems, savepoints should contain all their 
> state, both metadata and program state, inside a single directory. 
> Furthermore the metadata must only hold relative references to the checkpoint 
> files. This makes it obvious which files make up the state of a savepoint and 
> it is possible to move savepoints around by moving the savepoint directory.
> h5. Desired File Layout
> Triggering a savepoint to {{}} creates a directory as follows:
> {code}
> /savepoint--
>   +-- _metadata
>   +-- data- [1 or more]
> {code}
> We include the JobID in the savepoint directory name in order to give some 
> hints about which job a savepoint belongs to.
> h5. CLI
> - Trigger: When triggering a savepoint to {{}} the savepoint 
> directory will be returned as the handle to the savepoint.
> - Restore: Users can restore by pointing to the directory or the _metadata 
> file. The data files should be required to be in the same directory as the 
> _metadata file.
> - Dispose: The disposal command should be deprecated and eventually removed. 
> While deprecated, disposal can happen by specifying the directory or the 
> _metadata file (same as restore).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
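
A tiny illustration of the relative-reference idea from the proposal (hypothetical file names, not Flink code): on restore, data files are resolved against the directory containing _metadata, so the whole savepoint directory can be moved freely.

{code:java}
import java.nio.file.Path;
import java.nio.file.Paths;

public class RelativeSavepointResolution {

	public static void main(String[] args) {
		// The metadata file is located first; data files are then resolved
		// relative to its parent directory instead of via absolute paths,
		// so moving the directory keeps the savepoint intact.
		Path metadata = Paths.get("/backups/savepoint-abc123-def456/_metadata");
		Path dataFile = metadata.getParent().resolve("data-000001"); // hypothetical name
		System.out.println(dataFile); // /backups/savepoint-abc123-def456/data-000001
	}
}
{code}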


[GitHub] [flink] leonardBang commented on a change in pull request #10874: [FLINK-15552][table api] parameters --library and --jar doesn't work for DDL in sqlClient

2020-01-16 Thread GitBox
leonardBang commented on a change in pull request #10874: [FLINK-15552][table 
api] parameters --library and --jar doesn't work for DDL in sqlClient
URL: https://github.com/apache/flink/pull/10874#discussion_r367740553
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/TableFactoryService.java
 ##
 @@ -214,7 +213,10 @@
.iterator()
.forEachRemaining(result::add);
} else {
-   
defaultLoader.iterator().forEachRemaining(result::add);
+   ServiceLoader
+   .load(TableFactory.class, 
Thread.currentThread().getContextClassLoader())
+   .iterator()
+   .forEachRemaining(result::add);
}
 
 Review comment:
   nice tips


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15624) Pulsar Sink

2020-01-16 Thread Yijie Shen (Jira)
Yijie Shen created FLINK-15624:
--

 Summary: Pulsar Sink
 Key: FLINK-15624
 URL: https://issues.apache.org/jira/browse/FLINK-15624
 Project: Flink
  Issue Type: Sub-task
Reporter: Yijie Shen






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-11781) Reject "DISABLED" as value for yarn.per-job-cluster.include-user-jar

2020-01-16 Thread watters.wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017652#comment-17017652
 ] 

watters.wang commented on FLINK-11781:
--

[~gjy] hi, does Flink 1.7 support specifying 
yarn.per-job-cluster.include-user-jar with "-yD 
yarn.per-job-cluster.include-user-jar FIRST"? 

I found that "-yD yarn.tags" is effective, but "-yD 
yarn.per-job-cluster.include-user-jar FIRST" isn't.

> Reject "DISABLED" as value for yarn.per-job-cluster.include-user-jar
> 
>
> Key: FLINK-11781
> URL: https://issues.apache.org/jira/browse/FLINK-11781
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.6.4, 1.7.2, 1.8.0
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *Description*
> Setting {{yarn.per-job-cluster.include-user-jar: DISABLED}} in 
> {{flink-conf.yaml}} is not supported (anymore). Doing so will lead to the job 
> jar not being on the system classpath, which is mandatory if Flink is 
> deployed in job mode. The job will never run.
> *Expected behavior*
> Documentation should reflect that setting 
> {{yarn.per-job-cluster.include-user-jar: DISABLED}} does not work.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15623) Buildling flink-python with maven profile docs-and-source fails

2020-01-16 Thread sunjincheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng reassigned FLINK-15623:
---

Assignee: sunjincheng

> Buildling flink-python with maven profile docs-and-source fails
> ---
>
> Key: FLINK-15623
> URL: https://issues.apache.org/jira/browse/FLINK-15623
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
> Environment: rev: 91d96abe5f42bd088a326870b4885d79611fccb5
>Reporter: Gary Yao
>Assignee: sunjincheng
>Priority: Blocker
> Fix For: 1.10.0
>
>
> *Description*
> Building flink-python with maven profile {{docs-and-source}} fails due to 
> checkstyle violations. 
> *How to reproduce*
> Running
> {noformat}
> mvn clean install -pl flink-python -Pdocs-and-source -DskipTests 
> -DretryFailedDeploymentCount=10
> {noformat}
> should fail with the following error
> {noformat}
> [...]
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8343] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8344] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8345] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8346] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8347] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8348] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8349] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8350] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 18.046 s
> [INFO] Finished at: 2020-01-16T16:44:01+00:00
> [INFO] Final Memory: 158M/2826M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (validate) on 
> project flink-python_2.11: You have 7603 Checkstyle violations. -> [Help 1]
> [ERROR]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #10874: [FLINK-15552][table api] parameters --library and --jar doesn't work for DDL in sqlClient

2020-01-16 Thread GitBox
wuchong commented on a change in pull request #10874: [FLINK-15552][table api] 
parameters --library and --jar doesn't work for DDL in sqlClient
URL: https://github.com/apache/flink/pull/10874#discussion_r367738744
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/TableFactoryService.java
 ##
 @@ -214,7 +213,10 @@
.iterator()
.forEachRemaining(result::add);
} else {
-   
defaultLoader.iterator().forEachRemaining(result::add);
+   ServiceLoader
+   .load(TableFactory.class, 
Thread.currentThread().getContextClassLoader())
+   .iterator()
+   .forEachRemaining(result::add);
}
 
 Review comment:
   We can simplify it a bit more:
   
   ```java
   ClassLoader cl = classLoader.orElse(Thread.currentThread().getContextClassLoader());
   ServiceLoader
   	.load(TableFactory.class, cl)
   	.iterator()
   	.forEachRemaining(result::add);
   ```
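   For readers outside the diff context, a self-contained sketch of the discovery pattern being discussed here, assuming (as in the surrounding method) that `classLoader` is an `Optional<ClassLoader>`; the class and method names below are illustrative, not the actual `TableFactoryService` code:
   
   ```java
   import org.apache.flink.table.factories.TableFactory;
   
   import java.util.ArrayList;
   import java.util.List;
   import java.util.Optional;
   import java.util.ServiceLoader;
   
   public final class FactoryDiscoverySketch {
   
   	// Loads every TableFactory visible to the chosen classloader. Falling back
   	// to the thread context classloader matters in the SQL client, because the
   	// jars added via --jar/--library are only visible through that loader.
   	public static List<TableFactory> discover(Optional<ClassLoader> classLoader) {
   		ClassLoader cl = classLoader.orElse(Thread.currentThread().getContextClassLoader());
   		List<TableFactory> result = new ArrayList<>();
   		ServiceLoader.load(TableFactory.class, cl).iterator().forEachRemaining(result::add);
   		return result;
   	}
   }
   ```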


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15552) parameters --library and --jar don't work for DDL in sqlClient

2020-01-16 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017649#comment-17017649
 ] 

Jark Wu commented on FLINK-15552:
-

[~lzljs3620320] we are already calling {{wrapClassLoader}}, right? The problem 
is in 
{{org.apache.flink.table.factories.TableFactoryService#discoverFactories}}, 
which doesn't use the thread's context class loader. 

> parameters --library and --jar don't work for DDL in sqlClient
> 
>
> Key: FLINK-15552
> URL: https://issues.apache.org/jira/browse/FLINK-15552
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client, Table SQL / Runtime
>Reporter: Terry Wang
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> How to Reproduce:
> First, I start a SQL client and use `-l` to point to a Kafka connector 
> directory.
> `
>  bin/sql-client.sh embedded -l /xx/connectors/kafka/
> `
> Then, I create a Kafka table like the following:
> `
> Flink SQL> CREATE TABLE MyUserTable (
> >   content String
> > ) WITH (
> >   'connector.type' = 'kafka',
> >   'connector.version' = 'universal',
> >   'connector.topic' = 'test',
> >   'connector.properties.zookeeper.connect' = 'localhost:2181',
> >   'connector.properties.bootstrap.servers' = 'localhost:9092',
> >   'connector.properties.group.id' = 'testGroup',
> >   'connector.startup-mode' = 'earliest-offset',
> >   'format.type' = 'csv'
> >  );
> [INFO] Table has been created.
> `
> Then I select from the just-created table, and an exception is thrown: 
> `
> Flink SQL> select * from MyUserTable;
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.TableSourceFactory' in
> the classpath.
> Reason: Required context properties mismatch.
> The matching candidates:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> Mismatched properties:
> 'connector.type' expects 'filesystem', but is 'kafka'
> The following properties are requested:
> connector.properties.bootstrap.servers=localhost:9092
> connector.properties.group.id=testGroup
> connector.properties.zookeeper.connect=localhost:2181
> connector.startup-mode=earliest-offset
> connector.topic=test
> connector.type=kafka
> connector.version=universal
> format.type=csv
> schema.0.data-type=VARCHAR(2147483647)
> schema.0.name=content
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> `
> Potential reason:
> We currently use `TableFactoryUtil#findAndCreateTableSource` to convert a 
> CatalogTable to a TableSource, but when calling `TableFactoryService.find` we 
> don't pass the current classLoader to this method, so the default loader will 
> be the bootstrap ClassLoader, which cannot find our factory.
> I verified on my machine that this behavior is indeed the cause.
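
A hedged sketch of the fix direction, assuming {{TableFactoryService}}'s {{find}} overload that accepts an explicit {{ClassLoader}}; {{catalogTable}} and {{userClassLoader}} are illustrative names for the table being converted and the session classloader that holds the jars added via {{--jar}}/{{--library}}:

{code:java}
// pass the session classloader explicitly instead of relying on the default
// lookup, which cannot see the dynamically added connector jars
TableSourceFactory factory = TableFactoryService.find(
	TableSourceFactory.class,
	catalogTable.toProperties(),  // the 'connector.*' / 'format.*' properties
	userClassLoader);             // illustrative: loader containing --jar/--library jars
{code}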



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10455: [FLINK-15089][connectors] Pulsar catalog

2020-01-16 Thread GitBox
flinkbot edited a comment on issue #10455: [FLINK-15089][connectors] Pulsar 
catalog
URL: https://github.com/apache/flink/pull/10455#issuecomment-562462991
 
 
   
   ## CI report:
   
   * 206bb09e47ff4f36514a93c96f7cadc30f62cd4b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/139653117) 
   * 72bbd5f4e9f3966d5ac31eb54a5547ea1c099768 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15552) parameters --library and --jar don't work for DDL in sqlClient

2020-01-16 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017643#comment-17017643
 ] 

Jingsong Lee commented on FLINK-15552:
--

BTW, this is not the first classloader problem of this kind; see FLINK-15167.

> parameters --library and --jar don't work for DDL in sqlClient
> 
>
> Key: FLINK-15552
> URL: https://issues.apache.org/jira/browse/FLINK-15552
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client, Table SQL / Runtime
>Reporter: Terry Wang
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> How to Reproduce:
> First, I start a SQL client and use `-l` to point to a Kafka connector 
> directory.
> `
>  bin/sql-client.sh embedded -l /xx/connectors/kafka/
> `
> Then, I create a Kafka table like the following:
> `
> Flink SQL> CREATE TABLE MyUserTable (
> >   content String
> > ) WITH (
> >   'connector.type' = 'kafka',
> >   'connector.version' = 'universal',
> >   'connector.topic' = 'test',
> >   'connector.properties.zookeeper.connect' = 'localhost:2181',
> >   'connector.properties.bootstrap.servers' = 'localhost:9092',
> >   'connector.properties.group.id' = 'testGroup',
> >   'connector.startup-mode' = 'earliest-offset',
> >   'format.type' = 'csv'
> >  );
> [INFO] Table has been created.
> `
> Then I select from the just-created table, and an exception is thrown: 
> `
> Flink SQL> select * from MyUserTable;
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.TableSourceFactory' in
> the classpath.
> Reason: Required context properties mismatch.
> The matching candidates:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> Mismatched properties:
> 'connector.type' expects 'filesystem', but is 'kafka'
> The following properties are requested:
> connector.properties.bootstrap.servers=localhost:9092
> connector.properties.group.id=testGroup
> connector.properties.zookeeper.connect=localhost:2181
> connector.startup-mode=earliest-offset
> connector.topic=test
> connector.type=kafka
> connector.version=universal
> format.type=csv
> schema.0.data-type=VARCHAR(2147483647)
> schema.0.name=content
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> `
> Potential reason:
> We currently use `TableFactoryUtil#findAndCreateTableSource` to convert a 
> CatalogTable to a TableSource, but when calling `TableFactoryService.find` we 
> don't pass the current classLoader to this method, so the default loader will 
> be the bootstrap ClassLoader, which cannot find our factory.
> I verified on my machine that this behavior is indeed the cause.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15552) parameters --library and --jar don't work for DDL in sqlClient

2020-01-16 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017642#comment-17017642
 ] 

Jingsong Lee commented on FLINK-15552:
--

It looks like there are many call sites into {{tableEnv}}, and they are all 
risky: we don't know whether they will depend on the user jar in the future. 
So I suggest wrapping them all with {{wrapClassLoader}}.
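
A generic sketch of that wrapping pattern (not the actual SQL client {{wrapClassLoader}} implementation): temporarily install a given classloader as the thread context classloader while a call runs, and always restore the previous one.

{code:java}
import java.util.function.Supplier;

public final class ContextClassLoaderSketch {

	// Runs the given call with 'cl' as the thread context classloader,
	// restoring the previous loader afterwards, even if the call throws.
	public static <R> R withContextClassLoader(ClassLoader cl, Supplier<R> call) {
		Thread current = Thread.currentThread();
		ClassLoader original = current.getContextClassLoader();
		current.setContextClassLoader(cl);
		try {
			return call.get();  // e.g. () -> tableEnv.sqlQuery(query)
		} finally {
			current.setContextClassLoader(original);  // restore unconditionally
		}
	}
}
{code}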

> parameters --library and --jar don't work for DDL in sqlClient
> 
>
> Key: FLINK-15552
> URL: https://issues.apache.org/jira/browse/FLINK-15552
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client, Table SQL / Runtime
>Reporter: Terry Wang
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> How to Reproduce:
> First, I start a SQL client and use `-l` to point to a Kafka connector 
> directory.
> `
>  bin/sql-client.sh embedded -l /xx/connectors/kafka/
> `
> Then, I create a Kafka table like the following:
> `
> Flink SQL> CREATE TABLE MyUserTable (
> >   content String
> > ) WITH (
> >   'connector.type' = 'kafka',
> >   'connector.version' = 'universal',
> >   'connector.topic' = 'test',
> >   'connector.properties.zookeeper.connect' = 'localhost:2181',
> >   'connector.properties.bootstrap.servers' = 'localhost:9092',
> >   'connector.properties.group.id' = 'testGroup',
> >   'connector.startup-mode' = 'earliest-offset',
> >   'format.type' = 'csv'
> >  );
> [INFO] Table has been created.
> `
> Then I select from the just-created table, and an exception is thrown: 
> `
> Flink SQL> select * from MyUserTable;
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.TableSourceFactory' in
> the classpath.
> Reason: Required context properties mismatch.
> The matching candidates:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> Mismatched properties:
> 'connector.type' expects 'filesystem', but is 'kafka'
> The following properties are requested:
> connector.properties.bootstrap.servers=localhost:9092
> connector.properties.group.id=testGroup
> connector.properties.zookeeper.connect=localhost:2181
> connector.startup-mode=earliest-offset
> connector.topic=test
> connector.type=kafka
> connector.version=universal
> format.type=csv
> schema.0.data-type=VARCHAR(2147483647)
> schema.0.name=content
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> `
> Potential reason:
> We currently use `TableFactoryUtil#findAndCreateTableSource` to convert a 
> CatalogTable to a TableSource, but when calling `TableFactoryService.find` we 
> don't pass the current classLoader to this method, so the default loader will 
> be the bootstrap ClassLoader, which cannot find our factory.
> I verified on my machine that this behavior is indeed the cause.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] yjshen commented on issue #10455: [FLINK-15089][connectors] Pulsar catalog

2020-01-16 Thread GitBox
yjshen commented on issue #10455: [FLINK-15089][connectors] Pulsar catalog
URL: https://github.com/apache/flink/pull/10455#issuecomment-575427324
 
 
   @bowenli86 Hi Bowen, I've updated the PR with tests. Could you please help 
review it? Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15476) Update StreamingFileSink documentation -- bulk encoded writer now supports customized checkpoint policy

2020-01-16 Thread Ying Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017522#comment-17017522
 ] 

Ying Xu commented on FLINK-15476:
-

Hi [~kkl0u], is it OK to pursue this Jira based on [the 
comment|https://github.com/apache/flink/pull/10653#issuecomment-568616531] in 
FLINK-13027?

Thanks!

> Update StreamingFileSink documentation -- bulk encoded writer now supports 
> customized checkpoint policy
> ---
>
> Key: FLINK-15476
> URL: https://issues.apache.org/jira/browse/FLINK-15476
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Ying Xu
>Priority: Major
>
> Per FLINK-13027, {{StreamingFileSink}}'s bulk encoded writer (created with 
> {{forBulkFormat}}) now supports customized checkpoint rolling policies, which 
> roll files on checkpoints. 
> The {{StreamingFileSink}} documentation needs to be updated accordingly. 
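
Once updated, the documented usage would presumably look like the following sketch (assuming the flink-parquet bulk writer, an illustrative POJO {{MyRecord}}, and an existing {{DataStream<MyRecord> stream}}):

{code:java}
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;

// bulk-encoded writer that rolls the in-progress part file on every
// checkpoint, as enabled by FLINK-13027
StreamingFileSink<MyRecord> sink = StreamingFileSink
	.forBulkFormat(
		new Path("s3://bucket/base-path"),
		ParquetAvroWriters.forReflectRecord(MyRecord.class))
	.withRollingPolicy(OnCheckpointRollingPolicy.build())
	.build();

stream.addSink(sink);
{code}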



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15595) Entirely implement resolution order as FLIP-68 concept

2020-01-16 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017499#comment-17017499
 ] 

Bowen Li commented on FLINK-15595:
--

Can you elaborate on what you mean by "dropping CoreModule"?

From a user's perspective, the purpose of modules, at least for now, is to be 
able to use Hive built-in functions as a complement to Flink's built-in 
functions. I'm fine with any solution as long as Hive built-in functions still 
work when HiveModule is loaded. 

[~lzljs3620320], maybe you can prototype a PR so everyone can see how many 
changes are required? That may require changing SQL CLI behavior and the 
related documentation. Thanks!

> Entirely implement resolution order as FLIP-68 concept
> --
>
> Key: FLINK-15595
> URL: https://issues.apache.org/jira/browse/FLINK-15595
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Critical
> Fix For: 1.10.0
>
>
> First of all, the implementation is problematic. CoreModule returns 
> BuiltinFunctionDefinition, which cannot be resolved in 
> FunctionCatalogOperatorTable, so it will fall back to FlinkSqlOperatorTable.
> Second, the set of functions defined by CoreModule is seriously incomplete; 
> compare it with FunctionCatalogOperatorTable and you will find far fewer 
> entries. As a result, some functions take their priority from CoreModule 
> while others are resolved behind all modules. This is confusing and not what 
> FLIP-68 intends to define. 
> We should:
>  * Resolve BuiltinFunctionDefinition correctly in 
> FunctionCatalogOperatorTable.
>  * Make CoreModule contain all functions in FlinkSqlOperatorTable; a simple 
> way could be to provide a Calcite wrapper that wraps all of them.
>  * Stop including FlinkSqlOperatorTable in 
> PlannerContext.getBuiltinSqlOperatorTable and use a single 
> FunctionCatalogOperatorTable. Otherwise, there will be a lot of confusion.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15623) Building flink-python with maven profile docs-and-source fails

2020-01-16 Thread Arvid Heise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017452#comment-17017452
 ] 

Arvid Heise commented on FLINK-15623:
-

In general, I'd only whitelist stuff in `src/`.

> Building flink-python with maven profile docs-and-source fails
> ---
>
> Key: FLINK-15623
> URL: https://issues.apache.org/jira/browse/FLINK-15623
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
> Environment: rev: 91d96abe5f42bd088a326870b4885d79611fccb5
>Reporter: Gary Yao
>Priority: Blocker
> Fix For: 1.10.0
>
>
> *Description*
> Building flink-python with maven profile {{docs-and-source}} fails due to 
> checkstyle violations. 
> *How to reproduce*
> Running
> {noformat}
> mvn clean install -pl flink-python -Pdocs-and-source -DskipTests 
> -DretryFailedDeploymentCount=10
> {noformat}
> should fail with the following error
> {noformat}
> [...]
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8343] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8344] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8345] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8346] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8347] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8348] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8349] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8350] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 18.046 s
> [INFO] Finished at: 2020-01-16T16:44:01+00:00
> [INFO] Final Memory: 158M/2826M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (validate) on 
> project flink-python_2.11: You have 7603 Checkstyle violations. -> [Help 1]
> [ERROR]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15623) Building flink-python with maven profile docs-and-source fails

2020-01-16 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-15623:
-
Description: 
*Description*
Building flink-python with maven profile {{docs-and-source}} fails due to 
checkstyle violations. 

*How to reproduce*

Running

{noformat}
mvn clean install -pl flink-python -Pdocs-and-source -DskipTests 
-DretryFailedDeploymentCount=10
{noformat}

should fail with the following error

{noformat}
[...]
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8343] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8344] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8345] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8346] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8347] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8348] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8349] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8350] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 18.046 s
[INFO] Finished at: 2020-01-16T16:44:01+00:00
[INFO] Final Memory: 158M/2826M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (validate) on 
project flink-python_2.11: You have 7603 Checkstyle violations. -> [Help 1]
[ERROR]
{noformat}

  was:
*Description*
Building flink-python with maven profile docs-and-source fails due to 
checkstyle violations. 

*How to reproduce*

Running

{noformat}
mvn clean install -pl flink-python -Pdocs-and-source -DskipTests 
-DretryFailedDeploymentCount=10
{noformat}

should fail with the following error

{noformat}
[...]
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8343] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8344] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8345] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8346] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8347] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8348] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8349] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[ERROR] 
generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8350] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 18.046 s
[INFO] Finished at: 2020-01-16T16:44:01+00:00
[INFO] Final Memory: 158M/2826M
[INFO] 
[ERROR] Failed to execute goal 

[jira] [Commented] (FLINK-15623) Building flink-python with maven profile docs-and-source fails

2020-01-16 Thread Gary Yao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017427#comment-17017427
 ] 

Gary Yao commented on FLINK-15623:
--

I think it's reasonable to exclude generated sources from checkstyle.

> Building flink-python with maven profile docs-and-source fails
> ---
>
> Key: FLINK-15623
> URL: https://issues.apache.org/jira/browse/FLINK-15623
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
> Environment: rev: 91d96abe5f42bd088a326870b4885d79611fccb5
>Reporter: Gary Yao
>Priority: Blocker
> Fix For: 1.10.0
>
>
> *Description*
> Building flink-python with maven profile docs-and-source fails due to 
> checkstyle violations. 
> *How to reproduce*
> Running
> {noformat}
> mvn clean install -pl flink-python -Pdocs-and-source -DskipTests 
> -DretryFailedDeploymentCount=10
> {noformat}
> should fail with the following error
> {noformat}
> [...]
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8343] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8344] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8345] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8346] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8347] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8348] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8349] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8350] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 18.046 s
> [INFO] Finished at: 2020-01-16T16:44:01+00:00
> [INFO] Final Memory: 158M/2826M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (validate) on 
> project flink-python_2.11: You have 7603 Checkstyle violations. -> [Help 1]
> [ERROR]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-15574) DataType to LogicalType conversion issue

2020-01-16 Thread Benoit Hanotte (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017343#comment-17017343
 ] 

Benoit Hanotte edited comment on FLINK-15574 at 1/16/20 7:11 PM:
-

Hello [~docete], I pushed 2 unit tests that trigger the issue on my fork: 
https://github.com/BenoitHanotte/flink/commit/3cc7718aa707c50bc86f45de15926c7b6fd457d7

It looks like the issue comes from trying to convert the table back to a 
DataStream, using either a GenericTypeInfo[Row] or a RowTypeInfo as the type of 
the DataStream.


was (Author: b.hanotte):
Hello [~docete], I pushed 2 unit tests that trigger the issue on my fork: 
https://github.com/BenoitHanotte/flink/commit/23e535963a2a0d515f77a66ea180856f651d47e2

It looks like the issue comes from trying to convert the table back to a 
DataStream, using either a GenericTypeInfo[Row] or a RowTypeInfo as the type of 
the DataStream.
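
For reference, a minimal sketch (illustrative names; {{tEnv}} is a {{StreamTableEnvironment}} and {{table}} holds the legacy array column) of the two conversions those tests exercise; both force the DataType-to-LogicalType comparison described in the issue below:

{code:java}
// path 1: TypeExtractor-based conversion, which yields GenericTypeInfo<Row>
DataStream<Row> s1 = tEnv.toAppendStream(table, Row.class);

// path 2: an explicit RowTypeInfo for the single array column
DataStream<Row> s2 = tEnv.toAppendStream(
	table,
	new RowTypeInfo(BasicArrayTypeInfo.STRING_ARRAY_TYPE_INFO));
{code}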

> DataType to LogicalType conversion issue
> 
>
> Key: FLINK-15574
> URL: https://issues.apache.org/jira/browse/FLINK-15574
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Benoit Hanotte
>Priority: Major
>  Labels: pull-request-available
> Attachments: 0001-FLINK-15574-Add-unit-test-to-reproduce-issue.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We seem to be encountering an issue with the conversion from DataType to 
> LogicalType with the Blink planner (full stacktrace below):
> {code}
> org.apache.flink.table.api.ValidationException: Type 
> LEGACY(BasicArrayTypeInfo) of table field 'my_array' does not match 
> with type BasicArrayTypeInfo of the field 'my_array' of the 
> TableSource return type.
> {code}
> It seems there exist two paths for the conversion from DataType to 
> LogicalType:
> 1. TypeConversions.fromLegacyInfoToDataType():
> used for instance when calling TableSchema.fromTypeInformation().
> 2.  LogicalTypeDataTypeConverter.fromDataTypeToLogicalType():
> Deprecated but still used in TableSourceUtil and many other places.
> These 2 code paths can return a different LogicalType for the same input, 
> leading to issues when the LogicalTypes are compared to ensure they are 
> compatible.  For instance, PlannerTypeUtils.isAssignable() returns false for 
> a DataType created from BasicArrayTypeInfo (leading to the 
> ValidationException above).
> The full stacktrace is the following:
> {code}
> org.apache.flink.table.api.ValidationException: Type 
> LEGACY(BasicArrayTypeInfo) of table field 'my_array' does not match 
> with type BasicArrayTypeInfo of the field 'my_array' of the 
> TableSource return type.
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$4.apply(TableSourceUtil.scala:121)
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$4.apply(TableSourceUtil.scala:92)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$.computeIndexMapping(TableSourceUtil.scala:92)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:100)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:55)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:55)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:86)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:46)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlan(StreamExecCalc.scala:46)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecUnion$$anonfun$2.apply(StreamExecUnion.scala:86)
>   at 
> 

[jira] [Commented] (FLINK-15623) Building flink-python with maven profile docs-and-source fails

2020-01-16 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017389#comment-17017389
 ] 

Chesnay Schepler commented on FLINK-15623:
--

The maven-source-plugin, which runs as part of the {{release}} profile, 
re-executes the early Maven life-cycle, including the checkstyle run. 
Unfortunately, at this point the python module has already generated the 
classes that violate our checkstyle rules.

I quickly looked at the protobuf-generator and source-plugin documentation and 
could not find a solution; so we'll likely just have to exclude them from 
checkstyle.
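
A minimal pom.xml sketch of that exclusion, assuming the generated protobuf sources sit under a generated-sources directory; the exact pattern and plugin configuration in flink-python may differ:

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <configuration>
    <!-- illustrative pattern: skip the protobuf-generated classes entirely -->
    <excludes>**/generated-sources/**</excludes>
  </configuration>
</plugin>
{code}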

> Building flink-python with maven profile docs-and-source fails
> ---
>
> Key: FLINK-15623
> URL: https://issues.apache.org/jira/browse/FLINK-15623
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
> Environment: rev: 91d96abe5f42bd088a326870b4885d79611fccb5
>Reporter: Gary Yao
>Priority: Blocker
> Fix For: 1.10.0
>
>
> *Description*
> Building flink-python with maven profile docs-and-source fails due to 
> checkstyle violations. 
> *How to reproduce*
> Running
> {noformat}
> mvn clean install -pl flink-python -Pdocs-and-source -DskipTests 
> -DretryFailedDeploymentCount=10
> {noformat}
> should fail with the following error
> {noformat}
> [...]
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8343] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8344] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8345] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8346] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8347] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8348] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8349] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [ERROR] 
> generated-sources/org/apache/flink/fnexecution/v1/FlinkFnApi.java:[8350] 
> (regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
> should be performed with tabs only.
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 18.046 s
> [INFO] Finished at: 2020-01-16T16:44:01+00:00
> [INFO] Final Memory: 158M/2826M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (validate) on 
> project flink-python_2.11: You have 7603 Checkstyle violations. -> [Help 1]
> [ERROR]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


  1   2   3   4   5   >