[jira] [Commented] (FLINK-11484) Blink java.util.concurrent.TimeoutException

2019-02-01 Thread pj (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1675#comment-1675
 ] 

pj commented on FLINK-11484:


Ok, I understand. Whether quantitative resource management is enabled or disabled 
affects whether task slots can be shared.

But as far as I remember, I did not modify the Blink configuration file. So why 
does it behave as if task slots could not be shared?

And could you please tell me how to disable quantitative resource management? Do 
I need to modify any configuration file?

> Blink java.util.concurrent.TimeoutException
> ---
>
> Key: FLINK-11484
> URL: https://issues.apache.org/jira/browse/FLINK-11484
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.5.5
> Environment: The link of blink source code: 
> [github.com/apache/flink/tree/blink|https://github.com/apache/flink/tree/blink]
>Reporter: pj
>Priority: Major
>  Labels: blink
> Attachments: 1.png
>
>
> *This happens when I run a Blink application on YARN with a parallelism larger than 1.*
> *Following is the command:*
> ./flink run -m yarn-cluster -ynm FLINK_NG_ENGINE_1 -ys 4 -yn 10 -ytm 5120 -p 
> 40 -c XXMain ~/xx.jar
> *Following is the code:*
> {{DataStream outputStream = tableEnv.toAppendStream(curTable, Row.class); 
> outputStream.print();}}
> *{{All subtasks of the application hang for a long time and finally the 
> }}{{toAppendStream()}}{{ function throws an exception like the one below:}}*
> {{org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: f5e4f7243d06035202e8fa250c364304) at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:276)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:482) 
> at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeInternal(StreamContextEnvironment.java:85)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeInternal(StreamContextEnvironment.java:37)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1893)
>  at com.ngengine.main.KafkaMergeMain.startApp(KafkaMergeMain.java:352) at 
> com.ngengine.main.KafkaMergeMain.main(KafkaMergeMain.java:94) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:561)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:445)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:419) 
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:786) 
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280) 
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215) at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1029)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1105) 
> at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1105) 
> Caused by: java.util.concurrent.TimeoutException at 
> org.apache.flink.runtime.concurrent.FutureUtils$Timeout.run(FutureUtils.java:834)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)}}{{}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] Myasuka commented on issue #7598: [FLINK-11333][protobuf] First-class serializer support for Protobuf types

2019-02-01 Thread GitBox
Myasuka commented on issue #7598: [FLINK-11333][protobuf] First-class 
serializer support for Protobuf types
URL: https://github.com/apache/flink/pull/7598#issuecomment-459931110
 
 
   To avoid changing Flink's current build prerequisites, I added the generated 
protobuf message `UserProtobuf` to the PR. Otherwise, users would have to install 
protobuf-2.x before building Flink. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11484) Blink java.util.concurrent.TimeoutException

2019-02-01 Thread Yang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758848#comment-16758848
 ] 

Yang Wang commented on FLINK-11484:
---

Blink shares slots by default. You can run the following command to verify 
that 43 tasks will be running in 40 slots.
{code:java}
./bin/flink run -m yarn-cluster -ys 4 -yn 10 -ytm 5120 -p 40 ./examples/streaming/WindowJoin.jar
{code}
This works simply because quantitative resource management is disabled and the 
total number of slots is sufficient under slot sharing.

When quantitative resource management is enabled, tasks may not be allocated 
because of insufficient resources 
(CpuCores/UserHeap/UserDirect/UserNative/ManagedMem/NetworkMem). For example, if 
each operator allocates 1 core, we need 43 CPU cores, while the Flink session 
cluster only has 40 cores, so the job cannot run because of insufficient CPU 
cores.
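
For reference, a rough back-of-the-envelope check of the numbers above (10 TaskManagers and 4 slots per TaskManager come from the command line in this thread; 43 tasks, 1 core per task and 40 available cores are the assumptions of the example, not values read from any Flink API):
{code:java}
public class SlotMath {
    public static void main(String[] args) {
        int taskManagers = 10;         // -yn 10
        int slotsPerTaskManager = 4;   // -ys 4
        int parallelism = 40;          // -p 40
        int tasks = 43;                // total tasks of the example job

        int totalSlots = taskManagers * slotsPerTaskManager;   // 40
        // With slot sharing, one slot can hold one subtask of every operator,
        // so the job needs at most max-parallelism slots, not one slot per task.
        System.out.println("slots needed with sharing: " + parallelism
                + ", slots available: " + totalSlots);

        // With quantitative resource management, the (assumed) 1-core-per-task
        // requests add up instead of being shared within a slot.
        int coresPerTask = 1;
        int availableCores = 40;       // size of the example session cluster
        System.out.println("cores needed: " + (tasks * coresPerTask)
                + ", cores available: " + availableCores);
    }
}
{code}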

> Blink java.util.concurrent.TimeoutException
> ---
>
> Key: FLINK-11484
> URL: https://issues.apache.org/jira/browse/FLINK-11484
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.5.5
> Environment: The link of blink source code: 
> [github.com/apache/flink/tree/blink|https://github.com/apache/flink/tree/blink]
>Reporter: pj
>Priority: Major
>  Labels: blink
> Attachments: 1.png
>
>
> *This happens when I run a Blink application on YARN with a parallelism larger than 1.*
> *Following is the command:*
> ./flink run -m yarn-cluster -ynm FLINK_NG_ENGINE_1 -ys 4 -yn 10 -ytm 5120 -p 
> 40 -c XXMain ~/xx.jar
> *Following is the code:*
> {{DataStream outputStream = tableEnv.toAppendStream(curTable, Row.class); 
> outputStream.print();}}
> *{{All subtasks of the application hang for a long time and finally the 
> }}{{toAppendStream()}}{{ function throws an exception like the one below:}}*
> {{org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: f5e4f7243d06035202e8fa250c364304) at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:276)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:482) 
> at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeInternal(StreamContextEnvironment.java:85)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeInternal(StreamContextEnvironment.java:37)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1893)
>  at com.ngengine.main.KafkaMergeMain.startApp(KafkaMergeMain.java:352) at 
> com.ngengine.main.KafkaMergeMain.main(KafkaMergeMain.java:94) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:561)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:445)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:419) 
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:786) 
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280) 
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215) at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1029)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1105) 
> at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1105) 
> Caused by: java.util.concurrent.TimeoutException at 
> org.apache.flink.runtime.concurrent.FutureUtils$Timeout.run(FutureUtils.java:834)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thr

[jira] [Assigned] (FLINK-11489) Add an initial Blink SQL streaming runtime

2019-02-01 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-11489:
---

Assignee: Jark Wu

> Add an initial Blink SQL streaming runtime
> --
>
> Key: FLINK-11489
> URL: https://issues.apache.org/jira/browse/FLINK-11489
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This issue is an umbrella issue for tasks related to the merging of Blink 
> streaming runtime features. The goal is to provide minimum viable product 
> (MVP) to streaming users.
> An exact list of streaming features, their properties, and dependencies needs 
> to be defined.
> The type system might not have been reworked at this stage. Operations might 
> not be executed with the full performance until changes in other Flink core 
> components have taken place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11447) Deprecate "new Table(TableEnvironment, String)"

2019-02-01 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758846#comment-16758846
 ] 

Jark Wu commented on FLINK-11447:
-

Jumping into the discussion ;-)

That's right, {{Table.create}} will not work if Table is only an interface in 
flink-table-api.

But from a user perspective, {{.join(Table.create("udtf"))}} is clearer than 
{{.joinLateral(udtf)}}. Btw, just as a reference, we renamed {{crossApply}} to 
{{join}} for UDTFs years ago; see FLINK-5304.
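
For illustration, a rough sketch of the two shapes being discussed; `orders`, `tableEnv` and the registered table function `split` are made-up names, and neither proposed call exists in the API at this point:
{code:java}
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class LateralJoinShapes {

    // "orders" and the registered table function "split" are assumed to exist already.
    public static Table current(StreamTableEnvironment tableEnv, Table orders) {
        // Today's pattern, using the constructor this ticket wants to deprecate.
        Table words = new Table(tableEnv, "split(comment) as (word)");
        return orders.join(words).select("orderId, word");
    }

    // Proposal A (not existing API): a dedicated operator, no standalone Table is created:
    //   orders.joinLateral("split(comment) as (word)").select("orderId, word");

    // Proposal B (not existing API): keep join(Table), replace the constructor with a factory:
    //   orders.join(Table.create(tableEnv, "split(comment) as (word)")).select("orderId, word");
}
{code}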

> Deprecate "new Table(TableEnvironment, String)"
> ---
>
> Key: FLINK-11447
> URL: https://issues.apache.org/jira/browse/FLINK-11447
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Dian Fu
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> Once table is an interface we can easily replace the underlying 
> implementation at any time. The constructor call prevents us from converting 
> it into an interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11375) Concurrent modification to slot pool due to SlotSharingManager releaseSlot directly

2019-02-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11375:
---
Labels: pull-request-available  (was: )

> Concurrent modification to slot pool due to SlotSharingManager releaseSlot 
> directly 
> 
>
> Key: FLINK-11375
> URL: https://issues.apache.org/jira/browse/FLINK-11375
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.7.1
>Reporter: shuai.xu
>Assignee: BoWang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>
> In SlotPool, the AvailableSlots structure is lock free, so all access to it 
> should happen in the main thread of the SlotPool. Accordingly, all public 
> methods are called through the SlotPoolGateway, except releaseSlot, which is 
> called directly by the SlotSharingManager. This may cause a 
> ConcurrentModificationException.
>  2019-01-16 19:50:16,184 INFO [flink-akka.actor.default-dispatcher-161] 
> org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: 
> BlinkStoreScanTableSource 
> feature_memory_entity_store-entity_lsc_page_detail_feats_group_178-Batch -> 
> SourceConversion(table:[_DataStreamTable_12, source: 
> [BlinkStoreScanTableSource 
> feature_memory_entity_store-entity_lsc_page_detail_feats_group_178]], 
> fields:(f0)) -> correlate: 
> table(ScanBlinkStore_entity_lsc_page_detail_feats_group_1786($cor6.f0)), 
> select: 
> item_id,mainse_searcher_rank__cart_uv,mainse_searcher_rank__cart_uv_14,mainse_searcher_rank__cart_uv_30,mainse_searcher_rank__cart_uv_7,mainse_s
>  (433/500) (bd34af8dd7ee02d04a4a25e698495f0a) switched from RUNNING to 
> FINISHED.
>  2019-01-16 19:50:16,187 INFO [jobmanager-future-thread-90] 
> org.apache.flink.runtime.executiongraph.ExecutionGraph - scheduleVertices 
> meet exception, need to fail global execution graph
>  java.lang.reflect.UndeclaredThrowableException
>  at org.apache.flink.runtime.rpc.akka.$Proxy26.allocateSlots(Unknown Source)
>  at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool$ProviderAndOwner.allocateSlots(SlotPool.java:1955)
>  at 
> org.apache.flink.runtime.executiongraph.ExecutionGraph.schedule(ExecutionGraph.java:965)
>  at 
> org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleVertices(ExecutionGraph.java:1503)
>  at 
> org.apache.flink.runtime.jobmaster.GraphManager$ExecutionGraphVertexScheduler.scheduleExecutionVertices(GraphManager.java:349)
>  at 
> org.apache.flink.runtime.schedule.StepwiseSchedulingPlugin.scheduleOneByOne(StepwiseSchedulingPlugin.java:132)
>  at 
> org.apache.flink.runtime.schedule.StepwiseSchedulingPlugin.onExecutionVertexFailover(StepwiseSchedulingPlugin.java:107)
>  at 
> org.apache.flink.runtime.jobmaster.GraphManager.notifyExecutionVertexFailover(GraphManager.java:163)
>  at 
> org.apache.flink.runtime.executiongraph.ExecutionGraph.resetExecutionVerticesAndNotify(ExecutionGraph.java:1372)
>  at 
> org.apache.flink.runtime.executiongraph.failover.FailoverRegion.restart(FailoverRegion.java:213)
>  at 
> org.apache.flink.runtime.executiongraph.failover.FailoverRegion.reset(FailoverRegion.java:198)
>  at 
> org.apache.flink.runtime.executiongraph.failover.FailoverRegion.allVerticesInTerminalState(FailoverRegion.java:97)
>  at 
> org.apache.flink.runtime.executiongraph.failover.FailoverRegion.lambda$cancel$0(FailoverRegion.java:169)
>  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>  at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:186)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:299)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
>  at java.lang.Thread.run(Thread.java:834)
>  Caused by: java.util.concurrent.ExecutionException: 
> java.util.ConcurrentModificationException
>  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>  at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.invokeRpc(AkkaInvocationHandler.java:213)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.invoke(AkkaInvocationHandler.java:125)
>  ... 23 more
>  Caused by: java.util.Concurrent

[jira] [Commented] (FLINK-10755) Port external catalogs in Table API extension points to flink-table-common

2019-02-01 Thread Dian Fu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758844#comment-16758844
 ] 

Dian Fu commented on FLINK-10755:
-

[~phoenixjiangnan] Thanks a lot.

> Port external catalogs in Table API extension points to flink-table-common
> --
>
> Key: FLINK-10755
> URL: https://issues.apache.org/jira/browse/FLINK-10755
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.8.0
>
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This ticket is for porting external catalog related classes such as 
> ExternalCatalog, ExternalCatalogTable, ExternalCatalogSchema, 
> ExternalTableUtil, TableNotExistException, CatalogNotExistException.
> Unblocks TableEnvironment interface task and catalog contribution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-10755) Port external catalogs in Table API extension points to flink-table-common

2019-02-01 Thread Dian Fu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-10755:
---

Assignee: Dian Fu  (was: Bowen Li)

> Port external catalogs in Table API extension points to flink-table-common
> --
>
> Key: FLINK-10755
> URL: https://issues.apache.org/jira/browse/FLINK-10755
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Bowen Li
>Assignee: Dian Fu
>Priority: Major
> Fix For: 1.8.0
>
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This ticket is for porting external catalog related classes such as 
> ExternalCatalog, ExternalCatalogTable, ExternalCatalogSchema, 
> ExternalTableUtil, TableNotExistException, CatalogNotExistException.
> Unblocks TableEnvironment interface task and catalog contribution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] eaglewatcherwb opened a new pull request #7644: [FLINK-11375][schedule] releaseSlot in SlotSharingManager using

2019-02-01 Thread GitBox
eaglewatcherwb opened a new pull request #7644: [FLINK-11375][schedule] 
releaseSlot in SlotSharingManager using
URL: https://github.com/apache/flink/pull/7644
 
 
   SlotPoolGateway to avoid concurrent modification to slot pool
   
   Change-Id: I63bd752a090da8da8b9f6a38d9d3fe4ba443b90b
   
   
   
   ## What is the purpose of the change
   
   releaseSlot in SlotSharingManager using SlotPoolGateway to avoid concurrent 
modification to slot pool
   
   ## Brief change log
   
 - use SlotProviderAndOwner#cancelSlotRequest instead of 
AllocatedSlotActions#releaseSlot in SlotSharingManager#release
  - adapt the unit tests to the fix
   
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive):no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7644: [FLINK-11375][schedule] releaseSlot in SlotSharingManager using

2019-02-01 Thread GitBox
flinkbot commented on issue #7644: [FLINK-11375][schedule] releaseSlot in 
SlotSharingManager using
URL: https://github.com/apache/flink/pull/7644#issuecomment-459927197
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into to Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11449) Uncouple the Expression class from RexNodes

2019-02-01 Thread sunjincheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758727#comment-16758727
 ] 

sunjincheng commented on FLINK-11449:
-

Hi [~twalthr] [~dawidwys], many thanks for your valuable comments!

Is there anything else that should be noted or discussed before opening the PR? 
If there is nothing else to pay attention to, I will start working on the PR.

Thanks, Jincheng

> Uncouple the Expression class from RexNodes
> ---
>
> Key: FLINK-11449
> URL: https://issues.apache.org/jira/browse/FLINK-11449
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: sunjincheng
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> Calcite will not be part of any API module anymore. Therefore, RexNode 
> translation must happen in a different layer. This issue will require a new 
> design document.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11520) Triggers should be provided the window state

2019-02-01 Thread Elias Levy (JIRA)
Elias Levy created FLINK-11520:
--

 Summary: Triggers should be provided the window state
 Key: FLINK-11520
 URL: https://issues.apache.org/jira/browse/FLINK-11520
 Project: Flink
  Issue Type: Improvement
Reporter: Elias Levy


Some triggers may require access to the window state to perform their job.  
Consider a window computing a count using an aggregate function.  It may be 
desired to fire the window when the count is 1 and then at the end of the 
window.  The early firing can provide feedback to external systems that a key 
has been observed, while waiting for the final count.

The same problem can be observed in 
org.apache.flink.streaming.api.windowing.triggers.CountTrigger, which must 
maintain an internal count instead of being able to make use of the window 
state.
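
To make the limitation concrete, here is a rough illustrative sketch (my own, not code from this issue) of such a trigger: it fires once on the first element and again at the end of an event-time window, and because the window's aggregate state is not exposed to triggers, it has to keep its own ReducingState counter, exactly as CountTrigger does:
{code:java}
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.common.state.ReducingState;
import org.apache.flink.api.common.state.ReducingStateDescriptor;
import org.apache.flink.api.common.typeutils.base.LongSerializer;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.Window;

/** Fires on the first element of a window and again at the end of the window. */
public class FirstElementAndEndOfWindowTrigger<T, W extends Window> extends Trigger<T, W> {

    // The trigger cannot see the window's aggregate, so it keeps a parallel count.
    private final ReducingStateDescriptor<Long> countDesc =
            new ReducingStateDescriptor<>("seen-count", new Sum(), LongSerializer.INSTANCE);

    @Override
    public TriggerResult onElement(T element, long timestamp, W window, TriggerContext ctx) throws Exception {
        ctx.registerEventTimeTimer(window.maxTimestamp());
        ReducingState<Long> count = ctx.getPartitionedState(countDesc);
        count.add(1L);
        // Early firing: signal downstream that this key has been observed.
        return count.get() == 1L ? TriggerResult.FIRE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, W window, TriggerContext ctx) {
        // Final firing at the end of the window.
        return time == window.maxTimestamp() ? TriggerResult.FIRE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, W window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(W window, TriggerContext ctx) throws Exception {
        ctx.getPartitionedState(countDesc).clear();
        ctx.deleteEventTimeTimer(window.maxTimestamp());
    }

    private static class Sum implements ReduceFunction<Long> {
        @Override
        public Long reduce(Long a, Long b) {
            return a + b;
        }
    }
}
{code}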



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11519) Add function related catalog APIs

2019-02-01 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created FLINK-11519:
---

 Summary: Add function related catalog APIs
 Key: FLINK-11519
 URL: https://issues.apache.org/jira/browse/FLINK-11519
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang


This is to support functions (UDFs) in relation to the catalog.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11518) Add partition related catalog APIs

2019-02-01 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created FLINK-11518:
---

 Summary: Add partition related catalog APIs
 Key: FLINK-11518
 URL: https://issues.apache.org/jira/browse/FLINK-11518
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang


To support partitions, we need to introduce additional APIs on top of 
FLINK-11474.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flinkbot commented on issue #7643: [FLINK-11474][Catatalog] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-02-01 Thread GitBox
flinkbot commented on issue #7643: [FLINK-11474][Catatalog] Add 
ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/7643#issuecomment-459849424
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into to Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11474) Add ReadableCatalog, ReadableWritableCatalog, and other related interfaces

2019-02-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11474:
---
Labels: pull-request-available  (was: )

> Add ReadableCatalog, ReadableWritableCatalog, and other related interfaces
> --
>
> Key: FLINK-11474
> URL: https://issues.apache.org/jira/browse/FLINK-11474
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Also deprecate ReadableCatalog, ReadableWritableCatalog, and other related, 
> existing classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] xuefuz opened a new pull request #7643: [FLINK-11474][Catatalog] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-02-01 Thread GitBox
xuefuz opened a new pull request #7643: [FLINK-11474][Catatalog] Add 
ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/7643
 
 
   …related interfaces
   
   
   
   ## What is the purpose of the change
   
   *Add new catalog APIs pertaining to FLIP-30*
   
   
   ## Brief change log
   
 - *Add ReadableCatalog and ReadableWritableCatalog interfaces*
 - *Add catalog related interfaces and classes such as CommonTable, 
CatalogTable, CatalogView, etc.*
 - *Migrated some old exception classes from Scala to Java.*
 - *Old interfaces and classes remain but are subject to removal once the new 
implementation is in place.*
   
   ## Verifying this change
   
   *This change is mainly about interfaces without any test coverage.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): ( no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)



This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-10755) Port external catalogs in Table API extension points to flink-table-common

2019-02-01 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-10755:
-
Description: 
A more detailed description can be found in 
[FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].

This ticket is for porting external catalog related classes such as 
ExternalCatalog, ExternalCatalogTable, ExternalCatalogSchema, 
ExternalTableUtil, TableNotExistException, CatalogNotExistException.

Unblocks TableEnvironment interface task and catalog contribution.

  was:
A more detailed description can be found in 
[FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].

This ticket is for porting external catalog related classes such as 
ExternalCatalog, ExternalCatalogTable, TableNotExistException, 
CatalogNotExistException.

Unblocks TableEnvironment interface task and catalog contribution.


> Port external catalogs in Table API extension points to flink-table-common
> --
>
> Key: FLINK-10755
> URL: https://issues.apache.org/jira/browse/FLINK-10755
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.8.0
>
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This ticket is for porting external catalog related classes such as 
> ExternalCatalog, ExternalCatalogTable, ExternalCatalogSchema, 
> ExternalTableUtil, TableNotExistException, CatalogNotExistException.
> Unblocks TableEnvironment interface task and catalog contribution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10755) Port external catalogs in Table API extension points to flink-table-common

2019-02-01 Thread Bowen Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758605#comment-16758605
 ] 

Bowen Li commented on FLINK-10755:
--

[~dian.fu] Thanks for taking a look. I haven't started yet, feel free to assign 
it to yourself.


> Port external catalogs in Table API extension points to flink-table-common
> --
>
> Key: FLINK-10755
> URL: https://issues.apache.org/jira/browse/FLINK-10755
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.8.0
>
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This ticket is for porting external catalog related classes such as 
> ExternalCatalog, ExternalCatalogTable, TableNotExistException, 
> CatalogNotExistException.
> Unblocks TableEnvironment interface task and catalog contribution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11517) Inefficient window state access when using RocksDB state backend

2019-02-01 Thread Elias Levy (JIRA)
Elias Levy created FLINK-11517:
--

 Summary: Inefficient window state access when using RocksDB state 
backend
 Key: FLINK-11517
 URL: https://issues.apache.org/jira/browse/FLINK-11517
 Project: Flink
  Issue Type: Bug
Reporter: Elias Levy


When using an aggregate function on a window together with a process function and 
the RocksDB state backend, state access is inefficient.

The WindowOperator calls windowState.add to merge the new element using the 
aggregate function. The add method of RocksDBAggregatingState will read the 
state, deserialize it, call the aggregate function, serialize the result, and 
write it out.

If the trigger decides the window must be fired, then because windowState.add 
does not return the state, the WindowOperator must call windowState.get to fetch 
it and pass it to the window process function, resulting in another read and 
deserialization.

Finally, while the state is not passed to the trigger, in some cases the trigger 
needs to access the state. That is our case. As the state is not passed to the 
trigger, we must read and deserialize the state one more time from within the 
trigger.

Thus, the state must be read and deserialized three times to process a single 
element. If the state is large, this can be quite costly.

Ideally, windowState.add would return the state, so that the WindowOperator could 
pass it to the process function without having to read it again. Additionally, 
the state would be made available to the trigger to enable more use cases 
without having to go through the state descriptor again.
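
For illustration, a minimal sketch of the pattern described above (an AggregateFunction pre-aggregating into window state plus a ProcessWindowFunction that receives the aggregated result); the class names and the keying are made up, and the sketch only shows the API shape that leads to the extra reads, not the RocksDB internals:
{code:java}
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class WindowStatePattern {

    /** Pre-aggregates into window state; every add() goes through the state backend. */
    public static class CountAgg implements AggregateFunction<String, Long, Long> {
        @Override public Long createAccumulator() { return 0L; }
        @Override public Long add(String value, Long acc) { return acc + 1; }
        @Override public Long getResult(Long acc) { return acc; }
        @Override public Long merge(Long a, Long b) { return a + b; }
    }

    /** Receives the aggregated count when the trigger fires (a second state read). */
    public static class EmitCount extends ProcessWindowFunction<Long, String, String, TimeWindow> {
        @Override
        public void process(String key, Context ctx, Iterable<Long> counts, Collector<String> out) {
            out.collect(key + ": " + counts.iterator().next());
        }
    }

    public static class SelfKey implements KeySelector<String, String> {
        @Override public String getKey(String value) { return value; }
    }

    public static DataStream<String> apply(DataStream<String> events) {
        return events
                .keyBy(new SelfKey())
                .window(TumblingEventTimeWindows.of(Time.minutes(10)))
                // windowState.add() on every element, windowState.get() on every firing,
                // plus a third read if a custom trigger also needs the state.
                .aggregate(new CountAgg(), new EmitCount());
    }
}
{code}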

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] HuangZhenQiu commented on issue #7356: [FLINK-10868][flink-yarn] Enforce maximum TMs failure rate in ResourceManagers

2019-02-01 Thread GitBox
HuangZhenQiu commented on issue #7356: [FLINK-10868][flink-yarn] Enforce 
maximum TMs failure rate in ResourceManagers
URL: https://github.com/apache/flink/pull/7356#issuecomment-459833947
 
 
   @tillrohrmann @suez1224 
   Thanks for reviewing the PR. Following your suggestions, I made these changes:
   1) As the feature is generic for both Yarn and Mesos, I added only the maximum 
failure rate config option to the resource manager config. 
   2) I put the failure-rate logic into TimestampBasedFailureRater, which 
implements the FailureRater interface (a rough sketch of the idea follows below). 
As I don't want to mix two changes with different purposes in the same PR, we can 
make the existing FailureRateRestartStrategy use it in another small PR.
   3) For the RM failure rate test, I tried to add it to ResourceManagerTest, but 
found it hard to mimic the behavior of registerSlotRequest without mocking lots 
of components, and I would also have to set up the test RM exactly like 
YarnResourceManagerTest does. So I kept the test cases separately in 
YarnResourceManagerTest and MesosResourceManagerTest.
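
For illustration only, here is a conceptual sketch of what a timestamp-based failure rater could look like. The names FailureRater and TimestampBasedFailureRater come from the comment above, but the interface and implementation details below are assumptions for illustration, not the PR's actual code:
{code:java}
import java.util.ArrayDeque;

/** Conceptual sketch: count failures whose timestamps fall inside a sliding interval. */
public class TimestampBasedFailureRaterSketch {

    private final int maximumFailuresPerInterval;  // e.g. value of the new config option
    private final long intervalMillis;
    private final ArrayDeque<Long> failureTimestamps = new ArrayDeque<>();

    public TimestampBasedFailureRaterSketch(int maximumFailuresPerInterval, long intervalMillis) {
        this.maximumFailuresPerInterval = maximumFailuresPerInterval;
        this.intervalMillis = intervalMillis;
    }

    /** Record one container/worker failure at the given time. */
    public void recordFailure(long nowMillis) {
        failureTimestamps.addLast(nowMillis);
        prune(nowMillis);
    }

    /** True when more failures than allowed occurred within the interval (-1 disables the check). */
    public boolean exceedsMaximumFailureRate(long nowMillis) {
        prune(nowMillis);
        return maximumFailuresPerInterval >= 0
                && failureTimestamps.size() > maximumFailuresPerInterval;
    }

    private void prune(long nowMillis) {
        // Drop failures that are older than the sliding interval.
        while (!failureTimestamps.isEmpty()
                && failureTimestamps.peekFirst() < nowMillis - intervalMillis) {
            failureTimestamps.removeFirst();
        }
    }
}
{code}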
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] HuangZhenQiu commented on a change in pull request #7356: [FLINK-10868][flink-yarn] Enforce maximum TMs failure rate in ResourceManagers

2019-02-01 Thread GitBox
HuangZhenQiu commented on a change in pull request #7356: 
[FLINK-10868][flink-yarn] Enforce maximum TMs failure rate in ResourceManagers
URL: https://github.com/apache/flink/pull/7356#discussion_r253161014
 
 

 ##
 File path: flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java
 ##
 @@ -83,8 +83,27 @@
 	 */
 	public static final ConfigOption MAX_FAILED_CONTAINERS =
 		key("yarn.maximum-failed-containers")
-			.noDefaultValue()
-			.withDescription("Maximum number of containers the system is going to reallocate in case of a failure.");
+			.noDefaultValue()
+			.withDescription("Maximum number of containers the system is going to reallocate in case of a failure.");
+
+	/**
+	 * The maximum number of failed YARN containers within an interval before entirely stopping
+	 * the YARN session / job on YARN.
+	 * By default, the value is -1
+	 */
+	public static final ConfigOption MAX_FAILED_CONTAINERS_PER_INTERVAL =
+		key("yarn.maximum-failed-containers-per-interval")
+			.defaultValue(-1)
+			.withDescription("Maximum number of containers the system is going to reallocate in case of a failure in an interval.");
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] HuangZhenQiu commented on a change in pull request #7356: [FLINK-10868][flink-yarn] Enforce maximum TMs failure rate in ResourceManagers

2019-02-01 Thread GitBox
HuangZhenQiu commented on a change in pull request #7356: 
[FLINK-10868][flink-yarn] Enforce maximum TMs failure rate in ResourceManagers
URL: https://github.com/apache/flink/pull/7356#discussion_r253160983
 
 

 ##
 File path: docs/_includes/generated/mesos_configuration.html
 ##
 @@ -27,6 +27,11 @@
 -1
 The maximum number of failed workers before the cluster fails. May be set to -1 to disable this feature. This option is ignored unless Flink is in legacy mode.
 
+
+mesos.maximum-failed-workers-per-interval
+-1
+Maximum number of workers the system is going to reallocate in case of a failure in an interval.
 
 Review comment:
   Rather than separate configs for Yarn and Mesos, I added the failure rate 
option to the resource manager config. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] EAlexRojas commented on issue #7588: [FLINK-11419][filesystem] For recovery, wait until lease is revoked before truncate file

2019-02-01 Thread GitBox
EAlexRojas commented on issue #7588: [FLINK-11419][filesystem] For recovery, 
wait until lease is revoked before truncate file
URL: https://github.com/apache/flink/pull/7588#issuecomment-459810169
 
 
   @kl0u Travis test in my branch is green now :)
   Looks like the errors in your branch are not related to the changes in this 
PR either (?) 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11447) Deprecate "new Table(TableEnvironment, String)"

2019-02-01 Thread Dawid Wysakowicz (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758530#comment-16758530
 ] 

Dawid Wysakowicz commented on FLINK-11447:
--

I'm still +1 for the {{joinLateral}}.

No static method will work ATM, as there is no Table implementation in the API 
that could be instantiated. The implementation of Table comes only from the 
Planner.

> Deprecate "new Table(TableEnvironment, String)"
> ---
>
> Key: FLINK-11447
> URL: https://issues.apache.org/jira/browse/FLINK-11447
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Dian Fu
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> Once table is an interface we can easily replace the underlying 
> implementation at any time. The constructor call prevents us from converting 
> it into an interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11447) Deprecate "new Table(TableEnvironment, String)"

2019-02-01 Thread Dian Fu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758487#comment-16758487
 ] 

Dian Fu commented on FLINK-11447:
-

Personally I like the approach suggested by [~hequn8128]: add a static method 
Table.create to create a table that represents the UDTF and keep using the join 
interface for joining the UDTF. It only adds one method, Table.create, to replace 
the constructor that creates a Table for a UDTF; everything else remains the same 
as before. Regarding adding an interface such as joinLateral, maybe we should 
start a discussion on the dev mailing list, hear the opinions of more people, and 
only add it if the majority think it is OK. 

> Deprecate "new Table(TableEnvironment, String)"
> ---
>
> Key: FLINK-11447
> URL: https://issues.apache.org/jira/browse/FLINK-11447
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Dian Fu
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> Once table is an interface we can easily replace the underlying 
> implementation at any time. The constructor call prevents us from converting 
> it into an interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] tweise commented on issue #7099: [FLINK-10887] [jobmaster] Add source watermark tracking to the JobMaster

2019-02-01 Thread GitBox
tweise commented on issue #7099: [FLINK-10887] [jobmaster] Add source watermark 
tracking to the JobMaster
URL: https://github.com/apache/flink/pull/7099#issuecomment-459781428
 
 
   @jgrier thanks for the update, will take a look soon.
   
   Meanwhile, we will put this to work internally.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10755) Port external catalogs in Table API extension points to flink-table-common

2019-02-01 Thread Dian Fu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758467#comment-16758467
 ] 

Dian Fu commented on FLINK-10755:
-

[~twalthr] As discussed offline, the return type of the method 
ExternalCatalogTable.builder() is ExternalCatalogTableBuilder, so it is 
impossible to port ExternalCatalogTable to flink-table-common while keeping 
ExternalCatalogTableBuilder in flink-table-planner. Do you have any suggestions 
on this issue, or should we wait until FLINK-10688 and FLINK-11240 are committed?
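
Schematically, the coupling looks like this (simplified, made-up signatures, not the real Flink classes): a public factory whose return type lives in another module creates a compile-time dependency, so the two classes can only move together.
{code:java}
// If this class were moved to flink-table-common ...
public class ExternalCatalogTable {
    private final String connectorType;

    ExternalCatalogTable(String connectorType) {
        this.connectorType = connectorType;
    }

    // ... this factory would force flink-table-common to compile against
    // ExternalCatalogTableBuilder, so the builder could not stay behind in flink-table-planner.
    public static ExternalCatalogTableBuilder builder(String connectorType) {
        return new ExternalCatalogTableBuilder(connectorType);
    }
}

class ExternalCatalogTableBuilder {
    private final String connectorType;

    ExternalCatalogTableBuilder(String connectorType) {
        this.connectorType = connectorType;
    }

    public ExternalCatalogTable build() {
        return new ExternalCatalogTable(connectorType);
    }
}
{code}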

> Port external catalogs in Table API extension points to flink-table-common
> --
>
> Key: FLINK-10755
> URL: https://issues.apache.org/jira/browse/FLINK-10755
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.8.0
>
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This ticket is for porting external catalog related classes such as 
> ExternalCatalog, ExternalCatalogTable, TableNotExistException, 
> CatalogNotExistException.
> Unblocks TableEnvironment interface task and catalog contribution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11447) Deprecate "new Table(TableEnvironment, String)"

2019-02-01 Thread sunjincheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758450#comment-16758450
 ] 

sunjincheng commented on FLINK-11447:
-

Thanks [~twalthr] [~dawidwys] [~hequn8128] for the lively discussion!

Hi [~hequn8128], thank you very much for collecting the information on how 
lateral tables are handled in so many databases!

That's true, there are clear definitions of LATERAL in several commonly used 
databases, e.g.:
 * PostgreSQL: LATERAL joins a subquery, e.g.:
{code:java}
SELECT * FROM tab, LATERAL (SELECT ...) alias ...{code}

 * Hive: LATERAL is followed by the VIEW keyword and then the UDTF, e.g.:
{code:java}
SELECT * FROM tab LATERAL VIEW udtf(..) ...{code}

 * SQL Server uses CROSS APPLY, but it essentially joins a subquery as well, e.g.:
{code:java}
SELECT * FROM tableA a CROSS APPLY (select ...) alias{code}

Although the syntax differs, in essence the right-hand side of the join is a table.

So I think the method `join(String)` should not be defined in the Table API, on 
which we have reached a consensus.

Then we have two candidate approaches:

Approach 1: Add the new operators `joinLateral` and `leftOuterJoinLateral`.

Approach 2: Add the static method `create(..)` (maybe `createLateralTable(..)` is 
better) to the Table interface.

From my point of view, both solutions can solve the UDTF problem for Java users. 
With #1 we introduce joinLateral, which makes it clear that the user's argument 
is a UDTF (String or Expression), but the semantics are still JOIN; there is the 
small doubt which [~hequn8128] mentioned above.

With #2 we do not need to introduce a new operator, but there is also a problem 
with the field reference in `Table.create(tEnv, "udtf(fieldReference)")`: 
`Table.create()` cannot be used independently (currently 
`new Table(tEnv, "udtf(fieldReference)")` has the same problem).

I think either approach can serve as a temporary solution to complete FLIP-32; we 
can discuss further in the future to find a better solution, for example around 
table classification (Table, TemporalTable, LateralTable, etc.). So I suggest we 
choose one of them now, as soon as possible, to advance the progress of FLIP-32, 
and perhaps discuss the long-term plan in a new thread.

What do you think? [~twalthr] [~hequn8128] [~dawidwys] [~dian.fu] 

Best, Jincheng

 

 

> Deprecate "new Table(TableEnvironment, String)"
> ---
>
> Key: FLINK-11447
> URL: https://issues.apache.org/jira/browse/FLINK-11447
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Dian Fu
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> Once table is an interface we can easily replace the underlying 
> implementation at any time. The constructor call prevents us from converting 
> it into an interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] dianfu commented on issue #7642: [FLINK-11516][table] Port and move some Descriptor classes to flink-table-common

2019-02-01 Thread GitBox
dianfu commented on issue #7642: [FLINK-11516][table] Port and move some 
Descriptor classes to flink-table-common
URL: https://github.com/apache/flink/pull/7642#issuecomment-459769867
 
 
   @twalthr @dawidwys I have split the changes related to Descriptor to this 
PR. Looking forward to your review. Thanks in advance.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11516) Port and move some Descriptor classes to flink-table-common

2019-02-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11516:
---
Labels: pull-request-available  (was: )

> Port and move some Descriptor classes to flink-table-common
> ---
>
> Key: FLINK-11516
> URL: https://issues.apache.org/jira/browse/FLINK-11516
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
>
> As discussed in FLINK-11450, we will port the following classes to 
> flink-table-common:
> Metadata, MetadataValidator, TableStats, ColumnStats, Statistics, 
> StatisticsValidator,TableDescriptor, StreamableDescriptor, 
> StreamTableDescriptorValidator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flinkbot commented on issue #7642: [FLINK-11516][table] Port and move some Descriptor classes to flink-table-common

2019-02-01 Thread GitBox
flinkbot commented on issue #7642: [FLINK-11516][table] Port and move some 
Descriptor classes to flink-table-common
URL: https://github.com/apache/flink/pull/7642#issuecomment-459762314
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into to Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dianfu opened a new pull request #7642: [FLINK-11516][table] Port and move some Descriptor classes to flink-table-common

2019-02-01 Thread GitBox
dianfu opened a new pull request #7642: [FLINK-11516][table] Port and move some 
Descriptor classes to flink-table-common
URL: https://github.com/apache/flink/pull/7642
 
 
   ## What is the purpose of the change
   
   *This pull request ported the following descriptor related classes to 
flink-table-common: Metadata, MetadataValidator, TableStats, ColumnStats, 
Statistics, StatisticsValidator,TableDescriptor, StreamableDescriptor, 
StreamTableDescriptorValidator*
   
   ## Brief change log
   
 - *Ported the classes and tests to flink-table-common*
  - *Make StreamableDescriptor no longer extend TableDescriptor*
   
   ## Verifying this change
   
   This change is a code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StefanRRichter commented on a change in pull request #7568: [FLINK-11417] Make access to ExecutionGraph single threaded from JobMaster main thread

2019-02-01 Thread GitBox
StefanRRichter commented on a change in pull request #7568: [FLINK-11417] Make 
access to ExecutionGraph single threaded from JobMaster main thread
URL: https://github.com/apache/flink/pull/7568#discussion_r253088533
 
 

 ##
 File path: flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/core/testutils/ManuallyTriggeredDirectExecutor.java
 ##
 @@ -32,8 +33,17 @@
  */
 public class ManuallyTriggeredDirectExecutor implements Executor {
 
+   protected final Executor executorDelegate;
 
 Review comment:
   Unfortunately, it is still used by `ManuallyTriggeredScheduledExecutor`


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-11450) Port and move TableSource and TableSink to flink-table-common

2019-02-01 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-11450.

   Resolution: Fixed
Fix Version/s: 1.8.0

Fixed in 1.8.0: 83159624effc4a17daf9efbf549b58ec5df7b9a8

> Port and move TableSource and TableSink to flink-table-common
> -
>
> Key: FLINK-11450
> URL: https://issues.apache.org/jira/browse/FLINK-11450
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This step only unblockes the TableEnvironment interfaces task. 
> Stream/BatchTableSouce/Sink remain in flink-table-api-java-bridge for now 
> until they have been reworked.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] asfgit closed pull request #7626: [FLINK-11450][table] Port and move TableSource and TableSink to flink-table-common

2019-02-01 Thread GitBox
asfgit closed pull request #7626: [FLINK-11450][table] Port and move 
TableSource and TableSink to flink-table-common
URL: https://github.com/apache/flink/pull/7626
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot edited a comment on issue #7639: [FLINK-9920] Only check part files in BucketingSinkFaultToleranceITCase

2019-02-01 Thread GitBox
flinkbot edited a comment on issue #7639: [FLINK-9920] Only check part files in 
BucketingSinkFaultToleranceITCase
URL: https://github.com/apache/flink/pull/7639#issuecomment-459738292
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rmetzger commented on issue #7639: [FLINK-9920] Only check part files in BucketingSinkFaultToleranceITCase

2019-02-01 Thread GitBox
rmetzger commented on issue #7639: [FLINK-9920] Only check part files in 
BucketingSinkFaultToleranceITCase
URL: https://github.com/apache/flink/pull/7639#issuecomment-459752058
 
 
   @aljoscha for approving the description, you need to send "@flink bot 
approve description" (without the space)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot edited a comment on issue #7639: [FLINK-9920] Only check part files in BucketingSinkFaultToleranceITCase

2019-02-01 Thread GitBox
flinkbot edited a comment on issue #7639: [FLINK-9920] Only check part files in 
BucketingSinkFaultToleranceITCase
URL: https://github.com/apache/flink/pull/7639#issuecomment-459738292
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [x] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11449) Uncouple the Expression class from RexNodes

2019-02-01 Thread sunjincheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758373#comment-16758373
 ] 

sunjincheng commented on FLINK-11449:
-

Hi [~dawidwys] thanks for your reply.

Defining CAST as an expression distinct from a function call simply follows the 
current status, so a straightforward conversion can make it RexNode-free, but I 
think treating CAST as a function call is also reasonable. I also agree to 
solve the udx problem in a separate PR; I will prepare that PR and leave a 
message during the process if anything needs further discussion.

Best, Jincheng

> Uncouple the Expression class from RexNodes
> ---
>
> Key: FLINK-11449
> URL: https://issues.apache.org/jira/browse/FLINK-11449
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: sunjincheng
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> Calcite will not be part of any API module anymore. Therefore, RexNode 
> translation must happen in a different layer. This issue will require a new 
> design document.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11516) Port and move some Descriptor classes to flink-table-common

2019-02-01 Thread Dian Fu (JIRA)
Dian Fu created FLINK-11516:
---

 Summary: Port and move some Descriptor classes to 
flink-table-common
 Key: FLINK-11516
 URL: https://issues.apache.org/jira/browse/FLINK-11516
 Project: Flink
  Issue Type: Task
  Components: Table API & SQL
Reporter: Dian Fu
Assignee: Dian Fu


As discussed in FLINK-11450, we will port the following classes to 
flink-table-common:
Metadata, MetadataValidator, TableStats, ColumnStats, Statistics, 
StatisticsValidator,TableDescriptor, StreamableDescriptor, 
StreamTableDescriptorValidator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flinkbot commented on issue #7628: [FLINK-11503] Remove invalid test TaskManagerLossFailsTasksTest

2019-02-01 Thread GitBox
flinkbot commented on issue #7628: [FLINK-11503] Remove invalid test 
TaskManagerLossFailsTasksTest
URL: https://github.com/apache/flink/pull/7628#issuecomment-459738338
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7637: [FLINK-11511] Remove legacy class JobAttachmentClientActor

2019-02-01 Thread GitBox
flinkbot commented on issue #7637: [FLINK-11511] Remove legacy class 
JobAttachmentClientActor
URL: https://github.com/apache/flink/pull/7637#issuecomment-459738301
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7631: [FLINK-11391][shuffle] Introduce PartitionShuffleDescriptor and ShuffleDeploymentDescriptor

2019-02-01 Thread GitBox
flinkbot commented on issue #7631: [FLINK-11391][shuffle] Introduce 
PartitionShuffleDescriptor and ShuffleDeploymentDescriptor
URL: https://github.com/apache/flink/pull/7631#issuecomment-459738324
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7632: [FLINK-11362] Port TaskManagerComponentsStartupShutdownTest to new co…

2019-02-01 Thread GitBox
flinkbot commented on issue #7632: [FLINK-11362] Port 
TaskManagerComponentsStartupShutdownTest to new co…
URL: https://github.com/apache/flink/pull/7632#issuecomment-459738321
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7640: [FLINK-11512] Port CliFrontendModifyTest to new code base

2019-02-01 Thread GitBox
flinkbot commented on issue #7640: [FLINK-11512] Port CliFrontendModifyTest to 
new code base
URL: https://github.com/apache/flink/pull/7640#issuecomment-459738288
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7639: [FLINK-9920] Only check part files in BucketingSinkFaultToleranceITCase

2019-02-01 Thread GitBox
flinkbot commented on issue #7639: [FLINK-9920] Only check part files in 
BucketingSinkFaultToleranceITCase
URL: https://github.com/apache/flink/pull/7639#issuecomment-459738292
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7634: [FLINK-11509] Remove invalid test ClientConnectionTest

2019-02-01 Thread GitBox
flinkbot commented on issue #7634: [FLINK-11509] Remove invalid test 
ClientConnectionTest
URL: https://github.com/apache/flink/pull/7634#issuecomment-459738311
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7629: [FLINK-11504] Remove invalid test JobManagerConnectionTest

2019-02-01 Thread GitBox
flinkbot commented on issue #7629: [FLINK-11504] Remove invalid test 
JobManagerConnectionTest
URL: https://github.com/apache/flink/pull/7629#issuecomment-459738336
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7627: [FLINK-11502] Remove invalid test FlinkActorTest

2019-02-01 Thread GitBox
flinkbot commented on issue #7627: [FLINK-11502] Remove invalid test 
FlinkActorTest
URL: https://github.com/apache/flink/pull/7627#issuecomment-459738341
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7630: [FLINK-11505] Remove invalid test JobManagerRegistrationTest

2019-02-01 Thread GitBox
flinkbot commented on issue #7630: [FLINK-11505] Remove invalid test 
JobManagerRegistrationTest
URL: https://github.com/apache/flink/pull/7630#issuecomment-459738330
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7633: [FLINK-11508] Remove invalid test AkkaJobManagerRetrieverTest

2019-02-01 Thread GitBox
flinkbot commented on issue #7633: [FLINK-11508] Remove invalid test 
AkkaJobManagerRetrieverTest
URL: https://github.com/apache/flink/pull/7633#issuecomment-459738315
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7635: [FLINK-11507] Remove invalid test JobClientActorTest

2019-02-01 Thread GitBox
flinkbot commented on issue #7635:  [FLINK-11507] Remove invalid test 
JobClientActorTest
URL: https://github.com/apache/flink/pull/7635#issuecomment-459738307
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7638: [FLINK-11510] [DataStream] Add the MultiFieldSumAggregator to support KeyedStream.sum(int[] positionToSums )

2019-02-01 Thread GitBox
flinkbot commented on issue #7638: [FLINK-11510] [DataStream] Add the 
MultiFieldSumAggregator to support KeyedStream.sum(int[] positionToSums )
URL: https://github.com/apache/flink/pull/7638#issuecomment-459738297
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7641: [FLINK-11513] Port CliFrontendSavepointTest to new code base

2019-02-01 Thread GitBox
flinkbot commented on issue #7641: [FLINK-11513] Port CliFrontendSavepointTest 
to new code base
URL: https://github.com/apache/flink/pull/7641#issuecomment-459738286
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * [ ] 1. The [description] looks good.
   * [ ] 2. There is [consensus] that the contribution should go into Flink.
   * [ ] 3. [Does not need specific [attention] | Needs specific attention for 
X | Has attention for X by Y]
   * [ ] 4. The [architecture] is sound.
   * [ ] 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) if you have questions about 
the review process or the usage of this bot


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dianfu commented on issue #7626: [FLINK-11450][table] Port and move TableSource and TableSink to flink-table-common

2019-02-01 Thread GitBox
dianfu commented on issue #7626: [FLINK-11450][table] Port and move TableSource 
and TableSink to flink-table-common
URL: https://github.com/apache/flink/pull/7626#issuecomment-459735881
 
 
   OK. I'll keep it in mind and create another JIRA for this change. Thanks a 
lot.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] twalthr commented on issue #7626: [FLINK-11450][table] Port and move TableSource and TableSink to flink-table-common

2019-02-01 Thread GitBox
twalthr commented on issue #7626: [FLINK-11450][table] Port and move 
TableSource and TableSink to flink-table-common
URL: https://github.com/apache/flink/pull/7626#issuecomment-459735078
 
 
   @dianfu that makes sense to me. The overall goal should be that the API 
mentioned in 
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/table/connect.html
 is still valid.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dianfu commented on issue #7626: [FLINK-11450][table] Port and move TableSource and TableSink to flink-table-common

2019-02-01 Thread GitBox
dianfu commented on issue #7626: [FLINK-11450][table] Port and move TableSource 
and TableSink to flink-table-common
URL: https://github.com/apache/flink/pull/7626#issuecomment-459732168
 
 
   @twalthr Thanks for the suggestion. Makes sense to me. Regarding 
StreamableDescriptor, I'd like to declare it as an interface that no longer 
extends TableDescriptor. The reason is that TableDescriptor is an abstract 
class, so StreamableDescriptor would also have to be an abstract class if it 
extended TableDescriptor; since StreamTableDescriptor needs to extend 
ConnectTableDescriptor and also be a StreamableDescriptor, that would be 
impossible. What are your thoughts on this change?
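
   To make the constraint concrete, here is a heavily simplified, hypothetical Java sketch (not the real descriptor classes, which carry many more members): a class can extend only one class, so StreamableDescriptor has to be an interface for StreamTableDescriptor to combine it with ConnectTableDescriptor.

```java
// Hypothetical, stripped-down illustration of the hierarchy discussed above.
abstract class TableDescriptor { }

abstract class ConnectTableDescriptor extends TableDescriptor { }

// If StreamableDescriptor were an abstract class extending TableDescriptor,
// StreamTableDescriptor could not extend ConnectTableDescriptor as well,
// because Java allows only single class inheritance. As an interface it works:
interface StreamableDescriptor {
    StreamableDescriptor inAppendMode();
}

class StreamTableDescriptor extends ConnectTableDescriptor implements StreamableDescriptor {
    @Override
    public StreamableDescriptor inAppendMode() {
        return this;
    }
}
```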


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] azagrebin commented on a change in pull request #6615: [FLINK-8354] [flink-connectors] Add ability to access and provider Kafka headers

2019-02-01 Thread GitBox
azagrebin commented on a change in pull request #6615: [FLINK-8354] 
[flink-connectors] Add ability to access and provider Kafka headers
URL: https://github.com/apache/flink/pull/6615#discussion_r253055987
 
 

 ##
 File path: 
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/util/serialization/KeyedSerializationSchema.java
 ##
 @@ -55,4 +57,13 @@
 * @return null or the target topic
 */
String getTargetTopic(T element);
+
+   /**
+*
+* @param element The incoming element to be serialized
+* @return collection of headers (maybe empty)
+*/
+   default Iterable> headers(T element) {
 
 Review comment:
   @alexeyt820 
   Ideally we could deprecate the partitioner in the producer constructor as 
well, because ProducerRecord already contains a partition that the user can 
assign for the record.
   
   Instead of deprecating individual methods in the (de)serialization schema 
interfaces, we could deprecate those interfaces entirely and introduce new 
`Kafka(De)SerializationSchema` interfaces that work with Kafka's 
`Consumer/ProducerRecord` classes.
   
   We would also introduce adaptors from the older schemas to the newer ones. 
The producer/consumer constructors, which currently accept the older schemas, 
would use these adaptors to create the newer schemas, and the actual code would 
then work with the newer schemas only.
   
   The serialization schema adaptor could optionally take a 
`FlinkKafkaPartitioner` to populate the partition, similar to how it currently 
happens directly in the producer.
   
   What do you think?
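
   A rough, hypothetical sketch of what such a record-based serialization schema and an adaptor from the existing `KeyedSerializationSchema` could look like (the names `KafkaRecordSerializationSchemaSketch` and `KeyedSchemaAdaptorSketch` are made up for illustration and are not actual Flink API; the optional partitioner handling is left out):

```java
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.io.Serializable;

/** Hypothetical schema that produces Kafka ProducerRecords directly. */
interface KafkaRecordSerializationSchemaSketch<T> extends Serializable {
    ProducerRecord<byte[], byte[]> serialize(T element);
}

/** Hypothetical adaptor wrapping the existing key/value/topic-based schema. */
class KeyedSchemaAdaptorSketch<T> implements KafkaRecordSerializationSchemaSketch<T> {

    private final KeyedSerializationSchema<T> legacySchema;
    private final String defaultTopic;

    KeyedSchemaAdaptorSketch(KeyedSerializationSchema<T> legacySchema, String defaultTopic) {
        this.legacySchema = legacySchema;
        this.defaultTopic = defaultTopic;
    }

    @Override
    public ProducerRecord<byte[], byte[]> serialize(T element) {
        // Fall back to a default topic when the legacy schema does not specify one.
        String topic = legacySchema.getTargetTopic(element);
        return new ProducerRecord<>(
                topic != null ? topic : defaultTopic,
                legacySchema.serializeKey(element),
                legacySchema.serializeValue(element));
    }
}
```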


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aljoscha commented on a change in pull request #7639: [FLINK-9920] Only check part files in BucketingSinkFaultToleranceITCase

2019-02-01 Thread GitBox
aljoscha commented on a change in pull request #7639: [FLINK-9920] Only check 
part files in BucketingSinkFaultToleranceITCase
URL: https://github.com/apache/flink/pull/7639#discussion_r253055436
 
 

 ##
 File path: 
flink-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSinkFaultToleranceITCase.java
 ##
 @@ -144,6 +148,10 @@ public void postSubmit() throws Exception {
 
while (files.hasNext()) {
LocatedFileStatus file = files.next();
+// if (!file.getPath().getName().startsWith(PART_PREFIX)) {
+   // ignore files that don't match with our 
expected part prefix
+// continue;
+// }
 
 Review comment:
   Ah, this is the code I actually added. I must have commented it out by accident.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] twalthr commented on issue #7626: [FLINK-11450][table] Port and move TableSource and TableSink to flink-table-common

2019-02-01 Thread GitBox
twalthr commented on issue #7626: [FLINK-11450][table] Port and move 
TableSource and TableSink to flink-table-common
URL: https://github.com/apache/flink/pull/7626#issuecomment-459726492
 
 
   Thanks for working on this @dianfu. Here are my thoughts:
   
   - Add ColumnStats to this list.
   - SchematicDescriptor requires Schema. I would postpone this for now.
   - RegistrableDescriptor is more an API thing and must not be used by 
extension points in common.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11447) Deprecate "new Table(TableEnvironment, String)"

2019-02-01 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758307#comment-16758307
 ] 

Hequn Cheng commented on FLINK-11447:
-

Hi, [~twalthr]. I agree that the Table API does not need to be completely in 
sync with SQL. And yes, it is very unlikely that we will ever support correlated 
subqueries in the Table API.

Hi, [~dawidwys]. I agree with you that we had better not introduce the 
Lateral.subquery concept for now. Such a feature would definitely need another 
big discussion.

Thanks again for your nice comments. I joined the discussion because I saw some 
good points raised by the second option listed by [~sunjincheng121]:
 - We don't need to add another kind of join.
 - The right side of the join should be a Table.

I would like to share more thoughts on these two points.
1. We don't need to add another kind of join.
The semantics of `joinLateral` are exactly the same as those of `inner join`, 
i.e., it is neither a left join nor a right join, so it is clearer to simply 
use `join`. We only need another kind of join if it behaves differently; for 
example, we need to add antiJoin/semiJoin because they join two tables 
differently.

2. The right side of the join should be a Table.
We can only join two tables. I think this is the problem we need to solve when 
we make Table an interface, and I'd like to propose a solution as follows.
{code:java}
object Table {
  def create(tEnv: TableEnvironment, expr: String): Table = {
    tEnv.createTable(expr)
  }
}
{code}
Similar to {{TableEnvironment.create()}}, we can also add a create method to 
Table. The query written by the user would then be {{table.join(Table.create(tEnv, 
"xxx"))}}, similar to the previous form, {{table.join(new Table(tEnv, "xxx"))}}.

What do you guys think?

> Deprecate "new Table(TableEnvironment, String)"
> ---
>
> Key: FLINK-11447
> URL: https://issues.apache.org/jira/browse/FLINK-11447
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Dian Fu
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> Once table is an interface we can easily replace the underlying 
> implementation at any time. The constructor call prevents us from converting 
> it into an interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] dianfu edited a comment on issue #7626: [FLINK-11450][table] Port and move TableSource and TableSink to flink-table-common

2019-02-01 Thread GitBox
dianfu edited a comment on issue #7626: [FLINK-11450][table] Port and move 
TableSource and TableSink to flink-table-common
URL: https://github.com/apache/flink/pull/7626#issuecomment-459722585
 
 
   @twalthr Thanks a lot for the review. The suggestion makes sense and I will 
pay attention to it next time. Regarding the other changes in the original PR, 
I'd like to first port the following classes to flink-table-common: Metadata, 
MetadataValidator, TableStats, SchematicDescriptor, Statistics, 
StatisticsValidator, TableDescriptor, RegistrableDescriptor, 
StreamableDescriptor, StreamTableDescriptorValidator. Does that make sense to 
you?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dianfu commented on issue #7626: [FLINK-11450][table] Port and move TableSource and TableSink to flink-table-common

2019-02-01 Thread GitBox
dianfu commented on issue #7626: [FLINK-11450][table] Port and move TableSource 
and TableSink to flink-table-common
URL: https://github.com/apache/flink/pull/7626#issuecomment-459722585
 
 
   @twalthr Thanks a lot for the review. The suggestion makes sense and I will 
pay attention to it next time. Regarding the other changes to the original PR, 
I'd like to first port the following classes to flink-table-common: Metadata, 
MetadataValidator, TableStats, SchematicDescriptor, Statistics, 
StatisticsValidator, TableDescriptor, RegistrableDescriptor, 
StreamableDescriptor, StreamTableDescriptorValidator. Does that make sense to 
you?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Myasuka commented on a change in pull request #7639: [FLINK-9920] Only check part files in BucketingSinkFaultToleranceITCase

2019-02-01 Thread GitBox
Myasuka commented on a change in pull request #7639: [FLINK-9920] Only check 
part files in BucketingSinkFaultToleranceITCase
URL: https://github.com/apache/flink/pull/7639#discussion_r253042777
 
 

 ##
 File path: 
flink-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSinkFaultToleranceITCase.java
 ##
 @@ -144,6 +148,10 @@ public void postSubmit() throws Exception {
 
while (files.hasNext()) {
LocatedFileStatus file = files.next();
+// if (!file.getPath().getName().startsWith(PART_PREFIX)) {
+   // ignore files that don't match with our 
expected part prefix
+// continue;
+// }
 
 Review comment:
   The commented-out code should be removed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-11514) Check and port ClusterClientTest to new code base if necessary

2019-02-01 Thread TisonKun (JIRA)
TisonKun created FLINK-11514:


 Summary: Check and port ClusterClientTest to new code base if 
necessary
 Key: FLINK-11514
 URL: https://issues.apache.org/jira/browse/FLINK-11514
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.8.0
Reporter: TisonKun
 Fix For: 1.8.0


Check and port {{ClusterClientTest}} to new code base if necessary



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11515) Check and port ClientTest to new code base if necessary

2019-02-01 Thread TisonKun (JIRA)
TisonKun created FLINK-11515:


 Summary: Check and port ClientTest to new code base if necessary
 Key: FLINK-11515
 URL: https://issues.apache.org/jira/browse/FLINK-11515
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.8.0
Reporter: TisonKun
 Fix For: 1.8.0


Check and port {{ClientTest}} to new code base if necessary



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11513) Port CliFrontendSavepointTest to new code base

2019-02-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11513:
---
Labels: pull-request-available  (was: )

> Port CliFrontendSavepointTest to new code base
> --
>
> Key: FLINK-11513
> URL: https://issues.apache.org/jira/browse/FLINK-11513
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.8.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] TisonKun opened a new pull request #7641: [FLINK-11513] Port CliFrontendSavepointTest to new code base

2019-02-01 Thread GitBox
TisonKun opened a new pull request #7641: [FLINK-11513] Port 
CliFrontendSavepointTest to new code base
URL: https://github.com/apache/flink/pull/7641
 
 
   ## What is the purpose of the change
   
   Replace `StandaloneClusterClient` with 
`RestClusterClient`
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector:(no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   
   cc @tillrohrmann @mxm 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-8902) Re-scaling job sporadically fails with KeeperException

2019-02-01 Thread Peter Westermann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758271#comment-16758271
 ] 

Peter Westermann commented on FLINK-8902:
-

I am running into the same issue - just for me it doesn't happen sporadically, 
it happens every time I attempt to rescale a job. This is on a Flink 1.6.3 
standalone cluster with HA via zookeeper and RocksDB backed by S3 as the state 
backend. 

 

 

> Re-scaling job sporadically fails with KeeperException
> --
>
> Key: FLINK-8902
> URL: https://issues.apache.org/jira/browse/FLINK-8902
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0, 1.6.0
> Environment: Commit: 80020cb
> Hadoop: 2.8.3
> YARN
>  
>Reporter: Gary Yao
>Priority: Critical
>  Labels: flip6
> Fix For: 1.6.4, 1.7.2, 1.8.0
>
>
> *Description*
>  Re-scaling a job with {{bin/flink modify -p }} sporadically 
> fails with a {{KeeperException}}
> *Steps to reproduce*
>  # Submit job to Flink cluster with flip6 enabled running on YARN (session 
> mode).
>  # Re-scale job (5-20 times)
> *Stacktrace (client)*
> {noformat}
> org.apache.flink.util.FlinkException: Could not rescale job 
> 61e2e99db2e959ebd94e40f9c5e816bc.
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$modify$8(CliFrontend.java:766)
>   at 
> org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:954)
>   at org.apache.flink.client.cli.CliFrontend.modify(CliFrontend.java:757)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1037)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.jobmaster.exceptions.JobModificationException: Could 
> not restore from temporary rescaling savepoint. This might indicate that the 
> savepoint 
> hdfs://172.31.33.72:9000/flinkha/savepoints/savepoint-61e2e9-fdb3d05a0035 got 
> corrupted. Deleting this savepoint as a precaution.
>   at 
> org.apache.flink.runtime.jobmaster.JobMaster.lambda$rescaleOperators$3(JobMaster.java:525)
>   at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
>   at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:295)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:150)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleMessage(FencedAkkaRpcActor.java:66)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:132)
>   at 
> akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>   at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: 
> org.apache.flink.runtime.jobmaster.exceptions.JobModificationException: Could 
> not restore from temporary rescaling savepoint. This might indicate that the 
> savepoint 
> hdfs://172.31.33.72:9000/flinkha/savepoints/savepoint-61e2e9-fdb3d05a0035 got 
> corrupted. Deleting this savepoint as a precaution.
>   at 
> org.apache.flink.runtime.jobmaster.JobMaster.lambda$restoreExecutionGraphFromRescalingSavepoint$17(JobMaster.java:1317)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>   at 
> java.util.concurrent.CompletableFuture$

[jira] [Created] (FLINK-11513) Port CliFrontendSavepointTest to new code base

2019-02-01 Thread TisonKun (JIRA)
TisonKun created FLINK-11513:


 Summary: Port CliFrontendSavepointTest to new code base
 Key: FLINK-11513
 URL: https://issues.apache.org/jira/browse/FLINK-11513
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.8.0
Reporter: TisonKun
Assignee: TisonKun
 Fix For: 1.8.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11451) Move *QueryConfig and TableDescriptor to flink-table-api-java

2019-02-01 Thread Dawid Wysakowicz (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-11451.

   Resolution: Fixed
Fix Version/s: 1.8.0

Implemented via: ed4b94001fe6d6115e5887bbdd263fd3247548b2

> Move *QueryConfig and TableDescriptor to flink-table-api-java
> -
>
> Key: FLINK-11451
> URL: https://issues.apache.org/jira/browse/FLINK-11451
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: xueyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> Move QueryConfig, BatchQueryConfig, StreamQueryConfig, TableDescriptor in 
> flink-table-api-java.
> Unblocks TableEnvironment interface task.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] dawidwys commented on issue #7621: [FLINK-11451][table] move *QueryConfig and TableDescriptor to flink-table-api-java

2019-02-01 Thread GitBox
dawidwys commented on issue #7621: [FLINK-11451][table] move *QueryConfig and 
TableDescriptor to flink-table-api-java
URL: https://github.com/apache/flink/pull/7621#issuecomment-459713821
 
 
   Thank you @xueyumusic for your contribution


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dawidwys closed pull request #7621: [FLINK-11451][table] move *QueryConfig and TableDescriptor to flink-table-api-java

2019-02-01 Thread GitBox
dawidwys closed pull request #7621: [FLINK-11451][table] move *QueryConfig and 
TableDescriptor to flink-table-api-java
URL: https://github.com/apache/flink/pull/7621
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11512) Port CliFrontendModifyTest to new code base

2019-02-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11512:
---
Labels: pull-request-available  (was: )

> Port CliFrontendModifyTest to new code base
> ---
>
> Key: FLINK-11512
> URL: https://issues.apache.org/jira/browse/FLINK-11512
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.8.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] TisonKun opened a new pull request #7640: [FLINK-11512] Port CliFrontendModifyTest to new code base

2019-02-01 Thread GitBox
TisonKun opened a new pull request #7640: [FLINK-11512] Port 
CliFrontendModifyTest to new code base
URL: https://github.com/apache/flink/pull/7640
 
 
   ## What is the purpose of the change
   
   Replace `StandaloneClusterClient` with 
`RestClusterClient`
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector:(no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   
   cc @tillrohrmann @mxm 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-11512) Port CliFrontendModifyTest to new code base

2019-02-01 Thread TisonKun (JIRA)
TisonKun created FLINK-11512:


 Summary: Port CliFrontendModifyTest to new code base
 Key: FLINK-11512
 URL: https://issues.apache.org/jira/browse/FLINK-11512
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.8.0
Reporter: TisonKun
Assignee: TisonKun
 Fix For: 1.8.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] aljoscha commented on issue #7639: [FLINK-9920] Only check part files in BucketingSinkFaultToleranceITCase

2019-02-01 Thread GitBox
aljoscha commented on issue #7639: [FLINK-9920] Only check part files in 
BucketingSinkFaultToleranceITCase
URL: https://github.com/apache/flink/pull/7639#issuecomment-459705216
 
 
   Before, it could happen that other files are in the same directory. Then
   the verification logic in the test would pick up those files and the
   test would fail. Now we ignore all files that don't match our expected
   pattern.
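
   A minimal sketch of the filtering described above, assuming the expected 
prefix is the sink's default part-file prefix (the helper class and constant 
here are made up; the real test inlines the check in its verification loop and 
skips non-matching files with `continue`):

```java
import org.apache.hadoop.fs.LocatedFileStatus;

final class PartFileCheck {
    // Assumed default part prefix of the BucketingSink; configurable in reality.
    private static final String PART_PREFIX = "part";

    /** Returns true only for output files the fault-tolerance test should verify. */
    static boolean isExpectedPartFile(LocatedFileStatus file) {
        return file.getPath().getName().startsWith(PART_PREFIX);
    }
}
```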


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-9920) BucketingSinkFaultToleranceITCase fails on travis

2019-02-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9920:
--
Labels: pull-request-available test-stability  (was: test-stability)

> BucketingSinkFaultToleranceITCase fails on travis
> -
>
> Key: FLINK-9920
> URL: https://issues.apache.org/jira/browse/FLINK-9920
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.6.0, 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.8.0
>
>
> https://travis-ci.org/zentol/flink/jobs/407021898
> {code}
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 13.082 sec 
> <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.fs.bucketing.BucketingSinkFaultToleranceITCase
> runCheckpointedProgram(org.apache.flink.streaming.connectors.fs.bucketing.BucketingSinkFaultToleranceITCase)
>   Time elapsed: 5.696 sec  <<< FAILURE!
> java.lang.AssertionError: Read line does not match expected pattern.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.streaming.connectors.fs.bucketing.BucketingSinkFaultToleranceITCase.postSubmit(BucketingSinkFaultToleranceITCase.java:182)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] dianfu commented on issue #7626: [FLINK-11450][table] Port and move TableSource and TableSink to flink-table-common

2019-02-01 Thread GitBox
dianfu commented on issue #7626: [FLINK-11450][table] Port and move TableSource 
and TableSink to flink-table-common
URL: https://github.com/apache/flink/pull/7626#issuecomment-459707309
 
 
   @twalthr @dawidwys I have updated the PR to make the change here focus only 
on TableSource/TableSink related changes. Looking forward to your review. 
Thanks in advance.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aljoscha commented on issue #7639: [FLINK-9920] Only check part files in BucketingSinkFaultToleranceITCase

2019-02-01 Thread GitBox
aljoscha commented on issue #7639: [FLINK-9920] Only check part files in 
BucketingSinkFaultToleranceITCase
URL: https://github.com/apache/flink/pull/7639#issuecomment-459705079
 
 
   cc @Myasuka 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aljoscha opened a new pull request #7639: [FLINK-9920] Only check part files in BucketingSinkFaultToleranceITCase

2019-02-01 Thread GitBox
aljoscha opened a new pull request #7639: [FLINK-9920] Only check part files in 
BucketingSinkFaultToleranceITCase
URL: https://github.com/apache/flink/pull/7639
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9920) BucketingSinkFaultToleranceITCase fails on travis

2019-02-01 Thread Aljoscha Krettek (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758249#comment-16758249
 ] 

Aljoscha Krettek commented on FLINK-9920:
-

Excellent, [~yunta]! I'll post a quick PR.

> BucketingSinkFaultToleranceITCase fails on travis
> -
>
> Key: FLINK-9920
> URL: https://issues.apache.org/jira/browse/FLINK-9920
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.6.0, 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.8.0
>
>
> https://travis-ci.org/zentol/flink/jobs/407021898
> {code}
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 13.082 sec 
> <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.fs.bucketing.BucketingSinkFaultToleranceITCase
> runCheckpointedProgram(org.apache.flink.streaming.connectors.fs.bucketing.BucketingSinkFaultToleranceITCase)
>   Time elapsed: 5.696 sec  <<< FAILURE!
> java.lang.AssertionError: Read line does not match expected pattern.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.streaming.connectors.fs.bucketing.BucketingSinkFaultToleranceITCase.postSubmit(BucketingSinkFaultToleranceITCase.java:182)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11445) Deprecate static methods in TableEnvironments

2019-02-01 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-11445.

   Resolution: Fixed
Fix Version/s: 1.8.0
 Release Note: In order to separate API from actual implementation, the 
static methods TableEnvironment.getTableEnvironment() are deprecated. Use 
Batch/StreamTableEnvironment.create() instead.

Fixed in 1.8.0: 5d8e4d5b630b5e7da99ac879d102dbb18d6b6ac6
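
A hedged illustration of the migration implied by the release note (assuming a 
DataStream program and the pre-1.9 package layout; the exact imports may differ 
depending on the Flink modules in use):

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class TableEnvironmentMigrationSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Deprecated by this issue:
        // StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

        // Recommended replacement:
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
    }
}
{code}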

> Deprecate static methods in TableEnvironments
> -
>
> Key: FLINK-11445
> URL: https://issues.apache.org/jira/browse/FLINK-11445
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> Move directly to the {{Batch/StreamTableEnvironment.create()}} approach. The 
> {{create()}} method does not necessarily have to perform planner discovery yet; 
> we can hard-code the target table environment for now. 
> {{TableEnvironment.create()}} is not supported yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] asfgit closed pull request #7622: [FLINK-11445][table] Deprecate static methods in TableEnvironments

2019-02-01 Thread GitBox
asfgit closed pull request #7622: [FLINK-11445][table] Deprecate static methods 
in TableEnvironments
URL: https://github.com/apache/flink/pull/7622
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-11462) "Powered by Flink" page does not display correctly on Firefox

2019-02-01 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-11462.
-
Resolution: Fixed

Fixed with 1b8aadf5988b04061338caee9232676a3cc28fb6

Thanks [~plucas]!

> "Powered by Flink" page does not display correctly on Firefox
> -
>
> Key: FLINK-11462
> URL: https://issues.apache.org/jira/browse/FLINK-11462
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Patrick Lucas
>Assignee: Patrick Lucas
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2019-01-30 at 12.13.03.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The JavaScript that sets the height of each "tile" does not work as expected 
> in Firefox, causing them to overlap (see attached screenshot). [This Stack 
> Overflow 
> post|https://stackoverflow.com/questions/12184133/jquery-wrong-values-when-trying-to-get-div-height]
>  suggests using jQuery's {{outerHeight}} instead of {{height}} method, and 
> making this change seems to make the page display correctly in both Firefox 
> and Chrome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11506) In the Windows environment, through sql-client, execute the SQL statement for a few minutes after the client interrupt exit, reported IO exception

2019-02-01 Thread Chen Zun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zun updated FLINK-11506:
-
Description: 
 When I executed the SQL statement select * from TaixFares; using sql-client, the 
following exception was reported:
{code:java}
Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
Unexpected exception. This is a bug. Please consider filing an issue. at 
org.apache.flink.table.client.SqlClient.main(SqlClient.java:199) Caused by: 
java.io.IOError: java.io.InterruptedIOException: Command interrupted at 
org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:54)
 at org.apache.flink.table.client.cli.CliView.restoreTerminal(CliView.java:398) 
at org.apache.flink.table.client.cli.CliView.open(CliView.java:143) at 
org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:401) at 
org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:261) at 
java.util.Optional.ifPresent(Optional.java:159) at 
org.apache.flink.table.client.cli.CliClient.open(CliClient.java:193) at 
org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:121) at 
org.apache.flink.table.client.SqlClient.start(SqlClient.java:105) at 
org.apache.flink.table.client.SqlClient.main(SqlClient.java:187) Caused by: 
java.io.InterruptedIOException: Command interrupted at 
org.jline.utils.ExecHelper.exec(ExecHelper.java:46) at 
org.jline.terminal.impl.ExecPty.doGetConfig(ExecPty.java:175) at 
org.jline.terminal.impl.ExecPty.getAttr(ExecPty.java:87) at 
org.jline.terminal.impl.ExecPty.doSetAttr(ExecPty.java:93) at 
org.jline.terminal.impl.AbstractPty.setAttr(AbstractPty.java:21) at 
org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:52)
 ... 9 more Caused by: java.lang.InterruptedException at 
java.lang.ProcessImpl.waitFor(ProcessImpl.java:451) at 
org.jline.utils.ExecHelper.waitAndCapture(ExecHelper.java:66) at 
org.jline.utils.ExecHelper.exec(ExecHelper.java:36) ... 14 more Shutting down 
executor...done.{code}
The log content of sql-client*.log file when SQL statement is executed:
{code:java}
2019-02-01 19:09:47,147 INFO 
org.apache.flink.table.client.config.entries.ExecutionEntry - Property 
'execution.restart-strategy.type' not specified. Using default value: fallback
2019-02-01 19:10:32,679 INFO 
org.apache.flink.table.client.config.entries.ExecutionEntry - Property 
'execution.restart-strategy.type' not specified. Using default value: fallback
2019-02-01 19:10:34,220 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.joda.time.DateTime does not contain a getter for field iMillis
2019-02-01 19:10:34,220 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.joda.time.DateTime does not contain a setter for field iMillis
2019-02-01 19:10:34,220 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- Class class org.joda.time.DateTime cannot be used as a POJO type because not 
all fields are valid POJO fields, and must be processed as GenericType. Please 
read the Flink documentation on "Data Types & Serialization" for details of the 
effect on performance.
2019-02-01 19:10:34,222 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.joda.time.DateTime does not contain a getter for field iMillis
2019-02-01 19:10:34,223 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.joda.time.DateTime does not contain a setter for field iMillis
2019-02-01 19:10:34,223 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- Class class org.joda.time.DateTime cannot be used as a POJO type because not 
all fields are valid POJO fields, and must be processed as GenericType. Please 
read the Flink documentation on "Data Types & Serialization" for details of the 
effect on performance.
2019-02-01 19:10:34,279 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.apache.flink.types.Row does not contain a getter for field fields
2019-02-01 19:10:34,279 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.apache.flink.types.Row does not contain a setter for field fields
2019-02-01 19:10:34,279 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- Class class org.apache.flink.types.Row cannot be used as a POJO type because 
not all fields are valid POJO fields, and must be processed as GenericType. 
Please read the Flink documentation on "Data Types & Serialization" for details 
of the effect on performance.
2019-02-01 19:10:34,785 INFO org.apache.flink.configuration.GlobalConfiguration 
- Loading configuration property: jobmanager.rpc.address, localhost
2019-02-01 19:10:34,786 INFO org.apache.flink.configuration.GlobalConfiguration 
- Loading configuration property: jobmanager.rpc.port, 6123
2019-02-01 19:10:34,786 INFO org.apache.flink.configuration.GlobalConfiguration 
- Loading configuration property: jobmanager.heap.size, 1024m
2019-02-01 19:10:34,786 INF

[jira] [Updated] (FLINK-11506) In the Windows environment, through sql-client, execute the SQL statement for a few minutes after the client interrupt exit, reported IO exception

2019-02-01 Thread Chen Zun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zun updated FLINK-11506:
-
Description: 
 When I executed the SQL statement select * from TaixFares; using sql-client, the 
following exception was reported:
{code:java}
Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
Unexpected exception. This is a bug. Please consider filing an issue. at 
org.apache.flink.table.client.SqlClient.main(SqlClient.java:199) Caused by: 
java.io.IOError: java.io.InterruptedIOException: Command interrupted at 
org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:54)
 at org.apache.flink.table.client.cli.CliView.restoreTerminal(CliView.java:398) 
at org.apache.flink.table.client.cli.CliView.open(CliView.java:143) at 
org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:401) at 
org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:261) at 
java.util.Optional.ifPresent(Optional.java:159) at 
org.apache.flink.table.client.cli.CliClient.open(CliClient.java:193) at 
org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:121) at 
org.apache.flink.table.client.SqlClient.start(SqlClient.java:105) at 
org.apache.flink.table.client.SqlClient.main(SqlClient.java:187) Caused by: 
java.io.InterruptedIOException: Command interrupted at 
org.jline.utils.ExecHelper.exec(ExecHelper.java:46) at 
org.jline.terminal.impl.ExecPty.doGetConfig(ExecPty.java:175) at 
org.jline.terminal.impl.ExecPty.getAttr(ExecPty.java:87) at 
org.jline.terminal.impl.ExecPty.doSetAttr(ExecPty.java:93) at 
org.jline.terminal.impl.AbstractPty.setAttr(AbstractPty.java:21) at 
org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:52)
 ... 9 more Caused by: java.lang.InterruptedException at 
java.lang.ProcessImpl.waitFor(ProcessImpl.java:451) at 
org.jline.utils.ExecHelper.waitAndCapture(ExecHelper.java:66) at 
org.jline.utils.ExecHelper.exec(ExecHelper.java:36) ... 14 more Shutting down 
executor...done.{code}
The log content of sql-client*.log file when SQL statement is executed:
{code:java}
2019-02-01 19:09:47,147 INFO 
org.apache.flink.table.client.config.entries.ExecutionEntry - Property 
'execution.restart-strategy.type' not specified. Using default value: fallback
2019-02-01 19:10:32,679 INFO 
org.apache.flink.table.client.config.entries.ExecutionEntry - Property 
'execution.restart-strategy.type' not specified. Using default value: fallback
2019-02-01 19:10:34,220 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.joda.time.DateTime does not contain a getter for field iMillis
2019-02-01 19:10:34,220 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.joda.time.DateTime does not contain a setter for field iMillis
2019-02-01 19:10:34,220 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- Class class org.joda.time.DateTime cannot be used as a POJO type because not 
all fields are valid POJO fields, and must be processed as GenericType. Please 
read the Flink documentation on "Data Types & Serialization" for details of the 
effect on performance.
2019-02-01 19:10:34,222 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.joda.time.DateTime does not contain a getter for field iMillis
2019-02-01 19:10:34,223 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.joda.time.DateTime does not contain a setter for field iMillis
2019-02-01 19:10:34,223 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- Class class org.joda.time.DateTime cannot be used as a POJO type because not 
all fields are valid POJO fields, and must be processed as GenericType. Please 
read the Flink documentation on "Data Types & Serialization" for details of the 
effect on performance.
2019-02-01 19:10:34,279 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.apache.flink.types.Row does not contain a getter for field fields
2019-02-01 19:10:34,279 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- class org.apache.flink.types.Row does not contain a setter for field fields
2019-02-01 19:10:34,279 INFO org.apache.flink.api.java.typeutils.TypeExtractor 
- Class class org.apache.flink.types.Row cannot be used as a POJO type because 
not all fields are valid POJO fields, and must be processed as GenericType. 
Please read the Flink documentation on "Data Types & Serialization" for details 
of the effect on performance.
2019-02-01 19:10:34,785 INFO org.apache.flink.configuration.GlobalConfiguration 
- Loading configuration property: jobmanager.rpc.address, localhost
2019-02-01 19:10:34,786 INFO org.apache.flink.configuration.GlobalConfiguration 
- Loading configuration property: jobmanager.rpc.port, 6123
2019-02-01 19:10:34,786 INFO org.apache.flink.configuration.GlobalConfiguration 
- Loading configuration property: jobmanager.heap.size, 1024m
2019-02-01 19:10:34,786 INF

[jira] [Commented] (FLINK-11506) In the Windows environment, through sql-client, execute the SQL statement for a few minutes after the client interrupt exit, reported IO exception

2019-02-01 Thread Chen Zun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758217#comment-16758217
 ] 

Chen Zun commented on FLINK-11506:
--

I have uploaded my configuration file: sql-client-config.yaml

The file paths of TaxiRides and TaxiFares are modified to point to my local data 
files.

> In the Windows environment, through sql-client, execute the SQL statement for 
> a few minutes after the client interrupt exit, reported IO exception
> --
>
> Key: FLINK-11506
> URL: https://issues.apache.org/jira/browse/FLINK-11506
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.7.1
> Environment: windows7
> cygwin64
>  java -version
> java version "1.8.0_101"
> Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
>Reporter: Chen Zun
>Priority: Major
> Attachments: sql-client-config.yaml
>
>
>  When I executed the SQL statement select * from TaixFares; using sql-client, the 
> following exception was reported:
> {code:java}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue. at 
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:199) Caused by: 
> java.io.IOError: java.io.InterruptedIOException: Command interrupted at 
> org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:54)
>  at 
> org.apache.flink.table.client.cli.CliView.restoreTerminal(CliView.java:398) 
> at org.apache.flink.table.client.cli.CliView.open(CliView.java:143) at 
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:401) at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:261) 
> at java.util.Optional.ifPresent(Optional.java:159) at 
> org.apache.flink.table.client.cli.CliClient.open(CliClient.java:193) at 
> org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:121) at 
> org.apache.flink.table.client.SqlClient.start(SqlClient.java:105) at 
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:187) Caused by: 
> java.io.InterruptedIOException: Command interrupted at 
> org.jline.utils.ExecHelper.exec(ExecHelper.java:46) at 
> org.jline.terminal.impl.ExecPty.doGetConfig(ExecPty.java:175) at 
> org.jline.terminal.impl.ExecPty.getAttr(ExecPty.java:87) at 
> org.jline.terminal.impl.ExecPty.doSetAttr(ExecPty.java:93) at 
> org.jline.terminal.impl.AbstractPty.setAttr(AbstractPty.java:21) at 
> org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:52)
>  ... 9 more Caused by: java.lang.InterruptedException at 
> java.lang.ProcessImpl.waitFor(ProcessImpl.java:451) at 
> org.jline.utils.ExecHelper.waitAndCapture(ExecHelper.java:66) at 
> org.jline.utils.ExecHelper.exec(ExecHelper.java:36) ... 14 more Shutting down 
> executor...done.{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-10713) RestartIndividualStrategy does not restore state

2019-02-01 Thread boshu Zheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

boshu Zheng reassigned FLINK-10713:
---

Assignee: boshu Zheng

> RestartIndividualStrategy does not restore state
> 
>
> Key: FLINK-10713
> URL: https://issues.apache.org/jira/browse/FLINK-10713
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.3.3, 1.4.2, 1.5.5, 1.6.2, 1.7.0
>Reporter: Stefan Richter
>Assignee: boshu Zheng
>Priority: Critical
> Fix For: 1.8.0
>
>
> RestartIndividualStrategy does not perform any state restore. This is a big 
> problem because all restored regions will be restarted with empty state. We 
> need to take checkpoints into account when restoring.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11506) In the Windows environment, through sql-client, execute the SQL statement for a few minutes after the client interrupt exit, reported IO exception

2019-02-01 Thread Chen Zun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zun updated FLINK-11506:
-
Attachment: sql-client-config.yaml

> In the Windows environment, through sql-client, execute the SQL statement for 
> a few minutes after the client interrupt exit, reported IO exception
> --
>
> Key: FLINK-11506
> URL: https://issues.apache.org/jira/browse/FLINK-11506
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.7.1
> Environment: windows7
> cygwin64
>  java -version
> java version "1.8.0_101"
> Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
>Reporter: Chen Zun
>Priority: Major
> Attachments: sql-client-config.yaml
>
>
>  When I executed the SQL statement select * from TaixFares; using sql-client, the 
> following exception was reported:
> {code:java}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue. at 
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:199) Caused by: 
> java.io.IOError: java.io.InterruptedIOException: Command interrupted at 
> org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:54)
>  at 
> org.apache.flink.table.client.cli.CliView.restoreTerminal(CliView.java:398) 
> at org.apache.flink.table.client.cli.CliView.open(CliView.java:143) at 
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:401) at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:261) 
> at java.util.Optional.ifPresent(Optional.java:159) at 
> org.apache.flink.table.client.cli.CliClient.open(CliClient.java:193) at 
> org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:121) at 
> org.apache.flink.table.client.SqlClient.start(SqlClient.java:105) at 
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:187) Caused by: 
> java.io.InterruptedIOException: Command interrupted at 
> org.jline.utils.ExecHelper.exec(ExecHelper.java:46) at 
> org.jline.terminal.impl.ExecPty.doGetConfig(ExecPty.java:175) at 
> org.jline.terminal.impl.ExecPty.getAttr(ExecPty.java:87) at 
> org.jline.terminal.impl.ExecPty.doSetAttr(ExecPty.java:93) at 
> org.jline.terminal.impl.AbstractPty.setAttr(AbstractPty.java:21) at 
> org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:52)
>  ... 9 more Caused by: java.lang.InterruptedException at 
> java.lang.ProcessImpl.waitFor(ProcessImpl.java:451) at 
> org.jline.utils.ExecHelper.waitAndCapture(ExecHelper.java:66) at 
> org.jline.utils.ExecHelper.exec(ExecHelper.java:36) ... 14 more Shutting down 
> executor...done.{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11506) In the Windows environment, through sql-client, execute the SQL statement for a few minutes after the client interrupt exit, reported IO exception

2019-02-01 Thread Chen Zun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zun updated FLINK-11506:
-
Description: 
 When I executed the SQL statement select * from TaixFares; using sql-client, the 
following exception was reported:
{code:java}
Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
Unexpected exception. This is a bug. Please consider filing an issue. at 
org.apache.flink.table.client.SqlClient.main(SqlClient.java:199) Caused by: 
java.io.IOError: java.io.InterruptedIOException: Command interrupted at 
org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:54)
 at org.apache.flink.table.client.cli.CliView.restoreTerminal(CliView.java:398) 
at org.apache.flink.table.client.cli.CliView.open(CliView.java:143) at 
org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:401) at 
org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:261) at 
java.util.Optional.ifPresent(Optional.java:159) at 
org.apache.flink.table.client.cli.CliClient.open(CliClient.java:193) at 
org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:121) at 
org.apache.flink.table.client.SqlClient.start(SqlClient.java:105) at 
org.apache.flink.table.client.SqlClient.main(SqlClient.java:187) Caused by: 
java.io.InterruptedIOException: Command interrupted at 
org.jline.utils.ExecHelper.exec(ExecHelper.java:46) at 
org.jline.terminal.impl.ExecPty.doGetConfig(ExecPty.java:175) at 
org.jline.terminal.impl.ExecPty.getAttr(ExecPty.java:87) at 
org.jline.terminal.impl.ExecPty.doSetAttr(ExecPty.java:93) at 
org.jline.terminal.impl.AbstractPty.setAttr(AbstractPty.java:21) at 
org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:52)
 ... 9 more Caused by: java.lang.InterruptedException at 
java.lang.ProcessImpl.waitFor(ProcessImpl.java:451) at 
org.jline.utils.ExecHelper.waitAndCapture(ExecHelper.java:66) at 
org.jline.utils.ExecHelper.exec(ExecHelper.java:36) ... 14 more Shutting down 
executor...done.{code}

  was:
{code:java}
When I executed SQL statements using sql-client: select * from TaixFares;

Report the following Exception:
{code}


> In the Windows environment, through sql-client, execute the SQL statement for 
> a few minutes after the client interrupt exit, reported IO exception
> --
>
> Key: FLINK-11506
> URL: https://issues.apache.org/jira/browse/FLINK-11506
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.7.1
> Environment: windows7
> cygwin64
>  java -version
> java version "1.8.0_101"
> Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
>Reporter: Chen Zun
>Priority: Major
>
>  When I executed the SQL statement select * from TaixFares; using sql-client, the 
> following exception was reported:
> {code:java}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue. at 
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:199) Caused by: 
> java.io.IOError: java.io.InterruptedIOException: Command interrupted at 
> org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:54)
>  at 
> org.apache.flink.table.client.cli.CliView.restoreTerminal(CliView.java:398) 
> at org.apache.flink.table.client.cli.CliView.open(CliView.java:143) at 
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:401) at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:261) 
> at java.util.Optional.ifPresent(Optional.java:159) at 
> org.apache.flink.table.client.cli.CliClient.open(CliClient.java:193) at 
> org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:121) at 
> org.apache.flink.table.client.SqlClient.start(SqlClient.java:105) at 
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:187) Caused by: 
> java.io.InterruptedIOException: Command interrupted at 
> org.jline.utils.ExecHelper.exec(ExecHelper.java:46) at 
> org.jline.terminal.impl.ExecPty.doGetConfig(ExecPty.java:175) at 
> org.jline.terminal.impl.ExecPty.getAttr(ExecPty.java:87) at 
> org.jline.terminal.impl.ExecPty.doSetAttr(ExecPty.java:93) at 
> org.jline.terminal.impl.AbstractPty.setAttr(AbstractPty.java:21) at 
> org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:52)
>  ... 9 more Caused by: java.lang.InterruptedException at 
> java.lang.ProcessImpl.waitFor(ProcessImpl.java:451) at 
> org.jline.utils.ExecHelper.waitAndCapture(ExecHelper.java:66) at 
> org.jline.utils.ExecHelper.exec(ExecHelper.java:36) ... 14 more 

[jira] [Updated] (FLINK-11506) In the Windows environment, through sql-client, execute the SQL statement for a few minutes after the client interrupt exit, reported IO exception

2019-02-01 Thread Chen Zun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zun updated FLINK-11506:
-
Description: 
{code:java}
When I executed SQL statements using sql-client: select * from TaixFares;

Report the following Exception:
{code}

  was:
{code:java}
After entering the sql client, execute: select * from TaixFares;
Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
Unexpected exception. This is a bug. Please consider filing an issue.
at org.apache.flink.table.client.SqlClient.main(SqlClient.java:199)
Caused by: java.io.IOError: java.io.InterruptedIOException: Command interrupted
at 
org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:54)
at org.apache.flink.table.client.cli.CliView.restoreTerminal(CliView.java:398)
at org.apache.flink.table.client.cli.CliView.open(CliView.java:143)
at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:401)
at org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:261)
at java.util.Optional.ifPresent(Optional.java:159)
at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:193)
at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:121)
at org.apache.flink.table.client.SqlClient.start(SqlClient.java:105)
at org.apache.flink.table.client.SqlClient.main(SqlClient.java:187)
Caused by: java.io.InterruptedIOException: Command interrupted
at org.jline.utils.ExecHelper.exec(ExecHelper.java:46)
at org.jline.terminal.impl.ExecPty.doGetConfig(ExecPty.java:175)
at org.jline.terminal.impl.ExecPty.getAttr(ExecPty.java:87)
at org.jline.terminal.impl.ExecPty.doSetAttr(ExecPty.java:93)
at org.jline.terminal.impl.AbstractPty.setAttr(AbstractPty.java:21)


{code}


> In the Windows environment, through sql-client, execute the SQL statement for 
> a few minutes after the client interrupt exit, reported IO exception
> --
>
> Key: FLINK-11506
> URL: https://issues.apache.org/jira/browse/FLINK-11506
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.7.1
> Environment: windows7
> cygwin64
>  java -version
> java version "1.8.0_101"
> Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
>Reporter: Chen Zun
>Priority: Major
>
> {code:java}
> When I executed SQL statements using sql-client: select * from TaixFares;
> Report the following Exception:
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11506) In the Windows environment, through sql-client, execute the SQL statement for a few minutes after the client interrupt exit, reported IO exception

2019-02-01 Thread Chen Zun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zun updated FLINK-11506:
-
Summary: In the Windows environment, through sql-client, execute the SQL 
statement for a few minutes after the client interrupt exit, reported IO 
exception  (was: After logging in through sql-client.sh, executing an SQL statement 
reports an IO exception and then the client interrupts and exits)

> In the Windows environment, through sql-client, execute the SQL statement for 
> a few minutes after the client interrupt exit, reported IO exception
> --
>
> Key: FLINK-11506
> URL: https://issues.apache.org/jira/browse/FLINK-11506
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.7.1
> Environment: windows7
> cygwin64
>  java -version
> java version "1.8.0_101"
> Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
>Reporter: Chen Zun
>Priority: Major
>
> {code:java}
> After entering the sql client, execute: select * from TaixFares;
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
> at org.apache.flink.table.client.SqlClient.main(SqlClient.java:199)
> Caused by: java.io.IOError: java.io.InterruptedIOException: Command 
> interrupted
> at 
> org.jline.terminal.impl.AbstractPosixTerminal.setAttributes(AbstractPosixTerminal.java:54)
> at org.apache.flink.table.client.cli.CliView.restoreTerminal(CliView.java:398)
> at org.apache.flink.table.client.cli.CliView.open(CliView.java:143)
> at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:401)
> at org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:261)
> at java.util.Optional.ifPresent(Optional.java:159)
> at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:193)
> at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:121)
> at org.apache.flink.table.client.SqlClient.start(SqlClient.java:105)
> at org.apache.flink.table.client.SqlClient.main(SqlClient.java:187)
> Caused by: java.io.InterruptedIOException: Command interrupted
> at org.jline.utils.ExecHelper.exec(ExecHelper.java:46)
> at org.jline.terminal.impl.ExecPty.doGetConfig(ExecPty.java:175)
> at org.jline.terminal.impl.ExecPty.getAttr(ExecPty.java:87)
> at org.jline.terminal.impl.ExecPty.doSetAttr(ExecPty.java:93)
> at org.jline.terminal.impl.AbstractPty.setAttr(AbstractPty.java:21)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-8556) Add proxy feature to Kinesis Connector to access its endpoint

2019-02-01 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-8556:
-

Assignee: Ph.Duveau

> Add proxy feature to Kinesis Connector to access its endpoint
> 
>
> Key: FLINK-8556
> URL: https://issues.apache.org/jira/browse/FLINK-8556
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Ph.Duveau
>Assignee: Ph.Duveau
>Priority: Major
>  Labels: features
>
> The connector cannot be configured to use a proxy to access the Kinesis 
> endpoint. This feature is required on EC2 instances that can access the internet 
> only through a proxy. VPC Kinesis endpoints are currently available in only a few 
> AWS regions.
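For context, a sketch at the AWS SDK level only (AWS SDK for Java v1) of what routing
Kinesis traffic through a proxy looks like; how the Flink Kinesis connector should expose
this is exactly what the ticket asks for, so no connector property names are assumed here:
{code:java}
// SDK-level sketch only: proxy settings on the AWS ClientConfiguration.
import com.amazonaws.ClientConfiguration;

public class KinesisProxySketch {
    public static void main(String[] args) {
        ClientConfiguration clientConfig = new ClientConfiguration();
        clientConfig.setProxyHost("proxy.internal.example.com"); // hypothetical proxy host
        clientConfig.setProxyPort(3128);                         // hypothetical proxy port
        // A Kinesis client built with this ClientConfiguration would reach the
        // endpoint through the proxy.
    }
}
{code}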



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] wangpeibin713 opened a new pull request #7638: [FLINK-11510] [DataStream] Add the MultiFieldSumAggregator to support KeyedStream.sum(int[] positionToSums )[FLINK-11510] [DataStream] Add the M

2019-02-01 Thread GitBox
wangpeibin713 opened a new pull request #7638: [FLINK-11510] [DataStream] Add 
the MultiFieldSumAggregator to support KeyedStream.sum(int[] positionToSums 
)[FLINK-11510] [DataStream] Add the MultiFieldSumAggregator to support 
KeyedStream.sum(int[] positionToSums )
URL: https://github.com/apache/flink/pull/7638
 
 
   ## What is the purpose of the change
   
   - The goal is to implement a KeyedStream API that sums over multiple fields at 
once (see the usage sketch after this description).
 
   
   ## Brief change log
   
   - add the class:
 - MultiFieldSumAggregator
   - add function in KeyedStream:
 - public SingleOutputStreamOperator sum(int[] positionToSums )
 - public SingleOutputStreamOperator sum(String[] fields)
   - add unit tests in the following test classes:
 - DataStreamTest
 - AggregationFunctionTest
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   - *Added integration tests for end-to-end deployment with small data 
collection*
   
   ## Does this pull request potentially affect one of the following parts:
   
   - Dependencies (does it add or upgrade a dependency): *no*
   
   - The public API, i.e., is any changed class annotated 
with @Public(Evolving): yes
   
 - org.apache.flink.streaming.api.datastream.KeyedStream
   
   - The serializers: *no*
   
   - The runtime per-record code paths (performance sensitive): *no*
   
   - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: *no*
   
   - The S3 file system connector: *no*
   
   ## Documentation
   
   - Does this pull request introduce a new feature? yes
   - If yes, how is the feature documented? not documented
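
   A usage sketch for the API proposed in this PR; the sum(int[]) overload is the PR's 
proposal and not part of the released DataStream API, everything else is the standard API:
{code:java}
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MultiFieldSumSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(
                Tuple3.of("a", 1, 10),
                Tuple3.of("a", 2, 20),
                Tuple3.of("b", 3, 30))
            .keyBy(0)
            .sum(new int[]{1, 2}) // proposed overload: sum positions 1 and 2 per key
            .print();

        env.execute("multi-field sum sketch");
    }
}
{code}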


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dawidwys commented on a change in pull request #7621: [FLINK-11451][table] move *QueryConfig and TableDescriptor to flink-table-api-java

2019-02-01 Thread GitBox
dawidwys commented on a change in pull request #7621: [FLINK-11451][table] move 
*QueryConfig and TableDescriptor to flink-table-api-java
URL: https://github.com/apache/flink/pull/7621#discussion_r253000701
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/descriptors/TableDescriptor.java
 ##
 @@ -16,9 +16,14 @@
  * limitations under the License.
  */
 
-package org.apache.flink.table.descriptors
+package org.apache.flink.table.descriptors;
+
+import org.apache.flink.annotation.PublicEvolving;
 
 /**
   * Common class for all descriptors describing table sources and sinks.
   */
-abstract class TableDescriptor extends DescriptorBase
+@PublicEvolving
+public abstract class TableDescriptor extends DescriptorBase {
 
 Review comment:
   Oh, wasn't aware you cannot override methods from `Object` with 
`default` methods. Forget about this comment then.
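
   For readers following along, a minimal illustration of the Java rule referred to 
here (not part of the PR):
{code:java}
// An interface may not declare a default method that overrides a method of
// java.lang.Object; such methods must be implemented by classes instead.
public interface Named {
    String name();

    // Does not compile:
    // "A default method cannot override a method from java.lang.Object"
    // default String toString() { return "Named(" + name() + ")"; }

    // A default method with its own signature is fine:
    default String describe() {
        return "Named(" + name() + ")";
    }
}
{code}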


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-8556) Add proxy feature to Kinesis Connector to access its endpoint

2019-02-01 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-8556:
-

Assignee: Robert Metzger  (was: Ph.Duveau)

> Add proxy feature to Kinesis Connector to access its endpoint
> 
>
> Key: FLINK-8556
> URL: https://issues.apache.org/jira/browse/FLINK-8556
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Ph.Duveau
>Assignee: Robert Metzger
>Priority: Major
>  Labels: features
>
> The connector cannot be configured to use a proxy to access the Kinesis 
> endpoint. This feature is required on EC2 instances that can access the internet 
> only through a proxy. VPC Kinesis endpoints are currently available in only a few 
> AWS regions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] fhueske commented on issue #4660: [FLINK-7050][table] Add support of RFC compliant CSV parser for Table Source

2019-02-01 Thread GitBox
fhueske commented on issue #4660: [FLINK-7050][table] Add support of RFC 
compliant CSV parser for Table Source
URL: https://github.com/apache/flink/pull/4660#issuecomment-459675571
 
 
   I think we can keep it open for now. 
   It's a new connector and not conflicting with other changes (the TableSource 
interface should be rather easy to adjust).


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-11484) Blink java.util.concurrent.TimeoutException

2019-02-01 Thread pj (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758166#comment-16758166
 ] 

pj edited comment on FLINK-11484 at 2/1/19 10:13 AM:
-

PS: Yesterday was our last workday before the new year holiday. After I decreased 
the parallelism and increased the number of task managers, the same 
application ran very well on blink.


was (Author: pijing):
PS: Yesterday is our last workday before new year holiday. I have tried 
decrease the parallelism number and increased the taskmanager number, the same 
application on blink ran very well.

> Blink java.util.concurrent.TimeoutException
> ---
>
> Key: FLINK-11484
> URL: https://issues.apache.org/jira/browse/FLINK-11484
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.5.5
> Environment: The link of blink source code: 
> [github.com/apache/flink/tree/blink|https://github.com/apache/flink/tree/blink]
>Reporter: pj
>Priority: Major
>  Labels: blink
> Attachments: 1.png
>
>
> *If I run blink application on yarn and the parallelism number larger than 1.*
> *Following is the command :*
> ./flink run -m yarn-cluster -ynm FLINK_NG_ENGINE_1 -ys 4 -yn 10 -ytm 5120 -p 
> 40 -c XXMain ~/xx.jar
> *Following is the code:*
> {{DataStream outputStream = tableEnv.toAppendStream(curTable, Row.class); 
> outputStream.print();}}
> *{{The whole subtask of application will hang a long time and finally the 
> }}{{toAppendStream()}}{{ function will throw an exception like below:}}*
> {{org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: f5e4f7243d06035202e8fa250c364304) at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:276)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:482) 
> at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeInternal(StreamContextEnvironment.java:85)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeInternal(StreamContextEnvironment.java:37)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1893)
>  at com.ngengine.main.KafkaMergeMain.startApp(KafkaMergeMain.java:352) at 
> com.ngengine.main.KafkaMergeMain.main(KafkaMergeMain.java:94) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:561)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:445)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:419) 
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:786) 
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280) 
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215) at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1029)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1105) 
> at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1105) 
> Caused by: java.util.concurrent.TimeoutException at 
> org.apache.flink.runtime.concurrent.FutureUtils$Timeout.run(FutureUtils.java:834)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)}}{{}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

