[GitHub] [flink] zhijiangW commented on a change in pull request #7911: [FLINK-11082][network] Fix the logic of getting backlog in sub partition

2019-03-14 Thread GitBox
zhijiangW commented on a change in pull request #7911: [FLINK-11082][network] 
Fix the logic of getting backlog in sub partition
URL: https://github.com/apache/flink/pull/7911#discussion_r265856906
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultSubpartition.java
 ##
 @@ -118,13 +117,19 @@ protected Throwable getFailureCause() {
 
/**
 * Gets the number of non-event buffers in this subpartition.
-*
-* Beware: This method should only be used in tests 
in non-concurrent access
-* scenarios since it does not make any concurrency guarantees.
 */
-   @VisibleForTesting
-   public int getBuffersInBacklog() {
-   return buffersInBacklog;
+   public abstract int getBuffersInBacklog();
+
+   /**
+* @param lastBufferAvailable whether the last buffer in this 
subpartition is available for consumption
+* @return the number of non-event buffers in this subpartition
+*/
+   protected int getBuffersInBacklog(boolean lastBufferAvailable) {
 
 Review comment:
   Yes, it makes sense to keep the `Unsafe` here consistent with 
`decreaseBuffersInBacklog`.
   
   On the other hand, the `Unsafe` suffix exists to distinguish such usages 
from the `safe` variants. For `getBuffersInBacklog` we probably need only one 
of the two, either `safe` or `Unsafe`. Anyway, I am willing to make it 
`Unsafe` here, so we can avoid explaining it in the method comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] godfreyhe edited a comment on issue #7969: [FLINK-11896] [table-planner-blink] Introduce stream physical nodes

2019-03-14 Thread GitBox
godfreyhe edited a comment on issue #7969: [FLINK-11896] [table-planner-blink] 
Introduce stream physical nodes
URL: https://github.com/apache/flink/pull/7969#issuecomment-473149388
 
 
   Thanks for your suggestion @wuchong. I have updated this PR based on comments




[jira] [Created] (FLINK-11925) KryoSerializerSnapshot doesn't completely capture state / configuration of Kryo instance

2019-03-14 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-11925:
---

 Summary: KryoSerializerSnapshot doesn't completely capture state / 
configuration of Kryo instance
 Key: FLINK-11925
 URL: https://issues.apache.org/jira/browse/FLINK-11925
 Project: Flink
  Issue Type: Bug
  Components: API / Type Serialization System
Affects Versions: 1.7.2, 1.6.4, 1.8.0
Reporter: Tzu-Li (Gordon) Tai


Currently, the {{KryoSerializerSnapshot}} only covers information about the 
registered types / serializers that were configured in the {{ExecutionConfig}}.

This is problematic, because there are a few cases where we have some 
additional registrations:
1) When Avro is present in the classpath [1] [2]
2) When Scala is used, in which case Twitter Chill is used which itself has 
some registrations [3]
3) If a non-registered type is encountered, Kryo will register the type on the 
fly, because we currently configure Kryo to allow dynamic registrations [4].

For case 1), we do reflect these additional registrations in the 
{{KryoSerializerSnapshot}}.
This isn't the case for 2) and 3), which would cause problems when attempting 
to create a reconfigured instance of the {{KryoSerializer}} on restore.

In general, instead of relying on trying to keep track of the registrations 
ourselves, it would be much more straightforward if there were a way to "dump" 
the state / configuration of Kryo when we attempt to create a snapshot of the 
{{KryoSerializer}}.
Whether or not Kryo has APIs that allow this needs further investigation.

[1] 
https://github.com/apache/flink/blob/master/flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/utils/AvroKryoSerializerUtils.java#L51
[2] 
https://github.com/apache/flink/blob/master/flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/utils/AvroKryoSerializerUtils.java#L68
[3] 
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/kryo/KryoSerializer.java#L430
[4] 
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/kryo/KryoSerializer.java#L476
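Case 3) can be illustrated with a toy model. `DynamicRegistry` below is a hypothetical stand-in, not Kryo's actual API, but it shows why a snapshot rebuilt from the configured registrations alone may assign different ids on restore:

```java
import java.util.*;

// Toy model for illustration only -- this is NOT Kryo's API. Like Kryo with
// dynamic registration enabled, it assigns the next free id to any class it
// has not seen before.
class DynamicRegistry {
    private final Map<Class<?>, Integer> ids = new LinkedHashMap<>();

    DynamicRegistry(List<Class<?>> configured) {
        for (Class<?> c : configured) {
            ids.put(c, ids.size());
        }
    }

    // Dynamic registration: an unknown type is registered on the fly and
    // receives the next free id.
    int idFor(Class<?> type) {
        Integer id = ids.get(type);
        if (id == null) {
            id = ids.size();
            ids.put(type, id);
        }
        return id;
    }
}
```

A snapshot that only records the configured list loses the dynamically assigned ids: a restored instance that encounters types in a different order assigns different ids, which is exactly the restore problem described for cases 2) and 3).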





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] JingsongLi commented on issue #7961: [FLINK-11882][table-runtime-blink] Introduce BytesHashMap to batch hash agg

2019-03-14 Thread GitBox
JingsongLi commented on issue #7961: [FLINK-11882][table-runtime-blink] 
Introduce BytesHashMap to batch hash agg
URL: https://github.com/apache/flink/pull/7961#issuecomment-473164513
 
 
   Thanks for your suggestion @KurtYoung. I have updated this PR based on the 
comments.




[GitHub] [flink] KurtYoung commented on a change in pull request #7982: [FLINK-11788][table-planner-blink] Support Code Generation for RexNode

2019-03-14 Thread GitBox
KurtYoung commented on a change in pull request #7982: 
[FLINK-11788][table-planner-blink] Support Code Generation for RexNode
URL: https://github.com/apache/flink/pull/7982#discussion_r265826579
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/calcite/FlinkLocalRef.scala
 ##
 @@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.calcite
+
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.rex.RexLocalRef
+import org.apache.flink.table.`type`.InternalType
+
+case class RexAggBufferVariable(
 
 Review comment:
   Could you add some comments to these newly introduced variables to explain 
what these are used for?




[GitHub] [flink] godfreyhe removed a comment on issue #7969: [FLINK-11896] [table-planner-blink] Introduce stream physical nodes

2019-03-14 Thread GitBox
godfreyhe removed a comment on issue #7969: [FLINK-11896] [table-planner-blink] 
Introduce stream physical nodes
URL: https://github.com/apache/flink/pull/7969#issuecomment-473149397
 
 
   Thanks for your suggestion @wuchong. I have updated this PR based on the comments




[GitHub] [flink] zhijiangW commented on a change in pull request #7911: [FLINK-11082][network] Fix the logic of getting backlog in sub partition

2019-03-14 Thread GitBox
zhijiangW commented on a change in pull request #7911: [FLINK-11082][network] 
Fix the logic of getting backlog in sub partition
URL: https://github.com/apache/flink/pull/7911#discussion_r265845525
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SpilledSubpartitionView.java
 ##
 @@ -147,7 +147,8 @@ public BufferAndBacklog getNextBuffer() throws 
IOException, InterruptedException
return null;
}
 
-   int newBacklog = 
parent.decreaseBuffersInBacklog(current.isBuffer());
+   parent.decreaseBuffersInBacklog(current.isBuffer());
+   int newBacklog = parent.getBuffersInBacklog();
 
 Review comment:
   That is a good question worth considering. I thought about it again and 
believe the synchronization is not needed for getting the backlog, because 
this value will be eventually consistent between sender and receiver.
   
   E.g. if the current backlog is 4 after decreasing, the previous behavior 
would report 4 strictly. The new behavior might report 4, or more than 4 if 
the backlog is increased again before getting. But the result is still correct 
if it reports 5, because that buffer actually exists. The difference is that 
we report this increase in advance, whereas the previous behavior would 
reflect it in the next report. The early report might even bring some extra 
benefit, because the receiver could prepare more credits for it.
   
   There are two options:
   1. Unify all the ways of getting the backlog outside of the synchronized 
block.
   2. Integrate returning the backlog into `decreaseBuffersInBacklog` as you 
mentioned; the form does not seem so bad if we add a boolean parameter to 
`decreaseBuffersInBacklog`. The backlog is strictly returned in this way.
   
   Which option do you prefer?
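The two options can be sketched with a minimal, self-contained counter; this is an illustration under simplified assumptions, not Flink's actual `ResultSubpartition`:

```java
// Illustration only -- a minimal stand-in, not Flink's ResultSubpartition.
class BacklogCounter {
    // volatile so that an unsynchronized read (option 1) sees a recent value
    private volatile int buffersInBacklog;

    synchronized void increase() {
        buffersInBacklog++;
    }

    // Option 2: decrease and return the new value under the same lock, so the
    // reported backlog is exactly the value right after the decrease.
    synchronized int decreaseAndGet(boolean isBuffer) {
        if (isBuffer) {
            buffersInBacklog--;
        }
        return buffersInBacklog;
    }

    // Option 1: read outside the lock. A concurrent increase() may already be
    // included, so the report can be "too new", but never inconsistent.
    int get() {
        return buffersInBacklog;
    }
}
```

With option 1 the getter may already include a concurrent increase and thus report a value higher than the one right after the decrease; as argued above, that report is still correct, just early.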




[GitHub] [flink] chummyhe89 commented on issue #7971: [FLINK-11897][tests] should wait all submitTask methods executed,befo…

2019-03-14 Thread GitBox
chummyhe89 commented on issue #7971: [FLINK-11897][tests] should wait all 
submitTask methods executed,befo…
URL: https://github.com/apache/flink/pull/7971#issuecomment-473149607
 
 
   @TisonKun thanks for your review!




[GitHub] [flink] godfreyhe commented on issue #7969: [FLINK-11896] [table-planner-blink] Introduce stream physical nodes

2019-03-14 Thread GitBox
godfreyhe commented on issue #7969: [FLINK-11896] [table-planner-blink] 
Introduce stream physical nodes
URL: https://github.com/apache/flink/pull/7969#issuecomment-473149397
 
 
   Thanks for your suggestion @wuchong. I have updated this PR based on the comments




[GitHub] [flink] godfreyhe commented on issue #7969: [FLINK-11896] [table-planner-blink] Introduce stream physical nodes

2019-03-14 Thread GitBox
godfreyhe commented on issue #7969: [FLINK-11896] [table-planner-blink] 
Introduce stream physical nodes
URL: https://github.com/apache/flink/pull/7969#issuecomment-473149388
 
 
   Thanks for your suggestion @wuchong. I have updated this PR based on the comments




[GitHub] [flink] chummyhe89 commented on a change in pull request #7971: [FLINK-11897][tests] should wait all submitTask methods executed,befo…

2019-03-14 Thread GitBox
chummyhe89 commented on a change in pull request #7971: [FLINK-11897][tests] 
should wait all submitTask methods executed,befo…
URL: https://github.com/apache/flink/pull/7971#discussion_r265841511
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/InteractionsCountingTaskManagerGateway.java
 ##
 @@ -32,6 +33,16 @@
 
private final AtomicInteger submitTaskCount = new AtomicInteger(0);
 
+   private CountDownLatch submitLatch;
+
+   public InteractionsCountingTaskManagerGateway(){
+   submitLatch = new CountDownLatch(0);
+   }
+
+   public InteractionsCountingTaskManagerGateway(final int parallelism){
 
 Review comment:
   I'm sorry for that, I'll correct it!




[GitHub] [flink] chummyhe89 commented on a change in pull request #7971: [FLINK-11897][tests] should wait all submitTask methods executed,befo…

2019-03-14 Thread GitBox
chummyhe89 commented on a change in pull request #7971: [FLINK-11897][tests] 
should wait all submitTask methods executed,befo…
URL: https://github.com/apache/flink/pull/7971#discussion_r265841440
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/InteractionsCountingTaskManagerGateway.java
 ##
 @@ -60,4 +72,12 @@ int getSubmitTaskCount() {
int getInteractionsCount() {
return cancelTaskCount.get() + submitTaskCount.get();
}
+
+   void waitAllTasksSubmitted(){
+   try{
+   submitLatch.await();
+   }catch (InterruptedException e){
+   Thread.currentThread().interrupt();
+   }
+   }
 
 Review comment:
   I'm sorry for that, I'll correct it!
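For reference, the pattern under review can be sketched as a self-contained class; `CountingGateway` and its members are hypothetical simplifications of `InteractionsCountingTaskManagerGateway`:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical simplification of InteractionsCountingTaskManagerGateway:
// every submitTask() counts the latch down, and the test blocks on the latch
// before asserting on the counters.
class CountingGateway {
    private final AtomicInteger submitTaskCount = new AtomicInteger(0);
    private final CountDownLatch submitLatch;

    CountingGateway(int expectedSubmissions) {
        this.submitLatch = new CountDownLatch(expectedSubmissions);
    }

    void submitTask() {
        submitTaskCount.incrementAndGet();
        submitLatch.countDown();
    }

    // Blocks until all expected submitTask() calls have happened.
    void waitAllTasksSubmitted() {
        try {
            submitLatch.await();
        } catch (InterruptedException e) {
            // restore the interrupt flag instead of swallowing it
            Thread.currentThread().interrupt();
        }
    }

    int getSubmitTaskCount() {
        return submitTaskCount.get();
    }
}
```

The latch guarantees the test does not assert on `getSubmitTaskCount()` before all expected submissions have been executed.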




[jira] [Updated] (FLINK-11860) Remove all the usage of deprecated unit-provided memory options in docs and scripts

2019-03-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11860:
---
Labels: pull-request-available  (was: )

> Remove all the usage of deprecated unit-provided memory options in docs and 
> scripts
> ---
>
> Key: FLINK-11860
> URL: https://issues.apache.org/jira/browse/FLINK-11860
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Scripts, Documentation
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> Currently, options with the unit provided, e.g. {{jobmanager.heap.mb}} and 
> {{taskmanager.heap.mb}}, have already been deprecated. However, these options 
> are still shown in documentation and deployment scripts. We should remove 
> these so as not to confuse users.





[jira] [Comment Edited] (FLINK-10705) Rework Flink Web Dashboard

2019-03-14 Thread Yadong Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793272#comment-16793272
 ] 

Yadong Xie edited comment on FLINK-10705 at 3/15/19 3:13 AM:
-

Hi [~rmetzger], sorry for the late response

Currently, I am working on my branch at 
[https://github.com/vthinkxie/flink/tree/web-rework]

I think it could be finished before 03/20


was (Author: vthinkxie):
Hi [~rmetzger], sorry for the late response

Currently, I am working on my branch at 
[https://github.com/vthinkxie/flink/tree/web-rework]. I think it could be 
finished before 03/20

> Rework Flink Web Dashboard
> --
>
> Key: FLINK-10705
> URL: https://issues.apache.org/jira/browse/FLINK-10705
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.6.2
>Reporter: Fabian Wollert
>Assignee: Yadong Xie
>Priority: Major
> Attachments: 3rdpartylicenses.txt, image-2018-10-29-09-17-24-115.png, 
> snapshot.jpeg
>
>
> The Flink Dashboard is very simple currently and should get updated. This is 
> the umbrella ticket for other tickets regarding this. Please check the 
> sub-tickets for details.





[GitHub] [flink] flinkbot commented on issue #7988: [FLINK-11860] Remove all the usage of deprecated unit-provided memory options in docs and scripts

2019-03-14 Thread GitBox
flinkbot commented on issue #7988: [FLINK-11860] Remove all the usage of 
deprecated unit-provided memory options in docs and scripts
URL: https://github.com/apache/flink/pull/7988#issuecomment-473145149
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Commented] (FLINK-10705) Rework Flink Web Dashboard

2019-03-14 Thread Yadong Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793272#comment-16793272
 ] 

Yadong Xie commented on FLINK-10705:


Hi [~rmetzger], sorry for the late response

Currently, I am working on my branch at 
[https://github.com/vthinkxie/flink/tree/web-rework]. I think it could be 
finished before 03/20

> Rework Flink Web Dashboard
> --
>
> Key: FLINK-10705
> URL: https://issues.apache.org/jira/browse/FLINK-10705
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.6.2
>Reporter: Fabian Wollert
>Assignee: Yadong Xie
>Priority: Major
> Attachments: 3rdpartylicenses.txt, image-2018-10-29-09-17-24-115.png, 
> snapshot.jpeg
>
>
> The Flink Dashboard is very simple currently and should get updated. This is 
> the umbrella ticket for other tickets regarding this. Please check the 
> sub-tickets for details.





[GitHub] [flink] Myasuka opened a new pull request #7988: [FLINK-11860] Remove all the usage of deprecated unit-provided memory options in docs and scripts

2019-03-14 Thread GitBox
Myasuka opened a new pull request #7988: [FLINK-11860] Remove all the usage of 
deprecated unit-provided memory options in docs and scripts
URL: https://github.com/apache/flink/pull/7988
 
 
   ## What is the purpose of the change
   
   Currently, options with the unit provided, e.g. `jobmanager.heap.mb` and 
`taskmanager.heap.mb`, have already been deprecated. However, these options are 
still shown in documentation and deployment scripts. We should remove these so 
as not to confuse users.
   
   
   ## Brief change log
   
   Change all usages of `jobmanager.heap.mb` and `taskmanager.heap.mb` to 
`jobmanager.heap.size` and `taskmanager.heap.size`.
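   
   For illustration, the corresponding flink-conf.yaml change looks like this 
(the values are placeholders, not recommendations):
   
```yaml
# Deprecated keys (unit baked into the key name):
# jobmanager.heap.mb: 1024
# taskmanager.heap.mb: 2048

# Replacement keys (the value carries an explicit unit):
jobmanager.heap.size: 1024m
taskmanager.heap.size: 2048m
```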
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no**
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **no**
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: **no**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
 - If yes, how is the feature documented? **not applicable**
   




[jira] [Commented] (FLINK-11919) Exception in thread "main" org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered "FROM user" at line 1, column 17.

2019-03-14 Thread thinktothings (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793226#comment-16793226
 ] 

thinktothings commented on FLINK-11919:
---

 Some string combinations are already reserved as keywords for future use. If 
you want to use one of the following strings as a field name, make sure to 
surround them with backticks (e.g. {{`value`}}, {{`count`}}).

---

 

package com.opensourceteams.module.bigdata.flink.example.sql.dataset.user

import org.apache.flink.api.scala.{ExecutionEnvironment, _}
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

object Run {

  def main(args: Array[String]): Unit = {

    // get the batch environment
    val env = ExecutionEnvironment.getExecutionEnvironment

    val dataSet = env.fromElements(("小明",15,"男"),("小李",25,"女"))

    // get the Table environment
    val tableEnv = TableEnvironment.getTableEnvironment(env)
    // register the table
    tableEnv.registerDataSet("user", dataSet, 'name, 'age, 'sex)

    // reserved keywords must be escaped with backticks
    tableEnv.sqlQuery(s"select name,age FROM `user` ")
      .first(100).print()
  }

}

> Exception in thread "main" org.apache.flink.table.api.SqlParserException: SQL 
> parse failed. Encountered "FROM user" at line 1, column 17.
> -
>
> Key: FLINK-11919
> URL: https://issues.apache.org/jira/browse/FLINK-11919
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.7.2
> Environment: os: mac 0.14.3 
> java: 1.8.0_191
> scala: 2.11.12
> code: 
> https://github.com/opensourceteams/flink-maven-scala/blob/master/src/main/scala/com/opensourceteams/module/bigdata/flink/example/sql/user/Run.scala
>  
>Reporter: thinktothings
>Priority: Blocker
> Attachments: image-2019-03-14-17-41-43-840.png
>
>
> When registering the table name, {{user}} cannot be used; other names such as 
> {{user1}} work normally.
>  
>  
> ===
> package com.opensourceteams.module.bigdata.flink.example.tableapi.test
> import org.apache.flink.api.scala.ExecutionEnvironment
> import org.apache.flink.table.api.scala._
> import org.apache.flink.api.scala._
> import org.apache.flink.table.api.TableEnvironment
> object Run {
>  def main(args: Array[String]): Unit = {
>  // get the batch environment
>  val env = ExecutionEnvironment.getExecutionEnvironment
>  val dataSet = env.fromElements(("小明",15,"男"),("小李",25,"女"))
>  // get the Table environment
>  val tableEnv = TableEnvironment.getTableEnvironment(env)
>  // register the table
>  tableEnv.registerDataSet("user",dataSet,'name,'age,'sex)
>  tableEnv.sqlQuery(s"select name,age FROM user")
>  .first(100).print()
>  }
> }
>  
> ===
>  
> Exception in thread "main" org.apache.flink.table.api.SqlParserException: SQL 
> parse failed. Encountered "FROM user" at line 1, column 17.
> Was expecting one of:
>   
>  "ORDER" ...
>  "LIMIT" ...
>  "OFFSET" ...
>  "FETCH" ...
>  "FROM"  ...
>  "FROM"  ...
>  "FROM"  ...
>  "FROM"  ...
>  "FROM"  ...
>  "FROM" "LATERAL" ...
>  "FROM" "(" ...
>  "FROM" "UNNEST" ...
>  "FROM" "TABLE" ...
>  "," ...
>  "AS" ...
>   ...
>   ...
>   ...
>   ...
>   ...
>  "." ...
>  "NOT" ...
>  "IN" ...
>  "<" ...
>  "<=" ...
>  ">" ...
>  ">=" ...
>  "=" ...
>  "<>" ...
>  "!=" ...
>  "BETWEEN" ...
>  "LIKE" ...
>  "SIMILAR" ...
>  "+" ...
>  "-" ...
>  "*" ...
>  "/" ...
>  "%" ...
>  "||" ...
>  "AND" ...
>  "OR" ...
>  "IS" ...
>  "MEMBER" ...
>  "SUBMULTISET" ...
>  "CONTAINS" ...
>  "OVERLAPS" ...
>  "EQUALS" ...
>  "PRECEDES" ...
>  "SUCCEEDS" ...
>  "MULTISET" ...
>  "[" ...
>  "UNION" ...
>  "INTERSECT" ...
>  "EXCEPT" ...
>  "MINUS" ...
>  "(" ...
>  
>  at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.parse(FlinkPlannerImpl.scala:94)
>  at 
> org.apache.flink.table.api.TableEnvironment.sqlQuery(TableEnvironment.scala:743)
>  at 
> com.opensourceteams.module.bigdata.flink.example.tableapi.test.Run$.main(Run.scala:28)
>  at 
> com.opensourceteams.module.bigdata.flink.example.tableapi.test.Run.main(Run.scala)
>  
>  
> ===
>  
>  
> !image-2019-03-14-17-41-43-840.png!





[GitHub] [flink] godfreyhe commented on a change in pull request #7969: [FLINK-11896] [table-planner-blink] Introduce stream physical nodes

2019-03-14 Thread GitBox
godfreyhe commented on a change in pull request #7969: [FLINK-11896] 
[table-planner-blink] Introduce stream physical nodes
URL: https://github.com/apache/flink/pull/7969#discussion_r265836012
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/nodes/physical/stream/StreamExecFirstLastRow.scala
 ##
 @@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.plan.nodes.physical.stream
+
+import org.apache.calcite.plan.{RelOptCluster, RelTraitSet}
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.rel.{RelNode, RelWriter, SingleRel}
+
+import java.util
+
+import scala.collection.JavaConversions._
+
+/**
+  * Stream physical RelNode which deduplicates on keys and keeps only the 
first row or the last row.
+  * NOTE: it only supports sorting on proctime now.
+  */
+class StreamExecFirstLastRow(
+cluster: RelOptCluster,
+traitSet: RelTraitSet,
+inputRel: RelNode,
+uniqueKeys: Array[Int],
+isRowtime: Boolean,
+isLastRowMode: Boolean)
+  extends SingleRel(cluster, traitSet, inputRel)
+  with StreamPhysicalRel {
+
+  def getUniqueKeys: Array[Int] = uniqueKeys
+
+  override def producesUpdates: Boolean = isLastRowMode
+
+  override def consumesRetractions: Boolean = true
+
+  override def needsUpdatesAsRetraction(input: RelNode): Boolean = true
+
+  override def deriveRowType(): RelDataType = getInput.getRowType
+
+  override def copy(traitSet: RelTraitSet, inputs: util.List[RelNode]): 
RelNode = {
+new StreamExecFirstLastRow(
+  cluster,
+  traitSet,
+  inputs.get(0),
+  uniqueKeys,
+  isRowtime,
+  isLastRowMode)
+  }
+
+  override def explainTerms(pw: RelWriter): RelWriter = {
+val fieldNames = getRowType.getFieldNames
+val orderString = if (isRowtime) "ROWTIME" else "PROCTIME"
+super.explainTerms(pw)
+  .item("key", uniqueKeys.map(fieldNames.get).mkString(", "))
+  .item("select", fieldNames.mkString(", "))
+  .item("order", orderString)
+  .item("mode", if (isLastRowMode) "LastRow" else "FirstRow")
 
 Review comment:
   The field names are needless if the `RelNode` does not change the output type




[jira] [Created] (FLINK-11924) Exception in thread "main" org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered "EXISTS" at line 1, column 40.

2019-03-14 Thread thinktothings (JIRA)
thinktothings created FLINK-11924:
-

 Summary: Exception in thread "main" 
org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered 
"EXISTS" at line 1, column 40.
 Key: FLINK-11924
 URL: https://issues.apache.org/jira/browse/FLINK-11924
 Project: Flink
  Issue Type: Bug
  Components: API / Table SQL
Affects Versions: 1.7.2
 Environment: os: mac 10.14.3
java: 1.8.0_191
scala: 2.11.12
flink: 1.7.2

Using EXISTS fails, but replacing it with IN works.

package 
com.opensourceteams.module.bigdata.flink.example.sql.dataset.operations.setOperations.exists

import org.apache.flink.api.scala.{ExecutionEnvironment, _}
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

object Run {

  def main(args: Array[String]): Unit = {

    // get the batch environment
    val env = ExecutionEnvironment.getExecutionEnvironment

    val dataSet = 
env.fromElements((1,"小明",15,"男",1500),(2,"小王",45,"男",4000),(3,"小李",25,"女",800),(4,"小慧",35,"女",500))
    val dataSet2 = 
env.fromElements((1,"小明",15,"男",1500),(2,"小王",45,"男",4000),(30,"小李",25,"女",800),(40,"小慧",35,"女",500))

    // get the Table environment
    val tableEnv = TableEnvironment.getTableEnvironment(env)
    // register the tables
    tableEnv.registerDataSet("user", dataSet, 'id, 'name, 'age, 'sex, 'salary)
    tableEnv.registerDataSet("t2", dataSet2, 'id, 'name, 'age, 'sex, 'salary)

    // subquery with EXISTS
    tableEnv.sqlQuery(
      "select t1.* FROM `user`as t1 where id EXISTS " +
        " (select id from t2) "
    )
      .first(100).print()
  }

}
Reporter: thinktothings


Exception in thread "main" org.apache.flink.table.api.SqlParserException: SQL 
parse failed. Encountered "EXISTS" at line 1, column 40.
Was expecting one of:
  
 "ORDER" ...
 "LIMIT" ...
 "OFFSET" ...
 "FETCH" ...
 "GROUP" ...
 "HAVING" ...
 "WINDOW" ...
 "UNION" ...
 "INTERSECT" ...
 "EXCEPT" ...
 "MINUS" ...
 "NOT" ...
 "IN" ...
 "<" ...
 "<=" ...
 ">" ...
 ">=" ...
 "=" ...
 "<>" ...
 "!=" ...
 "BETWEEN" ...
 "LIKE" ...
 "SIMILAR" ...
 "+" ...
 "-" ...
 "*" ...
 "/" ...
 "%" ...
 "||" ...
 "AND" ...
 "OR" ...
 "IS" ...
 "MEMBER" ...
 "SUBMULTISET" ...
 "CONTAINS" ...
 "OVERLAPS" ...
 "EQUALS" ...
 "PRECEDES" ...
 "SUCCEEDS" ...
 "IMMEDIATELY" ...
 "MULTISET" ...
 "[" ...
 "." ...
 "(" ...
 
 at 
org.apache.flink.table.calcite.FlinkPlannerImpl.parse(FlinkPlannerImpl.scala:94)
 at 
org.apache.flink.table.api.TableEnvironment.sqlQuery(TableEnvironment.scala:743)
 at 
com.opensourceteams.module.bigdata.flink.example.sql.dataset.operations.setOperations.exists.Run$.main(Run.scala:31)
 at 
com.opensourceteams.module.bigdata.flink.example.sql.dataset.operations.setOperations.exists.Run.main(Run.scala)
Caused by: org.apache.calcite.sql.parser.SqlParseException: Encountered 
"EXISTS" at line 1, column 40.
Was expecting one of:
  
 "ORDER" ...
 "LIMIT" ...
 "OFFSET" ...
 "FETCH" ...
 "GROUP" ...
 "HAVING" ...
 "WINDOW" ...
 "UNION" ...
 "INTERSECT" ...
 "EXCEPT" ...
 "MINUS" ...
 "NOT" ...
 "IN" ...
 "<" ...
 "<=" ...
 ">" ...
 ">=" ...
 "=" ...
 "<>" ...
 "!=" ...
 "BETWEEN" ...
 "LIKE" ...
 "SIMILAR" ...
 "+" ...
 "-" ...
 "*" ...
 "/" ...
 "%" ...
 "||" ...
 "AND" ...
 "OR" ...
 "IS" ...
 "MEMBER" ...
 "SUBMULTISET" ...
 "CONTAINS" ...
 "OVERLAPS" ...
 "EQUALS" ...
 "PRECEDES" ...
 "SUCCEEDS" ...
 "IMMEDIATELY" ...
 "MULTISET" ...
 "[" ...
 "." ...
 "(" ...
 
 at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.convertException(SqlParserImpl.java:347)
 at 
org.apache.calcite.sql.parser.impl.SqlParserImpl.normalizeException(SqlParserImpl.java:128)
 at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:137)
 at org.apache.calcite.sql.parser.SqlParser.parseStmt(SqlParser.java:162)
 at 
org.apache.flink.table.calcite.FlinkPlannerImpl.parse(FlinkPlannerImpl.scala:90)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] dianfu commented on issue #7848: [FLINK-10755][table] Port external catalogs in Table API extension points to flink-table-common

2019-03-14 Thread GitBox
dianfu commented on issue #7848: [FLINK-10755][table] Port external catalogs in 
Table API extension points to flink-table-common
URL: https://github.com/apache/flink/pull/7848#issuecomment-473129098
 
 
   @sunjincheng121 @twalthr I have rebased the PR since FLINK-11449 has been 
merged. Could you help to take a look at this PR? Thanks in advance.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9650) Support Protocol Buffers formats

2019-03-14 Thread Arup Malakar (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793231#comment-16793231
 ] 

Arup Malakar commented on FLINK-9650:
-

Thanks [~yanghua], and that sounds great! Just wanted to make sure this 
didn't fall through the cracks.

> Support Protocol Buffers formats
> 
>
> Key: FLINK-9650
> URL: https://issues.apache.org/jira/browse/FLINK-9650
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: zhangminglei
>Assignee: zhangminglei
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to generate a {{TypeInformation}} from a standard [Protobuf 
> schema|https://github.com/google/protobuf] (and maybe vice versa). 





[GitHub] [flink] flinkbot edited a comment on issue #7940: [hotfix][docs] fix error in functions example

2019-03-14 Thread GitBox
flinkbot edited a comment on issue #7940: [hotfix][docs] fix error in functions 
example 
URL: https://github.com/apache/flink/pull/7940#issuecomment-470805294
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from.
   - Needs attention by @twalthr [PMC], @zentol [PMC]
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] leesf commented on issue #7940: [hotfix][docs] fix error in functions example

2019-03-14 Thread GitBox
leesf commented on issue #7940: [hotfix][docs] fix error in functions example 
URL: https://github.com/apache/flink/pull/7940#issuecomment-473125200
 
 
   @flinkbot attention @twalthr @zentol 




[GitHub] [flink] leesf removed a comment on issue #7940: [hotfix][docs] fix error in functions example

2019-03-14 Thread GitBox
leesf removed a comment on issue #7940: [hotfix][docs] fix error in functions 
example 
URL: https://github.com/apache/flink/pull/7940#issuecomment-471980702
 
 
   cc @twalthr @zentol 




[jira] [Resolved] (FLINK-11919) Exception in thread "main" org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered "FROM user" at line 1, column 17.

2019-03-14 Thread thinktothings (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

thinktothings resolved FLINK-11919.
---
   Resolution: Fixed
Fix Version/s: 1.7.2
 Release Note: Some string combinations are already reserved as keywords 
for future use. If you want to use one of them as a field name, make sure to 
surround it with backticks (e.g. {{`value`}}, {{`count`}}).

> Exception in thread "main" org.apache.flink.table.api.SqlParserException: SQL 
> parse failed. Encountered "FROM user" at line 1, column 17.
> -
>
> Key: FLINK-11919
> URL: https://issues.apache.org/jira/browse/FLINK-11919
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.7.2
> Environment: os: mac 10.14.3 
> java: 1.8.0_191
> scala: 2.11.12
> code: 
> https://github.com/opensourceteams/flink-maven-scala/blob/master/src/main/scala/com/opensourceteams/module/bigdata/flink/example/sql/user/Run.scala
>  
>Reporter: thinktothings
>Priority: Blocker
> Fix For: 1.7.2
>
> Attachments: image-2019-03-14-17-41-43-840.png
>
>
> When registering a table, the name `user` cannot be used; other names, such 
> as `user1`, work normally.
>  
>  
> ===
> package com.opensourceteams.module.bigdata.flink.example.tableapi.test
> import org.apache.flink.api.scala.ExecutionEnvironment
> import org.apache.flink.table.api.scala._
> import org.apache.flink.api.scala._
> import org.apache.flink.table.api.TableEnvironment
> object Run {
>  def main(args: Array[String]): Unit = {
>  // get the batch execution environment
>  val env = ExecutionEnvironment.getExecutionEnvironment
>  val dataSet = env.fromElements(("小明",15,"男"),("小李",25,"女"))
>  // get the Table environment
>  val tableEnv = TableEnvironment.getTableEnvironment(env)
>  // register the table
>  tableEnv.registerDataSet("user",dataSet,'name,'age,'sex)
>  tableEnv.sqlQuery(s"select name,age FROM user")
>  .first(100).print()
>  }
> }
>  
> ===
>  
> Exception in thread "main" org.apache.flink.table.api.SqlParserException: SQL 
> parse failed. Encountered "FROM user" at line 1, column 17.
> Was expecting one of:
>   
>  "ORDER" ...
>  "LIMIT" ...
>  "OFFSET" ...
>  "FETCH" ...
>  "FROM"  ...
>  "FROM"  ...
>  "FROM"  ...
>  "FROM"  ...
>  "FROM"  ...
>  "FROM" "LATERAL" ...
>  "FROM" "(" ...
>  "FROM" "UNNEST" ...
>  "FROM" "TABLE" ...
>  "," ...
>  "AS" ...
>   ...
>   ...
>   ...
>   ...
>   ...
>  "." ...
>  "NOT" ...
>  "IN" ...
>  "<" ...
>  "<=" ...
>  ">" ...
>  ">=" ...
>  "=" ...
>  "<>" ...
>  "!=" ...
>  "BETWEEN" ...
>  "LIKE" ...
>  "SIMILAR" ...
>  "+" ...
>  "-" ...
>  "*" ...
>  "/" ...
>  "%" ...
>  "||" ...
>  "AND" ...
>  "OR" ...
>  "IS" ...
>  "MEMBER" ...
>  "SUBMULTISET" ...
>  "CONTAINS" ...
>  "OVERLAPS" ...
>  "EQUALS" ...
>  "PRECEDES" ...
>  "SUCCEEDS" ...
>  "MULTISET" ...
>  "[" ...
>  "UNION" ...
>  "INTERSECT" ...
>  "EXCEPT" ...
>  "MINUS" ...
>  "(" ...
>  
>  at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.parse(FlinkPlannerImpl.scala:94)
>  at 
> org.apache.flink.table.api.TableEnvironment.sqlQuery(TableEnvironment.scala:743)
>  at 
> com.opensourceteams.module.bigdata.flink.example.tableapi.test.Run$.main(Run.scala:28)
>  at 
> com.opensourceteams.module.bigdata.flink.example.tableapi.test.Run.main(Run.scala)
>  
>  
> ===
>  
>  
> !image-2019-03-14-17-41-43-840.png!
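For illustration (an assumption drawn from the release note above, not text from the report): the minimal change is to backtick-quote the reserved keyword `user` in the SQL string, while the registration name stays unchanged.

```scala
// Hypothetical fix sketch for the repro above: `user` is reserved,
// so it must be surrounded with backticks inside the query string.
tableEnv.registerDataSet("user", dataSet, 'name, 'age, 'sex)
tableEnv.sqlQuery("select name, age FROM `user`")
  .first(100).print()
```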





[jira] [Commented] (FLINK-9477) Support SQL 2016 JSON functions in Flink SQL

2019-03-14 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793224#comment-16793224
 ] 

vinoyang commented on FLINK-9477:
-

[~suez1224] [~twalthr] I suggest we split all the JSON functions 
supported by Calcite 1.18 into individual subtasks to speed up overall progress. 
What do you think?

> Support SQL 2016 JSON functions in Flink SQL
> 
>
> Key: FLINK-9477
> URL: https://issues.apache.org/jira/browse/FLINK-9477
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Table SQL
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>






[jira] [Assigned] (FLINK-7237) Remove DateTimeUtils from Flink once Calcite is upgraded to 1.14

2019-03-14 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-7237:
---

Assignee: vinoyang

> Remove DateTimeUtils from Flink once Calcite is upgraded to 1.14
> 
>
> Key: FLINK-7237
> URL: https://issues.apache.org/jira/browse/FLINK-7237
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Table SQL
>Reporter: Haohui Mai
>Assignee: vinoyang
>Priority: Major
>






[jira] [Assigned] (FLINK-11895) Allow FileSystem Configs to be altered at Runtime

2019-03-14 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-11895:


Assignee: vinoyang

> Allow FileSystem Configs to be altered at Runtime
> -
>
> Key: FLINK-11895
> URL: https://issues.apache.org/jira/browse/FLINK-11895
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Luka Jurukovski
>Assignee: vinoyang
>Priority: Minor
>
> This stems from a need to be able to pass in S3 auth keys at runtime in order 
> to allow users to specify the keys they want to use. Based on the 
> documentation it seems that currently S3 keys need to be part of the Flink 
> cluster configuration, in a Hadoop file (which the cluster needs to be 
> pointed to) or JVM args.
> This only seems to apply to the streaming API. Also, feel free to correct the 
> following if I am wrong, as there may be pieces I have not run across, or 
> parts of the code I have misunderstood.
> Currently it seems that FileSystems are inferred based on the extension type 
> and a set of cached Filesystems that are generated in the background. These 
> seem to use the config as defined at the time they are stood up. 
> Unfortunately there is no way to tap into this control mechanism or override 
> this behavior, as many places in the code pull from this cache. This is 
> particularly painful in the sink instance, as there are places where this is 
> used that are not accessible outside the package it is implemented in.
> Through a pretty hacky mechanism I have proved out that this is a 
> self-imposed limitation, as I was able to change the code to pass in a 
> Filesystem from the top level and have it read and write to S3 given keys I 
> set at runtime.
> The current methodology is convenient; however, there should be finer-grained 
> controls for instances where the cluster is in a multitenant environment.
> As a final note, it seems that both the FileSystem and FileSystemFactory 
> classes are not Serializable. I can see why this would be the case in the 
> former, but I am not clear as to why a factory class would not be 
> Serializable (like in the case of BucketFactory). If it can be made 
> Serializable, this should make the process much cleaner.





[jira] [Commented] (FLINK-9650) Support Protocol Buffers formats

2019-03-14 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793218#comment-16793218
 ] 

vinoyang commented on FLINK-9650:
-

[~amalakar] Yes, I am. But I need some time to handle other things first; then 
I will deal with this issue.

> Support Protocol Buffers formats
> 
>
> Key: FLINK-9650
> URL: https://issues.apache.org/jira/browse/FLINK-9650
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: zhangminglei
>Assignee: zhangminglei
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to generate a {{TypeInformation}} from a standard [Protobuf 
> schema|https://github.com/google/protobuf] (and maybe vice versa). 





[GitHub] [flink] klion26 commented on a change in pull request #7978: [FLINK-11910] [Yarn] add customizable yarn application type

2019-03-14 Thread GitBox
klion26 commented on a change in pull request #7978: [FLINK-11910] [Yarn] add 
customizable yarn application type
URL: https://github.com/apache/flink/pull/7978#discussion_r265810303
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/AbstractYarnClusterDescriptor.java
 ##
 @@ -468,6 +470,8 @@ private void 
validateClusterSpecification(ClusterSpecification clusterSpecificat
flinkConfiguration.setString(dynProperty.getKey(), 
dynProperty.getValue());
}
 
+   this.flinkVersion = dynProperties.getOrDefault("flink-version", 
"");
 
 Review comment:
   Do you want to show the same version for different jars?




[jira] [Commented] (FLINK-11914) Expose a REST endpoint in JobManager to kill specific TaskManager

2019-03-14 Thread Shuyi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793176#comment-16793176
 ] 

Shuyi Chen commented on FLINK-11914:


[~gjy], thanks a lot for the quick reply. Yes, to kill the TM process on a 
host would require sudo permission. And we don't allow individual 
job owners to have this privilege for security reasons, as they might 
accidentally kill another user's job co-located on the same host.

Also, exposing the API will allow our external monitoring service (called 
watchdog) to monitor the TM health and programmatically disconnect it if it 
experiences issues. I see the JobMasterGateway already has a 
disconnectTaskManager() interface, so it won't be too much effort to add a REST 
endpoint to expose the capability. What do you think?

> Expose a REST endpoint in JobManager to kill specific TaskManager
> -
>
> Key: FLINK-11914
> URL: https://issues.apache.org/jira/browse/FLINK-11914
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> We want to add the capability in the Flink web UI to kill each individual TM 
> by clicking a button; this would first require exposing the capability from 
> the REST API endpoint. The reason is that some TM might be running on a 
> heavily loaded YARN host over time, and we want to kill just that TM and have 
> the Flink JM reallocate a TM to restart the job graph. The other approach 
> would be to restart the entire YARN job, which is heavyweight.





[jira] [Commented] (FLINK-11143) AskTimeoutException is thrown during job submission and completion

2019-03-14 Thread Alex Vinnik (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793092#comment-16793092
 ] 

Alex Vinnik commented on FLINK-11143:
-

[~Zentol] is there a repo with 1.8/1.9 build artifact I can compile against? 
Having trouble building source code both master and release-1.8
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (shade-flink) on 
project flink-s3-fs-base: Error creating shaded jar: duplicate entry: 
META-INF/services/org.apache.flink.fs.s3base.shaded.com.fasterxml.jackson.core.ObjectCodec
 -> [Help 1] {noformat}

> AskTimeoutException is thrown during job submission and completion
> --
>
> Key: FLINK-11143
> URL: https://issues.apache.org/jira/browse/FLINK-11143
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.6.2
>Reporter: Alex Vinnik
>Priority: Major
>
> For more details please see the thread
> [http://mail-archives.apache.org/mod_mbox/flink-user/201812.mbox/%3cc2fb26f9-1410-4333-80f4-34807481b...@gmail.com%3E]
> On submission 
> 2018-12-12 02:28:31 ERROR JobsOverviewHandler:92 - Implementation error: 
> Unhandled exception.
>  akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#225683351|#225683351]] after [1 ms]. 
> Sender[null] sent message of type 
> "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
>  at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
>  at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>  at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>  at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
>  at java.lang.Thread.run(Thread.java:748)
>  
> On completion
>  
> {"errors":["Internal server error."," side:\njava.util.concurrent.CompletionException: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:772)
> at akka.dispatch.OnComplete.internal(Future.scala:258)
> at akka.dispatch.OnComplete.internal(Future.scala:256)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:186)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:183)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:83)
> at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:603)
> at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
> at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
> at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
> at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
> at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
> at java.lang.Thread.run(Thread.java:748)\nCaused by: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> 

[jira] [Comment Edited] (FLINK-11143) AskTimeoutException is thrown during job submission and completion

2019-03-14 Thread Alex Vinnik (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793092#comment-16793092
 ] 

Alex Vinnik edited comment on FLINK-11143 at 3/14/19 9:51 PM:
--

[~Zentol] is there a repo with 1.8/1.9 build artifacts I can compile against? 
Having trouble building source code both master and release-1.8
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (shade-flink) on 
project flink-s3-fs-base: Error creating shaded jar: duplicate entry: 
META-INF/services/org.apache.flink.fs.s3base.shaded.com.fasterxml.jackson.core.ObjectCodec
 -> [Help 1] {noformat}


was (Author: alvinnik):
[~Zentol] is there a repo with 1.8/1.9 build artifact I can compile against? 
Having trouble building source code both master and release-1.8
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (shade-flink) on 
project flink-s3-fs-base: Error creating shaded jar: duplicate entry: 
META-INF/services/org.apache.flink.fs.s3base.shaded.com.fasterxml.jackson.core.ObjectCodec
 -> [Help 1] {noformat}

> AskTimeoutException is thrown during job submission and completion
> --
>
> Key: FLINK-11143
> URL: https://issues.apache.org/jira/browse/FLINK-11143
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.6.2
>Reporter: Alex Vinnik
>Priority: Major
>
> For more details please see the thread
> [http://mail-archives.apache.org/mod_mbox/flink-user/201812.mbox/%3cc2fb26f9-1410-4333-80f4-34807481b...@gmail.com%3E]
> On submission 
> 2018-12-12 02:28:31 ERROR JobsOverviewHandler:92 - Implementation error: 
> Unhandled exception.
>  akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#225683351|#225683351]] after [1 ms]. 
> Sender[null] sent message of type 
> "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
>  at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
>  at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>  at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>  at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
>  at java.lang.Thread.run(Thread.java:748)
>  
> On completion
>  
> {"errors":["Internal server error."," side:\njava.util.concurrent.CompletionException: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:772)
> at akka.dispatch.OnComplete.internal(Future.scala:258)
> at akka.dispatch.OnComplete.internal(Future.scala:256)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:186)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:183)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:83)
> at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:603)
> at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
> at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
> at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
> at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
> at 
> 

[jira] [Commented] (FLINK-11143) AskTimeoutException is thrown during job submission and completion

2019-03-14 Thread Alex Vinnik (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793059#comment-16793059
 ] 

Alex Vinnik commented on FLINK-11143:
-

[~Zentol] still seeing this problem using Flink 1.7.2. Will try 1.8/1.9 and 
report back.

 
{noformat}
Exception in thread "main" 
org.apache.flink.runtime.client.JobExecutionException: Could not retrieve 
JobResult.
 at 
org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:643)
 at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:223)
 at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:91)
 at 
com.sailpoint.ida.data.jobs.peergrouptransform.PeerGroupTransformJob.main(PeerGroupTransformJob.java:107)
Caused by: akka.pattern.AskTimeoutException: Ask timed out on 
[Actor[akka://flink/user/dispatcher0f6e9a98-9ac5-48a9-aa5b-36aa96e74c69#897152189]]
 after [1 ms]. Sender[null] sent message of type 
"org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
 at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
 at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
 at 
scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
 at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
 at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
 at 
akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
 at 
akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
 at 
akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
 at 
akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
 at java.lang.Thread.run(Thread.java:748)
real 2m49.557s
{noformat}
 

> AskTimeoutException is thrown during job submission and completion
> --
>
> Key: FLINK-11143
> URL: https://issues.apache.org/jira/browse/FLINK-11143
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.6.2
>Reporter: Alex Vinnik
>Priority: Major
>
> For more details please see the thread
> [http://mail-archives.apache.org/mod_mbox/flink-user/201812.mbox/%3cc2fb26f9-1410-4333-80f4-34807481b...@gmail.com%3E]
> On submission 
> 2018-12-12 02:28:31 ERROR JobsOverviewHandler:92 - Implementation error: 
> Unhandled exception.
>  akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#225683351|#225683351]] after [1 ms]. 
> Sender[null] sent message of type 
> "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
>  at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
>  at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>  at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>  at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
>  at java.lang.Thread.run(Thread.java:748)
>  
> On completion
>  
> {"errors":["Internal server error."," side:\njava.util.concurrent.CompletionException: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:772)
> at akka.dispatch.OnComplete.internal(Future.scala:258)
> at akka.dispatch.OnComplete.internal(Future.scala:256)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:186)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:183)
> at 

[jira] [Commented] (FLINK-10705) Rework Flink Web Dashboard

2019-03-14 Thread Robert Metzger (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792904#comment-16792904
 ] 

Robert Metzger commented on FLINK-10705:


[~vthinkxie] Do you have a rough timeline for when the PR will be available to 
review? (It is not urgent, but I'd like to know when I will need to spend time 
on the review)

> Rework Flink Web Dashboard
> --
>
> Key: FLINK-10705
> URL: https://issues.apache.org/jira/browse/FLINK-10705
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.6.2
>Reporter: Fabian Wollert
>Assignee: Yadong Xie
>Priority: Major
> Attachments: 3rdpartylicenses.txt, image-2018-10-29-09-17-24-115.png, 
> snapshot.jpeg
>
>
> The Flink Dashboard is very simple currently and should get updated. This is 
> the umbrella ticket for other tickets regarding this. Please check the 
> sub-tickets for details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10994) The bug of timestampadd handles time

2019-03-14 Thread Rong Rong (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong closed FLINK-10994.
-
Resolution: Duplicate

> The bug of timestampadd handles time
> 
>
> Key: FLINK-10994
> URL: https://issues.apache.org/jira/browse/FLINK-10994
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.6.2, 1.7.1
>Reporter: Forward Xu
>Assignee: Forward Xu
>Priority: Major
>  Labels: pull-request-available
>
> The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:
> java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.Long
> at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
>  at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
>  at 
> org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
>  at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)
> I think it should meet the following conditions:
> ||expression||Expect the result||
> |timestampadd(MINUTE, -1, time '00:00:00')|23:59:00|
> |timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
> |timestampadd(MINUTE, 1, time '23:59:59')|00:00:59|
> |timestampadd(SECOND, 1, time '23:59:59')|00:00:00|
> |timestampadd(HOUR, 1, time '23:59:59')|00:59:59|
> This problem seems to be a bug in Calcite. I have filed an issue with 
> Calcite; the link is below.
>  CALCITE-2699
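The expected wrap-around behaviour in the table above can be cross-checked with `java.time.LocalTime`, which wraps past midnight in the same way. This is an illustrative sketch only, not Flink or Calcite code:

```java
import java.time.LocalTime;

public class TimestampAddWrap {
    public static void main(String[] args) {
        // SQL TIME values have no date component, so arithmetic wraps around
        // midnight; java.time.LocalTime models exactly that behaviour.
        System.out.println(LocalTime.of(0, 0, 0).plusMinutes(-1));   // 23:59
        System.out.println(LocalTime.of(23, 59, 59).plusMinutes(1)); // 00:00:59
        System.out.println(LocalTime.of(23, 59, 59).plusSeconds(1)); // 00:00
        System.out.println(LocalTime.of(23, 59, 59).plusHours(1));   // 00:59:59
    }
}
```

Note that `LocalTime.toString()` omits trailing zero seconds, so `23:59:00` prints as `23:59`; the wrapped values match the expectations in the table.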





[jira] [Assigned] (FLINK-10076) Upgrade Calcite dependency to 1.18

2019-03-14 Thread Rong Rong (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong reassigned FLINK-10076:
-

Assignee: Rong Rong  (was: Shuyi Chen)

> Upgrade Calcite dependency to 1.18
> --
>
> Key: FLINK-10076
> URL: https://issues.apache.org/jira/browse/FLINK-10076
> Project: Flink
>  Issue Type: Task
>  Components: SQL / Planner
>Reporter: Shuyi Chen
>Assignee: Rong Rong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Commented] (FLINK-7235) Backport CALCITE-1884 to the Flink repository before Calcite 1.14

2019-03-14 Thread Rong Rong (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792841#comment-16792841
 ] 

Rong Rong commented on FLINK-7235:
--

Yes I can take a look :-)

> Backport CALCITE-1884 to the Flink repository before Calcite 1.14
> -
>
> Key: FLINK-7235
> URL: https://issues.apache.org/jira/browse/FLINK-7235
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Table SQL
>Reporter: Haohui Mai
>Assignee: Rong Rong
>Priority: Major
>
> We need to backport CALCITE-1884 in order to unblock upgrading Calcite to 
> 1.13.





[jira] [Assigned] (FLINK-7235) Backport CALCITE-1884 to the Flink repository before Calcite 1.14

2019-03-14 Thread Rong Rong (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong reassigned FLINK-7235:


Assignee: Rong Rong  (was: Haohui Mai)

> Backport CALCITE-1884 to the Flink repository before Calcite 1.14
> -
>
> Key: FLINK-7235
> URL: https://issues.apache.org/jira/browse/FLINK-7235
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Table SQL
>Reporter: Haohui Mai
>Assignee: Rong Rong
>Priority: Major
>
> We need to backport CALCITE-1884 in order to unblock upgrading Calcite to 
> 1.13.





[GitHub] [flink] hequn8128 commented on issue #7976: [FLINK-11908][table] Port window classes into flink-api-java

2019-03-14 Thread GitBox
hequn8128 commented on issue #7976: [FLINK-11908][table] Port window classes 
into flink-api-java
URL: https://github.com/apache/flink/pull/7976#issuecomment-472939243
 
 
   @sunjincheng121 Thanks a lot for your review. I have addressed all your 
comments.
   
   @twalthr @sunjincheng121 I have also rebased onto the window-deprecation PR 
and deleted the deprecated methods and classes in this one. It would be great 
if you could take a look. 
   
   Best, Hequn 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7713: [FLINK-10995][network] Copy intermediate serialization results only once for broadcast mode

2019-03-14 Thread GitBox
zhijiangW commented on a change in pull request #7713: [FLINK-10995][network] 
Copy intermediate serialization results only once for broadcast mode
URL: https://github.com/apache/flink/pull/7713#discussion_r265632723
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/api/writer/BroadcastRecordWriterTest.java
 ##
 @@ -21,30 +21,30 @@
 import org.apache.flink.core.io.IOReadableWritable;
 
 Review comment:
   I think I would squash the second commit with the third, and submit a 
hotfix PR separately for the fourth commit. What do you think? The first 
commit should be resolved in the previous PR.




[GitHub] [flink] zhijiangW commented on a change in pull request #7713: [FLINK-10995][network] Copy intermediate serialization results only once for broadcast mode

2019-03-14 Thread GitBox
zhijiangW commented on a change in pull request #7713: [FLINK-10995][network] 
Copy intermediate serialization results only once for broadcast mode
URL: https://github.com/apache/flink/pull/7713#discussion_r265627063
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/api/writer/BroadcastRecordWriterTest.java
 ##
 @@ -21,30 +21,30 @@
 import org.apache.flink.core.io.IOReadableWritable;
 
 Review comment:
   I guess you meant the third commit in this PR? Let me explain the 
differences between these commits.
   
   The second commit adds a complete test for `BroadcastRecordWriter`.
   
   The third commit just makes the previous tests in `RecordWriterTest` also 
work for `BroadcastRecordWriter`, so it is refactoring work. Before the 
refactoring, all the tests created a `SelectorRecordWriter` instance when 
running in `BroadcastRecordWriterTest`. After the refactoring, we defined 
`isBroadcastWriter` to create a `BroadcastRecordWriter` instance when running 
these tests in `BroadcastRecordWriterTest`. It could be squashed with the 
second commit, but I thought it would be clearer to review it separately.
   
   The fourth commit simplifies the previous code; it is actually not 
related to this PR, so we could also submit a separate PR for it.
   




[jira] [Updated] (FLINK-11910) Make Yarn Application Type Customizable with Flink Version

2019-03-14 Thread Zhenqiu Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenqiu Huang updated FLINK-11910:
--
Attachment: Screen Shot 2019-03-14 at 8.17.18 AM.png

> Make Yarn Application Type Customizable with Flink Version
> --
>
> Key: FLINK-11910
> URL: https://issues.apache.org/jira/browse/FLINK-11910
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.6.3, 1.6.4, 1.7.2
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: Screen Shot 2019-03-14 at 8.17.18 AM.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Internally, our organization supports multiple versions of Flink in 
> production. It will be more convenient for us to distinguish jobs of 
> different versions by using the Application Type. 
> The simple solution is to let users set "flink-version" via dynamic 
> properties. If the property is set, we add it as a suffix to "Apache 
> Flink" by default.
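A minimal sketch of the proposed behaviour, assuming a dynamic property named "flink-version"; the helper below is hypothetical and not the actual `AbstractYarnClusterDescriptor` code:

```java
import java.util.HashMap;
import java.util.Map;

public class ApplicationTypeSuffix {
    // Hypothetical helper mirroring the proposal: if the "flink-version"
    // dynamic property is set, append it to the default "Apache Flink" type.
    static String applicationType(Map<String, String> dynProperties) {
        String version = dynProperties.getOrDefault("flink-version", "");
        return version.isEmpty() ? "Apache Flink" : "Apache Flink " + version;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        System.out.println(applicationType(props));  // Apache Flink
        props.put("flink-version", "1.6.3");
        System.out.println(applicationType(props));  // Apache Flink 1.6.3
    }
}
```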





[jira] [Commented] (FLINK-11910) Make Yarn Application Type Customizable with Flink Version

2019-03-14 Thread Zhenqiu Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792787#comment-16792787
 ] 

Zhenqiu Huang commented on FLINK-11910:
---

!Screen Shot 2019-03-14 at 8.17.18 AM.png!

Verified end to end in our yarn cluster.

> Make Yarn Application Type Customizable with Flink Version
> --
>
> Key: FLINK-11910
> URL: https://issues.apache.org/jira/browse/FLINK-11910
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.6.3, 1.6.4, 1.7.2
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: Screen Shot 2019-03-14 at 8.17.18 AM.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Internally, our organization supports multiple versions of Flink in 
> production. It will be more convenient for us to distinguish jobs of 
> different versions by using the Application Type. 
> The simple solution is to let users set "flink-version" via dynamic 
> properties. If the property is set, we add it as a suffix to "Apache 
> Flink" by default.





[GitHub] [flink] HuangZhenQiu commented on a change in pull request #7978: [FLINK-11910] [Yarn] add customizable yarn application type

2019-03-14 Thread GitBox
HuangZhenQiu commented on a change in pull request #7978: [FLINK-11910] [Yarn] 
add customizable yarn application type
URL: https://github.com/apache/flink/pull/7978#discussion_r265620892
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/AbstractYarnClusterDescriptor.java
 ##
 @@ -468,6 +470,8 @@ private void 
validateClusterSpecification(ClusterSpecification clusterSpecificat
flinkConfiguration.setString(dynProperty.getKey(), 
dynProperty.getValue());
}
 
+   this.flinkVersion = dynProperties.getOrDefault("flink-version", 
"");
 
 Review comment:
   In most cases, we can get the precise version from the flinkJarPath. But 
for some deployment systems that submit jobs on behalf of customers, 
different jars will be used. Using dynamic properties probably gives more 
flexibility here. What do you think?




[GitHub] [flink] zhijiangW commented on a change in pull request #7713: [FLINK-10995][network] Copy intermediate serialization results only once for broadcast mode

2019-03-14 Thread GitBox
zhijiangW commented on a change in pull request #7713: [FLINK-10995][network] 
Copy intermediate serialization results only once for broadcast mode
URL: https://github.com/apache/flink/pull/7713#discussion_r265620648
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.java
 ##
 @@ -40,26 +40,33 @@
 
private final SettablePositionMarker positionMarker = new 
SettablePositionMarker();
 
-   private boolean bufferConsumerCreated = false;
-
public BufferBuilder(MemorySegment memorySegment, BufferRecycler 
recycler) {
this.memorySegment = checkNotNull(memorySegment);
this.recycler = checkNotNull(recycler);
}
 
/**
-* @return created matching instance of {@link BufferConsumer} to this 
{@link BufferBuilder}. There can exist only
-* one {@link BufferConsumer} per each {@link BufferBuilder} and vice 
versa.
+* @return created matching instance of {@link BufferConsumer} to this 
{@link BufferBuilder}.
 */
public BufferConsumer createBufferConsumer() {
-   checkState(!bufferConsumerCreated, "There can not exists two 
BufferConsumer for one BufferBuilder");
-   bufferConsumerCreated = true;
return new BufferConsumer(
memorySegment,
recycler,
positionMarker);
}
 
+   /**
+* @return created matching instance of {@link BufferConsumer} similar 
with {@link #createBufferConsumer()},
+* except for initializing its reader position based on {@link 
BufferBuilder}'s current writable position .
+*/
+   public BufferConsumer createPositionedBufferConsumer() {
 
 Review comment:
   Yes, I will add a unit test for it.
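For context on what such a unit test would check, here is a toy sketch of the "positioned" semantics: a consumer created this way starts reading at the builder's current writable position, skipping data written earlier. The classes below are simplified stand-ins, not Flink's real `BufferBuilder`/`BufferConsumer` API:

```java
import java.util.Arrays;

public class PositionedConsumerSketch {
    static class Builder {
        final byte[] segment = new byte[16];
        int writePos = 0;

        void append(byte[] data) {
            System.arraycopy(data, 0, segment, writePos, data.length);
            writePos += data.length;
        }

        // Regular consumer: reads from the beginning of the segment.
        Consumer createConsumer() { return new Consumer(this, 0); }

        // Positioned consumer: reader position starts at the current
        // writable position, so earlier writes are invisible to it.
        Consumer createPositionedConsumer() { return new Consumer(this, writePos); }
    }

    static class Consumer {
        final Builder builder;
        int readPos;

        Consumer(Builder builder, int startPos) {
            this.builder = builder;
            this.readPos = startPos;
        }

        // Returns all bytes written but not yet read by this consumer.
        byte[] read() {
            byte[] out = Arrays.copyOfRange(builder.segment, readPos, builder.writePos);
            readPos = builder.writePos;
            return out;
        }
    }

    public static void main(String[] args) {
        Builder builder = new Builder();
        builder.append(new byte[]{1, 2});
        Consumer positioned = builder.createPositionedConsumer();
        builder.append(new byte[]{3, 4});
        // The positioned consumer only sees data appended after its creation.
        System.out.println(Arrays.toString(positioned.read())); // [3, 4]
    }
}
```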




[GitHub] [flink] hequn8128 commented on issue #7985: [FLINK-11918][table] Deprecated Window and Rename it to GroupWindow

2019-03-14 Thread GitBox
hequn8128 commented on issue #7985: [FLINK-11918][table] Deprecated Window and 
Rename it to GroupWindow
URL: https://github.com/apache/flink/pull/7985#issuecomment-472907627
 
 
   @twalthr @sunjincheng121 I have addressed all your comments and updated the 
PR. Would be nice if you can take another look. Thank you.




[GitHub] [flink] SuXingLee commented on issue #7966: [FLINK-11887][metrics] Fixed latency metrics drift apart

2019-03-14 Thread GitBox
SuXingLee commented on issue #7966: [FLINK-11887][metrics] Fixed latency 
metrics drift apart
URL: https://github.com/apache/flink/pull/7966#issuecomment-472906062
 
 
   Thanks for your comment.
   We don't use ```System.nanoTime``` to compute the latency metrics directly, 
because when a shuffle happens between the source (node A) and the operator 
(node B), the latency value is ```endTime - startTime```. 
   ```startTime``` is produced by the source (TaskManager A), but ```endTime``` 
is produced by the operator (TaskManager B), and as we know, 
```System.nanoTime()``` is only guaranteed to be meaningful within a single 
JVM instance.
   So changing ```LatencyStats``` to use ```System.nanoTime()``` would not be 
the right way.
   
   Coming back to issue 
[FLINK-11887](https://issues.apache.org/jira/browse/FLINK-11887): the original 
way we obtain ```startTime``` is to use 
```SystemProcessingTimeService#scheduleAtFixedRate``` to accumulate a fixed 
time interval periodically.
   As time goes on, there is no guarantee that ```startTime``` and the actual 
time do not drift apart, especially when they are produced on different 
machines. In my cluster environment, I found that ```startTime``` was much 
later than the actual time.
   If we changed ```LatencyStats``` to use 
```SystemProcessingTimeService#scheduleAtFixedRate``` to acquire 
```endTime```, we would still be unable to avoid clock drift across nodes.
   In many data centers, different Linux machines use the Network Time 
Protocol to synchronize their clocks, so using ```System.currentTimeMillis``` 
(endTime) - ```System.currentTimeMillis``` (startTime) is a relatively 
accurate way.
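The trade-off described above can be sketched in plain Java. This is a hypothetical, minimal illustration; the names below are not Flink APIs:

```java
public class CrossJvmLatency {
    // The source records a wall-clock timestamp; a downstream operator on a
    // different JVM/machine subtracts it from its own wall clock.
    // System.nanoTime() has no defined origin across JVMs, so for
    // cross-machine measurement NTP-synchronized System.currentTimeMillis()
    // is the workable choice (within NTP's accuracy).
    static long latencyMillis(long markedAtMillis, long nowMillis) {
        return nowMillis - markedAtMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis(); // stamped at the "source"
        Thread.sleep(50);                        // stand-in for network/processing delay
        long latency = latencyMillis(start, System.currentTimeMillis());
        System.out.println(latency >= 40);       // true (at least ~50 ms elapsed)
    }
}
```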




[jira] [Created] (FLINK-11923) Move reporter instantiation into MetricRegistryConfiguration

2019-03-14 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-11923:


 Summary: Move reporter instantiation into 
MetricRegistryConfiguration
 Key: FLINK-11923
 URL: https://issues.apache.org/jira/browse/FLINK-11923
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Metrics
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.9.0


{{MetricReporters}} are currently instantiated in the constructor of the 
{{MetricRegistryImpl}}. To ease testing it would be great if instead already 
instantiated reporters are passed into the registry instead, as this would 
allow testing of the registry without having to deal with any configuration 
setup/parsing.
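The testability gain from such constructor injection can be sketched as follows; the types are simplified stand-ins, not the actual Flink classes:

```java
import java.util.Arrays;
import java.util.List;

public class ReporterInjection {
    // Simplified stand-in for Flink's MetricReporter interface.
    interface MetricReporter { String name(); }

    // The registry receives already-instantiated reporters instead of
    // parsing configuration itself, so tests need no config setup.
    static class MetricRegistry {
        private final List<MetricReporter> reporters;
        MetricRegistry(List<MetricReporter> reporters) { this.reporters = reporters; }
        int reporterCount() { return reporters.size(); }
    }

    public static void main(String[] args) {
        // A test can construct the registry directly with a stub reporter.
        MetricReporter stub = () -> "stub";
        MetricRegistry registry = new MetricRegistry(Arrays.asList(stub));
        System.out.println(registry.reporterCount()); // 1
    }
}
```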





[jira] [Closed] (FLINK-11814) Changes of FLINK-11516 causes compilation failure

2019-03-14 Thread Dian Fu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-11814.
---
Resolution: Cannot Reproduce

> Changes of FLINK-11516 causes compilation failure
> -
>
> Key: FLINK-11814
> URL: https://issues.apache.org/jira/browse/FLINK-11814
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.9.0
>Reporter: Yu Li
>Assignee: Dian Fu
>Priority: Major
>
> As titled, the change breaks compilation with below error:
> {noformat}
> Error:(70, 34) type mismatch;
>  found   : 
> scala.collection.immutable.Map[String,org.apache.flink.table.plan.stats.ColumnStats]
>  required: java.util.Map[String,org.apache.flink.table.plan.stats.ColumnStats]
> Some(new TableStats(cnt, columnStats))
> Error:(52, 33) value getColumnStats is not a member of 
> org.apache.flink.table.plan.stats.TableStats
> case Some(tStats) => tStats.getColumnStats.get(columnName)
> Error:(62, 33) value getRowCount is not a member of 
> org.apache.flink.table.plan.stats.TableStats
> case Some(tStats) => tStats.getRowCount.toDouble
> {noformat}
> And this is found in the travis pre-commit check when running 
> {{Kafka09SecuredRunITCase}}





[jira] [Updated] (FLINK-917) Rename netty IO Thread count parameters

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-917:
-
Component/s: Runtime / Network

> Rename netty IO Thread count parameters
> ---
>
> Key: FLINK-917
> URL: https://issues.apache.org/jira/browse/FLINK-917
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Reporter: GitHub Import
>Assignee: Ufuk Celebi
>Priority: Major
>  Labels: github-import
> Fix For: 0.6-incubating
>
>
> How about we rename the config parameters for 
> `taskmanager.netty.numOutThreads` and `taskmanager.netty.numInThreads`? That 
> way we make it "independent" of the underlying implementation. The same 
> parameter should also configure the number of I/O threads if we should choose 
> to go with zeroMQ for streaming, or so...
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/917
> Created by: [StephanEwen|https://github.com/StephanEwen]
> Labels: 
> Created at: Sun Jun 08 16:12:44 CEST 2014
> State: open





[jira] [Updated] (FLINK-7237) Remove DateTimeUtils from Flink once Calcite is upgraded to 1.14

2019-03-14 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-7237:

Issue Type: Bug  (was: Sub-task)
Parent: (was: FLINK-10076)

> Remove DateTimeUtils from Flink once Calcite is upgraded to 1.14
> 
>
> Key: FLINK-7237
> URL: https://issues.apache.org/jira/browse/FLINK-7237
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Reporter: Haohui Mai
>Priority: Major
>






[jira] [Updated] (FLINK-11894) AbstractRowSerializer.serializeToPages return type should be void

2019-03-14 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-11894:
---
Component/s: (was: API / Table SQL)
 Runtime / Operators

> AbstractRowSerializer.serializeToPages return type should be void
> -
>
> Key: FLINK-11894
> URL: https://issues.apache.org/jira/browse/FLINK-11894
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Operators
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>
> We should hide the skipped offset inside the Serializer; all skip 
> operations should be resolved within the Serializer.
> int serializeToPages => void serializeToPages





[jira] [Updated] (FLINK-11864) Let compressed channel reader/writer reuse the logic of AsynchronousFileIOChannel

2019-03-14 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-11864:
---
Component/s: (was: Runtime / Network)
 Runtime / Operators

> Let compressed channel reader/writer reuse the logic of 
> AsynchronousFileIOChannel
> -
>
> Key: FLINK-11864
> URL: https://issues.apache.org/jira/browse/FLINK-11864
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Operators
>Reporter: Kurt Young
>Priority: Major
>
> This is a follow up issue of 
> [Flink-11863|https://issues.apache.org/jira/browse/FLINK-11863]. The 
> introduced `CompressedBlockChannelReader` and `CompressedBlockChannelWriter` 
> should reuse the logic of `AsynchronousFileIOChannel` by extending from it. 





[jira] [Updated] (FLINK-11878) Implement the runtime handling of BoundedOneInput and BoundedTwoInput

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11878:
---
Component/s: Runtime / Operators

> Implement the runtime handling of BoundedOneInput and BoundedTwoInput
> -
>
> Key: FLINK-11878
> URL: https://issues.apache.org/jira/browse/FLINK-11878
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Operators
>Reporter: Haibo Sun
>Assignee: Haibo Sun
>Priority: Major
>






[jira] [Closed] (FLINK-11905) BlockCompressionTest does not compile with Java 9

2019-03-14 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-11905.
--
   Resolution: Fixed
Fix Version/s: 1.9.0

fixed in c6878aca6c5aeee46581b4d6744b31049db9de95

> BlockCompressionTest does not compile with Java 9
> -
>
> Key: FLINK-11905
> URL: https://issues.apache.org/jira/browse/FLINK-11905
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Operators, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Kurt Young
>Priority: Major
>  Labels: blink, pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> [https://travis-ci.org/apache/flink/builds/505693580?utm_source=slack_medium=notification]
>  
> {code:java}
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] COMPILATION ERROR : 
> 13:58:16.804 [INFO] 
> -
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[23,16]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: package sun.misc
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[24,15]
>  package sun.nio.ch is not visible
>   (package sun.nio.ch is declared in module java.base, which does not export 
> it to the unnamed module)
> 13:58:16.804 [ERROR] 
> /home/travis/build/apache/flink/flink/flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/compression/BlockCompressionTest.java:[187,17]
>  cannot find symbol
>   symbol:   class Cleaner
>   location: class 
> org.apache.flink.table.runtime.compression.BlockCompressionTest{code}





[jira] [Updated] (FLINK-11864) Let compressed channel reader/writer reuse the logic of AsynchronousFileIOChannel

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11864:
---
Component/s: Runtime / Network

> Let compressed channel reader/writer reuse the logic of 
> AsynchronousFileIOChannel
> -
>
> Key: FLINK-11864
> URL: https://issues.apache.org/jira/browse/FLINK-11864
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Kurt Young
>Priority: Major
>
> This is a follow up issue of 
> [Flink-11863|https://issues.apache.org/jira/browse/FLINK-11863]. The 
> introduced `CompressedBlockChannelReader` and `CompressedBlockChannelWriter` 
> should reuse the logic of `AsynchronousFileIOChannel` by extending from it. 





[GitHub] [flink] KurtYoung merged pull request #7981: [FLINK-11905][table-runtime-blink] Fix BlockCompressionTest does not compile with Java 9

2019-03-14 Thread GitBox
KurtYoung merged pull request #7981: [FLINK-11905][table-runtime-blink] Fix 
BlockCompressionTest does not compile with Java 9
URL: https://github.com/apache/flink/pull/7981
 
 
   




[GitHub] [flink] KurtYoung commented on issue #7981: [FLINK-11905][table-runtime-blink] Fix BlockCompressionTest does not compile with Java 9

2019-03-14 Thread GitBox
KurtYoung commented on issue #7981: [FLINK-11905][table-runtime-blink] Fix 
BlockCompressionTest does not compile with Java 9
URL: https://github.com/apache/flink/pull/7981#issuecomment-472857887
 
 
   Thanks @zentol for the review, I will merge this now.




[jira] [Updated] (FLINK-261) JDBC Input and Output format for Stratosphere

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-261:
-
Component/s: Connectors / JDBC

> JDBC Input and Output format for Stratosphere
> -
>
> Key: FLINK-261
> URL: https://issues.apache.org/jira/browse/FLINK-261
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: GitHub Import
>Priority: Major
>  Labels: github-import
> Fix For: pre-apache
>
>
> Hi,
> I would like to contribute to Stratosphere too. On your page 
> https://github.com/stratosphere/stratosphere/wiki/Starter-Jobs i found the 
> task 'JDBC Input and Output format for Stratosphere'  and would like to get 
> more information on that.
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/261
> Created by: [emrehasan|https://github.com/emrehasan]
> Labels: core, enhancement, 
> Created at: Sat Nov 09 17:35:25 CET 2013
> State: closed





[jira] [Closed] (FLINK-11881) Introduce code generated typed sort to blink table

2019-03-14 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-11881.
--
   Resolution: Implemented
Fix Version/s: 1.9.0

fixed in e20a9f8947244315f7732c719ebf8f77e7699a57

> Introduce code generated typed sort to blink table
> --
>
> Key: FLINK-11881
> URL: https://issues.apache.org/jira/browse/FLINK-11881
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Operators, SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Introduce SortCodeGenerator (CodeGen efficient computation and comparison of  
> NormalizedKey, idea based on FLINK-5734 ):
> support sort by primitive type, string, decimal...
> support sort by ArrayType
> support sort by RowType(Nested Struct)
>  
>  





[GitHub] [flink] KurtYoung merged pull request #7958: [FLINK-11881][table-planner-blink] Introduce code generated typed sort to blink table

2019-03-14 Thread GitBox
KurtYoung merged pull request #7958: [FLINK-11881][table-planner-blink] 
Introduce code generated typed sort to blink table
URL: https://github.com/apache/flink/pull/7958
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on issue #7958: [FLINK-11881][table-planner-blink] Introduce code generated typed sort to blink table

2019-03-14 Thread GitBox
KurtYoung commented on issue #7958: [FLINK-11881][table-planner-blink] 
Introduce code generated typed sort to blink table
URL: https://github.com/apache/flink/pull/7958#issuecomment-472856770
 
 
   LGTM, merging this...




[jira] [Commented] (FLINK-9477) Support SQL 2016 JSON functions in Flink SQL

2019-03-14 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792647#comment-16792647
 ] 

Timo Walther commented on FLINK-9477:
-

[~suez1224] the Calcite version is now at 1.18

> Support SQL 2016 JSON functions in Flink SQL
> 
>
> Key: FLINK-9477
> URL: https://issues.apache.org/jira/browse/FLINK-9477
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Table SQL
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>






[jira] [Created] (FLINK-11922) Support MetricReporter factories

2019-03-14 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-11922:


 Summary: Support MetricReporter factories
 Key: FLINK-11922
 URL: https://issues.apache.org/jira/browse/FLINK-11922
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / Metrics
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.9.0


Currently we only support instantiating {{MetricReporters}} via reflection, 
forcing us (and implementees) to deal with numerous downsides such as non-final 
fields (resources can only be acquired in open()), frustrating test setup 
(options must always be encoded in a {{Configuration}}), and requiring no-arg 
constructors.

Factories are a more appropriate way of dealing with this, and as such we 
should add support for it. Existing reporters can be ported to this mechanism 
without affecting users.
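The constructor-injection benefit described above can be sketched as follows. This is a hypothetical illustration, not Flink's actual API: the interface and class names (`MetricReporter`, `MetricReporterFactory`, `PrefixedReporterFactory`) are invented for the example.

```java
// Hypothetical sketch of factory-based reporter instantiation. A factory
// receives the parsed configuration up front, so the reporter can keep
// final fields instead of acquiring everything inside an open() call that
// was invoked via reflection.
import java.util.Properties;

interface MetricReporter {
    String report(String metricName, double value);
}

interface MetricReporterFactory {
    MetricReporter createReporter(Properties config);
}

class PrefixedReporterFactory implements MetricReporterFactory {
    @Override
    public MetricReporter createReporter(Properties config) {
        // Configuration is read and validated once, here, not in the reporter.
        final String prefix = config.getProperty("prefix", "metrics");
        return (name, value) -> prefix + "." + name + "=" + value;
    }
}

public class ReporterDemo {
    public static void main(String[] args) {
        Properties config = new Properties();
        config.setProperty("prefix", "taskmanager");
        MetricReporter reporter = new PrefixedReporterFactory().createReporter(config);
        System.out.println(reporter.report("numRecordsIn", 42.0));
    }
}
```

In a test, the factory can be handed a plain `Properties` object directly, with no reflection or `Configuration` round-trip.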





[jira] [Closed] (FLINK-10076) Upgrade Calcite dependency to 1.18

2019-03-14 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-10076.

   Resolution: Fixed
Fix Version/s: 1.9.0

Fixed in 1.9.0:

flink-table-planner: 93a2c79188b7fa3dbbc9921a58fc83bb19343895
flink-table-planner-blink: 0c062a34ddff41cda38f9492680f40fc8d49d499

> Upgrade Calcite dependency to 1.18
> --
>
> Key: FLINK-10076
> URL: https://issues.apache.org/jira/browse/FLINK-10076
> Project: Flink
>  Issue Type: Task
>  Components: SQL / Planner
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Updated] (FLINK-11877) Implement the runtime handling of TwoInputSelectable

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11877:
---
Component/s: Runtime / Operators

> Implement the runtime handling of TwoInputSelectable
> 
>
> Key: FLINK-11877
> URL: https://issues.apache.org/jira/browse/FLINK-11877
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Operators
>Reporter: Haibo Sun
>Assignee: Haibo Sun
>Priority: Major
>
> - Introduces a new class `Input` to represent the logical input of operators.
>  - Introduces a new class `StreamTwoInputSelectableProcessor` to implement 
> selective reading.
>  - Adds benchmarks for `StreamTwoInputProcessor` and 
> `StreamTwoInputSelectableProcessor` to ensure that 
> StreamTwoInputSelectableProcessor's throughput is the same or the regression 
> is acceptable in the case of constant `ALL`.





[jira] [Updated] (FLINK-11915) DataInputViewStream skip returns wrong value

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11915:
---
Component/s: Formats (JSON, Avro, Parquet, ORC, SequenceFile)

> DataInputViewStream skip returns wrong value
> 
>
> Key: FLINK-11915
> URL: https://issues.apache.org/jira/browse/FLINK-11915
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), 
> Runtime / Operators
>Reporter: Andrew Prudhomme
>Priority: Minor
>
> The 
> flink-core:org.apache.flink.api.java.typeutils.runtime.DataInputViewStream 
> overrides the InputStream skip function. This function should return the 
> actual number of bytes skipped, but a bug makes it return a 
> lower value.
> The fix should be something simple like:
> {code:java}
> -  return n - counter - inputView.skipBytes((int) counter);
> +  return n - (counter - inputView.skipBytes((int) counter));
> {code}
> For context, I ran into this when trying to decode an Avro record where the 
> writer schema had fields not present in the reader schema. The decoder would 
> attempt to skip the unneeded data in the stream, but would throw an 
> EOFException because the return value was wrong.
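The one-character precedence fix quoted above can be illustrated with a standalone sketch. The class, method names, and values below are hypothetical, not Flink's actual `DataInputViewStream`: `counter` stands for the bytes remaining after the internal loop, and the last argument for what `skipBytes` reported.

```java
// Sketch of the operator-precedence bug: the intent is "n minus the bytes
// that could NOT be skipped", but without parentheses the expression
// evaluates as (n - counter) - skipped instead.
public class SkipBugDemo {
    // Buggy variant: left-to-right evaluation gives (n - counter) - skipped.
    static long skippedBuggy(long n, long counter, int skipped) {
        return n - counter - skipped;
    }

    // Fixed variant: n - (bytes remaining after skipBytes).
    static long skippedFixed(long n, long counter, int skipped) {
        return n - (counter - skipped);
    }

    public static void main(String[] args) {
        // Want to skip 10 bytes; 4 remained, and skipBytes skipped all 4.
        System.out.println(skippedBuggy(10, 4, 4)); // 2  (under-reports)
        System.out.println(skippedFixed(10, 4, 4)); // 10 (all bytes skipped)
    }
}
```

An under-reported skip count explains the EOFException: the Avro decoder believes fewer bytes were consumed than actually were.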





[jira] [Updated] (FLINK-7237) Remove DateTimeUtils from Flink once Calcite is upgraded to 1.14

2019-03-14 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-7237:

Issue Type: Sub-task  (was: Bug)
Parent: FLINK-11921

> Remove DateTimeUtils from Flink once Calcite is upgraded to 1.14
> 
>
> Key: FLINK-7237
> URL: https://issues.apache.org/jira/browse/FLINK-7237
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Table SQL
>Reporter: Haohui Mai
>Priority: Major
>






[jira] [Updated] (FLINK-11915) DataInputViewStream skip returns wrong value

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11915:
---
Component/s: Runtime / Operators

> DataInputViewStream skip returns wrong value
> 
>
> Key: FLINK-11915
> URL: https://issues.apache.org/jira/browse/FLINK-11915
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Operators
>Reporter: Andrew Prudhomme
>Priority: Minor
>
> The 
> flink-core:org.apache.flink.api.java.typeutils.runtime.DataInputViewStream 
> overrides the InputStream skip function. This function should return the 
> actual number of bytes skipped, but a bug makes it return a 
> lower value.
> The fix should be something simple like:
> {code:java}
> -  return n - counter - inputView.skipBytes((int) counter);
> +  return n - (counter - inputView.skipBytes((int) counter));
> {code}
> For context, I ran into this when trying to decode an Avro record where the 
> writer schema had fields not present in the reader schema. The decoder would 
> attempt to skip the unneeded data in the stream, but would throw an 
> EOFException because the return value was wrong.





[jira] [Updated] (FLINK-11900) Flink on Kubernetes sensitive about arguments placement

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11900:
---
Component/s: Deployment / Kubernetes

> Flink on Kubernetes sensitive about arguments placement
> ---
>
> Key: FLINK-11900
> URL: https://issues.apache.org/jira/browse/FLINK-11900
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.7.2
>Reporter:  Mario Georgiev
>Priority: Major
>
> Hello guys,
> I've discovered that when deploying a job cluster on Kubernetes, the Job 
> Cluster Manager seems sensitive to the placement of arguments. 
> For instance, if I put the savepoint argument last, it is never read. 
> For example, with the arguments:
> {code:java}
> job-cluster --job-classname  --fromSavepoint  
> --allowNonRestoredState does not pick the savepoint path and does not start 
> from it
> job-cluster --fromSavepoint    -n --job-classname  
> -p works for savepoint retrieval{code}
>  





[jira] [Updated] (FLINK-11895) Allow FileSystem Configs to be altered at Runtime

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11895:
---
Component/s: Connectors / FileSystem

> Allow FileSystem Configs to be altered at Runtime
> -
>
> Key: FLINK-11895
> URL: https://issues.apache.org/jira/browse/FLINK-11895
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Luka Jurukovski
>Priority: Minor
>
> This stems from a need to be able to pass in S3 auth keys at runtime in order 
> to allow users to specify the keys they want to use. Based on the 
> documentation it seems that currently S3 keys need to be part of the Flink 
> cluster configuration, in a hadoop file (which the cluster needs to pointed 
> to) or JVM args.
> This only seems to apply to the streaming API. Also, feel free to correct the 
> following if I am wrong, as there may be pieces I have not run across, or 
> parts of the code I have misunderstood.
> Currently it seems that FileSystems are inferred based on the extension type 
> and a set of cached Filesystems that are generated in the background. These 
> seem to use the config as defined at the time they are stood up. 
> Unfortunately, there is no way to tap into this control mechanism or override 
> this behavior, as many places in the code pull from this cache. This is 
> particularly painful in the sink instance, as there are places where it is 
> used that are not accessible outside the package it is implemented in.
> Through a pretty hacky mechanism I have proved that this is a self-imposed 
> limitation, as I was able to change the code to pass in a Filesystem 
> from the top level and have it read and write to S3 given keys I set at 
> runtime.
> The current methodology is convenient, however there should be finer grain 
> controls for instances where the cluster is in a multitenant environment.
> As a final note, it seems that both the FileSystem and FileSystemFactory 
> classes are not Serializable. I can see why this would be the case in the 
> former, but I am not clear as to why a factory class would not be 
> Serializable (as in the case of BucketFactory). If the factory can be made 
> serializable, this should make this a much cleaner process.





[jira] [Updated] (FLINK-11876) Introduce the new interfaces TwoInputSelectable, BoundedOneInput and BoundedTwoInput

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11876:
---
Component/s: Runtime / Operators

> Introduce the new interfaces TwoInputSelectable, BoundedOneInput and 
> BoundedTwoInput
> 
>
> Key: FLINK-11876
> URL: https://issues.apache.org/jira/browse/FLINK-11876
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Operators
>Reporter: Haibo Sun
>Assignee: Haibo Sun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Commented] (FLINK-7235) Backport CALCITE-1884 to the Flink repository before Calcite 1.14

2019-03-14 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792644#comment-16792644
 ] 

Timo Walther commented on FLINK-7235:
-

[~walterddr] could you take a look at this? Our goal should be to get rid of 
modified Calcite classes in our code base.

> Backport CALCITE-1884 to the Flink repository before Calcite 1.14
> -
>
> Key: FLINK-7235
> URL: https://issues.apache.org/jira/browse/FLINK-7235
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Table SQL
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Major
>
> We need to backport CALCITE-1884 in order to unblock upgrading Calcite to 
> 1.13.





[jira] [Updated] (FLINK-11879) Add JobGraph validators for the uses of TwoInputSelectable, BoundedOneInput and BoundedTwoInput

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11879:
---
Component/s: Runtime / Operators

> Add JobGraph validators for the uses of TwoInputSelectable, BoundedOneInput 
> and BoundedTwoInput
> ---
>
> Key: FLINK-11879
> URL: https://issues.apache.org/jira/browse/FLINK-11879
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Operators
>Reporter: Haibo Sun
>Assignee: Haibo Sun
>Priority: Major
>
> - Rejects jobs containing operators that implement 
> `TwoInputSelectable` when checkpointing is enabled.
>  - Rejects jobs containing operators that implement 
> `BoundedOneInput` or `BoundedTwoInput` when checkpointing is enabled.
>  - Rejects jobs containing operators that implement 
> `TwoInputSelectable` when credit-based flow control is disabled.





[jira] [Updated] (FLINK-11894) AbstractRowSerializer.serializeToPages return type should be void

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11894:
---
Component/s: API / Table SQL

> AbstractRowSerializer.serializeToPages return type should be void
> -
>
> Key: FLINK-11894
> URL: https://issues.apache.org/jira/browse/FLINK-11894
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Table SQL
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>
> We should hide the skipped offset in the Serializer; all skip operations 
> should be resolved within the Serializer.
> int serializeToPages => void serializeToPages





[jira] [Updated] (FLINK-11918) Deprecated Window and Rename it to GroupWindow

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11918:
---
Component/s: API / DataStream

> Deprecated Window and Rename it to GroupWindow
> --
>
> Key: FLINK-11918
> URL: https://issues.apache.org/jira/browse/FLINK-11918
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Affects Versions: 1.8.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{OverWindow}} and {{Window}} are confusing in the API, and we have mentioned 
> many times that we want to rename the latter to {{GroupWindow}}. So, here is 
> just a suggestion: how about deprecating {{Window}} in release-1.8, since we 
> should create a new RC2 for release 1.8? If we do not do that, {{Window}} 
> will keep existing for almost half a year. I created this JIRA and linked it 
> to the release-1.8 vote mail thread to ask the RMs' opinions. If you do not 
> agree, I'll close the JIRA; otherwise, we can open a new PR for deprecating 
> {{Window}}. What do you think?





[jira] [Updated] (FLINK-7235) Backport CALCITE-1884 to the Flink repository before Calcite 1.14

2019-03-14 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-7235:

Issue Type: Sub-task  (was: Bug)
Parent: FLINK-11921

> Backport CALCITE-1884 to the Flink repository before Calcite 1.14
> -
>
> Key: FLINK-7235
> URL: https://issues.apache.org/jira/browse/FLINK-7235
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Table SQL
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Major
>
> We need to backport CALCITE-1884 in order to unblock upgrading Calcite to 
> 1.13.





[jira] [Updated] (FLINK-11859) Improve SpanningRecordSerializer performance by serializing record length to serialization buffer directly

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11859:
---
Component/s: Runtime / Network

> Improve SpanningRecordSerializer performance by serializing record length to 
> serialization buffer directly
> --
>
> Key: FLINK-11859
> URL: https://issues.apache.org/jira/browse/FLINK-11859
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Major
>
> In the current implementation of SpanningRecordSerializer, the length of a 
> record is serialized to an intermediate length buffer and then copied to the 
> target buffer. Actually, the length field can be serialized directly to the 
> data buffer (serializationBuffer), which avoids the copy of the length 
> buffer. Though the total bytes copied remain unchanged, this saves one copy 
> per record, which incurs high overhead for small records. The 
> flink-benchmarks show it can improve performance; the test results are as 
> follows.
> Result with the optimization:
> |Benchmark|Mode|Threads|Samples|Score|Score Error (99.9%)|Unit|Param: 
> channelsFlushTimeout|Param: stateBackend|
> |KeyByBenchmarks.arrayKeyBy|thrpt|1|30|2228.049605|77.631804|ops/ms| | |
> |KeyByBenchmarks.tupleKeyBy|thrpt|1|30|3968.361739|193.501755|ops/ms| | |
> |MemoryStateBackendBenchmark.stateBackends|thrpt|1|30|3030.016702|29.272713|ops/ms|
>  |MEMORY|
> |MemoryStateBackendBenchmark.stateBackends|thrpt|1|30|2754.77678|26.215395|ops/ms|
>  |FS|
> |MemoryStateBackendBenchmark.stateBackends|thrpt|1|30|3001.957606|29.288019|ops/ms|
>  |FS_ASYNC|
> |RocksStateBackendBenchmark.stateBackends|thrpt|1|30|123.698984|3.339233|ops/ms|
>  |ROCKS|
> |RocksStateBackendBenchmark.stateBackends|thrpt|1|30|126.252137|1.137735|ops/ms|
>  |ROCKS_INC|
> |SerializationFrameworkMiniBenchmarks.serializerAvro|thrpt|1|30|323.658098|5.855697|ops/ms|
>  | |
> |SerializationFrameworkMiniBenchmarks.serializerKryo|thrpt|1|30|183.34423|3.710787|ops/ms|
>  | |
> |SerializationFrameworkMiniBenchmarks.serializerPojo|thrpt|1|30|404.380233|5.131744|ops/ms|
>  | |
> |SerializationFrameworkMiniBenchmarks.serializerRow|thrpt|1|30|527.193369|10.176726|ops/ms|
>  | |
> |SerializationFrameworkMiniBenchmarks.serializerTuple|thrpt|1|30|550.073024|11.724412|ops/ms|
>  | |
> |StreamNetworkBroadcastThroughputBenchmarkExecutor.networkBroadcastThroughput|thrpt|1|30|564.690627|13.766809|ops/ms|
>  | |
> |StreamNetworkThroughputBenchmarkExecutor.networkThroughput|thrpt|1|30|49918.11806|2324.234776|ops/ms|100,100ms|
>  |
> |StreamNetworkThroughputBenchmarkExecutor.networkThroughput|thrpt|1|30|10443.63491|315.835962|ops/ms|100,100ms,SSL|
>  |
> |StreamNetworkThroughputBenchmarkExecutor.networkThroughput|thrpt|1|30|21387.47608|2779.832704|ops/ms|1000,1ms|
>  |
> |StreamNetworkThroughputBenchmarkExecutor.networkThroughput|thrpt|1|30|26585.85453|860.243347|ops/ms|1000,100ms|
>  |
> |StreamNetworkThroughputBenchmarkExecutor.networkThroughput|thrpt|1|30|8252.563405|947.129028|ops/ms|1000,100ms,SSL|
>  |
> |SumLongsBenchmark.benchmarkCount|thrpt|1|30|8806.021402|263.995836|ops/ms| | 
> |
> |WindowBenchmarks.globalWindow|thrpt|1|30|4573.620126|112.099391|ops/ms| | |
> |WindowBenchmarks.sessionWindow|thrpt|1|30|585.246412|7.026569|ops/ms| | |
> |WindowBenchmarks.slidingWindow|thrpt|1|30|449.302134|4.123669|ops/ms| | |
> |WindowBenchmarks.tumblingWindow|thrpt|1|30|2979.806858|33.818909|ops/ms| | |
> |StreamNetworkLatencyBenchmarkExecutor.networkLatency1to1|avgt|1|30|12.842865|0.13796|ms/op|
>  | |
> Result without the optimization:
>  
> |Benchmark|Mode|Threads|Samples|Score|Score Error (99.9%)|Unit|Param: 
> channelsFlushTimeout|Param: stateBackend|
> |KeyByBenchmarks.arrayKeyBy|thrpt|1|30|2060.241715|59.898485|ops/ms| | |
> |KeyByBenchmarks.tupleKeyBy|thrpt|1|30|3645.306819|223.821719|ops/ms| | |
> |MemoryStateBackendBenchmark.stateBackends|thrpt|1|30|2992.698822|36.978115|ops/ms|
>  |MEMORY|
> |MemoryStateBackendBenchmark.stateBackends|thrpt|1|30|2756.10949|27.798937|ops/ms|
>  |FS|
> |MemoryStateBackendBenchmark.stateBackends|thrpt|1|30|2965.969876|44.159793|ops/ms|
>  |FS_ASYNC|
> |RocksStateBackendBenchmark.stateBackends|thrpt|1|30|125.506942|1.245978|ops/ms|
>  |ROCKS|
> |RocksStateBackendBenchmark.stateBackends|thrpt|1|30|127.258737|1.190588|ops/ms|
>  |ROCKS_INC|
> |SerializationFrameworkMiniBenchmarks.serializerAvro|thrpt|1|30|316.497954|8.309241|ops/ms|
>  | |
> |SerializationFrameworkMiniBenchmarks.serializerKryo|thrpt|1|30|189.065149|6.302073|ops/ms|
>  | |
> |SerializationFrameworkMiniBenchmarks.serializerPojo|thrpt|1|30|391.51305|7.750728|ops/ms|
>  | |
> 
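The buffer layout described above (length written straight into the data buffer, immediately followed by the record bytes) can be sketched in isolation. This is a minimal illustration, not Flink's actual SpanningRecordSerializer; the class and method names are hypothetical.

```java
// Sketch: reserve 4 bytes for the record length in the serialization buffer
// itself, instead of serializing the length into a separate small buffer
// that must then be copied into the target buffer.
import java.nio.ByteBuffer;

public class LengthPrefixDemo {
    static ByteBuffer serialize(byte[] record) {
        ByteBuffer buf = ByteBuffer.allocate(4 + record.length);
        buf.putInt(record.length); // length goes straight into the data buffer
        buf.put(record);           // record bytes follow immediately
        buf.flip();                // prepare the buffer for reading
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = LengthPrefixDemo.serialize(new byte[]{1, 2, 3});
        System.out.println(buf.getInt()); // the 4-byte length prefix: 3
    }
}
```

The total bytes written are identical either way; what disappears is the extra per-record copy of the tiny length buffer, which dominates overhead when records are small.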

[jira] [Updated] (FLINK-11814) Changes of FLINK-11516 causes compilation failure

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-11814:
---
Component/s: Connectors / Kafka

> Changes of FLINK-11516 causes compilation failure
> -
>
> Key: FLINK-11814
> URL: https://issues.apache.org/jira/browse/FLINK-11814
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.9.0
>Reporter: Yu Li
>Assignee: Dian Fu
>Priority: Major
>
> As titled, the change breaks compilation with below error:
> {noformat}
> Error:(70, 34) type mismatch;
>  found   : 
> scala.collection.immutable.Map[String,org.apache.flink.table.plan.stats.ColumnStats]
>  required: java.util.Map[String,org.apache.flink.table.plan.stats.ColumnStats]
> Some(new TableStats(cnt, columnStats))
> Error:(52, 33) value getColumnStats is not a member of 
> org.apache.flink.table.plan.stats.TableStats
> case Some(tStats) => tStats.getColumnStats.get(columnName)
> Error:(62, 33) value getRowCount is not a member of 
> org.apache.flink.table.plan.stats.TableStats
> case Some(tStats) => tStats.getRowCount.toDouble
> {noformat}
> And this is found in the travis pre-commit check when running 
> {{Kafka09SecuredRunITCase}}





[jira] [Updated] (FLINK-7235) Backport CALCITE-1884 to the Flink repository before Calcite 1.14

2019-03-14 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-7235:

Issue Type: Bug  (was: Sub-task)
Parent: (was: FLINK-10076)

> Backport CALCITE-1884 to the Flink repository before Calcite 1.14
> -
>
> Key: FLINK-7235
> URL: https://issues.apache.org/jira/browse/FLINK-7235
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Major
>
> We need to backport CALCITE-1884 in order to unblock upgrading Calcite to 
> 1.13.





[jira] [Updated] (FLINK-11120) The bug of timestampadd handles time

2019-03-14 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-11120:
-
Issue Type: Bug  (was: Sub-task)
Parent: (was: FLINK-10076)

> The bug of timestampadd  handles time
> -
>
> Key: FLINK-11120
> URL: https://issues.apache.org/jira/browse/FLINK-11120
> Project: Flink
>  Issue Type: Bug
>  Components: SQL / Planner
>Reporter: Forward Xu
>Assignee: Forward Xu
>Priority: Major
>
> The error occurs when {{timestampadd(MINUTE, 1, time '01:00:00')}} is executed:
> java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.Long
> at org.apache.calcite.rex.RexBuilder.clean(RexBuilder.java:1520)
> at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:1318)
> at 
> org.apache.flink.table.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:135)
> at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:620)
> at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:540)
> at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:288)
> I think it should meet the following conditions:
> ||expression||Expect the result||
> |timestampadd(MINUTE, -1, time '00:00:00')|23:59:00|
> |timestampadd(MINUTE, 1, time '00:00:00')|00:01:00|
> |timestampadd(MINUTE, 1, time '23:59:59')|00:00:59|
> |timestampadd(SECOND, 1, time '23:59:59')|00:00:00|
> |timestampadd(HOUR, 1, time '23:59:59')|00:59:59|
> This problem seems to be a bug in calcite. I have submitted isuse to calcite. 
> The following is the link.
> CALCITE-2699
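The wrap-around semantics expected in the table above can be checked against `java.time.LocalTime`, which wraps past midnight by design. This is only a reference sketch of the expected behavior, not the Calcite/Flink code path that actually fails.

```java
// Reference for the expected results: LocalTime arithmetic wraps modulo
// 24 hours, which is what the table above expects from timestampadd on
// TIME values.
import java.time.LocalTime;

public class TimestampAddDemo {
    public static void main(String[] args) {
        // timestampadd(MINUTE, 1, time '23:59:59') should wrap to 00:00:59
        System.out.println(LocalTime.parse("23:59:59").plusMinutes(1));
        // timestampadd(MINUTE, -1, time '00:00:00') should wrap to 23:59:00
        System.out.println(LocalTime.parse("00:00:00").minusMinutes(1));
    }
}
```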





[jira] [Updated] (FLINK-261) JDBC Input and Output format for Stratosphere

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-261:
-
Component/s: (was: Connectors / JDBC)

> JDBC Input and Output format for Stratosphere
> -
>
> Key: FLINK-261
> URL: https://issues.apache.org/jira/browse/FLINK-261
> Project: Flink
>  Issue Type: Improvement
>Reporter: GitHub Import
>Priority: Major
>  Labels: github-import
> Fix For: pre-apache
>
>
> Hi,
> I would like to contribute to Stratosphere too. On your page 
> https://github.com/stratosphere/stratosphere/wiki/Starter-Jobs I found the 
> task 'JDBC Input and Output format for Stratosphere' and would like to get 
> more information on that.
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/261
> Created by: [emrehasan|https://github.com/emrehasan]
> Labels: core, enhancement, 
> Created at: Sat Nov 09 17:35:25 CET 2013
> State: closed





[GitHub] [flink] KurtYoung commented on issue #7952: [FLINK-11872][table-runtime-blink] update lz4 license file.

2019-03-14 Thread GitBox
KurtYoung commented on issue #7952: [FLINK-11872][table-runtime-blink] update 
lz4 license file.
URL: https://github.com/apache/flink/pull/7952#issuecomment-472854272
 
 
   Thanks @zentol for pointing this out. I was wondering how I can tell whether 
a dependent jar is packaged in flink's jar?




[jira] [Created] (FLINK-11921) Upgrade Calcite dependency to 1.19

2019-03-14 Thread Timo Walther (JIRA)
Timo Walther created FLINK-11921:


 Summary: Upgrade Calcite dependency to 1.19
 Key: FLINK-11921
 URL: https://issues.apache.org/jira/browse/FLINK-11921
 Project: Flink
  Issue Type: Improvement
  Components: SQL / Planner
Reporter: Timo Walther


Umbrella issue for all tasks related to the next Calcite upgrade.





[jira] [Updated] (FLINK-10076) Upgrade Calcite dependency to 1.18

2019-03-14 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-10076:
-
Component/s: (was: API / Table SQL)
 SQL / Planner

> Upgrade Calcite dependency to 1.18
> --
>
> Key: FLINK-10076
> URL: https://issues.apache.org/jira/browse/FLINK-10076
> Project: Flink
>  Issue Type: Task
>  Components: SQL / Planner
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[GitHub] [flink] asfgit closed pull request #7607: [FLINK-10076][table] Upgrade Calcite dependency to 1.18

2019-03-14 Thread GitBox
asfgit closed pull request #7607: [FLINK-10076][table] Upgrade Calcite 
dependency to 1.18
URL: https://github.com/apache/flink/pull/7607
 
 
   




[jira] [Commented] (FLINK-9650) Support Protocol Buffers formats

2019-03-14 Thread Arup Malakar (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792629#comment-16792629
 ] 

Arup Malakar commented on FLINK-9650:
-

Hey [~yanghua] do let me know if you are still interested in taking this 
forward.

> Support Protocol Buffers formats
> 
>
> Key: FLINK-9650
> URL: https://issues.apache.org/jira/browse/FLINK-9650
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: zhangminglei
>Assignee: zhangminglei
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to generate a {{TypeInformation}} from a standard [Protobuf 
> schema|https://github.com/google/protobuf] (and maybe vice versa). 





[GitHub] [flink] hequn8128 edited a comment on issue #7985: [FLINK-11918][table] Deprecated Window and Rename it to GroupWindow

2019-03-14 Thread GitBox
hequn8128 edited a comment on issue #7985: [FLINK-11918][table] Deprecated 
Window and Rename it to GroupWindow
URL: https://github.com/apache/flink/pull/7985#issuecomment-472844157
 
 
   @sunjincheng121 @twalthr Thanks for your suggestions. Putting all of the 
deprecated window APIs into 1.8 as soon as possible is a good idea. I will update 
the PR asap. 
   
   Thanks, Hequn




[GitHub] [flink] hequn8128 commented on issue #7985: [FLINK-11918][table] Deprecated Window and Rename it to GroupWindow

2019-03-14 Thread GitBox
hequn8128 commented on issue #7985: [FLINK-11918][table] Deprecated Window and 
Rename it to GroupWindow
URL: https://github.com/apache/flink/pull/7985#issuecomment-472844157
 
 
   @sunjincheng121 @twalthr Thanks for your suggestions. Putting the deprecated 
APIs into 1.8 as soon as possible is a good idea. I will update the PR asap. 
   
   Thanks, Hequn




[GitHub] [flink] flinkbot commented on issue #7987: SimpleStringSchema handle message record which value is null

2019-03-14 Thread GitBox
flinkbot commented on issue #7987: SimpleStringSchema handle message record 
which value is null
URL: https://github.com/apache/flink/pull/7987#issuecomment-472840434
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] lamber-ken opened a new pull request #7987: SimpleStringSchema handle message record which value is null

2019-03-14 Thread GitBox
lamber-ken opened a new pull request #7987: SimpleStringSchema handle message 
record which value is null
URL: https://github.com/apache/flink/pull/7987
 
 
   ## What is the purpose of the change
   
   When the Kafka message queue contains records whose value is null, the 
flink-kafka-connector cannot process them.
   
   For example, the message queue may look like this:
   
   msg
   null
   msg
   msg
   msg
   
   Normally, **SimpleStringSchema** is used to process the queue data:
   ```
   env.addSource(new FlinkKafkaConsumer010("topic", new SimpleStringSchema(), 
properties));
   ```
   but this fails with a NullPointerException:
   
   ```
   java.lang.NullPointerException
	at java.lang.String.<init>(String.java:515)
	at 
org.apache.flink.api.common.serialization.SimpleStringSchema.deserialize(SimpleStringSchema.java:75)
	at 
org.apache.flink.api.common.serialization.SimpleStringSchema.deserialize(SimpleStringSchema.java:36)
   ```
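   A minimal sketch of a null-tolerant deserialization step (a hypothetical helper, not part of this PR or of Flink's actual API) could look like this:

   ```java
   import java.nio.charset.StandardCharsets;

   // Hypothetical null-tolerant variant of SimpleStringSchema's deserialize
   // method: records with a null value (e.g. Kafka tombstones) are passed
   // through as null instead of triggering a NullPointerException in the
   // String constructor.
   class NullTolerantStringDeserializer {
       static String deserialize(byte[] message) {
           if (message == null) {
               return null; // tombstone record: no bytes to decode
           }
           return new String(message, StandardCharsets.UTF_8);
       }
   }
   ```

   The downstream operators would then need to handle (or filter out) the null values.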
   
   




[jira] [Updated] (FLINK-258) Scala renames and cleanups

2019-03-14 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-258:
-
Component/s: API / Scala

> Scala renames and cleanups
> --
>
> Key: FLINK-258
> URL: https://issues.apache.org/jira/browse/FLINK-258
> Project: Flink
>  Issue Type: Bug
>  Components: API / Scala
>Reporter: GitHub Import
>Priority: Major
>  Labels: github-import
> Fix For: pre-apache
>
> Attachments: pull-request-258-8375824509847558190.patch
>
>
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/pull/258
> Created by: [aljoscha|https://github.com/aljoscha]
> Labels: 
> Created at: Fri Nov 08 17:25:42 CET 2013
> State: closed





[GitHub] [flink] pnowojski commented on a change in pull request #7911: [FLINK-11082][network] Fix the logic of getting backlog in sub partition

2019-03-14 Thread GitBox
pnowojski commented on a change in pull request #7911: [FLINK-11082][network] 
Fix the logic of getting backlog in sub partition
URL: https://github.com/apache/flink/pull/7911#discussion_r265551809
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SpilledSubpartitionView.java
 ##
 @@ -147,7 +147,8 @@ public BufferAndBacklog getNextBuffer() throws 
IOException, InterruptedException
return null;
}
 
-   int newBacklog = 
parent.decreaseBuffersInBacklog(current.isBuffer());
+   parent.decreaseBuffersInBacklog(current.isBuffer());
+   int newBacklog = parent.getBuffersInBacklog();
 
 Review comment:
   Something is wrong here. The line above is synchronized 
(`decreaseBuffersInBacklog` vs `decreaseBuffersInBacklogUnsafe`) while this 
`getBuffersInBacklog` is not (note the misleading lack of an `Unsafe` suffix). 
Either:
   
   1. the line above doesn't need to be synchronized;
   2. there is a bug and this one should be synchronized. In that case, however, 
it would add unnecessary synchronisation cost, so I think I would prefer to 
integrate `getBuffersInBacklog` into `decreaseBuffersInBacklog` after all;
   3. this doesn't need to be synchronized because of some hacky assumption. In 
that case either write a big comment explaining this, or (probably the better 
and/or safer approach) integrate `getBuffersInBacklog` into 
`decreaseBuffersInBacklog` after all.
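   To illustrate the naming convention under discussion, here is a small stand-alone sketch (class and field names are illustrative, not Flink's actual code): the public method takes the lock, while the `Unsafe` variant assumes the caller already holds it.

   ```java
   // Illustrative sketch of the synchronized / Unsafe-suffix pattern;
   // not the actual Flink implementation.
   class BacklogCounter {
       private int buffersInBacklog;

       BacklogCounter(int initial) {
           this.buffersInBacklog = initial;
       }

       // Safe entry point: acquires the monitor before mutating state.
       synchronized int decreaseBuffersInBacklog(boolean isBuffer) {
           return decreaseBuffersInBacklogUnsafe(isBuffer);
       }

       // The Unsafe suffix signals that this must only run while the
       // monitor is already held (or in a single-threaded context).
       int decreaseBuffersInBacklogUnsafe(boolean isBuffer) {
           if (isBuffer) {
               buffersInBacklog--; // events are not counted in the backlog
           }
           return buffersInBacklog;
       }
   }
   ```

   Calling the unsynchronized getter outside the lock, as in the diff above, is exactly the kind of mismatch this convention is meant to make visible.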






[GitHub] [flink] pnowojski commented on a change in pull request #7911: [FLINK-11082][network] Fix the logic of getting backlog in sub partition

2019-03-14 Thread GitBox
pnowojski commented on a change in pull request #7911: [FLINK-11082][network] 
Fix the logic of getting backlog in sub partition
URL: https://github.com/apache/flink/pull/7911#discussion_r265550623
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultSubpartition.java
 ##
 @@ -118,13 +117,19 @@ protected Throwable getFailureCause() {
 
/**
 * Gets the number of non-event buffers in this subpartition.
-*
-* Beware: This method should only be used in tests 
in non-concurrent access
-* scenarios since it does not make any concurrency guarantees.
 */
-   @VisibleForTesting
-   public int getBuffersInBacklog() {
-   return buffersInBacklog;
+   public abstract int getBuffersInBacklog();
+
+   /**
+* @param lastBufferAvailable whether the last buffer in this 
subpartition is available for consumption
+* @return the number of non-event buffers in this subpartition
+*/
+   protected int getBuffersInBacklog(boolean lastBufferAvailable) {
 
 Review comment:
   Hmm, I think I would keep the `Unsafe` suffix here, since this method is not 
synchronised, in contrast to `decreaseBuffersInBacklog`. Ditto for the abstract method?
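   A hedged sketch of how the suggested `Unsafe`-suffixed getter might read (names and field values assumed from the diff above, heavily simplified):

   ```java
   // Sketch only: an unsynchronized getter carrying the Unsafe suffix so
   // the lack of concurrency guarantees is visible at the call site.
   // This is not the actual ResultSubpartition implementation.
   class SubpartitionBacklog {
       private int buffersInBacklog = 5;

       // Unsynchronized read; callers are assumed to hold the
       // subpartition lock already.
       int getBuffersInBacklogUnsafe(boolean lastBufferAvailable) {
           if (lastBufferAvailable) {
               return buffersInBacklog;
           }
           // The last buffer is not yet consumable, so it is excluded
           // from the reported backlog.
           return Math.max(buffersInBacklog - 1, 0);
       }
   }
   ```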




[GitHub] [flink] sunjincheng121 edited a comment on issue #7985: [FLINK-11918][table] Deprecated Window and Rename it to GroupWindow

2019-03-14 Thread GitBox
sunjincheng121 edited a comment on issue #7985: [FLINK-11918][table] Deprecated 
Window and Rename it to GroupWindow
URL: https://github.com/apache/flink/pull/7985#issuecomment-472834420
 
 
   Ooh, @twalthr, I got your point! I think we are on the same page about putting 
the deprecated APIs into 1.8 as soon as possible.
   
   My main concern is that many classes with the same name will be very confusing. 
In this PR we deal with the concept of `Window`; if we also handle `Tumble`, 
`Session`, etc. in this PR, merge it into release-1.8, and keep the API fully 
compatible, that is the correct thing to do. I agree with your proposal. 
   
   +1 from my point of view.
   
   Best,
   Jincheng
   




[GitHub] [flink] sunjincheng121 edited a comment on issue #7985: [FLINK-11918][table] Deprecated Window and Rename it to GroupWindow

2019-03-14 Thread GitBox
sunjincheng121 edited a comment on issue #7985: [FLINK-11918][table] Deprecated 
Window and Rename it to GroupWindow
URL: https://github.com/apache/flink/pull/7985#issuecomment-472836720
 
 
   Hi @hequn8128, does that make sense to you? If so, I would appreciate it if 
you could update the PR according to @twalthr's proposal.
   
   Thanks,
   Jincheng




[GitHub] [flink] sunjincheng121 edited a comment on issue #7985: [FLINK-11918][table] Deprecated Window and Rename it to GroupWindow

2019-03-14 Thread GitBox
sunjincheng121 edited a comment on issue #7985: [FLINK-11918][table] Deprecated 
Window and Rename it to GroupWindow
URL: https://github.com/apache/flink/pull/7985#issuecomment-472836720
 
 
   Hi @hequn8128, does that make sense to you? If so, I would appreciate it if 
you could update the PR according to @twalthr's proposal ASAP. :)
   
   Thanks,
   Jincheng



