[jira] [Commented] (FLINK-10573) Support task revocation

2018-10-22 Thread zhijiang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16660123#comment-16660123
 ] 

zhijiang commented on FLINK-10573:
--

I have not focused on this implementation yet. If your Jira relies on the 
{{DataConsumptionException}}, you can assign my Jira to yourself and implement it 
if you like, or you can wait for me to submit the PR if it is not blocking you. I 
think I can do that next month.

> Support task revocation
> ---
>
> Key: FLINK-10573
> URL: https://issues.apache.org/jira/browse/FLINK-10573
> Project: Flink
>  Issue Type: Sub-task
>  Components: JobManager
>Reporter: JIN SUN
>Assignee: JIN SUN
>Priority: Major
> Fix For: 1.7.0
>
>
> In batch mode, a partition-missing failure in a downstream task indicates that 
> the output of the upstream task has been lost. To make the job succeed, we 
> need to rerun the upstream task to reproduce the data, which we call task 
> revocation (revoking the success of the upstream task).
> For revocation, we need to identify the partition-missing issue, and it is 
> better to detect the missing partition accurately:
>  * Ideally, it makes things much easier if we get a specific exception 
> indicating that the data source is missing.
>  * When a task gets an IOException, it does not necessarily mean the source 
> data has issues. It might also be related to the target task, for example a 
> network issue on the target task.
>  * If multiple tasks cannot read the same source, it is highly likely that the 
> source data is missing (see the sketch below).
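
A hedged, illustrative Java sketch of the classification logic above. This is 
not Flink's implementation; the DataConsumptionException is the specific 
exception discussed in this ticket, and its shape here is an assumption.

{code:java}
import java.io.IOException;

public final class PartitionFailureClassifier {

    /** Hypothetical marker exception signaling that upstream partition data is gone. */
    public static class DataConsumptionException extends IOException {
        public DataConsumptionException(String message) {
            super(message);
        }
    }

    /**
     * A plain IOException alone is ambiguous (it may be a network problem on
     * the consumer side), so only the specific exception triggers revocation
     * directly; otherwise several consumers must fail on the same partition.
     */
    public static boolean shouldRevokeUpstream(Throwable failure, int failedConsumersOfSamePartition) {
        if (failure instanceof DataConsumptionException) {
            return true; // specific signal: the source data is missing
        }
        // Heuristic from the bullet list above: multiple consumers failing on
        // one source strongly suggests the partition itself is lost.
        return failure instanceof IOException && failedConsumersOfSamePartition > 1;
    }
}
{code}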



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10639) Fix java syntax error in document

2018-10-22 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16660071#comment-16660071
 ] 

Hequn Cheng commented on FLINK-10639:
-

[~sunjincheng121] Thanks for reporting this error in the Flink documentation. I 
have addressed the errors in the PR. 

> Fix java syntax error in document
> -
>
> Key: FLINK-10639
> URL: https://issues.apache.org/jira/browse/FLINK-10639
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.6.1, 1.7.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
> Attachments: image-2018-10-22-16-54-10-305.png
>
>
> Since 
> [StreamTableSourceFactory|https://github.com/apache/flink/blob/master/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/factories/StreamTableSourceFactory.scala]
>  is a trait, the Java example in the document should use the "implements" 
> keyword. 
> !image-2018-10-22-16-54-10-305.png!
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10639) Fix java syntax error in document

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16660068#comment-16660068
 ] 

ASF GitHub Bot commented on FLINK-10639:


hequn8128 opened a new pull request #6902: [FLINK-10639][docs] Fix java syntax 
error in document
URL: https://github.com/apache/flink/pull/6902
 
 
   
   ## What is the purpose of the change
   
   Fix Java syntax errors in the documentation. For example, 
`StreamTableSourceFactory` is a trait, so the Java example in the document 
should use the `implements` keyword (see the sketch below). 
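
   For illustration, a minimal sketch of the corrected Java form; the factory 
and method signatures follow the Flink 1.6 Table API, and the connector 
identifier is an assumption:

```java
import org.apache.flink.table.factories.StreamTableSourceFactory;
import org.apache.flink.table.sources.StreamTableSource;
import org.apache.flink.types.Row;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A Scala trait compiles to a Java interface, so a Java class must use
// "implements", not "extends".
public class MyTableSourceFactory implements StreamTableSourceFactory<Row> {

	@Override
	public Map<String, String> requiredContext() {
		Map<String, String> context = new HashMap<>();
		context.put("connector.type", "my-connector"); // hypothetical connector id
		return context;
	}

	@Override
	public List<String> supportedProperties() {
		return new ArrayList<>(); // no extra properties in this sketch
	}

	@Override
	public StreamTableSource<Row> createStreamTableSource(Map<String, String> properties) {
		throw new UnsupportedOperationException("illustrative sketch only");
	}
}
```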
   
   
   ## Brief change log
   
 - Use the `implements` keyword instead of `extends` for Java examples in the Flink docs.
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Fix java syntax error in document
> -
>
> Key: FLINK-10639
> URL: https://issues.apache.org/jira/browse/FLINK-10639
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.6.1, 1.7.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
> Attachments: image-2018-10-22-16-54-10-305.png
>
>
> Since 
> [StreamTableSourceFactory|https://github.com/apache/flink/blob/master/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/factories/StreamTableSourceFactory.scala]
>  is a trait, the Java example in the document should use the "implements" 
> keyword. 
> !image-2018-10-22-16-54-10-305.png!
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10639) Fix java syntax error in document

2018-10-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10639:
---
Labels: pull-request-available  (was: )

> Fix java syntax error in document
> -
>
> Key: FLINK-10639
> URL: https://issues.apache.org/jira/browse/FLINK-10639
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.6.1, 1.7.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1, 1.7.0
>
> Attachments: image-2018-10-22-16-54-10-305.png
>
>
> Since 
> [StreamTableSourceFactory|https://github.com/apache/flink/blob/master/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/factories/StreamTableSourceFactory.scala]
>  is a trait, the Java example in the document should use the "implements" 
> keyword. 
> !image-2018-10-22-16-54-10-305.png!
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] hequn8128 opened a new pull request #6902: [FLINK-10639][docs] Fix java syntax error in document

2018-10-22 Thread GitBox
hequn8128 opened a new pull request #6902: [FLINK-10639][docs] Fix java syntax 
error in document
URL: https://github.com/apache/flink/pull/6902
 
 
   
   ## What is the purpose of the change
   
   Fix Java syntax errors in the documentation. For example, 
`StreamTableSourceFactory` is a trait, so the Java example in the document 
should use the `implements` keyword. 
   
   
   ## Brief change log
   
 - Use the `implements` keyword instead of `extends` for Java examples in the Flink docs.
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-10572) Enable Per-job level failover strategy.

2018-10-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10572:
---
Labels: pull-request-available  (was: )

> Enable Per-job level failover strategy. 
> 
>
> Key: FLINK-10572
> URL: https://issues.apache.org/jira/browse/FLINK-10572
> Project: Flink
>  Issue Type: Sub-task
>  Components: JobManager
>Reporter: JIN SUN
>Assignee: JIN SUN
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Today we can specify the ExecutionMode in the ExecutionConfig; it is a per-job 
> setting. However, the FailoverStrategy is a cluster-level configuration, while 
> it should be per-job:
>  * The FailoverStrategy has dependencies on the ExecutionMode in the 
> ExecutionConfig; for example, the Pipelined ExecutionMode is not compatible 
> with the RestartIndividualStrategy, so setting it at the cluster level does 
> not make sense.
>  * The FailoverStrategy also has dependencies on the RestartStrategy. For 
> example, in the new batch failover strategy, instead of restarting over and 
> over, we want to fail the job if certain conditions are met; as a result, a 
> NoRestart or some new restart strategy should be configured. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10572) Enable Per-job level failover strategy.

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16660060#comment-16660060
 ] 

ASF GitHub Bot commented on FLINK-10572:


isunjin opened a new pull request #6901: [FLINK-10572] [JobManager] Enable 
Per-Job level failover config
URL: https://github.com/apache/flink/pull/6901
 
 
   ## What is the purpose of the change
   
   *Today the failover strategy is a cluster-level config and cannot be set for 
an individual job. This PR exposes the config through the ExecutionConfig so 
that it can be set at the per-job level.*
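
   A hedged, self-contained sketch of the design; all names below are 
assumptions drawn from the change log below, not the merged API:

```java
/**
 * Models the per-job failover selection this PR describes: the strategy is
 * carried in the job's ExecutionConfig instead of a cluster-wide setting.
 */
public class PerJobFailoverSketch {

	/** Hypothetical enum added by the PR. */
	public enum FailoverStrategyType { FULL_RESTART, RESTART_INDIVIDUAL }

	/** Stand-in for the field and setter the PR adds to ExecutionConfig. */
	public static class ExecutionConfigSketch {
		private FailoverStrategyType failoverStrategyType = FailoverStrategyType.FULL_RESTART;

		public void setFailoverStrategyType(FailoverStrategyType type) {
			this.failoverStrategyType = type;
		}

		public FailoverStrategyType getFailoverStrategyType() {
			return failoverStrategyType;
		}
	}

	public static void main(String[] args) {
		ExecutionConfigSketch config = new ExecutionConfigSketch();
		// Per-job choice, overriding the cluster-wide
		// jobmanager.execution.failover-strategy setting.
		config.setFailoverStrategyType(FailoverStrategyType.RESTART_INDIVIDUAL);
		System.out.println(config.getFailoverStrategyType());
	}
}
```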
   
   
   ## Brief change log
   
 - *Add enum FailoverStrategyType*
 - *Add a FailoverStrategyType setter in ExecutionConfig*
 - *Extract FailoverStrategyType and pass it to ExecutionGraph*
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
 - *A test in the JobMaster to validate that the config is passed correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:  yes
   
   ## Documentation
 - If yes, how is the feature documented? (not documented; will be handled 
in [FLINK-10574](https://issues.apache.org/jira/browse/FLINK-10574))
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Enable Per-job level failover strategy. 
> 
>
> Key: FLINK-10572
> URL: https://issues.apache.org/jira/browse/FLINK-10572
> Project: Flink
>  Issue Type: Sub-task
>  Components: JobManager
>Reporter: JIN SUN
>Assignee: JIN SUN
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Today we can specify the ExecutionMode in the ExecutionConfig; it is a per-job 
> setting. However, the FailoverStrategy is a cluster-level configuration, while 
> it should be per-job:
>  * The FailoverStrategy has dependencies on the ExecutionMode in the 
> ExecutionConfig; for example, the Pipelined ExecutionMode is not compatible 
> with the RestartIndividualStrategy, so setting it at the cluster level does 
> not make sense.
>  * The FailoverStrategy also has dependencies on the RestartStrategy. For 
> example, in the new batch failover strategy, instead of restarting over and 
> over, we want to fail the job if certain conditions are met; as a result, a 
> NoRestart or some new restart strategy should be configured. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] isunjin opened a new pull request #6901: [FLINK-10572] [JobManager] Enable Per-Job level failover config

2018-10-22 Thread GitBox
isunjin opened a new pull request #6901: [FLINK-10572] [JobManager] Enable 
Per-Job level failover config
URL: https://github.com/apache/flink/pull/6901
 
 
   ## What is the purpose of the change
   
   *Today the failover strategy is a cluster-level config and cannot be set for 
an individual job. This PR exposes the config through the ExecutionConfig so 
that it can be set at the per-job level.*
   
   
   ## Brief change log
   
 - *Add enum FailoverStrategyType*
 - *Add a FailoverStrategyType setter in ExecutionConfig*
 - *Extract FailoverStrategyType and pass it to ExecutionGraph*
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
 - *A test in the JobMaster to validate that the config is passed correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:  yes
   
   ## Documentation
 - If yes, how is the feature documented? (not documented; will be handled 
in [FLINK-10574](https://issues.apache.org/jira/browse/FLINK-10574))
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhangxinyu1 commented on a change in pull request #6770: [FLINK-10002] [Webfrontend] WebUI shows jm/tm logs more friendly.

2018-10-22 Thread GitBox
zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r227217598
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/RangePathParameter.java
 ##
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.messages;
+
+import org.apache.flink.runtime.rest.handler.legacy.files.FileOffsetRange;
+
+/**
+ * Path parameter identifying the offset range of a log file to read.
+ */
+public class RangePathParameter extends MessagePathParameter<FileOffsetRange> {
 
 Review comment:
   `/taskmanagers/:%s/log?filename=xxx&start=123&size=456` may be better since 
it is compatible with the old API `/taskmanagers/:%s/log`, in which `filename`, 
`start` and `size` are all optional. 
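
   A hedged sketch (not Flink code) of how a handler could default the optional 
parameters so that the bare old URL keeps working; the fallback filename is an 
assumption:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

final class LogQueryDefaults {

	/** Parse an optional numeric query parameter, falling back to a default. */
	static long longParam(Map<String, String> query, String name, long defaultValue) {
		return Optional.ofNullable(query.get(name)).map(Long::parseLong).orElse(defaultValue);
	}

	/** No filename means "the current log file", as in the old API (name assumed). */
	static String fileParam(Map<String, String> query) {
		return query.getOrDefault("filename", "taskmanager.log");
	}

	public static void main(String[] args) {
		Map<String, String> query = new HashMap<>();
		query.put("start", "123");
		query.put("size", "456");
		System.out.printf("%s [start=%d, size=%d]%n",
			fileParam(query),
			longParam(query, "start", 0L),
			longParam(query, "size", Long.MAX_VALUE));
	}
}
```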


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10002) WebUI shows logs unfriendly, especially when the amount of logs is large

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16660054#comment-16660054
 ] 

ASF GitHub Bot commented on FLINK-10002:


zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r227217598
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/RangePathParameter.java
 ##
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.messages;
+
+import org.apache.flink.runtime.rest.handler.legacy.files.FileOffsetRange;
+
+/**
+ * Path parameter identifying the offset range of a log file to read.
+ */
+public class RangePathParameter extends MessagePathParameter<FileOffsetRange> {
 
 Review comment:
   `/taskmanagers/:%s/log?filename=xxx&start=123&size=456` may be better since 
it is compatible with the old API `/taskmanagers/:%s/log`, in which `filename`, 
`start` and `size` are all optional. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> WebUI shows logs unfriendly, especially when the amount of logs is large
> 
>
> Key: FLINK-10002
> URL: https://issues.apache.org/jira/browse/FLINK-10002
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: zhangxinyu
>Assignee: zhangxinyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
> Attachments: image-2018-09-10-11-38-07-973.png
>
>
> When a streaming job runs for a long time, the amount of logs may be very 
> large. The current WebUI shows the entire content of the logs: it takes a long 
> time to download the logs from the task managers, and the browser cannot 
> display them.
> Therefore, I suggest that Flink use a DailyRollingAppender to split the logs 
> by default, and that the task manager provide an API that can fetch logs based 
> on a time-interval parameter. In this way the WebUI can display logs by time 
> interval.
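
A hedged sketch of the kind of log4j 1.x configuration the suggestion implies 
(the DailyRollingAppender named above corresponds to log4j's 
DailyRollingFileAppender; the appender name, file property, and pattern here 
are assumptions, not Flink's shipped defaults):

{code}
# Roll the log file over at midnight, producing taskmanager.log.yyyy-MM-dd files.
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File=${log.file}
log4j.appender.file.DatePattern='.'yyyy-MM-dd
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
{code}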



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] sunjincheng121 commented on a change in pull request #6873: [hotfix] Add Review Progress section to PR description template.

2018-10-22 Thread GitBox
sunjincheng121 commented on a change in pull request #6873: [hotfix] Add Review 
Progress section to PR description template.
URL: https://github.com/apache/flink/pull/6873#discussion_r227213293
 
 

 ##
 File path: .github/PULL_REQUEST_TEMPLATE.md
 ##
 @@ -70,3 +70,17 @@ This change added tests and can be verified as follows:
 
   - Does this pull request introduce a new feature? (yes / no)
   - If yes, how is the feature documented? (not applicable / docs / JavaDocs / 
not documented)
+
+
+
+# Review Progress
+
+**NOTE: THE REVIEW PROGRESS MUST ONLY BE UPDATED BY AN APACHE FLINK 
COMMITTER!**
+
+* [ ] 1. The contribution is well-described.
+* [ ] 2. There is consensus that the contribution should go into to Flink.
+* [ ] 3. [Does not need specific attention | Needs specific attention for X | 
Has attention for X by Y]
 
 Review comment:
   Makes sense to me. +1 to merge.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10002) WebUI shows logs unfriendly, especially when the amount of logs is large

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16660028#comment-16660028
 ] 

ASF GitHub Bot commented on FLINK-10002:


zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r227212699
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/RangePathParameter.java
 ##
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.messages;
+
+import org.apache.flink.runtime.rest.handler.legacy.files.FileOffsetRange;
+
+/**
+ * Path parameter identifying the offset range of a log file to read.
+ */
+public class RangePathParameter extends MessagePathParameter<FileOffsetRange> {
 
 Review comment:
   Good idea!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> WebUI shows logs unfriendly, especially when the amount of logs is large
> 
>
> Key: FLINK-10002
> URL: https://issues.apache.org/jira/browse/FLINK-10002
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: zhangxinyu
>Assignee: zhangxinyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
> Attachments: image-2018-09-10-11-38-07-973.png
>
>
> When a streaming job runs for a long time, the amount of logs may be very 
> large. The current WebUI shows the entire content of the logs: it takes a long 
> time to download the logs from the task managers, and the browser cannot 
> display them.
> Therefore, I suggest that Flink use a DailyRollingAppender to split the logs 
> by default, and that the task manager provide an API that can fetch logs based 
> on a time-interval parameter. In this way the WebUI can display logs by time 
> interval.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zhangxinyu1 commented on a change in pull request #6770: [FLINK-10002] [Webfrontend] WebUI shows jm/tm logs more friendly.

2018-10-22 Thread GitBox
zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r227212699
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/RangePathParameter.java
 ##
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.messages;
+
+import org.apache.flink.runtime.rest.handler.legacy.files.FileOffsetRange;
+
+/**
+ * Path parameter identifying the offset range of a log file to read.
+ */
+public class RangePathParameter extends MessagePathParameter<FileOffsetRange> {
 
 Review comment:
   Good idea!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10002) WebUI shows logs unfriendly, especially when the amount of logs is large

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16660009#comment-16660009
 ] 

ASF GitHub Bot commented on FLINK-10002:


zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r227210681
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/taskmanager/TaskManagerLogListHandler.java
 ##
 @@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.taskmanager;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
+import org.apache.flink.runtime.resourcemanager.ResourceManagerGateway;
+import org.apache.flink.runtime.rest.AbstractHandler;
+import org.apache.flink.runtime.rest.handler.HandlerRequest;
+import org.apache.flink.runtime.rest.handler.RestHandlerException;
+import org.apache.flink.runtime.rest.handler.util.HandlerUtils;
+import org.apache.flink.runtime.rest.handler.util.KeepAliveWrite;
+import org.apache.flink.runtime.rest.messages.ErrorResponseBody;
+import org.apache.flink.runtime.rest.messages.UntypedResponseMessageHeaders;
+import 
org.apache.flink.runtime.rest.messages.taskmanager.TaskManagerIdPathParameter;
+import org.apache.flink.runtime.util.JsonUtils;
+import org.apache.flink.runtime.webmonitor.RestfulGateway;
+import org.apache.flink.runtime.webmonitor.retriever.GatewayRetriever;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+import 
org.apache.flink.shaded.netty4.io.netty.handler.codec.http.DefaultFullHttpResponse;
+import org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpHeaders;
+import org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpRequest;
+import 
org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpResponseStatus;
+import org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpVersion;
+
+import javax.annotation.Nonnull;
+
+import java.nio.charset.Charset;
+import java.util.Map;
+import java.util.Optional;
+import java.util.concurrent.CompletableFuture;
+
+/**
+ * TaskManagerLogListHandler serves the request which gets the historical log 
file list of a given taskmanager.
+ */
+public class TaskManagerLogListHandler extends AbstractHandler {
 
 Review comment:
   Yes, you're right. I'll modify it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> WebUI shows logs unfriendly, especially when the amount of logs is large
> 
>
> Key: FLINK-10002
> URL: https://issues.apache.org/jira/browse/FLINK-10002
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: zhangxinyu
>Assignee: zhangxinyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
> Attachments: image-2018-09-10-11-38-07-973.png
>
>
> When a streaming job runs for a long time, the amount of logs may be very 
> large. The current WebUI shows the entire content of the logs: it takes a long 
> time to download the logs from the task managers, and the browser cannot 
> display them.
> Therefore, I suggest that Flink use a DailyRollingAppender to split the logs 
> by default, and that the task manager provide an API that can fetch logs based 
> on a time-interval parameter. In this way the WebUI can display logs by time 
> interval.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zhangxinyu1 commented on a change in pull request #6770: [FLINK-10002] [Webfrontend] WebUI shows jm/tm logs more friendly.

2018-10-22 Thread GitBox
zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r227210681
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/taskmanager/TaskManagerLogListHandler.java
 ##
 @@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.taskmanager;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
+import org.apache.flink.runtime.resourcemanager.ResourceManagerGateway;
+import org.apache.flink.runtime.rest.AbstractHandler;
+import org.apache.flink.runtime.rest.handler.HandlerRequest;
+import org.apache.flink.runtime.rest.handler.RestHandlerException;
+import org.apache.flink.runtime.rest.handler.util.HandlerUtils;
+import org.apache.flink.runtime.rest.handler.util.KeepAliveWrite;
+import org.apache.flink.runtime.rest.messages.ErrorResponseBody;
+import org.apache.flink.runtime.rest.messages.UntypedResponseMessageHeaders;
+import 
org.apache.flink.runtime.rest.messages.taskmanager.TaskManagerIdPathParameter;
+import org.apache.flink.runtime.util.JsonUtils;
+import org.apache.flink.runtime.webmonitor.RestfulGateway;
+import org.apache.flink.runtime.webmonitor.retriever.GatewayRetriever;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+import 
org.apache.flink.shaded.netty4.io.netty.handler.codec.http.DefaultFullHttpResponse;
+import org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpHeaders;
+import org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpRequest;
+import 
org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpResponseStatus;
+import org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpVersion;
+
+import javax.annotation.Nonnull;
+
+import java.nio.charset.Charset;
+import java.util.Map;
+import java.util.Optional;
+import java.util.concurrent.CompletableFuture;
+
+/**
+ * TaskManagerLogListHandler serves the request which gets the historical log 
file list of a given taskmanager.
+ */
+public class TaskManagerLogListHandler extends AbstractHandler {
 
 Review comment:
   Yes, you're right. I'll modify it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10002) WebUI shows logs unfriendly, especially when the amount of logs is large

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16660007#comment-16660007
 ] 

ASF GitHub Bot commented on FLINK-10002:


zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r227210230
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/taskmanager/TaskManagerLogFileWithRangeHeaders.java
 ##
 @@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.messages.taskmanager;
+
+import org.apache.flink.runtime.rest.HttpMethodWrapper;
+import 
org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerLogFileHandler;
+import org.apache.flink.runtime.rest.messages.EmptyRequestBody;
+import org.apache.flink.runtime.rest.messages.RangePathParameter;
+import org.apache.flink.runtime.rest.messages.UntypedResponseMessageHeaders;
+
+import java.util.Arrays;
+
+/**
+ * Headers for the {@link TaskManagerLogFileHandler}.
+ */
+public class TaskManagerLogFileWithRangeHeaders implements 
UntypedResponseMessageHeaders {
 
 Review comment:
   I'm wondering whether the current implementation should be compatible with 
the previous interfaces `/taskmanagers/:taskmanagerId/log` and 
`/taskmanagers/:taskmanagerId/stdout`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> WebUI shows logs unfriendly, especially when the amount of logs is large
> 
>
> Key: FLINK-10002
> URL: https://issues.apache.org/jira/browse/FLINK-10002
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: zhangxinyu
>Assignee: zhangxinyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
> Attachments: image-2018-09-10-11-38-07-973.png
>
>
> When a streaming job runs for a long time, the amount of logs may be very 
> large. The current WebUI shows the entire content of the logs: it takes a long 
> time to download the logs from the task managers, and the browser cannot 
> display them.
> Therefore, I suggest that Flink use a DailyRollingAppender to split the logs 
> by default, and that the task manager provide an API that can fetch logs based 
> on a time-interval parameter. In this way the WebUI can display logs by time 
> interval.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zhangxinyu1 commented on a change in pull request #6770: [FLINK-10002] [Webfrontend] WebUI shows jm/tm logs more friendly.

2018-10-22 Thread GitBox
zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r227210230
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/taskmanager/TaskManagerLogFileWithRangeHeaders.java
 ##
 @@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.messages.taskmanager;
+
+import org.apache.flink.runtime.rest.HttpMethodWrapper;
+import 
org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerLogFileHandler;
+import org.apache.flink.runtime.rest.messages.EmptyRequestBody;
+import org.apache.flink.runtime.rest.messages.RangePathParameter;
+import org.apache.flink.runtime.rest.messages.UntypedResponseMessageHeaders;
+
+import java.util.Arrays;
+
+/**
+ * Headers for the {@link TaskManagerLogFileHandler}.
+ */
+public class TaskManagerLogFileWithRangeHeaders implements 
UntypedResponseMessageHeaders {
 
 Review comment:
   I'm wondering whether the current implementation should be compatible with 
the previous interfaces `/taskmanagers/:taskmanagerId/log` and 
`/taskmanagers/:taskmanagerId/stdout`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10002) WebUI shows logs unfriendly, especially when the amount of logs is large

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16660003#comment-16660003
 ] 

ASF GitHub Bot commented on FLINK-10002:


zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r227209341
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/taskmanager/TaskManagerStdoutFileHandler.java
 ##
 @@ -54,7 +55,7 @@ public TaskManagerStdoutFileHandler(
}
 
@Override
-   protected CompletableFuture<TransientBlobKey> 
requestFileUpload(ResourceManagerGateway resourceManagerGateway, ResourceID 
taskManagerResourceId) {
-   return 
resourceManagerGateway.requestTaskManagerFileUpload(taskManagerResourceId, 
FileType.STDOUT, timeout);
+   protected CompletableFuture<TransientBlobKey> 
requestFileUpload(ResourceManagerGateway resourceManagerGateway, ResourceID 
taskManagerResourceId, String filename, FileOffsetRange range) {
 
 Review comment:
   ".out" filename wouldn't be in log list.  The log list only includes 
filename of all historical logs. If we modify the implement of 
`LogListHandler`, we can also get the stdout filename. However, do you think we 
should mix them up?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> WebUI shows logs unfriendly, especially when the amount of logs is large
> 
>
> Key: FLINK-10002
> URL: https://issues.apache.org/jira/browse/FLINK-10002
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: zhangxinyu
>Assignee: zhangxinyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
> Attachments: image-2018-09-10-11-38-07-973.png
>
>
> When a streaming job runs for a long time, the amount of logs may be very 
> large. The current WebUI shows the entire content of the logs: it takes a long 
> time to download the logs from the task managers, and the browser cannot 
> display them.
> Therefore, I suggest that Flink use a DailyRollingAppender to split the logs 
> by default, and that the task manager provide an API that can fetch logs based 
> on a time-interval parameter. In this way the WebUI can display logs by time 
> interval.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10642) CodeGen split fields errors when maxGeneratedCodeLength equals 1

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16660004#comment-16660004
 ] 

ASF GitHub Bot commented on FLINK-10642:


hequn8128 commented on issue #6900: [FLINK-10642][table] fix CodeGen split 
fields errors in special config
URL: https://github.com/apache/flink/pull/6900#issuecomment-432073424
 
 
   @xueyumusic  
   Thanks for fixing the bug. The bug is caused by `private boolean false;`, 
which is generated from the `reusableMemberStatements`. The fix looks good. I 
think it would be better to add a test to cover this case. 
   
   LGTM. I think we should ask @twalthr for advice before merging this change.
   Best, Hequn
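
   For background, a hedged toy illustration of the code-splitting idea at play: 
when generated code exceeds `maxGeneratedCodeLength`, the body is broken into 
smaller methods backed by member fields. This is not Flink's actual 
CodeGenerator; all names are assumptions.

```java
// Toy illustration: instead of one long inlined body, the generator emits
// split methods plus member fields to hold intermediate results. A malformed
// member declaration (such as "private boolean false;") makes the whole
// generated class uncompilable, which is the failure mode fixed here.
public class SplitCodegenToy {

	private int field0;
	private int field1;

	private void assignField0(int[] input) { field0 = input[0] * 2; }
	private void assignField1(int[] input) { field1 = input[1] * 2; }

	public int[] map(int[] input) {
		// Each split method is called in order, as the generator would emit.
		assignField0(input);
		assignField1(input);
		return new int[] { field0, field1 };
	}

	public static void main(String[] args) {
		int[] out = new SplitCodegenToy().map(new int[] { 1, 2 });
		System.out.println(out[0] + ", " + out[1]);
	}
}
```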


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> CodeGen split fields errors when maxGeneratedCodeLength equals 1
> 
>
> Key: FLINK-10642
> URL: https://issues.apache.org/jira/browse/FLINK-10642
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.6.1
>Reporter: xueyu
>Assignee: xueyu
>Priority: Major
>  Labels: pull-request-available
>
> Several tests fail when maxGeneratedCodeLength is set to 1, e.g.:
>   CalcITCase.testFilterOnCustomType:260 ? InvalidProgram Table program cannot 
> be...
>   JavaTableEnvironmentITCase.testAsFromAndToPojo:394 ? InvalidProgram Table 
> prog...
>   JavaTableEnvironmentITCase.testAsFromAndToPrivateFieldPojo:421 ? 
> InvalidProgram
>   JavaTableEnvironmentITCase.testAsFromPojo:288 ? InvalidProgram Table 
> program c...
>   JavaTableEnvironmentITCase.testAsFromPrivateFieldsPojo:366 ? InvalidProgram 
> Ta...
>   JavaTableEnvironmentITCase.testAsWithPojoAndGenericTypes:453 ? 
> InvalidProgram ...
>   TimeAttributesITCase.testPojoSupport:566 ? JobExecution Job execution 
> failed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zhangxinyu1 commented on a change in pull request #6770: [FLINK-10002] [Webfrontend] WebUI shows jm/tm logs more friendly.

2018-10-22 Thread GitBox
zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r227209341
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/taskmanager/TaskManagerStdoutFileHandler.java
 ##
 @@ -54,7 +55,7 @@ public TaskManagerStdoutFileHandler(
}
 
@Override
-   protected CompletableFuture<TransientBlobKey> 
requestFileUpload(ResourceManagerGateway resourceManagerGateway, ResourceID 
taskManagerResourceId) {
-   return 
resourceManagerGateway.requestTaskManagerFileUpload(taskManagerResourceId, 
FileType.STDOUT, timeout);
+   protected CompletableFuture<TransientBlobKey> 
requestFileUpload(ResourceManagerGateway resourceManagerGateway, ResourceID 
taskManagerResourceId, String filename, FileOffsetRange range) {
 
 Review comment:
   ".out" filename wouldn't be in log list.  The log list only includes 
filename of all historical logs. If we modify the implement of 
`LogListHandler`, we can also get the stdout filename. However, do you think we 
should mix them up?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hequn8128 commented on issue #6900: [FLINK-10642][table] fix CodeGen split fields errors in special config

2018-10-22 Thread GitBox
hequn8128 commented on issue #6900: [FLINK-10642][table] fix CodeGen split 
fields errors in special config
URL: https://github.com/apache/flink/pull/6900#issuecomment-432073424
 
 
   @xueyumusic  
   Thanks for fixing the bug. The bug is caused by `private boolean false;`, 
which is generated from the `reusableMemberStatements`. The fix looks good. I 
think it would be better to add a test to cover this case. 
   
   LGTM. I think we should ask @twalthr for advice before merging this change.
   Best, Hequn


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10002) WebUI shows logs unfriendly, especially when the amount of logs is large

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659969#comment-16659969
 ] 

ASF GitHub Bot commented on FLINK-10002:


zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r226959326
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/files/FileOffsetRange.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.files;
+
+import java.io.File;
+import java.io.Serializable;
+
+/**
+ * FileOffsetRange is used to decide which part of files to read.
+ */
+public class FileOffsetRange implements Serializable{
+   private static final String SEPARATOR = "-";
+   private final long start;
 
 Review comment:
   A RESTful API like "/xxx/taskmanager/$filename?start=123&size=456" is 
better to use. However, I think 'Range' is always understood as something from 
'start' to 'end'.
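
   For clarity, a hedged sketch of start+size semantics with clamping. This is 
illustrative only, not the PR's final FileOffsetRange:

```java
import java.io.Serializable;

/** Illustrative start+size range, clamped against the actual file length. */
public class FileOffsetRangeSketch implements Serializable {

	private static final long serialVersionUID = 1L;

	private final long start; // first byte offset to read
	private final long size;  // maximum number of bytes to return

	public FileOffsetRangeSketch(long start, long size) {
		this.start = Math.max(0L, start);
		this.size = Math.max(0L, size);
	}

	/** Number of bytes that can actually be served from a file of the given length. */
	public long effectiveSize(long fileLength) {
		long from = Math.min(start, fileLength);
		return Math.min(size, fileLength - from);
	}

	public long getStart() {
		return start;
	}

	public long getSize() {
		return size;
	}
}
```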


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> WebUI shows logs unfriendly, especially when the amount of logs is large
> 
>
> Key: FLINK-10002
> URL: https://issues.apache.org/jira/browse/FLINK-10002
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: zhangxinyu
>Assignee: zhangxinyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
> Attachments: image-2018-09-10-11-38-07-973.png
>
>
> When a streaming job runs for a long time, the amount of logs may be very 
> large. The current WebUI shows the entire content of the logs: it takes a long 
> time to download the logs from the task managers, and the browser cannot 
> display them.
> Therefore, I suggest that Flink use a DailyRollingAppender to split the logs 
> by default, and that the task manager provide an API that can fetch logs based 
> on a time-interval parameter. In this way the WebUI can display logs by time 
> interval.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10002) WebUI shows logs unfriendly, especially when the amount of logs is large

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659965#comment-16659965
 ] 

ASF GitHub Bot commented on FLINK-10002:


zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r226958244
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/files/FileOffsetRange.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.files;
+
+import java.io.File;
+import java.io.Serializable;
+
+/**
+ * FileOffsetRange is used to decide which part of files to read.
+ */
+public class FileOffsetRange implements Serializable{
+   private static final String SEPARATOR = "-";
+   private final long start;
 
 Review comment:
   A RESTful API like "/xxx/taskmanager/$filename?start=123&size=456" is 
better to use. However, I think 'Range' is always understood as something from 
'start' to 'end'.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> WebUI shows logs unfriendly, especially when the amount of logs is large
> 
>
> Key: FLINK-10002
> URL: https://issues.apache.org/jira/browse/FLINK-10002
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: zhangxinyu
>Assignee: zhangxinyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
> Attachments: image-2018-09-10-11-38-07-973.png
>
>
> When a streaming job runs for a long time, the amount of logs may be very 
> large. The current WebUI shows the entire content of the logs: it takes a long 
> time to download the logs from the task managers, and the browser cannot 
> display them.
> Therefore, I suggest that Flink use a DailyRollingAppender to split the logs 
> by default, and that the task manager provide an API that can fetch logs based 
> on a time-interval parameter. In this way the WebUI can display logs by time 
> interval.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10002) WebUI shows logs unfriendly, especially when the amount of logs is large

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659967#comment-16659967
 ] 

ASF GitHub Bot commented on FLINK-10002:


zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r226958660
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/files/FileOffsetRange.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.files;
+
+import java.io.File;
+import java.io.Serializable;
+
+/**
+ * FileOffsetRange is used to decide which part of files to read.
+ */
+public class FileOffsetRange implements Serializable{
+   private static final String SEPARATOR = "-";
+   private final long start;
 
 Review comment:
   A RESTful API like "/xxx/taskmanager/$filename?start=123&size=456" is 
better to use. However, I think 'Range' is always understood as something from 
'start' to 'end'.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> WebUI shows logs unfriendly, especially when the amount of logs is large
> 
>
> Key: FLINK-10002
> URL: https://issues.apache.org/jira/browse/FLINK-10002
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: zhangxinyu
>Assignee: zhangxinyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
> Attachments: image-2018-09-10-11-38-07-973.png
>
>
> When a streaming job runs for a long time, the amount of logs may be very 
> large. The current WebUI shows the entire content of the logs: it takes a long 
> time to download the logs from the task managers, and the browser cannot 
> display them.
> Therefore, I suggest that Flink use a DailyRollingAppender to split the logs 
> by default, and that the task manager provide an API that can fetch logs based 
> on a time-interval parameter. In this way the WebUI can display logs by time 
> interval.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10002) WebUI shows logs unfriendly, especially when the amount of logs is large

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659966#comment-16659966
 ] 

ASF GitHub Bot commented on FLINK-10002:


zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r226958285
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/files/FileOffsetRange.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.files;
+
+import java.io.File;
+import java.io.Serializable;
+
+/**
+ * FileOffsetRange is used to decide which part of files to read.
+ */
+public class FileOffsetRange implements Serializable{
+   private static final String SEPARATOR = "-";
+   private final long start;
 
 Review comment:
   A RESTful API like "/xxx/taskmanager/$filename?start=123&end=456" would be 
better to use. However, I think 'Range' is always understood as something from 
'start' to 'end'.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> WebUI shows logs unfriendly, especially when the amount of logs is large
> 
>
> Key: FLINK-10002
> URL: https://issues.apache.org/jira/browse/FLINK-10002
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: zhangxinyu
>Assignee: zhangxinyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
> Attachments: image-2018-09-10-11-38-07-973.png
>
>
> When a streaming job runs for a long time, the amount of logs may be very 
> large. The current WebUI shows the entire content of the logs. It costs much 
> time to download logs from the task managers, and the browser cannot display 
> the logs.
> Therefore, I suggest that Flink use a DailyRollingAppender to split logs by 
> default, and that the task manager provide an API that can fetch logs for a 
> given time interval. In this way the WebUI can display logs based on a time 
> interval.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zhangxinyu1 commented on a change in pull request #6770: [FLINK-10002] [Webfrontend] WebUI shows jm/tm logs more friendly.

2018-10-22 Thread GitBox
zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r226958660
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/files/FileOffsetRange.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.files;
+
+import java.io.File;
+import java.io.Serializable;
+
+/**
+ * FileOffsetRange is used to decide which part of files to read.
+ */
+public class FileOffsetRange implements Serializable{
+   private static final String SEPARATOR = "-";
+   private final long start;
 
 Review comment:
   A RESTful API like "/xxx/taskmanager/$filename?start=123&end=456" would be 
better to use. However, I think 'Range' is always understood as something from 
'start' to 'end'.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhangxinyu1 commented on a change in pull request #6770: [FLINK-10002] [Webfrontend] WebUI shows jm/tm logs more friendly.

2018-10-22 Thread GitBox
zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r226958285
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/files/FileOffsetRange.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.files;
+
+import java.io.File;
+import java.io.Serializable;
+
+/**
+ * FileOffsetRange is used to decide which part of files to read.
+ */
+public class FileOffsetRange implements Serializable{
+   private static final String SEPARATOR = "-";
+   private final long start;
 
 Review comment:
   A RESTful API like "/xxx/taskmanager/$filename?start=123&end=456" would be 
better to use. However, I think 'Range' is always understood as something from 
'start' to 'end'.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhangxinyu1 commented on a change in pull request #6770: [FLINK-10002] [Webfrontend] WebUI shows jm/tm logs more friendly.

2018-10-22 Thread GitBox
zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r226959326
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/files/FileOffsetRange.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.files;
+
+import java.io.File;
+import java.io.Serializable;
+
+/**
+ * FileOffsetRange is used to decide which part of files to read.
+ */
+public class FileOffsetRange implements Serializable{
+   private static final String SEPARATOR = "-";
+   private final long start;
 
 Review comment:
   A RESTful API like "/xxx/taskmanager/$filename?start=123&end=456" would be 
better to use. However, I think 'Range' is always understood as something from 
'start' to 'end'.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhangxinyu1 commented on a change in pull request #6770: [FLINK-10002] [Webfrontend] WebUI shows jm/tm logs more friendly.

2018-10-22 Thread GitBox
zhangxinyu1 commented on a change in pull request #6770:  [FLINK-10002] 
[Webfrontend] WebUI shows jm/tm logs more friendly.
URL: https://github.com/apache/flink/pull/6770#discussion_r226958244
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/files/FileOffsetRange.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.files;
+
+import java.io.File;
+import java.io.Serializable;
+
+/**
+ * FileOffsetRange is used to decide which part of files to read.
+ */
+public class FileOffsetRange implements Serializable{
+   private static final String SEPARATOR = "-";
+   private final long start;
 
 Review comment:
   A RESTful API like "/xxx/taskmanager/$filename?start=123&end=456" would be 
better to use. However, I think 'Range' is always understood as something from 
'start' to 'end'.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10603) Reduce kafka test duration

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659948#comment-16659948
 ] 

ASF GitHub Bot commented on FLINK-10603:


yanghua commented on issue #6890: [FLINK-10603] Reduce kafka test duration
URL: https://github.com/apache/flink/pull/6890#issuecomment-432060529
 
 
   OK, there may be more than one potential reason. Since 0.11, more tests have 
been added to the connector (for example, for Producer transactions); in 
addition, Kafka may have adjusted the default timeouts of the relevant APIs (as 
in this example), and there may be problems with the test logic itself. I 
will try to find the problem, but it may take some time.
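
   For illustration, one way to attack the duration is to fail fast: a sketch 
of client-side timeout tightening using standard Kafka client settings. The 
values are assumptions for tests, not the connector's actual defaults.

```java
import java.util.Properties;

public final class FastFailingKafkaTestConfig {

    // Illustrative only: tighten client timeouts so a failing broker call
    // surfaces in seconds rather than minutes during tests.
    public static Properties create() {
        Properties props = new Properties();
        props.setProperty("request.timeout.ms", "10000");      // fail broker calls fast
        props.setProperty("transaction.timeout.ms", "30000");  // bound producer transactions
        props.setProperty("retries", "0");                     // surface errors immediately
        return props;
    }
}
```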


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Reduce kafka test duration
> --
>
> Key: FLINK-10603
> URL: https://issues.apache.org/jira/browse/FLINK-10603
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kafka Connector, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> The tests for the modern Kafka connector take more than 10 minutes, which is 
> simply unacceptable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on issue #6890: [FLINK-10603] Reduce kafka test duration

2018-10-22 Thread GitBox
yanghua commented on issue #6890: [FLINK-10603] Reduce kafka test duration
URL: https://github.com/apache/flink/pull/6890#issuecomment-432060529
 
 
   OK, there may be more than one potential reason. Since 0.11, more tests have 
been added to the connector (for example, for Producer transactions); in 
addition, Kafka may have adjusted the default timeouts of the relevant APIs (as 
in this example), and there may be problems with the test logic itself. I 
will try to find the problem, but it may take some time.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-6036) Let catalog support partition

2018-10-22 Thread jingzhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659932#comment-16659932
 ] 

jingzhang commented on FLINK-6036:
--

Hi [~xuefuz], I will submit a PR soon. Thanks.

> Let catalog support partition
> -
>
> Key: FLINK-6036
> URL: https://issues.apache.org/jira/browse/FLINK-6036
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Reporter: jingzhang
>Assignee: jingzhang
>Priority: Major
>
> Now the catalog only supports CRUD at the database and table level. But in 
> some kinds of catalogs, for example Hive, we also need to do CRUD operations 
> at the partition level. 
> This issue aims to let the catalog support partitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10641) Slow when applying new containers

2018-10-22 Thread Jiayi Liao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Liao closed FLINK-10641.
--
Resolution: Not A Problem

> Slow when applying new containers
> -
>
> Key: FLINK-10641
> URL: https://issues.apache.org/jira/browse/FLINK-10641
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.6.1
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>
> When it comes to the last requested container, the container is received and 
> returned over and over again like this:
> 14:36:19,486 INFO org.apache.flink.yarn.YarnResourceManager - Received new 
> container: container_1535124617388_1936_01_000929 - Remaining pending 
> container requests: 0
> 14:36:19,486 INFO org.apache.flink.yarn.YarnResourceManager - Returning 
> excess container container_1535124617388_1936_01_000929.
> But the truth is that the program still needs a container; it should not be 
> "excess".
> Sometimes this lasts several minutes, which is beyond our expectations. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10644) Batch Job: Speculative execution

2018-10-22 Thread JIN SUN (JIRA)
JIN SUN created FLINK-10644:
---

 Summary: Batch Job: Speculative execution
 Key: FLINK-10644
 URL: https://issues.apache.org/jira/browse/FLINK-10644
 Project: Flink
  Issue Type: New Feature
  Components: JobManager
Reporter: JIN SUN
Assignee: JIN SUN
 Fix For: 1.8.0


Stragglers/outliers are tasks that run slower than most of the tasks in a 
batch job. This impacts job latency, since such a straggler is typically on 
the critical path of the job and becomes the bottleneck. 

Tasks may be slow for various reasons, including hardware degradation, 
software misconfiguration, or noisy neighbors. It is hard for the JM to 
predict the runtime. 

To reduce the overhead of stragglers, other systems such as Hadoop/Tez and 
Spark have *_speculative execution_*. Speculative execution is a health-check 
procedure that looks for tasks to be speculated, i.e. tasks running slower in 
an ExecutionJobVertex than the median of all successfully completed tasks in 
that EJV. Such slow tasks are re-submitted to another TM: the slow task is not 
stopped, but a new copy runs in parallel, and the others are killed once one 
of them completes. 

This JIRA is an umbrella to apply this kind of idea in Flink. Details will be 
appended later.
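
For illustration, a minimal sketch of the median-based health check described 
above; the 1.5x slowdown factor and the class name are assumptions, not a 
settled design.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class StragglerDetector {

    // Assumption for illustration: a running task is a straggler candidate once
    // it exceeds 1.5x the median runtime of the completed tasks in its vertex.
    private static final double SLOWDOWN_FACTOR = 1.5;

    public static boolean isStraggler(long runningMillis, List<Long> completedRuntimesMillis) {
        if (completedRuntimesMillis.isEmpty()) {
            return false; // nothing to compare against yet
        }
        List<Long> sorted = new ArrayList<>(completedRuntimesMillis);
        Collections.sort(sorted);
        long median = sorted.get(sorted.size() / 2); // upper median is fine for a sketch
        return runningMillis > SLOWDOWN_FACTOR * median;
    }
}
```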



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10643) Bubble execution: Resource aware job execution

2018-10-22 Thread JIN SUN (JIRA)
JIN SUN created FLINK-10643:
---

 Summary: Bubble execution: Resource aware job execution
 Key: FLINK-10643
 URL: https://issues.apache.org/jira/browse/FLINK-10643
 Project: Flink
  Issue Type: New Feature
  Components: JobManager
Reporter: JIN SUN
Assignee: JIN SUN
 Fix For: 1.8.0
 Attachments: image-2018-10-22-16-28-32-355.png

Today Flink supports various channel types, such as pipelined channels and 
blocking channels. A blocking channel indicates that data is persisted in a 
batch and can be consumed later; it also means that the downstream task cannot 
start processing data until its producer has finished, and that the downstream 
task then depends only on this intermediate partition instead of on the 
upstream tasks. 

By leveraging this characteristic, Flink already supports fine-grained 
failover, which builds failover regions to reduce failover cost.  However, we 
can leverage this characteristic even more. As described by this 
[paper|http://www.vldb.org/pvldb/vol11/p746-yin.pdf] (VLDB 2018), *_Bubble 
Execution_* not only uses this characteristic to implement fine-grained 
failover, but also uses it to balance resource utilization against job 
performance. As shown in the paper (also in the following chart), with 50% of 
the resources it sees a 25% (0.75x speedup) average slowdown on the TPC-H 
benchmark.

!image-2018-10-22-16-28-32-355.png!

This JIRA is an umbrella that tries to apply the idea of this paper to Flink.
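
For illustration, a minimal sketch of the core grouping step: vertices 
connected by pipelined edges must run concurrently and form one "bubble", 
while blocking edges are the cut points between bubbles, so each bubble can be 
scheduled and failed over as a unit. The types are simplified assumptions, not 
Flink's ExecutionGraph classes.

```java
import java.util.List;

public final class BubbleGrouper {

    enum ChannelType { PIPELINED, BLOCKING }

    static final class Edge {
        final int source;
        final int target;
        final ChannelType type;

        Edge(int source, int target, ChannelType type) {
            this.source = source;
            this.target = target;
            this.type = type;
        }
    }

    /** Returns a bubble id per vertex: connected components over pipelined edges only. */
    static int[] groupIntoBubbles(int numVertices, List<Edge> edges) {
        int[] parent = new int[numVertices];
        for (int i = 0; i < numVertices; i++) {
            parent[i] = i;
        }
        for (Edge e : edges) {
            if (e.type == ChannelType.PIPELINED) {
                union(parent, e.source, e.target);
            }
        }
        int[] bubble = new int[numVertices];
        for (int i = 0; i < numVertices; i++) {
            bubble[i] = find(parent, i); // canonical representative = bubble id
        }
        return bubble;
    }

    private static int find(int[] p, int x) {
        return p[x] == x ? x : (p[x] = find(p, p[x])); // path compression
    }

    private static void union(int[] p, int a, int b) {
        p[find(p, a)] = find(p, b);
    }
}
```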



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10573) Support task revocation

2018-10-22 Thread JIN SUN (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659782#comment-16659782
 ] 

JIN SUN commented on FLINK-10573:
-

Thanks Zhijiang, I would like to use this exception. I see the JIRA is still 
open; do you have any update or a patch? 

> Support task revocation
> ---
>
> Key: FLINK-10573
> URL: https://issues.apache.org/jira/browse/FLINK-10573
> Project: Flink
>  Issue Type: Sub-task
>  Components: JobManager
>Reporter: JIN SUN
>Assignee: JIN SUN
>Priority: Major
> Fix For: 1.7.0
>
>
> In batch mode, when a downstream task has a partition-missing failure, it 
> indicates that the output of the upstream task has been lost. To make the 
> job succeed we need to rerun the upstream task to reproduce the data, which 
> we call task revocation (revoking the success of the upstream task).
> For revocation, we need to identify the partition-missing issue, and it is 
> better to detect the missing partition accurately (see the sketch below):
>  * Ideally, it makes things much easier if we get a specific exception 
> indicating that the data source is missing.
>  * When a task gets an IOException, it doesn't mean the source data has 
> issues. It might also be related to the target task, e.g. the target task 
> has network issues.
>  * If multiple tasks cannot read the same source, it is highly likely that 
> the source data is missing.
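
For illustration, a minimal sketch of the last heuristic: only revoke the 
producer once several distinct consumers have failed to read the same result 
partition, since a single IOException may just as well be a consumer-side 
network problem. The names and the threshold are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public final class MissingPartitionDetector {

    // Assumption for illustration: two independent consumer failures on the
    // same partition are treated as strong evidence that the data is lost.
    private static final int CONSUMER_FAILURE_THRESHOLD = 2;

    // partition id -> distinct consumer task ids that failed reading it
    private final Map<String, Set<String>> failures = new HashMap<>();

    /** Returns true if the producer of the partition should be revoked and rerun. */
    public boolean reportReadFailure(String partitionId, String consumerTaskId) {
        Set<String> consumers =
            failures.computeIfAbsent(partitionId, k -> new HashSet<>());
        consumers.add(consumerTaskId);
        return consumers.size() >= CONSUMER_FAILURE_THRESHOLD;
    }
}
```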



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] fhueske commented on a change in pull request #6873: [hotfix] Add Review Progress section to PR description template.

2018-10-22 Thread GitBox
fhueske commented on a change in pull request #6873: [hotfix] Add Review 
Progress section to PR description template.
URL: https://github.com/apache/flink/pull/6873#discussion_r227153277
 
 

 ##
 File path: .github/PULL_REQUEST_TEMPLATE.md
 ##
 @@ -70,3 +70,17 @@ This change added tests and can be verified as follows:
 
   - Does this pull request introduce a new feature? (yes / no)
   - If yes, how is the feature documented? (not applicable / docs / JavaDocs / 
not documented)
+
+
+
+# Review Progress
+
+**NOTE: THE REVIEW PROGRESS MUST ONLY BE UPDATED BY AN APACHE FLINK 
COMMITTER!**
+
+* [ ] 1. The contribution is well-described.
+* [ ] 2. There is consensus that the contribution should go into Flink.
+* [ ] 3. [Does not need specific attention | Needs specific attention for X | 
Has attention for X by Y]
 
 Review comment:
   Yes, I think in theory the number of people who should have a look at the 
PR can be extended. IMO, the committer who finally merges the PR should check 
that everybody mentioned there has commented on the PR, or ask them before 
merging.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10205) Batch Job: InputSplit Fault tolerant for DataSourceTask

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659711#comment-16659711
 ] 

ASF GitHub Bot commented on FLINK-10205:


isunjin commented on a change in pull request #6684: [FLINK-10205] Batch 
Job: InputSplit Fault tolerant for DataSource…
URL: https://github.com/apache/flink/pull/6684#discussion_r227137203
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionVertex.java
 ##
 @@ -104,6 +105,9 @@
/** The current or latest execution attempt of this vertex's task. */
private volatile Execution currentExecution;// this field must 
never be null
 
+   /** input split*/
+   private ArrayList<InputSplit> inputSplits;
 
 Review comment:
   good catch


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Batch Job: InputSplit Fault tolerant for DataSourceTask
> ---
>
> Key: FLINK-10205
> URL: https://issues.apache.org/jira/browse/FLINK-10205
> Project: Flink
>  Issue Type: Sub-task
>  Components: JobManager
>Affects Versions: 1.6.1, 1.6.2, 1.7.0
>Reporter: JIN SUN
>Assignee: JIN SUN
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Today the DataSourceTask pulls InputSplits from the JobManager to achieve 
> better performance. However, when a DataSourceTask fails and reruns, it will 
> not get the same splits as its previous version, which can introduce 
> inconsistent results or even data corruption.
> Furthermore, if two executions run at the same time (in a batch scenario), 
> these two executions should process the same splits.
> We need to fix this issue to make the inputs of a DataSourceTask 
> deterministic. The proposal is to save all splits into the ExecutionVertex, 
> and the DataSourceTask will pull splits from there.
>  document:
> [https://docs.google.com/document/d/1FdZdcA63tPUEewcCimTFy9Iz2jlVlMRANZkO4RngIuk/edit?usp=sharing]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] isunjin commented on a change in pull request #6684: [FLINK-10205] Batch Job: InputSplit Fault tolerant for DataSource…

2018-10-22 Thread GitBox
isunjin commented on a change in pull request #6684: [FLINK-10205] Batch 
Job: InputSplit Fault tolerant for DataSource…
URL: https://github.com/apache/flink/pull/6684#discussion_r227137203
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionVertex.java
 ##
 @@ -104,6 +105,9 @@
/** The current or latest execution attempt of this vertex's task. */
private volatile Execution currentExecution;// this field must 
never be null
 
+   /** input split*/
+   private ArrayList<InputSplit> inputSplits;
 
 Review comment:
   good catch


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10205) Batch Job: InputSplit Fault tolerant for DataSourceTask

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659655#comment-16659655
 ] 

ASF GitHub Bot commented on FLINK-10205:


isunjin commented on issue #6684: [FLINK-10205] Batch Job: InputSplit Fault 
tolerant for DataSource…
URL: https://github.com/apache/flink/pull/6684#issuecomment-431950619
 
 
   @tillrohrmann 
   
   >The thing I'm questioning is whether the InputSplits of the failed task 
need to be processed by the same (restarted) task or can be given to any 
running task.
   
   Agree. 
   
   I think the failed task's splits do **not** necessarily need to be processed 
by the same task (ExecutionVertex).
   
   > So far I'm not convinced that something would break if we simply return 
the InputSplits to the InputSplitAssigner
   
   Agree. 
   
   I think ```simply return the InputSplits to the InputSplitAssigner``` would 
work; the point is how to make it work.
   
   Restarting the entire graph will call ExecutionJobVertex.resetForNewExecution, 
which creates a new ```InputSplitAssigner``` and "returns" all 
```InputSplits``` to the ```InputSplitAssigner```.
   
   My point is that for fine-grained failover, we might not want to return all 
```InputSplits``` but just the failed ```InputSplits```. However, currently 
not every subclass of InputSplitAssigner has the logic to ```simply return the 
InputSplits to the InputSplitAssigner```, such as 
```LocatableInputSplitAssigner``` or any other ```customized 
InputSplitAssigner```.
   
   ```simply return the InputSplits to the InputSplitAssigner``` also implies a 
transaction between the task and the JobManager (maybe multiple ones); we need 
to make sure the ```InputSplits``` are returned to the ```InputSplitAssigner``` 
exactly once. What happens if we have speculative execution, meaning two tasks 
consume the same set of InputSplits but do not fail at the same time: does 
every InputSplitAssigner need to keep a list to deduplicate? What happens if 
the TM dies or has a network issue and the InputSplits cannot be returned?
   
   Saving the ```InputSplits``` in the ExecutionVertex is a way to "return" them 
to the ```InputSplitAssigner```; the "side effect" of this implementation is 
that it also implies the ```InputSplits``` will be handled by the same task 
(ExecutionVertex). But this seems a simple and safe way to implement ```simply 
return the InputSplits to the InputSplitAssigner``` transactionally. 
   
   @tillrohrmann, the above is my understanding; let me know if we are on the 
same page. I would be happy to redo this if you have any other suggestion. 
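   
   For illustration, a minimal sketch of the "save the splits in the vertex" 
approach argued for above: the vertex remembers every split it was handed, and 
a restarted attempt replays that list before pulling fresh splits. The types 
are simplified assumptions, not Flink's InputSplit/InputSplitAssigner 
interfaces.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public final class VertexSplitTracker<S> {

    private final List<S> assignedSplits = new ArrayList<>();
    private Iterator<S> replay = null;

    /** Called on restart: subsequent requests replay previously assigned splits first. */
    public void resetForNewExecution() {
        replay = new ArrayList<>(assignedSplits).iterator();
    }

    /** Returns the next split: a replayed one if available, otherwise a fresh one. */
    public S nextSplit(Iterator<S> assigner) {
        if (replay != null && replay.hasNext()) {
            return replay.next(); // deterministic re-consumption after failover
        }
        if (assigner.hasNext()) {
            S split = assigner.next();
            assignedSplits.add(split); // remember for potential replay
            return split;
        }
        return null; // no more input
    }
}
```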
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Batch Job: InputSplit Fault tolerant for DataSourceTask
> ---
>
> Key: FLINK-10205
> URL: https://issues.apache.org/jira/browse/FLINK-10205
> Project: Flink
>  Issue Type: Sub-task
>  Components: JobManager
>Affects Versions: 1.6.1, 1.6.2, 1.7.0
>Reporter: JIN SUN
>Assignee: JIN SUN
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Today the DataSourceTask pulls InputSplits from the JobManager to achieve 
> better performance. However, when a DataSourceTask fails and reruns, it will 
> not get the same splits as its previous version, which can introduce 
> inconsistent results or even data corruption.
> Furthermore, if two executions run at the same time (in a batch scenario), 
> these two executions should process the same splits.
> We need to fix this issue to make the inputs of a DataSourceTask 
> deterministic. The proposal is to save all splits into the ExecutionVertex, 
> and the DataSourceTask will pull splits from there.
>  document:
> [https://docs.google.com/document/d/1FdZdcA63tPUEewcCimTFy9Iz2jlVlMRANZkO4RngIuk/edit?usp=sharing]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] isunjin commented on issue #6684: [FLINK-10205] Batch Job: InputSplit Fault tolerant for DataSource…

2018-10-22 Thread GitBox
isunjin commented on issue #6684: [FLINK-10205] Batch Job: InputSplit Fault 
tolerant for DataSource…
URL: https://github.com/apache/flink/pull/6684#issuecomment-431950619
 
 
   @tillrohrmann 
   
   >The thing I'm questioning is whether the InputSplits of the failed task 
need to be processed by the same (restarted) task or can be given to any 
running task.
   
   Agree. 
   
   I think the failed task's splits do **not** necessarily need to be processed 
by the same task (ExecutionVertex).
   
   > So far I'm not convinced that something would break if we simply return 
the InputSplits to the InputSplitAssigner
   
   Agree. 
   
   I think ```simply return the InputSplits to the InputSplitAssigner``` would 
work; the point is how to make it work.
   
   Restarting the entire graph will call ExecutionJobVertex.resetForNewExecution, 
which creates a new ```InputSplitAssigner``` and "returns" all 
```InputSplits``` to the ```InputSplitAssigner```.
   
   My point is that for fine-grained failover, we might not want to return all 
```InputSplits``` but just the failed ```InputSplits```. However, currently 
not every subclass of InputSplitAssigner has the logic to ```simply return the 
InputSplits to the InputSplitAssigner```, such as 
```LocatableInputSplitAssigner``` or any other ```customized 
InputSplitAssigner```.
   
   ```simply return the InputSplits to the InputSplitAssigner``` also implies a 
transaction between the task and the JobManager (maybe multiple ones); we need 
to make sure the ```InputSplits``` are returned to the ```InputSplitAssigner``` 
exactly once. What happens if we have speculative execution, meaning two tasks 
consume the same set of InputSplits but do not fail at the same time: does 
every InputSplitAssigner need to keep a list to deduplicate? What happens if 
the TM dies or has a network issue and the InputSplits cannot be returned?
   
   Saving the ```InputSplits``` in the ExecutionVertex is a way to "return" them 
to the ```InputSplitAssigner```; the "side effect" of this implementation is 
that it also implies the ```InputSplits``` will be handled by the same task 
(ExecutionVertex). But this seems a simple and safe way to implement ```simply 
return the InputSplits to the InputSplitAssigner``` transactionally. 
   
   @tillrohrmann, the above is my understanding; let me know if we are on the 
same page. I would be happy to redo this if you have any other suggestion. 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659640#comment-16659640
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

zentol commented on a change in pull request #6784: [FLINK-7811] Add support 
for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227098378
 
 

 ##
 File path: .travis.yml
 ##
 @@ -76,25 +76,25 @@ jobs:
 # main profile
 - stage: compile
   script: ./tools/travis_controller.sh
-  env: PROFILE="-Dhadoop.version=2.8.3 -Pinclude-kinesis 
-Dinclude_hadoop_aws"
+  env: PROFILE="-Dhadoop.version=2.8.3 -Pinclude-kinesis 
-Dinclude_hadoop_aws -Dscala-212"
 
 Review comment:
   this is just for testing purposes and was not intended to be merged. Scala 
2.12 will be covered with a nightly cron-job as you suggested.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
zentol commented on a change in pull request #6784: [FLINK-7811] Add support 
for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227098378
 
 

 ##
 File path: .travis.yml
 ##
 @@ -76,25 +76,25 @@ jobs:
 # main profile
 - stage: compile
   script: ./tools/travis_controller.sh
-  env: PROFILE="-Dhadoop.version=2.8.3 -Pinclude-kinesis 
-Dinclude_hadoop_aws"
+  env: PROFILE="-Dhadoop.version=2.8.3 -Pinclude-kinesis 
-Dinclude_hadoop_aws -Dscala-212"
 
 Review comment:
   this is just for testing purposes and was not intended to be merged. Scala 
2.12 will be covered with a nightly cron-job as you suggested.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659625#comment-16659625
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227087898
 
 

 ##
 File path: .travis.yml
 ##
 @@ -76,25 +76,25 @@ jobs:
 # main profile
 - stage: compile
   script: ./tools/travis_controller.sh
-  env: PROFILE="-Dhadoop.version=2.8.3 -Pinclude-kinesis 
-Dinclude_hadoop_aws"
+  env: PROFILE="-Dhadoop.version=2.8.3 -Pinclude-kinesis 
-Dinclude_hadoop_aws -Dscala-212"
 
 Review comment:
   This switches all profiles to Scala 2.12 - I guess we should only switch 
half of them, or make Scala 2.12 a nightly cron-job profile?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227087898
 
 

 ##
 File path: .travis.yml
 ##
 @@ -76,25 +76,25 @@ jobs:
 # main profile
 - stage: compile
   script: ./tools/travis_controller.sh
-  env: PROFILE="-Dhadoop.version=2.8.3 -Pinclude-kinesis 
-Dinclude_hadoop_aws"
+  env: PROFILE="-Dhadoop.version=2.8.3 -Pinclude-kinesis 
-Dinclude_hadoop_aws -Dscala-212"
 
 Review comment:
   This switches all profiles to Scala 2.12 - I guess we should only switch 
half of them, or make Scala 2.12 a nightly cron-job profile?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659621#comment-16659621
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227086269
 
 

 ##
 File path: 
flink-core/src/main/java/org/apache/flink/util/InstantiationUtil.java
 ##
 @@ -169,6 +169,7 @@ public ClassLoaderObjectInputStream(InputStream in, 
ClassLoader classLoader) thr
scalaTypes.add("scala.Tuple2$mcDJ$sp");
scalaTypes.add("scala.Tuple2$mcDI$sp");
scalaTypes.add("scala.Tuple2$mcDD$sp");
+   scalaTypes.add("scala.Enumeration$ValueSet");
 
 Review comment:
   This seems like fixing one specific case that was discovered. There is 
probably a ton of unsupported cases. I would suggest that we either:
 - See why this is a problem in the first place (classes should not be 
serialized into savepoints or should not be in the serializers any more).
 - Do not support Scala version switches with savepoint compatibility (due 
to scala)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227086269
 
 

 ##
 File path: 
flink-core/src/main/java/org/apache/flink/util/InstantiationUtil.java
 ##
 @@ -169,6 +169,7 @@ public ClassLoaderObjectInputStream(InputStream in, 
ClassLoader classLoader) thr
scalaTypes.add("scala.Tuple2$mcDJ$sp");
scalaTypes.add("scala.Tuple2$mcDI$sp");
scalaTypes.add("scala.Tuple2$mcDD$sp");
+   scalaTypes.add("scala.Enumeration$ValueSet");
 
 Review comment:
   This seems like fixing one specific case that was discovered. There is 
probably a ton of unsupported cases. I would suggest that we either:
 - See why this is a problem in the first place (classes should not be 
serialized into savepoints or should not be in the serializers any more).
 - Do not support Scala version switches with savepoint compatibility (due 
to scala)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659615#comment-16659615
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227081211
 
 

 ##
 File path: 
flink-runtime/src/main/scala/org/apache/flink/runtime/types/FlinkScalaKryoInstantiator.scala
 ##
 @@ -0,0 +1,187 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.types
+
+import scala.collection.immutable.{BitSet, HashMap, HashSet, ListMap, ListSet, 
NumericRange, Queue, Range, SortedMap, SortedSet}
+import scala.collection.mutable.{Buffer, ListBuffer, WrappedArray, BitSet => 
MBitSet, HashMap => MHashMap, HashSet => MHashSet, Map => MMap, Queue => 
MQueue, Set => MSet}
+import scala.util.matching.Regex
+import _root_.java.io.Serializable
+
+import com.twitter.chill._
+
+
+import scala.collection.JavaConverters._
+
+
+/**
+ * This class has a no-arg constructor, suitable for use with reflection 
instantiation
+ * It has no registered serializers, just the standard Kryo configured for 
Kryo.
+ */
+class EmptyFlinkScalaKryoInstantiator extends KryoInstantiator {
+  override def newKryo = {
+val k = new KryoBase
+k.setRegistrationRequired(false)
+k.setInstantiatorStrategy(new 
org.objenesis.strategy.StdInstantiatorStrategy)
+
+// Handle cases where we may have an odd classloader setup like with 
libjars
+// for hadoop
+val classLoader = Thread.currentThread.getContextClassLoader
+k.setClassLoader(classLoader)
+
+k
+  }
+}
+
+object FlinkScalaKryoInstantiator extends Serializable {
+  private val mutex = new AnyRef with Serializable // some serializable object
 
 Review comment:
   Use `SerializableObject`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659614#comment-16659614
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227080991
 
 

 ##
 File path: 
flink-runtime/src/main/scala/org/apache/flink/runtime/types/FlinkScalaKryoInstantiator.scala
 ##
 @@ -0,0 +1,187 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.types
+
+import scala.collection.immutable.{BitSet, HashMap, HashSet, ListMap, ListSet, 
NumericRange, Queue, Range, SortedMap, SortedSet}
+import scala.collection.mutable.{Buffer, ListBuffer, WrappedArray, BitSet => 
MBitSet, HashMap => MHashMap, HashSet => MHashSet, Map => MMap, Queue => 
MQueue, Set => MSet}
+import scala.util.matching.Regex
+import _root_.java.io.Serializable
+
+import com.twitter.chill._
+
+
+import scala.collection.JavaConverters._
+
+
+/**
 
 Review comment:
   If this class is copied from Chill, it should be mentioned somewhere.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227080991
 
 

 ##
 File path: 
flink-runtime/src/main/scala/org/apache/flink/runtime/types/FlinkScalaKryoInstantiator.scala
 ##
 @@ -0,0 +1,187 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.types
+
+import scala.collection.immutable.{BitSet, HashMap, HashSet, ListMap, ListSet, 
NumericRange, Queue, Range, SortedMap, SortedSet}
+import scala.collection.mutable.{Buffer, ListBuffer, WrappedArray, BitSet => 
MBitSet, HashMap => MHashMap, HashSet => MHashSet, Map => MMap, Queue => 
MQueue, Set => MSet}
+import scala.util.matching.Regex
+import _root_.java.io.Serializable
+
+import com.twitter.chill._
+
+
+import scala.collection.JavaConverters._
+
+
+/**
 
 Review comment:
   If this class is copied from Chill, it should be mentioned somewhere.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227081211
 
 

 ##
 File path: 
flink-runtime/src/main/scala/org/apache/flink/runtime/types/FlinkScalaKryoInstantiator.scala
 ##
 @@ -0,0 +1,187 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.types
+
+import scala.collection.immutable.{BitSet, HashMap, HashSet, ListMap, ListSet, 
NumericRange, Queue, Range, SortedMap, SortedSet}
+import scala.collection.mutable.{Buffer, ListBuffer, WrappedArray, BitSet => 
MBitSet, HashMap => MHashMap, HashSet => MHashSet, Map => MMap, Queue => 
MQueue, Set => MSet}
+import scala.util.matching.Regex
+import _root_.java.io.Serializable
+
+import com.twitter.chill._
+
+
+import scala.collection.JavaConverters._
+
+
+/**
+ * This class has a no-arg constructor, suitable for use with reflection 
instantiation
+ * It has no registered serializers, just the standard Kryo configured for 
Kryo.
+ */
+class EmptyFlinkScalaKryoInstantiator extends KryoInstantiator {
+  override def newKryo = {
+val k = new KryoBase
+k.setRegistrationRequired(false)
+k.setInstantiatorStrategy(new 
org.objenesis.strategy.StdInstantiatorStrategy)
+
+// Handle cases where we may have an odd classloader setup like with 
libjars
+// for hadoop
+val classLoader = Thread.currentThread.getContextClassLoader
+k.setClassLoader(classLoader)
+
+k
+  }
+}
+
+object FlinkScalaKryoInstantiator extends Serializable {
+  private val mutex = new AnyRef with Serializable // some serializable object
 
 Review comment:
   Use `SerializableObject`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659611#comment-16659611
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227080329
 
 

 ##
 File path: flink-formats/flink-avro/pom.xml
 ##
 @@ -106,6 +106,15 @@ under the License.
test
test-jar

+
+   
 
 Review comment:
   Why do we need the `FlinkScalaKryoInstantiator`? The serializers, including 
Kryo, should work without it. This indicates some problem in the scope of the 
flink-avro dependencies...


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659613#comment-16659613
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227080714
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/types/PriorityQueueSerializer.java
 ##
 @@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.types;
+
+import com.esotericsoftware.kryo.Kryo;
+import com.esotericsoftware.kryo.Serializer;
+import com.esotericsoftware.kryo.io.Input;
+import com.esotericsoftware.kryo.io.Output;
+import com.twitter.chill.IKryoRegistrar;
+import com.twitter.chill.SingleRegistrar;
+
+import java.lang.reflect.Field;
+import java.util.Comparator;
+import java.util.PriorityQueue;
+
+class PriorityQueueSerializer<T> extends Serializer<PriorityQueue<T>> {
+   private Field compField;
 
 Review comment:
   This looks like it should go into a companion object.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227080714
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/types/PriorityQueueSerializer.java
 ##
 @@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.types;
+
+import com.esotericsoftware.kryo.Kryo;
+import com.esotericsoftware.kryo.Serializer;
+import com.esotericsoftware.kryo.io.Input;
+import com.esotericsoftware.kryo.io.Output;
+import com.twitter.chill.IKryoRegistrar;
+import com.twitter.chill.SingleRegistrar;
+
+import java.lang.reflect.Field;
+import java.util.Comparator;
+import java.util.PriorityQueue;
+
+class PriorityQueueSerializer<T> extends Serializer<PriorityQueue<T>> {
+   private Field compField;
 
 Review comment:
   This looks like it should go into a companion object.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227080329
 
 

 ##
 File path: flink-formats/flink-avro/pom.xml
 ##
 @@ -106,6 +106,15 @@ under the License.
			<scope>test</scope>
			<type>test-jar</type>
		</dependency>
+
+		<dependency>
 
 Review comment:
   Why do we need the `FlinkScalaKryoInstantiator`? The serializers, including 
Kryo, should work without it. This indicates some problem in the scope of the 
flink-avro dependencies...
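   If the instantiator is only exercised by tests, one minimal fix would be to
   pin the dependency that provides it to test scope. A sketch only; the
   `flink-scala` artifactId here is an assumption about where
   `FlinkScalaKryoInstantiator` lives:

	<dependency>
		<groupId>org.apache.flink</groupId>
		<!-- assumed location of FlinkScalaKryoInstantiator -->
		<artifactId>flink-scala_${scala.binary.version}</artifactId>
		<version>${project.version}</version>
		<scope>test</scope>
	</dependency>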


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10603) Reduce kafka test duration

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659610#comment-16659610
 ] 

ASF GitHub Bot commented on FLINK-10603:


zentol commented on issue #6890: [FLINK-10603] Reduce kafka test duration
URL: https://github.com/apache/flink/pull/6890#issuecomment-431917344
 
 
   the modern connector is run in the `misc` profile since it wasn't properly 
added to the `connector` profile in `stage.sh`. This coincidentally was a 
"good" thing since it would've exceeded the max build time.
   
   It looks normal _compared_ to `0.11`, but that one already is a disgrace in 
terms of test duration.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Reduce kafka test duration
> --
>
> Key: FLINK-10603
> URL: https://issues.apache.org/jira/browse/FLINK-10603
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kafka Connector, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> The tests for the modern kafka connector take more than 10 minutes which is 
> simply unacceptable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on issue #6890: [FLINK-10603] Reduce kafka test duration

2018-10-22 Thread GitBox
zentol commented on issue #6890: [FLINK-10603] Reduce kafka test duration
URL: https://github.com/apache/flink/pull/6890#issuecomment-431917344
 
 
   the modern connector is run in the `misc` profile since it wasn't properly 
added to the `connector` profile in `stage.sh`. This coincidentally was a 
"good" thing since it would've exceeded the max build time.
   
   It looks normal _compared_ to `0.11`, but that one already is a disgrace in 
terms of test duration.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227074756
 
 

 ##
 File path: flink-libraries/flink-sql-client/pom.xml
 ##
 @@ -45,22 +45,19 @@ under the License.
 

			<groupId>org.apache.flink</groupId>
-			<artifactId>…</artifactId>
 
 Review comment:
   I am puzzled by these dependencies. Why does the SQL client create an 
artifact that is not Scala versioned when it has "compile" dependencies on 
Scala-versioned artifacts? This seems like a problem in the current setup that 
is only not surfacing because there is only one Scala version. As soon as we 
support a second version, this will cause real problems with non-deterministic 
dependencies of released artifacts.
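   The deterministic alternative the comment points towards would be to
   Scala-suffix the module itself, so its published pom and its Scala-versioned
   dependencies always agree. A sketch under that assumption, not the PR's
   final pom:

	<!-- the client module itself carries the suffix ... -->
	<artifactId>flink-sql-client_${scala.binary.version}</artifactId>

	<!-- ... so its Scala-versioned dependencies cannot drift -->
	<dependency>
		<groupId>org.apache.flink</groupId>
		<artifactId>flink-table_${scala.binary.version}</artifactId>
		<version>${project.version}</version>
	</dependency>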


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659608#comment-16659608
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227074142
 
 

 ##
 File path: flink-formats/flink-parquet/pom.xml
 ##
 @@ -96,14 +96,14 @@ under the License.
 

			<groupId>org.apache.flink</groupId>
-			<artifactId>flink-test-utils_2.11</artifactId>
 
 Review comment:
   See above, tricky dependency non-determinism in released artifacts.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659609#comment-16659609
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227074756
 
 

 ##
 File path: flink-libraries/flink-sql-client/pom.xml
 ##
 @@ -45,22 +45,19 @@ under the License.
 

			<groupId>org.apache.flink</groupId>
-			<artifactId>…</artifactId>
 
 Review comment:
   I am puzzled by these dependencies. Why does the SQL client create an 
artifact that is not Scala versioned when it has "compile" dependencies on 
Scala-versioned artifacts? This seems like a problem in the current setup that 
is only not surfacing because there is only one Scala version. As soon as we 
support a second version, this will cause real problems with non-deterministic 
dependencies of released artifacts.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227074142
 
 

 ##
 File path: flink-formats/flink-parquet/pom.xml
 ##
 @@ -96,14 +96,14 @@ under the License.
 

			<groupId>org.apache.flink</groupId>
-			<artifactId>flink-test-utils_2.11</artifactId>
 
 Review comment:
   See above, tricky dependency non-determinism in released artifacts.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659606#comment-16659606
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227072036
 
 

 ##
 File path: flink-formats/flink-json/pom.xml
 ##
 @@ -70,17 +69,24 @@ under the License.
 

			<groupId>org.apache.flink</groupId>
-			<!-- … -->
-			<artifactId>flink-table_2.11</artifactId>
+			<artifactId>flink-table_${scala.binary.version}</artifactId>
			<version>${project.version}</version>
			<type>test-jar</type>
			<scope>test</scope>
		</dependency>
 
-		<!-- … -->
+		<!-- … -->
		<dependency>
			<groupId>org.scala-lang</groupId>
-			<artifactId>scala-compiler</artifactId>
+			<artifactId>scala-library</artifactId>
+			<scope>test</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.scala-lang.modules</groupId>
 
 Review comment:
   Is this an artifact of wrong dependency management in the `flink-table` 
module?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227072036
 
 

 ##
 File path: flink-formats/flink-json/pom.xml
 ##
 @@ -70,17 +69,24 @@ under the License.
 

			<groupId>org.apache.flink</groupId>
-			<!-- … -->
-			<artifactId>flink-table_2.11</artifactId>
+			<artifactId>flink-table_${scala.binary.version}</artifactId>
			<version>${project.version}</version>
			<type>test-jar</type>
			<scope>test</scope>
		</dependency>
 
-		<!-- … -->
+		<!-- … -->
		<dependency>
			<groupId>org.scala-lang</groupId>
-			<artifactId>scala-compiler</artifactId>
+			<artifactId>scala-library</artifactId>
+			<scope>test</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.scala-lang.modules</groupId>
 
 Review comment:
   Is this an artifact of wrong dependency management in the `flink-table` 
module?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659604#comment-16659604
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227070839
 
 

 ##
 File path: flink-formats/flink-json/pom.xml
 ##
 @@ -53,8 +53,7 @@ under the License.
 

			<groupId>org.apache.flink</groupId>
-			<artifactId>…</artifactId>
 
 Review comment:
   Same here, see above.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659603#comment-16659603
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227070724
 
 

 ##
 File path: flink-formats/flink-avro/pom.xml
 ##
 @@ -64,8 +64,7 @@ under the License.
 

			<groupId>org.apache.flink</groupId>
-			<artifactId>…</artifactId>
 
 Review comment:
   I think this is dangerous, because it means that the artifacts we release 
have an undefined Scala version - effectively whatever version we uploaded 
first (or last?).
   
   That means `flink-avro` 1.7 depends on Scala 2.11, `flink-avro` 1.8 might 
depend on Scala 2.12, and `flink-avro` 1.9 might depend again on Scala 2.11.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227070724
 
 

 ##
 File path: flink-formats/flink-avro/pom.xml
 ##
 @@ -64,8 +64,7 @@ under the License.
 

			<groupId>org.apache.flink</groupId>
-			<artifactId>…</artifactId>
 
 Review comment:
   I think this is dangerous, because it means that the artifacts we release 
have an undefined Scala version - effectively whatever version we uploaded 
first (or last?).
   
   That means `flink-avro` 1.7 depends on Scala 2.11, `flink-avro` 1.8 might 
depend on Scala 2.12, and `flink-avro` 1.9 might depend again on Scala 2.11.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227070839
 
 

 ##
 File path: flink-formats/flink-json/pom.xml
 ##
 @@ -53,8 +53,7 @@ under the License.
 

			<groupId>org.apache.flink</groupId>
-			<artifactId>…</artifactId>
 
 Review comment:
   Same here, see above.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659591#comment-16659591
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227059104
 
 

 ##
 File path: 
flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala
 ##
 @@ -169,13 +165,10 @@ private[flink] class HalfUnfinishedKeyPairOperation[L, 
R, O](
* the finished operation.
*/
   def equalTo[K: TypeInformation](fun: KeySelector[R, K]): O = {
+
 val keyType = implicitly[TypeInformation[K]]
-val keyExtractor = new KeySelector[R, K] {
-  val cleanFun = unfinished.leftInput.clean(fun)
-  def getKey(in: R) = cleanFun.getKey(in)
-}
 val rightKey = new Keys.SelectorFunctionKeys[R, K](
-  keyExtractor,
+  unfinished.leftInput.clean(fun),
 
 Review comment:
   Should this be `unfinished.rightInput.clean(fun)`?
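   If the answer is yes, the corrected method would look roughly like this (a
   sketch assembled from the surrounding diff; the compatibility check and the
   `finish` call appear further down in the same file):

	def equalTo[K: TypeInformation](fun: KeySelector[R, K]): O = {
	  val keyType = implicitly[TypeInformation[K]]
	  val rightKey = new Keys.SelectorFunctionKeys[R, K](
	    unfinished.rightInput.clean(fun),  // clean against the right input
	    unfinished.rightInput.getType,
	    keyType)
	  if (!leftKey.areCompatible(rightKey)) {
	    throw new InvalidProgramException(
	      "The types of the key fields do not match. Left: " +
	        leftKey + " Right: " + rightKey)
	  }
	  unfinished.finish(leftKey, rightKey)
	}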


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10251) Handle oversized response messages in AkkaRpcActor

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659586#comment-16659586
 ] 

ASF GitHub Bot commented on FLINK-10251:


yanghua removed a comment on issue #6876: [FLINK-10251] Handle oversized 
response messages in AkkaRpcActor
URL: https://github.com/apache/flink/pull/6876#issuecomment-431755708
 
 
   Thank you @tillrohrmann , I think I misunderstood your previous meaning 
about serialization results (I thought all the results were serialized). I have 
fixed it locally, but the [Github service has 
failed](https://status.github.com/messages) (at least in China), and once the 
Github service is normal, I will push my changes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Handle oversized response messages in AkkaRpcActor
> --
>
> Key: FLINK-10251
> URL: https://issues.apache.org/jira/browse/FLINK-10251
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.5.3, 1.6.0, 1.7.0
>Reporter: Till Rohrmann
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.5.6, 1.6.3, 1.7.0
>
>
> The {{AkkaRpcActor}} should check that an RPC response sent to a remote 
> sender does not exceed the maximum framesize of the underlying 
> {{ActorSystem}}. If it does, we should fail fast instead. We can achieve 
> this by serializing the response and sending the serialized byte array.
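A rough sketch of the fail-fast check described above; every name below
(`serialize`, `maximumFramesize`, the exception choice) is an assumption for
illustration, not the actual AkkaRpcActor code:

	// Serialize the response up front so its size can be measured.
	byte[] serializedResponse = serialize(response);  // assumed helper
	if (serializedResponse.length > maximumFramesize) {
		// Fail fast instead of letting the transport silently drop
		// the oversized message.
		throw new IOException("RPC response exceeds the maximum framesize: "
			+ serializedResponse.length + " > " + maximumFramesize + " bytes");
	}
	// Otherwise ship the already-serialized bytes to the remote sender.
	sender.tell(serializedResponse, getSelf());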



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r227059104
 
 

 ##
 File path: 
flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala
 ##
 @@ -169,13 +165,10 @@ private[flink] class HalfUnfinishedKeyPairOperation[L, 
R, O](
* the finished operation.
*/
   def equalTo[K: TypeInformation](fun: KeySelector[R, K]): O = {
+
 val keyType = implicitly[TypeInformation[K]]
-val keyExtractor = new KeySelector[R, K] {
-  val cleanFun = unfinished.leftInput.clean(fun)
-  def getKey(in: R) = cleanFun.getKey(in)
-}
 val rightKey = new Keys.SelectorFunctionKeys[R, K](
-  keyExtractor,
+  unfinished.leftInput.clean(fun),
 
 Review comment:
   Should this be `unfinished.rightInput.clean(fun)`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yanghua removed a comment on issue #6876: [FLINK-10251] Handle oversized response messages in AkkaRpcActor

2018-10-22 Thread GitBox
yanghua removed a comment on issue #6876: [FLINK-10251] Handle oversized 
response messages in AkkaRpcActor
URL: https://github.com/apache/flink/pull/6876#issuecomment-431755708
 
 
   Thank you @tillrohrmann , I think I misunderstood your previous meaning 
about serialization results (I thought all the results were serialized). I have 
fixed it locally, but the [Github service has 
failed](https://status.github.com/messages) (at least in China), and once the 
Github service is normal, I will push my changes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9970) Add ASCII/CHR function for table/sql API

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659580#comment-16659580
 ] 

ASF GitHub Bot commented on FLINK-9970:
---

yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add 
ASCII/CHR function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r227055799
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -266,6 +268,21 @@ object ScalarFunctions {
 regexp_extract(str, regex, 0)
   }
 
+  /**
+* Returns the ASCII code value of the leftmost character of the string str.
+*/
+  def ascii(str: String): Integer = {
+if (str == null || str.equals("")) {
+  0
+} else {
+  if (CharMatcher.ASCII.matches(str.charAt(0))) {
+str.charAt(0).toByte.toInt
+  } else {
+0
 
 Review comment:
   @xccui sorry for sending many duplicated comments; the GitHub service was 
not available yesterday.
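   For reference while the thread settles the non-ASCII and empty-string
   question: a sketch of MySQL-leaning semantics (NULL in, NULL out; empty
   string gives 0), which differs from the PR's 0-for-null behavior.
   Illustrative only, not the PR's final code:

	def ascii(str: String): Integer = {
	  if (str == null) {
	    null                 // MySQL: ASCII(NULL) -> NULL
	  } else if (str.isEmpty) {
	    0                    // MySQL: ASCII('') -> 0
	  } else {
	    val c = str.charAt(0)
	    if (c < 128) c.toInt else 0  // keep the PR's 0 for non-ASCII chars
	  }
	}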


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add ASCII/CHR function for table/sql API
> 
>
> Key: FLINK-9970
> URL: https://issues.apache.org/jira/browse/FLINK-9970
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> For the ASCII function, 
> refer to: 
> [https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_ascii]
> For the CHR function, which converts an ASCII code to a character, 
> refer to: [https://doc.ispirer.com/sqlways/Output/SQLWays-1-071.html]
> Since "CHAR" is a keyword in many databases, we use the "CHR" 
> keyword.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9970) Add ASCII/CHR function for table/sql API

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659575#comment-16659575
 ] 

ASF GitHub Bot commented on FLINK-9970:
---

yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add 
ASCII/CHR function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r226888739
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -266,6 +268,21 @@ object ScalarFunctions {
 regexp_extract(str, regex, 0)
   }
 
+  /**
+* Returns the ASCII code value of the leftmost character of the string str.
+*/
+  def ascii(str: String): Integer = {
+if (str == null || str.equals("")) {
+  0
+} else {
+  if (CharMatcher.ASCII.matches(str.charAt(0))) {
+str.charAt(0).toByte.toInt
+  } else {
+0
 
 Review comment:
   hi @xccui What do you think about this problem?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add ASCII/CHR function for table/sql API
> 
>
> Key: FLINK-9970
> URL: https://issues.apache.org/jira/browse/FLINK-9970
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> For the ASCII function, 
> refer to: 
> [https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_ascii]
> For the CHR function, which converts an ASCII code to a character, 
> refer to: [https://doc.ispirer.com/sqlways/Output/SQLWays-1-071.html]
> Since "CHAR" is a keyword in many databases, we use the "CHR" 
> keyword.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9970) Add ASCII/CHR function for table/sql API

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659577#comment-16659577
 ] 

ASF GitHub Bot commented on FLINK-9970:
---

yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add 
ASCII/CHR function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r226883965
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -266,6 +268,21 @@ object ScalarFunctions {
 regexp_extract(str, regex, 0)
   }
 
+  /**
+* Returns the ASCII code value of the leftmost character of the string str.
+*/
+  def ascii(str: String): Integer = {
+if (str == null || str.equals("")) {
+  0
+} else {
+  if (CharMatcher.ASCII.matches(str.charAt(0))) {
+str.charAt(0).toByte.toInt
+  } else {
+0
 
 Review comment:
   @xccui  What do you think about this issue?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add ASCII/CHR function for table/sql API
> 
>
> Key: FLINK-9970
> URL: https://issues.apache.org/jira/browse/FLINK-9970
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> For the ASCII function, 
> refer to: 
> [https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_ascii]
> For the CHR function, which converts an ASCII code to a character, 
> refer to: [https://doc.ispirer.com/sqlways/Output/SQLWays-1-071.html]
> Since "CHAR" is a keyword in many databases, we use the "CHR" 
> keyword.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9970) Add ASCII/CHR function for table/sql API

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659578#comment-16659578
 ] 

ASF GitHub Bot commented on FLINK-9970:
---

yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add 
ASCII/CHR function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r226883946
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -266,6 +268,21 @@ object ScalarFunctions {
 regexp_extract(str, regex, 0)
   }
 
+  /**
+* Returns the ASCII code value of the leftmost character of the string str.
+*/
+  def ascii(str: String): Integer = {
+if (str == null || str.equals("")) {
+  0
+} else {
+  if (CharMatcher.ASCII.matches(str.charAt(0))) {
+str.charAt(0).toByte.toInt
+  } else {
+0
 
 Review comment:
   @xccui  What do you think about this issue?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add ASCII/CHR function for table/sql API
> 
>
> Key: FLINK-9970
> URL: https://issues.apache.org/jira/browse/FLINK-9970
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> For the ASCII function, 
> refer to: 
> [https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_ascii]
> For the CHR function, which converts an ASCII code to a character, 
> refer to: [https://doc.ispirer.com/sqlways/Output/SQLWays-1-071.html]
> Since "CHAR" is a keyword in many databases, we use the "CHR" 
> keyword.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9970) Add ASCII/CHR function for table/sql API

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659576#comment-16659576
 ] 

ASF GitHub Bot commented on FLINK-9970:
---

yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add 
ASCII/CHR function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r226888630
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -266,6 +268,21 @@ object ScalarFunctions {
 regexp_extract(str, regex, 0)
   }
 
+  /**
+* Returns the ASCII code value of the leftmost character of the string str.
+*/
+  def ascii(str: String): Integer = {
+if (str == null || str.equals("")) {
+  0
+} else {
+  if (CharMatcher.ASCII.matches(str.charAt(0))) {
+str.charAt(0).toByte.toInt
+  } else {
+0
 
 Review comment:
   hi @xccui What do you think about this problem?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add ASCII/CHR function for table/sql API
> 
>
> Key: FLINK-9970
> URL: https://issues.apache.org/jira/browse/FLINK-9970
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> For the ASCII function, 
> refer to: 
> [https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_ascii]
> For the CHR function, which converts an ASCII code to a character, 
> refer to: [https://doc.ispirer.com/sqlways/Output/SQLWays-1-071.html]
> Since "CHAR" is a keyword in many databases, we use the "CHR" 
> keyword.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add ASCII/CHR function for table/sql API

2018-10-22 Thread GitBox
yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add 
ASCII/CHR function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r227055799
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -266,6 +268,21 @@ object ScalarFunctions {
 regexp_extract(str, regex, 0)
   }
 
+  /**
+* Returns the ASCII code value of the leftmost character of the string str.
+*/
+  def ascii(str: String): Integer = {
+if (str == null || str.equals("")) {
+  0
+} else {
+  if (CharMatcher.ASCII.matches(str.charAt(0))) {
+str.charAt(0).toByte.toInt
+  } else {
+0
 
 Review comment:
   @xccui sorry for sending many duplicated comments; the GitHub service was 
not available yesterday.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add ASCII/CHR function for table/sql API

2018-10-22 Thread GitBox
yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add 
ASCII/CHR function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r226888630
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -266,6 +268,21 @@ object ScalarFunctions {
 regexp_extract(str, regex, 0)
   }
 
+  /**
+* Returns the ASCII code value of the leftmost character of the string str.
+*/
+  def ascii(str: String): Integer = {
+if (str == null || str.equals("")) {
+  0
+} else {
+  if (CharMatcher.ASCII.matches(str.charAt(0))) {
+str.charAt(0).toByte.toInt
+  } else {
+0
 
 Review comment:
   hi @xccui What do you think about this problem?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add ASCII/CHR function for table/sql API

2018-10-22 Thread GitBox
yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add 
ASCII/CHR function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r226883965
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -266,6 +268,21 @@ object ScalarFunctions {
 regexp_extract(str, regex, 0)
   }
 
+  /**
+* Returns the ASCII code value of the leftmost character of the string str.
+*/
+  def ascii(str: String): Integer = {
+if (str == null || str.equals("")) {
+  0
+} else {
+  if (CharMatcher.ASCII.matches(str.charAt(0))) {
+str.charAt(0).toByte.toInt
+  } else {
+0
 
 Review comment:
   @xccui  What do you think about this issue?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add ASCII/CHR function for table/sql API

2018-10-22 Thread GitBox
yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add 
ASCII/CHR function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r226888739
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -266,6 +268,21 @@ object ScalarFunctions {
 regexp_extract(str, regex, 0)
   }
 
+  /**
+* Returns the ASCII code value of the leftmost character of the string str.
+*/
+  def ascii(str: String): Integer = {
+if (str == null || str.equals("")) {
+  0
+} else {
+  if (CharMatcher.ASCII.matches(str.charAt(0))) {
+str.charAt(0).toByte.toInt
+  } else {
+0
 
 Review comment:
   hi @xccui What do you think about this problem?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add ASCII/CHR function for table/sql API

2018-10-22 Thread GitBox
yanghua commented on a change in pull request #6432: [FLINK-9970] [table] Add 
ASCII/CHR function for table/sql API
URL: https://github.com/apache/flink/pull/6432#discussion_r226883946
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ##
 @@ -266,6 +268,21 @@ object ScalarFunctions {
 regexp_extract(str, regex, 0)
   }
 
+  /**
+* Returns the ASCII code value of the leftmost character of the string str.
+*/
+  def ascii(str: String): Integer = {
+if (str == null || str.equals("")) {
+  0
+} else {
+  if (CharMatcher.ASCII.matches(str.charAt(0))) {
+str.charAt(0).toByte.toInt
+  } else {
+0
 
 Review comment:
   @xccui  What do you think about this issue?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659571#comment-16659571
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226955806
 
 

 ##
 File path: 
flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/KNN.scala
 ##
 @@ -227,32 +227,35 @@ object KNN {
 
 // join input and training set
 val crossed = crossTuned.mapPartition {
-  (iter, out: Collector[(FlinkVector, FlinkVector, Long, Double)]) 
=> {
 
 Review comment:
   Minor: Can this be fixed without changing the indentation level of the 
function body?
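   One way to satisfy both Scala 2.12's function-literal rules and this ask
   would be to drop only the braces around the closure body, e.g. (a sketch,
   with the body elided):

	val crossed = crossTuned.mapPartition {
	  (iter, out: Collector[(FlinkVector, FlinkVector, Long, Double)]) =>
	    for ((training, testing) <- iter) {
	      // ... existing body, indentation unchanged ...
	    }
	}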


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659569#comment-16659569
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226954765
 
 

 ##
 File path: 
flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/DataStream.scala
 ##
 @@ -414,6 +414,22 @@ class DataStream[T](stream: JavaStream[T]) {
 asScalaStream(new JavaKeyedStream(stream, keyExtractor, keyType))
   }
 
+  /**
+   * Groups the elements of a DataStream by the given K key to
+   * be used with grouped operators like grouped reduce or grouped 
aggregations.
+   */
+  def keyBy[K: TypeInformation](fun: KeySelector[T, K]): KeyedStream[T, K] = {
+
+val cleanFun = clean(fun)
+val keyType: TypeInformation[K] = implicitly[TypeInformation[K]]
+
+val keyExtractor = new KeySelector[T, K] with ResultTypeQueryable[K] {
 
 Review comment:
   Do we need to wrap this in a `ResultTypeQueryable` if we explicitly pass the 
key type info below?
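   If the explicitly passed type info is sufficient, the method could shrink
   to something like this (a sketch; it assumes the `JavaKeyedStream`
   constructor shown above accepts the key `TypeInformation` directly):

	def keyBy[K: TypeInformation](fun: KeySelector[T, K]): KeyedStream[T, K] = {
	  val keyType = implicitly[TypeInformation[K]]
	  asScalaStream(new JavaKeyedStream(stream, clean(fun), keyType))
	}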


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659567#comment-16659567
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226954278
 
 

 ##
 File path: 
flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala
 ##
 @@ -90,6 +90,22 @@ private[flink] abstract class UnfinishedKeyPairOperation[L, 
R, O](
 val leftKey = new Keys.SelectorFunctionKeys[L, K](keyExtractor, 
leftInput.getType, keyType)
 new HalfUnfinishedKeyPairOperation[L, R, O](this, leftKey)
   }
+
+  /**
+   * Specify the key selector function for the left side of the key based 
operation. This returns
+   * a [[HalfUnfinishedKeyPairOperation]] on which `equalTo` must be called to 
specify the
+   * key for the right side. The result after specifying the right side key is 
the finished
+   * operation.
+   */
+  def where[K: TypeInformation](fun: KeySelector[L, K]) = {
+val keyType = implicitly[TypeInformation[K]]
+val keyExtractor = new KeySelector[L, K] {
 
 Review comment:
   This looks like redundant wrapping of `KeySelector` into `KeySelector`. I 
would suggest removing it.
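   With the redundant wrapper removed, the method reduces to roughly the
   following (a sketch; it assumes `Keys.SelectorFunctionKeys` takes the
   user's cleaned `KeySelector` directly, as the constructor call above
   indicates):

	def where[K: TypeInformation](fun: KeySelector[L, K]) = {
	  val keyType = implicitly[TypeInformation[K]]
	  val leftKey = new Keys.SelectorFunctionKeys[L, K](
	    leftInput.clean(fun), leftInput.getType, keyType)
	  new HalfUnfinishedKeyPairOperation[L, R, O](this, leftKey)
	}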


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659568#comment-16659568
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226954357
 
 

 ##
 File path: 
flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala
 ##
 @@ -90,6 +90,22 @@ private[flink] abstract class UnfinishedKeyPairOperation[L, 
R, O](
 val leftKey = new Keys.SelectorFunctionKeys[L, K](keyExtractor, 
leftInput.getType, keyType)
 new HalfUnfinishedKeyPairOperation[L, R, O](this, leftKey)
   }
+
+  /**
+   * Specify the key selector function for the left side of the key based 
operation. This returns
+   * a [[HalfUnfinishedKeyPairOperation]] on which `equalTo` must be called to 
specify the
+   * key for the right side. The result after specifying the right side key is 
the finished
+   * operation.
+   */
+  def where[K: TypeInformation](fun: KeySelector[L, K]) = {
+val keyType = implicitly[TypeInformation[K]]
+val keyExtractor = new KeySelector[L, K] {
 
 Review comment:
   This looks like redundant wrapping of `KeySelector` into `KeySelector`. I 
would suggest removing it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659572#comment-16659572
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226954254
 
 

 ##
 File path: 
flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala
 ##
 @@ -90,6 +90,22 @@ private[flink] abstract class UnfinishedKeyPairOperation[L, 
R, O](
 val leftKey = new Keys.SelectorFunctionKeys[L, K](keyExtractor, 
leftInput.getType, keyType)
 new HalfUnfinishedKeyPairOperation[L, R, O](this, leftKey)
   }
+
+  /**
+   * Specify the key selector function for the left side of the key based 
operation. This returns
+   * a [[HalfUnfinishedKeyPairOperation]] on which `equalTo` must be called to 
specify the
+   * key for the right side. The result after specifying the right side key is 
the finished
+   * operation.
+   */
+  def where[K: TypeInformation](fun: KeySelector[L, K]) = {
+val keyType = implicitly[TypeInformation[K]]
+val keyExtractor = new KeySelector[L, K] {
 
 Review comment:
   This looks like redundant wrapping of `KeySelector` into `KeySelector`. I 
would suggest removing it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659564#comment-16659564
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226956138
 
 

 ##
 File path: 
flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/KNN.scala
 ##
 @@ -227,32 +227,35 @@ object KNN {
 
 // join input and training set
 val crossed = crossTuned.mapPartition {
-  (iter, out: Collector[(FlinkVector, FlinkVector, Long, Double)]) 
=> {
-for ((training, testing) <- iter) {
 
 Review comment:
   Minor: Can this be changed without changing the indentation of the function 
body?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659566#comment-16659566
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226956170
 
 

 ##
 File path: 
flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/KNN.scala
 ##
 @@ -227,32 +227,35 @@ object KNN {
 
 // join input and training set
 val crossed = crossTuned.mapPartition {
-  (iter, out: Collector[(FlinkVector, FlinkVector, Long, Double)]) 
=> {
-for ((training, testing) <- iter) {
 
 Review comment:
   Minor: Can this be changed without changing the indentation of the function 
body?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659570#comment-16659570
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226954521
 
 

 ##
 File path: 
flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala
 ##
 @@ -140,6 +156,28 @@ private[flink] class HalfUnfinishedKeyPairOperation[L, R, 
O](
   keyExtractor,
   unfinished.rightInput.getType,
   keyType)
+
+if (!leftKey.areCompatible(rightKey)) {
+  throw new InvalidProgramException("The types of the key fields do not 
match. Left: " +
+leftKey + " Right: " + rightKey)
+}
+unfinished.finish(leftKey, rightKey)
+  }
+
+  /**
+   * Specify the key selector function for the right side of the key based 
operation. This returns
+   * the finished operation.
+   */
+  def equalTo[K: TypeInformation](fun: KeySelector[R, K]): O = {
+val keyType = implicitly[TypeInformation[K]]
+val keyExtractor = new KeySelector[R, K] {
 
 Review comment:
   Redundant wrapping, see above.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659565#comment-16659565
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226956325
 
 

 ##
 File path: 
flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/KNN.scala
 ##
 @@ -227,32 +227,35 @@ object KNN {
 
 // join input and training set
 val crossed = crossTuned.mapPartition {
-  (iter, out: Collector[(FlinkVector, FlinkVector, Long, Double)]) 
=> {
-for ((training, testing) <- iter) {
 
 Review comment:
   Minor: Can this be changed without changing the indentation of the function 
body?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226954357
 
 

 ##
 File path: 
flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala
 ##
 @@ -90,6 +90,22 @@ private[flink] abstract class UnfinishedKeyPairOperation[L, 
R, O](
 val leftKey = new Keys.SelectorFunctionKeys[L, K](keyExtractor, 
leftInput.getType, keyType)
 new HalfUnfinishedKeyPairOperation[L, R, O](this, leftKey)
   }
+
+  /**
+   * Specify the key selector function for the left side of the key based 
operation. This returns
+   * a [[HalfUnfinishedKeyPairOperation]] on which `equalTo` must be called to 
specify the
+   * key for the right side. The result after specifying the right side key is 
the finished
+   * operation.
+   */
+  def where[K: TypeInformation](fun: KeySelector[L, K]) = {
+val keyType = implicitly[TypeInformation[K]]
+val keyExtractor = new KeySelector[L, K] {
 
 Review comment:
   This looks like redundant wrapping of `KeySelector` into `KeySelector`. I 
would suggest removing it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226954765
 
 

 ##
 File path: 
flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/DataStream.scala
 ##
 @@ -414,6 +414,22 @@ class DataStream[T](stream: JavaStream[T]) {
 asScalaStream(new JavaKeyedStream(stream, keyExtractor, keyType))
   }
 
+  /**
+   * Groups the elements of a DataStream by the given K key to
+   * be used with grouped operators like grouped reduce or grouped 
aggregations.
+   */
+  def keyBy[K: TypeInformation](fun: KeySelector[T, K]): KeyedStream[T, K] = {
+
+val cleanFun = clean(fun)
+val keyType: TypeInformation[K] = implicitly[TypeInformation[K]]
+
+val keyExtractor = new KeySelector[T, K] with ResultTypeQueryable[K] {
 
 Review comment:
   Do we need to wrap this in a `ResultTypeQueryable` if we explicitly pass the 
key type info below?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226954254
 
 

 ##########
 File path: flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala
 ##########
 @@ -90,6 +90,22 @@ private[flink] abstract class UnfinishedKeyPairOperation[L, R, O](
     val leftKey = new Keys.SelectorFunctionKeys[L, K](keyExtractor, leftInput.getType, keyType)
     new HalfUnfinishedKeyPairOperation[L, R, O](this, leftKey)
   }
+
+  /**
+   * Specify the key selector function for the left side of the key based operation. This returns
+   * a [[HalfUnfinishedKeyPairOperation]] on which `equalTo` must be called to specify the
+   * key for the right side. The result after specifying the right side key is the finished
+   * operation.
+   */
+  def where[K: TypeInformation](fun: KeySelector[L, K]) = {
+    val keyType = implicitly[TypeInformation[K]]
+    val keyExtractor = new KeySelector[L, K] {
 
 Review comment:
   This looks like redundant wrapping of `KeySelector` into `KeySelector`. I 
would suggest to remove that.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226956325
 
 

 ##########
 File path: flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/KNN.scala
 ##########
 @@ -227,32 +227,35 @@ object KNN {
 
     // join input and training set
     val crossed = crossTuned.mapPartition {
-      (iter, out: Collector[(FlinkVector, FlinkVector, Long, Double)]) => {
-        for ((training, testing) <- iter) {
 
 Review comment:
   Minor: Can this be changed without changing the indentation of the function 
body?
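
   For illustration, a generic sketch of the minimal change being asked for
   (toy higher-order function, not the KNN code): drop only the redundant brace
   pair around the lambda body and leave the body's indentation untouched, so
   the diff stays small.

   ```
   object BraceRemovalSketch {
     def process(f: (Iterator[Int], StringBuilder) => Unit): String = {
       val sb = new StringBuilder
       f(Iterator(1, 2, 3), sb)
       sb.toString
     }

     // before: an extra `{ ... }` around the lambda body
     val verbose: String = process { (iter, out) => {
         iter.foreach(out.append(_))
       }
     }

     // after: only the braces removed; the body keeps its indentation
     val concise: String = process { (iter, out) =>
       iter.foreach(out.append(_))
     }
   }
   ```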


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226956138
 
 

 ##########
 File path: flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/KNN.scala
 ##########
 @@ -227,32 +227,35 @@ object KNN {
 
     // join input and training set
     val crossed = crossTuned.mapPartition {
-      (iter, out: Collector[(FlinkVector, FlinkVector, Long, Double)]) => {
-        for ((training, testing) <- iter) {
 
 Review comment:
   Minor: Can this be changed without changing the indentation of the function 
body?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226956170
 
 

 ##########
 File path: flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/KNN.scala
 ##########
 @@ -227,32 +227,35 @@ object KNN {
 
     // join input and training set
     val crossed = crossTuned.mapPartition {
-      (iter, out: Collector[(FlinkVector, FlinkVector, Long, Double)]) => {
-        for ((training, testing) <- iter) {
 
 Review comment:
   Minor: Can this be changed without changing the indentation of the function 
body?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226954521
 
 

 ##########
 File path: flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala
 ##########
 @@ -140,6 +156,28 @@ private[flink] class HalfUnfinishedKeyPairOperation[L, R, O](
       keyExtractor,
       unfinished.rightInput.getType,
       keyType)
+
+    if (!leftKey.areCompatible(rightKey)) {
+      throw new InvalidProgramException("The types of the key fields do not match. Left: " +
+        leftKey + " Right: " + rightKey)
+    }
+    unfinished.finish(leftKey, rightKey)
+  }
+
+  /**
+   * Specify the key selector function for the right side of the key based operation. This returns
+   * the finished operation.
+   */
+  def equalTo[K: TypeInformation](fun: KeySelector[R, K]): O = {
+    val keyType = implicitly[TypeInformation[K]]
+    val keyExtractor = new KeySelector[R, K] {
 
 Review comment:
   Redundant wrapping, see above.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226954278
 
 

 ##########
 File path: flink-scala/src/main/scala/org/apache/flink/api/scala/unfinishedKeyPairOperation.scala
 ##########
 @@ -90,6 +90,22 @@ private[flink] abstract class UnfinishedKeyPairOperation[L, R, O](
     val leftKey = new Keys.SelectorFunctionKeys[L, K](keyExtractor, leftInput.getType, keyType)
     new HalfUnfinishedKeyPairOperation[L, R, O](this, leftKey)
   }
+
+  /**
+   * Specify the key selector function for the left side of the key based operation. This returns
+   * a [[HalfUnfinishedKeyPairOperation]] on which `equalTo` must be called to specify the
+   * key for the right side. The result after specifying the right side key is the finished
+   * operation.
+   */
+  def where[K: TypeInformation](fun: KeySelector[L, K]) = {
+    val keyType = implicitly[TypeInformation[K]]
+    val keyExtractor = new KeySelector[L, K] {
 
 Review comment:
   This looks like redundant wrapping of `KeySelector` into `KeySelector`. I 
would suggest to remove that.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add support for Scala 2.12

2018-10-22 Thread GitBox
StephanEwen commented on a change in pull request #6784: [FLINK-7811] Add 
support for Scala 2.12
URL: https://github.com/apache/flink/pull/6784#discussion_r226955806
 
 

 ##########
 File path: flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/KNN.scala
 ##########
 @@ -227,32 +227,35 @@ object KNN {
 
     // join input and training set
     val crossed = crossTuned.mapPartition {
-      (iter, out: Collector[(FlinkVector, FlinkVector, Long, Double)]) => {
 
 Review comment:
   Minor: Can this be fixed without changing the indentation level of the 
function body?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10603) Reduce kafka test duration

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659559#comment-16659559
 ] 

ASF GitHub Bot commented on FLINK-10603:


yanghua commented on issue #6890: [FLINK-10603] Reduce kafka test duration
URL: https://github.com/apache/flink/pull/6890#issuecomment-431892438
 
 
   Hi @zentol, this is strange: I cannot find the test duration report for the 
modern Kafka connector, although I did find the reports for the other 
connectors. Compared with 0.11, do the durations look normal?
   
   ```
   04:30:26.649 [INFO] flink-connector-kafka-base ......... SUCCESS [ 10.444 s]
   04:30:26.649 [INFO] flink-connector-kafka-0.8 .......... SUCCESS [03:08 min]
   04:30:26.649 [INFO] flink-connector-kafka-0.9 .......... SUCCESS [03:48 min]
   04:30:26.649 [INFO] flink-connector-kafka-0.10 ......... SUCCESS [03:18 min]
   04:30:26.649 [INFO] flink-connector-kafka-0.11 ......... SUCCESS [10:12 min]
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Reduce kafka test duration
> --
>
> Key: FLINK-10603
> URL: https://issues.apache.org/jira/browse/FLINK-10603
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kafka Connector, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> The tests for the modern Kafka connector take more than 10 minutes, which is 
> simply unacceptable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

