[GitHub] [flink] flinkbot edited a comment on issue #9711: [FLINK-14033][yarn] upload user artifacts for yarn job cluster

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #9711: [FLINK-14033][yarn] upload user 
artifacts for yarn job cluster
URL: https://github.com/apache/flink/pull/9711#issuecomment-532971678
 
 
   
   ## CI report:
   
   * 2011905b5abe4cb332a60bc3f70378c777482924 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/128281677)
   * 8f3ec4639c0c16591302cdd2a5b294d357903a22 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/128282747)
   * ac617dc8927f276c8906f62c469b68ef87db4a26 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134807959)
   * 236ef95780e19ae8cf1aca5e4a6bc0d562507d02 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138105277)
   * 1b0966fb37f89546dafd72e8c44303e118eddcb8 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138524589)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14970) Doomed test for equality to NaN

2019-11-27 Thread Dezhi Cai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984210#comment-16984210
 ] 

Dezhi Cai commented on FLINK-14970:
---

cc [~jark] for awareness

> Doomed test for equality to NaN
> ---
>
> Key: FLINK-14970
> URL: https://issues.apache.org/jira/browse/FLINK-14970
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner, Table SQL / Planner
>Affects Versions: 1.8.2, 1.9.0, 1.9.1
>Reporter: Dezhi Cai
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-11-27-16-51-00-150.png, 
> image-2019-11-27-16-51-06-801.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Scala pattern matching can't handle Float.NaN, Float.NegativeInfinity, etc.
> In GenerateUtils and CodeGenerator, some code paths fall into this issue;
> please refer to the screenshots for details.
> {code:java}
> def main(args: Array[String]): Unit = {
>   val floatValue = Float.NaN
>   floatValue match {
>     case Float.NaN => println("Float.NaN")
>     case Float.NegativeInfinity => println("Float.NegativeInfinity")
>     case _ => println("not match")
>   }
> }
> will output: not match
> {code}
> {code:java}
> // this one works
> def main(args: Array[String]): Unit = {
>   val floatValue = Float.NaN
>   floatValue match {
>     case value if value.isNaN => println("Float.NaN")
>     case value if value.isNegInfinity => println("Float.NegativeInfinity")
>     case _ => println("not match")
>   }
> }
> will output: Float.NaN
> {code}
>  
>  
> org.apache.flink.table.planner.codegen.GenerateUtils
> !image-2019-11-27-16-51-06-801.png|width=723,height=257!
> org.apache.flink.table.codegen.CodeGenerator
> !image-2019-11-27-16-51-00-150.png|width=727,height=158!
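The root cause is the IEEE 754 rule that NaN compares unequal to everything, including itself; Scala compiles `case Float.NaN =>` into an equality check, which therefore never matches. A minimal Java sketch of the underlying comparison semantics (class and method names are illustrative, not from the Flink codebase):

```java
public class NaNSemantics {
    // An equality comparison against NaN is always false (IEEE 754),
    // which is what a Scala `case Float.NaN =>` pattern boils down to.
    static boolean matchesByEquality(float f) {
        return f == Float.NaN;
    }

    // The reliable check: Float.isNaN, which is what value.isNaN
    // delegates to in Scala.
    static boolean matchesByIsNaN(float f) {
        return Float.isNaN(f);
    }

    public static void main(String[] args) {
        System.out.println(matchesByEquality(Float.NaN)); // false: NaN != NaN
        System.out.println(matchesByIsNaN(Float.NaN));    // true
    }
}
```

The same reasoning applies to generated code: emitting `x == Float.NaN` is always false, so the guard form (`isNaN`) is the only correct test.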



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14970) Doomed test for equality to NaN

2019-11-27 Thread Dezhi Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dezhi Cai updated FLINK-14970:
--
Description: 
Scala pattern matching can't handle Float.NaN, Float.NegativeInfinity, etc.

In GenerateUtils and CodeGenerator, some code paths fall into this issue;
please refer to the screenshots for details.
{code:java}
def main(args: Array[String]): Unit = {
  val floatValue = Float.NaN
  floatValue match {
    case Float.NaN => println("Float.NaN")
    case Float.NegativeInfinity => println("Float.NegativeInfinity")
    case _ => println("not match")
  }
}

will output: not match

{code}
{code:java}

// this one works
def main(args: Array[String]): Unit = {
  val floatValue = Float.NaN
  floatValue match {
    case value if value.isNaN => println("Float.NaN")
    case value if value.isNegInfinity => println("Float.NegativeInfinity")
    case _ => println("not match")
  }
}
will output: Float.NaN
{code}
 

 

org.apache.flink.table.planner.codegen.GenerateUtils

!image-2019-11-27-16-51-06-801.png|width=723,height=257!

org.apache.flink.table.codegen.CodeGenerator

!image-2019-11-27-16-51-00-150.png|width=727,height=158!

  was:
scala pattern matching can't handle "NaN","NegativeInfinity " etc.

in GenerateUtils, CodeGenerator, some code logic fall into this issue, please 
refer to

the screenshot for details.
{code:java}
def main(args: Array[String]): Unit = {
  val floatVaue = Float.NaN
  floatVaue match {
case Float.NaN => println("Float.NaN")
case Float.NegativeInfinity => println("Float.NegativeInfinity")
case _ => println("not match")
  }
}

will output: not match

{code}
 

org.apache.flink.table.planner.codegen.GenerateUtils

!image-2019-11-27-16-51-06-801.png|width=723,height=257!

org.apache.flink.table.codegen.CodeGenerator

!image-2019-11-27-16-51-00-150.png|width=727,height=158!


> Doomed test for equality to NaN
> ---
>
> Key: FLINK-14970
> URL: https://issues.apache.org/jira/browse/FLINK-14970
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner, Table SQL / Planner
>Affects Versions: 1.8.2, 1.9.0, 1.9.1
>Reporter: Dezhi Cai
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-11-27-16-51-00-150.png, 
> image-2019-11-27-16-51-06-801.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Scala pattern matching can't handle Float.NaN, Float.NegativeInfinity, etc.
> In GenerateUtils and CodeGenerator, some code paths fall into this issue;
> please refer to the screenshots for details.
> {code:java}
> def main(args: Array[String]): Unit = {
>   val floatValue = Float.NaN
>   floatValue match {
>     case Float.NaN => println("Float.NaN")
>     case Float.NegativeInfinity => println("Float.NegativeInfinity")
>     case _ => println("not match")
>   }
> }
> will output: not match
> {code}
> {code:java}
> // this one works
> def main(args: Array[String]): Unit = {
>   val floatValue = Float.NaN
>   floatValue match {
>     case value if value.isNaN => println("Float.NaN")
>     case value if value.isNegInfinity => println("Float.NegativeInfinity")
>     case _ => println("not match")
>   }
> }
> will output: Float.NaN
> {code}
>  
>  
> org.apache.flink.table.planner.codegen.GenerateUtils
> !image-2019-11-27-16-51-06-801.png|width=723,height=257!
> org.apache.flink.table.codegen.CodeGenerator
> !image-2019-11-27-16-51-00-150.png|width=727,height=158!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14970) Doomed test for equality to NaN

2019-11-27 Thread Dezhi Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dezhi Cai updated FLINK-14970:
--
Description: 
Scala pattern matching can't handle Float.NaN, Float.NegativeInfinity, etc.

In GenerateUtils and CodeGenerator, some code paths fall into this issue;
please refer to the screenshots for details.
{code:java}
def main(args: Array[String]): Unit = {
  val floatValue = Float.NaN
  floatValue match {
    case Float.NaN => println("Float.NaN")
    case Float.NegativeInfinity => println("Float.NegativeInfinity")
    case _ => println("not match")
  }
}

will output: not match

{code}
 

org.apache.flink.table.planner.codegen.GenerateUtils

!image-2019-11-27-16-51-06-801.png|width=723,height=257!

org.apache.flink.table.codegen.CodeGenerator

!image-2019-11-27-16-51-00-150.png|width=727,height=158!

  was:
scala pattern matching can't handle "NaN","NegativeInfinity " etc.

in GenerateUtils, CodeGenerator, some code logic fall into this issue, please 
refer to

the screenshot for details.
{code:java}
def main(args: Array[String]): Unit = {
  val floatVaue = Float.NaN
  floatVaue match {
case Float.NaN => println("Float.NaN")
case Float.NegativeInfinity => println("Float.NegativeInfinity")
case _ => println("not match")
  }
}

will output: not match{code}
 

org.apache.flink.table.planner.codegen.GenerateUtils

!image-2019-11-27-16-51-06-801.png|width=723,height=257!

org.apache.flink.table.codegen.CodeGenerator

!image-2019-11-27-16-51-00-150.png|width=727,height=158!


> Doomed test for equality to NaN
> ---
>
> Key: FLINK-14970
> URL: https://issues.apache.org/jira/browse/FLINK-14970
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner, Table SQL / Planner
>Affects Versions: 1.8.2, 1.9.0, 1.9.1
>Reporter: Dezhi Cai
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-11-27-16-51-00-150.png, 
> image-2019-11-27-16-51-06-801.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Scala pattern matching can't handle Float.NaN, Float.NegativeInfinity, etc.
> In GenerateUtils and CodeGenerator, some code paths fall into this issue;
> please refer to the screenshots for details.
> {code:java}
> def main(args: Array[String]): Unit = {
>   val floatValue = Float.NaN
>   floatValue match {
>     case Float.NaN => println("Float.NaN")
>     case Float.NegativeInfinity => println("Float.NegativeInfinity")
>     case _ => println("not match")
>   }
> }
> will output: not match
> {code}
>  
> org.apache.flink.table.planner.codegen.GenerateUtils
> !image-2019-11-27-16-51-06-801.png|width=723,height=257!
> org.apache.flink.table.codegen.CodeGenerator
> !image-2019-11-27-16-51-00-150.png|width=727,height=158!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10344: [FLINK-14264][StateBackend][Rest] Expose state backend in checkpoint rest api

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10344: [FLINK-14264][StateBackend][Rest] 
Expose state backend in checkpoint rest api
URL: https://github.com/apache/flink/pull/10344#issuecomment-559371382
 
 
   
   ## CI report:
   
   * fbd6e8914b49348a275657130ebdd9413d7d17c6 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138524564)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-8949) Rest API failure with long URL

2019-11-27 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-8949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984205#comment-16984205
 ] 

lining commented on FLINK-8949:
---

Would 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[] 
be better? Right now the endpoint is only for watermarks; if a user wants to 
see other metrics, it still does not work.

> Rest API failure with long URL
> --
>
> Key: FLINK-8949
> URL: https://issues.apache.org/jira/browse/FLINK-8949
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.4.2, 1.5.0, 1.6.4, 1.7.2, 1.8.2
>Reporter: Truong Duc Kien
>Assignee: Maximilian Michels
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When you have jobs with high parallelism, the URL for a REST request can get 
> very long. When the URL is longer than 4096 bytes, the  REST API will return 
> error
> {{Failure: 404 Not Found}}
>  This can easily be seen in the Web UI, when Flink queries for the watermark 
> using the REST API:
> {{GET 
> /jobs/:jobId/vertices/:vertexId/metrics?get=0.currentLowWatermark,1.currentLowWatermark,2.currentLo...}}
> The request will fail with more than 170 subtasks and the watermark will not 
> be displayed in the Web UI.
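To see why the failure kicks in around 170 subtasks, the request line can be estimated: each subtask contributes an `i.currentLowWatermark` entry, and Netty's HttpRequestDecoder rejects initial lines over 4096 bytes by default. A rough back-of-the-envelope sketch (the 32-character hex IDs are placeholders mirroring Flink's JobID/JobVertexID string form; the exact threshold depends on those lengths):

```java
public class WatermarkUrlLength {
    // Rebuild the GET path the web UI issues for N subtasks. The job and
    // vertex IDs are placeholder strings of Flink's usual 32-hex-char length.
    static int urlLength(int subtasks) {
        StringBuilder sb = new StringBuilder("/jobs/")
                .append("0".repeat(32))
                .append("/vertices/")
                .append("0".repeat(32))
                .append("/metrics?get=");
        for (int i = 0; i < subtasks; i++) {
            if (i > 0) sb.append(',');
            sb.append(i).append(".currentLowWatermark");
        }
        return sb.length();
    }

    public static void main(String[] args) {
        // With these ID lengths, the 4096-byte default of Netty's
        // maxInitialLineLength is crossed a little above 170 subtasks.
        for (int n : new int[]{150, 170, 175, 200}) {
            System.out.println(n + " subtasks -> " + urlLength(n) + " chars");
        }
    }
}
```

This also shows why batching or a POST-style metrics query sidesteps the limit: the per-subtask metric names, not the path itself, dominate the length.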



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-14470) Watermark display not working with high parallelism job

2019-11-27 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14470:
---
Comment: was deleted

(was: cc [~chesnay])

> Watermark display not working with high parallelism job
> ---
>
> Key: FLINK-14470
> URL: https://issues.apache.org/jira/browse/FLINK-14470
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.8.2, 1.9.1
>Reporter: Thomas Weise
>Assignee: Yadong Xie
>Priority: Major
>
> Watermarks don't display in the UI when the job graph has many vertices. The 
> REST API call to fetch currentInputWatermark fails as it enumerates each 
> subtask, which obviously fails beyond a limit. With the new UI that can be 
> noticed through a flickering error message in the upper right corner.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14470) Watermark display not working with high parallelism job

2019-11-27 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984181#comment-16984181
 ] 

lining edited comment on FLINK-14470 at 11/28/19 7:29 AM:
--

I have tested it: if you request more than 165 watermarks, the call is 
redirected to the URL /bad-request. There isn't any handler for it, so its 
HTTP status is 404. Someone has asked a similar 
[question|https://groups.google.com/forum/#!topic/gatling/9LtKV5sSXxU] in a 
Google group. The 
[doc|https://netty.io/4.0/api/io/netty/handler/codec/http/HttpRequestDecoder.html]
 describes maxInitialLineLength, whose default value is 4096 in the 
[code|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L59].
 * The reason for redirecting to /bad-request is that the URL exceeds 4 KB: in 
HttpRequestDecoder, a request that fails to decode is redirected to 
/bad-request (see 
[code|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L91-L94]).
 * If an HTTP call hits an exception, it should show the error, not redirect 
to /bad-request.
 * Add a new handler for getting a vertex's metrics, such as 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]


was (Author: lining):
I have tested it if you get more than 165 WaterMarks, will redirect to URL 
/bad-request. There isn't any handler for it, so its Http status is 404. Find 
someone who has asked a similar 
[question|https://groups.google.com/forum/#!topic/gatling/9LtKV5sSXxU] in 
GoogleGroup. The 
[doc|https://netty.io/4.0/api/io/netty/handler/codec/http/HttpRequestDecoder.html]
 for maxInitialLineLength and its default value is 4096 in [code.  
|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L59]
 * the reason for redirecting to /bad-request is the URL exceeds 4kb. In 
HttpRequestDecoder, if it decodes invalid will redirect to /bad-request.
 * if Http call has an exception, it needs to show the error, not redirect to 
/bad-request.
 * new handler for getting vertex's metric, such as 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]

> Watermark display not working with high parallelism job
> ---
>
> Key: FLINK-14470
> URL: https://issues.apache.org/jira/browse/FLINK-14470
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.8.2, 1.9.1
>Reporter: Thomas Weise
>Assignee: Yadong Xie
>Priority: Major
>
> Watermarks don't display in the UI when the job graph has many vertices. The 
> REST API call to fetch currentInputWatermark fails as it enumerates each 
> subtask, which obviously fails beyond a limit. With the new UI that can be 
> noticed through a flickering error message in the upper right corner.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14470) Watermark display not working with high parallelism job

2019-11-27 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984181#comment-16984181
 ] 

lining edited comment on FLINK-14470 at 11/28/19 7:28 AM:
--

I have tested it: if you request more than 165 watermarks, the call is 
redirected to the URL /bad-request. There isn't any handler for it, so its 
HTTP status is 404. Someone has asked a similar 
[question|https://groups.google.com/forum/#!topic/gatling/9LtKV5sSXxU] in a 
Google group. The 
[doc|https://netty.io/4.0/api/io/netty/handler/codec/http/HttpRequestDecoder.html]
 describes maxInitialLineLength, whose default value is 4096 in the 
[code|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L59].
 * The reason for redirecting to /bad-request is that the URL exceeds 4 KB.
 * If an HTTP call hits an exception, it should show the error, not redirect 
to /bad-request.
 * Add a new handler for getting a vertex's metrics, such as 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]


was (Author: lining):
I have tested it if you get more than 165 WaterMarks, will redirect to URL 
/bad-request. There isn't any handler for it, so its Http status is 404. Find 
someone who has asked a similar 
[question|https://groups.google.com/forum/#!topic/gatling/9LtKV5sSXxU] in 
GoogleGroup. The 
[doc|https://netty.io/4.0/api/io/netty/handler/codec/http/HttpRequestDecoder.html]
 for maxInitialLineLength and its default value is 4096 in 
[code.|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L59]
 * the reason for redirecting to /bad-request is the URL exceeds 4kb. 
 * if Http call has an exception, it needs to show the error, not redirect to 
/bad-request.
 * new handler for getting vertex's metric, such as 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]

> Watermark display not working with high parallelism job
> ---
>
> Key: FLINK-14470
> URL: https://issues.apache.org/jira/browse/FLINK-14470
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.8.2, 1.9.1
>Reporter: Thomas Weise
>Assignee: Yadong Xie
>Priority: Major
>
> Watermarks don't display in the UI when the job graph has many vertices. The 
> REST API call to fetch currentInputWatermark fails as it enumerates each 
> subtask, which obviously fails beyond a limit. With the new UI that can be 
> noticed through a flickering error message in the upper right corner.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9711: [FLINK-14033][yarn] upload user artifacts for yarn job cluster

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #9711: [FLINK-14033][yarn] upload user 
artifacts for yarn job cluster
URL: https://github.com/apache/flink/pull/9711#issuecomment-532971678
 
 
   
   ## CI report:
   
   * 2011905b5abe4cb332a60bc3f70378c777482924 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/128281677)
   * 8f3ec4639c0c16591302cdd2a5b294d357903a22 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/128282747)
   * ac617dc8927f276c8906f62c469b68ef87db4a26 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134807959)
   * 236ef95780e19ae8cf1aca5e4a6bc0d562507d02 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138105277)
   * 1b0966fb37f89546dafd72e8c44303e118eddcb8 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] HuangZhenQiu commented on issue #9711: [FLINK-14033][yarn] upload user artifacts for yarn job cluster

2019-11-27 Thread GitBox
HuangZhenQiu commented on issue #9711: [FLINK-14033][yarn] upload user 
artifacts for yarn job cluster
URL: https://github.com/apache/flink/pull/9711#issuecomment-559372994
 
 
@aljoscha During debugging, I found another bug introduced by a recent 
change: https://issues.apache.org/jira/browse/FLINK-14982. I would like to fix 
it; please assign it to me after confirming it is a real issue.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-14982) YARN IT Test Case log config is mistakenly disabled

2019-11-27 Thread Zhenqiu Huang (Jira)
Zhenqiu Huang created FLINK-14982:
-

 Summary: YARN IT Test Case log config is mistakenly disabled
 Key: FLINK-14982
 URL: https://issues.apache.org/jira/browse/FLINK-14982
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN
Affects Versions: 1.10.0
Reporter: Zhenqiu Huang


FLINK-14630 (Make the yarn APPLICATION_LOG_CONFIG_FILE an internal option) 
changed how the log config is shipped in YarnClusterDescriptor. We now need to 
rely on yarn.log-config-file in the Flink conf to specify which log file to 
ship, but currently none of the YARN IT test cases have enabled it. This 
causes the IT tests to fail to catch issues by looking into the JM and TM log 
files.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10344: [FLINK-14264][StateBackend][Rest] Expose state backend in checkpoint rest api

2019-11-27 Thread GitBox
flinkbot commented on issue #10344: [FLINK-14264][StateBackend][Rest] Expose 
state backend in checkpoint rest api
URL: https://github.com/apache/flink/pull/10344#issuecomment-559371382
 
 
   
   ## CI report:
   
   * fbd6e8914b49348a275657130ebdd9413d7d17c6 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-14470) Watermark display not working with high parallelism job

2019-11-27 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984181#comment-16984181
 ] 

lining edited comment on FLINK-14470 at 11/28/19 7:15 AM:
--

I have tested it: if you request more than 165 watermarks, the call is 
redirected to the URL /bad-request. There isn't any handler for it, so its 
HTTP status is 404. Someone has asked a similar 
[question|https://groups.google.com/forum/#!topic/gatling/9LtKV5sSXxU] in a 
Google group. The 
[doc|https://netty.io/4.0/api/io/netty/handler/codec/http/HttpRequestDecoder.html]
 describes maxInitialLineLength, whose default value is 4096 in the 
[code|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L59].
 * The reason for redirecting to /bad-request is that the URL exceeds 4 KB.
 * If an HTTP call hits an exception, it should show the error, not redirect 
to /bad-request.
 * Add a new handler for getting a vertex's metrics, such as 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]


was (Author: lining):
I have tested it if you get more than 165 WaterMarks, will redirect to URL 
/bad-request. There isn't any handler for it, so its Http status is 404. Find 
someone who has asked it in GoogleGroup 
[question|https://groups.google.com/forum/#!topic/gatling/9LtKV5sSXxU]. The 
[doc|https://netty.io/4.0/api/io/netty/handler/codec/http/HttpRequestDecoder.html]
 for maxInitialLineLength and its default value is 4096 in 
[code.|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L59]
 * the reason for redirecting to /bad-request is the URL exceeds 4kb. 
 * if Http call has an exception, it needs to show the error, not redirect to 
/bad-request.
 * new handler for getting vertex's metric, such as 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]

> Watermark display not working with high parallelism job
> ---
>
> Key: FLINK-14470
> URL: https://issues.apache.org/jira/browse/FLINK-14470
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.8.2, 1.9.1
>Reporter: Thomas Weise
>Assignee: Yadong Xie
>Priority: Major
>
> Watermarks don't display in the UI when the job graph has many vertices. The 
> REST API call to fetch currentInputWatermark fails as it enumerates each 
> subtask, which obviously fails beyond a limit. With the new UI that can be 
> noticed through a flickering error message in the upper right corner.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14470) Watermark display not working with high parallelism job

2019-11-27 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984181#comment-16984181
 ] 

lining edited comment on FLINK-14470 at 11/28/19 7:12 AM:
--

I have tested it: if you request more than 165 watermarks, the call is 
redirected to the URL /bad-request. There isn't any handler for it, so its 
HTTP status is 404. Someone has asked it in a Google group 
([question|https://groups.google.com/forum/#!topic/gatling/9LtKV5sSXxU]). The 
[doc|https://netty.io/4.0/api/io/netty/handler/codec/http/HttpRequestDecoder.html]
 describes maxInitialLineLength, whose default value is 4096 in the 
[code|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L59].
 * The reason for redirecting to /bad-request is that the URL exceeds 4 KB.
 * If an HTTP call hits an exception, it should show the error, not redirect 
to /bad-request.
 * Add a new handler for getting a vertex's metrics, such as 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]


was (Author: lining):
I have tested it if you get more than 165 WaterMarks, will redirect to URL 
/bad-request. There isn't any handler for it, so its Http status is 404. Find 
someone who has asked it in GoogleGroup 
[question|https://groups.google.com/forum/#!topic/gatling/9LtKV5sSXxU]. The 
[doc|https://netty.io/4.0/api/io/netty/handler/codec/http/HttpRequestDecoder.html]
 for maxInitialLineLength and its default value is 4096 in 
[code.|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L59]
 * the reason for redirecting to /bad-request is the URL exceeds 4kb. 
 * If Http call has an exception, it needs to show the error, not redirect to 
/bad-request.
 * new handler for getting vertex's metric, such as 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]

> Watermark display not working with high parallelism job
> ---
>
> Key: FLINK-14470
> URL: https://issues.apache.org/jira/browse/FLINK-14470
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.8.2, 1.9.1
>Reporter: Thomas Weise
>Assignee: Yadong Xie
>Priority: Major
>
> Watermarks don't display in the UI when the job graph has many vertices. The 
> REST API call to fetch currentInputWatermark fails as it enumerates each 
> subtask, which obviously fails beyond a limit. With the new UI that can be 
> noticed through a flickering error message in the upper right corner.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] HuangZhenQiu commented on issue #9711: [FLINK-14033][yarn] upload user artifacts for yarn job cluster

2019-11-27 Thread GitBox
HuangZhenQiu commented on issue #9711: [FLINK-14033][yarn] upload user 
artifacts for yarn job cluster
URL: https://github.com/apache/flink/pull/9711#issuecomment-559370853
 
 
@aljoscha Thanks for pointing out the issues in the test case. I rebased onto 
the latest master and removed the unused classes. The distributed cache 
functionality should be covered in the map function. The life cycle of the 
distributed cache files splits into two parts. The local copies in each TM are 
managed as a rolling array in the FileCache class; any time a file not in the 
FileCache is accessed, it is read from remote storage. The remote file is 
handled the same as the other local resource files of a YARN application: it 
lives under the ./flink/application_id folder, and if the user kills the job, 
the folder is deleted.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-14470) Watermark display not working with high parallelism job

2019-11-27 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984181#comment-16984181
 ] 

lining edited comment on FLINK-14470 at 11/28/19 7:12 AM:
--

I have tested it: if you request more than 165 watermarks, the call is 
redirected to the URL /bad-request. There isn't any handler for it, so its 
HTTP status is 404. Someone has asked it in a Google group 
([question|https://groups.google.com/forum/#!topic/gatling/9LtKV5sSXxU]). The 
[doc|https://netty.io/4.0/api/io/netty/handler/codec/http/HttpRequestDecoder.html]
 describes maxInitialLineLength, whose default value is 4096 in the 
[code|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L59].
 * The reason for redirecting to /bad-request is that the URL exceeds 4 KB.
 * If an HTTP call hits an exception, it should show the error, not redirect 
to /bad-request.
 * Add a new handler for getting a vertex's metrics, such as 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]


was (Author: lining):
I have tested it if you get more than 165 WaterMarks, will redirect to URL 
/bad-request. There isn't any handler for it, so its Http status is 404. Find 
someone who has asked it in GoogleGroup 
[question|https://groups.google.com/forum/#!topic/gatling/9LtKV5sSXxU]. The 
[doc|https://netty.io/4.0/api/io/netty/handler/codec/http/HttpRequestDecoder.html]
 for maxInitialLineLength, its default value is 4096 in 
[code|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L59]
 * the reason for redirecting to /bad-request is the url exceeds 4kb. 
 * If Http call has an exception, it needs to show the error, not redirect to 
/bad-request.
 * new handler for getting vertex's metric, such as 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]

> Watermark display not working with high parallelism job
> ---
>
> Key: FLINK-14470
> URL: https://issues.apache.org/jira/browse/FLINK-14470
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.8.2, 1.9.1
>Reporter: Thomas Weise
>Assignee: Yadong Xie
>Priority: Major
>
> Watermarks don't display in the UI when the job graph has many vertices. The 
> REST API call to fetch currentInputWatermark fails as it enumerates each 
> subtask, which obviously fails beyond a limit. With the new UI that can be 
> noticed through a flickering error message in the upper right corner.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14470) Watermark display not working with high parallelism job

2019-11-27 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984181#comment-16984181
 ] 

lining edited comment on FLINK-14470 at 11/28/19 7:11 AM:
--

I have tested it: if you request more than 165 watermarks, the request is redirected to
/bad-request. There isn't any handler for that path, so the HTTP status is 404. Someone
has asked about the same behavior in a Google Group
[question|https://groups.google.com/forum/#!topic/gatling/9LtKV5sSXxU]. The
[doc|https://netty.io/4.0/api/io/netty/handler/codec/http/HttpRequestDecoder.html]
for maxInitialLineLength says its default value is 4096 in the
[code|https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L59].
 * The reason for redirecting to /bad-request is that the URL exceeds 4 KB.
 * If an HTTP call raises an exception, it needs to show the error, not redirect to
/bad-request.
 * Add a new handler for getting a vertex's metrics, such as
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]
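
As a rough illustration of the 4 KB limit described above, the following self-contained sketch (hypothetical job and vertex IDs and a hand-built URL, not Flink's actual request builder) estimates the HTTP request-line length of a metrics query that enumerates a per-subtask `currentInputWatermark` metric for every subtask:

```java
// Sketch with hypothetical IDs: around 165 subtasks, the request line passes
// Netty's default maxInitialLineLength of 4096 bytes, so HttpRequestDecoder
// rejects it and the client ends up at /bad-request.
public class WatermarkUrlLength {

    static int requestLineLength(int parallelism) {
        StringBuilder url = new StringBuilder("/jobs/");
        url.append("f91e72b8c3a44d5ab0a2e1f9d8c7b6a5");      // 32-char job id
        url.append("/vertices/");
        url.append("0a1b2c3d4e5f60718293a4b5c6d7e8f9");      // 32-char vertex id
        url.append("/metrics?get=");
        for (int subtask = 0; subtask < parallelism; subtask++) {
            if (subtask > 0) {
                url.append(',');
            }
            url.append(subtask).append(".currentInputWatermark");
        }
        // HttpRequestDecoder parses "GET <url> HTTP/1.1" as the initial line.
        return "GET ".length() + url.length() + " HTTP/1.1".length();
    }

    public static void main(String[] args) {
        System.out.println(requestLineLength(165) > 4096); // prints: true
    }
}
```

This is why the fix needs either a larger maxInitialLineLength on the server or a request shape that does not grow linearly with parallelism.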


was (Author: lining):
I have tested it: if you request more than 165 watermarks, the request is redirected to
/bad-request. There isn't any handler for that path, so the HTTP status is 404.
*  Find out the reason for redirecting to /bad-request. If an HTTP call raises an
exception, it needs to show the error, not redirect to /bad-request.
* Add a new handler for getting a vertex's metrics, such as
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]

> Watermark display not working with high parallelism job
> ---
>
> Key: FLINK-14470
> URL: https://issues.apache.org/jira/browse/FLINK-14470
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.8.2, 1.9.1
>Reporter: Thomas Weise
>Assignee: Yadong Xie
>Priority: Major
>
> Watermarks don't display in the UI when the job graph has many vertices. The 
> REST API call to fetch currentInputWatermark fails as it enumerates each 
> subtask, which obviously fails beyond a limit. With the new UI that can be 
> noticed through a flickering error message in the upper right corner.





[GitHub] [flink] TisonKun commented on a change in pull request #9973: [FLINK-9955] Add Kubernetes ClusterDescriptor to support deploying session cluster.

2019-11-27 Thread GitBox
TisonKun commented on a change in pull request #9973: [FLINK-9955] Add 
Kubernetes ClusterDescriptor to support deploying session cluster.
URL: https://github.com/apache/flink/pull/9973#discussion_r351617528
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesClusterDescriptor.java
 ##
 @@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes;
+
+import org.apache.flink.client.deployment.ClusterDeploymentException;
+import org.apache.flink.client.deployment.ClusterDescriptor;
+import org.apache.flink.client.deployment.ClusterRetrieveException;
+import org.apache.flink.client.deployment.ClusterSpecification;
+import org.apache.flink.client.program.ClusterClient;
+import org.apache.flink.client.program.rest.RestClusterClient;
+import org.apache.flink.configuration.BlobServerOptions;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.JobManagerOptions;
+import org.apache.flink.configuration.RestOptions;
+import org.apache.flink.configuration.TaskManagerOptions;
+import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
+import org.apache.flink.kubernetes.kubeclient.Endpoint;
+import org.apache.flink.kubernetes.kubeclient.FlinkKubeClient;
+import 
org.apache.flink.kubernetes.kubeclient.decorators.FlinkMasterDeploymentDecorator;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkService;
+import org.apache.flink.kubernetes.utils.Constants;
+import org.apache.flink.runtime.entrypoint.ClusterEntrypoint;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.util.FlinkException;
+import org.apache.flink.util.Preconditions;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nonnull;
+
+/**
+ * Kubernetes specific {@link ClusterDescriptor} implementation.
+ */
+public class KubernetesClusterDescriptor implements ClusterDescriptor {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(KubernetesClusterDescriptor.class);
+
+   private static final String CLUSTER_DESCRIPTION = "Kubernetes cluster";
 
 Review comment:
  It could be a follow-up in which we give a detailed description, as on YARN. Just a
thought, not a requirement for this patch :-) It would be better if we track it as a
JIRA ticket.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on issue #9965: [FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients

2019-11-27 Thread GitBox
TisonKun commented on issue #9965: [FLINK-10935][kubernetes]Implement 
KubeClient with Faric8 Kubernetes clients
URL: https://github.com/apache/flink/pull/9965#issuecomment-559366954
 
 
  Also, you could rebase the branch onto the current master, especially onto the merged
FLINK-10932.




[GitHub] [flink] TisonKun commented on a change in pull request #9965: [FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients

2019-11-27 Thread GitBox
TisonKun commented on a change in pull request #9965: 
[FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients
URL: https://github.com/apache/flink/pull/9965#discussion_r351612292
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/resources/ActionWatcher.java
 ##
 @@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient.resources;
+
+import io.fabric8.kubernetes.api.model.HasMetadata;
+import io.fabric8.kubernetes.client.KubernetesClientException;
+import io.fabric8.kubernetes.client.KubernetesClientTimeoutException;
+import io.fabric8.kubernetes.client.Watcher;
+
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * Watch a specific action.
+ */
+public class ActionWatcher implements Watcher {
+   private final CountDownLatch latch = new CountDownLatch(1);
+   private final AtomicReference reference = new AtomicReference();
 
 Review comment:
  I don't see any atomic actions like compare-and-set or get-and-set in use. Why do
you instantiate an `AtomicReference` here? Wouldn't `volatile` be sufficient?
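
To make the suggestion concrete, here is a minimal sketch (a hypothetical class, not the PR's actual `ActionWatcher`): when the field is only plainly written once before a latch countdown and read after it, a `volatile` field already gives the required visibility, so `AtomicReference` adds nothing without compare-and-set:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical volatile-based variant of a one-shot event watcher.
public class VolatileWatcherDemo {

    static class SimpleActionWatcher<T> {
        private final CountDownLatch latch = new CountDownLatch(1);
        private volatile T received;        // plain write/read only, no CAS

        void eventReceived(T resource) {
            received = resource;            // volatile write happens-before countDown
            latch.countDown();
        }

        T await(long timeout, TimeUnit unit) throws InterruptedException {
            if (!latch.await(timeout, unit)) {
                throw new IllegalStateException("timed out waiting for event");
            }
            return received;                // guaranteed to observe the write above
        }
    }

    public static void main(String[] args) throws Exception {
        SimpleActionWatcher<String> watcher = new SimpleActionWatcher<>();
        new Thread(() -> watcher.eventReceived("ADDED")).start();
        System.out.println(watcher.await(5, TimeUnit.SECONDS)); // prints: ADDED
    }
}
```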




[GitHub] [flink] klion26 commented on a change in pull request #10344: [FLINK-14264][StateBackend][Rest] Expose state backend in checkpoint rest api

2019-11-27 Thread GitBox
klion26 commented on a change in pull request #10344: 
[FLINK-14264][StateBackend][Rest] Expose state backend in checkpoint rest api
URL: https://github.com/apache/flink/pull/10344#discussion_r351615345
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/tasks/CheckpointCoordinatorConfiguration.java
 ##
 @@ -119,6 +123,14 @@ public int getTolerableCheckpointFailureNumber() {
return tolerableCheckpointFailureNumber;
}
 
+   public String getStateBackendName() {
+   return stateBackendName;
+   }
+
+   public void setStateBackendName(@Nonnull String newStateBackend) {
 
 Review comment:
  I added a setter function here instead of passing the value in from the constructor,
because 
[`StreamGraph#getStateBackend()`](https://github.com/apache/flink/blob/f38106582e69765677ef8ebddece3a92641f6f22/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGenerator.java#L895)
 may return null in `StreamingJobGraphGenerator#configureCheckpointing`.




[GitHub] [flink] TisonKun commented on a change in pull request #9965: [FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients

2019-11-27 Thread GitBox
TisonKun commented on a change in pull request #9965: 
[FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients
URL: https://github.com/apache/flink/pull/9965#discussion_r351611401
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/FlinkKubeClient.java
 ##
 @@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient;
+
+import org.apache.flink.client.deployment.ClusterSpecification;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkPod;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkService;
+
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+
+/**
+ * The client to talk with kubernetes.
+ */
+public interface FlinkKubeClient extends AutoCloseable {
+
+   /**
+* Create kubernetes config map, include flink-conf.yaml, 
log4j.properties.
+*/
+   void createConfigMap() throws Exception;
 
 Review comment:
  I notice that some of the "create" functions are blocking while others are
non-blocking. What is the difference between them, and how was the decision made?
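
For reference, the two styles the question contrasts can be sketched like this (hypothetical methods, not the PR's actual API): a blocking create returns only once the resource exists, while a non-blocking create returns a `CompletableFuture` the caller composes or waits on later:

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of blocking vs. non-blocking resource creation.
public class CreateStyles {

    // Blocking style: the caller thread is suspended until creation completes.
    static String createConfigMapBlocking() {
        return "config-map-created";
    }

    // Non-blocking style: creation runs asynchronously; the caller decides
    // when (or whether) to wait for readiness via the returned future.
    static CompletableFuture<String> createServiceAsync() {
        return CompletableFuture.supplyAsync(() -> "service-created");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(createConfigMapBlocking());  // prints: config-map-created
        System.out.println(createServiceAsync().get()); // prints: service-created
    }
}
```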




[GitHub] [flink] klion26 commented on a change in pull request #10344: [FLINK-14264][StateBackend][Rest] Expose state backend in checkpoint rest api

2019-11-27 Thread GitBox
klion26 commented on a change in pull request #10344: 
[FLINK-14264][StateBackend][Rest] Expose state backend in checkpoint rest api
URL: https://github.com/apache/flink/pull/10344#discussion_r351614505
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/job/checkpoints/CheckpointConfigHandler.java
 ##
 @@ -100,13 +100,15 @@ private static CheckpointConfigInfo 
createCheckpointConfigInfo(AccessExecutionGr
retentionPolicy != 
CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION,
retentionPolicy != 
CheckpointRetentionPolicy.RETAIN_ON_CANCELLATION);
 
+   String stateName = 
checkpointCoordinatorConfiguration.getStateBackendName() == null ? 
"TEMPORARY_UNKNOWN" : checkpointCoordinatorConfiguration.getStateBackendName();
 
 Review comment:
  If we query the REST API before the job is running,
`checkpointCoordinatorConfiguration.getStateBackendName()` may return null
(please see the comments above), so we map it to `TEMPORARY_UNKNOWN` here.




[GitHub] [flink] TisonKun commented on a change in pull request #9965: [FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients

2019-11-27 Thread GitBox
TisonKun commented on a change in pull request #9965: 
[FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients
URL: https://github.com/apache/flink/pull/9965#discussion_r351613313
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/resources/Resource.java
 ##
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient.resources;
+
+import org.apache.flink.configuration.Configuration;
+
+/**
+ * Represent a kubernetes resource.
+ */
+public abstract class Resource {
+
+   protected T internalResource;
 
 Review comment:
   There is code
   
   ```java
   class SubResource extends Resource<T> {
      SubResource(Configuration flinkConfig, T internalResource) {
         super(flinkConfig);
         this.internalResource = internalResource;
      }
   }
   ```
   
   which could be 
   
   ```java
 super(flinkConfig, internalResource);
   ```
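
As a runnable illustration of the suggested refactor (hypothetical class names, not the PR's actual code): pushing the `internalResource` assignment into the base-class constructor lets the field be `private final` and collapses each subclass constructor to a single `super(...)` call:

```java
// Hypothetical demo of constructor chaining for the Resource hierarchy.
public class ResourceDemo {

    static abstract class Resource<T> {
        private final String flinkConfig;   // stands in for Flink's Configuration
        private final T internalResource;

        Resource(String flinkConfig, T internalResource) {
            this.flinkConfig = flinkConfig;
            this.internalResource = internalResource;
        }

        T getInternalResource() {
            return internalResource;
        }
    }

    static class FlinkPodResource extends Resource<String> {
        FlinkPodResource(String flinkConfig, String pod) {
            super(flinkConfig, pod);        // one call instead of super(...) plus a field write
        }
    }

    public static void main(String[] args) {
        System.out.println(
            new FlinkPodResource("conf", "taskmanager-pod-1").getInternalResource());
        // prints: taskmanager-pod-1
    }
}
```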




[GitHub] [flink] TisonKun commented on a change in pull request #9965: [FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients

2019-11-27 Thread GitBox
TisonKun commented on a change in pull request #9965: 
[FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients
URL: https://github.com/apache/flink/pull/9965#discussion_r351614570
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/FlinkKubeClient.java
 ##
 @@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient;
+
+import org.apache.flink.client.deployment.ClusterSpecification;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkPod;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkService;
+
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+
+/**
+ * The client to talk with kubernetes.
+ */
+public interface FlinkKubeClient extends AutoCloseable {
+
+   /**
+* Create kubernetes config map, include flink-conf.yaml, 
log4j.properties.
+*/
+   void createConfigMap() throws Exception;
+
+   /**
+* Create kubernetes service for internal use. This will be set to 
jobmanager rpc address.
+* It is the owner of all resources. After deletion, all other resource 
will be deleted by gc.
+* A CompletableFuture is returned and could be used to wait for 
service ready.
+*/
+   CompletableFuture createInternalService(String clusterId) 
throws Exception;
+
+   /**
+* Create kubernetes service for rest port. This will be used by client 
to interact with flink cluster.
+*/
+   CompletableFuture createRestService(String clusterId) 
throws Exception;
+
+   /**
+* Create flink master deployment with replication of 1.
+*/
+   void createFlinkMasterDeployment(ClusterSpecification clusterSpec);
+
+   /**
+* Create task manager pod.
+*/
+   void createTaskManagerPod(TaskManagerPodParameter parameter);
+
+   /**
+* Stop a specified pod by name.
+*/
+   void stopPod(String podName);
+
+   /**
+* Stop cluster and clean up all resources, include services, auxiliary 
services and all running pods.
+*/
+   void stopAndCleanupCluster(String clusterId);
+
+   /**
+* Get the kubernetes internal service of the given flink clusterId.
+*/
+   FlinkService getInternalService(String clusterId);
 
 Review comment:
  I notice that it could be `Nullable` in the Fabric8 implementation. The fact that
it is nullable, and the meaning of null, should be documented.




[GitHub] [flink] TisonKun commented on a change in pull request #9965: [FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients

2019-11-27 Thread GitBox
TisonKun commented on a change in pull request #9965: 
[FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients
URL: https://github.com/apache/flink/pull/9965#discussion_r351613978
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/Fabric8FlinkKubeClient.java
 ##
 @@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient;
+
+import org.apache.flink.client.deployment.ClusterSpecification;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.RestOptions;
+import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
+import org.apache.flink.kubernetes.kubeclient.decorators.ConfigMapDecorator;
+import org.apache.flink.kubernetes.kubeclient.decorators.Decorator;
+import 
org.apache.flink.kubernetes.kubeclient.decorators.FlinkMasterDeploymentDecorator;
+import org.apache.flink.kubernetes.kubeclient.decorators.InitializerDecorator;
+import 
org.apache.flink.kubernetes.kubeclient.decorators.OwnerReferenceDecorator;
+import org.apache.flink.kubernetes.kubeclient.decorators.ServiceDecorator;
+import 
org.apache.flink.kubernetes.kubeclient.decorators.TaskManagerPodDecorator;
+import org.apache.flink.kubernetes.kubeclient.resources.ActionWatcher;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkConfigMap;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkDeployment;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkPod;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkService;
+import org.apache.flink.kubernetes.utils.Constants;
+
+import io.fabric8.kubernetes.api.model.ConfigMap;
+import io.fabric8.kubernetes.api.model.Pod;
+import io.fabric8.kubernetes.api.model.Service;
+import io.fabric8.kubernetes.api.model.ServicePort;
+import io.fabric8.kubernetes.api.model.apps.Deployment;
+import io.fabric8.kubernetes.client.KubernetesClient;
+import io.fabric8.kubernetes.client.KubernetesClientException;
+import io.fabric8.kubernetes.client.Watch;
+import io.fabric8.kubernetes.client.Watcher;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * The implementation of {@link FlinkKubeClient}.
+ */
+public class Fabric8FlinkKubeClient implements FlinkKubeClient {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(Fabric8FlinkKubeClient.class);
+
+   private final Configuration flinkConfig;
+
+   private final KubernetesClient internalClient;
+
+   private List> configMapDecorators;
 
 Review comment:
   I'd tend to mark these lists as `final` when possible




[GitHub] [flink] TisonKun commented on a change in pull request #9965: [FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients

2019-11-27 Thread GitBox
TisonKun commented on a change in pull request #9965: 
[FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients
URL: https://github.com/apache/flink/pull/9965#discussion_r351613059
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/resources/Resource.java
 ##
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient.resources;
+
+import org.apache.flink.configuration.Configuration;
+
+/**
+ * Represent a kubernetes resource.
+ */
+public abstract class Resource {
+
+   protected T internalResource;
 
 Review comment:
  Given we have getters for `internalResource` and `flinkConfig`, they could be
`private final`; I just noticed that their setters are not used at all.




[GitHub] [flink] TisonKun commented on a change in pull request #9965: [FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients

2019-11-27 Thread GitBox
TisonKun commented on a change in pull request #9965: 
[FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients
URL: https://github.com/apache/flink/pull/9965#discussion_r351615017
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/KubeClientFactory.java
 ##
 @@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
+import org.apache.flink.kubernetes.utils.KubernetesUtils;
+
+import io.fabric8.kubernetes.client.Config;
+import io.fabric8.kubernetes.client.DefaultKubernetesClient;
+import io.fabric8.kubernetes.client.KubernetesClient;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+
+/**
+ * Factory class to create {@link FlinkKubeClient}.
+ */
+public class KubeClientFactory {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(KubeClientFactory.class);
+
+   public static synchronized FlinkKubeClient 
fromConfiguration(Configuration flinkConfig) {
+
+   Config config = null;
+
+   String kubeConfigFile = 
flinkConfig.getString(KubernetesConfigOptions.KUBE_CONFIG_FILE);
+   if (kubeConfigFile != null && 
KubernetesUtils.getContentFromFile(kubeConfigFile) != null) {
+   LOG.debug("Load kubernetes config from file: {}.", 
kubeConfigFile);
+   try {
+   config = 
Config.fromKubeconfig(KubernetesUtils.getContentFromFile(kubeConfigFile));
+   } catch (IOException e) {
+   LOG.error("Load kubernetes config failed.", e);
 
 Review comment:
  We don't throw an exception here, so `new DefaultKubernetesClient(config);` could
effectively be `new DefaultKubernetesClient(null);`. What are the semantics in that
case? It looks weird that we log an error but the control flow continues.




[GitHub] [flink] TisonKun commented on a change in pull request #9965: [FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients

2019-11-27 Thread GitBox
TisonKun commented on a change in pull request #9965: 
[FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients
URL: https://github.com/apache/flink/pull/9965#discussion_r351612392
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/resources/ActionWatcher.java
 ##
 @@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient.resources;
+
+import io.fabric8.kubernetes.api.model.HasMetadata;
+import io.fabric8.kubernetes.client.KubernetesClientException;
+import io.fabric8.kubernetes.client.KubernetesClientTimeoutException;
+import io.fabric8.kubernetes.client.Watcher;
+
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * Watch a specific action.
+ */
+public class ActionWatcher implements Watcher {
+   private final CountDownLatch latch = new CountDownLatch(1);
+   private final AtomicReference reference = new AtomicReference();
+   private final T resource;
+   private Action expectedAction;
 
 Review comment:
   It could be `final` IMO.




[GitHub] [flink] TisonKun commented on a change in pull request #9965: [FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients

2019-11-27 Thread GitBox
TisonKun commented on a change in pull request #9965: 
[FLINK-10935][kubernetes]Implement KubeClient with Faric8 Kubernetes clients
URL: https://github.com/apache/flink/pull/9965#discussion_r351614350
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/FlinkKubeClient.java
 ##
 @@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.kubeclient;
+
+import org.apache.flink.client.deployment.ClusterSpecification;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkPod;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkService;
+
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+
+/**
+ * The client to talk with kubernetes.
+ */
+public interface FlinkKubeClient extends AutoCloseable {
+
+   /**
+* Create kubernetes config map, including flink-conf.yaml and log4j.properties.
+*/
+   void createConfigMap() throws Exception;
+
+   /**
+* Create kubernetes service for internal use. This will be set to the jobmanager rpc address.
+* It is the owner of all resources. After deletion, all other resources will be deleted by gc.
+* A CompletableFuture is returned and could be used to wait until the service is ready.
+*/
+   CompletableFuture<FlinkService> createInternalService(String clusterId) throws Exception;
+
+   /**
+* Create kubernetes service for the rest port. This will be used by the client to interact with the flink cluster.
+*/
+   CompletableFuture<FlinkService> createRestService(String clusterId) throws Exception;
+
+   /**
+* Create flink master deployment with replication of 1.
+*/
+   void createFlinkMasterDeployment(ClusterSpecification clusterSpec);
+
+   /**
+* Create task manager pod.
+*/
+   void createTaskManagerPod(TaskManagerPodParameter parameter);
+
+   /**
+* Stop a specified pod by name.
+*/
+   void stopPod(String podName);
+
+   /**
+* Stop the cluster and clean up all resources, including services, auxiliary services and all running pods.
+*/
+   void stopAndCleanupCluster(String clusterId);
+
+   /**
+* Get the kubernetes internal service of the given flink clusterId.
+*/
+   FlinkService getInternalService(String clusterId);
+
+   /**
+* Get the kubernetes rest service of the given flink clusterId.
+*/
+   FlinkService getRestService(String clusterId);
+
+   /**
+* Get the rest endpoints for access outside cluster.
+*/
+   Endpoint getRestEndpoints(String clusterId);
 
 Review comment:
   I notice that it could be `Nullable` in the Fabric8 implementation. The fact that 
it is nullable, and the meaning of null, should be documented.
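   A minimal sketch of how that contract could be documented (the interface and method are modeled on the diff above; the local `Nullable` marker annotation, the javadoc wording, and the stub implementation are illustrative assumptions, not the real Flink/Fabric8 code):

   ```java
   import java.lang.annotation.Documented;

   // Sketch only: a local marker annotation standing in for a real @Nullable.
   @Documented
   @interface Nullable {}

   interface FlinkKubeClientSketch {
       /**
        * Get the rest endpoint for access outside the cluster.
        *
        * @return the endpoint, or {@code null} if the rest service
        *         has not been created yet (assumed meaning of null).
        */
       @Nullable
       String getRestEndpoints(String clusterId);
   }

   public class NullableContractDemo {
       public static void main(String[] args) {
           // Stub implementation that has no rest service yet.
           FlinkKubeClientSketch client = clusterId -> null;
           String endpoint = client.getRestEndpoints("my-cluster");
           // Callers must handle null explicitly because the contract allows it.
           System.out.println(endpoint == null ? "no endpoint yet" : endpoint);
       }
   }
   ```

   Documenting the null case in the javadoc (or annotating the method) makes the caller's obligation explicit instead of leaving it implicit in one implementation.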




[jira] [Commented] (FLINK-14470) Watermark display not working with high parallelism job

2019-11-27 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984182#comment-16984182
 ] 

lining commented on FLINK-14470:


cc [~chesnay]

> Watermark display not working with high parallelism job
> ---
>
> Key: FLINK-14470
> URL: https://issues.apache.org/jira/browse/FLINK-14470
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.8.2, 1.9.1
>Reporter: Thomas Weise
>Assignee: Yadong Xie
>Priority: Major
>
> Watermarks don't display in the UI when the job graph has many vertices. The 
> REST API call to fetch currentInputWatermark fails as it enumerates each 
> subtask, which obviously fails beyond a limit. With the new UI that can be 
> noticed through a flickering error message in the upper right corner.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] klion26 commented on issue #10344: [FLINK-14264][StateBackend][Rest] Expose state backend in checkpoint rest api

2019-11-27 Thread GitBox
klion26 commented on issue #10344: [FLINK-14264][StateBackend][Rest] Expose 
state backend in checkpoint rest api
URL: https://github.com/apache/flink/pull/10344#issuecomment-559365412
 
 
   cc @aljoscha 
   Another question: is there a way to automatically update `rest_api.html`, 
or do I need to update it manually? I followed the `README` in 
`docs`, but it does not seem to generate the config.
   And if there is currently no script to generate it automatically, do we need to 
add one?




[jira] [Commented] (FLINK-14470) Watermark display not working with high parallelism job

2019-11-27 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984181#comment-16984181
 ] 

lining commented on FLINK-14470:


I have tested it: if you request more than 165 watermarks, the call is redirected to 
URL /bad-request. There isn't any handler for that URL, so its HTTP status is 404. 
* Find out the reason for the redirect to /bad-request. If an HTTP call fails with an 
exception, it should show the error, not redirect to /bad-request.
* Add a new handler for getting a vertex's metrics, such as 
/jobs/:jobid/vertices/:vertexId/metrics?get=[key]=[]=[]

> Watermark display not working with high parallelism job
> ---
>
> Key: FLINK-14470
> URL: https://issues.apache.org/jira/browse/FLINK-14470
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.8.2, 1.9.1
>Reporter: Thomas Weise
>Assignee: Yadong Xie
>Priority: Major
>
> Watermarks don't display in the UI when the job graph has many vertices. The 
> REST API call to fetch currentInputWatermark fails as it enumerates each 
> subtask, which obviously fails beyond a limit. With the new UI that can be 
> noticed through a flickering error message in the upper right corner.





[GitHub] [flink] flinkbot commented on issue #10344: [FLINK-14264][StateBackend][Rest] Expose state backend in checkpoint rest api

2019-11-27 Thread GitBox
flinkbot commented on issue #10344: [FLINK-14264][StateBackend][Rest] Expose 
state backend in checkpoint rest api
URL: https://github.com/apache/flink/pull/10344#issuecomment-559365495
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit fbd6e8914b49348a275657130ebdd9413d7d17c6 (Thu Nov 28 
06:51:46 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-14264).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-14963) Massive Duplicated Useless Oss Message in jobmanager.log

2019-11-27 Thread Eric Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lee updated FLINK-14963:
-
Issue Type: Bug  (was: Improvement)

> Massive Duplicated Useless Oss Message in jobmanager.log
> 
>
> Key: FLINK-14963
> URL: https://issues.apache.org/jira/browse/FLINK-14963
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.9.1
>Reporter: Eric Lee
>Priority: Major
> Attachments: 7120B204-1B34-46D5-A9ED-D5BAB2D76F26.png
>
>
> When I was using AliCloud Oss as the external file system for flink 
> checkpoints, there were massive oss related logs which are useless because 
> checkpoints can still complete successfully.
> After I debugged into checkpoint coordinator and file system, I found the 
> root cause, the wrong use of oss sdk to check whether the target exists in 
> hadoop fs module.
> (Attached image is the screenshot of jobmanager.log)
> 
> Solutions:
> I have submitted a [pull request 
> |https://github.com/aliyun/aliyun-oss-java-sdk/pull/264]in aliyun-oss-client 
> repository and v3.8.0 has released. We can alter the oss client version with 
> [3.8.0|https://github.com/apache/flink/blob/master/flink-filesystems/flink-oss-fs-hadoop/pom.xml#L31].
>  
>  





[jira] [Updated] (FLINK-14264) Expose CheckpointBackend in checkpoint config RestAPI

2019-11-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14264:
---
Labels: pull-request-available  (was: )

> Expose CheckpointBackend in checkpoint config RestAPI
> -
>
> Key: FLINK-14264
> URL: https://issues.apache.org/jira/browse/FLINK-14264
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST, Runtime / State Backends
>Reporter: Congxian Qiu(klion26)
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> Currently, we can get checkpoint config from rest api[1], the response 
> contains the information as below
>  * timeout
>  * min_pause
>  * max_concurrent
>  * externalization
> But it did not contain the type of CheckpointBackend. In some scenarios we 
> want to get the CheckpointBackend type from REST, so this issue wants to add the 
> simple name of the CheckpointBackend to the {{checkpoints/config}} rest api with 
> key {{checkpoint_backend}}, so the response will contain information such 
> as below
>  * {{timeout}}
>  * {{min_pause}}
>  * {{max_concurrent}}
>  * checkpoint_backend 
>  * externalization
>  
>  [1] 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/monitoring/rest_api.html#jobs-jobid-checkpoints-config]
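The proposed response shape could be sketched as below (only the keys come from the issue description; the values, and the shape of the {{externalization}} entry, are illustrative assumptions):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class CheckpointConfigResponseSketch {

    /** Build the proposed checkpoints/config response shape (values made up). */
    static Map<String, Object> buildResponse() {
        Map<String, Object> response = new LinkedHashMap<>();
        // Existing fields listed in the issue description.
        response.put("timeout", 600000L);
        response.put("min_pause", 0L);
        response.put("max_concurrent", 1);
        // Proposed addition: the simple class name of the checkpoint backend.
        response.put("checkpoint_backend", "FsStateBackend");
        response.put("externalization", Map.of("enabled", false));
        return response;
    }

    public static void main(String[] args) {
        System.out.println(buildResponse());
    }
}
{code}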





[GitHub] [flink] klion26 opened a new pull request #10344: [FLINK-14264][StateBackend][Rest] Expose state backend in checkpoint rest api

2019-11-27 Thread GitBox
klion26 opened a new pull request #10344: [FLINK-14264][StateBackend][Rest] 
Expose state backend in checkpoint rest api
URL: https://github.com/apache/flink/pull/10344
 
 
   
   ## What is the purpose of the change
   
   Expose `stateBackend` in `jobs/:jobId/checkpoints/config` rest api
   
   
   ## Verifying this change
   
   
   This change added tests and can be verified as follows:
   
   - `CheckpointConfigInfoTest#testJsonMarshalling`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   




[jira] [Updated] (FLINK-14147) Reduce REST API Request/Response redundancy

2019-11-27 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14147:
---
Description: 
1. Redundancy Response

In the response of API /jobs/:jobid, the id and name in both the plan and the 
vertices data are exactly the same; this wastes a lot of network bandwidth 
if the vertex graph is very big (1000+ vertices in a job).

!16_17_07__09_20_2019.jpg|width=427,height=279!

2. Redundancy Request
 In the current Web UI design, we have to query the REST API once per vertex to display 
the low watermarks in the job graph. If the vertex number is very large (1000+ 
sometimes), the Web UI will send 1000+ requests to the REST API; since the max 
concurrent HTTP requests in a browser are limited, this brings a long 
delay for users. Testing shows that if more than 165 watermarks are requested, the 
call is redirected to /bad-request and then returns 404.

  was:
1. Redundancy Response

In the response of API /jobs/:jobid, the id and name in both plan and the 
vertices data are exactly the same, it would waste a lot of network bandwidth 
if the vertex graph is very big(1000+ vertex in a job).

!16_17_07__09_20_2019.jpg|width=427,height=279!


2. Redundancy Request
In the current Web UI design, we have to query vertex number times to display 
the low watermarks in the job graph. If the vertex number is very large(1000+ 
sometimes), the Web UI will send 1000+ request to the REST API, the max 
concurrent HTTP request in a browser is limited, it would bring a long time 
delay for users.


>  Reduce REST API Request/Response redundancy
> 
>
> Key: FLINK-14147
> URL: https://issues.apache.org/jira/browse/FLINK-14147
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: Yadong Xie
>Priority: Major
> Attachments: 16_17_07__09_20_2019.jpg
>
>
> 1. Redundancy Response
> In the response of API /jobs/:jobid, the id and name in both the plan and the 
> vertices data are exactly the same; this wastes a lot of network bandwidth 
> if the vertex graph is very big (1000+ vertices in a job).
> !16_17_07__09_20_2019.jpg|width=427,height=279!
> 2. Redundancy Request
>  In the current Web UI design, we have to query the REST API once per vertex to 
> display the low watermarks in the job graph. If the vertex number is very 
> large (1000+ sometimes), the Web UI will send 1000+ requests to the REST API; 
> since the max concurrent HTTP requests in a browser are limited, this brings a 
> long delay for users. Testing shows that if more than 165 watermarks are 
> requested, the call is redirected to /bad-request and then returns 404.





[GitHub] [flink] flinkbot edited a comment on issue #10343: [FLINK-14729][connectors] Multi-topics consuming from KafkaTableSource

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10343: [FLINK-14729][connectors] 
Multi-topics consuming from KafkaTableSource
URL: https://github.com/apache/flink/pull/10343#issuecomment-559320006
 
 
   
   ## CI report:
   
   * 42fbaee718ac9509938b448b80f1b78ba3d7cd52 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138507502)
   * 0b765573aec7a60ed6ba17a10757ebfe6970edb3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138512992)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10343: [FLINK-14729][connectors] Multi-topics consuming from KafkaTableSource

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10343: [FLINK-14729][connectors] 
Multi-topics consuming from KafkaTableSource
URL: https://github.com/apache/flink/pull/10343#issuecomment-559320006
 
 
   
   ## CI report:
   
   * 42fbaee718ac9509938b448b80f1b78ba3d7cd52 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138507502)
   * 0b765573aec7a60ed6ba17a10757ebfe6970edb3 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10309: [FLINK-14909][runtime] Let tasks get scheduled in topological order when possible

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10309: [FLINK-14909][runtime] Let tasks get 
scheduled in topological order when possible
URL: https://github.com/apache/flink/pull/10309#issuecomment-558068110
 
 
   
   ## CI report:
   
   * 51e86fe9d2397879945677aee80da5983402afca : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138017542)
   * 05c35c051959237fe03cf96491c69f317fca0438 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/138366843)
   * 5cc1dad6b5c2c69434b633bce559ca6f171006bd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138370781)
   * 95aecab94c2ca6fe9337fa76e86866200189309b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138405642)
   * 2e7c4c54a4b98268f21c4f8196040b036c35cbd6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138439651)
   * 449944386e5c6acba9b061cb7c121ba578d54fd9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138507488)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] fangpengcheng95 commented on issue #10343: [FLINK-14729][connectors] Multi-topics consuming from KafkaTableSource

2019-11-27 Thread GitBox
fangpengcheng95 commented on issue #10343: [FLINK-14729][connectors] 
Multi-topics consuming from KafkaTableSource
URL: https://github.com/apache/flink/pull/10343#issuecomment-559329442
 
 
   @flinkbot run travis




[GitHub] [flink] flinkbot edited a comment on issue #10343: [FLINK-14729][connectors] Multi-topics consuming from KafkaTableSource

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10343: [FLINK-14729][connectors] 
Multi-topics consuming from KafkaTableSource
URL: https://github.com/apache/flink/pull/10343#issuecomment-559320006
 
 
   
   ## CI report:
   
   * 42fbaee718ac9509938b448b80f1b78ba3d7cd52 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138507502)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10309: [FLINK-14909][runtime] Let tasks get scheduled in topological order when possible

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10309: [FLINK-14909][runtime] Let tasks get 
scheduled in topological order when possible
URL: https://github.com/apache/flink/pull/10309#issuecomment-558068110
 
 
   
   ## CI report:
   
   * 51e86fe9d2397879945677aee80da5983402afca : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138017542)
   * 05c35c051959237fe03cf96491c69f317fca0438 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/138366843)
   * 5cc1dad6b5c2c69434b633bce559ca6f171006bd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138370781)
   * 95aecab94c2ca6fe9337fa76e86866200189309b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138405642)
   * 2e7c4c54a4b98268f21c4f8196040b036c35cbd6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138439651)
   * 449944386e5c6acba9b061cb7c121ba578d54fd9 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138507488)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Commented] (FLINK-14717) JobExceptionsHandler show exceptions of prior attempts

2019-11-27 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984106#comment-16984106
 ] 

lining commented on FLINK-14717:


The current priority is based on the task's position in the vertex. Maybe we could 
order by time instead, and add a parameter to determine whether the latest or oldest 
exceptions are displayed.
Since this interface is job level, it should show all exceptions of the job, so 
there aren't any inconsistencies.

As for whether exceptions need to be grouped by vertex and subtask: I think that 
would be great! Some advice:
* we could emulate the way the current checkpoint page displays things:
!screenshot-1.png! 

> JobExceptionsHandler show exceptions of prior  attempts 
> 
>
> Key: FLINK-14717
> URL: https://issues.apache.org/jira/browse/FLINK-14717
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Reporter: lining
>Priority: Major
> Attachments: screenshot-1.png
>
>
> *Current*
> The job's exceptions just show current attempt’s exceptions in web UI.(ps: 
> [code|https://github.com/apache/flink/blob/34b5399f4effb679baabd8bca312cbf92ec34165/flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/job/JobExceptionsHandler.java#L97-L98])
> If the job failovers, we couldn't see any prior attempts' exceptions.
> *Proposal*
> We could use executionVertex.getPriorExecutionAttempt to get prior attempt in 
> JobExceptionsHandler.
> {code:java}
> for (int i = task.getAttemptNumber() - 1; i >= 0; i--) {
>   task = executionVertex.getPriorExecutionAttempt(i);
> }
> {code}
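The quoted proposal could be sketched end-to-end as follows ({{Execution}} and {{ExecutionVertex}} here are simplified stand-ins, not the real Flink classes; only {{getPriorExecutionAttempt}} and the loop shape come from the proposal):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for Flink's Execution (illustrative only).
class Execution {
    final int attemptNumber;
    final String failureCause; // null if this attempt has not failed
    Execution(int attemptNumber, String failureCause) {
        this.attemptNumber = attemptNumber;
        this.failureCause = failureCause;
    }
}

// Simplified stand-in for Flink's ExecutionVertex (illustrative only).
class ExecutionVertex {
    final List<Execution> attempts = new ArrayList<>();
    Execution getCurrentExecutionAttempt() { return attempts.get(attempts.size() - 1); }
    Execution getPriorExecutionAttempt(int attemptNumber) { return attempts.get(attemptNumber); }
}

public class PriorAttemptsSketch {
    /** Collect failure causes of the current attempt and all prior attempts. */
    static List<String> allFailureCauses(ExecutionVertex vertex) {
        List<String> causes = new ArrayList<>();
        Execution current = vertex.getCurrentExecutionAttempt();
        if (current.failureCause != null) {
            causes.add(current.failureCause);
        }
        // Walk prior attempts from newest to oldest, as in the quoted loop.
        for (int i = current.attemptNumber - 1; i >= 0; i--) {
            Execution prior = vertex.getPriorExecutionAttempt(i);
            if (prior.failureCause != null) {
                causes.add(prior.failureCause);
            }
        }
        return causes;
    }

    public static void main(String[] args) {
        ExecutionVertex vertex = new ExecutionVertex();
        vertex.attempts.add(new Execution(0, "OOM on attempt 0"));
        vertex.attempts.add(new Execution(1, null)); // current attempt, still healthy
        System.out.println(allFailureCauses(vertex)); // prints [OOM on attempt 0]
    }
}
{code}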





[jira] [Updated] (FLINK-14717) JobExceptionsHandler show exceptions of prior attempts

2019-11-27 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14717:
---
Attachment: screenshot-1.png

> JobExceptionsHandler show exceptions of prior  attempts 
> 
>
> Key: FLINK-14717
> URL: https://issues.apache.org/jira/browse/FLINK-14717
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Reporter: lining
>Priority: Major
> Attachments: screenshot-1.png
>
>
> *Current*
> The job's exceptions just show current attempt’s exceptions in web UI.(ps: 
> [code|https://github.com/apache/flink/blob/34b5399f4effb679baabd8bca312cbf92ec34165/flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/job/JobExceptionsHandler.java#L97-L98])
> If the job failovers, we couldn't see any prior attempts' exceptions.
> *Proposal*
> We could use executionVertex.getPriorExecutionAttempt to get prior attempt in 
> JobExceptionsHandler.
> {code:java}
> for (int i = task.getAttemptNumber() - 1; i >= 0; i--) {
>   task = executionVertex.getPriorExecutionAttempt(i);
> }
> {code}





[jira] [Commented] (FLINK-14952) Yarn containers can exceed physical memory limits when using BoundedBlockingSubpartition.

2019-11-27 Thread Yingjie Cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984103#comment-16984103
 ] 

Yingjie Cao commented on FLINK-14952:
-

> Doesn't this defeat the purpose of using mmap in the first place? For it to 
> be beneficial two different readers of the mmap'ed region would have to 
> closely follow one another, right?

You are right, if we decide to manage the mmapped region, we need to consider 
when and which region to recycle. Implementing a memory management algorithm is 
possible but can be complicated; currently, the OS does it for us.

Maybe there's another choice we can consider - using the FILE-FILE mode in the 
first place. The FILE-FILE mode can also leverage the capability of the OS page 
cache, and the only problem is that it uses some unpooled heap memory for 
reading, which has the potential to cause OOM. If we can manage that memory in the 
future, there should be no problem with FILE-FILE mode.

> Yarn containers can exceed physical memory limits when using 
> BoundedBlockingSubpartition.
> -
>
> Key: FLINK-14952
> URL: https://issues.apache.org/jira/browse/FLINK-14952
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Network
>Affects Versions: 1.9.1
>Reporter: Piotr Nowojski
>Priority: Blocker
> Fix For: 1.10.0
>
>
> As [reported by a user on the user mailing 
> list|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/CoGroup-SortMerger-performance-degradation-from-1-6-4-1-9-1-td31082.html],
>  combination of using {{BoundedBlockingSubpartition}} with yarn containers 
> can cause yarn container to exceed memory limits.
> {quote}2019-11-19 12:49:23,068 INFO org.apache.flink.yarn.YarnResourceManager 
> - Closing TaskExecutor connection container_e42_1574076744505_9444_01_04 
> because: Container 
> [pid=42774,containerID=container_e42_1574076744505_9444_01_04] is running 
> beyond physical memory limits. Current usage: 12.0 GB of 12 GB physical 
> memory used; 13.9 GB of 25.2 GB virtual memory used. Killing container.
> {quote}
> This is probably happening because memory usage of mmap is not capped and not 
> accounted by configured memory limits, however yarn is tracking this memory 
> usage and once Flink exceeds some threshold, container is being killed.
> Workaround is to overrule default value and force Flink to not user mmap, by 
> setting a secret (狼) config option:
> {noformat}
> taskmanager.network.bounded-blocking-subpartition-type: file
> {noformat}





[jira] [Commented] (FLINK-14966) Online document typo, package name org.apache.flinktypes.xxxx should be corrected as org.apache.flink.types.xxxx . From online documents flink-docs-release-1.9 (Applic

2019-11-27 Thread yuzhemin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984100#comment-16984100
 ] 

yuzhemin commented on FLINK-14966:
--

[~klion26] Thank you for reminding me. I'm just starting to learn Flink and am not so 
familiar with Jira; hope you don't mind.

> Online document typo, package name org.apache.flinktypes. should be 
> corrected as org.apache.flink.types. . From online documents 
> flink-docs-release-1.9 (Application Development > Basic API 
> Concepts>Supported Data Types>Values). 
> 
>
> Key: FLINK-14966
> URL: https://issues.apache.org/jira/browse/FLINK-14966
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.9.0
>Reporter: yuzhemin
>Priority: Trivial
>
> Document url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/api_concepts.html]
> error location: _Application Development > Basic API Concepts>Supported Data 
> Types>Values_
> I guess the names _org.apache.flinktypes.Value_ and 
> _org.apache.flinktypes.CopyableValue_ referred to in the paragraph mean 
> _org.apache.flink.types.XXX_





[GitHub] [flink] flinkbot commented on issue #10343: [FLINK-14729][connectors] Multi-topics consuming from KafkaTableSource

2019-11-27 Thread GitBox
flinkbot commented on issue #10343: [FLINK-14729][connectors] Multi-topics 
consuming from KafkaTableSource
URL: https://github.com/apache/flink/pull/10343#issuecomment-559320006
 
 
   
   ## CI report:
   
   * 42fbaee718ac9509938b448b80f1b78ba3d7cd52 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10309: [FLINK-14909][runtime] Let tasks get scheduled in topological order when possible

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10309: [FLINK-14909][runtime] Let tasks get 
scheduled in topological order when possible
URL: https://github.com/apache/flink/pull/10309#issuecomment-558068110
 
 
   
   ## CI report:
   
   * 51e86fe9d2397879945677aee80da5983402afca : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138017542)
   * 05c35c051959237fe03cf96491c69f317fca0438 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/138366843)
   * 5cc1dad6b5c2c69434b633bce559ca6f171006bd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138370781)
   * 95aecab94c2ca6fe9337fa76e86866200189309b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138405642)
   * 2e7c4c54a4b98268f21c4f8196040b036c35cbd6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138439651)
   * 449944386e5c6acba9b061cb7c121ba578d54fd9 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Commented] (FLINK-14845) Introduce data compression to blocking shuffle.

2019-11-27 Thread Yingjie Cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984097#comment-16984097
 ] 

Yingjie Cao commented on FLINK-14845:
-

> So I think it's fine to not make it pluggable in the first version. However 
> probably this should be changed and moved to plugin sooner or later.

Totally agree.

> In that case we would need to document this expected behaviour (previously 
>returned buffers should be recycled immediately before asking for new ones) 
>somewhere in the {{InputGate#getNext}} and {{InputChannel#getNextBuffer}}.

That's necessary and I will do that.

> Introduce data compression to blocking shuffle.
> ---
>
> Key: FLINK-14845
> URL: https://issues.apache.org/jira/browse/FLINK-14845
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Major
>
> Currently, the blocking shuffle writer writes raw output data to disk without 
> compression. For IO-bound scenarios, this can be optimized by compressing 
> the output data. It is better to introduce a compression mechanism and offer 
> users a config option to decide whether to compress the shuffle 
> data. Actually, we have implemented compression in our internal Flink version 
> and here are some key points:
> 1. Where to compress/decompress?
> Compress at the upstream and decompress at the downstream.
> 2. Which threads do the compression/decompression?
> Task threads do the compression/decompression.
> 3. Data compression granularity.
> Per buffer.
> 4. How to handle the case where the data size becomes even bigger after 
> compression?
> Give up compression in this case and introduce an extra flag to identify 
> whether the data was compressed; that is, the output may be a mixture of 
> compressed and uncompressed data.
>  
> We'd like to introduce blocking shuffle data compression to Flink if there 
> are interests.
>  
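Point 4 of the quoted design (fall back to the raw bytes, plus a flag, whenever compression does not pay off) can be sketched as follows. This is a minimal illustration using the JDK's `Deflater`/`Inflater`, not Flink's actual shuffle code; the class name, method names, and the one-byte flag framing are made up for the example:

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

/** Hypothetical sketch of per-buffer compression with an "is compressed" flag. */
public class BufferCompressionSketch {

    private static final byte UNCOMPRESSED = 0;
    private static final byte COMPRESSED = 1;

    /** Compress one buffer; keep the raw bytes if compression does not shrink them. */
    static byte[] maybeCompress(byte[] buffer) {
        Deflater deflater = new Deflater();
        deflater.setInput(buffer);
        deflater.finish();
        byte[] out = new byte[buffer.length]; // only useful if strictly smaller
        int len = deflater.deflate(out);
        // compression "helped" only if everything fit and the result is smaller
        boolean helped = deflater.finished() && len < buffer.length;
        deflater.end();

        byte[] framed = new byte[1 + (helped ? len : buffer.length)];
        framed[0] = helped ? COMPRESSED : UNCOMPRESSED;
        System.arraycopy(helped ? out : buffer, 0, framed, 1, framed.length - 1);
        return framed;
    }

    /** Downstream side: branch on the flag byte, inflating only when needed. */
    static byte[] decompress(byte[] framed, int originalLength) {
        byte[] payload = Arrays.copyOfRange(framed, 1, framed.length);
        if (framed[0] == UNCOMPRESSED) {
            return payload;
        }
        Inflater inflater = new Inflater();
        inflater.setInput(payload);
        byte[] out = new byte[originalLength];
        try {
            inflater.inflate(out);
        } catch (DataFormatException e) {
            throw new IllegalStateException("corrupted compressed buffer", e);
        } finally {
            inflater.end();
        }
        return out;
    }
}
```

A real implementation would operate on pooled network buffers and carry the flag in the buffer header rather than a leading byte, but the fallback logic is the same.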





[jira] [Updated] (FLINK-14966) Online document typo, package name org.apache.flinktypes.xxxx should be corrected as org.apache.flink.types.xxxx . From online documents flink-docs-release-1.9 (Applicat

2019-11-27 Thread yuzhemin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuzhemin updated FLINK-14966:
-
Summary: Online document typo, package name org.apache.flinktypes.xxxx 
should be corrected as org.apache.flink.types.xxxx . From online documents 
flink-docs-release-1.9 (Application Development > Basic API Concepts>Supported 
Data Types>Values).   (was: one error in document )

> Online document typo, package name org.apache.flinktypes.xxxx should be 
> corrected as org.apache.flink.types.xxxx . From online documents 
> flink-docs-release-1.9 (Application Development > Basic API 
> Concepts>Supported Data Types>Values). 
> 
>
> Key: FLINK-14966
> URL: https://issues.apache.org/jira/browse/FLINK-14966
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.9.0
>Reporter: yuzhemin
>Priority: Trivial
>
> Document url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/api_concepts.html]
> error location: _Application Development > Basic API Concepts>Supported Data 
> Types>Values_
> I guess the locations  _org.apache.flinktypes.Value and 
> org.apache.flinktypes.CopyableValue_ _referred to in the paragraph mean 
> org.apache.flink.types.XXX_





[GitHub] [flink] zhuzhurk commented on a change in pull request #10309: [FLINK-14909][runtime] Let tasks get scheduled in topological order when possible

2019-11-27 Thread GitBox
zhuzhurk commented on a change in pull request #10309: [FLINK-14909][runtime] 
Let tasks get scheduled in topological order when possible
URL: https://github.com/apache/flink/pull/10309#discussion_r351570110
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/LazyFromSourcesSchedulingStrategy.java
 ##
 @@ -139,14 +140,27 @@ private void 
allocateSlotsAndDeployExecutionVertexIds(Set ver
private void allocateSlotsAndDeployExecutionVertices(
final Iterable> vertices) {
 
-   schedulerOperations.allocateSlotsAndDeploy(
-   IterableUtils.toStream(vertices)
-   .filter(IS_IN_CREATED_EXECUTION_STATE.and(isInputConstraintSatisfied()))
-   .map(SchedulingExecutionVertex::getId)
-   .map(executionVertexID -> new ExecutionVertexDeploymentOption(
-   executionVertexID,
-   deploymentOptions.get(executionVertexID)))
-   .collect(Collectors.toSet()));
+   final Set<ExecutionVertexID> verticesToDeploy = IterableUtils.toStream(vertices)
+   .filter(IS_IN_CREATED_EXECUTION_STATE.and(isInputConstraintSatisfied()))
+   .map(SchedulingExecutionVertex::getId)
+   .collect(Collectors.toSet());
+
+   final List<ExecutionVertexDeploymentOption> vertexDeploymentOptions =
+   createExecutionVertexDeploymentOptionsInTopologicalOrder(verticesToDeploy);
+
+   schedulerOperations.allocateSlotsAndDeploy(vertexDeploymentOptions);
+   }
+
+   private List<ExecutionVertexDeploymentOption> createExecutionVertexDeploymentOptionsInTopologicalOrder(
 
 Review comment:
   Sure. done in 449944386e5c6acba9b061cb7c121ba578d54fd9, along with a few 
other extracted utils.
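For context, the idea under review — deploy only the selected vertices, but emit them in the topology's order — can be sketched like this. This is a simplified stand-in, not the scheduler's actual code; `String` vertex IDs and the method name are made up:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** Sketch: emit deployment candidates following a precomputed topological order. */
public class TopologicalDeploySketch {

    /**
     * Keep only the vertices selected for deployment, preserving the order of the
     * topologically sorted vertex list so upstream tasks are deployed first.
     */
    static List<String> inTopologicalOrder(List<String> topologicallySortedVertices,
                                           Set<String> verticesToDeploy) {
        List<String> ordered = new ArrayList<>();
        for (String vertex : topologicallySortedVertices) {
            if (verticesToDeploy.contains(vertex)) {
                ordered.add(vertex);
            }
        }
        return ordered;
    }
}
```

The set lookup makes this a single pass over the topology, regardless of how the deployable subset was collected.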




[jira] [Created] (FLINK-14981) web frontend support dynamic adjustment of the log level

2019-11-27 Thread liu (Jira)
liu created FLINK-14981:
---

 Summary: web frontend support dynamic adjustment of the log level 
 Key: FLINK-14981
 URL: https://issues.apache.org/jira/browse/FLINK-14981
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Web Frontend
Affects Versions: 1.9.1, 1.8.2, 1.7.2
Reporter: liu
 Fix For: 1.9.1, 1.8.2, 1.7.2
 Attachments: change llog level(Storm).png

In a production environment, the application sometimes behaves differently than 
expected, which requires the debug log level to be turned on to locate the 
problem. If the debug log is turned on when the program is started, the amount 
of data will be very large, causing the web frontend to crash. If it is not 
turned on, the program is a black box and cannot be debugged.
Could Flink support dynamic adjustment of the log level, like Storm, and also 
allow setting an expiration time, so that the original log level is restored as 
soon as the time expires? This would neither make the program run like a black 
box nor print too many logs.





[GitHub] [flink] flinkbot commented on issue #10343: [FLINK-14729][connectors] Multi-topics consuming from KafkaTableSource

2019-11-27 Thread GitBox
flinkbot commented on issue #10343: [FLINK-14729][connectors] Multi-topics 
consuming from KafkaTableSource
URL: https://github.com/apache/flink/pull/10343#issuecomment-559313502
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 42fbaee718ac9509938b448b80f1b78ba3d7cd52 (Thu Nov 28 
02:15:51 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-14729).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] fangpengcheng95 opened a new pull request #10343: [FLINK-14729][connectors] Multi-topics consuming from KafkaTableSource

2019-11-27 Thread GitBox
fangpengcheng95 opened a new pull request #10343: [FLINK-14729][connectors] 
Multi-topics consuming from KafkaTableSource
URL: https://github.com/apache/flink/pull/10343
 
 
   
   
   ## What is the purpose of the change
   Propose new functionality for KafkaTableSource so that it can consume 
multiple topics at the same time. 
   
   ## Brief change log
   
   Design plan
   
   Add a new constructor in KafkaTableSource that accepts a list of topics 
as one parameter.
   Modify the existing one, which only accepts one topic as a string, to 
call the proposed one to finish the instantiation. That is to say, wrap this 
topic in a list and pass it to the multi-topics-consuming constructor.
   Modify the overridden method createKafkaConsumer in KafkaTableSource to 
pass topics as a List instead of a String.
   Replace the field topic with topics as a List in 
KafkaTableSourceBase and modify every place using topic to use topics. So we 
just need to modify the constructor of KafkaTableSourceBase, the method 
getDataStream, and equals and hashCode.
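The constructor delegation described in the change log can be sketched as follows. This is a simplified stand-in, not the actual KafkaTableSource class; the class name and field are made up for illustration:

```java
import java.util.Collections;
import java.util.List;

/** Simplified sketch of the single-topic/multi-topic constructor delegation. */
public class KafkaSourceSketch {

    private final List<String> topics;

    /** Existing single-topic entry point, kept for backward compatibility. */
    public KafkaSourceSketch(String topic) {
        // wrap the single topic in a list and delegate to the new constructor
        this(Collections.singletonList(topic));
    }

    /** Proposed multi-topic constructor; all internal logic works on the list. */
    public KafkaSourceSketch(List<String> topics) {
        this.topics = topics;
    }

    public List<String> getTopics() {
        return topics;
    }
}
```

Keeping the old constructor as a one-line wrapper means existing callers compile unchanged while every downstream method only ever sees a list.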
   
   
   
   ## Verifying this change
   
   Test plan
   
   There is little to do, as KafkaTableSource is based on FlinkKafkaConsumer, 
which already supports consuming multiple topics and is well tested. Of course, 
we can easily add more tests if needed.
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   




[jira] [Updated] (FLINK-14729) Multi-topics consuming from KafkaTableSource

2019-11-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14729:
---
Labels: features pull-request-available  (was: features)

> Multi-topics consuming from KafkaTableSource
> 
>
> Key: FLINK-14729
> URL: https://issues.apache.org/jira/browse/FLINK-14729
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Reporter: Leo Zhang
>Priority: Major
>  Labels: features, pull-request-available
>
> Hi, all. I propose new functionality for KafkaTableSource so that it can 
> consume multiple topics at the same time. 
> *Design plan*
>  * Add a new constructor in KafkaTableSource that accepts a list of topics 
> as one parameter.
>  * Modify the existing one, which only accepts one topic as a string, to call 
> the proposed one to finish the instantiation. That is to say, wrap this topic 
> in a list and pass it to the multi-topics-consuming constructor.
>  * Modify the overridden method createKafkaConsumer in KafkaTableSource to 
> pass topics as a List instead of a String.
>  * Replace the field topic with topics as a List in KafkaTableSourceBase 
> and modify every place using topic to use topics. So we just need to modify 
> the constructor of KafkaTableSourceBase, the method getDataStream, and equals 
> and hashCode.
> *Test plan*
> There is little to do, as KafkaTableSource is based on FlinkKafkaConsumer, 
> which already supports consuming multiple topics and is well tested. Of 
> course, we can easily add more tests if needed.
>  
> So what's your opinion?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #8746: [hotfix][FLINK-11120][table]fix the bug of timestampadd handles time

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #8746: [hotfix][FLINK-11120][table]fix the 
bug of timestampadd handles time
URL: https://github.com/apache/flink/pull/8746#issuecomment-528887595
 
 
   
   ## CI report:
   
   * ed381a566a61e9d5e0677cc0b68e1eee4a4bf46d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/126243427)
   * 6097409f289f66637fa3c537a76ee0d366e7bc69 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137939971)
   * 19168a7e57a20ed4005c4f53634c83d4530c5b98 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138339868)
   * 367ed7b505f0e66fec24a9e32e694279f2c55996 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138499705)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #8746: [hotfix][FLINK-11120][table]fix the bug of timestampadd handles time

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #8746: [hotfix][FLINK-11120][table]fix the 
bug of timestampadd handles time
URL: https://github.com/apache/flink/pull/8746#issuecomment-528887595
 
 
   
   ## CI report:
   
   * ed381a566a61e9d5e0677cc0b68e1eee4a4bf46d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/126243427)
   * 6097409f289f66637fa3c537a76ee0d366e7bc69 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137939971)
   * 19168a7e57a20ed4005c4f53634c83d4530c5b98 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138339868)
   * 367ed7b505f0e66fec24a9e32e694279f2c55996 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] XuQianJin-Stars commented on issue #8746: [hotfix][FLINK-11120][table]fix the bug of timestampadd handles time

2019-11-27 Thread GitBox
XuQianJin-Stars commented on issue #8746: [hotfix][FLINK-11120][table]fix the 
bug of timestampadd handles time
URL: https://github.com/apache/flink/pull/8746#issuecomment-559288102
 
 
   hi @walterddr I have addressed it in this PR.
   Thanks.




[GitHub] [flink] flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor CatalogFunction by adding Language support

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor 
CatalogFunction by adding Language support
URL: https://github.com/apache/flink/pull/10327#issuecomment-558927258
 
 
   
   ## CI report:
   
   * c3a6514c8b6c984190d4d60fb50b6313851ebba5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138348193)
   * a76b2a241dbf3b5a02cc863bf709def9c7649077 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138424324)
   * 8c3202e3a0cb04975d2c4eef923481f67fc5d5b5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138466836)
   * fff1120a5b9d751dd5373b880fd655639ff161ee : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138486720)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on a change in pull request #8746: [hotfix][FLINK-11120][table]fix the bug of timestampadd handles time

2019-11-27 Thread GitBox
walterddr commented on a change in pull request #8746: 
[hotfix][FLINK-11120][table]fix the bug of timestampadd handles time
URL: https://github.com/apache/flink/pull/8746#discussion_r351532182
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
 ##
 @@ -171,7 +171,7 @@ object ScalarOperatorGens {
 resultType.getTypeRoot match {
   case DATE =>
 generateOperatorIfNotNull(ctx, new DateType(), left, right) {
-  (l, r) => s"$l $op ((int) ($r / ${MILLIS_PER_DAY}L))"
+  (l, r) => s"$l $op (Math.toIntExact($r / ${MILLIS_PER_DAY}L))"
 
 Review comment:
   `java.lang.Math.toIntExact`. better use the full qualified name during 
codegen IMO
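Beyond the fully qualified name, the reason `Math.toIntExact` is preferred over a plain `(int)` cast is that it fails fast on overflow instead of silently truncating to the low 32 bits. A small illustration (the helper names here are made up):

```java
/** Contrast silent (int) truncation with Math.toIntExact, which fails fast on overflow. */
public class ToIntExactDemo {

    /** Narrowing cast: silently keeps only the low 32 bits of the value. */
    static int narrowByCast(long value) {
        return (int) value;
    }

    /** Math.toIntExact: throws ArithmeticException on overflow; null signals it here. */
    static Integer narrowExactOrNull(long value) {
        try {
            return java.lang.Math.toIntExact(value);
        } catch (ArithmeticException e) {
            return null; // overflow was detected rather than hidden
        }
    }
}
```

For a value like `millis / MILLIS_PER_DAY` the two forms agree while the result fits in an `int`; they only diverge — loudly, with `toIntExact` — when it does not.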




[GitHub] [flink] walterddr commented on a change in pull request #8746: [hotfix][FLINK-11120][table]fix the bug of timestampadd handles time

2019-11-27 Thread GitBox
walterddr commented on a change in pull request #8746: 
[hotfix][FLINK-11120][table]fix the bug of timestampadd handles time
URL: https://github.com/apache/flink/pull/8746#discussion_r351533727
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/codegen/calls/ScalarOperators.scala
 ##
 @@ -894,7 +894,7 @@ object ScalarOperators {
 resultType match {
   case SqlTimeTypeInfo.DATE =>
 generateOperatorIfNotNull(nullCheck, SqlTimeTypeInfo.DATE, left, 
right) {
-  (l, r) => s"$l $op ((int) ($r / ${MILLIS_PER_DAY}L))"
+  (l, r) => s"$l $op (Math.toIntExact($r / ${MILLIS_PER_DAY}L))"
 
 Review comment:
   ditto 
   




[GitHub] [flink] walterddr commented on a change in pull request #8746: [hotfix][FLINK-11120][table]fix the bug of timestampadd handles time

2019-11-27 Thread GitBox
walterddr commented on a change in pull request #8746: 
[hotfix][FLINK-11120][table]fix the bug of timestampadd handles time
URL: https://github.com/apache/flink/pull/8746#discussion_r351533755
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/codegen/calls/ScalarOperators.scala
 ##
 @@ -909,7 +909,14 @@ object ScalarOperators {
 
   case (SqlTimeTypeInfo.TIME, TimeIntervalTypeInfo.INTERVAL_MILLIS) =>
 generateOperatorIfNotNull(nullCheck, SqlTimeTypeInfo.TIME, left, 
right) {
-(l, r) => s"$l $op ((int) ($r))"
+  (l, r) =>
+s"((($l % ${MILLIS_PER_DAY} == 0) ? ${MILLIS_PER_DAY} : $l) " +
+  s"+ (Math.toIntExact($r % ${MILLIS_PER_DAY} == 0 ? 0 : $r))) % ${MILLIS_PER_DAY}"
 
 Review comment:
   ditto 




[GitHub] [flink] walterddr commented on a change in pull request #8746: [hotfix][FLINK-11120][table]fix the bug of timestampadd handles time

2019-11-27 Thread GitBox
walterddr commented on a change in pull request #8746: 
[hotfix][FLINK-11120][table]fix the bug of timestampadd handles time
URL: https://github.com/apache/flink/pull/8746#discussion_r351533636
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
 ##
 @@ -186,7 +186,12 @@ object ScalarOperatorGens {
 
   case (TIME_WITHOUT_TIME_ZONE, INTERVAL_DAY_TIME) =>
 generateOperatorIfNotNull(ctx, new TimeType(), left, right) {
-  (l, r) => s"$l $op ((int) ($r))"
+  (l, r) => s"($l $op (Math.toIntExact($r))) % ${MILLIS_PER_DAY}"
 
 Review comment:
   ditto 
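The intent of the generated expression above is to keep a TIME value inside [0, MILLIS_PER_DAY) after adding an interval. A simplified Java illustration of that wrap-around — not the actual generated code — using `Math.floorMod`, which also keeps the result non-negative for negative intervals:

```java
/** Illustrative wrap-around for TIME ± INTERVAL arithmetic on milliseconds-of-day. */
public class TimeWrapSketch {

    static final long MILLIS_PER_DAY = 24 * 60 * 60 * 1000L; // 86_400_000

    /** Add an interval (millis) to a time-of-day and wrap into [0, MILLIS_PER_DAY). */
    static int addToTime(int timeMillis, long intervalMillis) {
        // floorMod keeps the result non-negative even when the interval is negative
        return (int) Math.floorMod(timeMillis + intervalMillis, MILLIS_PER_DAY);
    }
}
```

For example, 23:00 plus two hours wraps to 01:00 of the next day rather than overflowing past the end of the TIME range.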




[GitHub] [flink] flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor CatalogFunction by adding Language support

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor 
CatalogFunction by adding Language support
URL: https://github.com/apache/flink/pull/10327#issuecomment-558927258
 
 
   
   ## CI report:
   
   * c3a6514c8b6c984190d4d60fb50b6313851ebba5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138348193)
   * a76b2a241dbf3b5a02cc863bf709def9c7649077 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138424324)
   * 8c3202e3a0cb04975d2c4eef923481f67fc5d5b5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138466836)
   * fff1120a5b9d751dd5373b880fd655639ff161ee : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138486720)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor CatalogFunction by adding Language support

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor 
CatalogFunction by adding Language support
URL: https://github.com/apache/flink/pull/10327#issuecomment-558927258
 
 
   
   ## CI report:
   
   * c3a6514c8b6c984190d4d60fb50b6313851ebba5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138348193)
   * a76b2a241dbf3b5a02cc863bf709def9c7649077 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138424324)
   * 8c3202e3a0cb04975d2c4eef923481f67fc5d5b5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138466836)
   * fff1120a5b9d751dd5373b880fd655639ff161ee : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #10327: [FLINK-14914][table] refactor CatalogFunction by adding Language support

2019-11-27 Thread GitBox
HuangZhenQiu commented on a change in pull request #10327: [FLINK-14914][table] 
refactor CatalogFunction by adding Language support
URL: https://github.com/apache/flink/pull/10327#discussion_r351503284
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogFunctionImpl.java
 ##
 @@ -30,10 +30,17 @@
  */
 public class CatalogFunctionImpl implements CatalogFunction {
private final String className; // Fully qualified class name of the 
function
+   private final FunctionLanguage functionLanguage;
 
public CatalogFunctionImpl(String className) {
+   this(className, FunctionLanguage.JAVA);
+   }
+
+   public CatalogFunctionImpl(String className, FunctionLanguage 
functionLanguage) {
checkArgument(!StringUtils.isNullOrWhitespaceOnly(className), 
"className cannot be null or empty");
+   checkArgument(functionLanguage != null, "functionLanguage 
cannot be null");
this.className = className;
+   this.functionLanguage = functionLanguage;
 
 Review comment:
   Done




[GitHub] [flink] bowenli86 commented on a change in pull request #10327: [FLINK-14914][table] refactor CatalogFunction by adding Language support

2019-11-27 Thread GitBox
bowenli86 commented on a change in pull request #10327: [FLINK-14914][table] 
refactor CatalogFunction by adding Language support
URL: https://github.com/apache/flink/pull/10327#discussion_r351497173
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogFunctionImpl.java
 ##
 @@ -30,10 +30,17 @@
  */
 public class CatalogFunctionImpl implements CatalogFunction {
private final String className; // Fully qualified class name of the 
function
+   private final FunctionLanguage functionLanguage;
 
public CatalogFunctionImpl(String className) {
+   this(className, FunctionLanguage.JAVA);
+   }
+
+   public CatalogFunctionImpl(String className, FunctionLanguage 
functionLanguage) {
checkArgument(!StringUtils.isNullOrWhitespaceOnly(className), 
"className cannot be null or empty");
+   checkArgument(functionLanguage != null, "functionLanguage 
cannot be null");
this.className = className;
+   this.functionLanguage = functionLanguage;
 
 Review comment:
   this can be `this.functionLanguage=checkNotNull(functionLanguage, 
"functionLanguage cannot be null")`
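The suggested idiom combines the null check with the field assignment in one expression. A minimal sketch using the JDK's `Objects.requireNonNull`, which behaves like Flink's `Preconditions.checkNotNull` (the class below is a simplified stand-in, with a `String` in place of the FunctionLanguage enum):

```java
import java.util.Objects;

/** Sketch of the check-and-assign idiom suggested in the review. */
public class CatalogFunctionSketch {

    private final String className;
    private final String functionLanguage; // simplified stand-in for the enum

    public CatalogFunctionSketch(String className, String functionLanguage) {
        this.className = Objects.requireNonNull(className, "className cannot be null");
        // null check and assignment in a single expression, as suggested
        this.functionLanguage =
                Objects.requireNonNull(functionLanguage, "functionLanguage cannot be null");
    }
}
```

Compared with a separate `checkArgument(functionLanguage != null, ...)` line, this keeps the precondition next to the field it guards.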




[GitHub] [flink] flinkbot edited a comment on issue #9434: [FLINK-13634][Formats]: added module to compress data using StreamingFileSink

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #9434: [FLINK-13634][Formats]: added module 
to compress data using StreamingFileSink
URL: https://github.com/apache/flink/pull/9434#issuecomment-521185651
 
 
   
   ## CI report:
   
   * 05d6f916b6c732f7d0f30f96bad6f7b31d078311 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123197266)
   * 88178209460dc2a4735390402aa46904a5b8a8df : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123202789)
   * 3c574a9f45b17372491a47fb2774941f02bf43b4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138450371)
   * fe1e751681c66abfe6e162b741444b039446445a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138466849)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor CatalogFunction by adding Language support

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor 
CatalogFunction by adding Language support
URL: https://github.com/apache/flink/pull/10327#issuecomment-558927258
 
 
   
   ## CI report:
   
   * c3a6514c8b6c984190d4d60fb50b6313851ebba5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138348193)
   * a76b2a241dbf3b5a02cc863bf709def9c7649077 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138424324)
   * 8c3202e3a0cb04975d2c4eef923481f67fc5d5b5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138466836)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10342: [FLINK-14967][table] Add a utility for creating data types via reflection

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10342: [FLINK-14967][table] Add a utility 
for creating data types via reflection
URL: https://github.com/apache/flink/pull/10342#issuecomment-559202248
 
 
   
   ## CI report:
   
   * d7229d3906b6f1248c2eeebc174a5c9f043f5a27 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138463379)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #9434: [FLINK-13634][Formats]: added module to compress data using StreamingFileSink

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #9434: [FLINK-13634][Formats]: added module 
to compress data using StreamingFileSink
URL: https://github.com/apache/flink/pull/9434#issuecomment-521185651
 
 
   
   ## CI report:
   
   * 05d6f916b6c732f7d0f30f96bad6f7b31d078311 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123197266)
   * 88178209460dc2a4735390402aa46904a5b8a8df : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123202789)
   * 3c574a9f45b17372491a47fb2774941f02bf43b4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138450371)
   * fe1e751681c66abfe6e162b741444b039446445a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138466849)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10146: [FLINK-14188][runtime] TaskExecutor derive and register with default slot resource profile

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10146: [FLINK-14188][runtime] TaskExecutor 
derive and register with default slot resource profile
URL: https://github.com/apache/flink/pull/10146#issuecomment-552307115
 
 
   
   ## CI report:
   
   * 25f9e4b87846e5a736aa329c834f82962e1f50c4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135875925)
   * c4b4f4d5c88a1a5009325a6260cf2d91ed69ca96 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135885120)
   * 2d5269d2498d96550682d113d61382b7a9ac9721 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135902960)
   * 5ee8701f76b9e6f2dcb451eb988371bea3b0a38d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136486584)
   * 2c52d7157f5e1b25dfaa00fe50cf7b04e7d6a97e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136497720)
   * 2d734eeff7480adc2ea1f3695f31ba5a169f3a05 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136501966)
   * 4edae43ff7eaf0357f5e8604b02b88749c8d153f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136522752)
   * 927a11838172fe792636923e9378677f92a48b73 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138028348)
   * a73de7a3fc63fe2d2a9bd12e03efb45bfcbf9ca8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138455883)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor CatalogFunction by adding Language support

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor 
CatalogFunction by adding Language support
URL: https://github.com/apache/flink/pull/10327#issuecomment-558927258
 
 
   
   ## CI report:
   
   * c3a6514c8b6c984190d4d60fb50b6313851ebba5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138348193)
   * a76b2a241dbf3b5a02cc863bf709def9c7649077 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138424324)
   * 8c3202e3a0cb04975d2c4eef923481f67fc5d5b5 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138466836)
   
   


[GitHub] [flink] bowenli86 commented on a change in pull request #10318: [FLINK-14721][hive]HiveTableSource implement LimitableTableSource interface

2019-11-27 Thread GitBox
bowenli86 commented on a change in pull request #10318: 
[FLINK-14721][hive]HiveTableSource implement LimitableTableSource interface
URL: https://github.com/apache/flink/pull/10318#discussion_r351443919
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableInputFormat.java
 ##
 @@ -91,12 +91,17 @@
private int[] fields;
// Remember whether a row instance is reused. No need to set partition fields for reused rows
private transient boolean rowReused;
+   //We should limit the input read count of this splits, -1 represents no limit.
+   private long limit;
+   private boolean isLimit = false;
 
 Review comment:
   +1




[GitHub] [flink] flinkbot edited a comment on issue #9434: [FLINK-13634][Formats]: added module to compress data using StreamingFileSink

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #9434: [FLINK-13634][Formats]: added module 
to compress data using StreamingFileSink
URL: https://github.com/apache/flink/pull/9434#issuecomment-521185651
 
 
   
   ## CI report:
   
   * 05d6f916b6c732f7d0f30f96bad6f7b31d078311 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123197266)
   * 88178209460dc2a4735390402aa46904a5b8a8df : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123202789)
   * 3c574a9f45b17372491a47fb2774941f02bf43b4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138450371)
   * fe1e751681c66abfe6e162b741444b039446445a : UNKNOWN
   
   


[GitHub] [flink] eskabetxe edited a comment on issue #9434: [FLINK-13634][Formats]: added module to compress data using StreamingFileSink

2019-11-27 Thread GitBox
eskabetxe edited a comment on issue #9434: [FLINK-13634][Formats]: added module 
to compress data using StreamingFileSink
URL: https://github.com/apache/flink/pull/9434#issuecomment-559196761
 
 
   I think we should be prepared to add other implementations without problem 
if the codecs aren't present on hadoop
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10342: [FLINK-14967][table] Add a utility for creating data types via reflection

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10342: [FLINK-14967][table] Add a utility 
for creating data types via reflection
URL: https://github.com/apache/flink/pull/10342#issuecomment-559202248
 
 
   
   ## CI report:
   
   * d7229d3906b6f1248c2eeebc174a5c9f043f5a27 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138463379)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10341: [FLINK-14974][runtime] Calculate managed memory fractions with BigDecimal and round down it properly

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10341: [FLINK-14974][runtime] Calculate 
managed memory fractions with BigDecimal and round down it properly
URL: https://github.com/apache/flink/pull/10341#issuecomment-559181765
 
 
   
   ## CI report:
   
   * a4045089cf8d671a4160c09e60cb3fece7fbd881 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138455849)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10330: [FLINK-14189][runtime] Extend TaskExecutor to support dynamic slot allocation

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10330: [FLINK-14189][runtime] Extend 
TaskExecutor to support dynamic slot allocation
URL: https://github.com/apache/flink/pull/10330#issuecomment-558970102
 
 
   
   ## CI report:
   
   * 38f2c1c450122cae9aa99258c19c6d07e998 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138360699)
   * 14ecf374fe3f6587e6eee29a39bf15190fac269c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138375450)
   * 761359a2f2509a06483e422e5e592b66e2e5661a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138401670)
   * 88630bcdfd9c05fd352a422a88e59f26fba4dc7c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138455828)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor CatalogFunction by adding Language support

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10327: [FLINK-14914][table] refactor 
CatalogFunction by adding Language support
URL: https://github.com/apache/flink/pull/10327#issuecomment-558927258
 
 
   
   ## CI report:
   
   * c3a6514c8b6c984190d4d60fb50b6313851ebba5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138348193)
   * a76b2a241dbf3b5a02cc863bf709def9c7649077 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138424324)
   * 8c3202e3a0cb04975d2c4eef923481f67fc5d5b5 : UNKNOWN
   
   


[GitHub] [flink] HuangZhenQiu commented on issue #10327: [FLINK-14914][table] refactor CatalogFunction by adding Language support

2019-11-27 Thread GitBox
HuangZhenQiu commented on issue #10327: [FLINK-14914][table] refactor 
CatalogFunction by adding Language support
URL: https://github.com/apache/flink/pull/10327#issuecomment-559205800
 
 
   @bowenli86 @xuefuz Thanks for the feedback. I just updated the PR 
accordingly.




[GitHub] [flink] flinkbot edited a comment on issue #9434: [FLINK-13634][Formats]: added module to compress data using StreamingFileSink

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #9434: [FLINK-13634][Formats]: added module 
to compress data using StreamingFileSink
URL: https://github.com/apache/flink/pull/9434#issuecomment-521185651
 
 
   
   ## CI report:
   
   * 05d6f916b6c732f7d0f30f96bad6f7b31d078311 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123197266)
   * 88178209460dc2a4735390402aa46904a5b8a8df : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123202789)
   * 3c574a9f45b17372491a47fb2774941f02bf43b4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138450371)
   
   


[GitHub] [flink] bowenli86 commented on issue #10327: [FLINK-14914][table] refactor CatalogFunction by adding Language support

2019-11-27 Thread GitBox
bowenli86 commented on issue #10327: [FLINK-14914][table] refactor 
CatalogFunction by adding Language support
URL: https://github.com/apache/flink/pull/10327#issuecomment-559204127
 
 
   same concerns as Xuefu's, otherwise lgtm




[GitHub] [flink] flinkbot commented on issue #10342: [FLINK-14967][table] Add a utility for creating data types via reflection

2019-11-27 Thread GitBox
flinkbot commented on issue #10342: [FLINK-14967][table] Add a utility for 
creating data types via reflection
URL: https://github.com/apache/flink/pull/10342#issuecomment-559202248
 
 
   
   ## CI report:
   
   * d7229d3906b6f1248c2eeebc174a5c9f043f5a27 : UNKNOWN
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10339: [FLINK-14976][cassandra] Release semaphore on all Throwable's in send()

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10339: [FLINK-14976][cassandra] Release 
semaphore on all Throwable's in send()
URL: https://github.com/apache/flink/pull/10339#issuecomment-559115891
 
 
   
   ## CI report:
   
   * 65f4f8552770a2e1fd05c604a5cc074f32487c74 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/138425013)
   * 8efd77647534f028e8549ac3a210447b78e8d6de : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138439684)
   
   


[GitHub] [flink] eskabetxe commented on issue #9434: [FLINK-13634][Formats]: added module to compress data using StreamingFileSink

2019-11-27 Thread GitBox
eskabetxe commented on issue #9434: [FLINK-13634][Formats]: added module to 
compress data using StreamingFileSink
URL: https://github.com/apache/flink/pull/9434#issuecomment-559196761
 
 
   I think we should be prepared to add other implementations without problem, 
for example the flink-sequence-files could be absorbed by this module
   
   
   




[jira] [Created] (FLINK-14980) add documentation and example for function DDL

2019-11-27 Thread Bowen Li (Jira)
Bowen Li created FLINK-14980:


 Summary: add documentation and example for function DDL
 Key: FLINK-14980
 URL: https://issues.apache.org/jira/browse/FLINK-14980
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: Bowen Li
Assignee: Zhenqiu Huang
 Fix For: 1.10.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10146: [FLINK-14188][runtime] TaskExecutor derive and register with default slot resource profile

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10146: [FLINK-14188][runtime] TaskExecutor 
derive and register with default slot resource profile
URL: https://github.com/apache/flink/pull/10146#issuecomment-552307115
 
 
   
   ## CI report:
   
   * 25f9e4b87846e5a736aa329c834f82962e1f50c4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135875925)
   * c4b4f4d5c88a1a5009325a6260cf2d91ed69ca96 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135885120)
   * 2d5269d2498d96550682d113d61382b7a9ac9721 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135902960)
   * 5ee8701f76b9e6f2dcb451eb988371bea3b0a38d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136486584)
   * 2c52d7157f5e1b25dfaa00fe50cf7b04e7d6a97e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136497720)
   * 2d734eeff7480adc2ea1f3695f31ba5a169f3a05 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136501966)
   * 4edae43ff7eaf0357f5e8604b02b88749c8d153f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136522752)
   * 927a11838172fe792636923e9378677f92a48b73 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138028348)
   * a73de7a3fc63fe2d2a9bd12e03efb45bfcbf9ca8 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138455883)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10341: [FLINK-14974][runtime] Calculate managed memory fractions with BigDecimal and round down it properly

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10341: [FLINK-14974][runtime] Calculate 
managed memory fractions with BigDecimal and round down it properly
URL: https://github.com/apache/flink/pull/10341#issuecomment-559181765
 
 
   
   ## CI report:
   
   * a4045089cf8d671a4160c09e60cb3fece7fbd881 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138455849)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10336: [FLINK-14935][task, runtime] Use RunnableWithException in the Mailbox

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10336: [FLINK-14935][task,runtime] Use 
RunnableWithException in the Mailbox
URL: https://github.com/apache/flink/pull/10336#issuecomment-559103160
 
 
   
   ## CI report:
   
   * 4f9dc7a1f7ea2cbf6dfc34ed6e5359cf7097eef2 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138416595)
   * 8a657ca1d480433d131a4dca43b7fdafb2771252 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138433596)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10330: [FLINK-14189][runtime] Extend TaskExecutor to support dynamic slot allocation

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10330: [FLINK-14189][runtime] Extend 
TaskExecutor to support dynamic slot allocation
URL: https://github.com/apache/flink/pull/10330#issuecomment-558970102
 
 
   
   ## CI report:
   
   * 38f2c1c450122cae9aa99258c19c6d07e998 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138360699)
   * 14ecf374fe3f6587e6eee29a39bf15190fac269c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138375450)
   * 761359a2f2509a06483e422e5e592b66e2e5661a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/138401670)
   * 88630bcdfd9c05fd352a422a88e59f26fba4dc7c : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/138455828)
   
   


[GitHub] [flink] flinkbot commented on issue #10342: [FLINK-14967][table] Add a utility for creating data types via reflection

2019-11-27 Thread GitBox
flinkbot commented on issue #10342: [FLINK-14967][table] Add a utility for 
creating data types via reflection
URL: https://github.com/apache/flink/pull/10342#issuecomment-559191223
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit d7229d3906b6f1248c2eeebc174a5c9f043f5a27 (Wed Nov 27 
17:52:57 UTC 2019)
   
   **Warnings:**
* **2 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot edited a comment on issue #10309: [FLINK-14909][runtime] Let tasks get scheduled in topological order when possible

2019-11-27 Thread GitBox
flinkbot edited a comment on issue #10309: [FLINK-14909][runtime] Let tasks get 
scheduled in topological order when possible
URL: https://github.com/apache/flink/pull/10309#issuecomment-558068110
 
 
   
   ## CI report:
   
   * 51e86fe9d2397879945677aee80da5983402afca : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138017542)
   * 05c35c051959237fe03cf96491c69f317fca0438 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/138366843)
   * 5cc1dad6b5c2c69434b633bce559ca6f171006bd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138370781)
   * 95aecab94c2ca6fe9337fa76e86866200189309b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138405642)
   * 2e7c4c54a4b98268f21c4f8196040b036c35cbd6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/138439651)
   
   


[jira] [Updated] (FLINK-14967) Add a utility for creating data types via reflection

2019-11-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14967:
---
Labels: pull-request-available  (was: )

> Add a utility for creating data types via reflection
> 
>
> Key: FLINK-14967
> URL: https://issues.apache.org/jira/browse/FLINK-14967
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
>
> As mentioned in FLIP-65, a utility will create data types from any kind of 
> class possibly enriched with {{DataTypeHint}} annotations.





[GitHub] [flink] twalthr opened a new pull request #10342: [FLINK-14967][table] Add a utility for creating data types via reflection

2019-11-27 Thread GitBox
twalthr opened a new pull request #10342: [FLINK-14967][table] Add a utility 
for creating data types via reflection
URL: https://github.com/apache/flink/pull/10342
 
 
   ## What is the purpose of the change
   
   This implements the data type extractor mentioned in FLIP-65. It is similar 
to Flink's core type information extractor but also adds a lot of SQL specific 
features and improves the overall user experience. In particular, it allows to 
annotate types, fields, and classes for parameterizing the extraction process. 
It is unified across Java and Scala.
   
   The following description is copied from `DataTypeHint`:
   
   ```
   /**
* A hint that influences the reflection-based extraction of a {@link 
DataType}.
*
* Data type hints can parameterize or replace the default extraction 
logic of individual function parameters
* and return types, structured classes, or fields of structured classes. An 
implementer can choose to
* what extent the default extraction logic should be modified.
*
* The following examples show how to explicitly specify data types, how 
to parameterize the extraction
* logic, or how to accept any data type as an input data type:
*
* {@code @DataTypeHint("INT")} defines an INT data type with a default 
conversion class.
*
* {@code @DataTypeHint(value = "TIMESTAMP(3)", bridgedTo = 
java.sql.Timestamp.class)} defines a TIMESTAMP
* data type of millisecond precision with an explicit conversion class.
*
* {@code @DataTypeHint(value = "RAW", rawSerializer = 
MyCustomSerializer.class)} defines a RAW data type
* with a custom serializer class.
*
* {@code @DataTypeHint(version = V1, allowRawGlobally = TRUE)} 
parameterizes the extraction by requesting
* a extraction logic version of 1 and allowing the RAW data type in this 
structured type (and possibly
* nested fields).
*
* {@code @DataTypeHint(bridgedTo = MyPojo.class, allowRawGlobally = 
TRUE)} defines that a type should be
* extracted from the given conversion class but with parameterized 
extraction for allowing RAW types.
*
* {@code @DataTypeHint(inputGroup = ANY)} defines that the input 
validation should accept any
* data type.
*
* Note: All hint parameters are optional. Hint parameters defined on top 
of a structured type are
* inherited by all (deeply) nested fields unless annotated differently. For 
example, all occurrences of
* {@link java.math.BigDecimal} will be extracted as {@code DECIMAL(12, 2)} 
if the enclosing structured
* class is annotated with {@code @DataTypeHint(defaultDecimalPrecision = 
12, defaultDecimalScale = 2)}. Individual
* field annotations allow to deviate from those default values.
*/
```
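The hint semantics described above can be sketched with a self-contained example. Note that `DataTypeHint` below is a minimal local stand-in with only the `value` and `bridgedTo` members used here; the real annotation ships with Flink (in `org.apache.flink.table.annotation`) and supports the many additional parameters listed in the javadoc, so this snippet only illustrates the annotation-plus-reflection pattern, not the actual extractor:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class DataTypeHintSketch {

    // Stand-in for Flink's @DataTypeHint with just the two members used below.
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.FIELD, ElementType.TYPE})
    @interface DataTypeHint {
        String value() default "";
        Class<?> bridgedTo() default Void.class;
    }

    // A structured class whose fields parameterize the extraction, mirroring the
    // examples from the description above.
    static class Order {
        @DataTypeHint("INT")
        int amount;

        @DataTypeHint(value = "TIMESTAMP(3)", bridgedTo = java.sql.Timestamp.class)
        java.sql.Timestamp ts;

        java.math.BigDecimal price; // no hint: default extraction rules would apply
    }

    public static void main(String[] args) {
        // Reflect over the fields the way an extractor would to pick up the hints.
        for (Field f : Order.class.getDeclaredFields()) {
            DataTypeHint hint = f.getAnnotation(DataTypeHint.class);
            System.out.println(f.getName() + " -> " + (hint == null ? "<default>" : hint.value()));
        }
    }
}
```

Running the sketch prints one line per field, showing which fields carry an explicit hint and which fall back to default extraction.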
   
   ## Brief change log
   
   - Data type hint annotation added
   - Data type extractor added
   
   ## Verifying this change
   
   This change added tests and can be verified as follows: 
`DataTypeExtractorTest` and `DataTypeExtractorScalaTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): yes
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? JavaDocs
   




[jira] [Commented] (FLINK-14968) Kerberized YARN on Docker test (custom fs plugin) fails on Travis

2019-11-27 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983776#comment-16983776
 ] 

Aljoscha Krettek commented on FLINK-14968:
--

[~fly_in_gis] You might be right; I'll keep investigating this myself, but it 
would be good if more people could look at it as well.

Maybe the reason is the same as FLINK-14834 in the end. 

> Kerberized YARN on Docker test (custom fs plugin) fails on Travis
> -
>
> Key: FLINK-14968
> URL: https://issues.apache.org/jira/browse/FLINK-14968
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Tests
>Affects Versions: 1.10.0
>Reporter: Gary Yao
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> This change made the test flaky: 
> https://github.com/apache/flink/commit/749965348170e4608ff2a23c9617f67b8c341df5.
>  It changes the job to have two sources instead of one, which, under normal 
> circumstances, requires too many slots to run, so the job fails.
> The setup of this test is very intricate, we configure YARN to have two 
> NodeManagers with 2500mb memory each: 
> https://github.com/apache/flink/blob/413a77157caf25dbbfb8b0caaf2c9e12c7374d98/flink-end-to-end-tests/test-scripts/docker-hadoop-secure-cluster/config/yarn-site.xml#L39.
>  We run the job with parallelism 3 and configure Flink to use 1000mb as 
> TaskManager memory and 1000mb of JobManager memory. This means that the job 
> fits into the YARN memory budget but more TaskManagers would not fit. We also 
> don't simply increase the YARN resources because we want the Flink job to use 
> TMs on different NMs: we once had a bug where Kerberos config file shipping 
> was not working correctly, but the bug did not materialise if all TMs were 
> on the same NM.
> https://api.travis-ci.org/v3/job/612782888/log.txt
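The memory arithmetic above (two NodeManagers with 2500mb each, 1000mb containers for the JobManager and three TaskManagers) can be sanity-checked with a small first-fit sketch. This only illustrates the budget reasoning in the comment, not YARN's actual scheduler:

```java
import java.util.Arrays;

class YarnBudgetSketch {
    // Returns true if every requested container can be placed on some node,
    // using naive first-fit placement.
    static boolean fits(int[] nodeCapacityMb, int[] containerMb) {
        int[] free = Arrays.copyOf(nodeCapacityMb, nodeCapacityMb.length);
        for (int c : containerMb) {
            boolean placed = false;
            for (int i = 0; i < free.length; i++) {
                if (free[i] >= c) {
                    free[i] -= c;
                    placed = true;
                    break;
                }
            }
            if (!placed) {
                return false;
            }
        }
        return true;
    }
}
```

Four 1000mb containers (1 JM + 3 TMs) fit on two 2500mb nodes, but a fifth would not, which matches the "more TaskManagers would not fit" claim.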



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] aljoscha commented on issue #9434: [FLINK-13634][Formats]: added module to compress data using StreamingFileSink

2019-11-27 Thread GitBox
aljoscha commented on issue #9434: [FLINK-13634][Formats]: added module to 
compress data using StreamingFileSink
URL: https://github.com/apache/flink/pull/9434#issuecomment-559187210
 
 
   Thanks for the quick response!
   
   I'm afraid I didn't express myself too clearly. We can still have the 
builder pattern, i.e. `CompressionWriterFactory` can have the builder methods 
for setting the extractor and the codec. The naming of `BulkWriter.Factory` is 
unfortunately not very good; it could have been called `BulkWriter.Builder` 
instead. I thought an `Extractor` was always needed, so the static method 
on `HadoopCompressionWriters` could have been `forExtractor(Extractor)`, 
returning the factory that has the methods for setting the compression 
codec. If that's not the case, you can also have a static method without 
parameters and do everything via the "builder".
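The suggested shape might look roughly like the hedged sketch below: one static entry point that returns the factory itself, which then carries fluent setters instead of a separate builder type. All names here (`forExtractor`, `withHadoopCompression`, the sketch classes) are assumptions for illustration, not the merged Flink API.

```java
import java.util.function.Function;

// Sketch of the factory with fluent setters playing the "builder" role.
class CompressionWriterFactorySketch<T> {
    private final Function<T, byte[]> extractor; // turns a record into bytes
    private String codecName = "none";           // compression codec to apply

    CompressionWriterFactorySketch(Function<T, byte[]> extractor) {
        this.extractor = extractor;
    }

    // Fluent setter for the codec; returns this so calls can be chained.
    CompressionWriterFactorySketch<T> withHadoopCompression(String codecName) {
        this.codecName = codecName;
        return this;
    }

    String codec() {
        return codecName;
    }
}

class HadoopCompressionWritersSketch {
    // Static entry point taking the always-required extractor, as proposed.
    static <T> CompressionWriterFactorySketch<T> forExtractor(
            Function<T, byte[]> extractor) {
        return new CompressionWriterFactorySketch<>(extractor);
    }
}
```

Usage would then read as one chained expression, e.g. `forExtractor(...).withHadoopCompression("Gzip")`.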
   
   Also, we shouldn't have both `hadoopcompress` and `hadoop`. We can either 
have only `hadoopcompress`, or `compress` with `compress.hadoop` inside it. 
Are you actually planning to add more compression codecs besides Hadoop? 
Planning for those future additions would mean potentially making the 
interfaces more independent of Hadoop.
   
   Some Javadoc comments are outdated now, for example 
`CompressionWriterFactory` has
   ```
* A {@link BulkWriter} implementation that compress file.
   ```
   
   Sorry for the inconvenience!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


  1   2   3   4   5   >