[GitHub] [flink] rmetzger commented on issue #11306: [FLINK-16122][AZP] Upload build debug logs as artifacts

2020-03-04 Thread GitBox
rmetzger commented on issue #11306: [FLINK-16122][AZP] Upload build debug logs 
as artifacts
URL: https://github.com/apache/flink/pull/11306#issuecomment-595081187
 
 
   I'm happy with the tests: 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5935&view=results


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] rmetzger commented on issue #11305: [FLINK-16410][e2e][build] Add explicit flink-runtime dependency

2020-03-04 Thread GitBox
rmetzger commented on issue #11305: [FLINK-16410][e2e][build] Add explicit 
flink-runtime dependency
URL: https://github.com/apache/flink/pull/11305#issuecomment-595075325
 
 
   @flinkbot run azure


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13987) add log list and read log by name

2020-03-04 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-13987:
---
Description: 
As the job runs, its log files grow large.

Since the application runs on a JVM, the user sometimes needs to see the GC 
log, but it is not currently exposed.

Therefore, we need new APIs (a usage sketch follows the list):
 * list all log files of a taskmanager
 ** /taskmanagers/taskmanagerid/logs
 ** 
{code:java}
{
  "logs": [
    {
      "name": "taskmanager.log",
      "size": 12529
    }
  ]
}
{code}

 * read a taskmanager log file by name
 ** /taskmanagers/logs/[filename]
 ** response: same as the taskmanager's log
 * list all log files of the jobmanager
 ** /jobmanager/logs
 ** 
{code:java}
{
  "logs": [
    {
      "name": "jobmanager.log",
      "size": 12529
    }
  ]
}
{code}

 * read a jobmanager log file by name
 ** /jobmanager/logs/[filename]
 ** response: same as the jobmanager's log
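A minimal sketch of how a client could exercise the proposed endpoints, assuming the REST address is localhost:8081; the helper and the placeholder taskmanager id are hypothetical, and these paths are the proposal above, not an existing Flink API:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class LogEndpointSketch {

    // Fetch one REST path from the assumed jobmanager address.
    static String get(String path) throws IOException {
        URL url = new URL("http://localhost:8081" + path);
        try (InputStream in = url.openStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        String taskManagerId = "tm-1"; // placeholder id
        // List all log files of one taskmanager; should return the JSON above.
        System.out.println(get("/taskmanagers/" + taskManagerId + "/logs"));
        // Read one log file by name; the response body is the raw log content.
        System.out.println(get("/taskmanagers/logs/taskmanager.log"));
        // The same pair of calls is proposed for the jobmanager.
        System.out.println(get("/jobmanager/logs"));
        System.out.println(get("/jobmanager/logs/jobmanager.log"));
    }
}
{code}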

  was:
As the job runs, its log files grow large.

Since the application runs on a JVM, the user sometimes needs to see the GC 
log, but it is not currently exposed.

Therefore, we need new APIs:
 * list all log files of a taskmanager
 ** /taskmanagers/taskmanagerid/logs
 ** 
{code:java}
{
  "logs": [
    {
      "name": "taskmanager.log",
      "size": 12529
    }
  ]
}
{code}

 * read a taskmanager log file by name
 ** /taskmanagers/log/[filename]
 ** response: same as the taskmanager's log
 * list all log files of the jobmanager
 ** /jobmanager/logs
 ** 
{code:java}
{
  "logs": [
    {
      "name": "jobmanager.log",
      "size": 12529
    }
  ]
}
{code}

 * read a jobmanager log file by name
 ** /jobmanager/log/[filename]
 ** response: same as the jobmanager's log


> add log list and read log by name
> -
>
> Key: FLINK-13987
> URL: https://issues.apache.org/jira/browse/FLINK-13987
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST
>Reporter: lining
>Assignee: lining
>Priority: Major
>
> As the job runs, its log files grow large.
> Since the application runs on a JVM, the user sometimes needs to see the GC
> log, but it is not currently exposed.
> Therefore, we need new APIs:
>  * list all log files of a taskmanager
>  ** /taskmanagers/taskmanagerid/logs
>  ** 
> {code:java}
> {
>   "logs": [
>     {
>       "name": "taskmanager.log",
>       "size": 12529
>     }
>   ]
> }
> {code}
>  * read a taskmanager log file by name
>  ** /taskmanagers/logs/[filename]
>  ** response: same as the taskmanager's log
>  * list all log files of the jobmanager
>  ** /jobmanager/logs
>  ** 
> {code:java}
> {
>   "logs": [
>     {
>       "name": "jobmanager.log",
>       "size": 12529
>     }
>   ]
> }
> {code}
>  * read a jobmanager log file by name
>  ** /jobmanager/logs/[filename]
>  ** response: same as the jobmanager's log



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and 
read log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-592336212
 
 
   
   ## CI report:
   
   * e474761e49927bb30bac6a297e992dc2ec98e01a UNKNOWN
   * 306b1e6622445245bdc59517d23790bac9d5ea52 UNKNOWN
   * 436c1a75f3c98a2ccd7cc1c53445449d396bb916 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/151891829) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5947)
 
   * 7a612b90d60e58bd59164575f3eef28fdfa5eb01 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11061: [FLINK-15782] [connectors/jdbc] JDBC sink DataStream API

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11061: [FLINK-15782] [connectors/jdbc] JDBC 
sink DataStream API
URL: https://github.com/apache/flink/pull/11061#issuecomment-584620778
 
 
   
   ## CI report:
   
   * c48ad371b961435b465aa1fd879d9d08c7d27f81 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151231245) 
   * 246a3cb4a62b59256ced746abfecfa18d5064745 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151894618) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5948)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on issue #11304: [FLINK-16418][hive] Hide hive version to avoid user confusion

2020-03-04 Thread GitBox
lirui-apache commented on issue #11304: [FLINK-16418][hive] Hide hive version 
to avoid user confusion
URL: https://github.com/apache/flink/pull/11304#issuecomment-595068612
 
 
   @bowenli86 @JingsongLi For HDP and CDH users, I think they should probably 
use the HDP/CDH versions of the Hive jars. For example, HDP Hive-1.2.1 has 
cherry-picked many patches from newer versions and therefore differs from 
Apache Hive-1.2.1. So using the HDP version usually means users get extra bug 
fixes that the Apache version lacks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on issue #11236: [FLINK-16269][FLINK-16108][table-planner-blink] Fix schema of query and sink do not match when generic or POJO type is requested

2020-03-04 Thread GitBox
wuchong commented on issue #11236: 
[FLINK-16269][FLINK-16108][table-planner-blink] Fix schema of query and sink do 
not match when generic or POJO type is requested
URL: https://github.com/apache/flink/pull/11236#issuecomment-595066832
 
 
   Hi @JingsongLi , `LocalDateTime` will be converted into 
`DataTypes.TIMESTAMP(3).bridgedTo(LocalDateTime.class)`, so none of the 
supported external types will be converted into a Raw type. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11061: [FLINK-15782] [connectors/jdbc] JDBC sink DataStream API

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11061: [FLINK-15782] [connectors/jdbc] JDBC 
sink DataStream API
URL: https://github.com/apache/flink/pull/11061#issuecomment-584620778
 
 
   
   ## CI report:
   
   * c48ad371b961435b465aa1fd879d9d08c7d27f81 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151231245) 
   * 246a3cb4a62b59256ced746abfecfa18d5064745 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong edited a comment on issue #11236: [FLINK-16269][FLINK-16108][table-planner-blink] Fix schema of query and sink do not match when generic or POJO type is requested

2020-03-04 Thread GitBox
wuchong edited a comment on issue #11236: 
[FLINK-16269][FLINK-16108][table-planner-blink] Fix schema of query and sink do 
not match when generic or POJO type is requested
URL: https://github.com/apache/flink/pull/11236#issuecomment-595066832
 
 
   Hi @JingsongLi , `LocalDateTime` will be converted into 
`DataTypes.TIMESTAMP(3).bridgedTo(LocalDateTime.class)`, so none of the 
supported external types will be converted into a Raw type. See 
`LegacyTypeInfoDataTypeConverter`.
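   For illustration, a minimal sketch of the mapping described above; `DataTypes.TIMESTAMP` and `bridgedTo` are real Table API constructs, but this standalone snippet only demonstrates the conversion and is not code from this PR:

```java
import java.time.LocalDateTime;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class TimestampBridgingSketch {
    public static void main(String[] args) {
        // A LocalDateTime field maps to TIMESTAMP(3) bridged to LocalDateTime,
        // so it keeps a structured type instead of falling back to Raw.
        DataType t = DataTypes.TIMESTAMP(3).bridgedTo(LocalDateTime.class);
        System.out.println(t); // prints TIMESTAMP(3)
    }
}
```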


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and 
read log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-592336212
 
 
   
   ## CI report:
   
   * e474761e49927bb30bac6a297e992dc2ec98e01a UNKNOWN
   * c07d058a0f55764b319d64034b4f1bfdc26d99e6 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151502448) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5854)
 
   * 306b1e6622445245bdc59517d23790bac9d5ea52 UNKNOWN
   * 436c1a75f3c98a2ccd7cc1c53445449d396bb916 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151891829) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5947)
 
   * 7a612b90d60e58bd59164575f3eef28fdfa5eb01 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and 
read log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-592336212
 
 
   
   ## CI report:
   
   * e474761e49927bb30bac6a297e992dc2ec98e01a UNKNOWN
   * c07d058a0f55764b319d64034b4f1bfdc26d99e6 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151502448) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5854)
 
   * 306b1e6622445245bdc59517d23790bac9d5ea52 UNKNOWN
   * 436c1a75f3c98a2ccd7cc1c53445449d396bb916 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] AHeise commented on issue #11312: [FLINK-16400][fs] Fixing e2e tests that directly use Hadoop fs.

2020-03-04 Thread GitBox
AHeise commented on issue #11312: [FLINK-16400][fs] Fixing e2e tests that 
directly use Hadoop fs.
URL: https://github.com/apache/flink/pull/11312#issuecomment-595057353
 
 
   There is still a test failure on Azure (e2e), but it seems to be unrelated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16302) add log list and read log by name for taskmanager

2020-03-04 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-16302:
---
Description: 
* list all log files of a taskmanager (a parsing sketch follows the list)
 ** /taskmanagers/taskmanagerid/logs
 ** 
{code:java}
{
  "logs": [
    {
      "name": "taskmanager.log",
      "size": 12529
    }
  ]
}
{code}

 * read a taskmanager log file by name
 ** /taskmanagers/logs/[filename]
 ** response: same as the taskmanager's log
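A minimal sketch of parsing the proposed response with Jackson; the POJO names here are illustrative, not the handler classes this issue will add:

{code:java}
import java.util.List;

import com.fasterxml.jackson.databind.ObjectMapper;

public class LogListParseSketch {

    // Illustrative POJOs mirroring the JSON shape above.
    public static class LogInfo {
        public String name;
        public long size;
    }

    public static class LogList {
        public List<LogInfo> logs;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"logs\": [{\"name\": \"taskmanager.log\", \"size\": 12529}]}";
        LogList list = new ObjectMapper().readValue(json, LogList.class);
        System.out.println(list.logs.get(0).name + " " + list.logs.get(0).size);
    }
}
{code}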

  was:
* list all log files of a taskmanager
 ** /taskmanagers/taskmanagerid/logs
 ** 
{code:java}
{
  "logs": [
    {
      "name": "taskmanager.log",
      "size": 12529
    }
  ]
}
{code}

 * read a taskmanager log file by name
 ** /taskmanagers/log/[filename]
 ** response: same as the taskmanager's log


> add log list and read log by name for taskmanager
> -
>
> Key: FLINK-16302
> URL: https://issues.apache.org/jira/browse/FLINK-16302
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST
>Reporter: lining
>Assignee: lining
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * list all log files of a taskmanager
>  ** /taskmanagers/taskmanagerid/logs
>  ** 
> {code:java}
> {
>   "logs": [
>     {
>       "name": "taskmanager.log",
>       "size": 12529
>     }
>   ]
> }
> {code}
>  * read a taskmanager log file by name
>  ** /taskmanagers/logs/[filename]
>  ** response: same as the taskmanager's log



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16005) Propagate yarn.application.classpath from client to TaskManager Classpath

2020-03-04 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051846#comment-17051846
 ] 

Zhenqiu Huang commented on FLINK-16005:
---

[~trohrmann][~fly_in_gis] 
We are on the same page now. I will create a PR in the coming two days.

> Propagate yarn.application.classpath from client to TaskManager Classpath
> -
>
> Key: FLINK-16005
> URL: https://issues.apache.org/jira/browse/FLINK-16005
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Zhenqiu Huang
>Priority: Major
>
> When Flink users want to override the Hadoop YARN container classpath, they 
> should only need to specify yarn.application.classpath in yarn-site.xml on 
> the client side. But currently, the classpath setting is only applied to the 
> Flink application master; the TaskManager classpath is still determined by 
> the setting on the YARN host.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and 
read log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-592336212
 
 
   
   ## CI report:
   
   * e474761e49927bb30bac6a297e992dc2ec98e01a UNKNOWN
   * c07d058a0f55764b319d64034b4f1bfdc26d99e6 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151502448) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5854)
 
   * 306b1e6622445245bdc59517d23790bac9d5ea52 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on issue #11304: [FLINK-16418][hive] Hide hive version to avoid user confusion

2020-03-04 Thread GitBox
bowenli86 commented on issue #11304: [FLINK-16418][hive] Hide hive version to 
avoid user confusion
URL: https://github.com/apache/flink/pull/11304#issuecomment-595052986
 
 
   @lirui-apache can you confirm?
   
   Otherwise, I'm +1 on this PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15584) Give nested data type of ROWs in ValidationException

2020-03-04 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051840#comment-17051840
 ] 

Jingsong Lee commented on FLINK-15584:
--

[~ayushsaxena]:D Ah... OK~

> Give nested data type of ROWs in ValidationException
> 
>
> Key: FLINK-15584
> URL: https://issues.apache.org/jira/browse/FLINK-15584
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.9.1, 1.10.0, 1.11.0
>Reporter: Benoît Paris
>Assignee: Ayush Saxena
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In 
> {code:java}
> INSERT INTO baz_sink
> SELECT
>   a,
>   ROW(b, c)
> FROM foo_source{code}
> Schema mismatch errors do not get reported with the proper level of detail, 
> yielding the following:
> Caused by: org.apache.flink.table.api.ValidationException: Field types of 
> query result and registered TableSink [baz_sink] do not match.
>  Query result schema: [a: Integer, EXPR$2: Row]
>  TableSink schema: [a: Integer, payload: Row]
> This leaves the user with an opaque 'Row' type to debug.
>  
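The requested detail is available from the type system itself; for instance, a nested row type can print its field types. A small sketch using the Table API's DataTypes, independent of the planner change this issue asks for:

{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class NestedRowTypeSketch {
    public static void main(String[] args) {
        // ROW(b INT, c STRING): printing the DataType shows the nested field
        // types that the current ValidationException message omits.
        DataType payload = DataTypes.ROW(
            DataTypes.FIELD("b", DataTypes.INT()),
            DataTypes.FIELD("c", DataTypes.STRING()));
        System.out.println(payload); // e.g. ROW<`b` INT, `c` STRING>
    }
}
{code}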



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15584) Give nested data type of ROWs in ValidationException

2020-03-04 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-15584:


Assignee: Ayush Saxena  (was: Ayush Saxena)

> Give nested data type of ROWs in ValidationException
> 
>
> Key: FLINK-15584
> URL: https://issues.apache.org/jira/browse/FLINK-15584
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.9.1, 1.10.0, 1.11.0
>Reporter: Benoît Paris
>Assignee: Ayush Saxena
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In 
> {code:java}
> INSERT INTO baz_sink
> SELECT
>   a,
>   ROW(b, c)
> FROM foo_source{code}
> Schema mismatch errors do not get reported with the proper level of detail, 
> yielding the following:
> Caused by: org.apache.flink.table.api.ValidationException: Field types of 
> query result and registered TableSink [baz_sink] do not match.
>  Query result schema: [a: Integer, EXPR$2: Row]
>  TableSink schema: [a: Integer, payload: Row]
> This leaves the user with an opaque 'Row' type to debug.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388097453
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandler.java
 ##
 @@ -249,7 +248,7 @@ private void decodeMsg(Object msg) throws Throwable {
NettyMessage.BufferResponse bufferOrEvent = 
(NettyMessage.BufferResponse) msg;
 
RemoteInputChannel inputChannel = 
inputChannels.get(bufferOrEvent.receiverId);
-   if (inputChannel == null) {
+   if (inputChannel == null || inputChannel.isReleased()) {
 
 Review comment:
   We should also cancel the request in the case of a null data buffer. 
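   A rough sketch of that suggestion, assuming `getBuffer()` on the decoded `BufferResponse` and the handler's `cancelRequestFor` are available in this PR's decoding path (both names are inferred from context, not verified against the final diff):

```java
// Inside decodeMsg(Object msg), after looking up the input channel:
if (inputChannel == null || inputChannel.isReleased() || bufferOrEvent.getBuffer() == null) {
    // No live consumer, or no buffer could be allocated for the data:
    // cancel the outstanding partition request instead of decoding further.
    cancelRequestFor(bufferOrEvent.receiverId);
    return;
}
```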


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388094614
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   channel,
+   false,
+   false,
+   false,
+   normalInputChannel.getInputChannelId(),
+   null);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
 
 Review comment:
   Except for `testDownstreamMessageDecodeWithReleasedInputChannel`, the other 
three tests share the same code path, so we can deduplicate them (one possible 
shape is sketched below).
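   For illustration, a shared helper the three tests could delegate to; the boolean parameter names are guesses at the flags passed to `testRepartitionMessagesAndDecode`, whose semantics are not visible in this excerpt:

```java
private void runDecodeTest(
        boolean hasEmptyBuffer,
        boolean hasBufferForReleasedChannel,
        boolean hasBufferForRemovedChannel) throws Exception {
    SingleInputGate inputGate = createSingleInputGate(1);
    RemoteInputChannel inputChannel =
        new BufferProviderRemoteInputChannel(inputGate, 3, BUFFER_SIZE);

    CreditBasedPartitionRequestClientHandler handler =
        new CreditBasedPartitionRequestClientHandler();
    handler.addInputChannel(inputChannel);

    EmbeddedChannel channel =
        new EmbeddedChannel(new NettyMessageClientDecoderDelegate(handler));

    testRepartitionMessagesAndDecode(
        channel,
        hasEmptyBuffer,
        hasBufferForReleasedChannel,
        hasBufferForRemovedChannel,
        inputChannel.getInputChannelId(),
        null);
}
```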


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16005) Propagate yarn.application.classpath from client to TaskManager Classpath

2020-03-04 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17051833#comment-17051833
 ] 

Yang Wang commented on FLINK-16005:
---

I agree that we could use {{flink.hadoop.*}} and {{flink.yarn.*}} as prefixes 
for Hadoop and YARN configuration; a sketch of that convention follows.
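A minimal sketch of the prefix convention, assuming the keys would travel in the regular Flink {{Configuration}}; the convention is still a proposal here, and the key values below are placeholders:

{code:java}
import org.apache.flink.configuration.Configuration;

public class PrefixedYarnConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Would be forwarded to YARN's yarn.application.classpath for both
        // the application master and the TaskManager containers.
        conf.setString(
            "flink.yarn.yarn.application.classpath",
            "$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*");
        // Analogous prefix for plain Hadoop configuration keys.
        conf.setString("flink.hadoop.dfs.replication", "2");
        System.out.println(conf);
    }
}
{code}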

> Propagate yarn.application.classpath from client to TaskManager Classpath
> -
>
> Key: FLINK-16005
> URL: https://issues.apache.org/jira/browse/FLINK-16005
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Zhenqiu Huang
>Priority: Major
>
> When Flink users want to override the Hadoop YARN container classpath, they 
> should only need to specify yarn.application.classpath in yarn-site.xml on 
> the client side. But currently, the classpath setting is only applied to the 
> Flink application master; the TaskManager classpath is still determined by 
> the setting on the YARN host.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388092398
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   channel,
+   false,
+   false,
+   false,
+   normalInputChannel.getInputChannelId(),
+   null);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388092125
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   channel,
+   false,
+   false,
+   false,
+   normalInputChannel.getInputChannelId(),
+   null);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388089494
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   channel,
+   false,
+   false,
+   false,
+   normalInputChannel.getInputChannelId(),
+   null);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388090118
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   channel,
+   false,
+   false,
+   false,
+   normalInputChannel.getInputChannelId(),
+   null);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   


[jira] [Resolved] (FLINK-16414) create udaf/udtf function using sql causing ValidationException: SQL validation failed. null

2020-03-04 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee resolved FLINK-16414.
--
Resolution: Fixed

master: 0362d200e3cd9ed86fd363f0c48f1a7d2d7e852f

release-1.10: c73220cb196ccf648047d3dc8b838e1e1882b471

> create udaf/udtf function using sql causing ValidationException: SQL 
> validation failed. null
> 
>
> Key: FLINK-16414
> URL: https://issues.apache.org/jira/browse/FLINK-16414
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: Terry Wang
>Assignee: Terry Wang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Creating a UDAF or UDTF function via TableEnvironment#sqlUpdate works fine 
> even when the function doesn't override the getResultType() method. But when 
> that function is used in a later INSERT statement, an exception like the 
> following is thrown:
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. null
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:130)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:105)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:127)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:342)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:142)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:66)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:484)
> The reason is that in FunctionDefinitionUtil#createFunctionDefinition we 
> shouldn't directly call t.getResultType(), a.getAccumulatorType(), or 
> a.getResultType(), but should use 
> UserDefinedFunctionHelper#getReturnTypeOfTableFunction, 
> UserDefinedFunctionHelper#getAccumulatorTypeOfAggregateFunction, and 
> UserDefinedFunctionHelper#getReturnTypeOfAggregateFunction instead.
> ```
>   if (udf instanceof ScalarFunction) {
>   return new ScalarFunctionDefinition(
>   name,
>   (ScalarFunction) udf
>   );
>   } else if (udf instanceof TableFunction) {
>   TableFunction t = (TableFunction) udf;
>   return new TableFunctionDefinition(
>   name,
>   t,
>   t.getResultType()
>   );
>   } else if (udf instanceof AggregateFunction) {
>   AggregateFunction a = (AggregateFunction) udf;
>   return new AggregateFunctionDefinition(
>   name,
>   a,
>   a.getAccumulatorType(),
>   a.getResultType()
>   );
>   } else if (udf instanceof TableAggregateFunction) {
>   TableAggregateFunction a = (TableAggregateFunction) udf;
>   return new TableAggregateFunctionDefinition(
>   name,
>   a,
>   a.getAccumulatorType(),
>   a.getResultType()
>   );
> ```
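For context, a minimal sketch of the failing scenario described above, assuming a Flink 1.10 blink-planner setup; the class and function names are illustrative:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.TableFunction;

public class UdtfValidationRepro {

    // A UDTF that does NOT override getResultType(); it relies on reflective
    // type extraction instead.
    public static class Split extends TableFunction<String> {
        public void eval(String s) {
            for (String part : s.split(",")) {
                collect(part);
            }
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());
        // DDL registration goes through FunctionDefinitionUtil#createFunctionDefinition,
        // the code path the issue points at.
        tEnv.sqlUpdate("CREATE FUNCTION mySplit AS 'UdtfValidationRepro$Split'");
        // Using mySplit in a subsequent INSERT INTO ... SELECT then triggered the
        // "SQL validation failed. null" ValidationException quoted above (before the fix).
    }
}
{code}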



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi closed pull request #11310: [FLINK-16414]fix sql validation failed when using udaf/udtf which doesn't implement getResultType

2020-03-04 Thread GitBox
JingsongLi closed pull request #11310: [FLINK-16414]fix sql validation failed 
when using udaf/udtf which doesn't  implement getResultType
URL: https://github.com/apache/flink/pull/11310
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on issue #11310: [FLINK-16414]fix sql validation failed when using udaf/udtf which doesn't implement getResultType

2020-03-04 Thread GitBox
JingsongLi commented on issue #11310: [FLINK-16414]fix sql validation failed 
when using udaf/udtf which doesn't  implement getResultType
URL: https://github.com/apache/flink/pull/11310#issuecomment-595035032
 
 
   Thanks @zjuwangg for the contribution, merged.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388083659
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   channel,
+   false,
+   false,
+   false,
+   normalInputChannel.getInputChannelId(),
+   null);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388083507
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   channel,
+   false,
+   false,
+   false,
+   normalInputChannel.getInputChannelId(),
+   null);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
 
 Review comment:
   we 

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388083357
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   channel,
+   false,
+   false,
+   false,
+   normalInputChannel.getInputChannelId(),
+   null);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388082602
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   channel,
+   false,
+   false,
+   false,
+   normalInputChannel.getInputChannelId(),
+   null);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new 
NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   

[jira] [Commented] (FLINK-15584) Give nested data type of ROWs in ValidationException

2020-03-04 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051809#comment-17051809
 ] 

Ayush Saxena commented on FLINK-15584:
--

Thanks [~rmetzger] [~lzljs3620320] for the review and commit.
Can you assign it to my correct Jira id, {{ayushtkn}}? This ticket seems to 
be assigned to a different id.

> Give nested data type of ROWs in ValidationException
> 
>
> Key: FLINK-15584
> URL: https://issues.apache.org/jira/browse/FLINK-15584
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.9.1, 1.10.0, 1.11.0
>Reporter: Benoît Paris
>Assignee: Ayush Saxena
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In 
> {code:java}
> INSERT INTO baz_sink
> SELECT
>   a,
>   ROW(b, c)
> FROM foo_source{code}
> Schema mismatch mistakes will not get proper detail level, yielding the 
> following:
> Caused by: org.apache.flink.table.api.ValidationException: Field types of 
> query result and registered TableSink [baz_sink] do not match.
>  Query result schema: [a: Integer, EXPR$2: Row]
>  TableSink schema: [a: Integer, payload: Row]
> Leaving the user with an opaque 'Row' type to debug.
>  
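
For illustration, a message that expanded the nested ROW fields would make the mismatch obvious at a glance; with hypothetical column types for `b` and `c` (foo_source's schema is not shown above), it might read:

```
Query result schema: [a: Integer, EXPR$2: Row(b: String, c: Integer)]
TableSink schema: [a: Integer, payload: Row(b: String, c: Boolean)]
```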



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388076838
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new 
BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
 
 Review comment:
   normalInputChannel -> inputChannel; no need to emphasize `normal` in this 
test.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388076411
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/BufferProviderRemoteInputChannel.java
 ##
 @@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.partition.consumer;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.LocalConnectionManager;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.InputChannelTestUtils;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+
+import javax.annotation.Nullable;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Special {@link RemoteInputChannel} implementation that correspond to buffer 
request.
 
 Review comment:
   correspond -> corresponds
   A special


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388076057
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/BufferProviderRemoteInputChannel.java
 ##
 @@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.partition.consumer;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.LocalConnectionManager;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.InputChannelTestUtils;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+
+import javax.annotation.Nullable;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Special {@link RemoteInputChannel} implementation that correspond to buffer 
request.
+ */
+public class BufferProviderRemoteInputChannel extends RemoteInputChannel {
+   private final int maxNumberOfBuffers;
+   private final int bufferSize;
+
+   private int allocatedBuffers;
+
+   public BufferProviderRemoteInputChannel(
+   SingleInputGate inputGate,
+   int maxNumberOfBuffers,
+   int bufferSize) {
+
+   super(
+   inputGate,
+   0,
+   new ResultPartitionID(),
+   InputChannelBuilder.STUB_CONNECTION_ID,
+   new LocalConnectionManager(),
+   0,
+   0,
+   
InputChannelTestUtils.newUnregisteredInputChannelMetrics(),
+   
InputChannelTestUtils.StubMemorySegmentProvider.getInstance());
+
+   inputGate.setInputChannel(new IntermediateResultPartitionID(), 
this);
+
+   this.maxNumberOfBuffers = maxNumberOfBuffers;
+   this.bufferSize = bufferSize;
+   }
+
+   @Nullable
+   @Override
+   public Buffer requestBuffer() {
+   if (isReleased()) {
+   return null;
+   }
+
+   checkState(allocatedBuffers < maxNumberOfBuffers,
+   String.format("The number of allocated buffers %d have 
reached the maximum allowed %d.", allocatedBuffers, maxNumberOfBuffers));
 
 Review comment:
   have -> has


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11291: [FLINK-16392] [API / DataStream] oneside sorted cache in intervaljoin

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11291: [FLINK-16392] [API / DataStream] 
oneside sorted cache in intervaljoin
URL: https://github.com/apache/flink/pull/11291#issuecomment-593677297
 
 
   
   ## CI report:
   
   * d6b2919dd28c55230e530ccc45ca7d93d90a60df UNKNOWN
   * 3d121cb731b45565917fa47ecfd999163ef06625 UNKNOWN
   * b09c4d1a7612259e851ad12620ebb3751adb55be UNKNOWN
   * 2f11700b601cbd8207f8f6de40d862e998b55ee0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151881747) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5944)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-16084) Translate "Time Attributes" page of "Streaming Concepts" into Chinese

2020-03-04 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-16084.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

Fixed in master(1.11.0): 059e71d607405446c40a1b380452f3e3189c94ae

> Translate "Time Attributes" page of "Streaming Concepts" into Chinese 
> --
>
> Key: FLINK-16084
> URL: https://issues.apache.org/jira/browse/FLINK-16084
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Benchao Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/streaming/time_attributes.html
> The markdown file is located in 
> {{flink/docs/dev/table/streaming/time_attributes.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #11102: [FLINK-16084][docs] Translate /dev/table/streaming/time_attributes.zh.md

2020-03-04 Thread GitBox
wuchong merged pull request #11102: [FLINK-16084][docs] Translate 
/dev/table/streaming/time_attributes.zh.md
URL: https://github.com/apache/flink/pull/11102
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388072527
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/ByteBufUtilsTest.java
 ##
 @@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+
+/**
+ * Tests the methods in {@link ByteBufUtils}.
+ */
+public class ByteBufUtilsTest {
+   private static final byte ACCUMULATION_BYTE = 0x7d;
+   private static final byte NON_ACCUMULATION_BYTE = 0x23;
+
+   @Test
+   public void testAccumulateWithoutCopy() {
+   int sourceLength = 128;
+   int sourceReaderIndex = 32;
+   int expectedAccumulationSize = 16;
+
+   ByteBuf src = createSourceBuffer(sourceLength, 
sourceReaderIndex, expectedAccumulationSize);
+   ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+   // If src has enough data and no data has been copied yet, src 
will be returned without modification.
+   ByteBuf accumulated = ByteBufUtils.accumulate(target, src, 
expectedAccumulationSize, target.readableBytes());
+
+   assertSame(src, accumulated);
+   assertEquals(sourceReaderIndex, src.readerIndex());
+   verifyBufferContent(src, sourceReaderIndex, 
expectedAccumulationSize);
+   }
+
+   @Test
+   public void testAccumulateWithCopy() {
+   int sourceLength = 128;
+   int firstSourceReaderIndex = 32;
+   int secondSourceReaderIndex = 0;
+   int expectedAccumulationSize = 128;
+
+   int firstCopyLength = sourceLength - firstSourceReaderIndex;
 
 Review comment:
   firstCopyLength -> firstAccumulationSize, also for secondCopyLength. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-16089) Translate "Data Types" page of "Table API & SQL" into Chinese

2020-03-04 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-16089.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

Fixed in master(1.11.0): 5781c67a4c0c2a097c8c6659c155ba94948062d1

> Translate "Data Types" page of "Table API & SQL" into Chinese
> -
>
> Key: FLINK-16089
> URL: https://issues.apache.org/jira/browse/FLINK-16089
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Jiang Leilei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/types.html
> The markdown file is located in {{flink/docs/dev/table/types.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] TisonKun commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
TisonKun commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388071985
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/ByteBufUtils.java
 ##
 @@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+
+/**
+ * Utility routines to process Netty ByteBuf.
+ */
+public class ByteBufUtils {
+
+   /**
+* Cumulates data from the source buffer to the target buffer.
+*
+* @param cumulationBuf The target buffer.
+* @param src The source buffer.
+* @param expectedSize The expected length to cumulate.
+*
+* @return The ByteBuf containing cumulated data or null if not enough 
data has been cumulated.
+*/
+   public static ByteBuf cumulate(ByteBuf cumulationBuf, ByteBuf src, int 
expectedSize) {
+   // If the cumulation buffer is empty and src has enough bytes,
+   // user could read from src directly without cumulation.
+   if (cumulationBuf.readerIndex() == 0
+   && cumulationBuf.writerIndex() == 0
+   && src.readableBytes() >= expectedSize) {
+
+   return src;
+   }
+
+   int copyLength = Math.min(src.readableBytes(), expectedSize - 
cumulationBuf.readableBytes());
+
+   if (copyLength > 0) {
+   cumulationBuf.writeBytes(src, copyLength);
+   }
+
+   if (cumulationBuf.readableBytes() == expectedSize) {
+   return cumulationBuf;
+   }
+
+   return null;
 
 Review comment:
   Makes sense, especially given that it is on the critical path.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong closed pull request #11190: [FLINK-16089][docs] Translate "Data Type" page of "Table API & SQL" into Chinese

2020-03-04 Thread GitBox
wuchong closed pull request #11190: [FLINK-16089][docs] Translate "Data Type" 
page of "Table API & SQL" into Chinese
URL: https://github.com/apache/flink/pull/11190
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] gaoyunhaii commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
gaoyunhaii commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388071627
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/ByteBufUtils.java
 ##
 @@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+
+/**
+ * Utility routines to process Netty ByteBuf.
+ */
+public class ByteBufUtils {
+
+   /**
+* Cumulates data from the source buffer to the target buffer.
+*
+* @param cumulationBuf The target buffer.
+* @param src The source buffer.
+* @param expectedSize The expected length to cumulate.
+*
+* @return The ByteBuf containing cumulated data or null if not enough 
data has been cumulated.
+*/
+   public static ByteBuf cumulate(ByteBuf cumulationBuf, ByteBuf src, int 
expectedSize) {
+   // If the cumulation buffer is empty and src has enough bytes,
+   // user could read from src directly without cumulation.
+   if (cumulationBuf.readerIndex() == 0
+   && cumulationBuf.writerIndex() == 0
+   && src.readableBytes() >= expectedSize) {
+
+   return src;
+   }
+
+   int copyLength = Math.min(src.readableBytes(), expectedSize - 
cumulationBuf.readableBytes());
+
+   if (copyLength > 0) {
+   cumulationBuf.writeBytes(src, copyLength);
+   }
+
+   if (cumulationBuf.readableBytes() == expectedSize) {
+   return cumulationBuf;
+   }
+
+   return null;
 
 Review comment:
Hi @TisonKun, many thanks for the review, and sorry for missing the comment 
since it was folded in the PR page. We also thought of using `Optional` before; 
however, considering that this method is not part of the public API and is 
performance-sensitive (it will be called twice for each buffer), it might be 
better to keep the `null`. What do you think?
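
For context, a minimal sketch of the calling pattern under discussion, assuming the three-argument `cumulate` signature quoted above (the class and method names below are illustrative, not the actual Flink decoder):

```
import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;

// Hypothetical caller of ByteBufUtils.cumulate, illustrating the null contract.
final class HeaderDecoder {
    private final ByteBuf cumulationBuf;
    private final int expectedSize;

    HeaderDecoder(ByteBuf cumulationBuf, int expectedSize) {
        this.cumulationBuf = cumulationBuf;
        this.expectedSize = expectedSize;
    }

    /** Returns true once a full header is available and decoded. */
    boolean onChannelRead(ByteBuf src) {
        ByteBuf toDecode = ByteBufUtils.cumulate(cumulationBuf, src, expectedSize);
        if (toDecode == null) {
            return false; // not enough bytes cumulated yet; wait for more data
        }
        // Either src itself (the zero-copy fast path) or cumulationBuf comes back,
        // so the caller decodes without caring which path was taken.
        decodeHeader(toDecode);
        return true;
    }

    private void decodeHeader(ByteBuf buf) {
        // ... parse expectedSize bytes from buf ...
    }
}
```

Returning `null` rather than an `Optional` keeps this per-buffer hot path allocation-free, which is the trade-off described in the comment above.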


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11291: [FLINK-16392] [API / DataStream] oneside sorted cache in intervaljoin

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11291: [FLINK-16392] [API / DataStream] 
oneside sorted cache in intervaljoin
URL: https://github.com/apache/flink/pull/11291#issuecomment-593677297
 
 
   
   ## CI report:
   
   * d6b2919dd28c55230e530ccc45ca7d93d90a60df UNKNOWN
   * 3d121cb731b45565917fa47ecfd999163ef06625 UNKNOWN
   * 87148bdb635a6981d9ecc6c827061f3e13a47966 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/151673112) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5892)
 
   * b09c4d1a7612259e851ad12620ebb3751adb55be UNKNOWN
   * 2f11700b601cbd8207f8f6de40d862e998b55ee0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11284: [FLINK-15911][runtime] Make Flink work with NAT.

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11284: [FLINK-15911][runtime] Make Flink 
work with NAT.
URL: https://github.com/apache/flink/pull/11284#issuecomment-593407935
 
 
   
   ## CI report:
   
   * 3e9df27458782312e53586e53f49cd55c11f4df5 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151875334) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5942)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11190: [FLINK-16089][docs] Translate "Data Type" page of "Table API & SQL" into Chinese

2020-03-04 Thread GitBox
wuchong commented on a change in pull request #11190: [FLINK-16089][docs] 
Translate "Data Type" page of "Table API & SQL" into Chinese
URL: https://github.com/apache/flink/pull/11190#discussion_r388067901
 
 

 ##
 File path: docs/dev/table/types.zh.md
 ##
 @@ -22,62 +22,47 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Due to historical reasons, before Flink 1.9, Flink's Table & SQL API data 
types were
-tightly coupled to Flink's `TypeInformation`. `TypeInformation` is used in the 
DataStream
-and DataSet API and is sufficient to describe all information needed to 
serialize and
-deserialize JVM-based objects in a distributed setting.
+由于历史原因,在 Flink 1.9 之前,Flink Table & SQL API 的数据类型与 Flink 的 `TypeInformation` 
耦合紧密。`TypeInformation` 在 DataStream 和 DataSet API 中被使用,并且足以用来用于描述分布式环境中 JVM 
对象的序列化和反序列化操作所需的全部信息。
 
-However, `TypeInformation` was not designed to represent logical types 
independent of
-an actual JVM class. In the past, it was difficult to map SQL standard types 
to this
-abstraction. Furthermore, some types were not SQL-compliant and introduced 
without a
-bigger picture in mind.
+然而,`TypeInformation` 并不是被设计为表示独立于 JVM class 的逻辑类型。之前很难将 SQL 的标准类型映射到 
`TypeInformation` 抽象。此外,有一些类型并不是兼容 SQL 的并且在没有更好的规划的时候被引进。
 
-Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
-solution for API stability and standard compliance.
+从 Flink 1.9 开始,Table & SQL API 将接收一种新的类型系统作为长期解决方案,用来保持 API 稳定性和 SQL 标准的兼容性。
 
-Reworking the type system is a major effort that touches almost all 
user-facing interfaces. Therefore, its
-introduction spans multiple releases, and the community aims to finish this 
effort by Flink 1.10.
+重新设计类型系统是一项涉及几乎所有的面向用户接口的重大工作。因此,它的引入跨越多个版本,社区的目标是在 Flink 1.10 完成这项工作。
 
-Due to the simultaneous addition of a new planner for table programs (see 
[FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)),
-not every combination of planner and data type is supported. Furthermore, 
planners might not support every
-data type with the desired precision or parameter.
+同时由于为 Table 编程添加了新的 Planner 
详见([FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)), 并不是每种 
Planner 都支持所有的数据类型。此外,Planner 对于数据类型的精度和参数化支持也可能是不完整的。
 
-Attention Please see the planner 
compatibility table and limitations
-section before using a data type.
+注意 在使用数据类型之前请参阅 Planner 的兼容性表和局限性章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Data Type
+数据类型
 -
 
-A *data type* describes the logical type of a value in the table ecosystem. It 
can be used to declare input and/or
-output types of operations.
+*数据类型* 描述 Table 编程环境中的值的逻辑类型。它可以被用来声明操作的输入输出类型。
 
-Flink's data types are similar to the SQL standard's *data type* terminology 
but also contain information
-about the nullability of a value for efficient handling of scalar expressions.
+Flink 的数据类型和 SQL 标准的 *数据类型* 术语类似,但也包含了 nullability 信息,可以被用于 scala expression 
的优化。
 
 Review comment:
   ```suggestion
   Flink 的数据类型和 SQL 标准的 *数据类型* 术语类似,但也包含了可空属性,可以被用于标量表达式(scalar expression)的优化。
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11190: [FLINK-16089][docs] Translate "Data Type" page of "Table API & SQL" into Chinese

2020-03-04 Thread GitBox
wuchong commented on a change in pull request #11190: [FLINK-16089][docs] 
Translate "Data Type" page of "Table API & SQL" into Chinese
URL: https://github.com/apache/flink/pull/11190#discussion_r388065763
 
 

 ##
 File path: docs/dev/table/types.zh.md
 ##
 @@ -22,62 +22,47 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Due to historical reasons, before Flink 1.9, Flink's Table & SQL API data 
types were
-tightly coupled to Flink's `TypeInformation`. `TypeInformation` is used in the 
DataStream
-and DataSet API and is sufficient to describe all information needed to 
serialize and
-deserialize JVM-based objects in a distributed setting.
+由于历史原因,在 Flink 1.9 之前,Flink Table & SQL API 的数据类型与 Flink 的 `TypeInformation` 
耦合紧密。`TypeInformation` 在 DataStream 和 DataSet API 中被使用,并且足以用来用于描述分布式环境中 JVM 
对象的序列化和反序列化操作所需的全部信息。
 
-However, `TypeInformation` was not designed to represent logical types 
independent of
-an actual JVM class. In the past, it was difficult to map SQL standard types 
to this
-abstraction. Furthermore, some types were not SQL-compliant and introduced 
without a
-bigger picture in mind.
+然而,`TypeInformation` 并不是被设计为表示独立于 JVM class 的逻辑类型。之前很难将 SQL 的标准类型映射到 
`TypeInformation` 抽象。此外,有一些类型并不是兼容 SQL 的并且在没有更好的规划的时候被引进。
 
-Starting with Flink 1.9, the Table & SQL API will receive a new type system 
that serves as a long-term
-solution for API stability and standard compliance.
+从 Flink 1.9 开始,Table & SQL API 将接收一种新的类型系统作为长期解决方案,用来保持 API 稳定性和 SQL 标准的兼容性。
 
 Review comment:
   ```suggestion
   从 Flink 1.9 开始,Table & SQL API 开始启用一种新的类型系统作为长期解决方案,用来保持 API 稳定性和 SQL 标准的兼容性。
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11190: [FLINK-16089][docs] Translate "Data Type" page of "Table API & SQL" into Chinese

2020-03-04 Thread GitBox
wuchong commented on a change in pull request #11190: [FLINK-16089][docs] 
Translate "Data Type" page of "Table API & SQL" into Chinese
URL: https://github.com/apache/flink/pull/11190#discussion_r388065541
 
 

 ##
 File path: docs/dev/table/types.zh.md
 ##
 @@ -22,62 +22,47 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Due to historical reasons, before Flink 1.9, Flink's Table & SQL API data 
types were
-tightly coupled to Flink's `TypeInformation`. `TypeInformation` is used in the 
DataStream
-and DataSet API and is sufficient to describe all information needed to 
serialize and
-deserialize JVM-based objects in a distributed setting.
+由于历史原因,在 Flink 1.9 之前,Flink Table & SQL API 的数据类型与 Flink 的 `TypeInformation` 
耦合紧密。`TypeInformation` 在 DataStream 和 DataSet API 中被使用,并且足以用来用于描述分布式环境中 JVM 
对象的序列化和反序列化操作所需的全部信息。
 
-However, `TypeInformation` was not designed to represent logical types 
independent of
-an actual JVM class. In the past, it was difficult to map SQL standard types 
to this
-abstraction. Furthermore, some types were not SQL-compliant and introduced 
without a
-bigger picture in mind.
+然而,`TypeInformation` 并不是被设计为表示独立于 JVM class 的逻辑类型。之前很难将 SQL 的标准类型映射到 
`TypeInformation` 抽象。此外,有一些类型并不是兼容 SQL 的并且在没有更好的规划的时候被引进。
 
 Review comment:
   ```suggestion
   然而,`TypeInformation` 并不是为独立于 JVM class 的逻辑类型而设计的。之前很难将 SQL 的标准类型映射到 
`TypeInformation` 抽象。此外,有一些类型并不是兼容 SQL 的并且引入的时候没有长远规划过。
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi edited a comment on issue #11304: [FLINK-16418][hive] Hide hive version to avoid user confuse

2020-03-04 Thread GitBox
JingsongLi edited a comment on issue #11304: [FLINK-16418][hive] Hide hive 
version to avoid user confuse
URL: https://github.com/apache/flink/pull/11304#issuecomment-595019043
 
 
   > what's the "hive version" of Hive in CDH or HDP?
   
   Are there any users who use CDH or HDP client dependencies? The CDH users 
I know just use the corresponding open-source Hive client dependencies.
   Like Spark, they just use the built-in Hive client dependencies.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on issue #11304: [FLINK-16418][hive] Hide hive version to avoid user confuse

2020-03-04 Thread GitBox
JingsongLi commented on issue #11304: [FLINK-16418][hive] Hide hive version to 
avoid user confuse
URL: https://github.com/apache/flink/pull/11304#issuecomment-595019043
 
 
   > what's the "hive version" of Hive in CDH or HDP?
   
   Are there any users who use CDH or HDP clients? The CDH users I know just 
use the corresponding open-source Hive client dependencies.
   Like Spark, they just use the built-in Hive client dependencies.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on issue #11304: [FLINK-16418][hive] Hide hive version to avoid user confuse

2020-03-04 Thread GitBox
bowenli86 commented on issue #11304: [FLINK-16418][hive] Hide hive version to 
avoid user confuse
URL: https://github.com/apache/flink/pull/11304#issuecomment-595017818
 
 
   what's the "hive version" of Hive in CDH or HDP?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388067962
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/ByteBufUtilsTest.java
 ##
 @@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+
+/**
+ * Tests the methods in {@link ByteBufUtils}.
+ */
+public class ByteBufUtilsTest {
+   private static final byte ACCUMULATION_BYTE = 0x7d;
+   private static final byte NON_ACCUMULATION_BYTE = 0x23;
+
+    @Test
+    public void testAccumulateWithoutCopy() {
+        int sourceLength = 128;
+        int sourceReaderIndex = 32;
+        int expectedAccumulationSize = 16;
+
+        ByteBuf src = createSourceBuffer(sourceLength, sourceReaderIndex, expectedAccumulationSize);
+        ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+        // If src has enough data and no data has been copied yet, src will be returned without modification.
+        ByteBuf accumulated = ByteBufUtils.accumulate(target, src, expectedAccumulationSize, target.readableBytes());
+
+        assertSame(src, accumulated);
+        assertEquals(sourceReaderIndex, src.readerIndex());
+        verifyBufferContent(src, sourceReaderIndex, expectedAccumulationSize);
+    }
+
+    @Test
+    public void testAccumulateWithCopy() {
+        int sourceLength = 128;
+        int firstSourceReaderIndex = 32;
+        int secondSourceReaderIndex = 0;
+        int expectedAccumulationSize = 128;
+
+        int firstCopyLength = sourceLength - firstSourceReaderIndex;
+        int secondCopyLength = expectedAccumulationSize - firstCopyLength;
+
+        ByteBuf firstSource = createSourceBuffer(sourceLength, firstSourceReaderIndex, firstCopyLength);
+        ByteBuf secondSource = createSourceBuffer(sourceLength, secondSourceReaderIndex, secondCopyLength);
+
+        ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+        // If src does not have enough data, src will be copied into target and null will be returned.
+        ByteBuf accumulated = ByteBufUtils.accumulate(
+            target,
+            firstSource,
+            expectedAccumulationSize,
+            target.readableBytes());
+        assertNull(accumulated);
+        assertEquals(sourceLength, firstSource.readerIndex());
+        assertEquals(firstCopyLength, target.readableBytes());
+
+        // The remaining data will be copied from the second buffer, and the target buffer will be
+        // returned after all data is accumulated.
+        accumulated = ByteBufUtils.accumulate(
+            target,
+            secondSource,
+            expectedAccumulationSize,
+            target.readableBytes());
+        assertSame(target, accumulated);
+        assertEquals(secondSourceReaderIndex + secondCopyLength, secondSource.readerIndex());
+        assertEquals(expectedAccumulationSize, target.readableBytes());
+
+        verifyBufferContent(accumulated, 0, expectedAccumulationSize);
+    }
+
+    /**
+     * Create a source buffer whose length is size. The content between readerIndex and
 
 Review comment:
   `size`, also for other arguments in the javadoc.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this 
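
The diff above is cut off before the two helper methods it calls. For readers following
the review, a plausible reconstruction of `createSourceBuffer` and `verifyBufferContent`,
inferred only from how the tests use them; the exact byte layout is an assumption, not
the PR's actual code:

```java
// Hypothetical reconstruction: fill [readerIndex, readerIndex + accumulationSize)
// with ACCUMULATION_BYTE, everything else with NON_ACCUMULATION_BYTE, and
// position the reader index so the tests' readerIndex assertions hold.
private static ByteBuf createSourceBuffer(int size, int readerIndex, int accumulationSize) {
    ByteBuf buffer = Unpooled.buffer(size);
    for (int i = 0; i < size; i++) {
        boolean inAccumulationRegion = i >= readerIndex && i < readerIndex + accumulationSize;
        buffer.writeByte(inAccumulationRegion ? ACCUMULATION_BYTE : NON_ACCUMULATION_BYTE);
    }
    buffer.readerIndex(readerIndex);
    return buffer;
}

// Asserts that the given number of bytes starting at start all carry ACCUMULATION_BYTE.
private static void verifyBufferContent(ByteBuf buffer, int start, int length) {
    for (int i = 0; i < length; i++) {
        assertEquals(ACCUMULATION_BYTE, buffer.getByte(start + i));
    }
}
```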

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388067962
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/ByteBufUtilsTest.java
 ##
 @@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+
+/**
+ * Tests the methods in {@link ByteBufUtils}.
+ */
+public class ByteBufUtilsTest {
+   private static final byte ACCUMULATION_BYTE = 0x7d;
+   private static final byte NON_ACCUMULATION_BYTE = 0x23;
+
+    @Test
+    public void testAccumulateWithoutCopy() {
+        int sourceLength = 128;
+        int sourceReaderIndex = 32;
+        int expectedAccumulationSize = 16;
+
+        ByteBuf src = createSourceBuffer(sourceLength, sourceReaderIndex, expectedAccumulationSize);
+        ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+        // If src has enough data and no data has been copied yet, src will be returned without modification.
+        ByteBuf accumulated = ByteBufUtils.accumulate(target, src, expectedAccumulationSize, target.readableBytes());
+
+        assertSame(src, accumulated);
+        assertEquals(sourceReaderIndex, src.readerIndex());
+        verifyBufferContent(src, sourceReaderIndex, expectedAccumulationSize);
+    }
+
+    @Test
+    public void testAccumulateWithCopy() {
+        int sourceLength = 128;
+        int firstSourceReaderIndex = 32;
+        int secondSourceReaderIndex = 0;
+        int expectedAccumulationSize = 128;
+
+        int firstCopyLength = sourceLength - firstSourceReaderIndex;
+        int secondCopyLength = expectedAccumulationSize - firstCopyLength;
+
+        ByteBuf firstSource = createSourceBuffer(sourceLength, firstSourceReaderIndex, firstCopyLength);
+        ByteBuf secondSource = createSourceBuffer(sourceLength, secondSourceReaderIndex, secondCopyLength);
+
+        ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+        // If src does not have enough data, src will be copied into target and null will be returned.
+        ByteBuf accumulated = ByteBufUtils.accumulate(
+            target,
+            firstSource,
+            expectedAccumulationSize,
+            target.readableBytes());
+        assertNull(accumulated);
+        assertEquals(sourceLength, firstSource.readerIndex());
+        assertEquals(firstCopyLength, target.readableBytes());
+
+        // The remaining data will be copied from the second buffer, and the target buffer will be
+        // returned after all data is accumulated.
+        accumulated = ByteBufUtils.accumulate(
+            target,
+            secondSource,
+            expectedAccumulationSize,
+            target.readableBytes());
+        assertSame(target, accumulated);
+        assertEquals(secondSourceReaderIndex + secondCopyLength, secondSource.readerIndex());
+        assertEquals(expectedAccumulationSize, target.readableBytes());
+
+        verifyBufferContent(accumulated, 0, expectedAccumulationSize);
+    }
+
+    /**
+     * Create a source buffer whose length is size. The content between readerIndex and
 
 Review comment:
   `size`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:

[jira] [Commented] (FLINK-16425) Add rate limiting feature for kafka table source

2020-03-04 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051796#comment-17051796
 ] 

Jark Wu commented on FLINK-16425:
-

Yes, I'm fine with this feature. We should discuss how to expose it as a 
connector property and in the descriptor API. 

> Add rate limiting feature for kafka table source
> 
>
> Key: FLINK-16425
> URL: https://issues.apache.org/jira/browse/FLINK-16425
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: Zou
>Priority: Major
>
> There is a rate limiting feature in the Kafka source, but the Kafka table source does 
> not support it. We could add this feature to the Kafka table source.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
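
For the property/descriptor discussion, a minimal sketch of what exposing the limit
could look like, assuming a Kafka table source factory forwards a property to the
rate limiter the DataStream Kafka source already uses. The property key and the
`consumer` variable are hypothetical; `FlinkConnectorRateLimiter` and
`GuavaFlinkConnectorRateLimiter` are the existing DataStream-level classes:

```java
import org.apache.flink.api.common.io.ratelimiting.FlinkConnectorRateLimiter;
import org.apache.flink.api.common.io.ratelimiting.GuavaFlinkConnectorRateLimiter;

// Hypothetical property key read inside a Kafka table source factory:
long bytesPerSecond =
    Long.parseLong(properties.getProperty("connector.rate-limit.bytes-per-second"));

FlinkConnectorRateLimiter rateLimiter = new GuavaFlinkConnectorRateLimiter();
rateLimiter.setRate(bytesPerSecond);
// Forward to the underlying consumer (the setter exists on the DataStream Kafka
// 0.10 consumer; where the table source exposes it is exactly the open question):
consumer.setRateLimiter(rateLimiter);
```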


[jira] [Resolved] (FLINK-16132) Translate "Aliyun OSS" page of "File Systems" into Chinese

2020-03-04 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-16132.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

Fixed in master(1.11.0): 0428b103663ccd489f3f7eff81ec838a6bfcc7ca

> Translate "Aliyun OSS" page of "File Systems" into Chinese 
> ---
>
> Key: FLINK-16132
> URL: https://issues.apache.org/jira/browse/FLINK-16132
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/oss.html
> The markdown file is located in flink/docs/ops/filesystems/oss.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-16133) Translate "Azure Blob Storage" page of "File Systems" into Chinese

2020-03-04 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-16133.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

Fixed in master(1.11.0): 59b65844e285009b60dd9481dd2aec0458dd7094

> Translate "Azure Blob Storage" page of "File Systems" into Chinese 
> ---
>
> Key: FLINK-16133
> URL: https://issues.apache.org/jira/browse/FLINK-16133
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/azure.html
> The markdown file is located in flink/docs/ops/filesystems/azure.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong closed pull request #11232: [FLINK-16133] [docs] /ops/filesystems/azure.zh.md

2020-03-04 Thread GitBox
wuchong closed pull request #11232: [FLINK-16133] [docs] 
/ops/filesystems/azure.zh.md
URL: https://github.com/apache/flink/pull/11232
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong closed pull request #11231: [FLINK-16132] [docs] Translate /ops/filesystems/oss.zh.md into Chinese

2020-03-04 Thread GitBox
wuchong closed pull request #11231: [FLINK-16132] [docs] Translate 
/ops/filesystems/oss.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11231
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11310: [FLINK-16414]fix sql validation failed when using udaf/udtf which doesn't implement getResultType

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11310: [FLINK-16414]fix sql validation 
failed when using udaf/udtf which doesn't  implement getResultType
URL: https://github.com/apache/flink/pull/11310#issuecomment-594510486
 
 
   
   ## CI report:
   
   * d4e0c55a5235b98ed23f0b10a5ec33cd0015faca Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151727726) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11231: [FLINK-16132] [docs] Translate /ops/filesystems/oss.zh.md into Chinese

2020-03-04 Thread GitBox
wuchong commented on a change in pull request #11231: [FLINK-16132] [docs] 
Translate /ops/filesystems/oss.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11231#discussion_r388062490
 
 

 ##
 File path: docs/ops/filesystems/oss.zh.md
 ##
 @@ -23,66 +23,66 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-## OSS: Object Storage Service
+## OSS:对象存储服务
 
-[Aliyun Object Storage Service](https://www.aliyun.com/product/oss) (Aliyun 
OSS) is widely used, particularly popular among China’s cloud users, and it 
provides cloud object storage for a variety of use cases.
-You can use OSS with Flink for **reading** and **writing data** as well in 
conjunction with the [streaming **state backends**]({{ site.baseurl 
}}/ops/state/state_backends.html)
+[阿里云对象存储服务](https://www.aliyun.com/product/oss) (Aliyun OSS) 
使用广泛,尤其在中国云用户中十分流行,能提供多种应用场景下的云对象存储。OSS 可与 Flink 一起使用以读取与存储数据,以及与[流 State 
Backend]({{ site.baseurl }}/ops/state/state_backends.html) 结合使用。
 
 Review comment:
   ```suggestion
   [阿里云对象存储服务](https://www.aliyun.com/product/oss) (Aliyun OSS) 
使用广泛,尤其在中国云用户中十分流行,能提供多种应用场景下的云对象存储。OSS 可与 Flink 一起使用以读取与存储数据,以及与[流 State 
Backend]({{ site.baseurl }}/zh/ops/state/state_backends.html) 结合使用。
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11232: [FLINK-16133] [docs] /ops/filesystems/azure.zh.md

2020-03-04 Thread GitBox
wuchong commented on a change in pull request #11232: [FLINK-16133] [docs] 
/ops/filesystems/azure.zh.md
URL: https://github.com/apache/flink/pull/11232#discussion_r388061934
 
 

 ##
 File path: docs/ops/filesystems/azure.zh.md
 ##
 @@ -23,60 +23,54 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Azure Blob Storage](https://docs.microsoft.com/en-us/azure/storage/) is a 
Microsoft-managed service providing cloud storage for a variety of use cases.
-You can use Azure Blob Storage with Flink for **reading** and **writing data** 
as well in conjunction with the [streaming **state backends**]({{ site.baseurl 
}}/ops/state/state_backends.html)  
+[Azure Blob 存储](https://docs.microsoft.com/en-us/azure/storage/) 是一项由 
Microsoft 管理的服务,能提供多种应用场景下的云存储。
+Azure Blob 存储可与 Flink 一起使用以**读取**和**写入数据**,以及与[流 State Backend]({{ 
site.baseurl }}/zh/ops/state/state_backends.html) 结合使用。
 
 * This will be replaced by the TOC
 {:toc}
 
-You can use Azure Blob Storage objects like regular files by specifying paths 
in the following format:
+通过以下格式指定路径,Azure Blob 存储对象可类似于普通文件使用:
 
 {% highlight plain %}
 
wasb://@$.blob.core.windows.net/
 
-// SSL encrypted access
+// SSL 加密访问
 
wasbs://@$.blob.core.windows.net/
 {% endhighlight %}
 
-See below for how to use Azure Blob Storage in a Flink job:
+参见以下代码了解如何在 Flink 作业中使用 Azure Blob 存储:
 
 {% highlight java %}
-// Read from Azure Blob storage
+// 读取 Azure Blob 存储
 
env.readTextFile("wasb://@$.blob.core.windows.net/");
 
-// Write to Azure Blob storage
+// 写入 Azure Blob 存储
 
stream.writeAsText("wasb://@$.blob.core.windows.net/")
 
-// Use Azure Blob Storage as FsStatebackend
+// 将 Azure Blob 存储用作 FsStatebackend
 env.setStateBackend(new 
FsStateBackend("wasb://@$.blob.core.windows.net/"));
 {% endhighlight %}
 
-### Shaded Hadoop Azure Blob Storage file system
+### Shaded Hadoop Azure Blob 存储文件系统
 
-To use `flink-azure-fs-hadoop,` copy the respective JAR file from the `opt` 
directory to the `plugins` directory of your Flink distribution before starting 
Flink, e.g.
+为使用 flink-azure-fs-hadoop,在启动 Flink 之前,将对应的 JAR 文件从 opt 目录复制到 Flink 发行版中的 
plugin 目录下的一个文件夹中,例如:
 
 {% highlight bash %}
 mkdir ./plugins/azure-fs-hadoop
 cp ./opt/flink-azure-fs-hadoop-{{ site.version }}.jar 
./plugins/azure-fs-hadoop/
 {% endhighlight %}
 
-`flink-azure-fs-hadoop` registers default FileSystem wrappers for URIs with 
the *wasb://* and *wasbs://* (SSL encrypted access) scheme.
+`flink-azure-fs-hadoop` 为使用 *wasb://* 和 *wasbs://* (SSL 加密访问) 的 URI 
注册了默认的文件系统包装器。
 
-### Credentials Configuration
-
-Hadoop's Azure Filesystem supports configuration of credentials via the Hadoop 
configuration as
-outlined in the [Hadoop Azure Blob Storage 
documentation](https://hadoop.apache.org/docs/current/hadoop-azure/index.html#Configuring_Credentials).
-For convenience Flink forwards all Flink configurations with a key prefix of 
`fs.azure` to the
-Hadoop configuration of the filesystem. Consequentially, the azure blob 
storage key can be configured
-in `flink-conf.yaml` via:
+### 凭据配置
+Hadoop 的 Azure 文件系统支持通过 Hadoop 配置来配置凭据,如 [Hadoop Azure Blob Storage 
documentation](https://hadoop.apache.org/docs/current/hadoop-azure/index.html#Configuring_Credentials)
 所述。
 
 Review comment:
   ```suggestion
   Hadoop 的 Azure 文件系统支持通过 Hadoop 配置来配置凭据,如 [Hadoop Azure Blob Storage 
文档](https://hadoop.apache.org/docs/current/hadoop-azure/index.html#Configuring_Credentials)
 所述。
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16425) Add rate limiting feature for kafka table source

2020-03-04 Thread Zou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051781#comment-17051781
 ] 

Zou commented on FLINK-16425:
-

Hi, [~jark], do you think it’s a useful feature? If so, I am willing to take 
this. 

> Add rate limiting feature for kafka table source
> 
>
> Key: FLINK-16425
> URL: https://issues.apache.org/jira/browse/FLINK-16425
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: Zou
>Priority: Major
>
> There is a rate limiting feature in the Kafka source, but the Kafka table source does 
> not support it. We could add this feature to the Kafka table source.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16299) Release containers recovered from previous attempt in which TaskExecutor is not started.

2020-03-04 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo closed FLINK-16299.
--

> Release containers recovered from previous attempt in which TaskExecutor is 
> not started.
> 
>
> Key: FLINK-16299
> URL: https://issues.apache.org/jira/browse/FLINK-16299
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Xintong Song
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As discussed in FLINK-16215, on Yarn deployment, {{YarnResourceManager}} 
> starts a new {{TaskExecutor}} in two steps:
>  # Request a new container from Yarn
>  # Starts a {{TaskExecutor}} process in the allocated container
> If JM failover happens between the two steps, in the new attempt 
> {{YarnResourceManager}} will not start {{TaskExecutor}} processes in 
> recovered containers. That means such containers are neither used nor 
> released.
> A potential fix to this problem is to query for the container status by 
> calling {{NMClientAsync#getContainerStatusAsync}}, release the containers 
> whose state is {{NEW}}, keep only those whose state is {{RUNNING}}, and 
> wait for them to register.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
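
A sketch of the status check the ticket describes, written as a YARN `NMClientAsync`
callback; the callback and record types are the real YARN API, while the surrounding
field names are illustrative:

```java
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerState;
import org.apache.hadoop.yarn.api.records.ContainerStatus;

@Override
public void onContainerStatusReceived(ContainerId containerId, ContainerStatus status) {
    if (status.getState() == ContainerState.NEW) {
        // Allocated in the previous attempt but no TaskExecutor was started: release it.
        resourceManagerClient.releaseAssignedContainer(containerId);
    } else if (status.getState() == ContainerState.RUNNING) {
        // A TaskExecutor process exists: keep the container and wait for registration.
    }
}
```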


[jira] [Updated] (FLINK-16425) Add rate limiting feature for kafka table source

2020-03-04 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16425:

Component/s: Table SQL / Ecosystem

> Add rate limiting feature for kafka table source
> 
>
> Key: FLINK-16425
> URL: https://issues.apache.org/jira/browse/FLINK-16425
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: Zou
>Priority: Major
>
> There is a rate limiting feature in the Kafka source, but the Kafka table source does 
> not support it. We could add this feature to the Kafka table source.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11284: [FLINK-15911][runtime] Make Flink work with NAT.

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11284: [FLINK-15911][runtime] Make Flink 
work with NAT.
URL: https://github.com/apache/flink/pull/11284#issuecomment-593407935
 
 
   
   ## CI report:
   
   * 6e0a577481f9300c87299b0629fab6dc1b3bd71a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151494400) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5853)
 
   * 3e9df27458782312e53586e53f49cd55c11f4df5 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151875334) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5942)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on issue #11304: [FLINK-16418][hive] Hide hive version to avoid user confuse

2020-03-04 Thread GitBox
JingsongLi commented on issue #11304: [FLINK-16418][hive] Hide hive version to 
avoid user confuse
URL: https://github.com/apache/flink/pull/11304#issuecomment-595001501
 
 
   > LGTM, except that we need documentation stating that
   > 
   > 1. hive versions will be automatically inferred at runtime
   > 2. hive versions in these configs are optional, users still can specify 
explicitly if they want to
   
   Hi @bowenli86, I'd prefer not to provide this message; I cannot find a case 
where users would need to configure it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] xintongsong commented on issue #11284: [FLINK-15911][runtime] Make Flink work with NAT.

2020-03-04 Thread GitBox
xintongsong commented on issue #11284: [FLINK-15911][runtime] Make Flink work 
with NAT.
URL: https://github.com/apache/flink/pull/11284#issuecomment-595001635
 
 
   @tillrohrmann 
   I checked the test failures.
   
   The failures on Azure are all unrelated.
   
   The NAT e2e test failure on Travis is caused by an error message in the logs.
   - The job is executed successfully, with the correct result outputted.
   - The error message is about the RM RPC service not having been started yet when one of 
the TMs tries to connect to it. 
   - The following JM log shows that the message is successfully received 
before being discarded, indicating the TM has no problem resolving the correct RM 
address and RPC port.
   `The rpc endpoint 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager has not been 
started yet. Discarding message 
org.apache.flink.runtime.rpc.messages.RemoteFencedMessage until processing is 
started.`
   - I think this is not a real problem, because the TM will retry connecting to the RM 
later.
   
   I've set `skip_check_exceptions` for this test case. I think relying on the 
result hash check should be enough.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11284: [FLINK-15911][runtime] Make Flink work with NAT.

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11284: [FLINK-15911][runtime] Make Flink 
work with NAT.
URL: https://github.com/apache/flink/pull/11284#issuecomment-593407935
 
 
   
   ## CI report:
   
   * 6e0a577481f9300c87299b0629fab6dc1b3bd71a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151494400) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5853)
 
   * 3e9df27458782312e53586e53f49cd55c11f4df5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13598) frocksdb doesn't have arm release

2020-03-04 Thread wangxiyuan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051764#comment-17051764
 ] 

wangxiyuan commented on FLINK-13598:


[~azagrebin] recently, the RocksDB community released version 5.18.4. It works 
well on ARM. Is it possible for Flink to move back to it using the 
flink-rocksdb-plugins?

 

[1]: [https://github.com/facebook/rocksdb/releases/tag/v5.18.4]

 

> frocksdb doesn't have arm release 
> --
>
> Key: FLINK-13598
> URL: https://issues.apache.org/jira/browse/FLINK-13598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0, 2.0.0
>Reporter: wangxiyuan
>Priority: Major
>
> Flink now uses frocksdb, a fork of RocksDB, for the module 
> *flink-statebackend-rocksdb*. It doesn't provide an ARM release.
> Now RocksDB supports ARM as of 
> [v6.2.2|https://search.maven.org/artifact/org.rocksdb/rocksdbjni/6.2.2/jar]
> Can frocksdb release an ARM package as well?
> Or, AFAIK, since there were some bugs in RocksDB in the past, Flink 
> didn't use it directly. Have the bugs been solved in RocksDB already? Can 
> Flink re-use RocksDB again now?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on issue #11302: [FLINK-16414]fix sql validation failed when using udaf/udtf which doesn't implement getResultType

2020-03-04 Thread GitBox
JingsongLi commented on issue #11302: [FLINK-16414]fix sql validation failed 
when using udaf/udtf which doesn't implement getResultType
URL: https://github.com/apache/flink/pull/11302#issuecomment-594997362
 
 
   Modified commit title to `[FLINK-16414][table] Fix sql validation failed 
when using udaf/udtf has no getResultType`, @zjuwangg FYI.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15727) Let BashJavaUtils return dynamic configs and JVM parameters in one call

2020-03-04 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051757#comment-17051757
 ] 

Yangze Guo commented on FLINK-15727:


Hi, [~trohrmann]. I'd like to work on it. Could you assign this to me?

> Let BashJavaUtils return dynamic configs and JVM parameters in one call
> ---
>
> Key: FLINK-15727
> URL: https://issues.apache.org/jira/browse/FLINK-15727
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Scripts
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Major
>  Labels: usability
> Fix For: 1.11.0
>
>
> In order to avoid logging diagnostic statements from the {{BashJavaUtils}} 
> util twice as we do right now, I would suggest not calling the util twice, 
> once for the JVM args and once for the dynamic properties, in 
> {{taskmanager.sh}}. Instead I propose that {{BashJavaUtils}} returns both 
> values as the last two lines with different prefixes so that the bash script 
> can easily filter them out.
> cc [~azagrebin] [~xintongsong]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
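
A minimal sketch of the proposed one-call protocol; the prefix strings are illustrative:

```java
public class BashJavaUtilsOutput {
    // Diagnostics may be printed above these lines; only the last two lines,
    // identified by their prefixes, are machine-read by taskmanager.sh.
    static final String JVM_PARAMS_PREFIX = "BASH_JAVA_UTILS_JVM_PARAMS:";
    static final String DYNAMIC_CONFIGS_PREFIX = "BASH_JAVA_UTILS_DYNAMIC_CONFIGS:";

    static void printResults(String jvmParams, String dynamicConfigs) {
        System.out.println(JVM_PARAMS_PREFIX + jvmParams);
        System.out.println(DYNAMIC_CONFIGS_PREFIX + dynamicConfigs);
    }
}
```

The script side would then keep only the line carrying the prefix it needs and strip
the prefix, so the diagnostics get logged exactly once.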


[GitHub] [flink] xintongsong commented on issue #11284: [FLINK-15911][runtime] Make Flink work with NAT.

2020-03-04 Thread GitBox
xintongsong commented on issue #11284: [FLINK-15911][runtime] Make Flink work 
with NAT.
URL: https://github.com/apache/flink/pull/11284#issuecomment-594996261
 
 
   @flinkbot run azure


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11310: [FLINK-16414]fix sql validation failed when using udaf/udtf which doesn't implement getResultType

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11310: [FLINK-16414]fix sql validation 
failed when using udaf/udtf which doesn't  implement getResultType
URL: https://github.com/apache/flink/pull/11310#issuecomment-594510486
 
 
   
   ## CI report:
   
   * d4e0c55a5235b98ed23f0b10a5ec33cd0015faca Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151727726) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10429) Redesign Flink Scheduling, introducing dedicated Scheduler component

2020-03-04 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051741#comment-17051741
 ] 

Zhu Zhu commented on FLINK-10429:
-

Hi [~app-tarush], some of the work items are still open, but we'd like to 
postpone them so they don't conflict with other work on the scheduler, like 'Remove 
legacy scheduler (FLINK-15626)' and 'Pipelined region scheduling (FLINK-16430)'.
I think at the moment you can follow the active tasks and join the discussion, 
review, or development. 

> Redesign Flink Scheduling, introducing dedicated Scheduler component
> 
>
> Key: FLINK-10429
> URL: https://issues.apache.org/jira/browse/FLINK-10429
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Affects Versions: 1.7.0
>Reporter: Stefan Richter
>Priority: Major
>
> This epic tracks the redesign of scheduling in Flink. Scheduling is currently 
> a concern that is scattered across different components, mainly the 
> ExecutionGraph/Execution and the SlotPool. Scheduling also happens only on 
> the granularity of individual tasks, which make holistic scheduling 
> strategies hard to implement. In this epic we aim to introduce a dedicated 
> Scheduler component that can support use-case like auto-scaling, 
> local-recovery, and resource optimized batch.
> The design for this feature is developed here: 
> https://docs.google.com/document/d/1q7NOqt05HIN-PlKEEPB36JiuU1Iu9fnxxVGJzylhsxU/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi merged pull request #11302: [FLINK-16414]fix sql validation failed when using udaf/udtf which doesn't implement getResultType

2020-03-04 Thread GitBox
JingsongLi merged pull request #11302: [FLINK-16414]fix sql validation failed 
when using udaf/udtf which doesn't implement getResultType
URL: https://github.com/apache/flink/pull/11302
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16430) Pipelined region scheduling

2020-03-04 Thread Zhu Zhu (Jira)
Zhu Zhu created FLINK-16430:
---

 Summary: Pipelined region scheduling
 Key: FLINK-16430
 URL: https://issues.apache.org/jira/browse/FLINK-16430
 Project: Flink
  Issue Type: Task
  Components: Runtime / Coordination
Affects Versions: 1.11.0
Reporter: Zhu Zhu
 Fix For: 1.11.0


Pipelined region scheduling aims to allow batch jobs with PIPELINED 
data exchanges to run without the risk of encountering a resource deadlock.

More details and work items will be added later when the detailed design is 
ready.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zjuwangg commented on issue #11310: [FLINK-16414]fix sql validation failed when using udaf/udtf which doesn't implement getResultType

2020-03-04 Thread GitBox
zjuwangg commented on issue #11310: [FLINK-16414]fix sql validation failed when 
using udaf/udtf which doesn't  implement getResultType
URL: https://github.com/apache/flink/pull/11310#issuecomment-594983646
 
 
   @flinkbot run travis


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #7368: [FLINK-10742][network] Let Netty use 
Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#issuecomment-567435407
 
 
   
   ## CI report:
   
   * 4d17b4de75015fade25228f8fa6668f0cf0d9dca UNKNOWN
   * ce48c4289d1f953d5f906c6e350c3ee8d971e225 UNKNOWN
   * bb9fa7f05b8aaf8bfdbd060830636fa53f9d8414 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151862771) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5937)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16416) Shutdown the task manager gracefully in standalone mode

2020-03-04 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051737#comment-17051737
 ] 

Yangze Guo commented on FLINK-16416:


[~azagrebin] Thanks for the explanation. I agree that there is no need to take 
care of gracefully shutting down the TM at the moment. I'll close this ticket.

> Shutdown the task manager gracefully in standalone mode
> ---
>
> Key: FLINK-16416
> URL: https://issues.apache.org/jira/browse/FLINK-16416
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Reporter: Yangze Guo
>Priority: Major
>
> Recently, I tried to add a new {{GPUManager}} to the {{TaskExecutorServices}}. 
> I registered the "GPUManager#close" function, in which I wrote some cleanup 
> logic, with {{TaskExecutorServices#shutDown}}. However, I found that the 
> cleanup logic does not run as expected in standalone mode.
>  After an investigation of the codebase, I found that 
> {{TaskExecutorServices#shutDown}} will be called only on a fatal error, while 
> we just kill the TM process in {{flink-daemon.sh}}. However, the LOG 
> shows that some services, e.g. TaskExecutorLocalStateStoresManager, did clean 
> themselves up by registering a {{shutdownHook}}.
>  If that is the right way, then we need to register a {{shutdownHook}} for 
> {{TaskExecutorServices}} as well.
>  If it is not, we may find another solution to shut down the TM gracefully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
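
For reference, a sketch of the shutdown-hook route the ticket describes, using Flink's
existing `ShutdownHookUtil`; the `services` variable and logger are illustrative:

```java
import org.apache.flink.util.ShutdownHookUtil;

// Register cleanup (e.g. a GPUManager#close) to run when the TM process is
// terminated by flink-daemon.sh, mirroring what
// TaskExecutorLocalStateStoresManager already does for itself.
final Thread shutdownHook = ShutdownHookUtil.addShutdownHook(
    services::shutDown, "TaskExecutorServices", LOG);
```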


[jira] [Resolved] (FLINK-16416) Shutdown the task manager gracefully in standalone mode

2020-03-04 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo resolved FLINK-16416.

Resolution: Won't Fix

> Shutdown the task manager gracefully in standalone mode
> ---
>
> Key: FLINK-16416
> URL: https://issues.apache.org/jira/browse/FLINK-16416
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Reporter: Yangze Guo
>Priority: Major
>
> Recently, I tried to add a new {{GPUManager}} to the {{TaskExecutorServices}}. 
> I registered the "GPUManager#close" function, in which I wrote some cleanup 
> logic, with {{TaskExecutorServices#shutDown}}. However, I found that the 
> cleanup logic does not run as expected in standalone mode.
>  After an investigation of the codebase, I found that 
> {{TaskExecutorServices#shutDown}} will be called only on a fatal error, while 
> we just kill the TM process in {{flink-daemon.sh}}. However, the LOG 
> shows that some services, e.g. TaskExecutorLocalStateStoresManager, did clean 
> themselves up by registering a {{shutdownHook}}.
>  If that is the right way, then we need to register a {{shutdownHook}} for 
> {{TaskExecutorServices}} as well.
>  If it is not, we may find another solution to shut down the TM gracefully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16416) Shutdown the task manager gracefully in standalone mode

2020-03-04 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo closed FLINK-16416.
--

> Shutdown the task manager gracefully in standalone mode
> ---
>
> Key: FLINK-16416
> URL: https://issues.apache.org/jira/browse/FLINK-16416
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Reporter: Yangze Guo
>Priority: Major
>
> Recently, I tried to add a new {{GPUManager}} to the {{TaskExecutorServices}}. 
> I registered the "GPUManager#close" function, in which I wrote some cleanup 
> logic, with {{TaskExecutorServices#shutDown}}. However, I found that the 
> cleanup logic does not run as expected in standalone mode.
>  After an investigation of the codebase, I found that 
> {{TaskExecutorServices#shutDown}} will be called only on a fatal error, while 
> we just kill the TM process in {{flink-daemon.sh}}. However, the LOG 
> shows that some services, e.g. TaskExecutorLocalStateStoresManager, did clean 
> themselves up by registering a {{shutdownHook}}.
>  If that is the right way, then we need to register a {{shutdownHook}} for 
> {{TaskExecutorServices}} as well.
>  If it is not, we may find another solution to shut down the TM gracefully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-15584) Give nested data type of ROWs in ValidationException

2020-03-04 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee resolved FLINK-15584.
--
Resolution: Fixed

> Give nested data type of ROWs in ValidationException
> 
>
> Key: FLINK-15584
> URL: https://issues.apache.org/jira/browse/FLINK-15584
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.9.1, 1.10.0, 1.11.0
>Reporter: Benoît Paris
>Assignee: Ayush Saxena
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In 
> {code:java}
> INSERT INTO baz_sink
> SELECT
>   a,
>   ROW(b, c)
> FROM foo_source{code}
> Schema mismatches will not get a proper level of detail, yielding the 
> following:
> Caused by: org.apache.flink.table.api.ValidationException: Field types of 
> query result and registered TableSink [baz_sink] do not match.
>  Query result schema: [a: Integer, EXPR$2: Row]
>  TableSink schema: [a: Integer, payload: Row]
> Leaving the user with an opaque 'Row' type to debug.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
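
A sketch of the direction of the fix: expand a ROW's fields when rendering the schema
in the exception message, instead of the opaque `Row`. The `RowTypeInfo` accessors are
the real Flink API; the method itself is illustrative:

```java
import org.apache.flink.api.java.typeutils.RowTypeInfo;

// Renders e.g. "Row(f0: Integer, f1: Integer, f2: Integer)" for the message.
static String describe(RowTypeInfo rowType) {
    StringBuilder sb = new StringBuilder("Row(");
    for (int i = 0; i < rowType.getArity(); i++) {
        if (i > 0) {
            sb.append(", ");
        }
        sb.append(rowType.getFieldNames()[i]).append(": ").append(rowType.getTypeAt(i));
    }
    return sb.append(')').toString();
}
```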


[jira] [Commented] (FLINK-15584) Give nested data type of ROWs in ValidationException

2020-03-04 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051732#comment-17051732
 ] 

Jingsong Lee commented on FLINK-15584:
--

[~rmetzger] Sorry for the late review.

Master: 3408eade79f59f7f9228c39ccc976ed5baab0581

release-1.10: 2949348e7be93fb0c60f7d329342155598d42dc5

> Give nested data type of ROWs in ValidationException
> 
>
> Key: FLINK-15584
> URL: https://issues.apache.org/jira/browse/FLINK-15584
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.9.1, 1.10.0, 1.11.0
>Reporter: Benoît Paris
>Assignee: Ayush Saxena
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In 
> {code:java}
> INSERT INTO baz_sink
> SELECT
>   a,
>   ROW(b, c)
> FROM foo_source{code}
> Schema mismatches will not get a proper level of detail, yielding the 
> following:
> Caused by: org.apache.flink.table.api.ValidationException: Field types of 
> query result and registered TableSink [baz_sink] do not match.
>  Query result schema: [a: Integer, EXPR$2: Row]
>  TableSink schema: [a: Integer, payload: Row]
> Leaving the user with an opaque 'Row' type to debug.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15584) Give nested data type of ROWs in ValidationException

2020-03-04 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15584:
-
Fix Version/s: 1.11.0
   1.10.1

> Give nested data type of ROWs in ValidationException
> 
>
> Key: FLINK-15584
> URL: https://issues.apache.org/jira/browse/FLINK-15584
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.9.1, 1.10.0, 1.11.0
>Reporter: Benoît Paris
>Assignee: Ayush Saxena
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In 
> {code:java}
> INSERT INTO baz_sink
> SELECT
>   a,
>   ROW(b, c)
> FROM foo_source{code}
> Schema mismatches will not get a proper level of detail, yielding the 
> following:
> Caused by: org.apache.flink.table.api.ValidationException: Field types of 
> query result and registered TableSink [baz_sink] do not match.
>  Query result schema: [a: Integer, EXPR$2: Row]
>  TableSink schema: [a: Integer, payload: Row]
> Leaving the user with an opaque 'Row' type to debug.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on issue #10978: [FLINK-15584] Give nested data type of ROWs in ValidationException.

2020-03-04 Thread GitBox
JingsongLi commented on issue #10978: [FLINK-15584] Give nested data type of 
ROWs in ValidationException.
URL: https://github.com/apache/flink/pull/10978#issuecomment-594979480
 
 
   Merged in: 3408eade79f59f7f9228c39ccc976ed5baab0581


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] jinglining commented on a change in pull request #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-04 Thread GitBox
jinglining commented on a change in pull request #11250: [FLINK-16302][rest]add 
log list and read log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#discussion_r388031349
 
 

 ##
 File path: docs/_includes/generated/rest_v1_dispatcher.html
 ##
 @@ -4128,3 +4128,116 @@
 
   
 
+
+
+
+/taskmanagers/:taskmanagerid/logs
+
+
+Verb: GET
+Response code: 200 OK
+
+
+Provides access to task manager logs list.
+
+
+Path parameters
+
+
+
+
+taskmanagerid - 32-character hexadecimal 
string that identifies a task manager.
+
+
+
+
+
+Request
+
+  
+
+{}
+  
+
+
+
+
+
+Response
+
+  
+
+{
+  "type" : "object",
+  "id" : 
"urn:jsonschema:org:apache:flink:runtime:rest:messages:taskmanager:LogsInfo",
+  "properties" : {
+"logs" : {
+  "type" : "array",
+  "items" : {
+"type" : "object",
+"id" : 
"urn:jsonschema:org:apache:flink:runtime:rest:messages:taskmanager:LogInfo",
+"properties" : {
+  "name" : {
+"type" : "string"
+  },
+  "size" : {
+"type" : "long"
+  }
+}
+  }
+}
+  }
+}
+  
+
+
+
+
+
+
+
+
+/taskmanagers/:taskmanagerid/log/:filename
+
 
 Review comment:
   As `/taskmanagers/:taskmanagerid/log` is not defined here, should this URL 
be defined here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
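
For reference, the response element described by the JSON schema above maps to a
simple POJO; the field names and types come from the schema, while the class shape
is a sketch:

```java
public class LogInfo {
    private final String name; // log file name, e.g. "taskmanager.log"
    private final long size;   // file size in bytes

    public LogInfo(String name, long size) {
        this.name = name;
        this.size = size;
    }

    public String getName() {
        return name;
    }

    public long getSize() {
        return size;
    }
}
```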


[GitHub] [flink] JingsongLi closed pull request #10978: [FLINK-15584] Give nested data type of ROWs in ValidationException.

2020-03-04 Thread GitBox
JingsongLi closed pull request #10978: [FLINK-15584] Give nested data type of 
ROWs in ValidationException.
URL: https://github.com/apache/flink/pull/10978
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on issue #10978: [FLINK-15584] Give nested data type of ROWs in ValidationException.

2020-03-04 Thread GitBox
JingsongLi commented on issue #10978: [FLINK-15584] Give nested data type of 
ROWs in ValidationException.
URL: https://github.com/apache/flink/pull/10978#issuecomment-594978475
 
 
   I will modify commit title to `[FLINK-15584][table-planner] Give nested data 
type of ROWs in ValidationException`, FYI. @ayushtkn 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10978: [FLINK-15584] Give nested data type of ROWs in ValidationException.

2020-03-04 Thread GitBox
JingsongLi commented on a change in pull request #10978: [FLINK-15584] Give 
nested data type of ROWs in ValidationException.
URL: https://github.com/apache/flink/pull/10978#discussion_r388029793
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/api/batch/sql/validation/InsertIntoValidationTest.scala
 ##
 @@ -74,4 +74,22 @@ class InsertIntoValidationTest extends TableTestBase {
 // must fail because partial insert is not supported yet.
 util.tableEnv.sqlUpdate(sql)
   }
+
+  @Test
+  def testValidationExceptionMessage(): Unit = {
+    expectedException.expect(classOf[ValidationException])
+    expectedException.expectMessage("TableSink schema:[a: Integer, b: Row" +
+      "(f0: Integer, f1: Integer, f2: Integer)]")
+    val util = batchTestUtil()
+    util.addTable[(Int, Long, String)]("sourceTable", 'a, 'b, 'c)
+    val fieldNames = Array("a", "b")
+    val fieldTypes: Array[TypeInformation[_]] = Array(Types.INT, Types.ROW
+      (Types.INT, Types.INT, Types.INT))
+    val sink = new MemoryTableSourceSinkUtil.UnsafeMemoryAppendTableSink
+    util.tableEnv.registerTableSink("targetTable", sink.configure(fieldNames,
+      fieldTypes))
+
+    val sql = "INSERT INTO targetTable SELECT a, b FROM sourceTable"
+
+    util.tableEnv.sqlUpdate(sql)}
 
 Review comment:
   Minor: `}` should be on a new line. I will modify it before merging.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #7368: [FLINK-10742][network] Let Netty use 
Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#issuecomment-567435407
 
 
   
   ## CI report:
   
   * 4d17b4de75015fade25228f8fa6668f0cf0d9dca UNKNOWN
   * ce48c4289d1f953d5f906c6e350c3ee8d971e225 UNKNOWN
   * bb9fa7f05b8aaf8bfdbd060830636fa53f9d8414 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151862771) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5937)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-16194) Refactor the Kubernetes decorator design

2020-03-04 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen closed FLINK-16194.
-
Resolution: Fixed

master via (reverse order)

dd8ce6c19eefeaf60c085365fe8bea960c1dace8
f6ad9685bcb8365277ffc2fa3fe1d54b2d445a28
10f53290dd07a64013623d7269c9d599dd7c2482
d29628ca6d64581f8627f81e81f05fa457a6b68d
4d8281a3c89514e16a923e20440095e974b9d91d
7f19c9cf14162c0f1b49e0b7a9bd7a0134cc7e3e
a50435d443395141fc526c6584ec9841e5968713
22735e3f3fdafc86b9f3c20b361a3830b30d44e3
8c7a17e6ef7e3da884e377437563d2508193a56d
20d0990f54272039e033f308cbb8a81fd0c7c000
0743b437c764a20b72ba4b14ad1e8f08755c2108
11fa0283d2abf7bfeca1a846a8163174dbfc3080

> Refactor the Kubernetes decorator design
> 
>
> Key: FLINK-16194
> URL: https://issues.apache.org/jira/browse/FLINK-16194
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> So far, Flink has made efforts toward the native integration of Kubernetes. 
> However, it is always essential to evaluate the existing design and consider 
> alternatives that have a better design and are easier to maintain in the long 
> run. We have suffered from some problems while developing new features based 
> on the current code. Here are some of them:
>  # We don’t have a unified monadic-step based orchestrator architecture to 
> construct all the Kubernetes resources.
>  ** There are inconsistencies between the orchestrator architecture that 
> client uses to create the Kubernetes resources, and the orchestrator 
> architecture that the master uses to create Pods; this confuses new 
> contributors, as there is a cognitive burden to understand two architectural 
> philosophies instead of one; for another, maintenance and new feature 
> development become quite challenging.
>  ** Pod construction is done in one step. With the introduction of new 
> features for the Pod, the construction process could become far more 
> complicated, and the functionality of a single class could explode, which 
> hurts code readability, writability, and testability. At the moment, we have 
> encountered such challenges and realized that it is not an easy thing to 
> develop new features related to the Pod.
>  ** The implementations of a specific feature are usually scattered in 
> multiple decoration classes. For example, the current design uses a 
> decoration class chain that contains five Decorator class to mount a 
> configuration file to the Pod. If people would like to introduce other 
> configuration files support, such as Hadoop configuration or Keytab files, 
> they have no choice but to repeat the same tedious and scattered process.
>  # We don’t have dedicated objects or tools for centrally parsing, verifying, 
> and managing the Kubernetes parameters, which has raised some maintenance and 
> inconsistency issues.
>  ** There are many duplicated parsing and validating code, including settings 
> of Image, ImagePullPolicy, ClusterID, ConfDir, Labels, etc. It not only harms 
> readability and testability but also is prone to mistakes. Refer to issue 
> FLINK-16025 for inconsistent parsing of the same parameter.
>  ** The parameters are scattered so that some of the method signatures have 
> to declare many unnecessary input parameters, such as 
> FlinkMasterDeploymentDecorator#createJobManagerContainer.
>  
> To solve these issues, we propose to 
>  # Introduce a unified monadic-step based orchestrator architecture that 
> provides a cleaner and more consistent abstraction of the Kubernetes 
> resource construction process (a minimal sketch of such a step chain 
> follows after this description).
>  # Add some dedicated tools for centrally parsing, verifying, and managing 
> the Kubernetes parameters.
>  
> Refer to the design doc for the details, any feedback is welcome.
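
To make the proposal concrete, below is a minimal, self-contained sketch of what a monadic-step decorator chain plus a central parameters holder could look like. All class names and the configuration key are illustrative assumptions for this note, not the identifiers that were actually merged; refer to the linked pull request for the real design.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

/** Immutable draft of the pod under construction (a simplified stand-in
 *  for the real Kubernetes model classes). */
final class PodDraft {
    final List<String> volumes;

    PodDraft(List<String> volumes) {
        this.volumes = volumes;
    }
}

/** One self-contained construction step; steps compose into a chain. */
interface StepDecorator {
    /** Takes the pod built so far and returns a further-decorated copy. */
    PodDraft decorate(PodDraft pod);
}

/** Example step: mounting the Flink configuration lives in exactly one
 *  place instead of being scattered over several decorator classes. */
final class FlinkConfMountStep implements StepDecorator {
    @Override
    public PodDraft decorate(PodDraft pod) {
        List<String> volumes = new ArrayList<>(pod.volumes);
        volumes.add("flink-conf-volume");
        return new PodDraft(volumes);
    }
}

/** Sketch of a dedicated parameters holder: each option is parsed and
 *  validated once, centrally, rather than in every decorator. */
final class KubernetesParameters {
    private final Map<String, String> config;

    KubernetesParameters(Map<String, String> config) {
        this.config = config;
    }

    String getImage() {
        String image = config.get("kubernetes.container.image");
        if (image == null || image.isEmpty()) {
            throw new IllegalArgumentException("container image must be set");
        }
        return image;
    }
}

public final class PodBuilder {
    /** Threads the draft pod through every step in order; the same chain
     *  abstraction can serve both the client and the master. */
    public static PodDraft build(List<StepDecorator> steps) {
        PodDraft pod = new PodDraft(new ArrayList<>());
        for (StepDecorator step : steps) {
            pod = step.decorate(pod);
        }
        return pod;
    }

    public static void main(String[] args) {
        PodDraft pod = build(Arrays.asList(new FlinkConfMountStep()));
        System.out.println(pod.volumes); // prints [flink-conf-volume]
    }
}
{code}

The design payoff is that each feature (mounting a file, setting resources, exposing ports) lives in exactly one step, and new features are added by appending a step rather than by editing a monolithic builder.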



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] TisonKun closed pull request #11233: [FLINK-16194][k8s] Refactor the Kubernetes decorator design

2020-03-04 Thread GitBox
TisonKun closed pull request #11233: [FLINK-16194][k8s] Refactor the Kubernetes 
decorator design
URL: https://github.com/apache/flink/pull/11233
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-16300) Rework SchedulerTestUtils with testing classes to replace mockito usages

2020-03-04 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-16300.
---
Resolution: Fixed

implemented via 007b8755462e2237ac39a3218ccb9d1377b76180

> Rework SchedulerTestUtils with testing classes to replace mockito usages
> 
>
> Key: FLINK-16300
> URL: https://issues.apache.org/jira/browse/FLINK-16300
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.11.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Mockito is used in SchedulerTestUtils to mock ExecutionVertex and Execution 
> for testing. It fails to stub every getter, so other tests that use it may 
> encounter NPE issues, e.g. on ExecutionVertex#getID().
> Mockito use is also discouraged in Flink tests, so I'd propose to rework the 
> utils with hand-written testing classes (a generic sketch of the technique 
> follows below).
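
As a generic illustration of that technique (not the actual code from the PR), a complete hand-written test double replaces the partial mock; the TaskInfo and TestingTaskInfo types below are hypothetical:

{code:java}
/** The (simplified, hypothetical) interface the code under test depends on. */
interface TaskInfo {
    String getId();
    String getTaskNameWithSubtaskIndex();
}

/** Every getter returns a sane value, so callers cannot hit the NPEs that
 *  an un-stubbed Mockito getter (silently returning null) can cause. */
final class TestingTaskInfo implements TaskInfo {
    private final String id;
    private final String name;

    TestingTaskInfo(String id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public String getId() {
        return id;
    }

    @Override
    public String getTaskNameWithSubtaskIndex() {
        return name;
    }
}
{code}

Unlike a mock, such a class is an ordinary compiled type: adding a method to the interface forces the test double to be updated at compile time instead of failing at runtime.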



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhuzhurk merged pull request #11230: [FLINK-16300] Rework SchedulerTestUtils with testing classes to replace mockito usages

2020-03-04 Thread GitBox
zhuzhurk merged pull request #11230: [FLINK-16300] Rework SchedulerTestUtils 
with testing classes to replace mockito usages
URL: https://github.com/apache/flink/pull/11230
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #7368: [FLINK-10742][network] Let Netty use 
Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#issuecomment-567435407
 
 
   
   ## CI report:
   
   * 4d17b4de75015fade25228f8fa6668f0cf0d9dca UNKNOWN
   * ce48c4289d1f953d5f906c6e350c3ee8d971e225 UNKNOWN
   * d0c0da56248bf4a072779b239144c84c42e54289 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151723618) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5915)
 
   * bb9fa7f05b8aaf8bfdbd060830636fa53f9d8414 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151862771) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5937)
 
   
   
   **Bot commands**
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16429) failed to restore flink job from checkpoints due to unhandled exceptions

2020-03-04 Thread Yu Yang (Jira)
Yu Yang created FLINK-16429:
---

 Summary: failed to restore flink job from checkpoints due to 
unhandled exceptions
 Key: FLINK-16429
 URL: https://issues.apache.org/jira/browse/FLINK-16429
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.9.1
Reporter: Yu Yang


We are trying to restore our Flink job from checkpoints and run into 
AskTimeoutException-related failures at a high frequency. Our environment is 
Hadoop 2.7.1 + YARN + Flink 1.9.1. 

We hit this issue in 9 out of 10 runs and were able to restore the application 
from the given checkpoints only occasionally. Since the application can 
sometimes be restored, the checkpoint files should not be corrupted. It seems 
the issue is that the JobMaster times out while handling the job submission 
request.

 

Below is the exception stack trace; it is thrown from

[https://github.com/apache/flink/blob/2ec645a5bfd3cfadaf0057412401e91da0b21873/flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/AbstractHandler.java#L209]

2020-03-05 00:04:14,360 ERROR org.apache.flink.runtime.rest.handler.job.JobSubmitHandler - Unhandled exception: httpRequest uri:/v1/jobs, context: ChannelHandlerContext(org.apache.flink.runtime.rest.handler.router.RouterHandler_ROUTED_HANDLER, [id: 0xc39aca33, L:/10.1.85.22:41000 - R:/10.1.16.251:44])
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/dispatcher#-34498396]] after [1 ms]. Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
	at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
	at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
	at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:648)
	at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
	at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
	at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
	at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
	at java.lang.Thread.run(Thread.java:748)
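
A common mitigation for an AskTimeoutException during job submission, assuming recovery is merely slow (for example, re-registering large state) rather than stuck, is to raise the relevant timeouts in flink-conf.yaml. Both keys below exist in Flink 1.9; treat the values as a starting point for experimentation, not as a verified fix for this report.

{code}
# flink-conf.yaml (sketch)
# Bounds the REST handlers' asks to the dispatcher, which is what the
# JobSubmitHandler above uses; the default is 10000 ms in Flink 1.9.
web.timeout: 60000

# Bounds internal actor (RPC) asks; the default is 10 s.
akka.ask.timeout: 60 s
{code}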



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #7368: [FLINK-10742][network] Let Netty use 
Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#issuecomment-567435407
 
 
   
   ## CI report:
   
   * 4d17b4de75015fade25228f8fa6668f0cf0d9dca UNKNOWN
   * ce48c4289d1f953d5f906c6e350c3ee8d971e225 UNKNOWN
   * d0c0da56248bf4a072779b239144c84c42e54289 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151723618) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5915)
 
   * bb9fa7f05b8aaf8bfdbd060830636fa53f9d8414 UNKNOWN
   
   
   **Bot commands**
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11306: [FLINK-16122][AZP] Upload build debug logs as artifacts

2020-03-04 Thread GitBox
flinkbot edited a comment on issue #11306: [FLINK-16122][AZP] Upload build 
debug logs as artifacts
URL: https://github.com/apache/flink/pull/11306#issuecomment-594429334
 
 
   
   ## CI report:
   
   * 60281cd4c56841b259a072e00d7a8597aedd4334 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151828232) 
   
   
   **Bot commands**
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

