[jira] [Updated] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-04-21 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-17274:
---
Fix Version/s: 1.11.0

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Priority: Critical
> Fix For: 1.11.0
>
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832) -> [Help 1]
> {code}
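For anyone hitting the same truncated-download failure locally, one common mitigation (a sketch, not the fix adopted in this ticket; the retry properties come from Maven Wagon 3.3+, and their suitability for this CI setup is an assumption) is to let Maven's HTTP transport retry interrupted transfers instead of failing on the first short read:

```shell
# Retry transient HTTP transfer failures a few times and recycle pooled
# connections sooner, so a half-closed mirror connection is less likely
# to truncate an artifact download mid-body.
mvn clean install \
  -Dmaven.wagon.http.retryHandler.count=3 \
  -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
```

Deleting the partially downloaded artifact from the local `~/.m2/repository` before retrying also avoids resolving against a corrupt jar.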



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-04-21 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-17274:
---
Affects Version/s: (was: 1.11.0)

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Priority: Critical
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832) -> [Help 1]
> {code}





[jira] [Updated] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-04-21 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-17274:
---
Priority: Critical  (was: Major)

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Critical
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832) -> [Help 1]
> {code}





[jira] [Commented] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-04-21 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17089308#comment-17089308
 ] 

Piotr Nowojski commented on FLINK-17274:


another instance: 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7847&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=54421a62-0c80-5aad-3319-094ff69180bb

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832) -> [Help 1]
> {code}





[GitHub] [flink] flinkbot edited a comment on issue #11850: [FLINK-17297] Log the lineage information between ExecutionAttemptID …

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11850:
URL: https://github.com/apache/flink/pull/11850#issuecomment-617559978


   
   ## CI report:
   
   * ba4ab2b747e1d13d1776e977523d9facd7865ffa Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161371639) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11829: [FLINK-17021][table-planner-blink] Blink batch planner set GlobalDataExchangeMode

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11829:
URL: https://github.com/apache/flink/pull/11829#issuecomment-616561190


   
   ## CI report:
   
   * c15a8214d2883f5a5c07bb6357dbfa465dd5cd39 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161357311) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11832: [FLINK-17148][python] Support converting pandas dataframe to flink table

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11832:
URL: https://github.com/apache/flink/pull/11832#issuecomment-616617636


   
   ## CI report:
   
   * b463662a390b45757a682cee7ea89ed52e1fa6f6 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161065718) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7796)
 
   * c2105e0170d56fb95b849c4b830ca7a7b9d48fd3 UNKNOWN
   * 1e4aee0d61292ed92724bcabb97ce307833366da Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161369936) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11845: [FLINK-17240][docs] Add tutorial on Event-driven Apps

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11845:
URL: https://github.com/apache/flink/pull/11845#issuecomment-617236902


   
   ## CI report:
   
   * e30b1b23a9eb209b9fdac786aed7010577d32e08 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161296189) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7885)
 
   * 6c5a02a0608ddcee595d7cad7bfaeacbaf9b46e0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on issue #11851: [FLINK-17298] Log the lineage information between SlotRequestID and A…

2020-04-21 Thread GitBox


flinkbot commented on issue #11851:
URL: https://github.com/apache/flink/pull/11851#issuecomment-617565355


   
   ## CI report:
   
   * 7d139f0b9c78f7a4dd986d38da0aecf4642ff047 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11702: [FLINK-16667][python][client] Support new Python dependency configuration options in flink-client.

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11702:
URL: https://github.com/apache/flink/pull/11702#issuecomment-612221236


   
   ## CI report:
   
   * b839e592856b50429a5097c7094d883fdbef1f2d Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161042493) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7788)
 
   * 9f0e138f75100bd547e4a7ca612c57df7729fec9 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161371530) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 1a28ea65925ab2094ef7496c4427bb635ff6f70c Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161361259) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20)
 
   * a1140f49a314268a6ad5272cf9bb8909ed01f0f7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11397: [FLINK-16217] [sql-client] catch all exceptions to avoid SQL client crashed

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11397:
URL: https://github.com/apache/flink/pull/11397#issuecomment-598575354


   
   ## CI report:
   
   * 3e020148288d2a3cddb7c8a6025e01f04898b4a0 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161361177) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Assigned] (FLINK-15006) Add option to close shuffle when dynamic partition inserting

2020-04-21 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-15006:


Assignee: Jingsong Lee

> Add option to close shuffle when dynamic partition inserting
> 
>
> Key: FLINK-15006
> URL: https://issues.apache.org/jira/browse/FLINK-15006
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> When partition values are rare or skewed, shuffling by dynamic partitions 
> will hurt performance.
> We can add an option to disable the shuffle in such cases:
> 'connector.sink.shuffle-by-partition.enable' = ...
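To make the proposal concrete, here is a sketch of a sink DDL string carrying the proposed switch; everything except the 'connector.sink.shuffle-by-partition.enable' key (taken from the issue text above) is a hypothetical placeholder:

```java
// Hypothetical sink DDL; only the shuffle-by-partition option comes from
// the proposal above. Disabling it keeps rare or skewed partition values
// from funnelling all records through a shuffle on the partition columns.
String ddl =
    "create table partitionedSink ("
        + "  a varchar(10),"
        + "  b decimal(20,2)"
        + ") partitioned by (a) with ("
        + "  'type' = 'filesystem',"
        + "  'connector.sink.shuffle-by-partition.enable' = 'false'"
        + ")";
```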





[GitHub] [flink] flinkbot commented on issue #11851: [FLINK-17298] Log the lineage information between SlotRequestID and A…

2020-04-21 Thread GitBox


flinkbot commented on issue #11851:
URL: https://github.com/apache/flink/pull/11851#issuecomment-617561608


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7d139f0b9c78f7a4dd986d38da0aecf4642ff047 (Wed Apr 22 
05:37:20 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-17298) Log the lineage information between SlotRequestID and AllocationID

2020-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17298:
---
Labels: pull-request-available  (was: )

> Log the lineage information between SlotRequestID and AllocationID
> --
>
> Key: FLINK-17298
> URL: https://issues.apache.org/jira/browse/FLINK-17298
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Minor
>  Labels: pull-request-available
>






[jira] [Commented] (FLINK-17313) Validation error when insert decimal/timestamp/varchar with precision into sink using TypeInformation of row

2020-04-21 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17089300#comment-17089300
 ] 

Jark Wu commented on FLINK-17313:
-

I think the root cause is that you are using the legacy type information, which 
doesn't integrate smoothly with the planner. Because of the complexity, we don't 
plan to support the new type system for UpsertStreamTableSink, but we will fully 
support it for the new FLIP-95 sink interface. I think your problem will be 
solved once FLIP-95 is finished. 


> Validation error when insert decimal/timestamp/varchar with precision into 
> sink using TypeInformation of row
> 
>
> Key: FLINK-17313
> URL: https://issues.apache.org/jira/browse/FLINK-17313
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Terry Wang
>Priority: Major
>  Labels: pull-request-available
>
> Test code like the following (in the blink planner):
> {code:java}
> tEnv.sqlUpdate("create table randomSource (" +
>         "  a varchar(10)," +
>         "  b decimal(20,2)" +
>         ") with (" +
>         "  'type' = 'random'," +
>         "  'count' = '10'" +
>         ")");
> tEnv.sqlUpdate("create table printSink (" +
>         "  a varchar(10)," +
>         "  b decimal(22,2)," +
>         "  c timestamp(3)" +
>         ") with (" +
>         "  'type' = 'print'" +
>         ")");
> tEnv.sqlUpdate("insert into printSink select *, current_timestamp from randomSource");
> tEnv.execute("");
> {code}
> Print TableSink implements UpsertStreamTableSink, and its getRecordType is as 
> follows:
> {code:java}
> public TypeInformation<Row> getRecordType() {
>   return getTableSchema().toRowType();
> }
> {code}
> Varchar column validation exception is:
> org.apache.flink.table.api.ValidationException: Type VARCHAR(10) of table field 'a' does not match with the physical type STRING of the 'a' field of the TableSink consumed type.
>   at org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:165)
>   at org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:278)
>   at org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:255)
>   at org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:67)
>   at org.apache.flink.table.types.logical.VarCharType.accept(VarCharType.java:157)
>   at org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:255)
>   at org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:161)
>   at org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:315)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>   at org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:308)
>   at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:195)
>   at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:191)
>   at scala.Option.map(Option.scala:146)
>   at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:191)
>   at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
>   at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at 

[GitHub] [flink] KarmaGYZ opened a new pull request #11851: [FLINK-17298] Log the lineage information between SlotRequestID and A…

2020-04-21 Thread GitBox


KarmaGYZ opened a new pull request #11851:
URL: https://github.com/apache/flink/pull/11851


   …llocationID
   
   
   
   ## What is the purpose of the change
   
   Log the lineage information between SlotRequestID and AllocationID.
   
   ## Brief change log
   
   Log the lineage information between SlotRequestID and AllocationID.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[GitHub] [flink] flinkbot commented on issue #11850: [FLINK-17297] Log the lineage information between ExecutionAttemptID …

2020-04-21 Thread GitBox


flinkbot commented on issue #11850:
URL: https://github.com/apache/flink/pull/11850#issuecomment-617559978


   
   ## CI report:
   
   * ba4ab2b747e1d13d1776e977523d9facd7865ffa UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11832: [FLINK-17148][python] Support converting pandas dataframe to flink table

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11832:
URL: https://github.com/apache/flink/pull/11832#issuecomment-616617636


   
   ## CI report:
   
   * b463662a390b45757a682cee7ea89ed52e1fa6f6 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161065718) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7796)
 
   * c2105e0170d56fb95b849c4b830ca7a7b9d48fd3 UNKNOWN
   * 1e4aee0d61292ed92724bcabb97ce307833366da UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11836: [FLINK-17188][python] Use pip instead of conda to install flake8 and sphinx

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11836:
URL: https://github.com/apache/flink/pull/11836#issuecomment-616922166


   
   ## CI report:
   
   * 8cfe6bcc6cee878730f9cfa75d7fa598f88a9833 UNKNOWN
   * 3ca11194996f3b2e7d51221f8f51ae928d93c3a3 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161355682) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=17)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11615:
URL: https://github.com/apache/flink/pull/11615#issuecomment-607717339


   
   ## CI report:
   
   * 2db316d977d0e790de8fa98b27dd219b68abb136 UNKNOWN
   * 0b0111ba893c9ecc7632394fc59e455f8d5c9db7 UNKNOWN
   * be9e8eb0b0906dff84686fb4db6ec72fd39f1292 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161042428) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7787)
 
   * ea11e8f4dcb5f24949e6de4a7e0bbeca3157daef Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161369895) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11727: [FLINK-17106][table] Support create and drop view in Flink SQL

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11727:
URL: https://github.com/apache/flink/pull/11727#issuecomment-613273432


   
   ## CI report:
   
   * fa5592cec0a6a5dcecc4f45c4bf72caf6e166eb4 UNKNOWN
   * a3901b149f603721be62fbb3d8bf46143d08c3d1 UNKNOWN
   * fabc951d769b00e1dac471b75ba2c99e62fcbf47 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161354179) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=15)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11702: [FLINK-16667][python][client] Support new Python dependency configuration options in flink-client.

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11702:
URL: https://github.com/apache/flink/pull/11702#issuecomment-612221236


   
   ## CI report:
   
   * b839e592856b50429a5097c7094d883fdbef1f2d Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161042493) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7788)
 
   * 9f0e138f75100bd547e4a7ca612c57df7729fec9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17314) 【Flink Kafka Connector】The Config(connector.topic)support list topic

2020-04-21 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17089297#comment-17089297
 ] 

Jark Wu commented on FLINK-17314:
-

Is the topic discovery [1] what you are looking for? And do you want to expose 
it to the SQL connector?

[1]: 
https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kafka.html#topic-discovery
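For reference, topic discovery in the DataStream Kafka consumer works by subscribing with a topic regex plus a discovery-interval property. A minimal sketch of the relevant pieces (the broker address, topic pattern, and interval are made-up values; the consumer construction is left as a comment since it needs the Kafka connector on the classpath):

```java
import java.util.Properties;
import java.util.regex.Pattern;

// Subscribe by regex rather than a fixed topic list; with the discovery
// interval set, new topics matching the pattern are picked up at runtime.
Properties props = new Properties();
props.setProperty("bootstrap.servers", "broker:9092");
props.setProperty("flink.partition-discovery.interval-millis", "30000");
Pattern topics = Pattern.compile("orders-.*");
// new FlinkKafkaConsumer<>(topics, new SimpleStringSchema(), props)
// would then consume from every topic whose name matches "orders-.*".
```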

> 【Flink Kafka Connector】The Config(connector.topic)support list topic
> 
>
> Key: FLINK-17314
> URL: https://issues.apache.org/jira/browse/FLINK-17314
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: zhisheng
>Priority: Major
>
> Sometimes we may consume from more than one topic, and the data schema in all 
> topics is the same.





[jira] [Updated] (FLINK-17314) 【Flink Kafka Connector】The Config(connector.topic)support list topic

2020-04-21 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-17314:

Component/s: Table SQL / Ecosystem
 Connectors / Kafka

> 【Flink Kafka Connector】The Config(connector.topic)support list topic
> 
>
> Key: FLINK-17314
> URL: https://issues.apache.org/jira/browse/FLINK-17314
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: zhisheng
>Priority: Major
>
> Sometimes we may consume from more than one topic, and the data schema in all 
> topics is the same





[GitHub] [flink] flinkbot commented on issue #11850: [FLINK-17297] Log the lineage information between ExecutionAttemptID …

2020-04-21 Thread GitBox


flinkbot commented on issue #11850:
URL: https://github.com/apache/flink/pull/11850#issuecomment-617557431


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit ba4ab2b747e1d13d1776e977523d9facd7865ffa (Wed Apr 22 
05:22:30 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] KarmaGYZ opened a new pull request #11850: [FLINK-17297] Log the lineage information between ExecutionAttemptID …

2020-04-21 Thread GitBox


KarmaGYZ opened a new pull request #11850:
URL: https://github.com/apache/flink/pull/11850


   …and SlotRequestID
   
   
   
   ## What is the purpose of the change
   
   Log the lineage information between ExecutionAttemptID and SlotRequestID. 
   
   ## Brief change log
   
   Log the lineage information between ExecutionAttemptID and SlotRequestID.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[jira] [Updated] (FLINK-17297) Log the lineage information between ExecutionAttemptID and SlotRequestID

2020-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17297:
---
Labels: pull-request-available  (was: )

> Log the lineage information between ExecutionAttemptID and SlotRequestID
> 
>
> Key: FLINK-17297
> URL: https://issues.apache.org/jira/browse/FLINK-17297
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Minor
>  Labels: pull-request-available
>






[GitHub] [flink] wuchong commented on a change in pull request #11766: [FLINK-16812][jdbc] support array types in PostgresRowConverter

2020-04-21 Thread GitBox


wuchong commented on a change in pull request #11766:
URL: https://github.com/apache/flink/pull/11766#discussion_r412677524



##
File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/source/row/converter/PostgresRowConverter.java
##
@@ -28,4 +34,39 @@
public PostgresRowConverter(RowType rowType) {
super(rowType);
}
+
+   @Override
+   public JDBCFieldConverter createConverter(LogicalType type) {
+   LogicalTypeRoot root = type.getTypeRoot();
+
+   if (root == LogicalTypeRoot.ARRAY) {
+   ArrayType arrayType = (ArrayType) type;
+   LogicalTypeRoot elemType = arrayType.getElementType().getTypeRoot();
+
+   if (elemType == LogicalTypeRoot.VARBINARY) {
+
+   return v -> {
+   PgArray pgArray = (PgArray) v;
+   Object[] in = (Object[]) pgArray.getArray();
+
+   Object[] out = new Object[in.length];
+   for (int i = 0; i < in.length; i++) {
+   out[i] = ((PGobject) in[i]).getValue().getBytes();
+   }
+
+   return out;
+   };
+   } else {
+   return v -> ((PgArray) v).getArray();

Review comment:
   I mean the current default implementation of `AbstractJDBCRowConverter` 
uses `v -> v` for the array conversion, which puts a raw `java.sql.Array` in a 
Row. 
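
For context, the branch under discussion maps each array element's driver-side value to `byte[]`. A self-contained sketch of that element-wise conversion, with plain `String`s standing in for pgjdbc's `PGobject` (an assumption made here so the snippet needs no driver on the classpath):

```java
// Element-wise array conversion sketch, mirroring the reviewed
// ((PGobject) in[i]).getValue().getBytes() loop: every element's
// string value becomes its byte representation.
public class ArrayConversionSketch {
    public static byte[][] toByteArrays(Object[] in) {
        byte[][] out = new byte[in.length][];
        for (int i = 0; i < in.length; i++) {
            out[i] = ((String) in[i]).getBytes();
        }
        return out;
    }
}
```

The point of the review comment is that without such a converter, the default `v -> v` path leaves a raw `java.sql.Array` in the row instead of Java-native values.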









[GitHub] [flink] flinkbot edited a comment on issue #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 1a28ea65925ab2094ef7496c4427bb635ff6f70c Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161361259) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11832: [FLINK-17148][python] Support converting pandas dataframe to flink table

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11832:
URL: https://github.com/apache/flink/pull/11832#issuecomment-616617636


   
   ## CI report:
   
   * b463662a390b45757a682cee7ea89ed52e1fa6f6 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161065718) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7796)
 
   * c2105e0170d56fb95b849c4b830ca7a7b9d48fd3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11615:
URL: https://github.com/apache/flink/pull/11615#issuecomment-607717339


   
   ## CI report:
   
   * 2db316d977d0e790de8fa98b27dd219b68abb136 UNKNOWN
   * 0b0111ba893c9ecc7632394fc59e455f8d5c9db7 UNKNOWN
   * be9e8eb0b0906dff84686fb4db6ec72fd39f1292 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161042428) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7787)
 
   * ea11e8f4dcb5f24949e6de4a7e0bbeca3157daef UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] WeiZhong94 commented on issue #11702: [FLINK-16667][python][client] Support new Python dependency configuration options in flink-client.

2020-04-21 Thread GitBox


WeiZhong94 commented on issue #11702:
URL: https://github.com/apache/flink/pull/11702#issuecomment-617551402


   @dianfu Thanks for your review! I have updated this PR according to your 
comments.







[GitHub] [flink] JingsongLi commented on a change in pull request #11755: [FLINK-14257][Connectors / FileSystem]Integrate csv to FileSystemTableFactory

2020-04-21 Thread GitBox


JingsongLi commented on a change in pull request #11755:
URL: https://github.com/apache/flink/pull/11755#discussion_r412665572



##
File path: 
flink-formats/flink-csv/src/test/java/org/apache/flink/formats/csv/BaseRowCsvInputformatTest.java
##
@@ -0,0 +1,798 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.fs.FileInputSplit;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.dataformat.BaseRow;
+import org.apache.flink.table.dataformat.GenericRow;
+import org.apache.flink.table.dataformat.SqlTimestamp;
+import org.apache.flink.table.dataformat.TypeGetterSetters;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.junit.Test;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.nio.charset.StandardCharsets;
+import java.time.LocalDate;
+import java.time.LocalDateTime;
+import java.time.LocalTime;
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Test suites for {@link BaseRowCsvInputformat}.
+ */
+public class BaseRowCsvInputformatTest {

Review comment:
   Can we abstract `RowCsvInputFormatTest` to reuse code?

##
File path: 
flink-formats/flink-csv/src/test/java/org/apache/flink/formats/csv/BaseRowCsvFilesystemITCase.java
##
@@ -0,0 +1,41 @@
+package org.apache.flink.formats.csv;
+
+import 
org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase;
+
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+
+/**
+ * Test for {@link BaseRowCsvFileSystemFormatFactory}.
+ */
+@RunWith(Parameterized.class)
+public class BaseRowCsvFilesystemITCase extends BatchFileSystemITCaseBase {

Review comment:
   After https://github.com/apache/flink/pull/11796 , please add streaming 
test too.

##
File path: 
flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/BaseRowCsvInputformat.java
##
@@ -0,0 +1,310 @@
+package org.apache.flink.formats.csv;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.core.fs.FileInputSplit;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.dataformat.BaseRow;
+import org.apache.flink.table.dataformat.BinaryString;
+import org.apache.flink.table.dataformat.DataFormatConverters;
+import org.apache.flink.table.dataformat.GenericRow;
+import org.apache.flink.table.filesystem.PartitionPathUtils;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.utils.TypeConversions;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Preconditions;
+
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.MappingIterator;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.NoSuchElementException;
+import java.util.stream.Collectors;
+
+import static 
org.apache.flink.formats.csv.CsvRowDeserializationSchema.createFieldRuntimeConverters;
+import static 
org.apache.flink.formats.csv.CsvRowDeserializationSchema.validateArity;
+import static 

[GitHub] [flink] flinkbot edited a comment on issue #11769: [FLINK-16766][python]Support create StreamTableEnvironment without pa…

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11769:
URL: https://github.com/apache/flink/pull/11769#issuecomment-614500814


   
   ## CI report:
   
   * 91e11790567ac173db216ac8d0a4bae76d2b81ae Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/160512177) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7577)
 
   * b052af39c5612f8ba67b3e650a3fdc3fdb8fdb9e Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161366540) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11849: [FLINK-17312][sqlclient] support sql client savepoint

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11849:
URL: https://github.com/apache/flink/pull/11849#issuecomment-617537614


   
   ## CI report:
   
   * c5817c1743c657627346fdf3e71b5e5ca9335e18 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161365046) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11829: [FLINK-17021][table-planner-blink] Blink batch planner set GlobalDataExchangeMode

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11829:
URL: https://github.com/apache/flink/pull/11829#issuecomment-616561190


   
   ## CI report:
   
   * c15a8214d2883f5a5c07bb6357dbfa465dd5cd39 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161357311) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink-training] alpinegizmo commented on issue #7: [hotfix] two description updates

2020-04-21 Thread GitBox


alpinegizmo commented on issue #7:
URL: https://github.com/apache/flink-training/pull/7#issuecomment-617546490


   +1 LGTM







[GitHub] [flink] KarmaGYZ commented on a change in pull request #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


KarmaGYZ commented on a change in pull request #11615:
URL: https://github.com/apache/flink/pull/11615#discussion_r412625928



##
File path: docs/_includes/generated/expert_scheduling_section.html
##
@@ -14,6 +14,12 @@
 Boolean
 Enable the slot spread out allocation strategy. This strategy 
tries to spread out the slots evenly across all available `TaskExecutors`.
 
+
+slotmanager.number-of-slots.max

Review comment:
   Yes, I'd like to. However, since it is a project-wide naming convention, 
I think it is probably out of the scope of this PR. Once we reach a consensus 
on the dev ML, it makes sense to open another PR to apply the convention to 
all existing configs. WDYT?









[GitHub] [flink] dianfu commented on issue #11836: [FLINK-17188][python] Use pip instead of conda to install flake8 and sphinx

2020-04-21 Thread GitBox


dianfu commented on issue #11836:
URL: https://github.com/apache/flink/pull/11836#issuecomment-617544976


   @HuangXingBo Thanks a lot for investigating this problem, and thanks 
@rmetzger for offering the help. I looked through the tests in [azure 
pipelines](https://dev.azure.com/rmetzger/Flink/_build?view=runs) and noticed 
that this problem did not appear yesterday (not sure whether it has appeared in 
the private azure pipelines). So I'm not confident that `these 16 tests passed` 
proves the problem would be solved by merging this PR. Maybe they passed just 
because the network was good enough yesterday? Do you think we should merge it 
and see whether it still happens, or wait a while and test again if these kinds 
of failures keep appearing frequently?







[GitHub] [flink] flinkbot edited a comment on issue #11836: [FLINK-17188][python] Use pip instead of conda to install flake8 and sphinx

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11836:
URL: https://github.com/apache/flink/pull/11836#issuecomment-616922166


   
   ## CI report:
   
   * 8cfe6bcc6cee878730f9cfa75d7fa598f88a9833 UNKNOWN
   * 3ca11194996f3b2e7d51221f8f51ae928d93c3a3 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161355682) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=17)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11849: [FLINK-17312][sqlclient] support sql client savepoint

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11849:
URL: https://github.com/apache/flink/pull/11849#issuecomment-617537614


   
   ## CI report:
   
   * c5817c1743c657627346fdf3e71b5e5ca9335e18 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161365046) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11769: [FLINK-16766][python]Support create StreamTableEnvironment without pa…

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11769:
URL: https://github.com/apache/flink/pull/11769#issuecomment-614500814


   
   ## CI report:
   
   * 91e11790567ac173db216ac8d0a4bae76d2b81ae Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/160512177) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7577)
 
   * b052af39c5612f8ba67b3e650a3fdc3fdb8fdb9e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 1a28ea65925ab2094ef7496c4427bb635ff6f70c Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161361259) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] lsyldliu commented on a change in pull request #11830: [FLINK-17096] [table] Support state ttl for Mini-Batch Group Agg using StateTtlConfig

2020-04-21 Thread GitBox


lsyldliu commented on a change in pull request #11830:
URL: https://github.com/apache/flink/pull/11830#discussion_r412659241



##
File path: 
flink-table/flink-table-runtime-blink/src/test/java/org/apache/flink/table/runtime/operators/aggregate/MiniBatchGroupAggFunctionTest.java
##
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.operators.aggregate;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.streaming.util.KeyedOneInputStreamOperatorTestHarness;
+import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;
+import org.apache.flink.table.dataformat.BaseRow;
+import org.apache.flink.table.runtime.operators.bundle.KeyedMapBundleOperator;
+import 
org.apache.flink.table.runtime.operators.bundle.trigger.CountBundleTrigger;
+import org.apache.flink.table.types.logical.RowType;
+
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.apache.flink.table.runtime.util.StreamRecordUtils.record;
+
+/**
+ * Tests for {@link MiniBatchGroupAggFunction}.
+ */
+public class MiniBatchGroupAggFunctionTest extends GroupAggFunctionTestBase {
+
+   private OneInputStreamOperatorTestHarness 
createTestHarness(
+   MiniBatchGroupAggFunction aggFunction
+   ) throws Exception {

Review comment:
   Okay









[jira] [Created] (FLINK-17315) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointMassivelyParallel failed in timeout

2020-04-21 Thread Zhijiang (Jira)
Zhijiang created FLINK-17315:


 Summary: 
UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointMassivelyParallel 
failed in timeout
 Key: FLINK-17315
 URL: https://issues.apache.org/jira/browse/FLINK-17315
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing, Tests
Reporter: Zhijiang
 Fix For: 1.11.0


Build: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=45cc9205-bdb7-5b54-63cd-89fdc0983323]

logs
{code:java}
2020-04-21T20:25:23.1139147Z [ERROR] Errors: 
2020-04-21T20:25:23.1140908Z [ERROR]   
UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointMassivelyParallel:80->execute:87
 » TestTimedOut
2020-04-21T20:25:23.1141383Z [INFO] 
2020-04-21T20:25:23.1141675Z [ERROR] Tests run: 1525, Failures: 0, Errors: 1, 
Skipped: 36
{code}
 
I ran it on my local machine and it takes about 40 seconds to finish, so the 
configured 90-second timeout sometimes seems not enough in a heavily loaded 
environment. Maybe we can remove the timeout from the tests, since Azure is 
already configured to monitor timeouts.
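 
The trade-off described here, a fixed per-test deadline that can fire on a slow machine, can be sketched with plain `java.util.concurrent` primitives (the names below are illustrative, not the test's actual code):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of a fixed per-test timeout: the task either finishes within
// the deadline or the wait fails with TimeoutException, which is exactly
// the flakiness a heavily loaded CI machine can trigger.
public class TimeoutSketch {
    public static <T> T runWithTimeout(Callable<T> task, long timeoutMillis) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(task).get(timeoutMillis, TimeUnit.MILLISECONDS);
        } finally {
            pool.shutdownNow(); // interrupt the task if it overran the deadline
        }
    }
}
```

Dropping the per-test deadline and relying on the CI-level job timeout removes this failure mode, at the cost of slower feedback when a test truly hangs.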
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xintongsong commented on a change in pull request #11323: [FLINK-16439][k8s] Make KubernetesResourceManager starts workers using WorkerResourceSpec requested by SlotManager

2020-04-21 Thread GitBox


xintongsong commented on a change in pull request #11323:
URL: https://github.com/apache/flink/pull/11323#discussion_r412654890



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManager.java
##
@@ -311,7 +322,9 @@ protected double getCpuCores(Configuration configuration) {
return 
TaskExecutorProcessUtils.getCpuCoresWithFallbackConfigOption(configuration, 
KubernetesConfigOptions.TASK_MANAGER_CPU);
}
 
-   private void internalStopPod(String podName) {
+   private Optional internalStopPod(String podName) {

Review comment:
   Could you explain which behaviors you suggest verifying for this method?
   This is a private internal method; the code paths invoking it should 
already be covered by `testStartAndStopWorker` and 
`testTaskManagerPodTerminated`.









[GitHub] [flink] flinkbot commented on issue #11849: [FLINK-17312][sqlclient] support sql client savepoint

2020-04-21 Thread GitBox


flinkbot commented on issue #11849:
URL: https://github.com/apache/flink/pull/11849#issuecomment-617537614


   
   ## CI report:
   
   * c5817c1743c657627346fdf3e71b5e5ca9335e18 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11848: [FLINK-17313]fix type validation error when sink table ddl exists column with precision of decimal/timestamp/varchar

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11848:
URL: https://github.com/apache/flink/pull/11848#issuecomment-617531785


   
   ## CI report:
   
   * 9b24a40f43a101b63ab73d1fa1e0c0f31178b36d Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161363103) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-04-21 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089263#comment-17089263
 ] 

Zhijiang commented on FLINK-17274:
--

Another instance 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8]

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832 -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] SteNicholas removed a comment on issue #11769: [FLINK-16766][python]Support create StreamTableEnvironment without pa…

2020-04-21 Thread GitBox


SteNicholas removed a comment on issue #11769:
URL: https://github.com/apache/flink/pull/11769#issuecomment-617534947


   @flinkbot run travis







[GitHub] [flink] SteNicholas commented on issue #11769: [FLINK-16766][python]Support create StreamTableEnvironment without pa…

2020-04-21 Thread GitBox


SteNicholas commented on issue #11769:
URL: https://github.com/apache/flink/pull/11769#issuecomment-617534947


   @flinkbot run travis







[jira] [Commented] (FLINK-17309) TPC-DS fail to run data generator

2020-04-21 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089261#comment-17089261
 ] 

Leonard Xu commented on FLINK-17309:


I checked that the related scripts have not been modified, and looked through 
some materials for the root cause, but did not find a reasonable answer.

Hi [~chesnay], could you help answer two questions? I'm not familiar with the 
Azure pipeline (many thanks).

(1) These test failures seem to be random. Is the bash environment of Azure's 
machines the same across pipelines?

(2) `dsdgen_linux` needs to be executable, so the script runs `chmod +x` before 
executing it. Is it possible that the `chmod +x` step fails?

> TPC-DS fail to run data generator
> -
>
> Key: FLINK-17309
> URL: https://issues.apache.org/jira/browse/FLINK-17309
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
>  Labels: test-stability
>
> {code}
> [INFO] Download data generator success.
> [INFO] 15:53:41 Generating TPC-DS qualification data, this need several 
> minutes, please wait...
> ./dsdgen_linux: line 1: 500:: command not found
> [FAIL] Test script contains errors.
> {code}
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7849=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5





[GitHub] [flink] flinkbot commented on issue #11849: [FLINK-17312][sqlclient] support sql client savepoint

2020-04-21 Thread GitBox


flinkbot commented on issue #11849:
URL: https://github.com/apache/flink/pull/11849#issuecomment-617534213


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c5817c1743c657627346fdf3e71b5e5ca9335e18 (Wed Apr 22 
03:56:15 UTC 2020)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-17312).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-17312) Support sql client savepoint

2020-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17312:
---
Labels: pull-request-available  (was: )

> Support sql client savepoint
> 
>
> Key: FLINK-17312
> URL: https://issues.apache.org/jira/browse/FLINK-17312
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.10.0, 1.11.0
>Reporter: lun zhang
>Priority: Major
>  Labels: pull-request-available
>
> Sql client  not support sql job savepoint current. It's important when you 
> use this in really world. 





[GitHub] [flink] mrzhangboss opened a new pull request #11849: [FLINK-17312][sqlclient] support sql client savepoint

2020-04-21 Thread GitBox


mrzhangboss opened a new pull request #11849:
URL: https://github.com/apache/flink/pull/11849


   
   
   ## What is the purpose of the change
   
   Support starting a SQL Client insert job from a savepoint.
   
   ## Brief change log
   
   
 - *Update the SQL Client documentation about execution*
 - *Add optional savepoint path config in 
`org.apache.flink.table.client.config.entries.ExecutionEntry`*
 - *Add Configuration when SQL update in 
`org.apache.flink.table.client.gateway.local.LocalExecutor`*
   
   
   ## Verifying this change
   
   This change does not add tests.
   
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (yes)
 - The runtime per-record code paths (performance sensitive): (yes)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs )
   







[GitHub] [flink] flinkbot commented on issue #11848: [FLINK-17313]fix type validation error when sink table ddl exists column with precision of decimal/timestamp/varchar

2020-04-21 Thread GitBox


flinkbot commented on issue #11848:
URL: https://github.com/apache/flink/pull/11848#issuecomment-617531785


   
   ## CI report:
   
   * 9b24a40f43a101b63ab73d1fa1e0c0f31178b36d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11801: [Flink-16662] [table-blink/clients]Fix the bug that the blink Planner failed to generate JobGraph when use the udf in checkpointing mode.

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11801:
URL: https://github.com/apache/flink/pull/11801#issuecomment-615388238


   
   ## CI report:
   
   * 01721ae7a8a1aa84931c80a3ef99580324944c4d Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/160771983) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7689)
 
   * 40bb9f80fe4a3944121756cf08a6beb843af8461 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161361316) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 336b9fb055fc2d3bcf622abbb7e2a73e4fb8f4a1 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161288974) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7881)
 
   * 1a28ea65925ab2094ef7496c4427bb635ff6f70c Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161361259) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=20)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11749: [FLINK-16669][python][table] Support Python UDF in SQL function DDL.

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11749:
URL: https://github.com/apache/flink/pull/11749#issuecomment-613901508


   
   ## CI report:
   
   * 0d7615e1c79289a0561529c8b2709ab5fa969ca4 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161355622) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11397: [FLINK-16217] [sql-client] catch all exceptions to avoid SQL client crashed

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11397:
URL: https://github.com/apache/flink/pull/11397#issuecomment-598575354


   
   ## CI report:
   
   * 14bde242e3aca096cbfc4e36782d055f8c01c119 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/160840112) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7708)
 
   * 3e020148288d2a3cddb7c8a6025e01f04898b4a0 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161361177) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17012) Expose stage of task initialization

2020-04-21 Thread Wenlong Lyu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089260#comment-17089260
 ] 

Wenlong Lyu commented on FLINK-17012:
-

[~pnowojski] I think whether the task is ready to process data is what users 
really care about. Including the initialization step in the DEPLOYING state of 
the task could be an option.
 
I agree that we should avoid `initialize()` or `configure()` if possible. 
Regarding initializing in the constructor: we would need extra checks (at 
least null checks) when cleaning up in the exception catch clause, and 
cleaning up externally would be impossible, which may cause a resource leak 
when we try to cancel a task by interrupting it, because the Invokable is not 
accessible if the constructor fails. We may need a Factory, as [~pnowojski] 
suggested, which constructs and initializes all components (state backend, 
operator chain, etc.), creates the task by providing them explicitly, and 
cleans them up when necessary.
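
The Factory idea described above can be sketched with a minimal, 
self-contained example (all names invented; this is not Flink's actual 
implementation): the factory constructs and initializes every component 
before the task object exists, and closes the components already created if 
a later step fails, so cleanup never depends on a half-constructed Invokable.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

class TaskFactory {

    // A component that needs explicit cleanup (state backend, operator chain, ...).
    interface Component extends AutoCloseable {
        @Override
        void close(); // narrowed override: no checked exception
    }

    // The task receives fully initialized components explicitly.
    static final class Task {
        final Component stateBackend;
        final Component operatorChain;

        Task(Component stateBackend, Component operatorChain) {
            this.stateBackend = stateBackend;
            this.operatorChain = operatorChain;
        }
    }

    // Construct and initialize all components first; if any step throws,
    // close the ones already created (newest first) before re-throwing.
    static Task create(Supplier<Component> stateBackendFactory,
                       Supplier<Component> operatorChainFactory) {
        Deque<Component> created = new ArrayDeque<>();
        try {
            Component stateBackend = stateBackendFactory.get();
            created.push(stateBackend);
            Component operatorChain = operatorChainFactory.get();
            created.push(operatorChain);
            return new Task(stateBackend, operatorChain);
        } catch (RuntimeException e) {
            while (!created.isEmpty()) {
                created.pop().close();
            }
            throw e;
        }
    }
}
```

With this shape, cancelling or failing mid-construction cannot leak 
resources, because the factory, not the task, owns cleanup until the Task is 
fully built.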

> Expose stage of task initialization
> ---
>
> Key: FLINK-17012
> URL: https://issues.apache.org/jira/browse/FLINK-17012
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics, Runtime / Task
>Reporter: Wenlong Lyu
>Priority: Major
>
> Currently a task switches to RUNNING before it is fully initialized: state 
> initialization and operator initialization (#open), which may take a long 
> time to finish, are not taken into account. As a result, there is a weird 
> phenomenon where all tasks are running but throughput is 0. 
> I think it would be good if we could expose the initialization stage of 
> tasks. What do you think?





[GitHub] [flink] flinkbot edited a comment on issue #11727: [FLINK-17106][table] Support create and drop view in Flink SQL

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11727:
URL: https://github.com/apache/flink/pull/11727#issuecomment-613273432


   
   ## CI report:
   
   * fa5592cec0a6a5dcecc4f45c4bf72caf6e166eb4 UNKNOWN
   * a3901b149f603721be62fbb3d8bf46143d08c3d1 UNKNOWN
   * fabc951d769b00e1dac471b75ba2c99e62fcbf47 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161354179) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=15)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on issue #11848: [FLINK-17313]fix type validation error when sink table ddl exists column with precision of decimal/timestamp/varchar

2020-04-21 Thread GitBox


flinkbot commented on issue #11848:
URL: https://github.com/apache/flink/pull/11848#issuecomment-617530206


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9b24a40f43a101b63ab73d1fa1e0c0f31178b36d (Wed Apr 22 
03:39:45 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-17313).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Commented] (FLINK-17313) Validation error when insert decimal/timestamp/varchar with precision into sink using TypeInformation of row

2020-04-21 Thread Terry Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089258#comment-17089258
 ] 

Terry Wang commented on FLINK-17313:


I opened 
[https://github.com/apache/flink/pull/11848|https://github.com/apache/flink/pull/11848]
 to help understand and resolve this validation exception.

> Validation error when insert decimal/timestamp/varchar with precision into 
> sink using TypeInformation of row
> 
>
> Key: FLINK-17313
> URL: https://issues.apache.org/jira/browse/FLINK-17313
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Terry Wang
>Priority: Major
>  Labels: pull-request-available
>
> Test code like the following (in the blink planner):
> {code:java}
> tEnv.sqlUpdate("create table randomSource (" +
> 		"   a varchar(10)," +
> 		"   b decimal(20,2)" +
> 		") with (" +
> 		"   'type' = 'random'," +
> 		"   'count' = '10'" +
> 		")");
> tEnv.sqlUpdate("create table printSink (" +
> 		"   a varchar(10)," +
> 		"   b decimal(22,2)," +
> 		"   c timestamp(3)," +
> 		") with (" +
> 		"   'type' = 'print'" +
> 		")");
> tEnv.sqlUpdate("insert into printSink select *, current_timestamp from randomSource");
> tEnv.execute("");
> {code}
> Print TableSink implements UpsertStreamTableSink, and its getRecordType is as 
> follows:
> {code:java}
> public TypeInformation getRecordType() {
> 	return getTableSchema().toRowType();
> }
> {code}
> Varchar column validation exception is:
> org.apache.flink.table.api.ValidationException: Type VARCHAR(10) of table 
> field 'a' does not match with the physical type STRING of the 'a' field of 
> the TableSink consumed type.
>   at 
> org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:165)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:278)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:255)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:67)
>   at 
> org.apache.flink.table.types.logical.VarCharType.accept(VarCharType.java:157)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:255)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:161)
>   at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:315)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>   at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:308)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:195)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:191)
>   at scala.Option.map(Option.scala:146)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:191)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at 

[jira] [Updated] (FLINK-17313) Validation error when insert decimal/timestamp/varchar with precision into sink using TypeInformation of row

2020-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17313:
---
Labels: pull-request-available  (was: )

> Validation error when insert decimal/timestamp/varchar with precision into 
> sink using TypeInformation of row
> 
>
> Key: FLINK-17313
> URL: https://issues.apache.org/jira/browse/FLINK-17313
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Terry Wang
>Priority: Major
>  Labels: pull-request-available
>
> Test code like the following (in the blink planner):
> {code:java}
> tEnv.sqlUpdate("create table randomSource (" +
> 		"   a varchar(10)," +
> 		"   b decimal(20,2)" +
> 		") with (" +
> 		"   'type' = 'random'," +
> 		"   'count' = '10'" +
> 		")");
> tEnv.sqlUpdate("create table printSink (" +
> 		"   a varchar(10)," +
> 		"   b decimal(22,2)," +
> 		"   c timestamp(3)," +
> 		") with (" +
> 		"   'type' = 'print'" +
> 		")");
> tEnv.sqlUpdate("insert into printSink select *, current_timestamp from randomSource");
> tEnv.execute("");
> {code}
> Print TableSink implements UpsertStreamTableSink, and its getRecordType is as 
> follows:
> {code:java}
> public TypeInformation getRecordType() {
> 	return getTableSchema().toRowType();
> }
> {code}
> Varchar column validation exception is:
> org.apache.flink.table.api.ValidationException: Type VARCHAR(10) of table 
> field 'a' does not match with the physical type STRING of the 'a' field of 
> the TableSink consumed type.
>   at 
> org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:165)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:278)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:255)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:67)
>   at 
> org.apache.flink.table.types.logical.VarCharType.accept(VarCharType.java:157)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:255)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:161)
>   at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:315)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>   at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:308)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:195)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:191)
>   at scala.Option.map(Option.scala:146)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:191)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
>   at 
> 

[GitHub] [flink] zjuwangg opened a new pull request #11848: [FLINK-17313]fix type validation error when sink table ddl exists column with precision of decimal/timestamp/varchar

2020-04-21 Thread GitBox


zjuwangg opened a new pull request #11848:
URL: https://github.com/apache/flink/pull/11848


   ## What is the purpose of the change
   
   * Currently the `TypeMappingUtils#checkPhysicalLogicalTypeCompatible` method 
does not distinguish between the validation logic for sources and for sinks: 
in a source, the logical type should be able to cover the physical type, while 
in a sink the physical type should be able to cover the logical type. Besides, 
the decimal type needs more careful handling: when the target type is 
Legacy(Decimal), it should accept a decimal type of any precision.*
   * This PR aims to solve the above problem.*
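
   The direction-dependent check described above can be sketched with a toy, 
self-contained helper (invented names, modelling only VARCHAR lengths; the 
real `TypeMappingUtils` handles full logical types):

```java
// Toy sketch of source/sink asymmetry in type compatibility checks.
// Lengths model VARCHAR(n); UNLIMITED models STRING.
class TypeCompat {
    static final int UNLIMITED = Integer.MAX_VALUE;

    // covers(a, b): a VARCHAR of length a can hold every VARCHAR of length b.
    static boolean covers(int a, int b) {
        return a >= b;
    }

    // Source: the declared logical type must cover what physically arrives.
    // Sink: the physically consumed type must cover what we declare to write.
    static boolean compatible(int logicalLen, int physicalLen, boolean isSource) {
        return isSource ? covers(logicalLen, physicalLen)
                        : covers(physicalLen, logicalLen);
    }
}
```

   Under this rule, writing a declared VARCHAR(10) column into a sink whose 
physical type is STRING is accepted (the physical type covers the logical 
one), which is exactly the case the reported ValidationException rejects.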
   
   
   ## Brief change log
   
 - 9b24a40 correct the validation logic
   
   ## Verifying this change
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no )
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   







[jira] [Created] (FLINK-17314) 【Flink Kafka Connector】The Config(connector.topic)support list topic

2020-04-21 Thread zhisheng (Jira)
zhisheng created FLINK-17314:


 Summary: 【Flink Kafka Connector】The Config(connector.topic)support 
list topic
 Key: FLINK-17314
 URL: https://issues.apache.org/jira/browse/FLINK-17314
 Project: Flink
  Issue Type: Improvement
Reporter: zhisheng


Sometimes we may consume from more than one topic, and the data schema in all 
topics is the same.
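A minimal sketch of what list support for such a config could look like. The 
semicolon separator and the helper name below are assumptions for illustration 
only, not the actual connector option format:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class TopicConfigSketch {
    // Hypothetical: split a 'connector.topic' value like "orders;payments"
    // into the list of topics the consumer should subscribe to.
    static List<String> parseTopics(String value) {
        return Arrays.stream(value.split(";"))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toList());
    }
}
```

At the DataStream API level the Kafka consumer can already subscribe to a list 
of topics, so a change like this would mainly concern the DDL/descriptor layer.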



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17313) Validation error when insert decimal/timestamp/varchar with precision into sink using TypeInformation of row

2020-04-21 Thread Terry Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Terry Wang updated FLINK-17313:
---
Description: 
Test code like the following (in blink planner):
{code:java}
tEnv.sqlUpdate("create table randomSource (" +
        "   a varchar(10)," +
        "   b decimal(20,2)" +
        ") with (" +
        "   'type' = 'random'," +
        "   'count' = '10'" +
        ")");
tEnv.sqlUpdate("create table printSink (" +
        "   a varchar(10)," +
        "   b decimal(22,2)," +
        "   c timestamp(3)" +
        ") with (" +
        "   'type' = 'print'" +
        ")");
tEnv.sqlUpdate("insert into printSink select *, current_timestamp from randomSource");
tEnv.execute("");
{code}

Print TableSink implements UpsertStreamTableSink, and its getRecordType is as 
follows:


{code:java}
public TypeInformation<Row> getRecordType() {
    return getTableSchema().toRowType();
}
{code}


Varchar column validation exception is:

org.apache.flink.table.api.ValidationException: Type VARCHAR(10) of table field 
'a' does not match with the physical type STRING of the 'a' field of the 
TableSink consumed type.

at 
org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:165)
at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:278)
at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:255)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:67)
at 
org.apache.flink.table.types.logical.VarCharType.accept(VarCharType.java:157)
at 
org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:255)
at 
org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:161)
at 
org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:315)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at 
org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:308)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:195)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:191)
at scala.Option.map(Option.scala:146)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:191)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:863)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:855)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:822)

The other type validation exceptions are similar. I dug into it and think it's 
caused by TypeMappingUtils#checkPhysicalLogicalTypeCompatible. It seems that 
the method doesn't consider the different physical and logical type validation 
logic of source and sink: the logical type should be able to cover the physical type in 

[GitHub] [flink] flinkbot edited a comment on issue #11801: [Flink-16662] [table-blink/clients]Fix the bug that the blink Planner failed to generate JobGraph when use the udf in checkpointing mode.

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11801:
URL: https://github.com/apache/flink/pull/11801#issuecomment-615388238


   
   ## CI report:
   
   * 01721ae7a8a1aa84931c80a3ef99580324944c4d Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/160771983) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7689)
 
   * 40bb9f80fe4a3944121756cf08a6beb843af8461 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 336b9fb055fc2d3bcf622abbb7e2a73e4fb8f4a1 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161288974) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7881)
 
   * 1a28ea65925ab2094ef7496c4427bb635ff6f70c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11397: [FLINK-16217] [sql-client] catch all exceptions to avoid SQL client crashed

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11397:
URL: https://github.com/apache/flink/pull/11397#issuecomment-598575354


   
   ## CI report:
   
   * 14bde242e3aca096cbfc4e36782d055f8c01c119 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/160840112) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7708)
 
   * 3e020148288d2a3cddb7c8a6025e01f04898b4a0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-17273) Fix not calling ResourceManager#closeTaskManagerConnection in KubernetesResourceManager in case of registered TaskExecutor failure

2020-04-21 Thread Canbin Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089254#comment-17089254
 ] 

Canbin Zheng edited comment on FLINK-17273 at 4/22/20, 3:25 AM:


Thanks for your attention! [~xintongsong] [~trohrmann] [~fly_in_gis] Given the 
following call stack,
{quote}{{ResourceManager#releaseResource}}

   - {{KubernetesResourceManager#stopWorker}}

      - {{KubernetesResourceManager#internalStopPod}}

   - {{ResourceManager#closeTaskManagerConnection}}
{quote}
 

I think it's enough to explicitly call 
{{ResourceManager#closeTaskManagerConnection}} in 
{{KubernetesResourceManager#removePodIfTerminated}} for this issue.


was (Author: felixzheng):
Thanks for your attention! [~xintongsong] [~fly_in_gis] Given the following 
call stack,
{quote}{{ResourceManager#releaseResource}}

   - {{KubernetesResourceManager#stopWorker}}

      - {{KubernetesResourceManager#internalStopPod}}

   - {{ResourceManager#closeTaskManagerConnection}}
{quote}
 

I think it's enough to explicitly call 
{{ResourceManager#closeTaskManagerConnection}} in 
{{KubernetesResourceManager#removePodIfTerminated}} for this issue.

> Fix not calling ResourceManager#closeTaskManagerConnection in 
> KubernetesResourceManager in case of registered TaskExecutor failure
> --
>
> Key: FLINK-17273
> URL: https://issues.apache.org/jira/browse/FLINK-17273
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
> Fix For: 1.11.0
>
>
> At the moment, the {{KubernetesResourceManager}} does not call the method of 
> {{ResourceManager#closeTaskManagerConnection}} once it detects that a 
> currently registered task executor has failed. This ticket proposes to fix 
> this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17273) Fix not calling ResourceManager#closeTaskManagerConnection in KubernetesResourceManager in case of registered TaskExecutor failure

2020-04-21 Thread Canbin Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089254#comment-17089254
 ] 

Canbin Zheng commented on FLINK-17273:
--

Thanks for your attention! [~xintongsong] [~fly_in_gis] Given the following 
call stack,
{quote}{{ResourceManager#releaseResource}}

   - {{KubernetesResourceManager#stopWorker}}

      - {{KubernetesResourceManager#internalStopPod}}

   - {{ResourceManager#closeTaskManagerConnection}}
{quote}
 

I think it's enough to explicitly call 
{{ResourceManager#closeTaskManagerConnection}} in 
{{KubernetesResourceManager#removePodIfTerminated}} for this issue.

> Fix not calling ResourceManager#closeTaskManagerConnection in 
> KubernetesResourceManager in case of registered TaskExecutor failure
> --
>
> Key: FLINK-17273
> URL: https://issues.apache.org/jira/browse/FLINK-17273
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
> Fix For: 1.11.0
>
>
> At the moment, the {{KubernetesResourceManager}} does not call the method of 
> {{ResourceManager#closeTaskManagerConnection}} once it detects that a 
> currently registered task executor has failed. This ticket proposes to fix 
> this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17313) Validation error when insert decimal/timestamp/varchar with precision into sink using TypeInformation of row

2020-04-21 Thread Terry Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Terry Wang updated FLINK-17313:
---
Description: 
Test code like the following (in blink planner):
{code:java}
tEnv.sqlUpdate("create table randomSource (" +
        "   a varchar(10)," +
        "   b decimal(20,2)" +
        ") with (" +
        "   'type' = 'random'," +
        "   'count' = '10'" +
        ")");
tEnv.sqlUpdate("create table printSink (" +
        "   a varchar(10)," +
        "   b decimal(22,2)," +
        "   c timestamp(3)" +
        ") with (" +
        "   'type' = 'print'" +
        ")");
tEnv.sqlUpdate("insert into printSink select *, current_timestamp from randomSource");
tEnv.execute("");
{code}

Print TableSink implements UpsertStreamTableSink, and its getRecordType is as 
follows:


{code:java}
public TypeInformation<Row> getRecordType() {
    return getTableSchema().toRowType();
}
{code}


Varchar column validation exception is:

org.apache.flink.table.api.ValidationException: Type VARCHAR(10) of table field 
'a' does not match with the physical type STRING of the 'a' field of the 
TableSink consumed type.

at 
org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:165)
at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:278)
at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:255)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:67)
at 
org.apache.flink.table.types.logical.VarCharType.accept(VarCharType.java:157)
at 
org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:255)
at 
org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:161)
at 
org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:315)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at 
org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:308)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:195)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:191)
at scala.Option.map(Option.scala:146)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:191)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:863)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:855)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:822)

The other type validation exceptions are similar. I dug into it and think it's 
caused by TypeMappingUtils#checkPhysicalLogicalTypeCompatible. It seems that 
the method doesn't consider the different physical and logical type validation 
logic of source and sink.






  was:
Test code like the following (in blink planner):

[jira] [Commented] (FLINK-17313) Validation error when insert decimal/timestamp/varchar with precision into sink using TypeInformation of row

2020-04-21 Thread Terry Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089251#comment-17089251
 ] 

Terry Wang commented on FLINK-17313:


cc [~ykt836] [~jark] [~dwysakowicz], please have a look at this issue.

> Validation error when insert decimal/timestamp/varchar with precision into 
> sink using TypeInformation of row
> 
>
> Key: FLINK-17313
> URL: https://issues.apache.org/jira/browse/FLINK-17313
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Terry Wang
>Priority: Major
>
> Test code like the following (in blink planner):
> {code:java}
>   tEnv.sqlUpdate("create table randomSource (" +
>           "   a varchar(10)," +
>           "   b decimal(20,2)" +
>           ") with (" +
>           "   'type' = 'random'," +
>           "   'count' = '10'" +
>           ")");
>   tEnv.sqlUpdate("create table printSink (" +
>           "   a varchar(10)," +
>           "   b decimal(22,2)," +
>           "   c timestamp(3)" +
>           ") with (" +
>           "   'type' = 'print'" +
>           ")");
>   tEnv.sqlUpdate("insert into printSink select *, current_timestamp from randomSource");
>   tEnv.execute("");
> {code}
> Print TableSink implements UpsertStreamTableSink, and its getRecordType is as 
> follows:
> {code:java}
> public TypeInformation<Row> getRecordType() {
>     return getTableSchema().toRowType();
> }
> {code}
> Varchar column validation exception is:
> org.apache.flink.table.api.ValidationException: Type VARCHAR(10) of table 
> field 'a' does not match with the physical type STRING of the 'a' field of 
> the TableSink consumed type.
>   at 
> org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:165)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:278)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:255)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:67)
>   at 
> org.apache.flink.table.types.logical.VarCharType.accept(VarCharType.java:157)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:255)
>   at 
> org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:161)
>   at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:315)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>   at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:308)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:195)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:191)
>   at scala.Option.map(Option.scala:146)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:191)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
>   

[GitHub] [flink] xintongsong commented on a change in pull request #11323: [FLINK-16439][k8s] Make KubernetesResourceManager starts workers using WorkerResourceSpec requested by SlotManager

2020-04-21 Thread GitBox


xintongsong commented on a change in pull request #11323:
URL: https://github.com/apache/flink/pull/11323#discussion_r412637786



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManager.java
##
@@ -239,70 +247,73 @@ private void recoverWorkerNodesFromPreviousAttempts() 
throws ResourceManagerExce
++currentMaxAttemptId);
}
 
-   private void requestKubernetesPod() {
-   numPendingPodRequests++;
+   private void requestKubernetesPod(WorkerResourceSpec 
workerResourceSpec) {
+   final KubernetesTaskManagerParameters parameters =
+   
createKubernetesTaskManagerParameters(workerResourceSpec);
+
+   final KubernetesPod taskManagerPod =
+   
KubernetesTaskManagerFactory.createTaskManagerComponent(parameters);
+   kubeClient.createTaskManagerPod(taskManagerPod)
+   .whenComplete(
+   (ignore, throwable) -> {
+   if (throwable != null) {
+   final Time retryInterval = 
configuration.getPodCreationRetryInterval();
+   log.error("Could not start 
TaskManager in pod {}, retry in {}. ",
+   
taskManagerPod.getName(), retryInterval, throwable);
+   scheduleRunAsync(
+   () -> 
requestKubernetesPodIfRequired(workerResourceSpec),
+   retryInterval);
+   } else {
+   
podWorkerResources.put(parameters.getPodName(), workerResourceSpec);
+   final int pendingWorkerNum = 
notifyNewWorkerRequested(workerResourceSpec);

Review comment:
   True.
   I'll move these two lines to before `kubeClient.createTaskManagerPod` (which 
runs on the main thread), and clean up the state if 
`kubeClient.createTaskManagerPod` completes exceptionally.
   To guarantee the state cleanup also happens on the main thread and before 
the retry, I'll wrap it in another `runAsync`.
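   The pattern described above (record the pending state up front on the main 
thread, then hop back to the main thread to clean it up if the asynchronous 
pod creation fails) can be sketched with plain JDK primitives. The 
single-thread executor below is a stand-in for the RPC main thread, and all 
names are illustrative, not the actual KubernetesResourceManager code:

   ```java
   import java.util.Map;
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;

   public class MainThreadCleanupSketch {
       // Stand-in for the resource manager's RPC main thread.
       static final ExecutorService mainThread = Executors.newSingleThreadExecutor();
       // Stand-in for podWorkerResources: state recorded before the async create call.
       static final Map<String, String> podWorkerResources = new ConcurrentHashMap<>();

       static void createPod(String podName, CompletableFuture<Void> createResult) {
           // Record the state up front (conceptually: on the main thread, before the call).
           podWorkerResources.put(podName, "worker-spec");
           createResult.whenComplete((ok, failure) -> {
               if (failure != null) {
                   // Hop back to the main thread to clean up before any retry runs.
                   mainThread.execute(() -> podWorkerResources.remove(podName));
               }
           });
       }

       // Demo: a pod creation that fails leaves no stale entry behind.
       static boolean demoFailureCleanup() {
           CompletableFuture<Void> failed = new CompletableFuture<>();
           failed.completeExceptionally(new RuntimeException("pod creation failed"));
           createPod("taskmanager-pod-1", failed);
           try {
               mainThread.submit(() -> { }).get(); // drain the main-thread queue
           } catch (Exception e) {
               return false;
           } finally {
               mainThread.shutdown();
           }
           return podWorkerResources.isEmpty();
       }
   }
   ```

   Running the cleanup on the same single thread that schedules retries is 
what removes the race between the failed attempt's bookkeeping and the retry.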





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17313) Validation error when insert decimal/timestamp/varchar with precision into sink using TypeInformation of row

2020-04-21 Thread Terry Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Terry Wang updated FLINK-17313:
---
Description: 
Test code like the following (in blink planner):
{code:java}
tEnv.sqlUpdate("create table randomSource (" +
        "   a varchar(10)," +
        "   b decimal(20,2)" +
        ") with (" +
        "   'type' = 'random'," +
        "   'count' = '10'" +
        ")");
tEnv.sqlUpdate("create table printSink (" +
        "   a varchar(10)," +
        "   b decimal(22,2)," +
        "   c timestamp(3)" +
        ") with (" +
        "   'type' = 'print'" +
        ")");
tEnv.sqlUpdate("insert into printSink select *, current_timestamp from randomSource");
tEnv.execute("");
{code}

Print TableSink implements UpsertStreamTableSink, and its getRecordType is as 
follows:


{code:java}
public TypeInformation<Row> getRecordType() {
    return getTableSchema().toRowType();
}
{code}


Varchar column validation exception is:

org.apache.flink.table.api.ValidationException: Type VARCHAR(10) of table field 
'a' does not match with the physical type STRING of the 'a' field of the 
TableSink consumed type.

at 
org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:165)
at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:278)
at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:255)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:67)
at 
org.apache.flink.table.types.logical.VarCharType.accept(VarCharType.java:157)
at 
org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:255)
at 
org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:161)
at 
org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:315)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at 
org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:308)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:195)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:191)
at scala.Option.map(Option.scala:146)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:191)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:863)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:855)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:822)

The other type validation exceptions are similar. I dug into it and think it's 
caused by TypeMappingUtils#checkPhysicalLogicalTypeCompatible. It seems that 
the method doesn't consider the different effects on source and sink. I will 
open a PR soon to solve this problem.






  was:
Test code like follwing(in blink planner):

[jira] [Created] (FLINK-17313) Validation error when insert decimal/timestamp/varchar with precision into sink using TypeInformation of row

2020-04-21 Thread Terry Wang (Jira)
Terry Wang created FLINK-17313:
--

 Summary: Validation error when insert decimal/timestamp/varchar 
with precision into sink using TypeInformation of row
 Key: FLINK-17313
 URL: https://issues.apache.org/jira/browse/FLINK-17313
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Terry Wang


Test code like the following (in blink planner):
{code:java}
tEnv.sqlUpdate("create table randomSource (" +
        "   a varchar(10)," +
        "   b decimal(20,2)" +
        ") with (" +
        "   'type' = 'random'," +
        "   'count' = '10'" +
        ")");
tEnv.sqlUpdate("create table printSink (" +
        "   a varchar(10)," +
        "   b decimal(22,2)," +
        "   c timestamp(3)" +
        ") with (" +
        "   'type' = 'print'" +
        ")");
tEnv.sqlUpdate("insert into printSink select *, current_timestamp from randomSource");
tEnv.execute("");
{code}

Print TableSink implements UpsertStreamTableSink, and its getRecordType is as 
follows:


{code:java}
public TypeInformation<Row> getRecordType() {
    return getTableSchema().toRowType();
}
{code}


The varchar type exception is:

org.apache.flink.table.api.ValidationException: Type VARCHAR(10) of table 
field 'a' does not match with the physical type STRING of the 'a' field of the 
TableSink consumed type.

at 
org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:165)
at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:278)
at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:255)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:67)
at 
org.apache.flink.table.types.logical.VarCharType.accept(VarCharType.java:157)
at 
org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:255)
at 
org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:161)
at 
org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:315)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at 
org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:308)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:195)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:191)
at scala.Option.map(Option.scala:146)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:191)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:863)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:855)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:822)
The other type validations are similar; I dug in and found it's caused by 

[GitHub] [flink] flinkbot edited a comment on issue #11829: [FLINK-17021][table-planner-blink] Blink batch planner set GlobalDataExchangeMode

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11829:
URL: https://github.com/apache/flink/pull/11829#issuecomment-616561190


   
   ## CI report:
   
   * c01e235c3048c97accdd33f7cfe2b03f6f60c8b3 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161169854) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7822)
 
   * c15a8214d2883f5a5c07bb6357dbfa465dd5cd39 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161357311) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ commented on a change in pull request #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


KarmaGYZ commented on a change in pull request #11615:
URL: https://github.com/apache/flink/pull/11615#discussion_r412632853



##
File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/SlotManagerImpl.java
##
@@ -804,16 +850,31 @@ private void fulfillPendingSlotRequestWithPendingTaskManagerSlot(PendingSlotRequ
 		return Optional.empty();
 	}
 
-	private boolean isFulfillableByRegisteredSlots(ResourceProfile resourceProfile) {
+	private boolean isFulfillableByRegisteredOrPendingSlots(ResourceProfile resourceProfile) {
 		for (TaskManagerSlot slot : slots.values()) {
 			if (slot.getResourceProfile().isMatching(resourceProfile)) {
 				return true;
 			}
 		}
+
+		for (PendingTaskManagerSlot slot : pendingSlots.values()) {

Review comment:
   To summarize:
   - Before this change, this method did the "resource fit" check mainly for Standalone mode; there is no `pendingSlot` in that mode.
   - After this change, this method can also do the "resource fit" check for Yarn/K8s when the number of slots has reached the max limit. The `pendingSlot` may not register back quickly during the startup period, so checking only the registered `slots` would cause a false alarm.









[GitHub] [flink] KarmaGYZ commented on a change in pull request #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


KarmaGYZ commented on a change in pull request #11615:
URL: https://github.com/apache/flink/pull/11615#discussion_r412623864



##
File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/SlotManagerImpl.java
##
@@ -804,16 +850,31 @@ private void fulfillPendingSlotRequestWithPendingTaskManagerSlot(PendingSlotRequ
 		return Optional.empty();
 	}
 
-	private boolean isFulfillableByRegisteredSlots(ResourceProfile resourceProfile) {
+	private boolean isFulfillableByRegisteredOrPendingSlots(ResourceProfile resourceProfile) {
 		for (TaskManagerSlot slot : slots.values()) {
 			if (slot.getResourceProfile().isMatching(resourceProfile)) {
 				return true;
 			}
 		}
+
+		for (PendingTaskManagerSlot slot : pendingSlots.values()) {

Review comment:
   Before this change:
   - For Yarn/K8s mode, as long as the ResourceProfile of the `SlotRequest` could fit into the `defaultSlotResourceProfile`, the SlotManager would always trigger `allocateResource` and get a `pendingSlot`. The "resource fit" check has already been done in `allocateResource` for Yarn/K8s.
   - For Standalone mode, it sets `failUnfulfillableRequest` to false for a startup period to wait for slots to register in the cluster. Since `ResourceManager#allocateResource` always returns false in standalone mode, this method checks whether the ResourceProfile of the `SlotRequest` could fit in.
   
   After this change:
   - For Yarn/K8s mode, even if the ResourceProfile of the `SlotRequest` could fit into the `defaultSlotResourceProfile`, `allocateResource` may return no `pendingSlot` when the total number of slots reaches the max limit. In this case, the "resource fit" check should be moved into the `isFulfillableByRegisteredOrPendingSlots` method. However, the `pendingSlot` might not register back during the startup period and `slots` could be empty, so we check `pendingSlots` there.
   - For Standalone mode, since there are no `pendingSlot`s, we do not break the original behavior.
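To make the registered-then-pending lookup described above concrete, here is a self-contained sketch. The `Profile` class and its component-wise matching rule are simplified stand-ins for Flink's `ResourceProfile` and slot classes, assumed here for illustration only:

```java
import java.util.List;

// Simplified stand-ins for Flink's ResourceProfile and slot bookkeeping; the
// matching rule (component-wise >=) is an assumption for illustration.
public class SlotFulfillabilitySketch {

    static final class Profile {
        final int cpuCores;
        final int memoryMb;

        Profile(int cpuCores, int memoryMb) {
            this.cpuCores = cpuCores;
            this.memoryMb = memoryMb;
        }

        // A slot profile matches a request if it is at least as large in every dimension.
        boolean isMatching(Profile request) {
            return cpuCores >= request.cpuCores && memoryMb >= request.memoryMb;
        }
    }

    // Mirrors the shape of isFulfillableByRegisteredOrPendingSlots: check the
    // registered slots first (the only source in Standalone mode), then the
    // pending slots (requested from Yarn/K8s but not yet registered back).
    static boolean isFulfillableByRegisteredOrPendingSlots(
            Profile request, List<Profile> registeredSlots, List<Profile> pendingSlots) {
        for (Profile slot : registeredSlots) {
            if (slot.isMatching(request)) {
                return true;
            }
        }
        for (Profile slot : pendingSlots) {
            if (slot.isMatching(request)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Profile request = new Profile(2, 1024);
        // At startup on Yarn/K8s: no registered slots yet, but a matching pending
        // slot exists, so the request is not reported as unfulfillable.
        System.out.println(isFulfillableByRegisteredOrPendingSlots(
                request, List.of(), List.of(new Profile(4, 4096))));
    }
}
```

With no registered slots but a matching pending slot, the request is still considered fulfillable, which is exactly the startup-period false-alarm case discussed in this thread.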









[GitHub] [flink] flinkbot edited a comment on issue #11836: [FLINK-17188][python] Use pip instead of conda to install flake8 and sphinx

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11836:
URL: https://github.com/apache/flink/pull/11836#issuecomment-616922166


   
   ## CI report:
   
   * 02287c3c91f023a4eaad10059fa9a7890c11879c Travis: 
[CANCELED](https://travis-ci.com/github/flink-ci/flink/builds/161205815) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7845)
 
   * 8cfe6bcc6cee878730f9cfa75d7fa598f88a9833 UNKNOWN
   * 3ca11194996f3b2e7d51221f8f51ae928d93c3a3 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161355682) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=17)
 
   
   







[GitHub] [flink] flinkbot edited a comment on issue #11829: [FLINK-17021][table-planner-blink] Blink batch planner set GlobalDataExchangeMode

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11829:
URL: https://github.com/apache/flink/pull/11829#issuecomment-616561190


   
   ## CI report:
   
   * c01e235c3048c97accdd33f7cfe2b03f6f60c8b3 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161169854) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7822)
 
   * c15a8214d2883f5a5c07bb6357dbfa465dd5cd39 UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on issue #11749: [FLINK-16669][python][table] Support Python UDF in SQL function DDL.

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11749:
URL: https://github.com/apache/flink/pull/11749#issuecomment-613901508


   
   ## CI report:
   
   * 7cbe2380ac7246cf703397d9a78d002c0491a959 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/160664905) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7621)
 
   * 0d7615e1c79289a0561529c8b2709ab5fa969ca4 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161355622) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16)
 
   
   







[GitHub] [flink] KarmaGYZ commented on a change in pull request #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


KarmaGYZ commented on a change in pull request #11615:
URL: https://github.com/apache/flink/pull/11615#discussion_r412627200



##
File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/StandaloneResourceManagerFactory.java
##
@@ -74,4 +81,21 @@
 			standaloneClusterStartupPeriodTime,
 			AkkaUtils.getTimeoutAsTime(configuration));
 	}
+
+	/**
+	 * Get the configuration for standalone ResourceManager, overwrite invalid configs.
+	 *
+	 * @param configuration configuration object
+	 * @return the configuration for standalone ResourceManager
+	 */
+	private static Configuration getConfigurationForStandaloneResourceManager(Configuration configuration) {
+		final Configuration copiedConfig = new Configuration(configuration);
+		if (configuration.contains(ResourceManagerOptions.MAX_SLOT_NUM)) {
+			// The max slot limit should not take effect for standalone cluster, we overwrite the configure in case user
+			// sets this value by mistake.
+			LOG.warn("The {} should not take effect for standalone cluster, If configured, it will be ignored.", ResourceManagerOptions.MAX_SLOT_NUM.key());
+			copiedConfig.removeConfig(ResourceManagerOptions.MAX_SLOT_NUM);

Review comment:
   Thanks for that pointer, @xintongsong .









[jira] [Created] (FLINK-17312) Support sql client savepoint

2020-04-21 Thread lun zhang (Jira)
lun zhang created FLINK-17312:
-

 Summary: Support sql client savepoint
 Key: FLINK-17312
 URL: https://issues.apache.org/jira/browse/FLINK-17312
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Client
Affects Versions: 1.10.0, 1.11.0
Reporter: lun zhang


The SQL client does not currently support savepoints for SQL jobs. This is important when using it in the real world. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xintongsong commented on a change in pull request #11323: [FLINK-16439][k8s] Make KubernetesResourceManager starts workers using WorkerResourceSpec requested by SlotManager

2020-04-21 Thread GitBox


xintongsong commented on a change in pull request #11323:
URL: https://github.com/apache/flink/pull/11323#discussion_r412626933



##
File path: flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManager.java
##
@@ -320,5 +333,16 @@ private void internalStopPod(String podName) {
 			}
 		);
+
+		final KubernetesWorkerNode kubernetesWorkerNode = workerNodes.remove(resourceId);
+		final WorkerResourceSpec workerResourceSpec = podWorkerResources.remove(podName);
+
+		// If the stopped pod is requested in the current attempt (workerResourceSpec is known) and is not yet added,
+		// we need to notify ActiveResourceManager to decrease the pending worker count.
+		if (workerResourceSpec != null && kubernetesWorkerNode == null) {

Review comment:
   I think we don't want to call `notifyNewWorkerAllocationFailed` and 
decrease the pending count when a recovered pod is being stopped. The 
`pendingWorkerCounter` only accounts for the workers requested in the current  
attempt.
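A minimal, self-contained model of that bookkeeping: plain maps and an `int` counter stand in for Flink's `workerNodes`, `podWorkerResources`, and the `pendingWorkerCounter` (the real Flink classes are not used here, and the pod/resource-id values are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Only pods requested in the current attempt (tracked in podWorkerResources)
// that were never added as worker nodes decrement the pending-worker counter;
// recovered pods from a previous attempt leave it untouched.
public class PendingWorkerSketch {
    static Map<String, String> podWorkerResources = new HashMap<>(); // pod name -> resource spec
    static Map<String, String> workerNodes = new HashMap<>();        // resource id -> worker node
    static int pendingWorkerCounter = 0;

    static void internalStopPod(String podName, String resourceId) {
        String node = workerNodes.remove(resourceId);
        String spec = podWorkerResources.remove(podName);
        // Requested in this attempt (spec known) but never registered as a worker:
        // this models notifying the ActiveResourceManager of a failed allocation.
        if (spec != null && node == null) {
            pendingWorkerCounter--;
        }
    }

    public static void main(String[] args) {
        // A pod requested in the current attempt that dies before registering.
        podWorkerResources.put("pod-1", "spec");
        pendingWorkerCounter = 1;
        internalStopPod("pod-1", "rid-1");
        System.out.println(pendingWorkerCounter); // 0

        // A recovered pod from a previous attempt: the counter is untouched.
        workerNodes.put("rid-2", "node");
        internalStopPod("pod-2", "rid-2");
        System.out.println(pendingWorkerCounter); // still 0
    }
}
```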









[GitHub] [flink] KarmaGYZ commented on a change in pull request #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


KarmaGYZ commented on a change in pull request #11615:
URL: https://github.com/apache/flink/pull/11615#discussion_r412625928



##
File path: docs/_includes/generated/expert_scheduling_section.html
##
@@ -14,6 +14,12 @@
 Boolean
 Enable the slot spread out allocation strategy. This strategy tries to spread out the slots evenly across all available `TaskExecutors`.
 
+
+slotmanager.number-of-slots.max

Review comment:
   Yes, I'd like to. However, since it is a project-wide naming convention, I think it is probably out of the scope of this PR. Once we reach a consensus in the dev ML, applying this convention to all existing configs would be a separate job. WDYT?









[GitHub] [flink] KarmaGYZ commented on a change in pull request #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


KarmaGYZ commented on a change in pull request #11615:
URL: https://github.com/apache/flink/pull/11615#discussion_r412625928



##
File path: docs/_includes/generated/expert_scheduling_section.html
##
@@ -14,6 +14,12 @@
 Boolean
 Enable the slot spread out allocation strategy. This strategy tries to spread out the slots evenly across all available `TaskExecutors`.
 
+
+slotmanager.number-of-slots.max

Review comment:
   Yes, I'd like to. However, since it is a project-wide naming convention, I think it is probably out of the scope of this PR. Once we reach a consensus in the dev ML, it makes sense to have another PR to apply this convention to all existing configs. WDYT?









[GitHub] [flink] KarmaGYZ commented on a change in pull request #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


KarmaGYZ commented on a change in pull request #11615:
URL: https://github.com/apache/flink/pull/11615#discussion_r412623864



##
File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/SlotManagerImpl.java
##
@@ -804,16 +850,31 @@ private void fulfillPendingSlotRequestWithPendingTaskManagerSlot(PendingSlotRequ
 		return Optional.empty();
 	}
 
-	private boolean isFulfillableByRegisteredSlots(ResourceProfile resourceProfile) {
+	private boolean isFulfillableByRegisteredOrPendingSlots(ResourceProfile resourceProfile) {
 		for (TaskManagerSlot slot : slots.values()) {
 			if (slot.getResourceProfile().isMatching(resourceProfile)) {
 				return true;
 			}
 		}
+
+		for (PendingTaskManagerSlot slot : pendingSlots.values()) {

Review comment:
   Before this change:
   - For Yarn/K8s mode, as long as the ResourceProfile of the `SlotRequest` 
could fit in the `defaultSlotResourceProfile`, the SlotManager would always 
trigger `allocateResource` and get `pendingSlot`. The "resource fit" check has 
already been done in `allocateResource` for Yarn/K8s. 
   - For Standalone mode, it will set `failUnfulfillableRequest` to false for a 
startup period to wait there is registered slot in the cluster. Since the 
`ResourceManager#allocateResource` would always return false in standalone 
mode, this method checks whether the ResourceProfile of the `SlotRequest` could 
fit in.
   
   After this change:
   - For Yarn/K8s mode, even if the ResourceProfile of the `SlotRequest` could 
fit in the `defaultSlotResourceProfile`, the `allocateResource` could return no 
`pendingSlot` when the total number of slot reaches max limit. In this case, 
the "resource fit in" check should be move to 
`isFulfillableByRegisteredOrPendingSlots` function.
   - For Standalone mode, since there is no `pendingSlot`, we do not break the 
origin behavior.









[jira] [Assigned] (FLINK-17017) Implement Bulk Slot Allocation in SchedulerImpl

2020-04-21 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-17017:
---

Assignee: Zhu Zhu

> Implement Bulk Slot Allocation in SchedulerImpl
> ---
>
> Key: FLINK-17017
> URL: https://issues.apache.org/jira/browse/FLINK-17017
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
> Fix For: 1.11.0
>
>
> The SlotProvider interface should be extended with a bulk slot allocation 
> method which accepts a bulk of slot requests as one of the parameters.
> {code:java}
> CompletableFuture<Collection<LogicalSlotRequestResult>> allocateSlots(
>   Collection<LogicalSlotRequest> slotRequests,
>   Time allocationTimeout);
>  
> class LogicalSlotRequest {
>   SlotRequestId slotRequestId;
>   ScheduledUnit scheduledUnit;
>   SlotProfile slotProfile;
>   boolean slotWillBeOccupiedIndefinitely;
> }
>  
> class LogicalSlotRequestResult {
>   SlotRequestId slotRequestId;
>   LogicalSlot slot;
> }
> {code}
> More details see [FLIP-119#Bulk Slot 
> Allocation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-119+Pipelined+Region+Scheduling#FLIP-119PipelinedRegionScheduling-BulkSlotAllocation]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ commented on a change in pull request #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


KarmaGYZ commented on a change in pull request #11615:
URL: https://github.com/apache/flink/pull/11615#discussion_r412623864



##
File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/SlotManagerImpl.java
##
@@ -804,16 +850,31 @@ private void fulfillPendingSlotRequestWithPendingTaskManagerSlot(PendingSlotRequ
 		return Optional.empty();
 	}
 
-	private boolean isFulfillableByRegisteredSlots(ResourceProfile resourceProfile) {
+	private boolean isFulfillableByRegisteredOrPendingSlots(ResourceProfile resourceProfile) {
 		for (TaskManagerSlot slot : slots.values()) {
 			if (slot.getResourceProfile().isMatching(resourceProfile)) {
 				return true;
 			}
 		}
+
+		for (PendingTaskManagerSlot slot : pendingSlots.values()) {

Review comment:
   Before this change:
   - For Yarn/K8s mode, as long as the ResourceProfile of the `SlotRequest` could fit into the `defaultSlotResourceProfile`, the SlotManager would always trigger `allocateResource` and get a `pendingSlot`.
   - For Standalone mode, it sets `failUnfulfillableRequest` to false for a startup period to wait for slots to register in the cluster. Since `ResourceManager#allocateResource` always returns false in standalone mode, this method checks whether the ResourceProfile of the `SlotRequest` could fit in.
   
   After this change:
   - For Yarn/K8s mode, even if the ResourceProfile of the `SlotRequest` could fit into the `defaultSlotResourceProfile`, `allocateResource` may return no `pendingSlot` when the total number of slots reaches the max limit. In this case, the "resource fit" check should be moved into the `isFulfillableByRegisteredOrPendingSlots` method.
   - For Standalone mode, since there are no `pendingSlot`s, we do not break the original behavior.









[jira] [Commented] (FLINK-17273) Fix not calling ResourceManager#closeTaskManagerConnection in KubernetesResourceManager in case of registered TaskExecutor failure

2020-04-21 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089219#comment-17089219
 ] 

Xintong Song commented on FLINK-17273:
--

I think this is a valid issue. +1 for fixing it.
IIUC, we can call {{ResourceManager#closeTaskManagerConnection}} in 
{{KubernetesResourceManager#internalStopPod}}?

> Fix not calling ResourceManager#closeTaskManagerConnection in 
> KubernetesResourceManager in case of registered TaskExecutor failure
> --
>
> Key: FLINK-17273
> URL: https://issues.apache.org/jira/browse/FLINK-17273
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
> Fix For: 1.11.0
>
>
> At the moment, the {{KubernetesResourceManager}} does not call the method of 
> {{ResourceManager#closeTaskManagerConnection}} once it detects that a 
> currently registered task executor has failed. This ticket proposes to fix 
> this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11836: [FLINK-17188][python] Use pip instead of conda to install flake8 and sphinx

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11836:
URL: https://github.com/apache/flink/pull/11836#issuecomment-616922166


   
   ## CI report:
   
   * 02287c3c91f023a4eaad10059fa9a7890c11879c Travis: 
[CANCELED](https://travis-ci.com/github/flink-ci/flink/builds/161205815) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7845)
 
   * 8cfe6bcc6cee878730f9cfa75d7fa598f88a9833 UNKNOWN
   * 3ca11194996f3b2e7d51221f8f51ae928d93c3a3 UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on issue #11727: [FLINK-17106][table] Support create and drop view in Flink SQL

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11727:
URL: https://github.com/apache/flink/pull/11727#issuecomment-613273432


   
   ## CI report:
   
   * fa5592cec0a6a5dcecc4f45c4bf72caf6e166eb4 UNKNOWN
   * a3901b149f603721be62fbb3d8bf46143d08c3d1 UNKNOWN
   * 60d96398326fa63e3b2e53d5e3302d70feeb862d Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161221289) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7855)
 
   * fabc951d769b00e1dac471b75ba2c99e62fcbf47 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161354179) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=15)
 
   
   







[GitHub] [flink] flinkbot edited a comment on issue #11749: [FLINK-16669][python][table] Support Python UDF in SQL function DDL.

2020-04-21 Thread GitBox


flinkbot edited a comment on issue #11749:
URL: https://github.com/apache/flink/pull/11749#issuecomment-613901508


   
   ## CI report:
   
   * 7cbe2380ac7246cf703397d9a78d002c0491a959 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/160664905) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7621)
 
   * 0d7615e1c79289a0561529c8b2709ab5fa969ca4 UNKNOWN
   
   







[GitHub] [flink] zhuzhurk commented on a change in pull request #11829: [FLINK-17021][table-planner-blink] Blink batch planner set GlobalDataExchangeMode

2020-04-21 Thread GitBox


zhuzhurk commented on a change in pull request #11829:
URL: https://github.com/apache/flink/pull/11829#discussion_r412621866



##
File path: flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/utils/ExecutorUtils.java
##
@@ -85,19 +83,14 @@ public static void setBatchProperties(StreamGraph streamGraph, TableConfig table
 		if (streamGraph.getCheckpointConfig().isCheckpointingEnabled()) {
 			throw new IllegalArgumentException("Checkpoint is not supported for batch jobs.");
 		}
-		if (ExecutorUtils.isShuffleModeAllBatch(tableConfig)) {
-			streamGraph.setGlobalDataExchangeMode(GlobalDataExchangeMode.ALL_EDGES_BLOCKING);
-		}
+		streamGraph.setGlobalDataExchangeMode(getGlobalDataExchangeMode(tableConfig));
 	}
 
 	private static boolean isShuffleModeAllBatch(TableConfig tableConfig) {

Review comment:
   Ok.









[GitHub] [flink] xintongsong commented on a change in pull request #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


xintongsong commented on a change in pull request #11615:
URL: https://github.com/apache/flink/pull/11615#discussion_r412619527



##
File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/StandaloneResourceManagerFactory.java
##
@@ -74,4 +81,21 @@
 			standaloneClusterStartupPeriodTime,
 			AkkaUtils.getTimeoutAsTime(configuration));
 	}
+
+	/**
+	 * Get the configuration for standalone ResourceManager, overwrite invalid configs.
+	 *
+	 * @param configuration configuration object
+	 * @return the configuration for standalone ResourceManager
+	 */
+	private static Configuration getConfigurationForStandaloneResourceManager(Configuration configuration) {
+		final Configuration copiedConfig = new Configuration(configuration);
+		if (configuration.contains(ResourceManagerOptions.MAX_SLOT_NUM)) {
+			// The max slot limit should not take effect for standalone cluster, we overwrite the configure in case user
+			// sets this value by mistake.
+			LOG.warn("The {} should not take effect for standalone cluster, If configured, it will be ignored.", ResourceManagerOptions.MAX_SLOT_NUM.key());
+			copiedConfig.removeConfig(ResourceManagerOptions.MAX_SLOT_NUM);

Review comment:
   I think Gary is right.
   The community code style guide suggests that:
   > For Maps, avoid patterns that require multiple lookups.
   > https://flink.apache.org/contributing/code-style-and-quality-java.html
   
   I think this should be applied here, since the underlying data structure of 
`Configuration` is a map.
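   For illustration, here is the single-lookup pattern the guideline recommends, shown with a plain `HashMap` rather than Flink's `Configuration` (the key name is just an example):

```java
import java.util.HashMap;
import java.util.Map;

public class SingleLookupSketch {
    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("slotmanager.number-of-slots.max", "10");

        // Two lookups: containsKey() hashes and probes the map, then remove() does it again.
        if (config.containsKey("slotmanager.number-of-slots.max")) {
            config.remove("slotmanager.number-of-slots.max");
        }

        config.put("slotmanager.number-of-slots.max", "10");

        // One lookup: remove() returns the previous value, or null if the key was absent.
        String previous = config.remove("slotmanager.number-of-slots.max");
        if (previous != null) {
            System.out.println("removed, previous value was " + previous);
        }
    }
}
```

   The same shape would apply to the review comment, provided `removeConfig` reports whether anything was removed, so the separate `contains` check could be dropped.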









[GitHub] [flink] xintongsong commented on a change in pull request #11615: [FLINK-16605] Add max limitation to the total number of slots

2020-04-21 Thread GitBox


xintongsong commented on a change in pull request #11615:
URL: https://github.com/apache/flink/pull/11615#discussion_r412619527



##
File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/StandaloneResourceManagerFactory.java
##
@@ -74,4 +81,21 @@
 			standaloneClusterStartupPeriodTime,
 			AkkaUtils.getTimeoutAsTime(configuration));
 	}
+
+	/**
+	 * Get the configuration for standalone ResourceManager, overwrite invalid configs.
+	 *
+	 * @param configuration configuration object
+	 * @return the configuration for standalone ResourceManager
+	 */
+	private static Configuration getConfigurationForStandaloneResourceManager(Configuration configuration) {
+		final Configuration copiedConfig = new Configuration(configuration);
+		if (configuration.contains(ResourceManagerOptions.MAX_SLOT_NUM)) {
+			// The max slot limit should not take effect for standalone cluster, we overwrite the configure in case user
+			// sets this value by mistake.
+			LOG.warn("The {} should not take effect for standalone cluster, If configured, it will be ignored.", ResourceManagerOptions.MAX_SLOT_NUM.key());
+			copiedConfig.removeConfig(ResourceManagerOptions.MAX_SLOT_NUM);

Review comment:
   I think Gary is right.
   The community code style guide suggests that:
   > For Maps, avoid patterns that require multiple lookups.
   > https://flink.apache.org/contributing/code-style-and-quality-java.html
   
   I think this should be applied here.








