[jira] [Closed] (FLINK-12466) Containerized e2e tests fail on Java 9

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-12466.

Resolution: Won't Fix

Too much of a hassle to get these to work.

> Containerized e2e tests fail on Java 9
> --
>
> Key: FLINK-12466
> URL: https://issues.apache.org/jira/browse/FLINK-12466
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Docker, Deployment / Kubernetes, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Major
>
> The containerized tests use Java 8 images, which obviously don't work as-is
> with a Java 9-compiled Flink.
> I propose to delay fixing this until we work on Java 11.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Closed] (FLINK-12644) Setup Java 9 cron jobs

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-12644.

Resolution: Fixed

> Setup Java 9 cron jobs
> --
>
> Key: FLINK-12644
> URL: https://issues.apache.org/jira/browse/FLINK-12644
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.9.0
>
>






[GitHub] [flink] zhijiangW commented on issue #9062: [FLINK-13100][network] Fix the bug of throwing IOException while FileChannelBoundedData#nextBuffer

2019-07-11 Thread GitBox
zhijiangW commented on issue #9062: [FLINK-13100][network] Fix the bug of 
throwing IOException while FileChannelBoundedData#nextBuffer
URL: https://github.com/apache/flink/pull/9062#issuecomment-510429296
 
 
   Also addressed the commit message and the constructor issue raised in Piotr's comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9036: [FLINK-13112][table-planner-blink] Support LocalZonedTimestampType in blink

2019-07-11 Thread GitBox
wuchong commented on a change in pull request #9036: 
[FLINK-13112][table-planner-blink] Support LocalZonedTimestampType in blink
URL: https://github.com/apache/flink/pull/9036#discussion_r302427488
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/codegen/CodeGeneratorContext.scala
 ##
 @@ -526,14 +528,24 @@ class CodeGeneratorContext(val tableConfig: TableConfig) 
{
 * Adds a reusable TimeZone to the member area of the generated class.
 */
   def addReusableTimeZone(): String = {
-val zoneID = tableConfig.getTimeZone.getID
+val zoneID = TimeZone.getTimeZone(tableConfig.getLocalTimeZone).getID
 val stmt =
   s"""private static final java.util.TimeZone $DEFAULT_TIMEZONE_TERM =
  | 
java.util.TimeZone.getTimeZone("$zoneID");""".stripMargin
 addReusableMember(stmt)
 DEFAULT_TIMEZONE_TERM
   }
 
+  /**
+* Adds a reusable Time ZoneId to the member area of the generated class.
+*/
+  def addReusableTimeZoneID(): String = {
 
 Review comment:
   Never used. Remove?




[jira] [Closed] (FLINK-13212) Unstable ChainLengthIncreaseTest

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-13212.

Resolution: Duplicate

> Unstable ChainLengthIncreaseTest
> 
>
> Key: FLINK-13212
> URL: https://issues.apache.org/jira/browse/FLINK-13212
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: Kurt Young
>Priority: Critical
>  Labels: test-stability
>
> 10:05:29.300 [ERROR] 
> ChainLengthIncreaseTest>AbstractOperatorRestoreTestBase.testMigrationAndRestore:102->AbstractOperatorRestoreTestBase.migrateJob:138
>  » Execution
> 10:05:29.301 [ERROR] 
> ChainLengthIncreaseTest>AbstractOperatorRestoreTestBase.testMigrationAndRestore:102->AbstractOperatorRestoreTestBase.migrateJob:138
>  » Execution
>  
> More details in: [https://api.travis-ci.org/v3/job/557222905/log.txt]





[jira] [Assigned] (FLINK-13215) Hive connector does not compile on Java 9

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-13215:


Assignee: Chesnay Schepler

> Hive connector does not compile on Java 9
> -
>
> Key: FLINK-13215
> URL: https://issues.apache.org/jira/browse/FLINK-13215
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.9.0
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
> (default-testCompile) on project flink-connector-hive_2.11: Compilation 
> failure
> [ERROR] 
> /C:/Dev/Repos/flink/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/batch/connectors/hive/FlinkStandaloneHiveRunner.java:[56,15]
>  package sun.net.www is not visible
> {code}





[jira] [Commented] (FLINK-13215) Hive connector does not compile on Java 9

2019-07-11 Thread Chesnay Schepler (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882861#comment-16882861
 ] 

Chesnay Schepler commented on FLINK-13215:
--

Potential alternative for {{ParseUtil.encodePath(path);}} might be 
{{URLEncoder.encode(path, StandardCharsets.UTF_8.name())}}; running this on 
Travis now.
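The suggested swap can be sketched as follows. This is a hypothetical stand-in, not the actual Hive runner code: the `encodePath` helper and sample path are illustrative. One caveat worth noting: `URLEncoder` implements `application/x-www-form-urlencoded` encoding, so spaces become `+` rather than `%20`, which a path-oriented encoder may need to post-process.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PathEncodingSketch {

    // Hypothetical replacement for the JDK-internal ParseUtil.encodePath(path),
    // which lives in sun.net.www and is no longer visible on Java 9.
    // URLEncoder is public API and available on Java 8+.
    static String encodePath(String path) throws Exception {
        return URLEncoder.encode(path, StandardCharsets.UTF_8.name());
    }

    public static void main(String[] args) throws Exception {
        // Note: '/' is percent-encoded and the space becomes '+'.
        System.out.println(encodePath("/tmp/hive test dir")); // prints "%2Ftmp%2Fhive+test+dir"
    }
}
```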

> Hive connector does not compile on Java 9
> -
>
> Key: FLINK-13215
> URL: https://issues.apache.org/jira/browse/FLINK-13215
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.9.0
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
> (default-testCompile) on project flink-connector-hive_2.11: Compilation 
> failure
> [ERROR] 
> /C:/Dev/Repos/flink/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/batch/connectors/hive/FlinkStandaloneHiveRunner.java:[56,15]
>  package sun.net.www is not visible
> {code}





[GitHub] [flink] twalthr commented on a change in pull request #9079: [FLINK-13208][table-planner][table-planner-blink] Update Notice files after adding commons-codec to table package.

2019-07-11 Thread GitBox
twalthr commented on a change in pull request #9079: 
[FLINK-13208][table-planner][table-planner-blink] Update Notice files after 
adding commons-codec to table package.
URL: https://github.com/apache/flink/pull/9079#discussion_r302491231
 
 

 ##
 File path: flink-table/flink-table-planner-blink/pom.xml
 ##
 @@ -341,6 +347,18 @@ under the License.

com.fasterxml

org.apache.flink.calcite.shaded.com.fasterxml

+   
 
 Review comment:
   Why do we need Apache commons? Actually it took me quite some time last year 
to reduce the number of potentially conflicting dependencies. `EncodingUtils` 
has the most commonly used methods. Just adding a dependency because of one 
method doesn't seem right.




[GitHub] [flink] flinkbot commented on issue #9039: [FLINK-13170][table-planner] Planner should get table factory from ca…

2019-07-11 Thread GitBox
flinkbot commented on issue #9039: [FLINK-13170][table-planner] Planner should 
get table factory from ca…
URL: https://github.com/apache/flink/pull/9039#issuecomment-510445729
 
 
   ## CI report:
   
   * a14e955ab6082bbd08fcf9b28a654ab771a57fb2 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118761079)
   




[jira] [Commented] (FLINK-13153) SplitAggregateITCase.testMinMaxWithRetraction failed on Travis

2019-07-11 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882873#comment-16882873
 ] 

Till Rohrmann commented on FLINK-13153:
---

Another instance: https://api.travis-ci.org/v3/job/557214216/log.txt

> SplitAggregateITCase.testMinMaxWithRetraction failed on Travis
> --
>
> Key: FLINK-13153
> URL: https://issues.apache.org/jira/browse/FLINK-13153
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.9.0
>
>
> {{SplitAggregateITCase.testMinMaxWithRetraction}} failed on Travis with
> {code}
> Failures: 
> 10:50:43.355 [ERROR]   SplitAggregateITCase.testMinMaxWithRetraction:195 
> expected: but was: 6,2,2,1)>
> {code}
> https://api.travis-ci.org/v3/job/554991853/log.txt





[GitHub] [flink] TsReaper commented on a change in pull request #9029: [FLINK-13118][jdbc] Introduce JDBC table factory and bridge JDBC table source with streaming table source

2019-07-11 Thread GitBox
TsReaper commented on a change in pull request #9029: [FLINK-13118][jdbc] 
Introduce JDBC table factory and bridge JDBC table source with streaming table 
source
URL: https://github.com/apache/flink/pull/9029#discussion_r302492726
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCTestBase.java
 ##
 @@ -95,7 +99,7 @@ public static String getCreateQuery(String tableName) {
sqlQueryBuilder.append("id INT NOT NULL DEFAULT 0,");
sqlQueryBuilder.append("title VARCHAR(50) DEFAULT NULL,");
sqlQueryBuilder.append("author VARCHAR(50) DEFAULT NULL,");
-   sqlQueryBuilder.append("price FLOAT DEFAULT NULL,");
+   sqlQueryBuilder.append("price DOUBLE DEFAULT NULL,");
 
 Review comment:
   The type is `Double` in `TestEntry`. But as DOUBLE and FLOAT are the same 
type in Derby, maybe I should change it back.




[GitHub] [flink] lirui-apache commented on issue #8911: [FLINK-12995][hive] Add Hive-1.2.1 build to Travis

2019-07-11 Thread GitBox
lirui-apache commented on issue #8911: [FLINK-12995][hive] Add Hive-1.2.1 build 
to Travis
URL: https://github.com/apache/flink/pull/8911#issuecomment-510451270
 
 
   I don't think the blink planner failures are related. Seems some of them can 
be reproduced w/o this patch.




[jira] [Updated] (FLINK-13218) '*.count not supported in TableApi query

2019-07-11 Thread Jing Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhang updated FLINK-13218:
---
Component/s: Table SQL / Planner

> '*.count not supported in TableApi query
> 
>
> Key: FLINK-13218
> URL: https://issues.apache.org/jira/browse/FLINK-13218
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
>
> The following query is not supported yet:
> {code:java}
> val t = StreamTestData.get3TupleDataStream(env).toTable(tEnv, 'a, 'b, 'c)
>   .groupBy('b)
>   .select('b, 'a.sum, '*.count)
> {code}
> The following exception will be thrown.
> {code:java}
> org.apache.flink.table.api.ValidationException: Cannot resolve field [*], 
> input field list:[a, b, c].
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.failForField(ReferenceResolverRule.java:80)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$null$4(ReferenceResolverRule.java:75)
>   at java.util.Optional.orElseThrow(Optional.java:290)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$null$5(ReferenceResolverRule.java:75)
>   at java.util.Optional.orElseGet(Optional.java:267)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$visit$6(ReferenceResolverRule.java:74)
>   at java.util.Optional.orElseGet(Optional.java:267)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.visit(ReferenceResolverRule.java:71)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.visit(ReferenceResolverRule.java:51)
>   at 
> ...
> {code}





[GitHub] [flink] tillrohrmann commented on a change in pull request #9058: [FLINK-13166] Add support for batch slot requests to SlotPoolImpl

2019-07-11 Thread GitBox
tillrohrmann commented on a change in pull request #9058: [FLINK-13166] Add 
support for batch slot requests to SlotPoolImpl
URL: https://github.com/apache/flink/pull/9058#discussion_r302500613
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolPendingRequestFailureTest.java
 ##
 @@ -127,13 +133,50 @@ public void 
testFailingResourceManagerRequestFailsPendingSlotRequestAndCancelsRM
}
}
 
+   /**
+* Tests that a pending slot request is failed with a timeout.
+*/
+   @Test
+   public void testPendingSlotRequestTimeout() throws Exception {
+   final ScheduledExecutorService singleThreadExecutor = 
Executors.newSingleThreadScheduledExecutor();
+   final ComponentMainThreadExecutor componentMainThreadExecutor = 
ComponentMainThreadExecutorServiceAdapter.forSingleThreadExecutor(singleThreadExecutor);
+
+   final SlotPoolImpl slotPool = 
setUpSlotPool(componentMainThreadExecutor);
+
+   try {
+   final Time timeout = Time.milliseconds(5L);
+
+   final CompletableFuture slotFuture = 
CompletableFuture
+   .supplyAsync(() -> 
requestNewAllocatedSlot(slotPool, new SlotRequestId(), timeout), 
componentMainThreadExecutor)
+   .thenCompose(Function.identity());
+
+   try {
+   slotFuture.get();
+   fail("Expected TimeoutFuture.");
 
 Review comment:
   True, will correct it.




[jira] [Updated] (FLINK-13219) Hive connector fails hadoop 2.4.1 builds

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13219:
---
Labels: pull-request-available  (was: )

> Hive connector fails hadoop 2.4.1 builds
> 
>
> Key: FLINK-13219
> URL: https://issues.apache.org/jira/browse/FLINK-13219
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> The hive connector does not work with hadoop 2.4, but the tests are still run 
> in the corresponding cron profile.
> https://travis-ci.org/apache/flink/jobs/555723021
> We should add a profile for skipping the hive tests that we enable for these 
> profiles.





[GitHub] [flink] zentol opened a new pull request #9086: [FLINK-13219][hive] Disable tests for hadoop 2.4 profile

2019-07-11 Thread GitBox
zentol opened a new pull request #9086: [FLINK-13219][hive] Disable tests for 
hadoop 2.4 profile
URL: https://github.com/apache/flink/pull/9086
 
 
   Disables Hive tests for the Hadoop 2.4.1 Travis profiles.




[GitHub] [flink] flinkbot edited a comment on issue #9062: [FLINK-13100][network] Fix the bug of throwing IOException while FileChannelBoundedData#nextBuffer

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #9062: [FLINK-13100][network] Fix the bug of 
throwing IOException while FileChannelBoundedData#nextBuffer
URL: https://github.com/apache/flink/pull/9062#issuecomment-510405737
 
 
   ## CI report:
   
   * 106228aa31ad7065acf116029879a00fd998662b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118725251)
   * 2b189a8483bbd019ffee22d9746d052855ef6142 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118755199)
   




[GitHub] [flink] carp84 commented on a change in pull request #9030: [FLINK-13123] Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation

2019-07-11 Thread GitBox
carp84 commented on a change in pull request #9030: [FLINK-13123] Align 
Stop/Cancel Commands in CLI and REST Interface and Improve Documentation
URL: https://github.com/apache/flink/pull/9030#discussion_r302502495
 
 

 ##
 File path: docs/ops/cli.md
 ##
 @@ -170,29 +170,13 @@ These examples about how to manage a job in CLI.
 
 ./bin/flink cancel 
 
--   Cancel a job with a savepoint:
+-   Cancel a job with a savepoint (deprecated; use "stop" instead):
 
 Review comment:
   After some offline discussion, I agree we keep the doc as is and do 
necessary refinement later when FLIP-45/47 is done.




[GitHub] [flink] KurtYoung commented on issue #9081: [FLINK-13209] Following FLINK-12951: Remove TableEnvironment#sql and …

2019-07-11 Thread GitBox
KurtYoung commented on issue #9081: [FLINK-13209] Following FLINK-12951: Remove 
TableEnvironment#sql and …
URL: https://github.com/apache/flink/pull/9081#issuecomment-510459955
 
 
   +1 from my side, waiting for travis




[jira] [Updated] (FLINK-13222) Add documentation for AdaptedRestartPipelinedRegionStrategyNG

2019-07-11 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-13222:
-
Affects Version/s: 1.9.0

> Add documentation for AdaptedRestartPipelinedRegionStrategyNG
> -
>
> Key: FLINK-13222
> URL: https://issues.apache.org/jira/browse/FLINK-13222
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Gary Yao
>Priority: Blocker
>
> It should be documented that if {{jobmanager.execution.failover-strategy}} is 
> set to _region_, the new pipelined region failover strategy 
> ({{AdaptedRestartPipelinedRegionStrategyNG}}) will be used. 
> *Acceptance Criteria*
> * config values _region_ and _full_ are documented
> * to be decided: config values _region-legacy_ and _individual_ remain 
> undocumented





[jira] [Updated] (FLINK-13222) Add documentation for AdaptedRestartPipelinedRegionStrategyNG

2019-07-11 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-13222:
-
Priority: Blocker  (was: Major)

> Add documentation for AdaptedRestartPipelinedRegionStrategyNG
> -
>
> Key: FLINK-13222
> URL: https://issues.apache.org/jira/browse/FLINK-13222
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Coordination
>Reporter: Gary Yao
>Priority: Blocker
>
> It should be documented that if {{jobmanager.execution.failover-strategy}} is 
> set to _region_, the new pipelined region failover strategy 
> ({{AdaptedRestartPipelinedRegionStrategyNG}}) will be used. 
> *Acceptance Criteria*
> * config values _region_ and _full_ are documented
> * to be decided: config values _region-legacy_ and _individual_ remain 
> undocumented





[jira] [Created] (FLINK-13222) Add documentation for AdaptedRestartPipelinedRegionStrategyNG

2019-07-11 Thread Gary Yao (JIRA)
Gary Yao created FLINK-13222:


 Summary: Add documentation for 
AdaptedRestartPipelinedRegionStrategyNG
 Key: FLINK-13222
 URL: https://issues.apache.org/jira/browse/FLINK-13222
 Project: Flink
  Issue Type: Task
  Components: Runtime / Coordination
Reporter: Gary Yao


It should be documented that if {{jobmanager.execution.failover-strategy}} is 
set to _region_, the new pipelined region failover strategy 
({{AdaptedRestartPipelinedRegionStrategyNG}}) will be used. 

*Acceptance Criteria*
* config values _region_ and _full_ are documented
* to be decided: config values _region-legacy_ and _individual_ remain 
undocumented
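If the documentation follows the usual Flink config conventions, the setting would look roughly like this in `flink-conf.yaml`. This is a sketch: only the key name and the values _region_ and _full_ come from the issue text above; the comments are illustrative.

```yaml
# flink-conf.yaml (sketch)
# "region": restart only the pipelined region containing the failed task,
#           i.e. use AdaptedRestartPipelinedRegionStrategyNG per this issue
# "full":   restart the whole job on a task failure
jobmanager.execution.failover-strategy: region
```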





[GitHub] [flink] wuchong commented on issue #9074: [FLINK-13198] Introduce TimeInterval in configuration package

2019-07-11 Thread GitBox
wuchong commented on issue #9074: [FLINK-13198] Introduce TimeInterval in 
configuration package
URL: https://github.com/apache/flink/pull/9074#issuecomment-510408241
 
 
   Hi @zentol , do you mean Java 8 `Duration.parse`? AFAIK, `Duration.parse` 
only accepts the ISO-8601 duration format, i.e. `PnDTnHnMn.nS`. For example, 
`Duration.parse("2s")` is invalid, while `Duration.parse("PT2S")` is valid; 
however, this is not what we want. 
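The behavior described above can be verified with a short snippet: `java.time.Duration` parses only the ISO-8601 form, and rejects shorthand strings like `"2s"` with a `DateTimeParseException`.

```java
import java.time.Duration;
import java.time.format.DateTimeParseException;

public class DurationParseDemo {
    public static void main(String[] args) {
        // The ISO-8601 form parses fine.
        System.out.println(Duration.parse("PT2S").getSeconds()); // prints 2

        // The user-friendly shorthand Flink wants to support is rejected.
        try {
            Duration.parse("2s");
        } catch (DateTimeParseException e) {
            System.out.println("rejected: 2s"); // prints "rejected: 2s"
        }
    }
}
```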




[jira] [Closed] (FLINK-8033) Build Flink with JDK 9

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-8033.
---
   Resolution: Fixed
Fix Version/s: 1.9.0
 Release Note: 
Flink can be compiled and run on Java 9. Note that certain components 
interacting with external systems (connectors, filesystems, reporters) may not 
work since the respective projects may have skipped Java 9 support.
Modularized user-jars have not been tested and may or may not work.

> Build Flink with JDK 9
> --
>
> Key: FLINK-8033
> URL: https://issues.apache.org/jira/browse/FLINK-8033
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Hai Zhou
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> This is a JIRA to track all issues that found to make Flink compatible with 
> Java 9.





[GitHub] [flink] StephanEwen commented on a change in pull request #9058: [FLINK-13166] Add support for batch slot requests to SlotPoolImpl

2019-07-11 Thread GitBox
StephanEwen commented on a change in pull request #9058: [FLINK-13166] Add 
support for batch slot requests to SlotPoolImpl
URL: https://github.com/apache/flink/pull/9058#discussion_r302449684
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolBatchSlotRequestTest.java
 ##
 @@ -0,0 +1,341 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobmaster.slotpool;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.clusterframework.types.AllocationID;
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
+import org.apache.flink.runtime.clusterframework.types.ResourceProfile;
+import org.apache.flink.runtime.concurrent.ComponentMainThreadExecutor;
+import 
org.apache.flink.runtime.concurrent.ComponentMainThreadExecutorServiceAdapter;
+import org.apache.flink.runtime.concurrent.FutureUtils;
+import 
org.apache.flink.runtime.executiongraph.utils.SimpleAckingTaskManagerGateway;
+import org.apache.flink.runtime.jobmaster.JobMasterId;
+import org.apache.flink.runtime.jobmaster.SlotRequestId;
+import org.apache.flink.runtime.resourcemanager.ResourceManagerGateway;
+import 
org.apache.flink.runtime.resourcemanager.utils.TestingResourceManagerGateway;
+import org.apache.flink.runtime.taskexecutor.slot.SlotOffer;
+import org.apache.flink.runtime.taskmanager.LocalTaskManagerLocation;
+import org.apache.flink.runtime.taskmanager.TaskManagerLocation;
+import org.apache.flink.runtime.testingUtils.TestingUtils;
+import org.apache.flink.runtime.util.clock.Clock;
+import org.apache.flink.runtime.util.clock.ManualClock;
+import org.apache.flink.runtime.util.clock.SystemClock;
+import org.apache.flink.util.ExceptionUtils;
+import org.apache.flink.util.FlinkException;
+import org.apache.flink.util.TestLogger;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+import static org.hamcrest.MatcherAssert.assertThat;
+import static org.hamcrest.Matchers.instanceOf;
+import static org.hamcrest.Matchers.is;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests for batch slot requests.
+ */
+public class SlotPoolBatchSlotRequestTest extends TestLogger {
+
+   private static final ResourceProfile resourceProfile = new 
ResourceProfile(1.0, 1024);
+   private static final ResourceProfile smallerResourceProfile = new 
ResourceProfile(0.5, 512);
+   public static final CompletableFuture[] COMPLETABLE_FUTURES_EMPTY_ARRAY 
= new CompletableFuture[0];
+   private static ScheduledExecutorService 
singleThreadScheduledExecutorService;
+   private static ComponentMainThreadExecutor mainThreadExecutor;
+
+   @BeforeClass
+   public static void setupClass() {
+   singleThreadScheduledExecutorService = 
Executors.newSingleThreadScheduledExecutor();
+   mainThreadExecutor = 
ComponentMainThreadExecutorServiceAdapter.forSingleThreadExecutor(singleThreadScheduledExecutorService);
+   }
+
+   @AfterClass
+   public static void teardownClass() {
+   if (singleThreadScheduledExecutorService != null) {
+   singleThreadScheduledExecutorService.shutdownNow();
+   }
+   }
+
+   /**
+* Tests that a batch slot request fails if there is no slot which can 
fulfill the
+* slot request.
+*/
+   @Test
+   public void testPendingBatchSlotRequestTimeout() throws Exception {
+   try (final SlotPoolImpl slotPool = new SlotPoolBuilder()
+   .build()) {
+   final CompletableFuture slotFuture = 

[GitHub] [flink] StephanEwen commented on issue #9058: [FLINK-13166] Add support for batch slot requests to SlotPoolImpl

2019-07-11 Thread GitBox
StephanEwen commented on issue #9058: [FLINK-13166] Add support for batch slot 
requests to SlotPoolImpl
URL: https://github.com/apache/flink/pull/9058#issuecomment-510410945
 
 
   +1 to merge this




[GitHub] [flink] StephanEwen commented on a change in pull request #9058: [FLINK-13166] Add support for batch slot requests to SlotPoolImpl

2019-07-11 Thread GitBox
StephanEwen commented on a change in pull request #9058: [FLINK-13166] Add 
support for batch slot requests to SlotPoolImpl
URL: https://github.com/apache/flink/pull/9058#discussion_r302450229
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolPendingRequestFailureTest.java
 ##
 @@ -127,13 +133,50 @@ public void 
testFailingResourceManagerRequestFailsPendingSlotRequestAndCancelsRM
}
}
 
+   /**
+* Tests that a pending slot request is failed with a timeout.
+*/
+   @Test
+   public void testPendingSlotRequestTimeout() throws Exception {
+   final ScheduledExecutorService singleThreadExecutor = 
Executors.newSingleThreadScheduledExecutor();
+   final ComponentMainThreadExecutor componentMainThreadExecutor = 
ComponentMainThreadExecutorServiceAdapter.forSingleThreadExecutor(singleThreadExecutor);
+
+   final SlotPoolImpl slotPool = 
setUpSlotPool(componentMainThreadExecutor);
+
+   try {
+   final Time timeout = Time.milliseconds(5L);
+
+   final CompletableFuture slotFuture = 
CompletableFuture
+   .supplyAsync(() -> 
requestNewAllocatedSlot(slotPool, new SlotRequestId(), timeout), 
componentMainThreadExecutor)
+   .thenCompose(Function.identity());
+
+   try {
+   slotFuture.get();
+   fail("Expected TimeoutFuture.");
 
 Review comment:
   That error message seems a bit off.




[GitHub] [flink] fhueske commented on issue #8844: [FLINK-12951][table-planner] Add logic to bridge DDL to table source(…

2019-07-11 Thread GitBox
fhueske commented on issue #8844: [FLINK-12951][table-planner] Add logic to 
bridge DDL to table source(…
URL: https://github.com/apache/flink/pull/8844#issuecomment-510414748
 
 
   Great, thank you @danny0405!




[GitHub] [flink] flinkbot commented on issue #9081: [FLINK-13209] Following FLINK-12951: Remove TableEnvironment#sql and …

2019-07-11 Thread GitBox
flinkbot commented on issue #9081: [FLINK-13209] Following FLINK-12951: Remove 
TableEnvironment#sql and …
URL: https://github.com/apache/flink/pull/9081#issuecomment-510415681
 
 
   ## CI report:
   
   * d50585c74861ac8af2b35e0dae3ca163d4ca3d11 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118747118)
   




[jira] [Updated] (FLINK-12689) flink-dist is missing flink-azure-fs-hadoop dependency

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-12689:
-
Summary: flink-dist is missing flink-azure-fs-hadoop dependency  (was: 
Building flink-dist fails because flink-azure-fs-hadoop jar cannot be added to 
/opt)

> flink-dist is missing flink-azure-fs-hadoop dependency
> --
>
> Key: FLINK-12689
> URL: https://issues.apache.org/jira/browse/FLINK-12689
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.9.0
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Build fails when building with:
> {code}
> mvn clean install -pl flink-dist -am -DskipTests -Dfast 
> {code}
> {noformat}
> [INFO] flink-scala-shell .. SUCCESS [ 10.989 
> s]
> [INFO] flink-dist . FAILURE [ 26.068 
> s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13:24 min
> [INFO] Finished at: 2019-05-31T09:55:22+02:00
> [INFO] Final Memory: 313M/1834M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:3.0.0:single (opt) on project 
> flink-dist_2.11: Failed to create assembly: Error adding file to archive: 
> /Users/gyao/Documents/work/code/github/flink/flink-dist/../flink-filesystems/flink-azure-fs-hadoop/target/flink-azure-fs-hadoop-1.9-SNAPSHOT.jar
>  -> [Help 1]
> {noformat}
> The Azure FS dependency should be added to flink-dist with provided scope.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] wuchong commented on a change in pull request #9081: [FLINK-13209] Following FLINK-12951: Remove TableEnvironment#sql and …

2019-07-11 Thread GitBox
wuchong commented on a change in pull request #9081: [FLINK-13209] Following 
FLINK-12951: Remove TableEnvironment#sql and …
URL: https://github.com/apache/flink/pull/9081#discussion_r302459613
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/TableEnvironment.java
 ##
 @@ -430,11 +362,51 @@ static TableEnvironment create(EnvironmentSettings 
settings) {
 * }
 * 
 *
-* @deprecated Use {@link #sql(String)}.
+* A DDL statement can also execute to create a table:
+* For example, the below DDL statement would create a CSV table named 
`tbl1`
+* into the current catalog:
+* 
+*create table tbl1(
+*  a int,
+*  b bigint,
+*  c varchar
+*) with (
+*  connector = 'csv',
+*  csv.path = 'xxx'
 
 Review comment:
   Could you give valid WITH properties? For example:
   ```
   connector.type = 'filesystem',
   connector.property-version = '1',
   connector.path = '/path/to/file',
   format.type = 'csv',
   format.property-version = '1',
   format.derive-schema = 'true'
   ```
   However, I'm not sure the above properties work.




[GitHub] [flink] carp84 commented on a change in pull request #9075: [FLINK-10245][hbase] Add an upsert table sink factory for HBase

2019-07-11 Thread GitBox
carp84 commented on a change in pull request #9075:  [FLINK-10245][hbase] Add 
an upsert table sink factory for HBase
URL: https://github.com/apache/flink/pull/9075#discussion_r302460549
 
 

 ##
 File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
 ##
 @@ -54,7 +54,7 @@
 * @param qualifier the qualifier name
 * @param clazz the data type of the qualifier
 */
-   void addColumn(String family, String qualifier, Class clazz) {
+   public void addColumn(String family, String qualifier, Class clazz) {
 
 Review comment:
   I see, makes sense.




[jira] [Commented] (FLINK-4399) Add support for oversized messages

2019-07-11 Thread Biao Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-4399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882829#comment-16882829
 ] 

Biao Liu commented on FLINK-4399:
-

I have written down my plan for this issue; see the attached doc. Any feedback 
is welcome!

> Add support for oversized messages
> --
>
> Key: FLINK-4399
> URL: https://issues.apache.org/jira/browse/FLINK-4399
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
> Environment: FLIP-6 feature branch
>Reporter: Stephan Ewen
>Assignee: Biao Liu
>Priority: Major
>  Labels: flip-6
>
> Currently, messages larger than the maximum Akka Framesize cause an error 
> when being transported. We should add a way to pass messages that are larger 
> than the Framesize, as may happen for:
>   - {{collect()}} calls that collect large data sets (via accumulators)
>   - Job submissions and operator deployments where the functions closures are 
> large (for example because it contains large pre-loaded data)
>   - Function restore in cases where restored state is larger than 
> checkpointed state (union state)
> I suggest to use the {{BlobManager}} to transfer large payload.
>   - On the sender side, oversized messages are stored under a transient blob 
> (which is deleted after first retrieval, or after a certain number of minutes)
>   - The sender sends a "pointer to blob message" instead.
>   - The receiver grabs the message from the blob upon receiving the pointer 
> message
> The RPC Service should be optionally initializable with a "large message 
> handler" which is internally the {{BlobManager}}.
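
The proposed blob-pointer handoff can be sketched as follows. This is a toy illustration only, not Flink code: the class and method names are hypothetical, and a plain map stands in for the {{BlobManager}} with its delete-on-first-retrieval semantics.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Toy sketch of the "pointer to blob" idea (all names hypothetical).
class OversizedMessageSketch {
    static final int MAX_FRAME_SIZE = 1024; // stand-in for the Akka frame size limit

    // Stand-in for the BlobManager's transient blob store.
    static final Map<UUID, byte[]> blobStore = new HashMap<>();

    // Sender side: an oversized payload is stored as a transient blob and a
    // small "pointer to blob" message (the key) is sent instead.
    static Object wrapForSend(byte[] payload) {
        if (payload.length <= MAX_FRAME_SIZE) {
            return payload; // fits into a frame, send directly
        }
        UUID key = UUID.randomUUID();
        blobStore.put(key, payload);
        return key;
    }

    // Receiver side: a pointer message is resolved against the blob store;
    // the transient blob is deleted on first retrieval.
    static byte[] unwrapOnReceive(Object message) {
        if (message instanceof byte[]) {
            return (byte[]) message;
        }
        return blobStore.remove(message);
    }
}
```

A real implementation would additionally expire blobs after a configurable number of minutes, as the description suggests.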





[GitHub] [flink] zhijiangW commented on issue #9062: [FLINK-13100][network] Fix the bug of throwing IOException while FileChannelBoundedData#nextBuffer

2019-07-11 Thread GitBox
zhijiangW commented on issue #9062: [FLINK-13100][network] Fix the bug of 
throwing IOException while FileChannelBoundedData#nextBuffer
URL: https://github.com/apache/flink/pull/9062#issuecomment-510428841
 
 
   @pnowojski @StephanEwen I have rebased the code and made some modifications:
   
   - Added the unit tests Stephan provided in his branch.
   
   - Made `BoundedBlockingSubpartitionType` configurable in ITCase/tests.
   
   - Added a new ITCase that verifies this fix. Before the fix this case 
frequently threw `IOException`; after the fix it runs correctly.




[jira] [Created] (FLINK-13215) Hive connector does not compile on Java 9

2019-07-11 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-13215:


 Summary: Hive connector does not compile on Java 9
 Key: FLINK-13215
 URL: https://issues.apache.org/jira/browse/FLINK-13215
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.9.0
Reporter: Chesnay Schepler
 Fix For: 1.9.0


{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
(default-testCompile) on project flink-connector-hive_2.11: Compilation failure
[ERROR] 
/C:/Dev/Repos/flink/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/batch/connectors/hive/FlinkStandaloneHiveRunner.java:[56,15]
 package sun.net.www is not visible
{code}





[jira] [Closed] (FLINK-13214) Hive connector is missing jdk.tools exclusion for Java 9

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-13214.

Resolution: Fixed

master: bdcadfa2df3a39dbd3ddc4d7390ac66d76057b5c 

> Hive connector is missing jdk.tools exclusion for Java 9
> 
>
> Key: FLINK-13214
> URL: https://issues.apache.org/jira/browse/FLINK-13214
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.9.0
>
>
> {code}
> [ERROR] Failed to execute goal on project flink-connector-hive_2.12: Could 
> not resolve dependencies for project 
> org.apache.flink:flink-connector-hive_2.12:jar:1.9-SNAPSHOT: Could not find 
> artifact jdk.tools:jdk.tools:jar:1.7 at specified path 
> C:\Dev\Java\9/../lib/tools.jar -> [Help 1]
> {code}





[jira] [Updated] (FLINK-8033) JDK 9 support

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-8033:

Summary: JDK 9 support  (was: Build Flink with JDK 9)

> JDK 9 support
> -
>
> Key: FLINK-8033
> URL: https://issues.apache.org/jira/browse/FLINK-8033
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Hai Zhou
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> This is a JIRA to track all issues that found to make Flink compatible with 
> Java 9.





[GitHub] [flink] KurtYoung commented on issue #9006: [FLINK-13107][table-planner-blink] Copy TableApi IT and UT to Blink planner

2019-07-11 Thread GitBox
KurtYoung commented on issue #9006: [FLINK-13107][table-planner-blink] Copy 
TableApi IT and UT to Blink planner
URL: https://github.com/apache/flink/pull/9006#issuecomment-510443572
 
 
   > What are the implications on the test times when we copy so many tests?
   
   According to this: https://travis-ci.org/beyond1920/flink/builds/557233740, 
the test time increases less than 2 mins. 




[GitHub] [flink] KurtYoung closed pull request #9006: [FLINK-13107][table-planner-blink] Copy TableApi IT and UT to Blink planner

2019-07-11 Thread GitBox
KurtYoung closed pull request #9006: [FLINK-13107][table-planner-blink] Copy 
TableApi IT and UT to Blink planner
URL: https://github.com/apache/flink/pull/9006
 
 
   




[jira] [Commented] (FLINK-13107) Copy TableApi IT and UT to Blink planner

2019-07-11 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882871#comment-16882871
 ] 

Kurt Young commented on FLINK-13107:


merged in 1.9.0: 116c10b2c67ca1187ccf7847cd795261802f74df

> Copy TableApi IT and UT to Blink planner
> 
>
> Key: FLINK-13107
> URL: https://issues.apache.org/jira/browse/FLINK-13107
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The issue aims to copy the testcases in the following packages from 
> flink-planner and original blink to Blink-planner:
> 1. org.apache.flink.table.api.batch.table
> 2. org.apache.flink.table.api.stream.table
> 3. org.apache.flink.table.runtime.batch.table
> 4. org.apache.flink.table.runtime.stream.table





[jira] [Closed] (FLINK-13107) Copy TableApi IT and UT to Blink planner

2019-07-11 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-13107.
--
   Resolution: Fixed
Fix Version/s: 1.9.0

> Copy TableApi IT and UT to Blink planner
> 
>
> Key: FLINK-13107
> URL: https://issues.apache.org/jira/browse/FLINK-13107
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The issue aims to copy the testcases in the following packages from 
> flink-planner and original blink to Blink-planner:
> 1. org.apache.flink.table.api.batch.table
> 2. org.apache.flink.table.api.stream.table
> 3. org.apache.flink.table.runtime.batch.table
> 4. org.apache.flink.table.runtime.stream.table





[jira] [Created] (FLINK-13216) AggregateITCase.testNestedGroupByAgg fails on Travis

2019-07-11 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-13216:
-

 Summary: AggregateITCase.testNestedGroupByAgg fails on Travis
 Key: FLINK-13216
 URL: https://issues.apache.org/jira/browse/FLINK-13216
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.9.0
Reporter: Till Rohrmann
 Fix For: 1.9.0


The {{AggregateITCase.testNestedGroupByAgg}} fails on Travis with

{code}
AggregateITCase.testNestedGroupByAgg:472 expected: but was:
{code}

https://api.travis-ci.org/v3/job/557214216/log.txt





[jira] [Created] (FLINK-13218) '*.count not supported in TableApi query

2019-07-11 Thread Jing Zhang (JIRA)
Jing Zhang created FLINK-13218:
--

 Summary: '*.count not supported in TableApi query
 Key: FLINK-13218
 URL: https://issues.apache.org/jira/browse/FLINK-13218
 Project: Flink
  Issue Type: Task
Reporter: Jing Zhang
Assignee: Jing Zhang


The following query is not supported yet:

{code:java}
val t = StreamTestData.get3TupleDataStream(env).toTable(tEnv, 'a, 'b, 'c)
  .groupBy('b)
  .select('b, 'a.sum, '*.count)
{code}

The following exception will be thrown.

{code:java}
org.apache.flink.table.api.ValidationException: Cannot resolve field [*], input 
field list:[a, b, c].

at 
org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.failForField(ReferenceResolverRule.java:80)
at 
org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$null$4(ReferenceResolverRule.java:75)
at java.util.Optional.orElseThrow(Optional.java:290)
at 
org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$null$5(ReferenceResolverRule.java:75)
at java.util.Optional.orElseGet(Optional.java:267)
at 
org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$visit$6(ReferenceResolverRule.java:74)
at java.util.Optional.orElseGet(Optional.java:267)
at 
org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.visit(ReferenceResolverRule.java:71)
at 
org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.visit(ReferenceResolverRule.java:51)
at 
...
{code}






[GitHub] [flink] flinkbot edited a comment on issue #8925: [FLINK-12852][network] Fix the deadlock occured when requesting exclusive buffers

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #8925: [FLINK-12852][network] Fix the 
deadlock occured when requesting exclusive buffers
URL: https://github.com/apache/flink/pull/8925#issuecomment-510405884
 
 
   ## CI report:
   
   * 31a51cbe260a78381dc44973e6724c20532b5deb : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/118718023)
   




[GitHub] [flink] tillrohrmann commented on issue #9073: [FLINK-13187] Introduce ScheduleMode#LAZY_FROM_SOURCES_WITH_BATCH_SLOT_REQUEST

2019-07-11 Thread GitBox
tillrohrmann commented on issue #9073: [FLINK-13187] Introduce 
ScheduleMode#LAZY_FROM_SOURCES_WITH_BATCH_SLOT_REQUEST
URL: https://github.com/apache/flink/pull/9073#issuecomment-510455540
 
 
   Thanks for the review @StephanEwen. I've addressed your comments. Merging 
once Travis gives green light.




[GitHub] [flink] tillrohrmann commented on a change in pull request #9073: [FLINK-13187] Introduce ScheduleMode#LAZY_FROM_SOURCES_WITH_BATCH_SLOT_REQUEST

2019-07-11 Thread GitBox
tillrohrmann commented on a change in pull request #9073: [FLINK-13187] 
Introduce ScheduleMode#LAZY_FROM_SOURCES_WITH_BATCH_SLOT_REQUEST
URL: https://github.com/apache/flink/pull/9073#discussion_r302503632
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/ScheduleMode.java
 ##
 @@ -19,21 +19,40 @@
 package org.apache.flink.runtime.jobgraph;
 
 /**
- * The ScheduleMode decides how tasks of an execution graph are started.  
+ * The ScheduleMode decides how tasks of an execution graph are started.
  */
 public enum ScheduleMode {
 
/** Schedule tasks lazily from the sources. Downstream tasks are 
started once their input data are ready */
-   LAZY_FROM_SOURCES,
+   LAZY_FROM_SOURCES {
 
 Review comment:
   Good point. I will update it.
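
For readers unfamiliar with the construct in the diff above ({{LAZY_FROM_SOURCES {}}): it is the standard Java pattern of constant-specific method bodies in an enum. A minimal sketch, with a hypothetical method name:

```java
// Minimal sketch of constant-specific method bodies in a Java enum
// (the method name here is hypothetical, not taken from the diff).
enum ScheduleModeSketch {
    /** Downstream tasks start once their input data are ready. */
    LAZY_FROM_SOURCES {
        @Override
        public boolean allowLazyDeployment() {
            return true;
        }
    },
    /** All tasks are deployed immediately. */
    EAGER {
        @Override
        public boolean allowLazyDeployment() {
            return false;
        }
    };

    /** Each constant supplies its own implementation. */
    public abstract boolean allowLazyDeployment();
}
```

Each constant then carries its own behavior, instead of callers switching over the enum value.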




[jira] [Commented] (FLINK-13218) '*.count not supported in TableApi query

2019-07-11 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882895#comment-16882895
 ] 

Timo Walther commented on FLINK-13218:
--

Not sure whether this should be supported. {{'*}} is a field-unrolling operation 
in the Table API, which means that {{'*.count}} would expand to {{'a.count, 
'b.count}}. Do we want that?

> '*.count not supported in TableApi query
> 
>
> Key: FLINK-13218
> URL: https://issues.apache.org/jira/browse/FLINK-13218
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
>
> The following query is not supported yet:
> {code:java}
> val t = StreamTestData.get3TupleDataStream(env).toTable(tEnv, 'a, 'b, 'c)
>   .groupBy('b)
>   .select('b, 'a.sum, '*.count)
> {code}
> The following exception will be thrown.
> {code:java}
> org.apache.flink.table.api.ValidationException: Cannot resolve field [*], 
> input field list:[a, b, c].
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.failForField(ReferenceResolverRule.java:80)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$null$4(ReferenceResolverRule.java:75)
>   at java.util.Optional.orElseThrow(Optional.java:290)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$null$5(ReferenceResolverRule.java:75)
>   at java.util.Optional.orElseGet(Optional.java:267)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$visit$6(ReferenceResolverRule.java:74)
>   at java.util.Optional.orElseGet(Optional.java:267)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.visit(ReferenceResolverRule.java:71)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.visit(ReferenceResolverRule.java:51)
>   at 
> ...
> {code}
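
The field unrolling mentioned in the comment above can be illustrated with a toy Java sketch: {{'*}} expands to the full input field list, so an aggregate call on {{'*}} would apply per field. This is illustrative only, not Flink API:

```java
import java.util.List;
import java.util.stream.Collectors;

// Toy illustration of unrolling '*.count into one count expression per input field.
class StarUnrollSketch {
    static List<String> unrollCount(List<String> inputFields) {
        // '* stands for the whole input field list [a, b, c];
        // the .count call is then applied to each field.
        return inputFields.stream()
                .map(field -> "'" + field + ".count")
                .collect(Collectors.toList());
    }
}
```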





[jira] [Updated] (FLINK-13199) ARM support for Flink

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-13199:
-
Affects Version/s: (was: 1.9.0)

> ARM support for Flink
> -
>
> Key: FLINK-13199
> URL: https://issues.apache.org/jira/browse/FLINK-13199
> Project: Flink
>  Issue Type: Wish
>  Components: Build System
>Reporter: wangxiyuan
>Priority: Critical
>
> There is no official ARM release for Flink. But based on my local tests, 
> Flink, which is written in Java and Scala, builds and passes its tests well. 
> So is it possible to support an ARM release officially? I think it may not be 
> a huge amount of work.
>  
> AFAIK, Flink currently uses travis-ci, which supports only x86, as its CI 
> gate. Is it possible to add an ARM one? I'm from the OpenLab community[1]. 
> Similar to travis-ci, it is an open-source, free community that provides CI 
> resources and systems for open-source projects, with both ARM and x86 
> machines. It already helps several communities build their CI, such as 
> OpenStack and CNCF.
>  
> If the Flink community agrees to support ARM, I can spend my full time 
> helping, e.g. with job definitions, CI maintenance, test fixes, and so on. If 
> Flink doesn't want to rely on OpenLab, we can donate ARM resources directly 
> as well.
>  
> I have already sent a discussion to the mailing list[2]. Feel free to reply 
> there or here.
>  
> Thanks.
>  
> [1]:[https://openlabtesting.org/]
> [2]:[http://mail-archives.apache.org/mod_mbox/flink-dev/201907.mbox/browser]





[jira] [Closed] (FLINK-12224) Kafka 0.10/0.11 e2e test fails on Java 9

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-12224.

   Resolution: Won't Fix
Fix Version/s: (was: 1.9.0)

> Kafka 0.10/0.11 e2e test fails on Java 9
> 
>
> Key: FLINK-12224
> URL: https://issues.apache.org/jira/browse/FLINK-12224
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Priority: Major
>
> https://travis-ci.org/zentol/flink/jobs/519154949
> {code}
> java.net.ConnectException: Connection refused
>   at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
> Exception in thread "main" org.I0Itec.zkclient.exception.ZkTimeoutException: 
> Unable to connect to zookeeper server 'localhost:2181' with timeout of 3 
> ms
>   at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1233)
>   at org.I0Itec.zkclient.ZkClient.(ZkClient.java:157)
>   at org.I0Itec.zkclient.ZkClient.(ZkClient.java:131)
>   at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
>   at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
>   at kafka.admin.TopicCommand$.main(TopicCommand.scala:53)
>   at kafka.admin.TopicCommand.main(TopicCommand.scala)
> No kafka server to stop
> {code}
> It could be that the bundled ZooKeeper version doesn't work with Java 9 at 
> all, or needs extra configuration to make it work.





[jira] [Updated] (FLINK-13199) ARM support for Flink

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-13199:
-
Fix Version/s: (was: 1.9.0)
   (was: 2.0.0)

> ARM support for Flink
> -
>
> Key: FLINK-13199
> URL: https://issues.apache.org/jira/browse/FLINK-13199
> Project: Flink
>  Issue Type: Wish
>  Components: Build System
>Affects Versions: 1.9.0
>Reporter: wangxiyuan
>Priority: Critical
>
> There is no official ARM release for Flink. But based on my local tests, 
> Flink, which is written in Java and Scala, builds and passes its tests well. 
> So is it possible to support an ARM release officially? I think it may not be 
> a huge amount of work.
>  
> AFAIK, Flink currently uses travis-ci, which supports only x86, as its CI 
> gate. Is it possible to add an ARM one? I'm from the OpenLab community[1]. 
> Similar to travis-ci, it is an open-source, free community that provides CI 
> resources and systems for open-source projects, with both ARM and x86 
> machines. It already helps several communities build their CI, such as 
> OpenStack and CNCF.
>  
> If the Flink community agrees to support ARM, I can spend my full time 
> helping, e.g. with job definitions, CI maintenance, test fixes, and so on. If 
> Flink doesn't want to rely on OpenLab, we can donate ARM resources directly 
> as well.
>  
> I have already sent a discussion to the mailing list[2]. Feel free to reply 
> there or here.
>  
> Thanks.
>  
> [1]:[https://openlabtesting.org/]
> [2]:[http://mail-archives.apache.org/mod_mbox/flink-dev/201907.mbox/browser]





[jira] [Closed] (FLINK-12054) HBaseConnectorITCase fails on Java 9

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-12054.

Resolution: Won't Fix

The HBase project skipped Java 9.

> HBaseConnectorITCase fails on Java 9
> 
>
> Key: FLINK-12054
> URL: https://issues.apache.org/jira/browse/FLINK-12054
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Connectors / HBase
>Reporter: Chesnay Schepler
>Assignee: leesf
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> An issue in hbase.
> {code}
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 21.83 sec <<< 
> FAILURE! - in org.apache.flink.addons.hbase.HBaseConnectorITCase
> org.apache.flink.addons.hbase.HBaseConnectorITCase  Time elapsed: 21.829 sec  
> <<< FAILURE!
> java.lang.AssertionError: We should get a URLClassLoader
>   at 
> org.apache.flink.addons.hbase.HBaseConnectorITCase.activateHBaseCluster(HBaseConnectorITCase.java:81)
> {code}





[GitHub] [flink] flinkbot edited a comment on issue #9036: [FLINK-13112][table-planner-blink] Support LocalZonedTimestampType in blink

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #9036: [FLINK-13112][table-planner-blink] 
Support LocalZonedTimestampType in blink
URL: https://github.com/apache/flink/pull/9036#issuecomment-510405764
 
 
   ## CI report:
   
   * 1abd1e5a9d7dc7698432234638fdbe80eb191e8d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118718020)
   * 9310b113ba2bb380c1dffc7dc3ccecbd8bdd6c7a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118744421)
   




[GitHub] [flink] flinkbot commented on issue #9064: [FLINK-13188][Runtime / State Backends][Test] Fix MockStateBackend#resolveCheckpointStorageLocation return null cause intellij assertion detect test

2019-07-11 Thread GitBox
flinkbot commented on issue #9064: [FLINK-13188][Runtime / State 
Backends][Test] Fix MockStateBackend#resolveCheckpointStorageLocation return 
null cause intellij assertion detect test failed
URL: https://github.com/apache/flink/pull/9064#issuecomment-510409332
 
 
   ## CI report:
   
   * 77d92ad5b8be29975fe61014ea1445955fc7841d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118744339)
   




[GitHub] [flink] flinkbot edited a comment on issue #9057: [FLINK-13121] [table-planner-blink] Set batch properties to runtime in blink batch executor

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #9057: [FLINK-13121] [table-planner-blink] 
Set batch properties to runtime in blink batch executor
URL: https://github.com/apache/flink/pull/9057#issuecomment-510405733
 
 
   ## CI report:
   
   * b616282cb875778a7a5af22a2783eaaf48104908 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118709811)
   * 62a9ff2805a3c04e160a7f7d52f40acf049d9c4b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118744386)
   




[GitHub] [flink] flinkbot edited a comment on issue #9074: [FLINK-13198] Introduce TimeInterval in configuration package

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #9074: [FLINK-13198] Introduce TimeInterval 
in configuration package
URL: https://github.com/apache/flink/pull/9074#issuecomment-510405793
 
 
   ## CI report:
   
   * df6685a3394eeea7a6013f2a9f6c1b6bfeac87e5 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118736911)
   * 2d3ea443452c7d22f35d700dbec18320cf048ca0 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118744280)
   




[GitHub] [flink] flinkbot edited a comment on issue #9067: [FLINK-13069][hive] HiveTableSink should implement OverwritableTableSink

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #9067: [FLINK-13069][hive] HiveTableSink 
should implement OverwritableTableSink
URL: https://github.com/apache/flink/pull/9067#issuecomment-510405753
 
 
   ## CI report:
   
   * 0034a70157b871b401cb1f8cd5a223427cf6223a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118709799)
   




[GitHub] [flink] flinkbot edited a comment on issue #9021: [FLINK-13205][runtime] Make checkpoints injection ordered with stop-with-savepoint

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #9021: [FLINK-13205][runtime] Make 
checkpoints injection ordered with stop-with-savepoint
URL: https://github.com/apache/flink/pull/9021#issuecomment-510405873
 
 
   ## CI report:
   
   * abed4b5678a2f09b3bb729bd62b5264e56b55b9f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118731856)
   * 505ec154b21e0340e112f16fcfcfb1eeb52fa345 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/11879)
   




[jira] [Updated] (FLINK-13209) Following FLINK-12951: Remove TableEnvironment#sql and add create table ddl support to sqlUpdate

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13209:
---
Labels: pull-request-available  (was: )

> Following FLINK-12951: Remove TableEnvironment#sql and add create table ddl 
> support to sqlUpdate
> 
>
> Key: FLINK-13209
> URL: https://issues.apache.org/jira/browse/FLINK-13209
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Danny Chan
>Assignee: Danny Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13202) Unstable StandaloneResourceManagerTest

2019-07-11 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882798#comment-16882798
 ] 

Kurt Young commented on FLINK-13202:


Got it. [~till.rohrmann]

> Unstable StandaloneResourceManagerTest
> --
>
> Key: FLINK-13202
> URL: https://issues.apache.org/jira/browse/FLINK-13202
> Project: Flink
>  Issue Type: Test
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Kurt Young
>Priority: Critical
>  Labels: test-stability
>
> [https://api.travis-ci.org/v3/job/557150195/log.txt]
>  
> 06:37:02.888 [ERROR] Failures:
> 06:37:02.889 [ERROR] 
> StandaloneResourceManagerTest.testStartupPeriod:60->assertHappensUntil:114 
> condition was not fulfilled before the deadline





[GitHub] [flink] danny0405 opened a new pull request #9081: [FLINK-13209] Following FLINK-12951: Remove TableEnvironment#sql and …

2019-07-11 Thread GitBox
danny0405 opened a new pull request #9081: [FLINK-13209] Following FLINK-12951: 
Remove TableEnvironment#sql and …
URL: https://github.com/apache/flink/pull/9081
 
 
   ## What is the purpose of the change
   
This patch moves create table DDL support from `#sql` to `#sqlUpdate`, and also 
removes the deprecation of `sqlUpdate` and `sqlQuery`.




[GitHub] [flink] xuyang1706 opened a new pull request #9082: [FLINK-13207][ml] Add the algorithm of Fast Fourier Transformation(FFT)

2019-07-11 Thread GitBox
xuyang1706 opened a new pull request #9082: [FLINK-13207][ml] Add the algorithm 
of Fast Fourier Transformation(FFT)
URL: https://github.com/apache/flink/pull/9082
 
 
   
   
   ## What is the purpose of the change
   
   Add 2 commonly used algorithms of Fast Fourier Transformation (FFT):
   
   1. Cooley-Tukey algorithm: high performance, but only supports power-of-2 
lengths.
   2. Chirp-Z algorithm: can perform an FFT of any length.
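   The power-of-2 restriction comes from the recursive halving at the heart of 
Cooley-Tukey. A minimal, self-contained sketch of that radix-2 recursion (purely 
illustrative; class and method names here are not taken from the actual PR):

```java
// Minimal radix-2 Cooley-Tukey FFT sketch (illustrative only, not the PR's code).
// Input length must be a power of 2 -- exactly the limitation the Chirp-Z
// variant removes.
public class FftSketch {
    static void fft(double[] re, double[] im) {
        int n = re.length;
        if (n == 1) {
            return;
        }
        // split into even- and odd-indexed halves
        double[] evenRe = new double[n / 2], evenIm = new double[n / 2];
        double[] oddRe = new double[n / 2], oddIm = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            evenRe[k] = re[2 * k];
            evenIm[k] = im[2 * k];
            oddRe[k] = re[2 * k + 1];
            oddIm[k] = im[2 * k + 1];
        }
        fft(evenRe, evenIm);
        fft(oddRe, oddIm);
        // combine the half-size transforms with twiddle factors e^(-2*pi*i*k/n)
        for (int k = 0; k < n / 2; k++) {
            double ang = -2 * Math.PI * k / n;
            double wr = Math.cos(ang), wi = Math.sin(ang);
            double tr = wr * oddRe[k] - wi * oddIm[k];
            double ti = wr * oddIm[k] + wi * oddRe[k];
            re[k] = evenRe[k] + tr;
            im[k] = evenIm[k] + ti;
            re[k + n / 2] = evenRe[k] - tr;
            im[k + n / 2] = evenIm[k] - ti;
        }
    }

    public static void main(String[] args) {
        double[] re = {1, 1, 1, 1};
        double[] im = {0, 0, 0, 0};
        fft(re, im);
        // DC bin carries the sum of the inputs; the other bins are ~0
        System.out.println(Math.round(re[0])); // 4
        System.out.println(Math.round(re[1])); // 0
    }
}
```

Each level of the recursion does O(n) combine work over log n levels, giving the 
familiar O(n log n) cost.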
   
   
   ## Brief change log
   
 - *add Cooley-Tukey algorithm*
 - *add Chirp-Z algorithm*
 - *add unit tests*
   
   
   ## Verifying this change
   This change added tests and can be verified as follows:
   
   - run the added test cases and check that they pass
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (JavaDocs)
   




[GitHub] [flink] flinkbot edited a comment on issue #9057: [FLINK-13121] [table-planner-blink] Set batch properties to runtime in blink batch executor

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #9057: [FLINK-13121] [table-planner-blink] 
Set batch properties to runtime in blink batch executor
URL: https://github.com/apache/flink/pull/9057#issuecomment-510405733
 
 
   ## CI report:
   
   * b616282cb875778a7a5af22a2783eaaf48104908 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118709811)
   * 62a9ff2805a3c04e160a7f7d52f40acf049d9c4b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118744386)
   




[GitHub] [flink] flinkbot commented on issue #9082: [FLINK-13207][ml] Add the algorithm of Fast Fourier Transformation(FFT)

2019-07-11 Thread GitBox
flinkbot commented on issue #9082: [FLINK-13207][ml] Add the algorithm of Fast 
Fourier Transformation(FFT)
URL: https://github.com/apache/flink/pull/9082#issuecomment-510415691
 
 
   ## CI report:
   
   * 2b6f431f4f281936034418c8ce64e2e1ba10bd5f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118747092)
   




[jira] [Created] (FLINK-13211) Add drop table support for flink planner

2019-07-11 Thread Danny Chan (JIRA)
Danny Chan created FLINK-13211:
--

 Summary: Add drop table support for flink planner
 Key: FLINK-13211
 URL: https://issues.apache.org/jira/browse/FLINK-13211
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.9.0
Reporter: Danny Chan
Assignee: Danny Chan
 Fix For: 1.9.0








[GitHub] [flink] flinkbot edited a comment on issue #9067: [FLINK-13069][hive] HiveTableSink should implement OverwritableTableSink

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #9067: [FLINK-13069][hive] HiveTableSink 
should implement OverwritableTableSink
URL: https://github.com/apache/flink/pull/9067#issuecomment-510405753
 
 
   ## CI report:
   
   * 0034a70157b871b401cb1f8cd5a223427cf6223a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118747156)
   




[GitHub] [flink] KurtYoung commented on a change in pull request #9080: [FLINK-13115] [table-planner-blink] Introduce planner rule to support partition pruning for PartitionableTableSource

2019-07-11 Thread GitBox
KurtYoung commented on a change in pull request #9080: [FLINK-13115] 
[table-planner-blink] Introduce planner rule to support partition pruning for 
PartitionableTableSource
URL: https://github.com/apache/flink/pull/9080#discussion_r302458347
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/rules/FlinkBatchRuleSets.scala
 ##
 @@ -243,6 +245,7 @@ object FlinkBatchRuleSets {
     // scan optimization
     PushProjectIntoTableSourceScanRule.INSTANCE,
     PushFilterIntoTableSourceScanRule.INSTANCE,
+    PushPartitionIntoTableSourceScanRule.INSTANCE,
 
 Review comment:
   Will it cause any problem if this `PushPartitionIntoTableSourceScanRule` is 
introduced with 2 different programs?




[jira] [Commented] (FLINK-13179) Add document on how to run examples.

2019-07-11 Thread Robert Metzger (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882805#comment-16882805
 ] 

Robert Metzger commented on FLINK-13179:


Thanks, I understand now.

Afaik [~sjwiesman] is working on the examples as part of 
https://issues.apache.org/jira/browse/FLINK-12746.

> Add document on how to run examples.
> 
>
> Key: FLINK-13179
> URL: https://issues.apache.org/jira/browse/FLINK-13179
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Examples
>Reporter: Jiangjie Qin
>Priority: Major
>
> Some of the examples in Flink do not have sufficient documentation, so users 
> may not be able to run them easily. We can probably have a shell script to run 
> the examples. The shell script should include the necessary libraries and 
> maybe log4j configs. 





[GitHub] [flink] KurtYoung commented on a change in pull request #9080: [FLINK-13115] [table-planner-blink] Introduce planner rule to support partition pruning for PartitionableTableSource

2019-07-11 Thread GitBox
KurtYoung commented on a change in pull request #9080: [FLINK-13115] 
[table-planner-blink] Introduce planner rule to support partition pruning for 
PartitionableTableSource
URL: https://github.com/apache/flink/pull/9080#discussion_r302459289
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/rules/logical/PushPartitionIntoTableSourceScanRule.scala
 ##
 @@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.plan.rules.logical
+
+import org.apache.flink.table.calcite.{FlinkContext, FlinkTypeFactory}
+import org.apache.flink.table.plan.schema.{FlinkRelOptTable, TableSourceTable}
+import org.apache.flink.table.plan.stats.FlinkStatistic
+import org.apache.flink.table.plan.util.{FlinkRelOptUtil, PartitionPruner, RexNodeExtractor}
+import org.apache.flink.table.sources.PartitionableTableSource
+
+import org.apache.calcite.plan.RelOptRule.{none, operand}
+import org.apache.calcite.plan.{RelOptRule, RelOptRuleCall}
+import org.apache.calcite.rel.core.Filter
+import org.apache.calcite.rel.logical.LogicalTableScan
+import org.apache.calcite.rex.{RexInputRef, RexNode, RexShuttle}
+
+import scala.collection.JavaConversions._
+
+/**
+  * Planner rule that tries to push partitions evaluated by filter condition into a
+  * [[PartitionableTableSource]].
+  */
+class PushPartitionIntoTableSourceScanRule extends RelOptRule(
+  operand(classOf[Filter],
+    operand(classOf[LogicalTableScan], none)),
+  "PushPartitionIntoTableSourceScanRule") {
+
+  override def matches(call: RelOptRuleCall): Boolean = {
+    val filter: Filter = call.rel(0)
+    if (filter.getCondition == null) {
+      return false
+    }
+
+    val scan: LogicalTableScan = call.rel(1)
+    scan.getTable.unwrap(classOf[TableSourceTable[_]]) match {
+      case table: TableSourceTable[_] =>
+        table.tableSource match {
+          case p: PartitionableTableSource => p.getPartitionFieldNames.nonEmpty
+          case _ => false
+        }
+      case _ => false
+    }
+  }
+
+  override def onMatch(call: RelOptRuleCall): Unit = {
+    val filter: Filter = call.rel(0)
+    val scan: LogicalTableScan = call.rel(1)
+    val table: FlinkRelOptTable = scan.getTable.asInstanceOf[FlinkRelOptTable]
+    pushPartitionIntoScan(call, filter, scan, table)
+  }
+
+  private def pushPartitionIntoScan(
+      call: RelOptRuleCall,
+      filter: Filter,
+      scan: LogicalTableScan,
+      relOptTable: FlinkRelOptTable): Unit = {
+
+    val tableSourceTable = relOptTable.unwrap(classOf[TableSourceTable[_]])
+    val tableSource = tableSourceTable.tableSource.asInstanceOf[PartitionableTableSource]
+    val partitionFieldNames = tableSource.getPartitionFieldNames.toList.toArray
+    val inputFieldType = filter.getInput.getRowType
+
+    val relBuilder = call.builder()
+    val maxCnfNodeCount = FlinkRelOptUtil.getMaxCnfNodeCount(scan)
+    val (partitionPredicate, nonPartitionPredicate) =
+      RexNodeExtractor.extractPartitionPredicates(
+        filter.getCondition,
+        maxCnfNodeCount,
+        inputFieldType.getFieldNames.toList.toArray,
+        relBuilder.getRexBuilder,
+        partitionFieldNames
+      )
+
+    if (partitionPredicate.isAlwaysTrue) {
+      // no partition predicates in filter
+      return
+    }
+
+    val finalPartitionPredicate = adjustPartitionPredicate(
+      inputFieldType.getFieldNames.toList.toArray,
+      partitionFieldNames,
+      partitionPredicate
+    )
+    val partitionFieldTypes = partitionFieldNames.map { name =>
+      val index = inputFieldType.getFieldNames.indexOf(name)
+      require(index >= 0, s"$name is not found in ${inputFieldType.getFieldNames.mkString(", ")}")
+      inputFieldType.getFieldList.get(index).getType
+    }.map(FlinkTypeFactory.toLogicalType)
+
+    val allPartitions = tableSource.getPartitions
+    val remainingPartitions = PartitionPruner.prunePartitions(
+      call.getPlanner.getContext.asInstanceOf[FlinkContext].getTableConfig,
+      partitionFieldNames,
+      partitionFieldTypes,
+      allPartitions,
+      finalPartitionPredicate
+    )
+
+    val newTableSource = 

[GitHub] [flink] flinkbot edited a comment on issue #9056: [FLINK-13185] [sql-parser][table-planner] Bump Calcite dependency to 1.20.0 in sql parser & flink planner

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #9056: [FLINK-13185] 
[sql-parser][table-planner] Bump Calcite dependency to 1.20.0 in sql parser & 
flink planner
URL: https://github.com/apache/flink/pull/9056#issuecomment-510405716
 
 
   ## CI report:
   
   * e73503e4d0c3a07cc440bff8b0d62eefcb4834ec : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118711210)
   




[GitHub] [flink] flinkbot commented on issue #9080: [FLINK-13115] [table-planner-blink] Introduce planner rule to support partition pruning for PartitionableTableSource

2019-07-11 Thread GitBox
flinkbot commented on issue #9080: [FLINK-13115] [table-planner-blink] 
Introduce planner rule to support partition pruning for PartitionableTableSource
URL: https://github.com/apache/flink/pull/9080#issuecomment-510421703
 
 
   ## CI report:
   
   * 0031874f16da8b96d90c5b7c2392d792b372cfd8 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118749997)
   




[jira] [Commented] (FLINK-11654) Multiple transactional KafkaProducers writing to same cluster have clashing transaction IDs

2019-07-11 Thread Jiangjie Qin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882806#comment-16882806
 ] 

Jiangjie Qin commented on FLINK-11654:
--

In practice, Kafka transactional.ids are typically assigned in the following way:
 # Users specify something for each application to define its _transactional.id_ 
space; this guarantees no conflicting _transactional.ids_ between applications.
 # Each application assigns its producers _transactional.ids_ within its own 
space.

For Flink, the JID cannot be used because it may change across two runs of the 
same job. However, it seems {{JobName}} might be a reasonable option because it 
should be unique for each Job / Application, and it's not supposed to change. 
Changing the job name effectively makes it another application, and the 
exactly-once guarantee for the previous application is no longer applicable.

[~jkreileder] the case you saw producers get fenced in a single job might be 
caused by FLINK-10455.
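
A pure-Java sketch of the namespacing idea described above (the helper name and 
its arguments are hypothetical illustrations, not Flink's actual 
`TransactionalIdsGenerator` API):

```java
// Sketch: include a per-application prefix in each transactional.id so that
// producers from different jobs/applications can never clash.
public class TransactionalIdSketch {
    // appSpace is assumed stable per application (e.g. the job name);
    // operatorId and subtask disambiguate producers within one job.
    static String transactionalId(String appSpace, String operatorId, int subtask) {
        return appSpace + "-" + operatorId + "-" + subtask;
    }

    public static void main(String[] args) {
        // identically named sinks in two different jobs no longer collide
        String a = transactionalId("jobA", "kafka-sink", 0);
        String b = transactionalId("jobB", "kafka-sink", 0);
        System.out.println(a.equals(b)); // false
    }
}
```

With only the task name and operator ID in the prefix (as in the diff quoted 
below), two jobs with identically named sinks produce identical ids, which is 
what triggers the fencing.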

> Multiple transactional KafkaProducers writing to same cluster have clashing 
> transaction IDs
> ---
>
> Key: FLINK-11654
> URL: https://issues.apache.org/jira/browse/FLINK-11654
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.7.1
>Reporter: Jürgen Kreileder
>Priority: Major
> Fix For: 1.9.0
>
>
> We run multiple jobs on a cluster which write a lot to the same Kafka topic 
> from identically named sinks. When EXACTLY_ONCE semantic is enabled for the 
> KafkaProducers we run into a lot of ProducerFencedExceptions and all jobs go 
> into a restart cycle.
> Example exception from the Kafka log:
>  
> {code:java}
> [2019-02-18 18:05:28,485] ERROR [ReplicaManager broker=1] Error processing 
> append operation on partition finding-commands-dev-1-0 
> (kafka.server.ReplicaManager)
> org.apache.kafka.common.errors.ProducerFencedException: Producer's epoch is 
> no longer valid. There is probably another producer with a newer epoch. 483 
> (request epoch), 484 (server epoch)
> {code}
> The reason for this is the way FlinkKafkaProducer initializes the 
> TransactionalIdsGenerator:
> The IDs are only guaranteed to be unique for a single Job. But they can clash 
> between different Jobs (and Clusters).
>  
>  
> {code:java}
> --- 
> a/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java
> +++ 
> b/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java
> @@ -819,6 +819,7 @@ public class FlinkKafkaProducer
>                 nextTransactionalIdHintState = 
> context.getOperatorStateStore().getUnionListState(
>                         NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR);
>                 transactionalIdsGenerator = new TransactionalIdsGenerator(
> + // the prefix probably should include job id and maybe cluster id
>                         getRuntimeContext().getTaskName() + "-" + 
> ((StreamingRuntimeContext) getRuntimeContext()).getOperatorUniqueID(),
>                         getRuntimeContext().getIndexOfThisSubtask(),
>                         
> getRuntimeContext().getNumberOfParallelSubtasks(),{code}
>  
>  





[jira] [Commented] (FLINK-12054) HBaseConnectorITCase fails on Java 9

2019-07-11 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882819#comment-16882819
 ] 

Yu Li commented on FLINK-12054:
---

Adding more background here: the HBase project decided to only support JDK 
releases marked as LTS, so JDK 9 and 10 are not supported, and the work of 
supporting JDK 11 is still in progress (HBASE-21110). More information can be 
found in the [hbase 
refguide|http://hbase.apache.org/book.html#basic.prerequisites].

> HBaseConnectorITCase fails on Java 9
> 
>
> Key: FLINK-12054
> URL: https://issues.apache.org/jira/browse/FLINK-12054
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Connectors / HBase
>Reporter: Chesnay Schepler
>Assignee: leesf
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> An issue in hbase.
> {code}
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 21.83 sec <<< 
> FAILURE! - in org.apache.flink.addons.hbase.HBaseConnectorITCase
> org.apache.flink.addons.hbase.HBaseConnectorITCase  Time elapsed: 21.829 sec  
> <<< FAILURE!
> java.lang.AssertionError: We should get a URLClassLoader
>   at 
> org.apache.flink.addons.hbase.HBaseConnectorITCase.activateHBaseCluster(HBaseConnectorITCase.java:81)
> {code}





[jira] [Commented] (FLINK-12858) Potentially not properly working Flink job in case of stop-with-savepoint failure

2019-07-11 Thread Alex (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882826#comment-16882826
 ] 

Alex commented on FLINK-12858:
--

Hi [~guanghui], by non-source tasks I mean tasks that are not start nodes in a 
Flink job execution graph; that is, nodes that have incoming edges.

> Potentially not properly working Flink job in case of stop-with-savepoint 
> failure
> -
>
> Key: FLINK-12858
> URL: https://issues.apache.org/jira/browse/FLINK-12858
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.9.0
>Reporter: Alex
>Assignee: Alex
>Priority: Blocker
>
> Current implementation of stop-with-savepoint (FLINK-11458) would lock the 
> thread (on {{syncSavepointLatch}}) that carries 
> {{StreamTask.performCheckpoint()}}. For non-source tasks, this thread is 
> implied to be the task's main thread (stop-with-savepoint deliberately stops 
> any activity in the task's main thread).
> Unlocking happens either when the task is cancelled or when the corresponding 
> checkpoint is acknowledged.
> It's possible that other downstream tasks of the same Flink job "soft" fail 
> the checkpoint/savepoint for various reasons (for example, due to max 
> buffered bytes, see {{BarrierBuffer.checkSizeLimit()}}). In such a case, the 
> checkpoint abortion would be notified to the JM. But it looks like the 
> checkpoint coordinator would handle such an abortion as usual and assume that 
> the Flink job continues running.





[GitHub] [flink] flinkbot commented on issue #9083: [FLINK-13116] [table-planner-blink] Supports catalog statistics in blink planner

2019-07-11 Thread GitBox
flinkbot commented on issue #9083: [FLINK-13116] [table-planner-blink] Supports 
catalog statistics in blink planner
URL: https://github.com/apache/flink/pull/9083#issuecomment-510432442
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Created] (FLINK-13213) MinIdleStateRetentionTime/MaxIdleStateRetentionTime in TableConfig will be removed after call toAppendStream/toRetractStream without QueryConfig parameters

2019-07-11 Thread Jing Zhang (JIRA)
Jing Zhang created FLINK-13213:
--

 Summary: MinIdleStateRetentionTime/MaxIdleStateRetentionTime in 
TableConfig will be removed after call toAppendStream/toRetractStream without 
QueryConfig parameters
 Key: FLINK-13213
 URL: https://issues.apache.org/jira/browse/FLINK-13213
 Project: Flink
  Issue Type: Task
  Components: Table SQL / API
Reporter: Jing Zhang
Assignee: Jing Zhang


There are two `toAppendStream` methods in `StreamTableEnvironment`:
1. def toAppendStream[T: TypeInformation](table: Table): DataStream[T]
2. def toAppendStream[T: TypeInformation](table: Table, queryConfig: 
StreamQueryConfig): DataStream[T]

After converting a `Table` to a `DataStream` by calling the first method or 
`toRetractStream`, the MinIdleStateRetentionTime/MaxIdleStateRetentionTime in 
TableConfig will be removed.






[GitHub] [flink] flinkbot edited a comment on issue #8920: [FLINK-13024][table] integrate FunctionCatalog with CatalogManager

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #8920: [FLINK-13024][table] integrate 
FunctionCatalog with CatalogManager
URL: https://github.com/apache/flink/pull/8920#issuecomment-510405859
 
 
   ## CI report:
   
   * 4afedee15460ac0f1f2945ca657581c538ddfc06 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118723073)
   * f639acfa778cc8e31581107f27e3cf0139e3a98d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/118744491)
   




[jira] [Updated] (FLINK-13214) Hive connector is missing jdk.tools exclusion for Java 9

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-13214:
-
Summary: Hive connector is missing jdk.tools exclusion for Java 9  (was: 
Hive connectors does not compile with Java 9)

> Hive connector is missing jdk.tools exclusion for Java 9
> 
>
> Key: FLINK-13214
> URL: https://issues.apache.org/jira/browse/FLINK-13214
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.9.0
>
>
> {code}
> [ERROR] Failed to execute goal on project flink-connector-hive_2.12: Could 
> not resolve dependencies for project 
> org.apache.flink:flink-connector-hive_2.12:jar:1.9-SNAPSHOT: Could not find 
> artifact jdk.tools:jdk.tools:jar:1.7 at specified path 
> C:\Dev\Java\9/../lib/tools.jar -> [Help 1]
> {code}
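
An exclusion along the following lines is the usual fix for this class of 
error (a sketch only; which transitive dependency actually drags `jdk.tools` 
into the Hive connector is an assumption here, not taken from the actual fix):

```xml
<!-- Hypothetical pom.xml fragment: exclude the JDK-8-only tools.jar
     system dependency so dependency resolution succeeds on Java 9+,
     where <java.home>/../lib/tools.jar no longer exists. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>jdk.tools</groupId>
      <artifactId>jdk.tools</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```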





[jira] [Updated] (FLINK-13215) Hive connector does not compile on Java 9

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13215:
---
Labels: pull-request-available  (was: )

> Hive connector does not compile on Java 9
> -
>
> Key: FLINK-13215
> URL: https://issues.apache.org/jira/browse/FLINK-13215
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
> (default-testCompile) on project flink-connector-hive_2.11: Compilation 
> failure
> [ERROR] 
> /C:/Dev/Repos/flink/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/batch/connectors/hive/FlinkStandaloneHiveRunner.java:[56,15]
>  package sun.net.www is not visible
> {code}





[GitHub] [flink] lirui-apache commented on issue #9039: [FLINK-13170][table-planner] Planner should get table factory from ca…

2019-07-11 Thread GitBox
lirui-apache commented on issue #9039: [FLINK-13170][table-planner] Planner 
should get table factory from ca…
URL: https://github.com/apache/flink/pull/9039#issuecomment-510442123
 
 
   @godfreyhe thanks for your suggestions. I have updated `StreamPlanner` and 
added a test for blink planner in `SinkTest`. Please have a look.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on issue #9006: [FLINK-13107][table-planner-blink] Copy TableApi IT and UT to Blink planner

2019-07-11 Thread GitBox
KurtYoung commented on issue #9006: [FLINK-13107][table-planner-blink] Copy 
TableApi IT and UT to Blink planner
URL: https://github.com/apache/flink/pull/9006#issuecomment-510444333
 
 
   travis passed here: https://travis-ci.org/beyond1920/flink/builds/557233740
   I'm merging this


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol commented on a change in pull request #9023: [FLINK-13154][docs] Fix broken links of web docs

2019-07-11 Thread GitBox
zentol commented on a change in pull request #9023: [FLINK-13154][docs] Fix 
broken links of web docs
URL: https://github.com/apache/flink/pull/9023#discussion_r302490660
 
 

 ##
 File path: docs/dev/table/sqlClient.md
 ##
 @@ -456,8 +456,6 @@ catalogs:
 
 Currently Flink supports two types of catalog - `FlinkInMemoryCatalog` and 
`HiveCatalog`.
 
-For more information about catalog, see [Catalogs]({{ site.baseurl 
}}/dev/table/catalog.html).
 
 Review comment:
   If it's gonna be a while then we should merge the PR as is and add this line 
in #8976.




[GitHub] [flink] twalthr commented on a change in pull request #9081: [FLINK-13209] Following FLINK-12951: Remove TableEnvironment#sql and …

2019-07-11 Thread GitBox
twalthr commented on a change in pull request #9081: [FLINK-13209] Following 
FLINK-12951: Remove TableEnvironment#sql and …
URL: https://github.com/apache/flink/pull/9081#discussion_r302490219
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/TableEnvironment.java
 ##
 @@ -430,11 +362,51 @@ static TableEnvironment create(EnvironmentSettings 
settings) {
 * }
 * 
 *
-* @deprecated Use {@link #sql(String)}.
+* A DDL statement can also execute to create a table:
 
 Review comment:
   Rephrase: "DDL statements can be executed to create tables:"




[GitHub] [flink] twalthr commented on a change in pull request #9081: [FLINK-13209] Following FLINK-12951: Remove TableEnvironment#sql and …

2019-07-11 Thread GitBox
twalthr commented on a change in pull request #9081: [FLINK-13209] Following 
FLINK-12951: Remove TableEnvironment#sql and …
URL: https://github.com/apache/flink/pull/9081#discussion_r302489743
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/TableEnvironment.java
 ##
 @@ -430,11 +362,51 @@ static TableEnvironment create(EnvironmentSettings 
settings) {
 * }
 * 
 *
-* @deprecated Use {@link #sql(String)}.
+* A DDL statement can also execute to create a table:
+* For example, the below DDL statement would create a CSV table named 
`tbl1`
+* into the current catalog:
+* 
+*create table tbl1(
+*  a int,
+*  b bigint,
+*  c varchar
+*) with (
+*  connector = 'csv',
+*  csv.path = 'xxx'
 
 Review comment:
   I agree with Jark. If we show examples, then examples that also work in the 
current version.




[jira] [Created] (FLINK-13219) Hive connector fails hadoop 2.4.1 builds

2019-07-11 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-13219:


 Summary: Hive connector fails hadoop 2.4.1 builds
 Key: FLINK-13219
 URL: https://issues.apache.org/jira/browse/FLINK-13219
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.9.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.9.0


The hive connector does not work with hadoop 2.4, but the tests are still run 
in the corresponding cron profile.

https://travis-ci.org/apache/flink/jobs/555723021

We should add a profile for skipping the hive tests that we enable for these 
profiles.





[GitHub] [flink] tillrohrmann commented on issue #9058: [FLINK-13166] Add support for batch slot requests to SlotPoolImpl

2019-07-11 Thread GitBox
tillrohrmann commented on issue #9058: [FLINK-13166] Add support for batch slot 
requests to SlotPoolImpl
URL: https://github.com/apache/flink/pull/9058#issuecomment-510453444
 
 
   Thanks for the review @StephanEwen. I've addressed your comments. Merging 
once Travis gives green light.




[GitHub] [flink] flinkbot commented on issue #9086: [FLINK-13219][hive] Disable tests for hadoop 2.4 profile

2019-07-11 Thread GitBox
flinkbot commented on issue #9086: [FLINK-13219][hive] Disable tests for hadoop 
2.4 profile
URL: https://github.com/apache/flink/pull/9086#issuecomment-510453531
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Reopened] (FLINK-13123) Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation

2019-07-11 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reopened FLINK-13123:


> Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation
> --
>
> Key: FLINK-13123
> URL: https://issues.apache.org/jira/browse/FLINK-13123
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client, Documentation
>Affects Versions: 1.9.0
>Reporter: Konstantin Knauf
>Assignee: Konstantin Knauf
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the REST API and CLI around stopping and cancelling jobs are not 
> aligned in terms of terminology and the differences between {{cancel}} and 
> {{job}} are not as clear as they could be.
> I would like to make the following changes to the CLI: 
> * add deprecation warning for {{cancel -s}} command and redirect users to  
> {{stop}}
> * rename {{-s}} of {{stop}} command to {{-p}} for savepoint location. 
> Emphasize that this is optional, as a savepoint is taken in any case
> I would like to make the following changes to the REST API: 
> * Rename {{stop-with-savepoint}} to {{stop}} 
> * Rename "endOfEventTime" to "drain" in accordance with the CLI
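Under the proposal above, CLI usage would look roughly as follows (a hedged sketch; the job ID and savepoint path are placeholders, and the flag names assume the renames are merged as described):

```shell
# Gracefully stop a job, taking a savepoint at the configured default location
bin/flink stop <jobId>

# Explicitly choose the savepoint target path with the renamed -p flag
bin/flink stop -p hdfs:///flink/savepoints <jobId>
```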





[jira] [Updated] (FLINK-13123) Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation

2019-07-11 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-13123:
---
Release Note: From now on, the {{stop}} command with no further arguments 
(gracefully) stops the job with a savepoint at the default savepoint location, 
as configured by the user. This changes the semantics of the pre-existing stop 
command, as agreed in the mailing list.  (was: From now on, the {{stop}} 
command with no further arguments (gracefully) stops the job with a savepoint 
at the default savepoint location, as configured by the user. This changes the 
semantics of the pre-existing stop command, as agreed in the mailing list.

Merged with dafd48893c1adad22ad3ffd7085b6ae481c6cfc7)

> Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation
> --
>
> Key: FLINK-13123
> URL: https://issues.apache.org/jira/browse/FLINK-13123
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client, Documentation
>Affects Versions: 1.9.0
>Reporter: Konstantin Knauf
>Assignee: Konstantin Knauf
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the REST API and CLI around stopping and cancelling jobs are not 
> aligned in terms of terminology and the differences between {{cancel}} and 
> {{job}} are not as clear as they could be.
> I would like to make the following changes to the CLI: 
> * add deprecation warning for {{cancel -s}} command and redirect users to  
> {{stop}}
> * rename {{-s}} of {{stop}} command to {{-p}} for savepoint location. 
> Emphasize that this is optional, as a savepoint is taken in any case
> I would like to make the following changes to the REST API: 
> * Rename {{stop-with-savepoint}} to {{stop}} 
> * Rename "endOfEventTime" to "drain" in accordance with the CLI





[jira] [Closed] (FLINK-13123) Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation

2019-07-11 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas closed FLINK-13123.
--
  Resolution: Fixed
Release Note: 
From now on, the {{stop}} command with no further arguments (gracefully) stops 
the job with a savepoint at the default savepoint location, as configured by 
the user. This changes the semantics of the pre-existing stop command, as 
agreed in the mailing list.

Merged with dafd48893c1adad22ad3ffd7085b6ae481c6cfc7

  was:From now on, the {{stop}} command with no further arguments (gracefully) 
stops the job with a savepoint at the default savepoint location, as configured 
by the user. This changes the semantics of the pre-existing stop command, as 
agreed in the mailing list.


> Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation
> --
>
> Key: FLINK-13123
> URL: https://issues.apache.org/jira/browse/FLINK-13123
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client, Documentation
>Affects Versions: 1.9.0
>Reporter: Konstantin Knauf
>Assignee: Konstantin Knauf
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the REST API and CLI around stopping and cancelling jobs are not 
> aligned in terms of terminology and the differences between {{cancel}} and 
> {{job}} are not as clear as they could be.
> I would like to make the following changes to the CLI: 
> * add deprecation warning for {{cancel -s}} command and redirect users to  
> {{stop}}
> * rename {{-s}} of {{stop}} command to {{-p}} for savepoint location. 
> Emphasize that this is optional, as a savepoint is taken in any case
> I would like to make the following changes to the REST API: 
> * Rename {{stop-with-savepoint}} to {{stop}} 
> * Rename "endOfEventTime" to "drain" in accordance with the CLI





[jira] [Reopened] (FLINK-13123) Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation

2019-07-11 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reopened FLINK-13123:


> Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation
> --
>
> Key: FLINK-13123
> URL: https://issues.apache.org/jira/browse/FLINK-13123
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client, Documentation
>Affects Versions: 1.9.0
>Reporter: Konstantin Knauf
>Assignee: Konstantin Knauf
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the REST API and CLI around stopping and cancelling jobs are not 
> aligned in terms of terminology and the differences between {{cancel}} and 
> {{job}} are not as clear as they could be.
> I would like to make the following changes to the CLI: 
> * add deprecation warning for {{cancel -s}} command and redirect users to  
> {{stop}}
> * rename {{-s}} of {{stop}} command to {{-p}} for savepoint location. 
> Emphasize that this is optional, as a savepoint is taken in any case
> I would like to make the following changes to the REST API: 
> * Rename {{stop-with-savepoint}} to {{stop}} 
> * Rename "endOfEventTime" to "drain" in accordance with the CLI





[jira] [Closed] (FLINK-13123) Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation

2019-07-11 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas closed FLINK-13123.
--
   Resolution: Fixed
Fix Version/s: 1.9.0
 Release Note: From now on, the {{stop}} command with no further arguments 
(gracefully) stops the job with a savepoint at the default savepoint location, 
as configured by the user. This changes the semantics of the pre-existing stop 
command, as agreed in the mailing list.

> Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation
> --
>
> Key: FLINK-13123
> URL: https://issues.apache.org/jira/browse/FLINK-13123
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client, Documentation
>Affects Versions: 1.9.0
>Reporter: Konstantin Knauf
>Assignee: Konstantin Knauf
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the REST API and CLI around stopping and cancelling jobs are not 
> aligned in terms of terminology and the differences between {{cancel}} and 
> {{job}} are not as clear as they could be.
> I would like to make the following changes to the CLI: 
> * add deprecation warning for {{cancel -s}} command and redirect users to  
> {{stop}}
> * rename {{-s}} of {{stop}} command to {{-p}} for savepoint location. 
> Emphasize that this is optional, as a savepoint is taken in any case
> I would like to make the following changes to the REST API: 
> * Rename {{stop-with-savepoint}} to {{stop}} 
> * Rename "endOfEventTime" to "drain" in accordance with the CLI





[GitHub] [flink] kl0u closed pull request #9030: [FLINK-13123] Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation

2019-07-11 Thread GitBox
kl0u closed pull request #9030: [FLINK-13123] Align Stop/Cancel Commands in CLI 
and REST Interface and Improve Documentation
URL: https://github.com/apache/flink/pull/9030
 
 
   




[GitHub] [flink] kl0u commented on issue #9030: [FLINK-13123] Align Stop/Cancel Commands in CLI and REST Interface and Improve Documentation

2019-07-11 Thread GitBox
kl0u commented on issue #9030: [FLINK-13123] Align Stop/Cancel Commands in CLI 
and REST Interface and Improve Documentation
URL: https://github.com/apache/flink/pull/9030#issuecomment-510460310
 
 
   Merged.




[jira] [Commented] (FLINK-13206) modify 'use database' syntax in SQL CLI to be consistent with standard sql

2019-07-11 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882896#comment-16882896
 ] 

Timo Walther commented on FLINK-13206:
--

This is a tricky one because the current catalog API defines "catalog = SQL 
database" and "database = SQL schema". Right [~xuefuz]? 

> modify 'use database' syntax in SQL CLI to be consistent with standard sql
> --
>
> Key: FLINK-13206
> URL: https://issues.apache.org/jira/browse/FLINK-13206
> Project: Flink
>  Issue Type: Sub-task
>Reporter: zjuwangg
>Assignee: zjuwangg
>Priority: Major
>






[GitHub] [flink] azagrebin commented on a change in pull request #9038: [FLINK-13169][tests][coordination] IT test for fine-grained recovery (task executor failures)

2019-07-11 Thread GitBox
azagrebin commented on a change in pull request #9038: 
[FLINK-13169][tests][coordination] IT test for fine-grained recovery (task 
executor failures)
URL: https://github.com/apache/flink/pull/9038#discussion_r302510967
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/recovery/BatchFineGrainedRecoveryITCase.java
 ##
 @@ -81,13 +106,19 @@ public void setup() throws Exception {
 
miniCluster = new TestingMiniCluster(
new Builder()
-   .setNumTaskManagers(MAP_NUMBER)
+   .setNumTaskManagers(1)
 
 Review comment:
   Yes, it will. The idea is to test the backtracking with failing the finished 
producers. The test with losing only some partitions will be more complicated 
but more general, of course. We could do it as a followup. Although I am not 
sure about its value atm.




[GitHub] [flink] flinkbot edited a comment on issue #8841: [FLINK-12765][coordinator] Bookkeeping of available resources of allocated slots in SlotPool

2019-07-11 Thread GitBox
flinkbot edited a comment on issue #8841: [FLINK-12765][coordinator] 
Bookkeeping of available resources of allocated slots in SlotPool
URL: https://github.com/apache/flink/pull/8841#issuecomment-510405743
 
 
   ## CI report:
   
   * 4106cd9017d5a20ae427c74b627d6c574c02cc40 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118728314)
   




[jira] [Closed] (FLINK-12447) Bump required Maven version to 3.1.1 (from 3.0.3)

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-12447.

Resolution: Fixed

> Bump required Maven version to 3.1.1 (from 3.0.3)
> -
>
> Key: FLINK-12447
> URL: https://issues.apache.org/jira/browse/FLINK-12447
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.9.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> See 
> https://lists.apache.org/thread.html/57dec7c338eb95247b7a05ded371f4a78420a964045ea9557d501c3f@%3Cdev.flink.apache.org%3E
>  
> The frontend-maven-plugin requires at least Maven 3.1.0.
> I propose to bump the required Maven version to 3.1.1.





[jira] [Updated] (FLINK-12935) package flink-connector-hive into flink distribution

2019-07-11 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-12935:
-
Fix Version/s: (was: 1.9.0)
   1.10

> package flink-connector-hive into flink distribution
> 
>
> Key: FLINK-12935
> URL: https://issues.apache.org/jira/browse/FLINK-12935
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[GitHub] [flink] JingsongLi commented on issue #9036: [FLINK-13112][table-planner-blink] Support LocalZonedTimestampType in blink

2019-07-11 Thread GitBox
JingsongLi commented on issue #9036: [FLINK-13112][table-planner-blink] Support 
LocalZonedTimestampType in blink
URL: https://github.com/apache/flink/pull/9036#issuecomment-510417229
 
 
   travis in: https://travis-ci.org/JingsongLi/flink/builds/557200888




[GitHub] [flink] KurtYoung commented on issue #9081: [FLINK-13209] Following FLINK-12951: Remove TableEnvironment#sql and …

2019-07-11 Thread GitBox
KurtYoung commented on issue #9081: [FLINK-13209] Following FLINK-12951: Remove 
TableEnvironment#sql and …
URL: https://github.com/apache/flink/pull/9081#issuecomment-510422437
 
 
   cc @twalthr 



