[GitHub] [flink] wuchong commented on a change in pull request #10563: [FLINK-15232][table] Message of NoMatchingTableFactoryException should tell users what's wrong

2019-12-16 Thread GitBox
wuchong commented on a change in pull request #10563: [FLINK-15232][table] 
Message of NoMatchingTableFactoryException should tell users what's wrong
URL: https://github.com/apache/flink/pull/10563#discussion_r358639663
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/NoMatchingTableFactoryException.java
 ##
 @@ -41,13 +43,15 @@
 
public NoMatchingTableFactoryException(
String message,
+   String matchCandidatesMessage,
 
 Review comment:
   Add `@Nullable` annotation to `matchCandidatesMessage` parameter.
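   For illustration, a minimal, hypothetical sketch of the suggested change (not the
   actual PR code): the parameter and the backing field are marked nullable, and
   `getMessage()` guards against the absent value.

   ```java
   // Hypothetical sketch, not the PR's code: a nullable "candidates" message
   // that is only appended when present.
   import javax.annotation.Nullable;

   public class ExampleException extends RuntimeException {

       @Nullable
       private final String matchCandidatesMessage;

       public ExampleException(String message, @Nullable String matchCandidatesMessage) {
           super(message);
           this.matchCandidatesMessage = matchCandidatesMessage;
       }

       @Override
       public String getMessage() {
           // Guard against null before appending the optional part.
           String candidates = matchCandidatesMessage == null
               ? ""
               : "The matched candidates:\n" + matchCandidatesMessage + "\n\n";
           return super.getMessage() + "\n\n" + candidates;
       }
   }
   ```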




[GitHub] [flink] wuchong commented on a change in pull request #10563: [FLINK-15232][table] Message of NoMatchingTableFactoryException should tell users what's wrong

2019-12-16 Thread GitBox
wuchong commented on a change in pull request #10563: [FLINK-15232][table] 
Message of NoMatchingTableFactoryException should tell users what's wrong
URL: https://github.com/apache/flink/pull/10563#discussion_r358640517
 
 

 ##
 File path: 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/factories/TestTableSinkFactory.java
 ##
 @@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.factories;
+
+import org.apache.flink.table.sinks.TableSink;
+import org.apache.flink.types.Row;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_PROPERTY_VERSION;
+import static 
org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_TYPE;
+import static 
org.apache.flink.table.descriptors.FormatDescriptorValidator.FORMAT_PROPERTY_VERSION;
+import static 
org.apache.flink.table.descriptors.FormatDescriptorValidator.FORMAT_TYPE;
+
+/**
+ * Test table sink factory.
+ */
+public class TestTableSinkFactory implements TableFactory, 
TableSinkFactory {
 
 Review comment:
   Can we remove `TestTableSinkFactory` in flink-table-planner now?




[GitHub] [flink] wuchong commented on a change in pull request #10563: [FLINK-15232][table] Message of NoMatchingTableFactoryException should tell users what's wrong

2019-12-16 Thread GitBox
wuchong commented on a change in pull request #10563: [FLINK-15232][table] 
Message of NoMatchingTableFactoryException should tell users what's wrong
URL: https://github.com/apache/flink/pull/10563#discussion_r358641289
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/NoMatchingTableFactoryException.java
 ##
 @@ -58,24 +62,32 @@ public NoMatchingTableFactoryException(
Class factoryClass,
List factories,
Map properties) {
+   this(message, null, factoryClass, factories, properties, null);
+   }
 
-   this(message, factoryClass, factories, properties, null);
+   public NoMatchingTableFactoryException(
+   String message,
+   String matchCandidatesMessage,
+   Class factoryClass,
+   List factories,
+   Map properties) {
+   this(message, matchCandidatesMessage, factoryClass, factories, 
properties, null);
}
 
@Override
public String getMessage() {
+   String matchCandidatesString = matchCandidatesMessage == null ?
+   "" :
+   "The match candidates:\n" + matchCandidatesMessage + 
"\n\n";
 
 Review comment:
   `match` -> `matched` ?
   




[GitHub] [flink] wuchong commented on a change in pull request #10563: [FLINK-15232][table] Message of NoMatchingTableFactoryException should tell users what's wrong

2019-12-16 Thread GitBox
wuchong commented on a change in pull request #10563: [FLINK-15232][table] 
Message of NoMatchingTableFactoryException should tell users what's wrong
URL: https://github.com/apache/flink/pull/10563#discussion_r358639769
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/NoMatchingTableFactoryException.java
 ##
 @@ -58,24 +62,32 @@ public NoMatchingTableFactoryException(
Class factoryClass,
List factories,
Map properties) {
+   this(message, null, factoryClass, factories, properties, null);
+   }
 
-   this(message, factoryClass, factories, properties, null);
+   public NoMatchingTableFactoryException(
+   String message,
+   String matchCandidatesMessage,
+   Class factoryClass,
+   List factories,
+   Map properties) {
+   this(message, matchCandidatesMessage, factoryClass, factories, 
properties, null);
}
 
@Override
public String getMessage() {
+   String matchCandidatesString = matchCandidatesMessage == null ?
+   "" :
+   "The match candidates:\n" + matchCandidatesMessage + 
"\n\n";
return String.format(
"Could not find a suitable table factory for '%s' 
in\nthe classpath.\n\n" +
-   "Reason: %s\n\n" +
+   "Reason: %s\n\n" + matchCandidatesString +
 
 Review comment:
   Use the `String.format` pattern:
   
   ```
   "Reason: %s\n\n%s"
   ```
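   A hypothetical, self-contained sketch of the suggested pattern (the names and
   strings are illustrative, not taken from the PR): the optional candidates text
   becomes a third format argument instead of being concatenated into the pattern.

   ```java
   public class FormatExample {
       public static void main(String[] args) {
           String factoryName = "org.example.MyTableSinkFactory";   // illustrative value
           String reason = "Required context properties mismatch."; // illustrative value
           String matchCandidatesMessage = null;                    // may legitimately be null

           // Empty string when absent, so one format pattern works in both cases.
           String matchCandidatesString = matchCandidatesMessage == null
               ? ""
               : "The matched candidates:\n" + matchCandidatesMessage + "\n\n";

           String msg = String.format(
               "Could not find a suitable table factory for '%s' in\nthe classpath.\n\n"
                   + "Reason: %s\n\n%s",
               factoryName, reason, matchCandidatesString);
           System.out.println(msg);
       }
   }
   ```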




[GitHub] [flink] wuchong commented on a change in pull request #10563: [FLINK-15232][table] Message of NoMatchingTableFactoryException should tell users what's wrong

2019-12-16 Thread GitBox
wuchong commented on a change in pull request #10563: [FLINK-15232][table] 
Message of NoMatchingTableFactoryException should tell users what's wrong
URL: https://github.com/apache/flink/pull/10563#discussion_r358639624
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/NoMatchingTableFactoryException.java
 ##
 @@ -32,6 +32,8 @@
 
// message that indicates the current matching step
private final String message;
+   // message that indicates the best matched factory
+   private final String matchCandidatesMessage;
 
 Review comment:
   Add `@Nullable` annotation to `matchCandidatesMessage`.




[GitHub] [flink] wuchong commented on a change in pull request #10563: [FLINK-15232][table] Message of NoMatchingTableFactoryException should tell users what's wrong

2019-12-16 Thread GitBox
wuchong commented on a change in pull request #10563: [FLINK-15232][table] 
Message of NoMatchingTableFactoryException should tell users what's wrong
URL: https://github.com/apache/flink/pull/10563#discussion_r358643641
 
 

 ##
 File path: 
flink-table/flink-table-common/src/test/resources/log4j-test.properties
 ##
 @@ -0,0 +1,27 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Set root logger level to OFF to not flood build logs
+# set manually to INFO for debugging purposes
+log4j.rootLogger=OFF, testlogger
 
 Review comment:
   Are log4j and logback configuration files required? 




[jira] [Updated] (FLINK-15289) Run sql appear error of "Zero-length character strings have no serializable string representation".

2019-12-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15289:
-
Fix Version/s: 1.10.0

> Run sql appear error of "Zero-length character strings have no serializable 
> string representation".
> ---
>
> Key: FLINK-15289
> URL: https://issues.apache.org/jira/browse/FLINK-15289
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Critical
> Fix For: 1.10.0
>
>
> *The sql is:*
>  CREATE TABLE `INT8_TBL` (
>  q1 BIGINT,
>  q2 BIGINT
>  ) WITH (
>  'format.field-delimiter'=',',
>  'connector.type'='filesystem',
>  'format.derive-schema'='true',
>  
> 'connector.path'='/defender_test_data/daily_regression_batch_postgres_1.10/test_bigint/sources/INT8_TBL.csv',
>  'format.type'='csv'
>  );
> SELECT '' AS five, q1 AS plus, -q1 AS xm FROM INT8_TBL;
> *The error detail is :*
>  2019-12-17 15:35:07,026 ERROR org.apache.flink.table.client.SqlClient - SQL 
> Client must stop. Unexpected exception. This is a bug. Please consider filing 
> an issue.
>  org.apache.flink.table.api.TableException: Zero-length character strings 
> have no serializable string representation.
>  at 
> org.apache.flink.table.types.logical.CharType.asSerializableString(CharType.java:116)
>  at 
> org.apache.flink.table.descriptors.DescriptorProperties.putTableSchema(DescriptorProperties.java:218)
>  at 
> org.apache.flink.table.catalog.CatalogTableImpl.toProperties(CatalogTableImpl.java:75)
>  at 
> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSink(TableFactoryUtil.java:85)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersistInternal(LocalExecutor.java:688)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersist(LocalExecutor.java:488)
>  at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:601)
>  at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:385)
>  at java.util.Optional.ifPresent(Optional.java:159)
>  at 
> org.apache.flink.table.client.cli.CliClient.submitSQLFile(CliClient.java:271)
>  at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:125)
>  at org.apache.flink.table.client.SqlClient.start(SqlClient.java:104)
>  at org.apache.flink.table.client.SqlClient.main(SqlClient.java:180)
> *The input data is:*
>  123,456
>  123,4567890123456789
>  4567890123456789,123
>  4567890123456789,4567890123456789
>  4567890123456789,-4567890123456789





[jira] [Commented] (FLINK-15289) Run sql appear error of "Zero-length character strings have no serializable string representation".

2019-12-16 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997953#comment-16997953
 ] 

Jingsong Lee commented on FLINK-15289:
--

Related JIRA: https://issues.apache.org/jira/browse/FLINK-12874

CC: [~twalthr]

> Run sql appear error of "Zero-length character strings have no serializable 
> string representation".
> ---
>
> Key: FLINK-15289
> URL: https://issues.apache.org/jira/browse/FLINK-15289
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Critical
>
> *The sql is:*
>  CREATE TABLE `INT8_TBL` (
>  q1 BIGINT,
>  q2 BIGINT
>  ) WITH (
>  'format.field-delimiter'=',',
>  'connector.type'='filesystem',
>  'format.derive-schema'='true',
>  
> 'connector.path'='/defender_test_data/daily_regression_batch_postgres_1.10/test_bigint/sources/INT8_TBL.csv',
>  'format.type'='csv'
>  );
> SELECT '' AS five, q1 AS plus, -q1 AS xm FROM INT8_TBL;
> *The error detail is :*
>  2019-12-17 15:35:07,026 ERROR org.apache.flink.table.client.SqlClient - SQL 
> Client must stop. Unexpected exception. This is a bug. Please consider filing 
> an issue.
>  org.apache.flink.table.api.TableException: Zero-length character strings 
> have no serializable string representation.
>  at 
> org.apache.flink.table.types.logical.CharType.asSerializableString(CharType.java:116)
>  at 
> org.apache.flink.table.descriptors.DescriptorProperties.putTableSchema(DescriptorProperties.java:218)
>  at 
> org.apache.flink.table.catalog.CatalogTableImpl.toProperties(CatalogTableImpl.java:75)
>  at 
> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSink(TableFactoryUtil.java:85)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersistInternal(LocalExecutor.java:688)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersist(LocalExecutor.java:488)
>  at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:601)
>  at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:385)
>  at java.util.Optional.ifPresent(Optional.java:159)
>  at 
> org.apache.flink.table.client.cli.CliClient.submitSQLFile(CliClient.java:271)
>  at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:125)
>  at org.apache.flink.table.client.SqlClient.start(SqlClient.java:104)
>  at org.apache.flink.table.client.SqlClient.main(SqlClient.java:180)
> *The input data is:*
>  123,456
>  123,4567890123456789
>  4567890123456789,123
>  4567890123456789,4567890123456789
>  4567890123456789,-4567890123456789





[jira] [Created] (FLINK-15289) Run sql appear error of "Zero-length character strings have no serializable string representation".

2019-12-16 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15289:
--

 Summary: Run sql appear error of "Zero-length character strings 
have no serializable string representation".
 Key: FLINK-15289
 URL: https://issues.apache.org/jira/browse/FLINK-15289
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
Reporter: xiaojin.wy


*The sql is:*
 CREATE TABLE `INT8_TBL` (
 q1 BIGINT,
 q2 BIGINT
 ) WITH (
 'format.field-delimiter'=',',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 
'connector.path'='/defender_test_data/daily_regression_batch_postgres_1.10/test_bigint/sources/INT8_TBL.csv',
 'format.type'='csv'
 );

SELECT '' AS five, q1 AS plus, -q1 AS xm FROM INT8_TBL;

*The error detail is :*
 2019-12-17 15:35:07,026 ERROR org.apache.flink.table.client.SqlClient - SQL 
Client must stop. Unexpected exception. This is a bug. Please consider filing 
an issue.
 org.apache.flink.table.api.TableException: Zero-length character strings have 
no serializable string representation.
 at 
org.apache.flink.table.types.logical.CharType.asSerializableString(CharType.java:116)
 at 
org.apache.flink.table.descriptors.DescriptorProperties.putTableSchema(DescriptorProperties.java:218)
 at 
org.apache.flink.table.catalog.CatalogTableImpl.toProperties(CatalogTableImpl.java:75)
 at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSink(TableFactoryUtil.java:85)
 at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersistInternal(LocalExecutor.java:688)
 at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersist(LocalExecutor.java:488)
 at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:601)
 at org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:385)
 at java.util.Optional.ifPresent(Optional.java:159)
 at 
org.apache.flink.table.client.cli.CliClient.submitSQLFile(CliClient.java:271)
 at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:125)
 at org.apache.flink.table.client.SqlClient.start(SqlClient.java:104)
 at org.apache.flink.table.client.SqlClient.main(SqlClient.java:180)

*The input data is:*
 123,456
 123,4567890123456789
 4567890123456789,123
 4567890123456789,4567890123456789
 4567890123456789,-4567890123456789





[GitHub] [flink] flinkbot commented on issue #10601: [FLINK-14867][api] Move TextInputFormat & TextOutputFormat to flink-core

2019-12-16 Thread GitBox
flinkbot commented on issue #10601: [FLINK-14867][api] Move TextInputFormat & 
TextOutputFormat to flink-core
URL: https://github.com/apache/flink/pull/10601#issuecomment-566421253
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6d6828d55b015bc9c02ed2d63f247d6db957cf4b (Tue Dec 17 
07:38:19 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-14867) Move TextInputFormat & TextOutputFormat to flink-core

2019-12-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14867:
---
Labels: pull-request-available  (was: )

> Move TextInputFormat & TextOutputFormat to flink-core
> -
>
> Key: FLINK-14867
> URL: https://issues.apache.org/jira/browse/FLINK-14867
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataSet, API / DataStream
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> This is one step to decouple the dependency from flink-streaming-java to 
> flink-java. We already have a package {{o.a.f.core.io}} for these formats.





[GitHub] [flink] TisonKun opened a new pull request #10601: [FLINK-14867][api] Move TextInputFormat & TextOutputFormat to flink-core

2019-12-16 Thread GitBox
TisonKun opened a new pull request #10601: [FLINK-14867][api] Move 
TextInputFormat & TextOutputFormat to flink-core
URL: https://github.com/apache/flink/pull/10601
 
 
   ## What is the purpose of the change
   
   This is one step to decouple the dependency from flink-streaming-java to 
flink-java. We already have a package o.a.f.core.io for these formats.
   
   Different from `ClosureCleaner` these formats aren't used in connectors or 
be annotated as `Public`. However, they are somehow user-facing interfaces. So 
it is a trade-off whether we keep the package or move to a proper named package.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   
   cc @aljoscha 
   




[jira] [Created] (FLINK-15288) Starting jobmanager pod should respect containerized.heap-cutoff

2019-12-16 Thread Yang Wang (Jira)
Yang Wang created FLINK-15288:
-

 Summary: Starting jobmanager pod should respect 
containerized.heap-cutoff
 Key: FLINK-15288
 URL: https://issues.apache.org/jira/browse/FLINK-15288
 Project: Flink
  Issue Type: Sub-task
Reporter: Yang Wang


Starting the jobmanager pod should respect containerized.heap-cutoff. The cutoff 
is used to leave some memory for JVM off-heap usage, for example metaspace and 
thread native memory.

 
{code:java}
public static final ConfigOption<Float> CONTAINERIZED_HEAP_CUTOFF_RATIO = ConfigOptions
	.key("containerized.heap-cutoff-ratio")
	.defaultValue(0.25f)
	.withDeprecatedKeys("yarn.heap-cutoff-ratio")
	.withDescription("Percentage of heap space to remove from containers (YARN / Mesos / Kubernetes), to compensate" +
		" for other JVM memory usage.");
{code}
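For illustration, a simplified, hypothetical sketch of how such a cutoff is 
typically turned into a heap size (the method name is made up; the real logic 
for YARN/Mesos also honors containerized.heap-cutoff-min, which is assumed here 
as a parameter):

{code:java}
// Simplified, hypothetical sketch (not the actual implementation):
// reserve the cutoff for off-heap usage and give the rest to the JVM heap.
static long jvmHeapSizeMB(long containerMemoryMB, float cutoffRatio, long minCutoffMB) {
	long cutoffMB = Math.max(minCutoffMB, (long) (containerMemoryMB * cutoffRatio));
	return containerMemoryMB - cutoffMB;
}

// e.g. jvmHeapSizeMB(1024, 0.25f, 600) == 424, i.e. 600 MB stays reserved for
// metaspace, thread stacks and other off-heap memory.
{code}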





[GitHub] [flink] openinx commented on issue #10236: [FLINK-14703][e2e] Port the Kafka SQL related tests.

2019-12-16 Thread GitBox
openinx commented on issue #10236: [FLINK-14703][e2e] Port the Kafka SQL 
related tests.
URL: https://github.com/apache/flink/pull/10236#issuecomment-566419703
 
 
   Ping @zentol for reviewing, thanks.




[GitHub] [flink] KurtYoung commented on a change in pull request #10562: [FLINK-15231][table-planner-blink] Do not support TIMESTAMP/DECIMAL t…

2019-12-16 Thread GitBox
KurtYoung commented on a change in pull request #10562: 
[FLINK-15231][table-planner-blink] Do not support TIMESTAMP/DECIMAL t…
URL: https://github.com/apache/flink/pull/10562#discussion_r358635640
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/heap/AbstractHeapVector.java
 ##
 @@ -109,18 +107,7 @@ public static AbstractHeapVector 
createHeapColumn(LogicalType fieldType, int max
case TIME_WITHOUT_TIME_ZONE:
return new HeapIntVector(maxRows);
case BIGINT:
-   case TIMESTAMP_WITHOUT_TIME_ZONE:
 
 Review comment:
   looks like both `allocateHeapVectors` and `createHeapColumn` are useless?




[jira] [Updated] (FLINK-14867) Move TextInputFormat & TextOutputFormat to flink-core

2019-12-16 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen updated FLINK-14867:
--
Description: This is one step to decouple the dependency from 
flink-streaming-java to flink-java. We already have a package {{o.a.f.core.io}} 
for these formats.  (was: This is one step to decouple the dependency from 
flink-streaming-java to flink-java. We already have a package {{o.a.f.core.io}} 
for these formats.

However, it possibly suffers from b/w compatibility issue so that we 
unfortunately move it under the same package in flink-core module.

CC [~aljoscha] also do you know how to verify whether or not such compatibility 
issue would happen?)

> Move TextInputFormat & TextOutputFormat to flink-core
> -
>
> Key: FLINK-14867
> URL: https://issues.apache.org/jira/browse/FLINK-14867
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataSet, API / DataStream
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
> Fix For: 1.11.0
>
>
> This is one step to decouple the dependency from flink-streaming-java to 
> flink-java. We already have a package {{o.a.f.core.io}} for these formats.





[jira] [Closed] (FLINK-15245) Flink running in one cluster cannot write data to Hive tables in another cluster

2019-12-16 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-15245.
--
  Assignee: Rui Li
Resolution: Fixed

master: b0783b50cb1610611f9442d456501160322b5028

1.10.0: 34093227ed681f1b0c842c50eb39bc790ee10090

> Flink running in one cluster cannot write data to Hive tables in another 
> cluster
> 
>
> Key: FLINK-15245
> URL: https://issues.apache.org/jira/browse/FLINK-15245
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Launch Flink cluster and write some data to a Hive table in another cluster. 
> The job finishes successfully but data is not really written.





[jira] [Updated] (FLINK-14867) Move TextInputFormat & TextOutputFormat to flink-core

2019-12-16 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen updated FLINK-14867:
--
Fix Version/s: 1.11.0

> Move TextInputFormat & TextOutputFormat to flink-core
> -
>
> Key: FLINK-14867
> URL: https://issues.apache.org/jira/browse/FLINK-14867
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataSet, API / DataStream
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
> Fix For: 1.11.0
>
>
> This is one step to decouple the dependency from flink-streaming-java to 
> flink-java. We already have a package {{o.a.f.core.io}} for these formats.
> However, it possibly suffers from b/w compatibility issue so that we 
> unfortunately move it under the same package in flink-core module.
> CC [~aljoscha] also do you know how to verify whether or not such 
> compatibility issue would happen?





[GitHub] [flink] KurtYoung merged pull request #10568: [FLINK-15245][hive] Flink running in one cluster cannot write data to…

2019-12-16 Thread GitBox
KurtYoung merged pull request #10568: [FLINK-15245][hive] Flink running in one 
cluster cannot write data to…
URL: https://github.com/apache/flink/pull/10568
 
 
   




[GitHub] [flink] KurtYoung commented on issue #10568: [FLINK-15245][hive] Flink running in one cluster cannot write data to…

2019-12-16 Thread GitBox
KurtYoung commented on issue #10568: [FLINK-15245][hive] Flink running in one 
cluster cannot write data to…
URL: https://github.com/apache/flink/pull/10568#issuecomment-566415713
 
 
   Merging..




[jira] [Closed] (FLINK-14797) Remove power mockito from RemoteStreamExecutionEnvironmentTest

2019-12-16 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen closed FLINK-14797.
-
Resolution: Fixed

incidentally fixed by 9356c99b4c3623f96e9df3a628bfbf638ed9bad7

> Remove power mockito from RemoteStreamExecutionEnvironmentTest
> --
>
> Key: FLINK-14797
> URL: https://issues.apache.org/jira/browse/FLINK-14797
> Project: Flink
>  Issue Type: Test
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Commented] (FLINK-13998) Fix ORC test failure with Hive 2.0.x

2019-12-16 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997942#comment-16997942
 ] 

Rui Li commented on FLINK-13998:


Related to HIVE-13163

> Fix ORC test failure with Hive 2.0.x
> 
>
> Key: FLINK-13998
> URL: https://issues.apache.org/jira/browse/FLINK-13998
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
>Priority: Major
> Fix For: 1.11.0
>
>
> Our test is using local file system, and orc in HIve 2.0.x seems having issue 
> with that. 
> {code}
> 06:54:43.156 [ORC_GET_SPLITS #0] ERROR org.apache.hadoop.hive.ql.io.AcidUtils 
> - Failed to get files with ID; using regular API
> java.lang.UnsupportedOperationException: Only supported for DFS; got class 
> org.apache.hadoop.fs.LocalFileSystem
>   at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.ensureDfs(Hadoop23Shims.java:813) 
> ~[hive-exec-2.0.0.jar:2.0.0]
>   at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.listLocatedHdfsStatus(Hadoop23Shims.java:784)
>  ~[hive-exec-2.0.0.jar:2.0.0]
>   at 
> org.apache.hadoop.hive.ql.io.AcidUtils.getAcidState(AcidUtils.java:477) 
> [hive-exec-2.0.0.jar:2.0.0]
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:890)
>  [hive-exec-2.0.0.jar:2.0.0]
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:875)
>  [hive-exec-2.0.0.jar:2.0.0]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_181]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_181]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_181]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_181]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_181]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
> {code}





[GitHub] [flink] flinkbot edited a comment on issue #10590: [FLINK-15205][hive][document] Update hive partition and orc for read_write_hive

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10590: [FLINK-15205][hive][document] Update 
hive partition and orc for read_write_hive
URL: https://github.com/apache/flink/pull/10590#issuecomment-565962731
 
 
   
   ## CI report:
   
   * ac16927be3e1cb361f7e4a5f14c9c1883fbd9903 UNKNOWN
   * a219ec5839a43aeea3ae73d878f2717ac17ef37e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141174082) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3628)
 
   * e6d9cfac4eaa5cdfa5ea140422f2656ce4f49f3d UNKNOWN
   * 296437af9ee679e070a601b0365c4fba23fae198 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141336029) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3654)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-15251) Fabric8FlinkKubeClient doesn't work if ingress has hostname but no IP

2019-12-16 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-15251:
--
Parent: FLINK-9953
Issue Type: Sub-task  (was: Bug)

> Fabric8FlinkKubeClient doesn't work if ingress has hostname but no IP
> -
>
> Key: FLINK-15251
> URL: https://issues.apache.org/jira/browse/FLINK-15251
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0
> Environment: Kubernetes for Docker on MacOS
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In my setup the ingress has a hostname but no IP here: 
> https://github.com/apache/flink/blob/f49e632bb290ded45b320f5d00ceaa1543a6bb1c/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/Fabric8FlinkKubeClient.java#L199
> This means that when I try to use the Kubernetes Executor I will get
> {code}
> Exception in thread "main" java.lang.NullPointerException: Address should not 
> be null.
>   at 
> org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:75)
>   at 
> org.apache.flink.kubernetes.kubeclient.Endpoint.(Endpoint.java:33)
>   at 
> org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.getRestEndpoint(Fabric8FlinkKubeClient.java:209)
>   at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.lambda$createClusterClientProvider$0(KubernetesClusterDescriptor.java:82)
>   at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.retrieve(KubernetesClusterDescriptor.java:106)
>   at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.retrieve(KubernetesClusterDescriptor.java:53)
>   at 
> org.apache.flink.client.deployment.AbstractSessionClusterExecutor.execute(AbstractSessionClusterExecutor.java:60)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1741)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1712)
>   at 
> org.apache.flink.streaming.examples.statemachine.StateMachineExample.main(StateMachineExample.java:169)
> {code}
> I think we can just check if a hostname is set and use that if there is no IP.
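A minimal, hypothetical sketch of the fallback described above (illustrative 
only, not the actual Fabric8FlinkKubeClient change):

{code:java}
// Hypothetical sketch: prefer the ingress IP, fall back to the hostname,
// and only fail when neither is available.
static String resolveRestAddress(String ip, String hostname) {
	if (ip != null && !ip.isEmpty()) {
		return ip;
	}
	if (hostname != null && !hostname.isEmpty()) {
		return hostname;
	}
	throw new IllegalStateException("Ingress exposes neither an IP nor a hostname.");
}
{code}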





[GitHub] [flink] KurtYoung commented on issue #10594: [FLINK-15266] Fix NPE for case operator code gen in blink planner

2019-12-16 Thread GitBox
KurtYoung commented on issue #10594: [FLINK-15266] Fix NPE for case operator 
code gen in blink planner
URL: https://github.com/apache/flink/pull/10594#issuecomment-566412890
 
 
   Please fix the code style error as well.




[jira] [Assigned] (FLINK-15287) Vectorized orc reader fails with Hive 2.0.1

2019-12-16 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young reassigned FLINK-15287:
--

Assignee: Jingsong Lee

> Vectorized orc reader fails with Hive 2.0.1
> ---
>
> Key: FLINK-15287
> URL: https://issues.apache.org/jira/browse/FLINK-15287
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.10.0
>
>
> Reading ORC table from Hive 2.0.1 fails with:
> {noformat}
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.orc.OrcFile.createReader(Lorg/apache/hadoop/fs/Path;Lorg/apache/orc/OrcFile$ReaderOptions;)Lorg/apache/orc/Reader;
>   at org.apache.flink.orc.OrcSplitReader.(OrcSplitReader.java:78)
>   at 
> org.apache.flink.orc.OrcColumnarRowSplitReader.(OrcColumnarRowSplitReader.java:53)
>   at 
> org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:93)
>   at 
> org.apache.flink.connectors.hive.read.HiveVectorizedOrcSplitReader.(HiveVectorizedOrcSplitReader.java:64)
>   at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:117)
>   at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:57)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:85)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196)
> {noformat}





[jira] [Updated] (FLINK-15287) Vectorized orc reader fails with Hive 2.0.1

2019-12-16 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-15287:
---
Affects Version/s: 1.10.0

> Vectorized orc reader fails with Hive 2.0.1
> ---
>
> Key: FLINK-15287
> URL: https://issues.apache.org/jira/browse/FLINK-15287
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Rui Li
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.10.0
>
>
> Reading ORC table from Hive 2.0.1 fails with:
> {noformat}
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.orc.OrcFile.createReader(Lorg/apache/hadoop/fs/Path;Lorg/apache/orc/OrcFile$ReaderOptions;)Lorg/apache/orc/Reader;
>   at org.apache.flink.orc.OrcSplitReader.(OrcSplitReader.java:78)
>   at 
> org.apache.flink.orc.OrcColumnarRowSplitReader.(OrcColumnarRowSplitReader.java:53)
>   at 
> org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:93)
>   at 
> org.apache.flink.connectors.hive.read.HiveVectorizedOrcSplitReader.(HiveVectorizedOrcSplitReader.java:64)
>   at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:117)
>   at 
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:57)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:85)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196)
> {noformat}





[jira] [Closed] (FLINK-15284) Sql error (Failed to push project into table source!)

2019-12-16 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-15284.
--
Resolution: Duplicate

> Sql error (Failed to push project into table source!)
> -
>
> Key: FLINK-15284
> URL: https://issues.apache.org/jira/browse/FLINK-15284
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
>
> *The sql is:*
> CREATE TABLE `t` (
>  x INT
>  ) WITH (
>  'format.field-delimiter'=',',
>  'connector.type'='filesystem',
>  'format.derive-schema'='true',
>  
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_binary_comparison_coercion/sources/t.csv',
>  'format.type'='csv'
>  );
> SELECT cast(' ' as BINARY(2)) = X'0020' FROM t;
> *The exception is:*
> [ERROR] Could not execute SQL statement. Reason:
>  org.apache.flink.table.api.TableException: Failed to push project into table 
> source! table source with pushdown capability must override and change 
> explainSource() API to explain the pushdown applied!
>  
>  
> *The whole exception is:*
> Caused by: org.apache.flink.table.api.TableException: Sql optimization: 
> Cannot generate a valid execution plan for the given query:Caused by: 
> org.apache.flink.table.api.TableException: Sql optimization: Cannot generate 
> a valid execution plan for the given query:
>  
> LogicalSink(name=[`default_catalog`.`default_database`.`_tmp_table_2136189659`],
>  fields=[EXPR$0])+- LogicalProject(EXPR$0=[false])+   - 
> LogicalTableScan(table=[[default_catalog, default_database, t, source: 
> [CsvTableSource(read fields: x)]]])
>  Failed to push project into table source! table source with pushdown 
> capability must override and change explainSource() API to explain the 
> pushdown applied!Please check the documentation for the set of currently 
> supported SQL features. at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:86)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at 
> scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:223)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
>  at 
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428) 
> at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryAndPersistInternal$14(LocalExecutor.java:701)
>  

[jira] [Commented] (FLINK-13998) Fix ORC test failure with Hive 2.0.x

2019-12-16 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997938#comment-16997938
 ] 

Rui Li commented on FLINK-13998:


We managed to make reading work, but the bigger problem is writing to ORC 
tables. In Hive 2.0.x, the ORC writer has to be used by a single thread: the 
same thread must create, use, and close the writer, which doesn't fit Flink's 
threading model. The related error looks something like:
{noformat}
Caused by: java.lang.IllegalArgumentException: Owner thread expected 
Thread[Source: Values(tuples=[[{ 3, _UTF-16LE'c' }]], values=[EXPR$0, EXPR$1]) 
-> SinkConversionToRow -> Sink: Unnamed (1/1),5,Flink Task Threads], got 
Thread[Legacy Source Thread - Source: Values(tuples=[[{ 3, _UTF-16LE'c' }]], 
values=[EXPR$0, EXPR$1]) -> SinkConversionToRow -> Sink: Unnamed (1/1),5,Flink 
Task Threads]
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:119)
at org.apache.orc.impl.MemoryManager.checkOwner(MemoryManager.java:104)
at org.apache.orc.impl.MemoryManager.addWriter(MemoryManager.java:118)
at org.apache.orc.impl.WriterImpl.(WriterImpl.java:204)
at 
org.apache.hadoop.hive.ql.io.orc.WriterImpl.(WriterImpl.java:91)
at 
org.apache.hadoop.hive.ql.io.orc.OrcFile.createWriter(OrcFile.java:308)
at 
org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$OrcRecordWriter.write(OrcOutputFormat.java:101)
at 
org.apache.flink.connectors.hive.HiveOutputFormatFactory$HiveOutputFormat.writeRecord(HiveOutputFormatFactory.java:205)
at 
org.apache.flink.connectors.hive.HiveOutputFormatFactory$HiveOutputFormat.writeRecord(HiveOutputFormatFactory.java:177)
at 
org.apache.flink.table.filesystem.SingleDirectoryWriter.write(SingleDirectoryWriter.java:52)
at 
org.apache.flink.table.filesystem.FileSystemOutputFormat.writeRecord(FileSystemOutputFormat.java:120)
{noformat}
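To make the constraint concrete, a small illustrative sketch (not Flink code 
and not the eventual fix) of confining an object to the one thread that 
creates, uses, and closes it:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleThreadConfinement {
	public static void main(String[] args) throws Exception {
		// One dedicated thread "owns" the writer-like object for its whole lifecycle.
		ExecutorService owner = Executors.newSingleThreadExecutor();
		StringBuilder writer = new StringBuilder();   // stands in for the ORC writer

		owner.submit(() -> { writer.append("create "); }).get();  // create on the owner thread
		owner.submit(() -> { writer.append("write "); }).get();   // use on the owner thread
		owner.submit(() -> { writer.append("close"); }).get();    // close on the owner thread

		System.out.println(writer);
		owner.shutdown();
	}
}
{code}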

> Fix ORC test failure with Hive 2.0.x
> 
>
> Key: FLINK-13998
> URL: https://issues.apache.org/jira/browse/FLINK-13998
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
>Priority: Major
> Fix For: 1.11.0
>
>
> Our test is using local file system, and orc in HIve 2.0.x seems having issue 
> with that. 
> {code}
> 06:54:43.156 [ORC_GET_SPLITS #0] ERROR org.apache.hadoop.hive.ql.io.AcidUtils 
> - Failed to get files with ID; using regular API
> java.lang.UnsupportedOperationException: Only supported for DFS; got class 
> org.apache.hadoop.fs.LocalFileSystem
>   at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.ensureDfs(Hadoop23Shims.java:813) 
> ~[hive-exec-2.0.0.jar:2.0.0]
>   at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.listLocatedHdfsStatus(Hadoop23Shims.java:784)
>  ~[hive-exec-2.0.0.jar:2.0.0]
>   at 
> org.apache.hadoop.hive.ql.io.AcidUtils.getAcidState(AcidUtils.java:477) 
> [hive-exec-2.0.0.jar:2.0.0]
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:890)
>  [hive-exec-2.0.0.jar:2.0.0]
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:875)
>  [hive-exec-2.0.0.jar:2.0.0]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_181]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_181]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_181]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_181]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_181]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
> {code}





[GitHub] [flink] flinkbot edited a comment on issue #10291: [FLINK-14899] [table-planner-blink] Fix unexpected plan when PROCTIME() is defined in query

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10291: [FLINK-14899] [table-planner-blink] 
Fix unexpected plan when PROCTIME() is defined in query
URL: https://github.com/apache/flink/pull/10291#issuecomment-557488395
 
 
   
   ## CI report:
   
   * c3082d201afec00fff9e25d46059b3789c9d28d5 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/137741065) 
   * 76a6169e77815b25b5fb265a706ab222d2395d87 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/137744705) 
   * 365b5e6cc7b3953877838d371d971d8a3cbb3966 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/137973852) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-10938) Add e2e test for natively running Flink session cluster on Kubernetes

2019-12-16 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-10938:
--
Summary: Add e2e test for natively running Flink session cluster on 
Kubernetes  (was: Enable active kubernetes integration e2e test for session 
cluster)

> Add e2e test for natively running Flink session cluster on Kubernetes
> -
>
> Key: FLINK-10938
> URL: https://issues.apache.org/jira/browse/FLINK-10938
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: JIN SUN
>Assignee: Yang Wang
>Priority: Blocker
> Fix For: 1.10.0
>
>
> Add E2E tests to verify Flink on K8s integration





[GitHub] [flink] wuchong commented on a change in pull request #10536: [FLINK-15191][connectors / kafka]Fix can't create table source for Kafka if watermark or computed column is defined.

2019-12-16 Thread GitBox
wuchong commented on a change in pull request #10536: [FLINK-15191][connectors 
/ kafka]Fix can't create table source for Kafka if watermark or computed column 
is defined.
URL: https://github.com/apache/flink/pull/10536#discussion_r358625975
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/TableSchemaUtils.java
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.utils;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.TableColumn;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.sinks.TableSink;
+import org.apache.flink.table.sources.TableSource;
+import org.apache.flink.util.Preconditions;
+
+/**
+ * Utilities to {@link TableSchema}.
+ */
+@Internal
+public class TableSchemaUtils {
+
+   /**
+* Return {@link TableSchema} which consists of all physical columns. 
That means, the computed
+* columns are filtered out.
+*
+* Readers(or writers) such as {@link TableSource} and {@link 
TableSink} should use this physical
+* schema to generate {@link TableSource#getProducedDataType()} and 
{@link TableSource#getTableSchema()}
+* rather than using the raw TableSchema which may lead contains 
computed columns.
+*/
+   public static TableSchema getPhysicalSchema(TableSchema tableSchema) {
+   Preconditions.checkNotNull(tableSchema);
+   TableSchema.Builder builder = new TableSchema.Builder();
+   tableSchema.getTableColumns().forEach(
+   tableColumn -> {
+   if (!tableColumn.isGenerated()) {
+   builder.field(tableColumn.getName(), 
tableColumn.getType());
+   }
+   });
+   return builder.build();
+   }
+
+   /**
+* Returns whether there contains the generated {@link TableColumn} 
such as computed column and watermark.
+*/
+   public static boolean containsGeneratedColumn(TableSchema tableSchema) {
 
 Review comment:
   Remove unused method.
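
   For context, a hypothetical usage sketch of the `getPhysicalSchema` utility quoted
   above (it assumes the 1.10 `TableSchema.Builder` API for declaring a computed
   column; this is not part of the PR):

   ```java
   import org.apache.flink.table.api.DataTypes;
   import org.apache.flink.table.api.TableSchema;
   import org.apache.flink.table.utils.TableSchemaUtils;

   public class PhysicalSchemaExample {
       public static void main(String[] args) {
           TableSchema schema = TableSchema.builder()
               .field("id", DataTypes.BIGINT())
               .field("ts", DataTypes.TIMESTAMP(3))
               .field("id_plus_one", DataTypes.BIGINT(), "id + 1")   // computed column
               .build();

           // Computed columns are filtered out; only the physical fields remain.
           TableSchema physical = TableSchemaUtils.getPhysicalSchema(schema);
           System.out.println(physical);   // contains only "id" and "ts"
       }
   }
   ```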




[GitHub] [flink] wuchong commented on a change in pull request #10536: [FLINK-15191][connectors / kafka]Fix can't create table source for Kafka if watermark or computed column is defined.

2019-12-16 Thread GitBox
wuchong commented on a change in pull request #10536: [FLINK-15191][connectors 
/ kafka]Fix can't create table source for Kafka if watermark or computed column 
is defined.
URL: https://github.com/apache/flink/pull/10536#discussion_r358625904
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/descriptors/SchemaValidatorTest.scala
 ##
 @@ -163,4 +164,59 @@ class SchemaValidatorTest {
 assertTrue(extractor.equals(new CustomExtractor("f3")))
 
assertTrue(rowtime.getWatermarkStrategy.isInstanceOf[BoundedOutOfOrderTimestamps])
   }
+
+  @Test
+  def testSchemaWithGeneratedColumnAndWatermark(): Unit = {
+val descriptor = new Schema()
+  .field("f1", DataTypes.STRING)
+  .field("f2", DataTypes.INT)
+  .field("f3", DataTypes.TIMESTAMP(3))
+
+val properties = new DescriptorProperties()
+properties.putProperties(descriptor.toProperties)
+properties.putString("schema.3.name", "generated-column")
+properties.putString("schema.3.data-type", DataTypes.INT.toString)
+properties.putString("schema.3.expr", "f2 + 1")
+properties.putString("schema.watermark.0.rowtime", "f3")
+properties.putString("schema.watermark.0.strategy.expr", "f3 - INTERVAL 
'5' SECOND")
+properties.putString("schema.watermark.0.strategy.data-type", 
DataTypes.TIMESTAMP(3).toString)
+
+new SchemaValidator(true, true, false).validate(properties)
+val expected = TableSchema.builder()
+  .field("f1", DataTypes.STRING)
+  .field("f2", DataTypes.INT)
+  .field("f3", DataTypes.TIMESTAMP(3))
+  .build()
+val schema = SchemaValidator.deriveTableSinkSchema(properties)
+assertEquals(expected, schema)
+   }
+
+  @Test(expected = classOf[IllegalStateException])
+  def testSchemaWithMultiWatermark(): Unit = {
 
 Review comment:
   I think we can remove this test, because multiple watermark validation has already been covered by the framework (i.e. TableSchema).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port the Kafka SQL related tests.

2019-12-16 Thread GitBox
openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port 
the Kafka SQL related tests.
URL: https://github.com/apache/flink/pull/10236#discussion_r358575061
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/SQLClientKafkaITCase.java
 ##
 @@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tests.util.kafka;
+
+import org.apache.flink.tests.util.TestUtils;
+import org.apache.flink.tests.util.categories.PreCommit;
+import org.apache.flink.tests.util.categories.TravisGroup1;
+import org.apache.flink.tests.util.flink.ClusterController;
+import org.apache.flink.tests.util.flink.FlinkResource;
+import org.apache.flink.tests.util.flink.SQLJobSubmission;
+import org.apache.flink.testutils.junit.FailsOnJava11;
+import org.apache.flink.util.FileUtils;
+import org.apache.flink.util.TestLogger;
+
+import org.apache.flink.shaded.guava18.com.google.common.base.Charsets;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TemporaryFolder;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URL;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.nio.file.StandardOpenOption;
+import java.security.DigestInputStream;
+import java.security.MessageDigest;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.flink.util.StringUtils.byteToHexString;
+
+/**
+ * End-to-end test for the kafka SQL connectors.
+ */
+@RunWith(Parameterized.class)
+@Category(value = {TravisGroup1.class, PreCommit.class, FailsOnJava11.class})
+public class SQLClientKafkaITCase extends TestLogger {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(SQLClientKafkaITCase.class);
+
+   private static final String KAFKA_JSON_SOURCE_SCHEMA_YAML = 
"kafka_json_source_schema.yaml";
+
+   @Parameterized.Parameters(name = "{index}: kafka-version:{1} 
kafka-sql-version:{2} kafka-sql-jar-version:{3}")
+   public static Collection<Object[]> data() {
+   return Arrays.asList(new Object[][]{
+   {"0.10.2.0", "0.10", "kafka-0.10"},
+   {"0.11.0.2", "0.11", "kafka-0.11"},
+   {"2.2.0", "universal", "kafka_"}
+   });
+   }
+
+   @Rule
+   public final FlinkResource flink;
+
+   @Rule
+   public final KafkaResource kafka;
+
+   @Rule
+   public final TemporaryFolder tmp = new TemporaryFolder();
+
+   private final String kafkaSQLVersion;
+   private final Path result;
+   private final Path sqlClientSessionConf;
+
+   private final Path sqlAvroJar;
+
+   private final Path sqlJsonJar;
+   private final Path sqlToolBoxJar;
+   private final Path sqlConnectorKafkaJar;
+
+   public SQLClientKafkaITCase(String kafkaVersion, String 
kafkaSQLVersion, String kafkaSQLJarPattern) throws IOException {
+   this.flink = FlinkResource.get();
+
+   this.kafka = KafkaResource.get(kafkaVersion);
+   this.kafkaSQLVersion = kafkaSQLVersion;
+
+   tmp.create();
+   Path tmpPath = tmp.getRoot().toPath();
+   LOG.info("The current temporary path: {}", tmpPath);
+   this.sqlClientSessionConf = 
tmpPath.resolve("sql-client-session.conf");
+   this.result = tmpPath.resolve("result");
+
+   final Path parent = Paths.get("..");
+   this.sqlAvroJar = TestUtils.getResourceJar(parent, 
"flink-sql-client-test.*sql-jars.*avro");
+   this.sqlJsonJar = TestUtils.getResourceJar(parent, 
"flink-sql-client-test.*sql-jars.*json");
+   this.sqlToolBoxJar = TestUtils.getResourceJar(parent, 

[GitHub] [flink] openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port the Kafka SQL related tests.

2019-12-16 Thread GitBox
openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port 
the Kafka SQL related tests.
URL: https://github.com/apache/flink/pull/10236#discussion_r358573785
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/resources/kafka_json_source_schema.yaml
 ##
 @@ -0,0 +1,143 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+tables:
+  - name: $TABLE_NAME
+type: source-table
+update-mode: append
+schema:
+  - name: rowtime
+type: TIMESTAMP
+rowtime:
+  timestamps:
+type: from-field
+from: timestamp
+  watermarks:
+type: periodic-bounded
+delay: 2000
+  - name: user
+type: VARCHAR
+  - name: event
+type: ROW
+connector:
+  type: kafka
+  version: "$KAFKA_SQL_VERSION"
+  topic: $TOPIC_NAME
+  startup-mode: earliest-offset
+  properties:
+- key: zookeeper.connect
+  value: localhost:2181
+- key: bootstrap.servers
+  value: localhost:9092
+format:
+  type: json
+  json-schema: >
+{
+  "type": "object",
+  "properties": {
+"timestamp": {
+  "type": "string",
+  "format": "date-time"
+},
+"user": {
+  "type": ["string", "null"]
+},
+"event": {
+  "type": "object",
+  "properties": {
+"type": {
+  "type": "string"
+},
+"message": {
+  "type": "string"
+}
+  }
+}
+  }
+}
+  - name: AvroBothTable
+type: source-sink-table
+update-mode: append
+schema:
+  - name: event_timestamp
+type: VARCHAR
+  - name: user
+type: VARCHAR
+  - name: message
+type: VARCHAR
+  - name: duplicate_count
+type: BIGINT
+connector:
+  type: kafka
+  version: "$KAFKA_SQL_VERSION"
+  topic: test-avro
+  startup-mode: earliest-offset
+  properties:
+- key: zookeeper.connect
+  value: localhost:2181
+- key: bootstrap.servers
 
 Review comment:
   Also sounds good here, will fix this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port the Kafka SQL related tests.

2019-12-16 Thread GitBox
openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port 
the Kafka SQL related tests.
URL: https://github.com/apache/flink/pull/10236#discussion_r349406335
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/SQLClientKafkaITCase.java
 ##
 @@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tests.util.kafka;
+
+import org.apache.flink.tests.util.TestUtils;
+import org.apache.flink.tests.util.categories.PreCommit;
+import org.apache.flink.tests.util.categories.TravisGroup1;
+import org.apache.flink.tests.util.flink.ClusterController;
+import org.apache.flink.tests.util.flink.FlinkResource;
+import org.apache.flink.tests.util.flink.SQLJobSubmission;
+import org.apache.flink.testutils.junit.FailsOnJava11;
+import org.apache.flink.util.FileUtils;
+import org.apache.flink.util.TestLogger;
+
+import org.apache.flink.shaded.guava18.com.google.common.base.Charsets;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TemporaryFolder;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URL;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.nio.file.StandardOpenOption;
+import java.security.DigestInputStream;
+import java.security.MessageDigest;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.flink.util.StringUtils.byteToHexString;
+
+/**
+ * End-to-end test for the kafka SQL connectors.
+ */
+@RunWith(Parameterized.class)
+@Category(value = {TravisGroup1.class, PreCommit.class, FailsOnJava11.class})
 
 Review comment:
   I see, will remove the PreCommit annotation.  Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port the Kafka SQL related tests.

2019-12-16 Thread GitBox
openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port 
the Kafka SQL related tests.
URL: https://github.com/apache/flink/pull/10236#discussion_r358587261
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/SQLClientKafkaITCase.java
 ##
 @@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tests.util.kafka;
+
+import org.apache.flink.tests.util.TestUtils;
+import org.apache.flink.tests.util.categories.PreCommit;
+import org.apache.flink.tests.util.categories.TravisGroup1;
+import org.apache.flink.tests.util.flink.ClusterController;
+import org.apache.flink.tests.util.flink.FlinkResource;
+import org.apache.flink.tests.util.flink.SQLJobSubmission;
+import org.apache.flink.testutils.junit.FailsOnJava11;
+import org.apache.flink.util.FileUtils;
+import org.apache.flink.util.TestLogger;
+
+import org.apache.flink.shaded.guava18.com.google.common.base.Charsets;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TemporaryFolder;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URL;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.nio.file.StandardOpenOption;
+import java.security.DigestInputStream;
+import java.security.MessageDigest;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.flink.util.StringUtils.byteToHexString;
+
+/**
+ * End-to-end test for the kafka SQL connectors.
+ */
+@RunWith(Parameterized.class)
+@Category(value = {TravisGroup1.class, PreCommit.class, FailsOnJava11.class})
+public class SQLClientKafkaITCase extends TestLogger {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(SQLClientKafkaITCase.class);
+
+   private static final String KAFKA_JSON_SOURCE_SCHEMA_YAML = 
"kafka_json_source_schema.yaml";
+
+   @Parameterized.Parameters(name = "{index}: kafka-version:{1} 
kafka-sql-version:{2} kafka-sql-jar-version:{3}")
+   public static Collection<Object[]> data() {
+   return Arrays.asList(new Object[][]{
+   {"0.10.2.0", "0.10", "kafka-0.10"},
+   {"0.11.0.2", "0.11", "kafka-0.11"},
+   {"2.2.0", "universal", "kafka_"}
+   });
+   }
+
+   @Rule
+   public final FlinkResource flink;
+
+   @Rule
+   public final KafkaResource kafka;
+
+   @Rule
+   public final TemporaryFolder tmp = new TemporaryFolder();
+
+   private final String kafkaSQLVersion;
+   private final Path result;
+   private final Path sqlClientSessionConf;
+
+   private final Path sqlAvroJar;
+
+   private final Path sqlJsonJar;
+   private final Path sqlToolBoxJar;
+   private final Path sqlConnectorKafkaJar;
+
+   public SQLClientKafkaITCase(String kafkaVersion, String 
kafkaSQLVersion, String kafkaSQLJarPattern) throws IOException {
+   this.flink = FlinkResource.get();
+
+   this.kafka = KafkaResource.get(kafkaVersion);
+   this.kafkaSQLVersion = kafkaSQLVersion;
+
+   tmp.create();
+   Path tmpPath = tmp.getRoot().toPath();
+   LOG.info("The current temporary path: {}", tmpPath);
+   this.sqlClientSessionConf = 
tmpPath.resolve("sql-client-session.conf");
 
 Review comment:
   OK


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port the Kafka SQL related tests.

2019-12-16 Thread GitBox
openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port 
the Kafka SQL related tests.
URL: https://github.com/apache/flink/pull/10236#discussion_r349407393
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/SQLJobSubmission.java
 ##
 @@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tests.util.flink;
+
+import org.apache.flink.util.Preconditions;
+
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Programmatic definition of a SQL job-submission.
+ */
+public class SQLJobSubmission {
+
+   private boolean embedded;
+   private String defaultEnvFile;
 
 Review comment:
   OK


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port the Kafka SQL related tests.

2019-12-16 Thread GitBox
openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port 
the Kafka SQL related tests.
URL: https://github.com/apache/flink/pull/10236#discussion_r358569066
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/FlinkDistribution.java
 ##
 @@ -236,6 +237,30 @@ public JobID submitJob(final JobSubmission jobSubmission) 
throws IOException {
}
}
 
+   public void submitSQLJob(SQLJobSubmission job) throws IOException {
+   final List<String> commands = new ArrayList<>();
+   
commands.add(bin.resolve("sql-client.sh").toAbsolutePath().toString());
+   if (job.isEmbedded()) {
+   commands.add("embedded");
+   }
+   if (job.getDefaultEnvFile() != null) {
+   commands.add("--defaults");
+   commands.add(job.getDefaultEnvFile());
+   }
+   if (job.getSessionEnvFile() != null) {
+   commands.add("--environment");
+   commands.add(job.getSessionEnvFile());
+   }
+   for (String jar : job.getJars()) {
+   commands.add("--jar");
+   commands.add(jar);
+   }
+   commands.add("--update");
 
 Review comment:
   > Is the documentation of the sql client outdated? It doesn't list --update 
   
   It's not outdated; the --update option is used to execute a single SQL statement in the client, such as:
   
   ```
   ./bin/sql-client.sh embedded --jar test.jar -u "select 1"
   ```
   
   > and embedded does not appear to be an optional argument.
   
   That's true; right now it is a required argument. Let me change this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port the Kafka SQL related tests.

2019-12-16 Thread GitBox
openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port 
the Kafka SQL related tests.
URL: https://github.com/apache/flink/pull/10236#discussion_r349407413
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/SQLJobSubmission.java
 ##
 @@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tests.util.flink;
+
+import org.apache.flink.util.Preconditions;
+
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Programmatic definition of a SQL job-submission.
+ */
+public class SQLJobSubmission {
+
+   private boolean embedded;
+   private String defaultEnvFile;
+   private String sessionEnvFile;
+   private List<String> jars;
+   private String sql;
+
+   private SQLJobSubmission(
+   boolean embedded,
+   String defaultEnvFile,
+   String sessionEnvFile,
+   List<String> jars,
+   String sql
+   ) {
+   Preconditions.checkNotNull(jars);
+   Preconditions.checkNotNull(sql);
+
+   this.embedded = embedded;
+   this.defaultEnvFile = defaultEnvFile;
+   this.sessionEnvFile = sessionEnvFile;
+   this.jars = jars;
+   this.sql = sql;
+   }
+
+   public boolean isEmbedded() {
+   return embedded;
+   }
+
+   public String getDefaultEnvFile() {
+   return defaultEnvFile;
+   }
+
+   public String getSessionEnvFile() {
+   return sessionEnvFile;
+   }
+
+   public List<String> getJars() {
+   return this.jars;
+   }
+
+   public String getSQL(){
+   return this.sql;
+   }
+
+   /**
+* Builder for the {@link SQLJobSubmission}.
+*/
+   public static class SQLJobSubmissionBuilder {
+   private List<String> jars = new ArrayList<>();
+   private boolean embedded = true;
+   private String defaultEnvFile = null;
+   private String sessionEnvFile = null;
+   private String sql = null;
+
+   public SQLJobSubmissionBuilder isEmbedded(boolean embedded) {
+   this.embedded = embedded;
+   return this;
+   }
+
+   public SQLJobSubmissionBuilder setDefaultEnvFile(String 
defaultEnvFile) {
+   this.defaultEnvFile = defaultEnvFile;
+   return this;
+   }
+
+   public SQLJobSubmissionBuilder setSessionEnvFile(String 
sessionEnvFile) {
+   this.sessionEnvFile = sessionEnvFile;
+   return this;
+   }
+
+   public SQLJobSubmissionBuilder addJar(Path jarFile) {
+   this.jars.add(jarFile.toAbsolutePath().toString());
+   return this;
+   }
+
+   public SQLJobSubmissionBuilder addSQL(String sql) {
 
 Review comment:
   OK
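   
   As an aside, a hedged usage sketch of this builder (assuming it exposes a build() method returning SQLJobSubmission, which is not visible in this diff; the env file path, jar name and SQL statement are made up):
   
   ```java
   import org.apache.flink.tests.util.flink.SQLJobSubmission;
   
   import java.nio.file.Paths;
   
   // Hedged sketch, not from the PR: build() is assumed, and all values are examples.
   public class SqlSubmissionExample {
   
       public static SQLJobSubmission exampleSubmission() {
           // FlinkDistribution#submitSQLJob would turn this into roughly:
           //   sql-client.sh embedded --environment /tmp/sql-client-session.conf \
           //     --jar /tmp/flink-sql-connector-kafka.jar --update "INSERT INTO ..."
           return new SQLJobSubmission.SQLJobSubmissionBuilder()
                   .isEmbedded(true)
                   .setSessionEnvFile("/tmp/sql-client-session.conf")
                   .addJar(Paths.get("/tmp/flink-sql-connector-kafka.jar"))
                   .addSQL("INSERT INTO AvroBothTable SELECT * FROM JsonSourceTable")
                   .build();
       }
   }
   ```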


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port the Kafka SQL related tests.

2019-12-16 Thread GitBox
openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port 
the Kafka SQL related tests.
URL: https://github.com/apache/flink/pull/10236#discussion_r358587289
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/SQLClientKafkaITCase.java
 ##
 @@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tests.util.kafka;
+
+import org.apache.flink.tests.util.TestUtils;
+import org.apache.flink.tests.util.categories.PreCommit;
+import org.apache.flink.tests.util.categories.TravisGroup1;
+import org.apache.flink.tests.util.flink.ClusterController;
+import org.apache.flink.tests.util.flink.FlinkResource;
+import org.apache.flink.tests.util.flink.SQLJobSubmission;
+import org.apache.flink.testutils.junit.FailsOnJava11;
+import org.apache.flink.util.FileUtils;
+import org.apache.flink.util.TestLogger;
+
+import org.apache.flink.shaded.guava18.com.google.common.base.Charsets;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TemporaryFolder;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URL;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.nio.file.StandardOpenOption;
+import java.security.DigestInputStream;
+import java.security.MessageDigest;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.flink.util.StringUtils.byteToHexString;
+
+/**
+ * End-to-end test for the kafka SQL connectors.
+ */
+@RunWith(Parameterized.class)
+@Category(value = {TravisGroup1.class, PreCommit.class, FailsOnJava11.class})
+public class SQLClientKafkaITCase extends TestLogger {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(SQLClientKafkaITCase.class);
+
+   private static final String KAFKA_JSON_SOURCE_SCHEMA_YAML = 
"kafka_json_source_schema.yaml";
+
+   @Parameterized.Parameters(name = "{index}: kafka-version:{1} 
kafka-sql-version:{2} kafka-sql-jar-version:{3}")
+   public static Collection<Object[]> data() {
+   return Arrays.asList(new Object[][]{
+   {"0.10.2.0", "0.10", "kafka-0.10"},
+   {"0.11.0.2", "0.11", "kafka-0.11"},
+   {"2.2.0", "universal", "kafka_"}
+   });
+   }
+
+   @Rule
+   public final FlinkResource flink;
+
+   @Rule
+   public final KafkaResource kafka;
+
+   @Rule
+   public final TemporaryFolder tmp = new TemporaryFolder();
+
+   private final String kafkaSQLVersion;
+   private final Path result;
+   private final Path sqlClientSessionConf;
+
+   private final Path sqlAvroJar;
+
+   private final Path sqlJsonJar;
+   private final Path sqlToolBoxJar;
+   private final Path sqlConnectorKafkaJar;
+
+   public SQLClientKafkaITCase(String kafkaVersion, String 
kafkaSQLVersion, String kafkaSQLJarPattern) throws IOException {
+   this.flink = FlinkResource.get();
+
+   this.kafka = KafkaResource.get(kafkaVersion);
+   this.kafkaSQLVersion = kafkaSQLVersion;
+
+   tmp.create();
+   Path tmpPath = tmp.getRoot().toPath();
+   LOG.info("The current temporary path: {}", tmpPath);
+   this.sqlClientSessionConf = 
tmpPath.resolve("sql-client-session.conf");
+   this.result = tmpPath.resolve("result");
 
 Review comment:
   Sounds good, will do.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port the Kafka SQL related tests.

2019-12-16 Thread GitBox
openinx commented on a change in pull request #10236: [FLINK-14703][e2e] Port 
the Kafka SQL related tests.
URL: https://github.com/apache/flink/pull/10236#discussion_r358573710
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/resources/kafka_json_source_schema.yaml
 ##
 @@ -0,0 +1,143 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+tables:
+  - name: $TABLE_NAME
+type: source-table
+update-mode: append
+schema:
+  - name: rowtime
+type: TIMESTAMP
+rowtime:
+  timestamps:
+type: from-field
+from: timestamp
+  watermarks:
+type: periodic-bounded
+delay: 2000
+  - name: user
+type: VARCHAR
+  - name: event
+type: ROW
+connector:
+  type: kafka
+  version: "$KAFKA_SQL_VERSION"
+  topic: $TOPIC_NAME
+  startup-mode: earliest-offset
+  properties:
+- key: zookeeper.connect
+  value: localhost:2181
+- key: bootstrap.servers
+  value: localhost:9092
+format:
+  type: json
+  json-schema: >
+{
+  "type": "object",
+  "properties": {
+"timestamp": {
+  "type": "string",
+  "format": "date-time"
+},
+"user": {
+  "type": ["string", "null"]
+},
+"event": {
+  "type": "object",
+  "properties": {
+"type": {
+  "type": "string"
+},
+"message": {
+  "type": "string"
+}
+  }
+}
+  }
+}
+  - name: AvroBothTable
+type: source-sink-table
+update-mode: append
+schema:
+  - name: event_timestamp
+type: VARCHAR
+  - name: user
+type: VARCHAR
+  - name: message
+type: VARCHAR
+  - name: duplicate_count
+type: BIGINT
+connector:
+  type: kafka
+  version: "$KAFKA_SQL_VERSION"
+  topic: test-avro
+  startup-mode: earliest-offset
+  properties:
+- key: zookeeper.connect
+  value: localhost:2181
 
 Review comment:
   Sounds good here, will fix this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15267) Fix NoSuchElementException if rowtime field is remapped in TableSource

2019-12-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15267:

Summary: Fix NoSuchElementException if rowtime field is remapped in 
TableSource  (was: Streaming SQL end-to-end test (Blink planner) fails on 
travis)

> Fix NoSuchElementException if rowtime field is remapped in TableSource
> --
>
> Key: FLINK-15267
> URL: https://issues.apache.org/jira/browse/FLINK-15267
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As titled, the 'Streaming SQL end-to-end test (Blink planner)' case failed 
> with below error:
> {code}
> The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: key not found: ts
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:146)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:671)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:216)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:989)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:989)
> Caused by: java.util.NoSuchElementException: key not found: ts
>   at scala.collection.MapLike$class.default(MapLike.scala:228)
>   at scala.collection.AbstractMap.default(Map.scala:59)
>   at scala.collection.MapLike$class.apply(MapLike.scala:141)
>   at scala.collection.AbstractMap.apply(Map.scala:59)
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$6.apply(TableSourceUtil.scala:164)
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$6.apply(TableSourceUtil.scala:163)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$.fixPrecisionForProducedDataType(TableSourceUtil.scala:163)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:143)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:60)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:60)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:54)
> {code}
> https://api.travis-ci.org/v3/job/625037124/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-15267) Fix NoSuchElementException if rowtime field is remapped in TableSource

2019-12-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-15267.
-
Resolution: Fixed

1.10.0: 4b4f81f29f40e82bed27a157b2b81bd06c5fc86b
1.11.0: 60098a707f7f7d114dcc24ee794c9afe8c814141

> Fix NoSuchElementException if rowtime field is remapped in TableSource
> --
>
> Key: FLINK-15267
> URL: https://issues.apache.org/jira/browse/FLINK-15267
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As titled, the 'Streaming SQL end-to-end test (Blink planner)' case failed 
> with below error:
> {code}
> The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: key not found: ts
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:146)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:671)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:216)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:989)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:989)
> Caused by: java.util.NoSuchElementException: key not found: ts
>   at scala.collection.MapLike$class.default(MapLike.scala:228)
>   at scala.collection.AbstractMap.default(Map.scala:59)
>   at scala.collection.MapLike$class.apply(MapLike.scala:141)
>   at scala.collection.AbstractMap.apply(Map.scala:59)
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$6.apply(TableSourceUtil.scala:164)
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$6.apply(TableSourceUtil.scala:163)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$.fixPrecisionForProducedDataType(TableSourceUtil.scala:163)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:143)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:60)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:60)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:54)
> {code}
> https://api.travis-ci.org/v3/job/625037124/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #10596: [FLINK-15267][table-planner-blink] Fix NoSuchElementException if rowtime field is remapped

2019-12-16 Thread GitBox
wuchong merged pull request #10596: [FLINK-15267][table-planner-blink] Fix 
NoSuchElementException if rowtime field is remapped
URL: https://github.com/apache/flink/pull/10596
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on issue #10596: [FLINK-15267][table-planner-blink] Fix NoSuchElementException if rowtime field is remapped

2019-12-16 Thread GitBox
wuchong commented on issue #10596: [FLINK-15267][table-planner-blink] Fix 
NoSuchElementException if rowtime field is remapped
URL: https://github.com/apache/flink/pull/10596#issuecomment-566402966
 
 
   Thanks @JingsongLi for the review. 
   Merging...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on issue #10291: [FLINK-14899] [table-planner-blink] Fix unexpected plan when PROCTIME() is defined in query

2019-12-16 Thread GitBox
wuchong commented on issue #10291: [FLINK-14899] [table-planner-blink] Fix 
unexpected plan when PROCTIME() is defined in query
URL: https://github.com/apache/flink/pull/10291#issuecomment-566402576
 
 
   @flinkbot run travis


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15287) Vectorized orc reader fails with Hive 2.0.1

2019-12-16 Thread Rui Li (Jira)
Rui Li created FLINK-15287:
--

 Summary: Vectorized orc reader fails with Hive 2.0.1
 Key: FLINK-15287
 URL: https://issues.apache.org/jira/browse/FLINK-15287
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Reporter: Rui Li
 Fix For: 1.10.0


Reading an ORC table from Hive 2.0.1 fails with:
{noformat}
Caused by: java.lang.NoSuchMethodError: 
org.apache.orc.OrcFile.createReader(Lorg/apache/hadoop/fs/Path;Lorg/apache/orc/OrcFile$ReaderOptions;)Lorg/apache/orc/Reader;
at org.apache.flink.orc.OrcSplitReader.(OrcSplitReader.java:78)
at 
org.apache.flink.orc.OrcColumnarRowSplitReader.(OrcColumnarRowSplitReader.java:53)
at 
org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:93)
at 
org.apache.flink.connectors.hive.read.HiveVectorizedOrcSplitReader.(HiveVectorizedOrcSplitReader.java:64)
at 
org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:117)
at 
org.apache.flink.connectors.hive.read.HiveTableInputFormat.open(HiveTableInputFormat.java:57)
at 
org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:85)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196)
{noformat}
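
For context, the failing call corresponds roughly to the standard orc-core reader API sketched below (not the exact Flink code); the error suggests that an incompatible org.apache.orc.OrcFile, e.g. one pulled in via the Hive 2.0.1 dependencies, is picked up at runtime instead of the orc-core version Flink was compiled against. The file path in the sketch is made up.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;

import java.io.IOException;

// Sketch of the orc-core call that OrcSplitReader relies on; the path is an example.
public class OrcReaderExample {

    public static Reader open() throws IOException {
        return OrcFile.createReader(
                new Path("/tmp/part-00000.orc"),
                OrcFile.readerOptions(new Configuration()));
    }
}
```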



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-13197) Fix Hive view row type mismatch when expanding in planner

2019-12-16 Thread Danny Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997924#comment-16997924
 ] 

Danny Chen commented on FLINK-13197:


PR updated: https://github.com/apache/flink/pull/10595/files . Can someone help to review? Thanks in advance.

> Fix Hive view row type mismatch when expanding in planner
> -
>
> Key: FLINK-13197
> URL: https://issues.apache.org/jira/browse/FLINK-13197
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Hive
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Major
> Fix For: 1.10.0
>
>
> One goal of HiveCatalog and the Hive integration is Flink-Hive interoperability, 
> that is, Flink should understand existing Hive meta-objects, and Hive meta-objects 
> created through Flink should be understood by Hive.
> Take, for example, a Hive view v1 in HiveCatalog hc and database db. Unlike an 
> equivalent Flink view, whose full path in the expanded query should be hc.db.v1, 
> the Hive view's full path in the expanded query should be db.v1 so that Hive can 
> understand it, no matter whether it was created by Hive or Flink.
> [~lirui] can you help to ensure that Flink can also query Hive's view in both 
> Flink planner and Blink planner?
> cc [~xuefuz]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache commented on a change in pull request #10587: [FLINK-15269][table] Fix hive dialect limitation to overwrite and partition syntax

2019-12-16 Thread GitBox
lirui-apache commented on a change in pull request #10587: [FLINK-15269][table] 
Fix hive dialect limitation to overwrite and partition syntax
URL: https://github.com/apache/flink/pull/10587#discussion_r358619173
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/resources/org.apache.flink.sql.parser.utils/ParserResource.properties
 ##
 @@ -17,6 +17,5 @@
 # See wrapper class org.apache.calcite.runtime.CalciteResource.
 #
 MultipleWatermarksUnsupported=Multiple WATERMARK statements is not supported 
yet.
-OverwriteIsOnlyAllowedForHive=OVERWRITE expression is only allowed for HIVE 
dialect.
 OverwriteIsOnlyUsedWithInsert=OVERWRITE expression is only used with INSERT 
statement.
 PartitionIsOnlyAllowedForHive=PARTITION expression is only allowed for HIVE 
dialect.
 
 Review comment:
   It seems incorrect that "PARTITION expression is only allowed for HIVE 
dialect", because with this PR, users can insert into/overwrite a partition w/o 
the Hive dialect. So we should change it to something like "Creating 
partitioned table is only allowed for HIVE dialect"?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10590: [FLINK-15205][hive][document] Update hive partition and orc for read_write_hive

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10590: [FLINK-15205][hive][document] Update 
hive partition and orc for read_write_hive
URL: https://github.com/apache/flink/pull/10590#issuecomment-565962731
 
 
   
   ## CI report:
   
   * ac16927be3e1cb361f7e4a5f14c9c1883fbd9903 UNKNOWN
   * a219ec5839a43aeea3ae73d878f2717ac17ef37e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141174082) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3628)
 
   * e6d9cfac4eaa5cdfa5ea140422f2656ce4f49f3d UNKNOWN
   * 296437af9ee679e070a601b0365c4fba23fae198 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/141336029) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3654)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15267) Streaming SQL end-to-end test (Blink planner) fails on travis

2019-12-16 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-15267:
--
Fix Version/s: 1.10.0

> Streaming SQL end-to-end test (Blink planner) fails on travis
> -
>
> Key: FLINK-15267
> URL: https://issues.apache.org/jira/browse/FLINK-15267
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As titled, the 'Streaming SQL end-to-end test (Blink planner)' case failed 
> with below error:
> {code}
> The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: key not found: ts
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:146)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:671)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:216)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:989)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:989)
> Caused by: java.util.NoSuchElementException: key not found: ts
>   at scala.collection.MapLike$class.default(MapLike.scala:228)
>   at scala.collection.AbstractMap.default(Map.scala:59)
>   at scala.collection.MapLike$class.apply(MapLike.scala:141)
>   at scala.collection.AbstractMap.apply(Map.scala:59)
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$6.apply(TableSourceUtil.scala:164)
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$6.apply(TableSourceUtil.scala:163)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.sources.TableSourceUtil$.fixPrecisionForProducedDataType(TableSourceUtil.scala:163)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:143)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:60)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:60)
>   at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:54)
> {code}
> https://api.travis-ci.org/v3/job/625037124/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10590: [FLINK-15205][hive][document] Update hive partition and orc for read_write_hive

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10590: [FLINK-15205][hive][document] Update 
hive partition and orc for read_write_hive
URL: https://github.com/apache/flink/pull/10590#issuecomment-565962731
 
 
   
   ## CI report:
   
   * ac16927be3e1cb361f7e4a5f14c9c1883fbd9903 UNKNOWN
   * a219ec5839a43aeea3ae73d878f2717ac17ef37e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141174082) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3628)
 
   * e6d9cfac4eaa5cdfa5ea140422f2656ce4f49f3d UNKNOWN
   * 296437af9ee679e070a601b0365c4fba23fae198 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14610) Add documentation for how to use watermark syntax in DDL

2019-12-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-14610:

Parent: (was: FLINK-14320)
Issue Type: Test  (was: Sub-task)

> Add documentation for how to use watermark syntax in DDL
> 
>
> Key: FLINK-14610
> URL: https://issues.apache.org/jira/browse/FLINK-14610
> Project: Flink
>  Issue Type: Test
>  Components: Documentation
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.10.0
>
>
> Add documentation for how to use watermark syntax in DDL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14610) Add documentation for how to use watermark syntax in DDL

2019-12-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-14610:

Parent: FLINK-15273
Issue Type: Sub-task  (was: Test)

> Add documentation for how to use watermark syntax in DDL
> 
>
> Key: FLINK-14610
> URL: https://issues.apache.org/jira/browse/FLINK-14610
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.10.0
>
>
> Add documentation for how to use watermark syntax in DDL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15205) add doc and example of INSERT OVERWRITE and insert into partitioned table for Hive connector

2019-12-16 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997898#comment-16997898
 ] 

Jingsong Lee commented on FLINK-15205:
--

Hi [~phoenixjiangnan] [~jark] , can you assign this ticket to me?

> add doc and example of INSERT OVERWRITE and insert into partitioned table for 
> Hive connector
> 
>
> Key: FLINK-15205
> URL: https://issues.apache.org/jira/browse/FLINK-15205
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10590: [FLINK-15205][hive][document] Update hive partition and orc for read_write_hive

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10590: [FLINK-15205][hive][document] Update 
hive partition and orc for read_write_hive
URL: https://github.com/apache/flink/pull/10590#issuecomment-565962731
 
 
   
   ## CI report:
   
   * ac16927be3e1cb361f7e4a5f14c9c1883fbd9903 UNKNOWN
   * a219ec5839a43aeea3ae73d878f2717ac17ef37e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141174082) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3628)
 
   * e6d9cfac4eaa5cdfa5ea140422f2656ce4f49f3d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15205) add doc and example of INSERT OVERWRITE and insert into partitioned table for Hive connector

2019-12-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15205:
-
Parent: FLINK-15273
Issue Type: Sub-task  (was: Task)

> add doc and example of INSERT OVERWRITE and insert into partitioned table for 
> Hive connector
> 
>
> Key: FLINK-15205
> URL: https://issues.apache.org/jira/browse/FLINK-15205
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15204) add documentation for Flink-Hive timestamp conversions in table and udf

2019-12-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15204:
-
Parent: FLINK-15273
Issue Type: Sub-task  (was: Task)

> add documentation for Flink-Hive timestamp conversions in table and udf
> ---
>
> Key: FLINK-15204
> URL: https://issues.apache.org/jira/browse/FLINK-15204
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Major
> Fix For: 1.10.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15193) Move DDL to first tab in table connector page

2019-12-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15193:
-
Parent: FLINK-15273
Issue Type: Sub-task  (was: Task)

> Move DDL to first tab in table connector page
> -
>
> Key: FLINK-15193
> URL: https://issues.apache.org/jira/browse/FLINK-15193
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.10.0
>
>
> Since we have good support for DDL in tableEnv.sqlUpdate and SQL-CLI, I 
> think it is time to highlight DDL in the documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15189) add documentation for catalog view and hive view

2019-12-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15189:
-
Parent: FLINK-15273
Issue Type: Sub-task  (was: Task)

> add documentation for catalog view and hive view
> 
>
> Key: FLINK-15189
> URL: https://issues.apache.org/jira/browse/FLINK-15189
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Blocker
> Fix For: 1.10.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14796) Add document about limitations of different Hive versions

2019-12-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-14796:
-
Parent: FLINK-15273
Issue Type: Sub-task  (was: Task)

> Add document about limitations of different Hive versions
> -
>
> Key: FLINK-14796
> URL: https://issues.apache.org/jira/browse/FLINK-14796
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
> Fix For: 1.10.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #10590: [FLINK-15205][hive][document] Update hive partition and orc for read_write_hive

2019-12-16 Thread GitBox
JingsongLi commented on a change in pull request #10590: 
[FLINK-15205][hive][document] Update hive partition and orc for read_write_hive
URL: https://github.com/apache/flink/pull/10590#discussion_r358597410
 
 

 ##
 File path: docs/dev/table/hive/read_write_hive.md
 ##
 @@ -111,16 +111,44 @@ __ __
 
 ## Writing To Hive
 
-Similarly, data can be written into hive using an `INSERT INTO` clause. 
+Similarly, data can be written into hive using an `INSERT` clause. Consider 
there is a mytable table with two columns: name, age.
 
 {% highlight bash %}
-Flink SQL> INSERT INTO mytable (name, value) VALUES ('Tom', 4.72);
+# -- Insert with append mode -- 
+Flink SQL> INSERT INTO mytable SELECT 'Tom', 25;
+
+# -- Insert with overwrite mode -- 
+Flink SQL> INSERT OVERWRITE mytable SELECT 'Tom', 25;
+{% endhighlight %}
+
+We support partition table too, Consider there is a myparttable table with 
four columns: name, age, my_type and my_date. Column my_type and column my_date 
are the partition columns.
+
+{% highlight bash %}
+# -- Insert with static partition -- 
+Flink SQL> INSERT OVERWRITE myparttable PARTITION (my_type='type_1', 
my_date='2019-08-08') SELECT 'Tom', 25;
+
+# -- Insert with dynamic partition -- 
+Flink SQL> INSERT OVERWRITE myparttable SELECT 'Tom', 25, 'type_1', 
'2019-08-08';
+
+# -- Insert with static(my_type) and dynamic(my_date) partition -- 
+Flink SQL> INSERT OVERWRITE myparttable PARTITION (my_type='type_1') SELECT 
'Tom', 25, '2019-08-08';
 {% endhighlight %}
 
 ## Formats
 
 We have tested on the following of table storage formats: text, csv, 
SequenceFile, ORC, and Parquet.
 
+# -- ORC Vectorized Optimization -- 
 
 Review comment:
   Yes
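   For readers following the diff above, a minimal sketch of how the same static-partition 
insert could be issued from the Python Table API; the catalog and table names below are 
assumptions for illustration, not part of the proposed doc change:
   ```python
   from pyflink.table import BatchTableEnvironment, EnvironmentSettings

   # Sketch only: the catalog "myhive" and the table "myparttable" are assumed
   # to already be registered (e.g. through a HiveCatalog).
   settings = EnvironmentSettings.new_instance().use_blink_planner().in_batch_mode().build()
   table_env = BatchTableEnvironment.create(environment_settings=settings)
   table_env.use_catalog("myhive")

   # Static-partition insert, mirroring the SQL CLI example in the diff above.
   table_env.sql_update(
       "INSERT OVERWRITE myparttable PARTITION (my_type='type_1', my_date='2019-08-08') "
       "SELECT 'Tom', 25")
   table_env.execute("hive_write_example")
   ```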


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10590: [FLINK-15205][hive][document] Update hive partition and orc for read_write_hive

2019-12-16 Thread GitBox
JingsongLi commented on a change in pull request #10590: 
[FLINK-15205][hive][document] Update hive partition and orc for read_write_hive
URL: https://github.com/apache/flink/pull/10590#discussion_r358596893
 
 

 ##
 File path: docs/dev/table/hive/read_write_hive.md
 ##
 @@ -111,16 +111,44 @@ __ __
 
 ## Writing To Hive
 
-Similarly, data can be written into hive using an `INSERT INTO` clause. 
+Similarly, data can be written into hive using an `INSERT` clause. Consider 
there is a mytable table with two columns: name, age.
 
 {% highlight bash %}
-Flink SQL> INSERT INTO mytable (name, value) VALUES ('Tom', 4.72);
+# -- Insert with append mode -- 
+Flink SQL> INSERT INTO mytable SELECT 'Tom', 25;
+
+# -- Insert with overwrite mode -- 
+Flink SQL> INSERT OVERWRITE mytable SELECT 'Tom', 25;
 
 Review comment:
   Will modify to:
   ```
   INSERT INTO will append to the table or partition, keeping the existing data 
intact.
   INSERT OVERWRITE will overwrite any existing data in the table or partition.
   ```
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15148) Add doc for create/drop/alter database ddl

2019-12-16 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-15148.

Resolution: Duplicate

> Add doc for create/drop/alter database ddl
> --
>
> Key: FLINK-15148
> URL: https://issues.apache.org/jira/browse/FLINK-15148
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Terry Wang
>Priority: Blocker
> Fix For: 1.10.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-15147) Add doc for alter table set properties and rename table ddl

2019-12-16 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-15147.

Resolution: Duplicate

> Add doc for alter table set properties and rename table ddl
> ---
>
> Key: FLINK-15147
> URL: https://issues.apache.org/jira/browse/FLINK-15147
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Terry Wang
>Priority: Blocker
> Fix For: 1.10.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10597: [FLINK-15270][python][docs] Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10597: [FLINK-15270][python][docs] Add 
documentation about how to specify third-party dependencies via API for Python 
UDFs
URL: https://github.com/apache/flink/pull/10597#issuecomment-566080478
 
 
   
   ## CI report:
   
   * e720c562f447695d4a91c0ccf4133a11ee7df566 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141220051) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3641)
 
   * 8c07c1ff11ab4071fcc1e2c0956a61ae190589e2 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/141323087) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3651)
 
   * 22ee02de074d072f5092ce66df72a6b9c9c96040 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141324894) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3652)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10592: [FLINK-15251][kubernetes] Use hostname to create end point when ip in load balancer is null

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10592: [FLINK-15251][kubernetes] Use 
hostname to create end point when ip in load balancer is null
URL: https://github.com/apache/flink/pull/10592#issuecomment-566043432
 
 
   
   ## CI report:
   
   * 8c3142ed5d56c3dc550029bf28ba490f1e0dfdff Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141205724) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3634)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-15270) Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997834#comment-16997834
 ] 

Hequn Cheng edited comment on FLINK-15270 at 12/17/19 3:50 AM:
---

Resolved
in 1.10.0 via 
624b5b87c7b7932f409101e09932d894ff0d317d..653f38ae9b8044eff80b572ab92a997087297af3
in 1.11.0 via 
6f71ef05efc8fa9bc417bfe9caf5aba68264c9f7..5b1df6ec915eefac1803ea46f135d8c240486a12


was (Author: hequn8128):
Resolved
in 1.10.0 via 624b5b87c7b7932f409101e09932d894ff0d317d
in 1.11.0 via 6f71ef05efc8fa9bc417bfe9caf5aba68264c9f7

> Add documentation about how to specify third-party dependencies via API for 
> Python UDFs
> ---
>
> Key: FLINK-15270
> URL: https://issues.apache.org/jira/browse/FLINK-15270
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Documentation
>Affects Versions: 1.10.0
>Reporter: Dian Fu
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently we have already provided APIs and command line options to allow 
> users to specify third-party dependencies which may be used in Python UDFs. 
> There is already documentation about how to specify third-party dependencies 
> in the command line options. We should also add documentation about how to 
> specify third-party dependencies via API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10597: [FLINK-15270][python][docs] Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10597: [FLINK-15270][python][docs] Add 
documentation about how to specify third-party dependencies via API for Python 
UDFs
URL: https://github.com/apache/flink/pull/10597#issuecomment-566080478
 
 
   
   ## CI report:
   
   * e720c562f447695d4a91c0ccf4133a11ee7df566 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141220051) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3641)
 
   * 8c07c1ff11ab4071fcc1e2c0956a61ae190589e2 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/141323087) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3651)
 
   * 22ee02de074d072f5092ce66df72a6b9c9c96040 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10593: [FLINK-15271][python][docs] Add documentation about the environment requirements for running Python UDF

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10593: [FLINK-15271][python][docs] Add 
documentation about the environment requirements for running Python UDF
URL: https://github.com/apache/flink/pull/10593#issuecomment-566043462
 
 
   
   ## CI report:
   
   * 265563e943d2cee03fd543cb996583c808c88f70 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141205754) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3635)
 
   * 83b3ee9a1ae8995c60f69a48f8b8110d6925787f Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/141323071) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3650)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15270) Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-15270.
---
Resolution: Resolved

> Add documentation about how to specify third-party dependencies via API for 
> Python UDFs
> ---
>
> Key: FLINK-15270
> URL: https://issues.apache.org/jira/browse/FLINK-15270
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Documentation
>Affects Versions: 1.10.0
>Reporter: Dian Fu
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently we have already provided APIs and command line options to allow 
> users to specify third-party dependencies which may be used in Python UDFs. 
> There is already documentation about how to specify third-party dependencies 
> in the command line options. We should also add documentation about how to 
> specify third-party dependencies via API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15270) Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997834#comment-16997834
 ] 

Hequn Cheng commented on FLINK-15270:
-

Resolved
in 1.10.0 via 624b5b87c7b7932f409101e09932d894ff0d317d
in 1.11.0 via 6f71ef05efc8fa9bc417bfe9caf5aba68264c9f7

> Add documentation about how to specify third-party dependencies via API for 
> Python UDFs
> ---
>
> Key: FLINK-15270
> URL: https://issues.apache.org/jira/browse/FLINK-15270
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Documentation
>Affects Versions: 1.10.0
>Reporter: Dian Fu
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently we have already provided APIs and command line options to allow 
> users to specify third-party dependencies which may be used in Python UDFs. 
> There is already documentation about how to specify third-party dependencies 
> in the command line options. We should also add documentation about how to 
> specify third-party dependencies via API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hequn8128 closed pull request #10597: [FLINK-15270][python][docs] Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread GitBox
hequn8128 closed pull request #10597: [FLINK-15270][python][docs] Add 
documentation about how to specify third-party dependencies via API for Python 
UDFs
URL: https://github.com/apache/flink/pull/10597
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15271) Add documentation about the Python environment requirements

2019-12-16 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-15271.
---
Resolution: Resolved

> Add documentation about the Python environment requirements
> ---
>
> Key: FLINK-15271
> URL: https://issues.apache.org/jira/browse/FLINK-15271
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Documentation
>Affects Versions: 1.10.0
>Reporter: Dian Fu
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Python UDFs have specific requirements on the Python environment, such as 
> Python 3.5+, Beam 2.15.0, etc. We should add clear documentation about these 
> requirements.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hequn8128 closed pull request #10593: [FLINK-15271][python][docs] Add documentation about the environment requirements for running Python UDF

2019-12-16 Thread GitBox
hequn8128 closed pull request #10593: [FLINK-15271][python][docs] Add 
documentation about the environment requirements for running Python UDF
URL: https://github.com/apache/flink/pull/10593
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on issue #10597: [FLINK-15270][python][docs] Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread GitBox
hequn8128 commented on issue #10597: [FLINK-15270][python][docs] Add 
documentation about how to specify third-party dependencies via API for Python 
UDFs
URL: https://github.com/apache/flink/pull/10597#issuecomment-566363392
 
 
   @dianfu Thanks. Merging...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15271) Add documentation about the Python environment requirements

2019-12-16 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997831#comment-16997831
 ] 

Hequn Cheng commented on FLINK-15271:
-

Resolved 
in 1.11.0 via 393c96e8a7b6541683c859fb6709a78f876d0bf0
in 1.10.0 via 969c210933fcf95d7681aa85d0298f4cf6b435ed

> Add documentation about the Python environment requirements
> ---
>
> Key: FLINK-15271
> URL: https://issues.apache.org/jira/browse/FLINK-15271
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Documentation
>Affects Versions: 1.10.0
>Reporter: Dian Fu
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Python UDFs have specific requirements on the Python environment, such as 
> Python 3.5+, Beam 2.15.0, etc. We should add clear documentation about these 
> requirements.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu commented on issue #10597: [FLINK-15270][python][docs] Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread GitBox
dianfu commented on issue #10597: [FLINK-15270][python][docs] Add documentation 
about how to specify third-party dependencies via API for Python UDFs
URL: https://github.com/apache/flink/pull/10597#issuecomment-566363050
 
 
   @hequn8128 Thanks a lot for the update. LGTM. +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on issue #10593: [FLINK-15271][python][docs] Add documentation about the environment requirements for running Python UDF

2019-12-16 Thread GitBox
hequn8128 commented on issue #10593: [FLINK-15271][python][docs] Add 
documentation about the environment requirements for running Python UDF
URL: https://github.com/apache/flink/pull/10593#issuecomment-566362319
 
 
   @dianfu Thanks. Merging...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10597: [FLINK-15270][python][docs] Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10597: [FLINK-15270][python][docs] Add 
documentation about how to specify third-party dependencies via API for Python 
UDFs
URL: https://github.com/apache/flink/pull/10597#issuecomment-566080478
 
 
   
   ## CI report:
   
   * e720c562f447695d4a91c0ccf4133a11ee7df566 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141220051) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3641)
 
   * 8c07c1ff11ab4071fcc1e2c0956a61ae190589e2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10593: [FLINK-15271][python][docs] Add documentation about the environment requirements for running Python UDF

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10593: [FLINK-15271][python][docs] Add 
documentation about the environment requirements for running Python UDF
URL: https://github.com/apache/flink/pull/10593#issuecomment-566043462
 
 
   
   ## CI report:
   
   * 265563e943d2cee03fd543cb996583c808c88f70 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141205754) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3635)
 
   * 83b3ee9a1ae8995c60f69a48f8b8110d6925787f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10592: [FLINK-15251][kubernetes] Use hostname to create end point when ip in load balancer is null

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10592: [FLINK-15251][kubernetes] Use 
hostname to create end point when ip in load balancer is null
URL: https://github.com/apache/flink/pull/10592#issuecomment-566043432
 
 
   
   ## CI report:
   
   * 8c3142ed5d56c3dc550029bf28ba490f1e0dfdff Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141205724) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3634)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #5929: [FLINK-8497] [connectors] KafkaConsumer throws NPE if topic doesn't exist

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #5929: [FLINK-8497] [connectors] 
KafkaConsumer throws NPE if topic doesn't exist
URL: https://github.com/apache/flink/pull/5929#issuecomment-566347441
 
 
   
   ## CI report:
   
   * d4da41dd8b53d4c481afd0b1fac1401dc371fed0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141319176) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #6336: [FLINK-9630] [connector] Kafka09PartitionDiscoverer cause connection …

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #6336: [FLINK-9630] [connector] 
Kafka09PartitionDiscoverer cause connection …
URL: https://github.com/apache/flink/pull/6336#issuecomment-566346401
 
 
   
   ## CI report:
   
   * 0aa8d75af085c2465e8cfd9e5a572770a5d95738 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141319163) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangxiyuan commented on issue #9768: [FLINK-14086][test][core] Add OperatingArchitecture Enum

2019-12-16 Thread GitBox
wangxiyuan commented on issue #9768: [FLINK-14086][test][core] Add 
OperatingArchitecture Enum
URL: https://github.com/apache/flink/pull/9768#issuecomment-566359608
 
 
   Hi @StephanEwen, could this PR get your feedback again? Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu commented on issue #10593: [FLINK-15271][python][docs] Add documentation about the environment requirements for running Python UDF

2019-12-16 Thread GitBox
dianfu commented on issue #10593: [FLINK-15271][python][docs] Add documentation 
about the environment requirements for running Python UDF
URL: https://github.com/apache/flink/pull/10593#issuecomment-566357353
 
 
   @hequn8128 Thanks a lot for the update. LGTM. +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on issue #10593: [FLINK-15271][python][docs] Add documentation about the environment requirements for running Python UDF

2019-12-16 Thread GitBox
hequn8128 commented on issue #10593: [FLINK-15271][python][docs] Add 
documentation about the environment requirements for running Python UDF
URL: https://github.com/apache/flink/pull/10593#issuecomment-566356830
 
 
   @dianfu Thanks a lot for the review. The PR has been updated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on issue #10597: [FLINK-15270][python][docs] Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread GitBox
hequn8128 commented on issue #10597: [FLINK-15270][python][docs] Add 
documentation about how to specify third-party dependencies via API for Python 
UDFs
URL: https://github.com/apache/flink/pull/10597#issuecomment-566355301
 
 
   @dianfu Thanks a lot for your suggestions. I have updated the PR and it would 
be great if you could take another look.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] roshannaik edited a comment on issue #10089: [FLINK-12342][yarn] Remove container requests in order to reduce excess containers

2019-12-16 Thread GitBox
roshannaik edited a comment on issue #10089: [FLINK-12342][yarn] Remove 
container requests in order to reduce excess containers
URL: https://github.com/apache/flink/pull/10089#issuecomment-566354943
 
 
   I was directed to this patch by @HuangZhenQiu. I applied it to Flink 1.6 and 
used it to launch a job with 500 YARN containers. I see a definite improvement in 
the over-allocation problem. However, once the job comes up (as per the YARN UI), 
the Flink UI for the job fails to load. 
   
   Perhaps the UI is having trouble with the large number of containers? Have 
there been any scalability improvements to the UI that we could additionally 
try out?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10592: [FLINK-15251][kubernetes] Use hostname to create end point when ip in load balancer is null

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #10592: [FLINK-15251][kubernetes] Use 
hostname to create end point when ip in load balancer is null
URL: https://github.com/apache/flink/pull/10592#issuecomment-566043432
 
 
   
   ## CI report:
   
   * 8c3142ed5d56c3dc550029bf28ba490f1e0dfdff Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141205724) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3634)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] roshannaik commented on issue #10089: [FLINK-12342][yarn] Remove container requests in order to reduce excess containers

2019-12-16 Thread GitBox
roshannaik commented on issue #10089: [FLINK-12342][yarn] Remove container 
requests in order to reduce excess containers
URL: https://github.com/apache/flink/pull/10089#issuecomment-566354943
 
 
   I was directed to this patch by @HuangZhenQiu. I applied it to Flink 1.6 and 
used it to launch a job with 500 YARN containers. I see a definite improvement 
with the over-allocation problem. However, once the job comes up (as per the 
YARN UI), the Flink UI fails to load. 
   
   Perhaps the UI is having trouble with the large number of containers?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu commented on a change in pull request #10593: [FLINK-15271][python][docs] Add documentation about the environment requirements for running Python UDF

2019-12-16 Thread GitBox
dianfu commented on a change in pull request #10593: 
[FLINK-15271][python][docs] Add documentation about the environment 
requirements for running Python UDF
URL: https://github.com/apache/flink/pull/10593#discussion_r358572258
 
 

 ##
 File path: docs/dev/table/functions/udfs.md
 ##
 @@ -176,6 +176,8 @@ table_env.sql_query("SELECT string, bigint, 
hashCode(string), py_hash_code(bigin
 
 There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`. The following example shows the different ways to 
define a Python scalar function which takes two columns of bigint as input 
parameters and returns the sum of them as the result.
 
+Note Python 3.5+ and apache-beam==2.15.0 
are required to run the python `ScalarFunction`.
 
 Review comment:
   ScalarFunction -> scalar function?
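   A minimal sketch of checking those requirements up front; the version pins simply 
mirror the note in the diff above (Python 3.5+ and apache-beam==2.15.0) and the check 
itself is purely illustrative:
   ```python
   import sys
   import pkg_resources

   # Illustrative only: verify the documented environment before submitting a job
   # that uses Python UDFs; adjust the pins to the release actually in use.
   if sys.version_info < (3, 5):
       raise RuntimeError("Python UDFs require Python 3.5 or newer")

   beam_version = pkg_resources.get_distribution("apache-beam").version
   if beam_version != "2.15.0":
       raise RuntimeError("Expected apache-beam==2.15.0, found %s" % beam_version)
   ```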


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu commented on a change in pull request #10593: [FLINK-15271][python][docs] Add documentation about the environment requirements for running Python UDF

2019-12-16 Thread GitBox
dianfu commented on a change in pull request #10593: 
[FLINK-15271][python][docs] Add documentation about the environment 
requirements for running Python UDF
URL: https://github.com/apache/flink/pull/10593#discussion_r358572205
 
 

 ##
 File path: docs/dev/table/functions/udfs.md
 ##
 @@ -176,6 +176,8 @@ table_env.sql_query("SELECT string, bigint, 
hashCode(string), py_hash_code(bigin
 
 There are many ways to define a Python scalar function besides extending the 
base class `ScalarFunction`. The following example shows the different ways to 
define a Python scalar function which takes two columns of bigint as input 
parameters and returns the sum of them as the result.
 
+Note Python 3.5+ and apache-beam==2.15.0 
are required to run the python `ScalarFunction`.
 
 Review comment:
   python -> Python


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu commented on a change in pull request #10597: [FLINK-15270][python][docs] Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread GitBox
dianfu commented on a change in pull request #10597: 
[FLINK-15270][python][docs] Add documentation about how to specify third-party 
dependencies via API for Python UDFs
URL: https://github.com/apache/flink/pull/10597#discussion_r358572060
 
 

 ##
 File path: docs/dev/table/functions/udfs.md
 ##
 @@ -211,6 +211,76 @@ table_env.register_function("add", add)
 # use the function in Python Table API
 my_table.select("add(a, b)")
 {% endhighlight %}
+
+If the python scalar function depends on other dependencies, you can specify 
the dependencies with the following table APIs or through command line directly when submit the 
job.
+
+
+  
+
+  Dependencies
+  Description
+
+  
+
+  
+
+  files
+  
+Adds python file dependencies which could be python files, python 
packages or local directories. They will be added to the PYTHONPATH of the 
python UDF worker.
+{% highlight python %}
+table_env.add_python_file(file_path)
+{% endhighlight %}
+  
+
+
+  requirements
+  
+Specifies a requirements.txt file which defines the third-party 
dependencies. These dependencies will be installed to a temporary directory and 
added to the PYTHONPATH of the python UDF worker. For the dependencies which 
could not be accessed in the cluster, a directory which contains the 
installation packages of these dependencies could be specified using the 
parameter "requirements_cached_dir". It will be uploaded to the cluster to 
support offline installation.
+{% highlight python %}
+# commands executed in shell
+echo numpy==1.16.5 > requirements.txt
+pip download -d cached_dir -r requirements.txt --no-binary :all:
+
+# python code
+table_env.set_python_requirements("requirements.txt", "cached_dir")
+{% endhighlight %}
+Please make sure the installation packages matches the platform of 
the cluster and the python version used. These packages will be installed using 
pip.
+  
+
+
+  archive
+  
+Adds a python archive file dependency. The file will be extracted 
to the working directory of python UDF worker. If the parameter "target_dir" is 
specified, the archive file will be extracted to a directory named 
"target_dir". Otherwise, the archive file will be extracted to a directory with 
the same name of the archive file.
+{% highlight python %}
+# command executed in shell
+# assert the relative path of python interpreter is py_env/bin/python
+zip -r py_env.zip py_env
+
+# python code
+table_env.add_python_archive("py_env.zip")
+table_env.get_config().set_python_executable("py_env.zip/py_env/bin/python")
+
+# or
+table_env.add_python_archive("py_env.zip", "myenv")
 
 Review comment:
   What about adding an example about how to use the data files of the archive 
in Python UDF?
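   A possible shape for such an example (a sketch only; the archive name, target 
directory and file contents below are made up):
   ```python
   from pyflink.datastream import StreamExecutionEnvironment
   from pyflink.table import StreamTableEnvironment, DataTypes
   from pyflink.table.udf import udf

   env = StreamExecutionEnvironment.get_execution_environment()
   table_env = StreamTableEnvironment.create(env)

   # "data.zip" is assumed to contain a file "dict.txt" with "key,value" lines.
   # The archive is extracted into the working directory of the Python UDF worker,
   # so its contents are reachable under the target directory "data/".
   table_env.add_python_archive("data.zip", "data")

   @udf(input_types=[DataTypes.STRING()], result_type=DataTypes.STRING())
   def lookup(key):
       with open("data/dict.txt") as f:
           for line in f:
               k, v = line.strip().split(",")
               if k == key:
                   return v
       return key

   table_env.register_function("lookup", lookup)
   ```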


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu commented on a change in pull request #10597: [FLINK-15270][python][docs] Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread GitBox
dianfu commented on a change in pull request #10597: 
[FLINK-15270][python][docs] Add documentation about how to specify third-party 
dependencies via API for Python UDFs
URL: https://github.com/apache/flink/pull/10597#discussion_r358571664
 
 

 ##
 File path: docs/dev/table/functions/udfs.md
 ##
 @@ -211,6 +211,76 @@ table_env.register_function("add", add)
 # use the function in Python Table API
 my_table.select("add(a, b)")
 {% endhighlight %}
+
+If the python scalar function depends on other dependencies, you can specify 
the dependencies with the following table APIs or through command line directly when submit the 
job.
 
 Review comment:
   submit -> submitting


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu commented on a change in pull request #10597: [FLINK-15270][python][docs] Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread GitBox
dianfu commented on a change in pull request #10597: 
[FLINK-15270][python][docs] Add documentation about how to specify third-party 
dependencies via API for Python UDFs
URL: https://github.com/apache/flink/pull/10597#discussion_r358571541
 
 

 ##
 File path: docs/dev/table/functions/udfs.md
 ##
 @@ -211,6 +211,76 @@ table_env.register_function("add", add)
 # use the function in Python Table API
 my_table.select("add(a, b)")
 {% endhighlight %}
+
+If the python scalar function depends on other dependencies, you can specify 
the dependencies with the following table APIs or through command line directly when submit the 
job.
+
+
+  
+
+  Dependencies
+  Description
+
+  
+
+  
+
+  files
+  
+Adds python file dependencies which could be python files, python 
packages or local directories. They will be added to the PYTHONPATH of the 
python UDF worker.
+{% highlight python %}
+table_env.add_python_file(file_path)
+{% endhighlight %}
+  
+
+
+  requirements
 
 Review comment:
   what about in bold format?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu commented on a change in pull request #10597: [FLINK-15270][python][docs] Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-16 Thread GitBox
dianfu commented on a change in pull request #10597: 
[FLINK-15270][python][docs] Add documentation about how to specify third-party 
dependencies via API for Python UDFs
URL: https://github.com/apache/flink/pull/10597#discussion_r358571616
 
 

 ##
 File path: docs/dev/table/functions/udfs.md
 ##
 @@ -211,6 +211,76 @@ table_env.register_function("add", add)
 # use the function in Python Table API
 my_table.select("add(a, b)")
 {% endhighlight %}
+
+If the python scalar function depends on other dependencies, you can specify 
the dependencies with the following table APIs or through command line directly when submit the 
job.
 
 Review comment:
   what about change to `depends on third-party dependencies`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #5929: [FLINK-8497] [connectors] KafkaConsumer throws NPE if topic doesn't exist

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #5929: [FLINK-8497] [connectors] 
KafkaConsumer throws NPE if topic doesn't exist
URL: https://github.com/apache/flink/pull/5929#issuecomment-566347441
 
 
   
   ## CI report:
   
   * d4da41dd8b53d4c481afd0b1fac1401dc371fed0 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/141319176) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #6336: [FLINK-9630] [connector] Kafka09PartitionDiscoverer cause connection …

2019-12-16 Thread GitBox
flinkbot edited a comment on issue #6336: [FLINK-9630] [connector] 
Kafka09PartitionDiscoverer cause connection …
URL: https://github.com/apache/flink/pull/6336#issuecomment-566346401
 
 
   
   ## CI report:
   
   * 0aa8d75af085c2465e8cfd9e5a572770a5d95738 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/141319163) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

