[GitHub] [flink] flinkbot edited a comment on issue #10212: [FLINK-14806][table-planner-blink] Add setTimestamp/getTimestamp inte…

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10212: [FLINK-14806][table-planner-blink] 
Add setTimestamp/getTimestamp inte…
URL: https://github.com/apache/flink/pull/10212#issuecomment-554261720
 
 
   
   ## CI report:
   
   * 95ec85f121bef0d4c51ed9846e35277c26c58aac : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136660869)
   * 055f9f757ffe55906c6a74bb223c6cb985e07151 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136972277)
   * e04f4bb1b9df8779ec01962985bf11dca5d86b6b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136989944)
   * 37434b1fe0a6c3f2b16f3a5b68e3049c9cc9c380 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137134585)
   * 037b91560c1d341c3cbc3336f276d59814a6e876 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #10224: [FLINK-14716][table-planner-blink] Cooperate computed column with push down rules

2019-11-18 Thread GitBox
danny0405 commented on a change in pull request #10224: 
[FLINK-14716][table-planner-blink] Cooperate computed column with push down 
rules
URL: https://github.com/apache/flink/pull/10224#discussion_r347773299
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/catalog/CatalogSchemaTable.java
 ##
 @@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.catalog;
+
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogBaseTable;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ConnectorCatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.QueryOperationCatalogView;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.plan.stats.TableStats;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic;
+import org.apache.flink.table.planner.sources.TableSourceUtil;
+import org.apache.flink.table.planner.utils.JavaScalaConversionUtil;
+import org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.TimestampKind;
+import org.apache.flink.table.types.logical.TimestampType;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.schema.TemporalTable;
+import org.apache.calcite.schema.impl.AbstractTable;
+
+import javax.annotation.Nonnull;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import scala.Option;
+
+import static java.lang.String.format;
+import static 
org.apache.flink.table.util.CatalogTableStatisticsConverter.convertToTableStats;
+
+/**
+ * Represents a wrapper for {@link CatalogBaseTable} in {@link 
org.apache.calcite.schema.Schema}.
+ *
+ * This table would be converted to
+ * {@link org.apache.flink.table.planner.plan.schema.FlinkPreparingTableBase}
+ * based on its internal source type during sql-to-rel conversion.
+ *
+ * See
+ * {@link 
org.apache.flink.table.planner.plan.FlinkCalciteCatalogReader#getTable(List)}
+ * for details.
+ */
+public class CatalogSchemaTable extends AbstractTable implements TemporalTable 
{
+   //~ Instance fields --------------------------------------------------------
+
+   private final ObjectIdentifier objectIdentifier;
+   private final Catalog catalog;
+   private final CatalogBaseTable catalogBaseTable;
+   private final boolean isStreamingMode;
+   private final boolean isTemporary;
+
+   //~ Constructors -----------------------------------------------------------
+
+   public CatalogSchemaTable(
+   ObjectIdentifier objectIdentifier,
+   Catalog catalog,
+   CatalogBaseTable catalogBaseTable,
+   boolean isStreaming,
+   boolean isTemporary) {
+   this.objectIdentifier = objectIdentifier;
+   this.catalog = catalog;
+   this.catalogBaseTable = catalogBaseTable;
+   this.isStreamingMode = isStreaming;
+   this.isTemporary = isTemporary;
+   }
+
+   //~ Methods ----------------------------------------------------------------
+
+   public Catalog getCatalog() {
+   return catalog;
+   }
+
+   public ObjectIdentifier getObjectIdentifier() {
+   return objectIdentifier;
+   }
+
+   public CatalogBaseTable getCatalogTable() {
+   

[jira] [Updated] (FLINK-14849) Can not submit job when use hive connector

2019-11-18 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-14849:
-
Description: 
{code:java}
With:

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>3.1.1</version>
</dependency>

Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory 
cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
at 
org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
at 
org.codehaus.commons.compiler.CompilerFactoryFactory.getDefaultCompilerFactory(CompilerFactoryFactory.java:79)
at 
org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:432)
... 68 more
{code}
After https://issues.apache.org/jira/browse/FLINK-13749 , flink-client uses the 
child-first resolve-order by default.

If the user jar has conflicting dependencies, problems like this can occur.

Maybe we should update the documentation to add some exclusions for the hive 
dependencies.
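For illustration, a minimal sketch of the kind of exclusion such documentation 
could suggest, assuming the clash comes from the Janino artifacts pulled in via 
hive-exec (the exact artifacts to exclude are an assumption here; switching 
classloader.resolve-order back to parent-first is another possible workaround):

{code:xml}
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>3.1.1</version>
    <exclusions>
        <!-- hypothetical exclusions: keep the Janino version expected by
             Flink's planner on the classpath -->
        <exclusion>
            <groupId>org.codehaus.janino</groupId>
            <artifactId>janino</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.codehaus.janino</groupId>
            <artifactId>commons-compiler</artifactId>
        </exclusion>
    </exclusions>
</dependency>
{code}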

  was:
{code:java}
With:

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>3.1.1</version>
</dependency>

Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory 
cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
at 
org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
at 
org.codehaus.commons.compiler.CompilerFactoryFactory.getDefaultCompilerFactory(CompilerFactoryFactory.java:79)
at 
org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:432)
... 68 more
{code}
After https://issues.apache.org/jira/browse/FLINK-13749 , flink-client uses the 
child-first resolve-order by default.

If the user jar has conflicting dependencies, problems like this can occur.


> Can not submit job when use hive connector
> --
>
> Key: FLINK-14849
> URL: https://issues.apache.org/jira/browse/FLINK-14849
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Priority: Major
>
> {code:java}
> With:
> 
> <dependency>
>     <groupId>org.apache.hive</groupId>
>     <artifactId>hive-exec</artifactId>
>     <version>3.1.1</version>
> </dependency>
> 
> Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory 
> cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
>   at 
> org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
>   at 
> org.codehaus.commons.compiler.CompilerFactoryFactory.getDefaultCompilerFactory(CompilerFactoryFactory.java:79)
>   at 
> org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:432)
>   ... 68 more
> {code}
> After https://issues.apache.org/jira/browse/FLINK-13749 , flink-client uses 
> the child-first resolve-order by default.
> If the user jar has conflicting dependencies, problems like this can occur.
> Maybe we should update the documentation to add some exclusions for the hive 
> dependencies.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14849) Can not submit job when use hive connector

2019-11-18 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-14849:


 Summary: Can not submit job when use hive connector
 Key: FLINK-14849
 URL: https://issues.apache.org/jira/browse/FLINK-14849
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Reporter: Jingsong Lee


{code:java}
With:

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>3.1.1</version>
</dependency>

Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory 
cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
at 
org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
at 
org.codehaus.commons.compiler.CompilerFactoryFactory.getDefaultCompilerFactory(CompilerFactoryFactory.java:79)
at 
org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:432)
... 68 more
{code}
After https://issues.apache.org/jira/browse/FLINK-13749 , flink-client uses the 
child-first resolve-order by default.

If the user jar has conflicting dependencies, problems like this can occur.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-14834) Kerberized YARN on Docker test (custom fs plugin) fails on Travis

2019-11-18 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-14834:


Assignee: Aljoscha Krettek

> Kerberized YARN on Docker test (custom fs plugin) fails on Travis
> -
>
> Key: FLINK-14834
> URL: https://issues.apache.org/jira/browse/FLINK-14834
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Tests
>Affects Versions: 1.10.0
>Reporter: Gary Yao
>Assignee: Aljoscha Krettek
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> https://api.travis-ci.org/v3/job/612782888/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10245: [FLINK-10936][kubernetes] Implement Kubernetes command line tools to support session cluster.

2019-11-18 Thread GitBox
flinkbot commented on issue #10245: [FLINK-10936][kubernetes] Implement 
Kubernetes command line tools to support session cluster.
URL: https://github.com/apache/flink/pull/10245#issuecomment-555375223
 
 
   
   ## CI report:
   
   * 71f8401be1cb214e56f94e486c495ef7c800cfc4 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10246: [FLINK-10937][dist] Add kubernetes-entry.sh and kubernetes-session.sh.

2019-11-18 Thread GitBox
flinkbot commented on issue #10246: [FLINK-10937][dist] Add kubernetes-entry.sh 
and kubernetes-session.sh.
URL: https://github.com/apache/flink/pull/10246#issuecomment-555375256
 
 
   
   ## CI report:
   
   * 9f06575f29a7cb1102357c8919b57b3471dc80fd : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #10224: [FLINK-14716][table-planner-blink] Cooperate computed column with push down rules

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10224: 
[FLINK-14716][table-planner-blink] Cooperate computed column with push down 
rules
URL: https://github.com/apache/flink/pull/10224#discussion_r347760695
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/catalog/DatabaseCalciteSchema.java
 ##
 @@ -80,150 +69,49 @@ public Table getTable(String tableName) {
return catalogManager.getTable(identifier)
.map(result -> {
CatalogBaseTable table = result.getTable();
-   if (result.isTemporary()) {
-   return convertTemporaryTable(new 
ObjectPath(databaseName, tableName), table);
-   } else {
-   return convertPermanentTable(
-   identifier.toObjectPath(),
-   table,
-   
catalogManager.getCatalog(catalogName)
-   
.flatMap(Catalog::getTableFactory)
-   .orElse(null)
-   );
-   }
+   Catalog catalog = 
catalogManager.getCatalog(catalogName).get();
+   FlinkStatistic statistic = 
getStatistic(result.isTemporary(),
+   catalog, table, identifier);
+   return new CatalogSchemaTable(identifier,
+   table,
+   statistic,
+   catalog.getTableFactory().orElse(null),
+   isStreamingMode,
+   result.isTemporary());
})
.orElse(null);
}
 
-   private Table convertPermanentTable(
-   ObjectPath tablePath,
-   CatalogBaseTable table,
-   @Nullable TableFactory tableFactory) {
-   if (table instanceof QueryOperationCatalogView) {
-   return 
convertQueryOperationView((QueryOperationCatalogView) table);
-   } else if (table instanceof ConnectorCatalogTable) {
-   ConnectorCatalogTable connectorTable = 
(ConnectorCatalogTable) table;
-   if ((connectorTable).getTableSource().isPresent()) {
-   TableStats tableStats = 
extractTableStats(connectorTable, tablePath);
-   return convertSourceTable(connectorTable, 
tableStats);
-   } else {
-   return convertSinkTable(connectorTable);
-   }
-   } else if (table instanceof CatalogTable) {
-   CatalogTable catalogTable = (CatalogTable) table;
-   TableStats tableStats = extractTableStats(catalogTable, 
tablePath);
-   return convertCatalogTable(tablePath, catalogTable, 
tableFactory, tableStats);
-   } else {
-   throw new TableException("Unsupported table type: " + 
table);
-   }
-   }
-
-   private Table convertTemporaryTable(
-   ObjectPath tablePath,
-   CatalogBaseTable table) {
-   if (table instanceof QueryOperationCatalogView) {
-   return 
convertQueryOperationView((QueryOperationCatalogView) table);
-   } else if (table instanceof ConnectorCatalogTable) {
-   ConnectorCatalogTable connectorTable = 
(ConnectorCatalogTable) table;
-   if ((connectorTable).getTableSource().isPresent()) {
-   return convertSourceTable(connectorTable, 
TableStats.UNKNOWN);
-   } else {
-   return convertSinkTable(connectorTable);
-   }
-   } else if (table instanceof CatalogTable) {
-   return convertCatalogTable(tablePath, (CatalogTable) 
table, null, TableStats.UNKNOWN);
-   } else {
-   throw new TableException("Unsupported table type: " + 
table);
+   private static FlinkStatistic getStatistic(boolean isTemporary, Catalog 
catalog,
+   CatalogBaseTable catalogBaseTable, ObjectIdentifier 
tableIdentifier) {
+   if (isTemporary || catalogBaseTable instanceof 
QueryOperationCatalogView) {
+   return FlinkStatistic.UNKNOWN();
}
-   }
-
-   private Table convertQueryOperationView(QueryOperationCatalogView 
table) {
-   return Query

[GitHub] [flink] KurtYoung commented on a change in pull request #10224: [FLINK-14716][table-planner-blink] Cooperate computed column with push down rules

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10224: 
[FLINK-14716][table-planner-blink] Cooperate computed column with push down 
rules
URL: https://github.com/apache/flink/pull/10224#discussion_r347765013
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/PushProjectIntoTableSourceScanRule.scala
 ##
 @@ -94,16 +94,12 @@ class PushProjectIntoTableSourceScanRule extends 
RelOptRule(
 }
 
 // project push down does not change the statistic, we can reuse origin 
statistic
-val newTableSourceTable = new TableSourceTable(
+val newTableSourceTable = tableSourceTable.copy(
   newTableSource,
-  tableSourceTable.isStreamingMode,
-  tableSourceTable.statistic,
-  Option(usedFields),
-  tableSourceTable.catalogTable)
+  tableSourceTable.getStatistic,
 
 Review comment:
   No need to pass the statistic to this copy method.
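   A sketch of how the call site could look under that suggestion (the 
two-argument `copy` overload is assumed, not part of the diff):
   
   ```
   // hypothetical call shape once the statistic argument is dropped: copy
   // reuses the origin statistic, which project push down does not change
   val newTableSourceTable = tableSourceTable.copy(newTableSource, usedFields)
   ```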


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #10224: [FLINK-14716][table-planner-blink] Cooperate computed column with push down rules

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10224: 
[FLINK-14716][table-planner-blink] Cooperate computed column with push down 
rules
URL: https://github.com/apache/flink/pull/10224#discussion_r347762559
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/FlinkCalciteCatalogReader.java
 ##
 @@ -62,17 +80,181 @@ public FlinkCalciteCatalogReader(
if (originRelOptTable == null) {
return null;
} else {
-   // Wrap FlinkTable as FlinkRelOptTable to use in query 
optimization.
-   FlinkTable table = 
originRelOptTable.unwrap(FlinkTable.class);
+   // Wrap as FlinkPreparingTableBase to use in query 
optimization.
+   CatalogSchemaTable table = 
originRelOptTable.unwrap(CatalogSchemaTable.class);
if (table != null) {
-   return FlinkRelOptTable.create(
-   originRelOptTable.getRelOptSchema(),
-   originRelOptTable.getRowType(),
+   return 
toPreparingTable(originRelOptTable.getRelOptSchema(),
originRelOptTable.getQualifiedName(),
+   originRelOptTable.getRowType(),
table);
} else {
return originRelOptTable;
}
}
}
+
+   /**
+* Translate this {@link CatalogSchemaTable} into Flink source table.
+*/
+   private static FlinkPreparingTableBase toPreparingTable(
+   RelOptSchema relOptSchema,
+   List names,
+   RelDataType rowType,
+   CatalogSchemaTable table) {
+   if (table.isTemporary()) {
+   return convertTemporaryTable(
+   relOptSchema,
+   names,
+   rowType,
+   table);
+   } else {
+   return convertPermanentTable(
+   relOptSchema,
+   names,
+   rowType,
+   table);
+   }
+   }
+
+   private static FlinkPreparingTableBase convertPermanentTable(
+   RelOptSchema relOptSchema,
+   List names,
+   RelDataType rowType,
+   CatalogSchemaTable schemaTable) {
+   final CatalogBaseTable table = schemaTable.getCatalogTable();
+   if (table instanceof QueryOperationCatalogView) {
+   return convertQueryOperationView(relOptSchema,
+   names,
+   rowType,
+   (QueryOperationCatalogView) table);
+   } else if (table instanceof ConnectorCatalogTable) {
+   ConnectorCatalogTable connectorTable = 
(ConnectorCatalogTable) table;
+   if ((connectorTable).getTableSource().isPresent()) {
+   return convertSourceTable(relOptSchema,
+   names,
+   rowType,
+   connectorTable,
+   schemaTable.getStatistic(),
+   schemaTable.isStreamingMode());
+   } else {
+   throw new ValidationException("Cannot convert a 
connector table " +
+   "without source.");
+   }
+   } else if (table instanceof CatalogTable) {
+   CatalogTable catalogTable = (CatalogTable) table;
+   return convertCatalogTable(relOptSchema,
+   names,
+   rowType,
+   catalogTable,
+   schemaTable);
+   } else {
+   throw new ValidationException("Unsupported table type: 
" + table);
+   }
+   }
+
+   private static FlinkPreparingTableBase convertTemporaryTable(
+   RelOptSchema relOptSchema,
+   List names,
+   RelDataType rowType,
+   CatalogSchemaTable schemaTable) {
+   final CatalogBaseTable baseTable = 
schemaTable.getCatalogTable();
+   if (baseTable instanceof QueryOperationCatalogView) {
+   return convertQueryOperationView(relOptSchema,
+   names,
+ 

[GitHub] [flink] KurtYoung commented on a change in pull request #10224: [FLINK-14716][table-planner-blink] Cooperate computed column with push down rules

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10224: 
[FLINK-14716][table-planner-blink] Cooperate computed column with push down 
rules
URL: https://github.com/apache/flink/pull/10224#discussion_r347759479
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/catalog/DatabaseCalciteSchema.java
 ##
 @@ -80,150 +69,49 @@ public Table getTable(String tableName) {
return catalogManager.getTable(identifier)
.map(result -> {
CatalogBaseTable table = result.getTable();
-   if (result.isTemporary()) {
-   return convertTemporaryTable(new 
ObjectPath(databaseName, tableName), table);
-   } else {
-   return convertPermanentTable(
-   identifier.toObjectPath(),
-   table,
-   
catalogManager.getCatalog(catalogName)
-   
.flatMap(Catalog::getTableFactory)
-   .orElse(null)
-   );
-   }
+   Catalog catalog = 
catalogManager.getCatalog(catalogName).get();
+   FlinkStatistic statistic = 
getStatistic(result.isTemporary(),
+   catalog, table, identifier);
+   return new CatalogSchemaTable(identifier,
+   table,
+   statistic,
+   catalog.getTableFactory().orElse(null),
+   isStreamingMode,
+   result.isTemporary());
})
.orElse(null);
}
 
-   private Table convertPermanentTable(
-   ObjectPath tablePath,
-   CatalogBaseTable table,
-   @Nullable TableFactory tableFactory) {
-   if (table instanceof QueryOperationCatalogView) {
-   return 
convertQueryOperationView((QueryOperationCatalogView) table);
-   } else if (table instanceof ConnectorCatalogTable) {
-   ConnectorCatalogTable connectorTable = 
(ConnectorCatalogTable) table;
-   if ((connectorTable).getTableSource().isPresent()) {
-   TableStats tableStats = 
extractTableStats(connectorTable, tablePath);
-   return convertSourceTable(connectorTable, 
tableStats);
-   } else {
-   return convertSinkTable(connectorTable);
-   }
-   } else if (table instanceof CatalogTable) {
-   CatalogTable catalogTable = (CatalogTable) table;
-   TableStats tableStats = extractTableStats(catalogTable, 
tablePath);
-   return convertCatalogTable(tablePath, catalogTable, 
tableFactory, tableStats);
-   } else {
-   throw new TableException("Unsupported table type: " + 
table);
-   }
-   }
-
-   private Table convertTemporaryTable(
-   ObjectPath tablePath,
-   CatalogBaseTable table) {
-   if (table instanceof QueryOperationCatalogView) {
-   return 
convertQueryOperationView((QueryOperationCatalogView) table);
-   } else if (table instanceof ConnectorCatalogTable) {
-   ConnectorCatalogTable connectorTable = 
(ConnectorCatalogTable) table;
-   if ((connectorTable).getTableSource().isPresent()) {
-   return convertSourceTable(connectorTable, 
TableStats.UNKNOWN);
-   } else {
-   return convertSinkTable(connectorTable);
-   }
-   } else if (table instanceof CatalogTable) {
-   return convertCatalogTable(tablePath, (CatalogTable) 
table, null, TableStats.UNKNOWN);
-   } else {
-   throw new TableException("Unsupported table type: " + 
table);
+   private static FlinkStatistic getStatistic(boolean isTemporary, Catalog 
catalog,
+   CatalogBaseTable catalogBaseTable, ObjectIdentifier 
tableIdentifier) {
+   if (isTemporary || catalogBaseTable instanceof 
QueryOperationCatalogView) {
+   return FlinkStatistic.UNKNOWN();
}
-   }
-
-   private Table convertQueryOperationView(QueryOperationCatalogView 
table) {
-   return Query

[GitHub] [flink] KurtYoung commented on a change in pull request #10224: [FLINK-14716][table-planner-blink] Cooperate computed column with push down rules

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10224: 
[FLINK-14716][table-planner-blink] Cooperate computed column with push down 
rules
URL: https://github.com/apache/flink/pull/10224#discussion_r347761976
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/FlinkCalciteCatalogReader.java
 ##
 @@ -62,17 +80,181 @@ public FlinkCalciteCatalogReader(
if (originRelOptTable == null) {
return null;
} else {
-   // Wrap FlinkTable as FlinkRelOptTable to use in query 
optimization.
-   FlinkTable table = 
originRelOptTable.unwrap(FlinkTable.class);
+   // Wrap as FlinkPreparingTableBase to use in query 
optimization.
+   CatalogSchemaTable table = 
originRelOptTable.unwrap(CatalogSchemaTable.class);
if (table != null) {
-   return FlinkRelOptTable.create(
-   originRelOptTable.getRelOptSchema(),
-   originRelOptTable.getRowType(),
+   return 
toPreparingTable(originRelOptTable.getRelOptSchema(),
originRelOptTable.getQualifiedName(),
+   originRelOptTable.getRowType(),
table);
} else {
return originRelOptTable;
}
}
}
+
+   /**
+* Translate this {@link CatalogSchemaTable} into Flink source table.
+*/
+   private static FlinkPreparingTableBase toPreparingTable(
+   RelOptSchema relOptSchema,
+   List names,
+   RelDataType rowType,
+   CatalogSchemaTable table) {
+   if (table.isTemporary()) {
 
 Review comment:
   If I'm not mistaken, `convertTemporaryTable` and `convertPermanentTable` 
have exactly the same logic here.
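   A minimal sketch of the consolidation this implies, assuming the two paths 
really are identical (`convertTable` is a hypothetical merged helper, not from 
the PR):
   
   ```
   // hypothetical simplification: temporary and permanent tables share one
   // conversion path, so the isTemporary branch disappears
   private static FlinkPreparingTableBase toPreparingTable(
           RelOptSchema relOptSchema,
           List<String> names,
           RelDataType rowType,
           CatalogSchemaTable table) {
       return convertTable(relOptSchema, names, rowType, table);
   }
   ```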


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #10224: [FLINK-14716][table-planner-blink] Cooperate computed column with push down rules

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10224: 
[FLINK-14716][table-planner-blink] Cooperate computed column with push down 
rules
URL: https://github.com/apache/flink/pull/10224#discussion_r347765410
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/schema/TableSourceTable.scala
 ##
 @@ -70,60 +70,39 @@ class TableSourceTable[T](
   " via DefinedRowtimeAttributes interface.")
   }
 
-  // TODO implements this
-  // TableSourceUtil.validateTableSource(tableSource)
-
-  override def getRowType(typeFactory: RelDataTypeFactory): RelDataType = {
-val factory = typeFactory.asInstanceOf[FlinkTypeFactory]
-val (fieldNames, fieldTypes) = TableSourceUtil.getFieldNameType(
-  catalogTable.getSchema,
-  tableSource,
-  selectedFields,
-  streaming = isStreamingMode)
-// patch rowtime field according to WatermarkSpec
-val patchedTypes = if (isStreamingMode && watermarkSpec.isDefined) {
-  // TODO: [FLINK-14473] we only support top-level rowtime attribute right 
now
-  val rowtime = watermarkSpec.get.getRowtimeAttribute
-  if (rowtime.contains(".")) {
-throw new TableException(
-  s"Nested field '$rowtime' as rowtime attribute is not supported 
right now.")
-  }
-  val idx = fieldNames.indexOf(rowtime)
-  val originalType = fieldTypes(idx).asInstanceOf[TimestampType]
-  val rowtimeType = new TimestampType(
-originalType.isNullable,
-TimestampKind.ROWTIME,
-originalType.getPrecision)
-  fieldTypes.patch(idx, Seq(rowtimeType), 1)
-} else {
-  fieldTypes
-}
-factory.buildRelNodeRowType(fieldNames, patchedTypes)
-  }
+  override def getQualifiedName: JList[String] = 
explainSourceAsString(tableSource)
 
   /**
-* Creates a copy of this table, changing statistic.
+* Creates a copy of this table, changing table source and statistic.
 *
-* @param statistic A new FlinkStatistic.
-* @return Copy of this table, substituting statistic.
+* @param tableSource tableSource to replace
+* @param statistic New FlinkStatistic to replace
+* @return New TableSourceTable instance with specified table source and 
[[FlinkStatistic]]
 */
-  override def copy(statistic: FlinkStatistic): TableSourceTable[T] = {
-new TableSourceTable(tableSource, isStreamingMode, statistic, catalogTable)
+  def copy(tableSource: TableSource[_], statistic: FlinkStatistic): 
TableSourceTable[T] = {
+new TableSourceTable[T](relOptSchema, names, rowType, statistic,
+  tableSource.asInstanceOf[TableSource[T]], isStreamingMode, catalogTable)
   }
 
   /**
-* Returns statistics of current table.
-*/
-  override def getStatistic: FlinkStatistic = statistic
-
-  /**
-* Replaces table source with the given one, and create a new table source 
table.
+* Creates a copy of this table, changing table source, statistic and 
rowType based on
+* selected fields.
 *
-* @param tableSource tableSource to replace.
-* @return new TableSourceTable
+* @param tableSource tableSource to replace
+* @param statistic New FlinkStatistic to replace
+* @param selectedFields Selected indices of the table source output fields
+* @return New TableSourceTable instance with specified table source, 
[[FlinkStatistic]],
+* and selected fields
 */
-  def replaceTableSource(tableSource: TableSource[T]): TableSourceTable[T] = {
-new TableSourceTable[T](
-  tableSource, isStreamingMode, statistic, catalogTable)
+  def copy(tableSource: TableSource[_], statistic: FlinkStatistic,
 
 Review comment:
   This copy doesn't have to provide `statistic`.
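   A rough sketch of that overload, reusing the table's own statistic 
(constructor arguments follow the diff above; the overload itself is 
hypothetical):
   
   ```
   // hypothetical overload: keep the origin statistic so callers pass only the
   // new table source and the selected fields (rowType adjustment elided here)
   def copy(tableSource: TableSource[_], selectedFields: Array[Int]): TableSourceTable[T] = {
     new TableSourceTable[T](relOptSchema, names, rowType, statistic,
       tableSource.asInstanceOf[TableSource[T]], isStreamingMode, catalogTable)
   }
   ```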


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Mrart commented on issue #10230: [FLINK-14803][metrics]Support Consistency Level for InfluxDB metrics

2019-11-18 Thread GitBox
Mrart commented on issue #10230: [FLINK-14803][metrics]Support Consistency 
Level for InfluxDB metrics
URL: https://github.com/apache/flink/pull/10230#issuecomment-555371384
 
 
   @flinkbot run travis


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10212: [FLINK-14806][table-planner-blink] Add setTimestamp/getTimestamp inte…

2019-11-18 Thread GitBox
JingsongLi commented on a change in pull request #10212: 
[FLINK-14806][table-planner-blink] Add setTimestamp/getTimestamp inte…
URL: https://github.com/apache/flink/pull/10212#discussion_r347764260
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/VectorizedColumnBatch.java
 ##
 @@ -132,4 +133,17 @@ public Decimal getDecimal(int rowId, int colId, int 
precision, int scale) {
return Decimal.fromUnscaledBytes(precision, scale, 
bytes);
}
}
+
+   public SqlTimestamp getTimestamp(int rowId, int colId, int precision) {
+   if (isNullAt(rowId, colId)) {
+   return null;
+   }
+
+   if (columns[colId] instanceof TimestampColumnVector) {
+   return ((TimestampColumnVector) 
(columns[colId])).getTimestamp(rowId, precision);
+   } else {
+   // by default, we assume the underlying 
LongColumnVector holds millisecond since Epoch.
+   return SqlTimestamp.fromEpochMillis(((LongColumnVector) 
columns[colId]).getLong(rowId));
 
 Review comment:
   -1 for this implementation.
   This fallback is not compatible with any actual data format. And now that 
there is an interface (`TimestampColumnVector`), why should there be a specific 
fallback implementation?
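   For reference, a minimal sketch of the interface-only variant being argued 
for here (assuming the `TimestampColumnVector` interface from this PR):
   
   ```
   // hypothetical variant with no LongColumnVector fallback: every vector that
   // can produce timestamps is required to implement TimestampColumnVector
   public SqlTimestamp getTimestamp(int rowId, int colId, int precision) {
       if (isNullAt(rowId, colId)) {
           return null;
       }
       // a ClassCastException here signals a vector that violates the contract
       return ((TimestampColumnVector) columns[colId]).getTimestamp(rowId, precision);
   }
   ```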


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10246: [FLINK-10937][dist] Add kubernetes-entry.sh and kubernetes-session.sh.

2019-11-18 Thread GitBox
flinkbot commented on issue #10246: [FLINK-10937][dist] Add kubernetes-entry.sh 
and kubernetes-session.sh.
URL: https://github.com/apache/flink/pull/10246#issuecomment-555370657
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9f06575f29a7cb1102357c8919b57b3471dc80fd (Tue Nov 19 
07:25:02 UTC 2019)
   
   **Warnings:**
* **4 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10152: [FLINK-14466][runtime] Let YarnJobClusterEntrypoint use user code class loader

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10152: [FLINK-14466][runtime] Let 
YarnJobClusterEntrypoint use user code class loader
URL: https://github.com/apache/flink/pull/10152#issuecomment-552497066
 
 
   
   ## CI report:
   
   * 5875fa6d987995f327cffae7912c47f4dc51e944 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135953858)
   * 1c9b982ef3ee82b3088ab2c6bf1c48971ad79cc8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136142763)
   * 7e5b99825bc1fd7ffe04163158e5cfcb7164bfb9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136272745)
   * a4ced0f532ca317e3495d35758faf46c0252d44b : UNKNOWN
   * 537133983af86ac1a25c784523be74b012ec8ee3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136471493)
   * e33ca327aef49b9236a933b2f9a5a0ba4e9418ce : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136556489)
   * 89664127f3614ab017338d43308fd5fa36fd053a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136701823)
   * a2a883bbdedc3a2e62d86d1f2bbbad9dd45dc35d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137134573)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-10937) Add entrypoint scripts for k8s

2019-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10937:
---
Labels: pull-request-available  (was: )

> Add entrypoint scripts for k8s
> --
>
> Key: FLINK-10937
> URL: https://issues.apache.org/jira/browse/FLINK-10937
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: JIN SUN
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
>
> Flink's official docker image could be used for the active kubernetes 
> integration. An entrypoint script for k8s should be added.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 opened a new pull request #10246: [FLINK-10937][dist] Add kubernetes-entry.sh for JobManager and TaskManager container entrypoint. Add kubernetes-session.sh for starting

2019-11-18 Thread GitBox
wangyang0918 opened a new pull request #10246: [FLINK-10937][dist] Add 
kubernetes-entry.sh for JobManager and TaskManager container entrypoint. Add 
kubernetes-session.sh for starting session cluster.
URL: https://github.com/apache/flink/pull/10246
 
 
   
   
   ## What is the purpose of the change
   
   Add kubernetes-entry.sh for JobManager and TaskManager container entrypoint. 
Add kubernetes-session.sh for starting session cluster.
   
   This PR is based on  #9957 #9965 #9973 #9984 #9985 #9986 #10245.
   
   ```
   wangyang-pc:build-target danrtsey.wy$ ./bin/kubernetes-session.sh -h
   Usage:
  Required
   
  Optional
     -d,--detached              If present, runs the job in detached mode
     -D                         use value for given property
     -h,--help                  Help for Kubernetes session CLI.
     -i,--image                 Image to use for Flink containers.
     -id,--clusterId            The cluster id that will be used for the flink
                                cluster. If it's not set, the client will
                                generate a random UUID name.
     -jm,--jobManagerMemory     Memory for JobManager Container with optional
                                unit (default: MB)
     -s,--slots                 Number of slots per TaskManager
     -tm,--taskManagerMemory    Memory per TaskManager Container with optional
                                unit (default: MB)
   ```
   
   
   ## Brief change log
   
   * Add kubernetes-entry.sh and kubernetes-session.sh
   * Update conf/log4j-cli.properties to support kubernetes
   
   
   ## Verifying this change
   
   * This PR can be checked manually. After this PR, the active Kubernetes 
integration can be used.
   * Start a new session
   ```
   ./bin/kubernetes-session.sh -d -id flink-native-k8s-session-1 \
   -i flink:flink-1.10-SNAPSHOT-k8s \
   -jm 2048 -tm 4096 -s 8 \
   -Dkubernetes.jobmanager.cpu=0.5 -Dkubernetes.taskmanager.cpu=2 \
   -Dkubernetes.container.image.pullPolicy=Always
   ```
   * Submit a job to the existing session
   ```
   ./bin/flink run -d -kid flink-native-k8s-session-1 
examples/streaming/WindowJoin.jar
   ```
   * Stop the session
   ```
   echo 'stop' | ./bin/kubernetes-session.sh -id flink-native-k8s-session-1
   ```
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (will add doc in a new PR)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14813) Expose the new mechanism implemented in FLINK-14472 as a "is back-pressured" metric

2019-11-18 Thread lining (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lining updated FLINK-14813:
---
Description: 
{
  "id": "0.Shuffle.Netty.BackPressure.isBackPressured",
  "value": "true"
}

> Expose the new mechanism implemented in FLINK-14472 as a "is back-pressured" 
> metric
> ---
>
> Key: FLINK-14813
> URL: https://issues.apache.org/jira/browse/FLINK-14813
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics, Runtime / Network, Runtime / REST
>Reporter: lining
>Assignee: lining
>Priority: Major
>
> {
>   "id": "0.Shuffle.Netty.BackPressure.isBackPressured",
>   "value": "true"
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10245: [FLINK-10936][kubernetes] Implement Kubernetes command line tools to support session cluster. The FlinkKubernetesCustomCli will be added to customC

2019-11-18 Thread GitBox
flinkbot commented on issue #10245: [FLINK-10936][kubernetes] Implement 
Kubernetes command line tools to support session cluster. The 
FlinkKubernetesCustomCli will be added to customCommandLines of CliFrontend.
URL: https://github.com/apache/flink/pull/10245#issuecomment-555368480
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 71f8401be1cb214e56f94e486c495ef7c800cfc4 (Tue Nov 19 
07:17:25 UTC 2019)
   
   **Warnings:**
* **4 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement 
KubernetesSessionClusterEntrypoint.
URL: https://github.com/apache/flink/pull/9986#issuecomment-545891630
 
 
   
   ## CI report:
   
   * 593bf42620faf09c1accbd692494646194e3d574 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133371151)
   * bb9fbd1d51a478793f63ae8b6d6e92b6a5a53775 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133940616)
   * bc25d444faeeaa773f040b14159aafe5a6a5a975 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133953929)
   * d8819bf3615c497b501399bc476de889c17dc239 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133998596)
   * d1430642ae91d8ed58479fb9d1492c433312a9b2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134010391)
   * 17ffc7f7d12f2115eb6b4c86af2c627ce1ad68aa : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/137136929)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-10936) Implement Command line tools

2019-11-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10936:
---
Labels: pull-request-available  (was: )

> Implement Command line tools
> 
>
> Key: FLINK-10936
> URL: https://issues.apache.org/jira/browse/FLINK-10936
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: JIN SUN
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
>
> Implement command line tools to start kubernetes sessions: 
>  * k8s-session.sh to start and stop a session, like we do in yarn-session.sh
>  * a customized command line that will be invoked by CliFrontend and 
> ./bin/flink run to submit jobs to the kubernetes cluster



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9433: [FLINK-13708] [table-planner-blink] transformations should be cleared after execution in blink planner

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #9433: [FLINK-13708] [table-planner-blink] 
transformations should be cleared after execution in blink planner
URL: https://github.com/apache/flink/pull/9433#issuecomment-521131546
 
 
   
   ## CI report:
   
   * 22d047614613c293a7aca416268449b3cabcad6a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123164756)
   * 255e8d57f2eabf7fbfeefe73f10287493e8a5c2d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123375768)
   * aacac7867ac81946a8e4427334e91c65c0d3e08f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123451412)
   * e68d7394eaba76a806020b12bf4d3ea61cb4f8f3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123482934)
   * b77e7a21d562a83717793490573fab7dfe297b78 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/126357307)
   * 1c32f7c517f78c8d0dd4f093689ddcda138430b4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133151778)
   * 141ddcd9eec5702c20f9d1aff0e52e81d46d407b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135600094)
   * 6cccdad60bd618ab3fae4ce0ebac9ca2ca35067d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136676363)
   * d4f6564ee3118b35275a90640e4570b09a4605bd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137009265)
   * 11bdbdee2fb1183134f858ffbcc6e41beeebdb9f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/137136911)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangyang0918 opened a new pull request #10245: [FLINK-10936][kubernetes] Implement Kubernetes command line tools to support session cluster. The FlinkKubernetesCustomCli will be added

2019-11-18 Thread GitBox
wangyang0918 opened a new pull request #10245: [FLINK-10936][kubernetes] 
Implement Kubernetes command line tools to support session cluster. The 
FlinkKubernetesCustomCli will be added to customCommandLines of CliFrontend.
URL: https://github.com/apache/flink/pull/10245
 
 
   
   
   ## What is the purpose of the change
   Implement Kubernetes command line tools to support session clusters. The 
FlinkKubernetesCustomCli will be added to customCommandLines of CliFrontend. The 
CLI will support the following options.
   ```
 Options for kubernetes-cluster mode:
     -d,--detached                        If present, runs the job in
                                          detached mode
     -kD                                  use value for given property
     -kh,--kuberneteshelp                 Help for Kubernetes session CLI.
     -ki,--kubernetesimage                Image to use for Flink containers.
     -kid,--kubernetesclusterId           The cluster id that will be used
                                          for the flink cluster. If it's not
                                          set, the client will generate a
                                          random UUID name.
     -kjm,--kubernetesjobManagerMemory    Memory for JobManager Container
                                          with optional unit (default: MB)
     -ks,--kubernetesslots                Number of slots per TaskManager
     -ktm,--kubernetestaskManagerMemory   Memory per TaskManager Container
                                          with optional unit (default: MB)
   ```
   
   
   ## Brief change log
   
   * Add FlinkKubernetesCustomCli
   * Update CliFrontend to support kubernetes
   
   
   ## Verifying this change
   * This PR is covered by unit test
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14164) Add a metric to show failover count regarding fine grained recovery

2019-11-18 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977184#comment-16977184
 ] 

Zhu Zhu commented on FLINK-14164:
-

Thanks for confirming it! [~stevenz3wu]

> Add a metric to show failover count regarding fine grained recovery
> ---
>
> Key: FLINK-14164
> URL: https://issues.apache.org/jira/browse/FLINK-14164
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / Metrics
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Previously, Flink used the restart-all strategy to recover jobs from failures, 
> and the metric "fullRestart" showed the count of failovers.
> However, with fine grained recovery introduced in 1.9.0, the "fullRestart" 
> metric only reveals how many times the entire graph has been restarted, not 
> including the number of fine grained failure recoveries.
> As many users want to build their job alerting on failovers, I'd propose 
> adding a new metric {{numberOfRestarts}} which also counts fine grained 
> recoveries. The metric should be a Gauge.
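A minimal sketch of how such a Gauge could be wired up (class and method names 
below are hypothetical, not from this issue):

{code:java}
import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;

// hypothetical sketch: a counter bumped on every restart, fine grained ones
// included, exposed under the proposed metric name
class RestartMetrics {
    private volatile long numberOfRestarts;

    void register(MetricGroup group) {
        group.gauge("numberOfRestarts", (Gauge<Long>) () -> numberOfRestarts);
    }

    void onRestart() {
        numberOfRestarts++;
    }
}
{code}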



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14760) Documentation links check failed on travis

2019-11-18 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977176#comment-16977176
 ] 

Dian Fu commented on FLINK-14760:
-

[~phoenixjiangnan] Sorry to interrupt. Is there any progress on this issue? It 
breaks the nightly end-to-end tests, so I guess we should fix it ASAP.

> Documentation links check failed on travis
> --
>
> Key: FLINK-14760
> URL: https://issues.apache.org/jira/browse/FLINK-14760
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.10.0
>Reporter: Dian Fu
>Assignee: Bowen Li
>Priority: Blocker
> Fix For: 1.10.0
>
>
> {code:java}
> [2019-11-13 16:21:19] ERROR `/dev/table/udfs.html' not found
> [2019-11-13 16:21:19] ERROR `/dev/table/functions.html' not found
> [2019-11-13 16:21:25] ERROR 
> `/zh/getting-started/tutorials/datastream_api.html' not found
> [2019-11-13 16:21:25] ERROR `/zh/dev/table/udfs.html' not found
> [2019-11-13 16:21:25] ERROR `/zh/dev/table/functions.html' not found.
> http://localhost:4000/dev/table/udfs.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/dev/table/functions.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/getting-started/tutorials/datastream_api.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/zh/dev/table/udfs.html:
> Remote file does not exist -- broken link!!!
> http://localhost:4000/zh/dev/table/functions.html:
> Remote file does not exist -- broken link!!!
> ---
> Found 5 broken links.
> {code}
> full log: [https://travis-ci.org/apache/flink/jobs/611350857]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10230: [FLINK-14803][metrics]Support Consistency Level for InfluxDB metrics

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10230: [FLINK-14803][metrics]Support 
Consistency Level for InfluxDB metrics
URL: https://github.com/apache/flink/pull/10230#issuecomment-554717265
 
 
   
   ## CI report:
   
   * 520a41fb6c3b0fdde8fdcea87a348d12918b2481 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136856364)
   * 0dd23d956554fda33a50b8a70d85e6f63cad3ff9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136856960)
   * 2f42fa850f5a53dde8e98a1f2b84e50893402ec3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137132521)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10212: [FLINK-14806][table-planner-blink] Add setTimestamp/getTimestamp inte…

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10212: [FLINK-14806][table-planner-blink] 
Add setTimestamp/getTimestamp inte…
URL: https://github.com/apache/flink/pull/10212#issuecomment-554261720
 
 
   
   ## CI report:
   
   * 95ec85f121bef0d4c51ed9846e35277c26c58aac : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136660869)
   * 055f9f757ffe55906c6a74bb223c6cb985e07151 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136972277)
   * e04f4bb1b9df8779ec01962985bf11dca5d86b6b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136989944)
   * 37434b1fe0a6c3f2b16f3a5b68e3049c9cc9c380 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/137134585)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] docete commented on a change in pull request #10212: [FLINK-14806][table-planner-blink] Add setTimestamp/getTimestamp inte…

2019-11-18 Thread GitBox
docete commented on a change in pull request #10212: 
[FLINK-14806][table-planner-blink] Add setTimestamp/getTimestamp inte…
URL: https://github.com/apache/flink/pull/10212#discussion_r347755901
 
 

 ##
 File path: flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/VectorizedColumnBatch.java
 ##
 @@ -132,4 +135,43 @@ public Decimal getDecimal(int rowId, int colId, int precision, int scale) {
 			return Decimal.fromUnscaledBytes(precision, scale, bytes);
 		}
 	}
+
+	public SqlTimestamp getTimestamp(int rowId, int colId, int precision) {
+		if (isNullAt(rowId, colId)) {
+			return null;
+		}
+
+		// The precision of Timestamp in parquet should be one of MILLIS, MICROS or NANOS.
+		// https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#timestamp
+		//
+		// For MILLIS, the underlying INT64 holds milliseconds
+		// For MICROS, the underlying INT64 holds microseconds
+		// For NANOS, the underlying INT96 holds nanoOfDay(8 bytes) and julianDay(4 bytes)
+		if (columns[colId] instanceof TimestampColumnVector) {
 
 Review comment:
   OK, I will remove the Parquet protocol part. And the Parquet reader should 
hold a Long or Bytes vector and implement the `TimestampColumnVector` 
interface.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] leonardBang commented on issue #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
leonardBang commented on issue #10239: [Flink-11491][Test] Support all TPC-DS 
queries
URL: https://github.com/apache/flink/pull/10239#issuecomment-555361814
 
 
   @KurtYoung thanks for your comments, I'll address them one by one.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10152: [FLINK-14466][runtime] Let YarnJobClusterEntrypoint use user code class loader

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10152: [FLINK-14466][runtime] Let 
YarnJobClusterEntrypoint use user code class loader
URL: https://github.com/apache/flink/pull/10152#issuecomment-552497066
 
 
   
   ## CI report:
   
   * 5875fa6d987995f327cffae7912c47f4dc51e944 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135953858)
   * 1c9b982ef3ee82b3088ab2c6bf1c48971ad79cc8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136142763)
   * 7e5b99825bc1fd7ffe04163158e5cfcb7164bfb9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136272745)
   * a4ced0f532ca317e3495d35758faf46c0252d44b : UNKNOWN
   * 537133983af86ac1a25c784523be74b012ec8ee3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136471493)
   * e33ca327aef49b9236a933b2f9a5a0ba4e9418ce : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136556489)
   * 89664127f3614ab017338d43308fd5fa36fd053a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136701823)
   * a2a883bbdedc3a2e62d86d1f2bbbad9dd45dc35d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/137134573)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement 
KubernetesSessionClusterEntrypoint.
URL: https://github.com/apache/flink/pull/9986#issuecomment-545891630
 
 
   
   ## CI report:
   
   * 593bf42620faf09c1accbd692494646194e3d574 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133371151)
   * bb9fbd1d51a478793f63ae8b6d6e92b6a5a53775 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133940616)
   * bc25d444faeeaa773f040b14159aafe5a6a5a975 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133953929)
   * d8819bf3615c497b501399bc476de889c17dc239 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133998596)
   * d1430642ae91d8ed58479fb9d1492c433312a9b2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134010391)
   * 17ffc7f7d12f2115eb6b4c86af2c627ce1ad68aa : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14843) Streaming bucketing end-to-end test can fail with Output hash mismatch

2019-11-18 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-14843:
-
Description: 
*Description*
Streaming bucketing end-to-end test ({{test_streaming_bucketing.sh}}) can fail 
with Output hash mismatch.

{noformat}
Number of running task managers has reached 4.
Job (e0b7a86e4d4111f3947baa3d004e083a) is running.
Waiting until all values have been produced
Truncating buckets
Number of produced values 26930/6
Truncating buckets
Number of produced values 30890/6
Truncating buckets
Number of produced values 37340/6
Truncating buckets
Number of produced values 41290/6
Truncating buckets
Number of produced values 46710/6
Truncating buckets
Number of produced values 52120/6
Truncating buckets
Number of produced values 57110/6
Truncating buckets
Number of produced values 62530/6
Cancelling job e0b7a86e4d4111f3947baa3d004e083a.
Cancelled job e0b7a86e4d4111f3947baa3d004e083a.
Waiting for job (e0b7a86e4d4111f3947baa3d004e083a) to reach terminal state 
CANCELED ...
Job (e0b7a86e4d4111f3947baa3d004e083a) reached terminal state CANCELED
Job e0b7a86e4d4111f3947baa3d004e083a was cancelled, time to verify
FAIL Bucketing Sink: Output hash mismatch.  Got 
9e00429abfb30eea4f459eb812b470ad, expected 01aba5ff77a0ef5e5cf6a727c248bdc3.
head hexdump of actual:
000   (   2   ,   1   0   ,   0   ,   S   o   m   e   p   a   y
010   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0   ,   1
020   ,   S   o   m   e   p   a   y   l   o   a   d   .   .   .
030   )  \n   (   2   ,   1   0   ,   2   ,   S   o   m   e   p
040   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0
050   ,   3   ,   S   o   m   e   p   a   y   l   o   a   d   .
060   .   .   )  \n   (   2   ,   1   0   ,   4   ,   S   o   m   e
070   p   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,
080   1   0   ,   5   ,   S   o   m   e   p   a   y   l   o   a
090   d   .   .   .   )  \n   (   2   ,   1   0   ,   6   ,   S   o
0a0   m   e   p   a   y   l   o   a   d   .   .   .   )  \n   (
0b0   2   ,   1   0   ,   7   ,   S   o   m   e   p   a   y   l
0c0   o   a   d   .   .   .   )  \n   (   2   ,   1   0   ,   8   ,
0d0   S   o   m   e   p   a   y   l   o   a   d   .   .   .   )
0e0  \n   (   2   ,   1   0   ,   9   ,   S   o   m   e   p   a
0f0   y   l   o   a   d   .   .   .   )  \n
0fa
Stopping taskexecutor daemon (pid: 55164) on host gyao-desktop.
Stopping standalonesession daemon (pid: 51073) on host gyao-desktop.
Stopping taskexecutor daemon (pid: 51504) on host gyao-desktop.
Skipping taskexecutor daemon (pid: 52034), because it is not running anymore on 
gyao-desktop.
Skipping taskexecutor daemon (pid: 52472), because it is not running anymore on 
gyao-desktop.
Skipping taskexecutor daemon (pid: 52916), because it is not running anymore on 
gyao-desktop.
Stopping taskexecutor daemon (pid: 54121) on host gyao-desktop.
Stopping taskexecutor daemon (pid: 54726) on host gyao-desktop.
[FAIL] Test script contains errors.
Checking of logs skipped.

[FAIL] 'flink-end-to-end-tests/test-scripts/test_streaming_bucketing.sh' failed 
after 2 minutes and 3 seconds! Test exited with exit code 1
{noformat}


*How to reproduce*
Comment out the delay of 10s after the 1st TM is restarted to provoke the issue:

{code:bash}
echo "Restarting 1 TM"
$FLINK_DIR/bin/taskmanager.sh start
wait_for_number_of_running_tms 4

#sleep 10

echo "Killing 2 TMs"
kill_random_taskmanager
kill_random_taskmanager
wait_for_number_of_running_tms 2
{code}

Command to run the test:
{noformat}
FLINK_DIR=build-target/ flink-end-to-end-tests/run-single-test.sh skip 
flink-end-to-end-tests/test-scripts/test_streaming_bucketing.sh
{noformat}




  was:
*Description*
Streaming bucketing end-to-end test ({{test_streaming_bucketing.sh}}) can fail 
with Output hash mismatch.

{noformat}
Number of running task managers has reached 4.
Job (67212178694f8b2a9bc9d9572567a53f) is running.
Waiting until all values have been produced
Truncating buckets
Number of produced values 26325/6
Truncating buckets
Number of produced values 31315/6
Truncating buckets
Number of produced values 36735/6
Truncating buckets
Number of produced values 40705/6
Truncating buckets
Number of produced values 46125/6
Truncating buckets
Number of produced values 51135/6
Truncating buckets
Number of produced values 56555/6
Truncating buckets
Number of produced values 61935/6
Cancelling job 67212178694f8b2a9bc9d9572567a53f.
Cancelled job 67212178694f8b2a9bc9d9572567a53f.
Waiting for job (67212178694f8b2a9bc9d9572567a53f) to reach terminal state 
CANCELED ...
Job (67212178694f8b2a9bc9d9572567a53f) reached terminal state CANCELED
Job 67212178694f8b2a9bc9d9572567a53f was cancelled, time to verify
FAIL Bucketing Sink: Ou

[jira] [Updated] (FLINK-14843) Streaming bucketing end-to-end test can fail with Output hash mismatch

2019-11-18 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-14843:
-
Attachment: flink-gary-standalonesession-0-gyao-desktop.log
flink-gary-taskexecutor-0-gyao-desktop.log
flink-gary-taskexecutor-1-gyao-desktop.log
flink-gary-taskexecutor-2-gyao-desktop.log
flink-gary-taskexecutor-3-gyao-desktop.log
flink-gary-taskexecutor-4-gyao-desktop.log
flink-gary-taskexecutor-5-gyao-desktop.log
flink-gary-taskexecutor-6-gyao-desktop.log
complete_result

> Streaming bucketing end-to-end test can fail with Output hash mismatch
> --
>
> Key: FLINK-14843
> URL: https://issues.apache.org/jira/browse/FLINK-14843
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.10.0
> Environment: rev: dcc1330375826b779e4902176bb2473704dabb11
>Reporter: Gary Yao
>Priority: Critical
>  Labels: test-stability
> Attachments: complete_result, 
> flink-gary-standalonesession-0-gyao-desktop.log, 
> flink-gary-taskexecutor-0-gyao-desktop.log, 
> flink-gary-taskexecutor-1-gyao-desktop.log, 
> flink-gary-taskexecutor-2-gyao-desktop.log, 
> flink-gary-taskexecutor-3-gyao-desktop.log, 
> flink-gary-taskexecutor-4-gyao-desktop.log, 
> flink-gary-taskexecutor-5-gyao-desktop.log, 
> flink-gary-taskexecutor-6-gyao-desktop.log
>
>
> *Description*
> Streaming bucketing end-to-end test ({{test_streaming_bucketing.sh}}) can 
> fail with Output hash mismatch.
> {noformat}
> Number of running task managers has reached 4.
> Job (67212178694f8b2a9bc9d9572567a53f) is running.
> Waiting until all values have been produced
> Truncating buckets
> Number of produced values 26325/6
> Truncating buckets
> Number of produced values 31315/6
> Truncating buckets
> Number of produced values 36735/6
> Truncating buckets
> Number of produced values 40705/6
> Truncating buckets
> Number of produced values 46125/6
> Truncating buckets
> Number of produced values 51135/6
> Truncating buckets
> Number of produced values 56555/6
> Truncating buckets
> Number of produced values 61935/6
> Cancelling job 67212178694f8b2a9bc9d9572567a53f.
> Cancelled job 67212178694f8b2a9bc9d9572567a53f.
> Waiting for job (67212178694f8b2a9bc9d9572567a53f) to reach terminal state 
> CANCELED ...
> Job (67212178694f8b2a9bc9d9572567a53f) reached terminal state CANCELED
> Job 67212178694f8b2a9bc9d9572567a53f was cancelled, time to verify
> FAIL Bucketing Sink: Output hash mismatch.  Got 
> 4e2d1859e41184a38e5bc95090fe9941, expected 01aba5ff77a0ef5e5cf6a727c248bdc3.
> head hexdump of actual:
> 000   (   2   ,   1   0   ,   0   ,   S   o   m   e   p   a   y
> 010   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0   ,   1
> 020   ,   S   o   m   e   p   a   y   l   o   a   d   .   .   .
> 030   )  \n   (   2   ,   1   0   ,   2   ,   S   o   m   e   p
> 040   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0
> 050   ,   3   ,   S   o   m   e   p   a   y   l   o   a   d   .
> 060   .   .   )  \n   (   2   ,   1   0   ,   4   ,   S   o   m   e
> 070   p   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,
> 080   1   0   ,   5   ,   S   o   m   e   p   a   y   l   o   a
> 090   d   .   .   .   )  \n   (   2   ,   1   0   ,   6   ,   S   o
> 0a0   m   e   p   a   y   l   o   a   d   .   .   .   )  \n   (
> 0b0   2   ,   1   0   ,   7   ,   S   o   m   e   p   a   y   l
> 0c0   o   a   d   .   .   .   )  \n   (   2   ,   1   0   ,   8   ,
> 0d0   S   o   m   e   p   a   y   l   o   a   d   .   .   .   )
> 0e0  \n   (   2   ,   1   0   ,   9   ,   S   o   m   e   p   a
> 0f0   y   l   o   a   d   .   .   .   )  \n
> 0fa
> Stopping taskexecutor daemon (pid: 654547) on host gyao-desktop.
> Stopping standalonesession daemon (pid: 650368) on host gyao-desktop.
> Stopping taskexecutor daemon (pid: 650812) on host gyao-desktop.
> Skipping taskexecutor daemon (pid: 651347), because it is not running anymore 
> on gyao-desktop.
> Skipping taskexecutor daemon (pid: 651795), because it is not running anymore 
> on gyao-desktop.
> Skipping taskexecutor daemon (pid: 652249), because it is not running anymore 
> on gyao-desktop.
> Stopping taskexecutor daemon (pid: 653481) on host gyao-desktop.
> Stopping taskexecutor daemon (pid: 654099) on host gyao-desktop.
> [FAIL] Test script contains errors.
> Checking of logs skipped.
> [FAIL] 'flink-end-to-end-tests/test-scripts/test_streaming_bucketing.sh' 
> failed after 2 minutes and 3 seconds! Test exited with exit co

[GitHub] [flink] flinkbot edited a comment on issue #9433: [FLINK-13708] [table-planner-blink] transformations should be cleared after execution in blink planner

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #9433: [FLINK-13708] [table-planner-blink] 
transformations should be cleared after execution in blink planner
URL: https://github.com/apache/flink/pull/9433#issuecomment-521131546
 
 
   
   ## CI report:
   
   * 22d047614613c293a7aca416268449b3cabcad6a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123164756)
   * 255e8d57f2eabf7fbfeefe73f10287493e8a5c2d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123375768)
   * aacac7867ac81946a8e4427334e91c65c0d3e08f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123451412)
   * e68d7394eaba76a806020b12bf4d3ea61cb4f8f3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123482934)
   * b77e7a21d562a83717793490573fab7dfe297b78 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/126357307)
   * 1c32f7c517f78c8d0dd4f093689ddcda138430b4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133151778)
   * 141ddcd9eec5702c20f9d1aff0e52e81d46d407b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135600094)
   * 6cccdad60bd618ab3fae4ce0ebac9ca2ca35067d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136676363)
   * d4f6564ee3118b35275a90640e4570b09a4605bd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137009265)
   * 11bdbdee2fb1183134f858ffbcc6e41beeebdb9f : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14848) BaseRowSerializer.toBinaryRow wrongly process null for objects with variable-length part

2019-11-18 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977162#comment-16977162
 ] 

Jingsong Lee commented on FLINK-14848:
--

Hi [~docete], I think you should explain it more clearly. And maybe you should 
add a simple test case to reproduce this bug.

I know there is a bug, but I don't think your description is right.

> BaseRowSerializer.toBinaryRow wrongly process null for objects with 
> variable-length part
> 
>
> Key: FLINK-14848
> URL: https://issues.apache.org/jira/browse/FLINK-14848
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.1
>Reporter: Zhenghua Gao
>Priority: Major
>
> For fixed-length objects, the writer calls setNullAt() to update the 
> fixed-length part (which sets the null bits and initializes the fixed-length 
> part with 0). For variable-length objects, the writer calls setNullAt() to 
> update the fixed-length part, but it also needs to assign and initialize the 
> variable-length part.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10240: [FLINK-14841][table] add create and 
drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#issuecomment-555114098
 
 
   
   ## CI report:
   
   * f266bef0733c356444bec417236724cc9f0b35ac : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137045415)
   * 6c6cfc49b5fb262292c0806aafaf87ff5b55e8f5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137130646)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14848) BaseRowSerializer.toBinaryRow wrongly process null for objects with variable-length part

2019-11-18 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-14848:
---
Affects Version/s: (was: 1.10.0)
   1.9.1

> BaseRowSerializer.toBinaryRow wrongly process null for objects with 
> variable-length part
> 
>
> Key: FLINK-14848
> URL: https://issues.apache.org/jira/browse/FLINK-14848
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.1
>Reporter: Zhenghua Gao
>Priority: Major
>
> For fixed-length objects, the writer calls setNullAt() to update the 
> fixed-length part (which sets the null bits and initializes the fixed-length 
> part with 0). For variable-length objects, the writer calls setNullAt() to 
> update the fixed-length part, but it also needs to assign and initialize the 
> variable-length part.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KurtYoung commented on a change in pull request #10212: [FLINK-14806][table-planner-blink] Add setTimestamp/getTimestamp inte…

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10212: 
[FLINK-14806][table-planner-blink] Add setTimestamp/getTimestamp inte…
URL: https://github.com/apache/flink/pull/10212#discussion_r347751208
 
 

 ##
 File path: flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/VectorizedColumnBatch.java
 ##
 @@ -132,4 +135,43 @@ public Decimal getDecimal(int rowId, int colId, int precision, int scale) {
 			return Decimal.fromUnscaledBytes(precision, scale, bytes);
 		}
 	}
+
+	public SqlTimestamp getTimestamp(int rowId, int colId, int precision) {
+		if (isNullAt(rowId, colId)) {
+			return null;
+		}
+
+		// The precision of Timestamp in parquet should be one of MILLIS, MICROS or NANOS.
+		// https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#timestamp
+		//
+		// For MILLIS, the underlying INT64 holds milliseconds
+		// For MICROS, the underlying INT64 holds microseconds
+		// For NANOS, the underlying INT96 holds nanoOfDay(8 bytes) and julianDay(4 bytes)
+		if (columns[colId] instanceof TimestampColumnVector) {
 
 Review comment:
   I'm not sure this is right. Shouldn't this class have Flink's own protocol?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-14848) BaseRowSerializer.toBinaryRow wrongly process null for objects with variable-length part

2019-11-18 Thread Zhenghua Gao (Jira)
Zhenghua Gao created FLINK-14848:


 Summary: BaseRowSerializer.toBinaryRow wrongly process null for 
objects with variable-length part
 Key: FLINK-14848
 URL: https://issues.apache.org/jira/browse/FLINK-14848
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.10.0
Reporter: Zhenghua Gao


For fixed-length objects, the writer calls setNullAt() to update the 
fixed-length part (which sets the null bits and initializes the fixed-length 
part with 0).

For variable-length objects, the writer calls setNullAt() to update the 
fixed-length part, but it also needs to assign and initialize the 
variable-length part.
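
To make the distinction concrete, here is a toy sketch (simplified layout, not 
Flink's actual writer classes) of what a correct null write has to do in each 
case:

{code:java}
import java.nio.ByteBuffer;

// Illustrative only: why nulling a variable-length field needs more work
// than nulling a fixed-length one in a binary row format.
class ToyRowWriter {
	private final ByteBuffer fixedPart = ByteBuffer.allocate(64);
	private final ByteBuffer variablePart = ByteBuffer.allocate(256);
	private long nullBits = 0L;

	// Fixed-length field: set the null bit and zero the 8-byte slot.
	void setNullAtFixedLength(int pos) {
		nullBits |= 1L << pos;
		fixedPart.putLong(pos * 8, 0L);
	}

	// Variable-length field: the null bit alone is not enough. The 8-byte
	// slot must still hold a valid (offset, length) into the variable-length
	// part, otherwise copies and comparisons of the row read garbage.
	void setNullAtVariableLength(int pos) {
		nullBits |= 1L << pos;
		fixedPart.putInt(pos * 8, variablePart.position()); // offset
		fixedPart.putInt(pos * 8 + 4, 0);                   // length 0
	}
}
{code}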



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14846) Correct the default writerbuffer size documentation of RocksDB

2019-11-18 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977159#comment-16977159
 ] 

Yun Tang edited comment on FLINK-14846 at 11/19/19 6:32 AM:


I noticed that this has already misled our Ververica training slides at Flink 
Forward Europe. [~azagrebin], what do you think? Please assign this to me if 
possible.

 


was (Author: yunta):
I noticed that this has already misled our Ververica training slides at Flink 
Forward Berlin. [~azagrebin], what do you think? Please assign this to me if 
possible.

 

> Correct the default writerbuffer size documentation of RocksDB
> --
>
> Key: FLINK-14846
> URL: https://issues.apache.org/jira/browse/FLINK-14846
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Priority: Major
> Fix For: 1.10.0
>
>
> When {{RocksDBConfigurableOptions}} was introduced, the default write-buffer 
> size was taken from RocksDB's javadoc. Unfortunately, RocksDB's official 
> javadoc described it incorrectly as {{4MB}} for a long time, until I created 
> a [PR|https://github.com/facebook/rocksdb/pull/5670] to correct it. This also 
> made [our 
> description|https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#state-backend-rocksdb-writebuffer-size]
>  of the default write-buffer size incorrect; we should fix it to avoid 
> misleading users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14846) Correct the default writerbuffer size documentation of RocksDB

2019-11-18 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977159#comment-16977159
 ] 

Yun Tang commented on FLINK-14846:
--

I noticed that this has already misled our Ververica training slides at Flink 
Forward Berlin. [~azagrebin], what do you think? Please assign this to me if 
possible.

 

> Correct the default writerbuffer size documentation of RocksDB
> --
>
> Key: FLINK-14846
> URL: https://issues.apache.org/jira/browse/FLINK-14846
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Priority: Major
> Fix For: 1.10.0
>
>
> When {{RocksDBConfigurableOptions}} was introduced, the default write-buffer 
> size was taken from RocksDB's javadoc. Unfortunately, RocksDB's official 
> javadoc described it incorrectly as {{4MB}} for a long time, until I created 
> a [PR|https://github.com/facebook/rocksdb/pull/5670] to correct it. This also 
> made [our 
> description|https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#state-backend-rocksdb-writebuffer-size]
>  of the default write-buffer size incorrect; we should fix it to avoid 
> misleading users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14847) Support retrieving Hive PK constraints

2019-11-18 Thread Rui Li (Jira)
Rui Li created FLINK-14847:
--

 Summary: Support retrieving Hive PK constraints
 Key: FLINK-14847
 URL: https://issues.apache.org/jira/browse/FLINK-14847
 Project: Flink
  Issue Type: Task
  Components: Connectors / Hive
Reporter: Rui Li






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14846) Correct the default writerbuffer size documentation of RocksDB

2019-11-18 Thread Yun Tang (Jira)
Yun Tang created FLINK-14846:


 Summary: Correct the default writerbuffer size documentation of 
RocksDB
 Key: FLINK-14846
 URL: https://issues.apache.org/jira/browse/FLINK-14846
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Reporter: Yun Tang
 Fix For: 1.10.0


When {{RocksDBConfigurableOptions}} was introduced, the default write-buffer 
size was taken from RocksDB's javadoc. Unfortunately, RocksDB's official 
javadoc described it incorrectly as {{4MB}} for a long time, until I created a 
[PR|https://github.com/facebook/rocksdb/pull/5670] to correct it. This also 
made [our 
description|https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#state-backend-rocksdb-writebuffer-size]
 of the default write-buffer size incorrect; we should fix it to avoid 
misleading users.
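
Until the documentation is fixed, one way to avoid surprises is to set the 
size explicitly instead of relying on the documented default. A minimal 
sketch (the option key is the one from the config page linked above; 64 MB is 
RocksDB's actual default, which is an assumption stated here, not taken from 
this ticket):

{code:java}
import org.apache.flink.configuration.Configuration;

// Pin the RocksDB write-buffer size explicitly so the job does not
// depend on the misdocumented default value.
Configuration config = new Configuration();
config.setString("state.backend.rocksdb.writebuffer.size", "64 mb");
{code}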



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10230: [FLINK-14803][metrics]Support Consistency Level for InfluxDB metrics

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10230: [FLINK-14803][metrics]Support 
Consistency Level for InfluxDB metrics
URL: https://github.com/apache/flink/pull/10230#issuecomment-554717265
 
 
   
   ## CI report:
   
   * 520a41fb6c3b0fdde8fdcea87a348d12918b2481 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136856364)
   * 0dd23d956554fda33a50b8a70d85e6f63cad3ff9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136856960)
   * 2f42fa850f5a53dde8e98a1f2b84e50893402ec3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/137132521)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10224: [FLINK-14716][table-planner-blink] Cooperate computed column with push down rules

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10224: [FLINK-14716][table-planner-blink] 
Cooperate computed column with push down rules
URL: https://github.com/apache/flink/pull/10224#issuecomment-554384687
 
 
   
   ## CI report:
   
   * f0ffbf0cff4a72ded1062772674408f58783cbfb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136711107)
   * f995a7d7929f64ec94a7f9e6642a1f0b5b5c213d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136957940)
   * c139b5e5e3214769a99b09a8466b6d8d21f8981d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136967446)
   * 5e8ea168ff489689ef9bd62e9b5ec8ec9192aa2e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137128646)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10212: [FLINK-14806][table-planner-blink] Add setTimestamp/getTimestamp inte…

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10212: [FLINK-14806][table-planner-blink] 
Add setTimestamp/getTimestamp inte…
URL: https://github.com/apache/flink/pull/10212#issuecomment-554261720
 
 
   
   ## CI report:
   
   * 95ec85f121bef0d4c51ed9846e35277c26c58aac : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136660869)
   * 055f9f757ffe55906c6a74bb223c6cb985e07151 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136972277)
   * e04f4bb1b9df8779ec01962985bf11dca5d86b6b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136989944)
   * 37434b1fe0a6c3f2b16f3a5b68e3049c9cc9c380 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10152: [FLINK-14466][runtime] Let YarnJobClusterEntrypoint use user code class loader

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10152: [FLINK-14466][runtime] Let 
YarnJobClusterEntrypoint use user code class loader
URL: https://github.com/apache/flink/pull/10152#issuecomment-552497066
 
 
   
   ## CI report:
   
   * 5875fa6d987995f327cffae7912c47f4dc51e944 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135953858)
   * 1c9b982ef3ee82b3088ab2c6bf1c48971ad79cc8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136142763)
   * 7e5b99825bc1fd7ffe04163158e5cfcb7164bfb9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136272745)
   * a4ced0f532ca317e3495d35758faf46c0252d44b : UNKNOWN
   * 537133983af86ac1a25c784523be74b012ec8ee3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136471493)
   * e33ca327aef49b9236a933b2f9a5a0ba4e9418ce : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136556489)
   * 89664127f3614ab017338d43308fd5fa36fd053a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136701823)
   * a2a883bbdedc3a2e62d86d1f2bbbad9dd45dc35d : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14164) Add a metric to show failover count regarding fine grained recovery

2019-11-18 Thread Steven Zhen Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977152#comment-16977152
 ] 

Steven Zhen Wu commented on FLINK-14164:


[~zhuzh] yeah. Gauge is fine
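
For reference, a minimal sketch of how such a Gauge could be registered (names 
are illustrative; it assumes an AtomicLong that is bumped on every restart, 
not the actual scheduler internals):

{code:java}
import java.util.concurrent.atomic.AtomicLong;

import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;

class RestartMetrics {
	private final AtomicLong numberOfRestarts = new AtomicLong();

	void register(MetricGroup jobMetricGroup) {
		// exposed as a Gauge, so it can also cover fine-grained recoveries
		jobMetricGroup.gauge("numberOfRestarts", (Gauge<Long>) numberOfRestarts::get);
	}

	void onRestart() {
		numberOfRestarts.incrementAndGet();
	}
}
{code}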

> Add a metric to show failover count regarding fine grained recovery
> ---
>
> Key: FLINK-14164
> URL: https://issues.apache.org/jira/browse/FLINK-14164
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / Metrics
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Previously, Flink used the restart-all strategy to recover jobs from 
> failures, and the metric "fullRestart" showed the count of failovers.
> However, with fine-grained recovery introduced in 1.9.0, the "fullRestart" 
> metric only reveals how many times the entire graph has been restarted, not 
> the number of fine-grained failure recoveries.
> As many users want to build their job alerting on failovers, I'd propose 
> adding a new metric {{numberOfRestarts}} that also counts fine-grained 
> recoveries. The metric should be a Gauge.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10061: [FLINK-14581][python] Let python UDF execution no longer rely on the flink directory structure to support running python UDFs on yarn.

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10061: [FLINK-14581][python] Let python UDF 
execution no longer rely on the flink directory structure to support running 
python UDFs on yarn.
URL: https://github.com/apache/flink/pull/10061#issuecomment-548335375
 
 
   
   ## CI report:
   
   * 6f081dffc4e0da56df96f1535e796d2b6e8bb045 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134380778)
   * df8e6837713a3e5683ee106c54558927d64a1d60 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135409909)
   * 0113b4e79277187b182b4314c42bee631231c355 : UNKNOWN
   * fb22e17536f39a765fae52ff0a6bd4d7ebf12452 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135414128)
   * da49a932918d4cfc4dc0d049b8ade3f3d9979d80 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136994883)
   * 99b53272f2de39f8fc211d4156f7f5079fb7735a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137128640)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on issue #10061: [FLINK-14581][python] Let python UDF execution no longer rely on the flink directory structure to support running python UDFs on yarn.

2019-11-18 Thread GitBox
hequn8128 commented on issue #10061: [FLINK-14581][python] Let python UDF 
execution no longer rely on the flink directory structure to support running 
python UDFs on yarn.
URL: https://github.com/apache/flink/pull/10061#issuecomment-555351153
 
 
   @WeiZhong94 Thanks a lot for the update. Merging...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10244: [hotfix][docs] Fix broken links to table functions

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10244: [hotfix][docs] Fix broken links to 
table functions
URL: https://github.com/apache/flink/pull/10244#issuecomment-555329668
 
 
   
   ## CI report:
   
   * 24a0f63f335a3d2e074b408531269d3c03054a1d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137128630)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13943) Provide api to convert flink table to java List (e.g. Table#collect)

2019-11-18 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977145#comment-16977145
 ] 

Jiangjie Qin commented on FLINK-13943:
--

[~TsReaper] I think it makes sense to have a {{Table#collect}} method. Asking 
users to use {{DataStreamUtils#collect}} is more of a workaround than a 
long-term solution. For {{Table#collect}}, we can also return an 
{{Iterable}}, but that is an API change and thus needs a FLIP.
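
For reference, a minimal sketch of that workaround (assuming a blink-planner 
{{StreamTableEnvironment}} {{tEnv}} and a {{Table}} {{table}} already exist):

{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamUtils;
import org.apache.flink.types.Row;

// Collect a Table on the client side by going through the DataStream API.
DataStream<Row> stream = tEnv.toAppendStream(table, Row.class);
Iterator<Row> it = DataStreamUtils.collect(stream);
List<Row> rows = new ArrayList<>();
it.forEachRemaining(rows::add);
{code}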

> Provide api to convert flink table to java List (e.g. Table#collect)
> 
>
> Key: FLINK-13943
> URL: https://issues.apache.org/jira/browse/FLINK-13943
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jeff Zhang
>Assignee: Caizhi Weng
>Priority: Major
>
> It would be nice to convert a Flink table to a Java List so that I can do 
> other data manipulation on the client side after executing a Flink job. For 
> the Flink planner, I can convert the table to a DataSet and use 
> DataSet#collect, but for the blink planner there is no such API.
> EDIT from FLINK-14807:
> Currently, it is very inconvenient for users to fetch the data of a Flink 
> job unless they specify a sink explicitly and then fetch the data from that 
> sink via its API (e.g. write to an HDFS sink, then read the data back from 
> HDFS). However, most of the time users just want to get the data and do 
> whatever processing they want, so it is very necessary for Flink to provide 
> an API like Table#collect for this purpose. Other APIs such as Table#head 
> and Table#print would also be helpful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] guoweiM commented on a change in pull request #10152: [FLINK-14466][runtime] Let YarnJobClusterEntrypoint use user code class loader

2019-11-18 Thread GitBox
guoweiM commented on a change in pull request #10152: [FLINK-14466][runtime] 
Let YarnJobClusterEntrypoint use user code class loader
URL: https://github.com/apache/flink/pull/10152#discussion_r347742496
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/entrypoint/component/FileJobGraphRetriever.java
 ##
 @@ -45,26 +51,39 @@
 	@Nonnull
 	private final String jobGraphFile;
 
-	public FileJobGraphRetriever(@Nonnull String jobGraphFile) {
+	public FileJobGraphRetriever(@Nonnull String jobGraphFile, @Nullable File usrLibDir) throws IOException {
+		super(usrLibDir);
 		this.jobGraphFile = jobGraphFile;
 	}
 
 	@Override
 	public JobGraph retrieveJobGraph(Configuration configuration) throws FlinkException {
-		File fp = new File(jobGraphFile);
+		final File fp = new File(jobGraphFile);
 
 		try (FileInputStream input = new FileInputStream(fp);
 				ObjectInputStream obInput = new ObjectInputStream(input)) {
-
-			return (JobGraph) obInput.readObject();
+			final JobGraph jobGraph = (JobGraph) obInput.readObject();
+			addUserClassPathsToJobGraph(jobGraph);
+			return jobGraph;
 		} catch (FileNotFoundException e) {
 			throw new FlinkException("Could not find the JobGraph file.", e);
 		} catch (ClassNotFoundException | IOException e) {
 			throw new FlinkException("Could not load the JobGraph from file.", e);
 		}
 	}
 
-	public static FileJobGraphRetriever createFrom(Configuration configuration) {
-		return new FileJobGraphRetriever(configuration.getString(JOB_GRAPH_FILE_PATH));
+	private void addUserClassPathsToJobGraph(JobGraph jobGraph) {
+		final List<URL> classPaths = new ArrayList<>();
+
+		if (jobGraph.getClasspaths() != null) {
 
 Review comment:
   Yes, it could. After that PR we could remove the checker. Thank you for 
letting me know.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10230: [FLINK-14803][metrics]Support Consistency Level for InfluxDB metrics

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10230: [FLINK-14803][metrics]Support 
Consistency Level for InfluxDB metrics
URL: https://github.com/apache/flink/pull/10230#issuecomment-554717265
 
 
   
   ## CI report:
   
   * 520a41fb6c3b0fdde8fdcea87a348d12918b2481 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/136856364)
   * 0dd23d956554fda33a50b8a70d85e6f63cad3ff9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/136856960)
   * 2f42fa850f5a53dde8e98a1f2b84e50893402ec3 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] yanghua commented on issue #10215: [FLINK-14767] Mark TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR with @Deprecated annotation

2019-11-18 Thread GitBox
yanghua commented on issue #10215: [FLINK-14767] Mark 
TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR with @Deprecated annotation
URL: https://github.com/apache/flink/pull/10215#issuecomment-555343904
 
 
   cc @GJL and @zentol 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14838) Cleanup the description about container number config option in Scala and python shell doc

2019-11-18 Thread vinoyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977136#comment-16977136
 ] 

vinoyang commented on FLINK-14838:
--

[~trohrmann] or [~gjy], could you assign this ticket to me?

> Cleanup the description about container number config option in Scala and 
> python shell doc
> --
>
> Key: FLINK-14838
> URL: https://issues.apache.org/jira/browse/FLINK-14838
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: vinoyang
>Priority: Major
>
> Currently, the config option {{-n}} for Flink on Yarn has not been supported 
> since Flink 1.8. FLINK-12362 did the cleanup for this config option. 
> However, the Scala shell and Python docs still contain some description of 
> {{-n}}, which may confuse users. This issue tracks the cleanup work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14767) Mark TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR with @Deprecated annotation

2019-11-18 Thread vinoyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977134#comment-16977134
 ] 

vinoyang commented on FLINK-14767:
--

[~yunta] OK, I will be careful next time.

> Mark TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR with @Deprecated annotation
> 
>
> Key: FLINK-14767
> URL: https://issues.apache.org/jira/browse/FLINK-14767
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since {{TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR}} is no longer used, 
> IMO we can remove this config option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10240: [FLINK-14841][table] add create and 
drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#issuecomment-555114098
 
 
   
   ## CI report:
   
   * f266bef0733c356444bec417236724cc9f0b35ac : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137045415)
   * 6c6cfc49b5fb262292c0806aafaf87ff5b55e8f5 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/137130646)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] docete commented on a change in pull request #10212: [FLINK-14806][table-planner-blink] Add setTimestamp/getTimestamp inte…

2019-11-18 Thread GitBox
docete commented on a change in pull request #10212: 
[FLINK-14806][table-planner-blink] Add setTimestamp/getTimestamp inte…
URL: https://github.com/apache/flink/pull/10212#discussion_r347737397
 
 

 ##
 File path: flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/AbstractBinaryWriter.java
 ##
 @@ -181,6 +181,29 @@ public void writeDecimal(int pos, Decimal value, int precision) {
 		}
 	}
+
+	@Override
+	public void writeTimestamp(int pos, SqlTimestamp value, int precision) {
+		if (SqlTimestamp.isCompact(precision)) {
+			assert 0 == value.getNanoOfMillisecond();
+			writeLong(pos, value.getMillisecond());
+		} else {
+			// store the nanoOfMillisecond in fixed-length part as offset and nanoOfMillisecond
+			ensureCapacity(8);
+
+			if (value == null) {
 Review comment:
   For fixed-length objects, the writer calls setNullAt() to update the 
fixed-length part (which sets the null bits and initializes the fixed-length 
part with 0). For variable-length objects, the writer calls setNullAt() to 
update the fixed-length part, but it also needs to assign and initialize the 
variable-length part.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10224: [FLINK-14716][table-planner-blink] Cooperate computed column with push down rules

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10224: [FLINK-14716][table-planner-blink] 
Cooperate computed column with push down rules
URL: https://github.com/apache/flink/pull/10224#issuecomment-554384687
 
 
   
   ## CI report:
   
   * f0ffbf0cff4a72ded1062772674408f58783cbfb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136711107)
   * f995a7d7929f64ec94a7f9e6642a1f0b5b5c213d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136957940)
   * c139b5e5e3214769a99b09a8466b6d8d21f8981d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136967446)
   * 5e8ea168ff489689ef9bd62e9b5ec8ec9192aa2e : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/137128646)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-14725) Remove unused (anymore) TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR

2019-11-18 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang closed FLINK-14725.

Resolution: Duplicate

> Remove unused (anymore) TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR
> ---
>
> Key: FLINK-14725
> URL: https://issues.apache.org/jira/browse/FLINK-14725
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
> Fix For: 1.10.0
>
>
> After we removed {{TaskManager.scala}}, the only place that used 
> {{TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR}} is gone. Thus, we should 
> remove this option now.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10061: [FLINK-14581][python] Let python UDF execution no longer rely on the flink directory structure to support running python UDFs on yarn.

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10061: [FLINK-14581][python] Let python UDF 
execution no longer rely on the flink directory structure to support running 
python UDFs on yarn.
URL: https://github.com/apache/flink/pull/10061#issuecomment-548335375
 
 
   
   ## CI report:
   
   * 6f081dffc4e0da56df96f1535e796d2b6e8bb045 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134380778)
   * df8e6837713a3e5683ee106c54558927d64a1d60 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135409909)
   * 0113b4e79277187b182b4314c42bee631231c355 : UNKNOWN
   * fb22e17536f39a765fae52ff0a6bd4d7ebf12452 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135414128)
   * da49a932918d4cfc4dc0d049b8ade3f3d9979d80 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136994883)
   * 99b53272f2de39f8fc211d4156f7f5079fb7735a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/137128640)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14767) Mark TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR with @Deprecated annotation

2019-11-18 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977124#comment-16977124
 ] 

Yun Tang commented on FLINK-14767:
--

[~yanghua], I hope you could take a look at existing JIRAs next time, 
especially since FLINK-14725 and FLINK-14767 belong to the same umbrella issue.

> Mark TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR with @Deprecated annotation
> 
>
> Key: FLINK-14767
> URL: https://issues.apache.org/jira/browse/FLINK-14767
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since {{TaskManagerOptions#EXIT_ON_FATAL_AKKA_ERROR}} is no longer used, 
> IMO we can remove this config option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10244: [hotfix][docs] Fix broken links to table functions

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10244: [hotfix][docs] Fix broken links to 
table functions
URL: https://github.com/apache/flink/pull/10244#issuecomment-555329668
 
 
   
   ## CI report:
   
   * 24a0f63f335a3d2e074b408531269d3c03054a1d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/137128630)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10240: [FLINK-14841][table] add create and 
drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#issuecomment-555114098
 
 
   
   ## CI report:
   
   * f266bef0733c356444bec417236724cc9f0b35ac : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137045415)
   * 6c6cfc49b5fb262292c0806aafaf87ff5b55e8f5 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10224: [FLINK-14716][table-planner-blink] Cooperate computed column with push down rules

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10224: [FLINK-14716][table-planner-blink] 
Cooperate computed column with push down rules
URL: https://github.com/apache/flink/pull/10224#issuecomment-554384687
 
 
   
   ## CI report:
   
   * f0ffbf0cff4a72ded1062772674408f58783cbfb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136711107)
   * f995a7d7929f64ec94a7f9e6642a1f0b5b5c213d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136957940)
   * c139b5e5e3214769a99b09a8466b6d8d21f8981d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136967446)
   * 5e8ea168ff489689ef9bd62e9b5ec8ec9192aa2e : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10143: [FLINK-13184]Starting a TaskExecutor blocks the YarnResourceManager's main thread

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10143: [FLINK-13184]Starting a TaskExecutor 
blocks the YarnResourceManager's main thread
URL: https://github.com/apache/flink/pull/10143#issuecomment-552173061
 
 
   
   ## CI report:
   
   * 89945a11272d023b9b41f23ac40fb47ff2931fb8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135808680)
   * 624a1e2f85bce886c198f223c1b8d161295af501 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135813622)
   * 1adc1cbe8f4a751aad2e62d91b4d1537725fcca2 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135814484)
   * 6885e128c831283cde5a374cff406f48869c32d7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135928664)
   * f158d75ad62fe62c67cbf90875a057e58095bb1a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137124170)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10061: [FLINK-14581][python] Let python UDF execution no longer rely on the flink directory structure to support running python UDFs on yarn.

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10061: [FLINK-14581][python] Let python UDF 
execution no longer rely on the flink directory structure to support running 
python UDFs on yarn.
URL: https://github.com/apache/flink/pull/10061#issuecomment-548335375
 
 
   
   ## CI report:
   
   * 6f081dffc4e0da56df96f1535e796d2b6e8bb045 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134380778)
   * df8e6837713a3e5683ee106c54558927d64a1d60 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135409909)
   * 0113b4e79277187b182b4314c42bee631231c355 : UNKNOWN
   * fb22e17536f39a765fae52ff0a6bd4d7ebf12452 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135414128)
   * da49a932918d4cfc4dc0d049b8ade3f3d9979d80 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/136994883)
   * 99b53272f2de39f8fc211d4156f7f5079fb7735a : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot commented on issue #10244: [hotfix][docs] Fix broken links to table functions

2019-11-18 Thread GitBox
flinkbot commented on issue #10244: [hotfix][docs] Fix broken links to table 
functions
URL: https://github.com/apache/flink/pull/10244#issuecomment-555329668
 
 
   
   ## CI report:
   
   * 24a0f63f335a3d2e074b408531269d3c03054a1d : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] 
add create and drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#discussion_r347726729
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
 ##
 @@ -177,6 +177,87 @@ SqlDescribeDatabase SqlDescribeDatabase() :
 
 }
 
+SqlCreate SqlCreateFunction(Span s, boolean replace) :
+{
+SqlIdentifier functionName = null;
+SqlCharStringLiteral functionClassName = null;
+String functionLanguage = null;
+boolean ifNotExists = false;
+boolean hasTemporary = false;
+boolean isSystemFunction = false;
+}
+{
+[
+    <TEMPORARY> { hasTemporary = true; }
+    (
+        <SYSTEM> { isSystemFunction = true; }
+    |
+        { isSystemFunction = false; }
+    )
+]
+<FUNCTION>
+(
+    <IF> <NOT> <EXISTS> { ifNotExists = true; }
+|
+    { ifNotExists = false; }
+)
+functionName = CompoundIdentifier()
+<AS> <QUOTED_STRING>
+{
+    String p = SqlParserUtil.parseString(token.image);
+    functionClassName = SqlLiteral.createCharString(p, getPos());
+}
+[
+    <LANGUAGE>
+    (
+        <JAVA> { functionLanguage = "JAVA"; }
+    |
+        <SCALA> { functionLanguage = "SCALA"; }
 
 Review comment:
   As a SCALA function has a different initialization and verification mechanism, 
how about leaving it here for now?




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] 
add create and drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#discussion_r347726508
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
 ##
 @@ -177,6 +177,87 @@ SqlDescribeDatabase SqlDescribeDatabase() :
 
 }
 
+SqlCreate SqlCreateFunction(Span s, boolean replace) :
+{
+SqlIdentifier functionName = null;
+SqlCharStringLiteral functionClassName = null;
+String functionLanguage = null;
+boolean ifNotExists = false;
+boolean hasTemporary = false;
+boolean isSystemFunction = false;
+}
+{
+[
+    <TEMPORARY> { hasTemporary = true; }
+    (
+        <SYSTEM> { isSystemFunction = true; }
+    |
+        { isSystemFunction = false; }
 
 Review comment:
   Updated.




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] 
add create and drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#discussion_r347726596
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateFunction.java
 ##
 @@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.parser.ddl;
+
+import org.apache.flink.sql.parser.ExtendedSqlNode;
+import org.apache.flink.sql.parser.error.SqlValidateException;
+
+import org.apache.calcite.sql.SqlCharStringLiteral;
+import org.apache.calcite.sql.SqlCreate;
+import org.apache.calcite.sql.SqlIdentifier;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlSpecialOperator;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.parser.SqlParserPos;
+import org.apache.calcite.util.ImmutableNullableList;
+
+import javax.annotation.Nonnull;
+
+import java.util.List;
+
+import static java.util.Objects.requireNonNull;
+
+/**
+ * CREATE FUNCTION DDL sql call.
+ */
+public class SqlCreateFunction extends SqlCreate implements ExtendedSqlNode {
+
+   public static final SqlSpecialOperator OPERATOR = new 
SqlSpecialOperator("CREATE FUNCTION", SqlKind.CREATE_FUNCTION);
+
+   private final SqlIdentifier functionName;
+
+   private final SqlCharStringLiteral functionClassName;
+
+   private final String functionLanguage;
+
+   private final boolean hasTemporary;
+
+   private final boolean isSystemFunction;
+
+   public SqlCreateFunction(
+   SqlParserPos pos,
+   SqlIdentifier functionName,
+   SqlCharStringLiteral functionClassName,
+   String functionLanguage,
+   boolean ifNotExists,
+   boolean hasTemporary,
+   boolean isSystemFunction) {
+   super(OPERATOR, pos, false, ifNotExists);
+   this.functionName = requireNonNull(functionName);
+   this.functionClassName = requireNonNull(functionClassName);
+   this.isSystemFunction = requireNonNull(isSystemFunction);
+   this.hasTemporary = hasTemporary;
+   this.functionLanguage = functionLanguage;
+   }
+
+   @Override
+   public SqlOperator getOperator() {
+   return OPERATOR;
+   }
+
+   @Nonnull
+   @Override
+   public List<SqlNode> getOperandList() {
+   return ImmutableNullableList.of(functionName, functionClassName);
+   }
+
+   @Override
+   public void unparse(SqlWriter writer, int leftPrec, int rightPrec) {
+   writer.keyword("CREATE");
+   if (hasTemporary) {
+   writer.keyword("TEMPORARY");
+   }
+   if (isSystemFunction) {
+   writer.keyword("SYSTEM");
+   }
+   writer.keyword("FUNCTION");
+   if (ifNotExists) {
+   writer.keyword("IF NOT EXISTS");
+   }
+   functionName.unparse(writer, leftPrec, rightPrec);
+   writer.keyword("AS");
+   functionClassName.unparse(writer, leftPrec, rightPrec);
+   if (functionLanguage != null) {
+   writer.keyword("LANGUAGE");
+   writer.keyword(functionLanguage);
+   }
+   }
+
+   @Override
+   public void validate() throws SqlValidateException {
+   // no-op
+   }
+
+   public boolean isIfNotExists() {
+   return ifNotExists;
+   }
+
+   public boolean isSystemFunction() {
+   return isSystemFunction;
+   }
+
+   public SqlIdentifier getFunctionName() {
+   return this.functionName;
+   }
+
+   public SqlCharStringLiteral getFunctionClassName() {
+   return this.functionClassName;
+   }
+
+   public String getFunctionLanguage() {
+   return this.functionLanguage;
+   }
+
+   public String[] fullFunctionName() {
 
 Review comment

[GitHub] [flink] flinkbot edited a comment on issue #10237: [FLINK-14362][runtime] Change DefaultSchedulingResultPartition to return correct partition state

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10237: [FLINK-14362][runtime] Change 
DefaultSchedulingResultPartition to return correct partition state
URL: https://github.com/apache/flink/pull/10237#issuecomment-555034534
 
 
   
   ## CI report:
   
   * 5bb85aa49e52a19e22bec321a0f3d0309456515d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/137009333)
   * c33127b51730b576e93d6c58e10b014016ebce3a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/137121954)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] 
add create and drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#discussion_r347725643
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
 ##
 @@ -177,6 +177,87 @@ SqlDescribeDatabase SqlDescribeDatabase() :
 
 }
 
+SqlCreate SqlCreateFunction(Span s, boolean replace) :
+{
+SqlIdentifier functionName = null;
+SqlCharStringLiteral functionClassName = null;
+String functionLanguage = null;
+boolean ifNotExists = false;
+boolean hasTemporary = false;
+boolean isSystemFunction = false;
+}
+{
+[
+    <TEMPORARY> { hasTemporary = true; }
+    (
+        <SYSTEM> { isSystemFunction = true; }
+    |
+        { isSystemFunction = false; }
+    )
+]
+<FUNCTION>
+(
+    <IF> <NOT> <EXISTS> { ifNotExists = true; }
+|
+    { ifNotExists = false; }
+)
+functionName = CompoundIdentifier()
+<AS> <QUOTED_STRING>
+{
+    String p = SqlParserUtil.parseString(token.image);
+    functionClassName = SqlLiteral.createCharString(p, getPos());
+}
+[
+    <LANGUAGE>
+    (
+        <JAVA> { functionLanguage = "JAVA"; }
+    |
+        <SCALA> { functionLanguage = "SCALA"; }
+    |
+        <SQL> { functionLanguage = "SQL"; }
+    )
+]
+{
+    return new SqlCreateFunction(s.pos(), functionName, functionClassName, functionLanguage,
+        ifNotExists, hasTemporary, isSystemFunction);
+}
+}
+
+SqlDrop SqlDropFunction(Span s, boolean replace) :
+{
+SqlIdentifier functionName = null;
+String functionLanguage = null;
 
 Review comment:
   You are right. As the catalog doesn't distinguish functions that have the same 
name but different languages, I think it can be removed.




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] 
add create and drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#discussion_r347725667
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
 ##
 @@ -177,6 +177,87 @@ SqlDescribeDatabase SqlDescribeDatabase() :
 
 }
 
+SqlCreate SqlCreateFunction(Span s, boolean replace) :
+{
+SqlIdentifier functionName = null;
+SqlCharStringLiteral functionClassName = null;
+String functionLanguage = null;
+boolean ifNotExists = false;
+boolean hasTemporary = false;
+boolean isSystemFunction = false;
+}
+{
+[
+    <TEMPORARY> { hasTemporary = true; }
+    (
+        <SYSTEM> { isSystemFunction = true; }
+    |
+        { isSystemFunction = false; }
+    )
+]
+<FUNCTION>
+(
+    <IF> <NOT> <EXISTS> { ifNotExists = true; }
+|
+    { ifNotExists = false; }
+)
+functionName = CompoundIdentifier()
+<AS> <QUOTED_STRING>
+{
+    String p = SqlParserUtil.parseString(token.image);
+    functionClassName = SqlLiteral.createCharString(p, getPos());
+}
+[
+    <LANGUAGE>
+    (
+        <JAVA> { functionLanguage = "JAVA"; }
+    |
+        <SCALA> { functionLanguage = "SCALA"; }
+    |
+        <SQL> { functionLanguage = "SQL"; }
+    )
+]
+{
+    return new SqlCreateFunction(s.pos(), functionName, functionClassName, functionLanguage,
+        ifNotExists, hasTemporary, isSystemFunction);
+}
+}
+
+SqlDrop SqlDropFunction(Span s, boolean replace) :
+{
+SqlIdentifier functionName = null;
+String functionLanguage = null;
+boolean ifExists = false;
+boolean hasTemporary = false;
 
 Review comment:
   Yes




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] 
add create and drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#discussion_r347725696
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateFunction.java
 ##
 @@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.parser.ddl;
+
+import org.apache.flink.sql.parser.ExtendedSqlNode;
+import org.apache.flink.sql.parser.error.SqlValidateException;
+
+import org.apache.calcite.sql.SqlCharStringLiteral;
+import org.apache.calcite.sql.SqlCreate;
+import org.apache.calcite.sql.SqlIdentifier;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlSpecialOperator;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.parser.SqlParserPos;
+import org.apache.calcite.util.ImmutableNullableList;
+
+import javax.annotation.Nonnull;
+
+import java.util.List;
+
+import static java.util.Objects.requireNonNull;
+
+/**
+ * CREATE FUNCTION DDL sql call.
+ */
+public class SqlCreateFunction extends SqlCreate implements ExtendedSqlNode {
+
+   public static final SqlSpecialOperator OPERATOR = new 
SqlSpecialOperator("CREATE FUNCTION", SqlKind.CREATE_FUNCTION);
+
+   private final SqlIdentifier functionName;
+
+   private final SqlCharStringLiteral functionClassName;
+
+   private final String functionLanguage;
+
+   private final boolean hasTemporary;
+
+   private final boolean isSystemFunction;
+
+   public SqlCreateFunction(
+   SqlParserPos pos,
+   SqlIdentifier functionName,
+   SqlCharStringLiteral functionClassName,
+   String functionLanguage,
+   boolean ifNotExists,
+   boolean hasTemporary,
+   boolean isSystemFunction) {
+   super(OPERATOR, pos, false, ifNotExists);
 
 Review comment:
   Good catch




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] 
add create and drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#discussion_r347725156
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
 ##
 @@ -177,6 +177,87 @@ SqlDescribeDatabase SqlDescribeDatabase() :
 
 }
 
+SqlCreate SqlCreateFunction(Span s, boolean replace) :
+{
+SqlIdentifier functionName = null;
+SqlCharStringLiteral functionClassName = null;
+String functionLanguage = null;
+boolean ifNotExists = false;
+boolean hasTemporary = false;
 
 Review comment:
   Yes.




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] 
add create and drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#discussion_r347725138
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
 ##
 @@ -177,6 +177,87 @@ SqlDescribeDatabase SqlDescribeDatabase() :
 
 }
 
+SqlCreate SqlCreateFunction(Span s, boolean replace) :
+{
+SqlIdentifier functionName = null;
+SqlCharStringLiteral functionClassName = null;
+String functionLanguage = null;
 
 Review comment:
   If we define the language enum in the sql parser module, it can't be accessed 
in the api module. Thus, I would prefer to keep the value as a String here and 
add the enum in the api module. We may do the transformation in 
SqlToOperationConverter, as sketched below.
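
   For illustration only, a minimal sketch of what such an enum plus the 
String-to-enum transformation could look like. The name `FunctionLanguage`, its 
placement in the api module, and the default value are assumptions for this 
sketch, not the actual Flink API:

    // Hypothetical enum living in the api module; name and default are assumed.
    public enum FunctionLanguage {
        JAVA, SCALA, SQL;

        // Transform the raw String carried by the parser node,
        // e.g. performed inside SqlToOperationConverter.
        public static FunctionLanguage fromString(String language) {
            if (language == null) {
                // Assumed default when no LANGUAGE clause is given.
                return JAVA;
            }
            return FunctionLanguage.valueOf(language.toUpperCase());
        }
    }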




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] add create and drop function ddl in SQL parser

2019-11-18 Thread GitBox
HuangZhenQiu commented on a change in pull request #10240: [FLINK-14841][table] 
add create and drop function ddl in SQL parser
URL: https://github.com/apache/flink/pull/10240#discussion_r347725174
 
 

 ##
 File path: flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd
 ##
 @@ -68,7 +70,8 @@
 "CATALOGS",
 "USE",
 "DATABASES",
-"EXTENDED"
+"EXTENDED",
+"SCALA"
 
 Review comment:
   Yes. 




[GitHub] [flink] flinkbot edited a comment on issue #10143: [FLINK-13184]Starting a TaskExecutor blocks the YarnResourceManager's main thread

2019-11-18 Thread GitBox
flinkbot edited a comment on issue #10143: [FLINK-13184]Starting a TaskExecutor 
blocks the YarnResourceManager's main thread
URL: https://github.com/apache/flink/pull/10143#issuecomment-552173061
 
 
   
   ## CI report:
   
   * 89945a11272d023b9b41f23ac40fb47ff2931fb8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135808680)
   * 624a1e2f85bce886c198f223c1b8d161295af501 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135813622)
   * 1adc1cbe8f4a751aad2e62d91b4d1537725fcca2 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135814484)
   * 6885e128c831283cde5a374cff406f48869c32d7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135928664)
   * f158d75ad62fe62c67cbf90875a057e58095bb1a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/137124170)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] Myasuka commented on issue #10244: [hotfix][docs] Fix broken links to table functions

2019-11-18 Thread GitBox
Myasuka commented on issue #10244: [hotfix][docs] Fix broken links to table 
functions
URL: https://github.com/apache/flink/pull/10244#issuecomment-555326027
 
 
   @flinkbot attention @bowenli86 




[GitHub] [flink] WeiZhong94 commented on issue #10061: [FLINK-14581][python] Let python UDF execution no longer rely on the flink directory structure to support running python UDFs on yarn.

2019-11-18 Thread GitBox
WeiZhong94 commented on issue #10061: [FLINK-14581][python] Let python UDF 
execution no longer rely on the flink directory structure to support running 
python UDFs on yarn.
URL: https://github.com/apache/flink/pull/10061#issuecomment-555325820
 
 
   @hequn8128 Thanks for your reminder, I have fixed the test failure in the 
latest commit.




[GitHub] [flink] flinkbot commented on issue #10244: [hotfix][docs] Fix broken links to table functions

2019-11-18 Thread GitBox
flinkbot commented on issue #10244: [hotfix][docs] Fix broken links to table 
functions
URL: https://github.com/apache/flink/pull/10244#issuecomment-555325412
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 24a0f63f335a3d2e074b408531269d3c03054a1d (Tue Nov 19 
04:16:25 UTC 2019)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] KurtYoung commented on issue #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on issue #10239: [Flink-11491][Test] Support all TPC-DS 
queries
URL: https://github.com/apache/flink/pull/10239#issuecomment-555325054
 
 
   BTW, you should add the query and answer files to the rat exclude list, 
which is around line 1300 in the parent pom.xml.




[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347718586
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/stats/CatalogTableStats.java
 ##
 @@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.stats;
+
+import org.apache.flink.table.api.TableEnvironment;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+
+/**
+ * Class to save catalog table statistics.
+ * Consists of {@link CatalogTableStatistics} and {@link CatalogColumnStatistics}.
+ */
+public class CatalogTableStats {
+   private CatalogTableStatistics catalogTableStatistics;
+   private CatalogColumnStatistics catalogColumnStatistics;
+
+   public CatalogTableStats(CatalogTableStatistics catalogTableStatistics, 
CatalogColumnStatistics catalogColumnStatistics) {
+   this.catalogTableStatistics = catalogTableStatistics;
+   this.catalogColumnStatistics = catalogColumnStatistics;
+   }
+
+   public void register2Catalog(TableEnvironment tEnv, String table) {
+   try {
+   tEnv.getCatalog(tEnv.getCurrentCatalog()).get()
+   .alterTableStatistics(new 
ObjectPath(tEnv.getCurrentDatabase(), table), catalogTableStatistics, false);
+   tEnv.getCatalog(tEnv.getCurrentCatalog()).get()
 
 Review comment:
   ditto




[GitHub] [flink] Myasuka opened a new pull request #10244: [hotfix][docs] Fix broken links to table functions

2019-11-18 Thread GitBox
Myasuka opened a new pull request #10244: [hotfix][docs] Fix broken links to 
table functions
URL: https://github.com/apache/flink/pull/10244
 
 
   ## What is the purpose of the change
   Fix broken links to table functions
   
   ## Brief change log
   Correct the links in `index.md`, `table_api.md` and their Chinese documents.
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   




[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347718565
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/stats/CatalogTableStats.java
 ##
 @@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.stats;
+
+import org.apache.flink.table.api.TableEnvironment;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+
+/**
+ * Class to save catalog table statistics.
+ * Consists of {@link CatalogTableStatistics} and {@link CatalogColumnStatistics}.
+ */
+public class CatalogTableStats {
+   private CatalogTableStatistics catalogTableStatistics;
+   private CatalogColumnStatistics catalogColumnStatistics;
+
+   public CatalogTableStats(CatalogTableStatistics catalogTableStatistics, 
CatalogColumnStatistics catalogColumnStatistics) {
+   this.catalogTableStatistics = catalogTableStatistics;
+   this.catalogColumnStatistics = catalogColumnStatistics;
+   }
+
+   public void register2Catalog(TableEnvironment tEnv, String table) {
+   try {
+   tEnv.getCatalog(tEnv.getCurrentCatalog()).get()
 
 Review comment:
   Please handle the case where the catalog doesn't exist. You will also see 
an IDE warning on this `get()`; a possible shape is sketched below.
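
   For illustration, a minimal sketch of resolving the Optional returned by 
`getCatalog` instead of calling `get()` directly (still inside the existing try 
block, and assuming the `org.apache.flink.table.catalog.Catalog` import; the 
exception type and message are placeholders, not necessarily what the PR should 
use):

    // Resolve the catalog once and fail with a clear message if it is absent.
    Catalog catalog = tEnv.getCatalog(tEnv.getCurrentCatalog())
        .orElseThrow(() -> new IllegalStateException(
            "Catalog " + tEnv.getCurrentCatalog() + " does not exist."));
    ObjectPath path = new ObjectPath(tEnv.getCurrentDatabase(), table);
    catalog.alterTableStatistics(path, catalogTableStatistics, false);
    catalog.alterTableColumnStatistics(path, catalogColumnStatistics, false);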




[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347723047
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/utils/TpcdsResultComparator.java
 ##
 @@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.utils;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * Result comparator for TPC-DS test, according to the TPC-DS standard 
specification v2.11.0.
+ * Validation of queries 6, 19, 30, 31, 46, 67, 68 and 81 is skipped for now,
+ * because their results cannot yet match the answer set perfectly;
+ * addressing this will take some further effort.
+ */
+public class TpcdsResultComparator {
+
+   private static final int VALIDATE_QUERY_NUM = 95;
+   private static final List<String> VALIDATE_QUERIES = Arrays.asList(
+   "1", "2", "3", "4", "5", "7", "8", "9", "10",
+   "11", "12", "13", "14a", "14b", "15", "16", "17", "18", "20",
+   "21", "22", "23a", "23b", "24a", "24b", "25", "26", "27", "28", 
"29",
+   "32", "33", "34", "35", "36", "37", "38", "39a", "39b", "40",
+   "41", "42", "43", "44", "45", "47", "48", "49", "50",
+   "51", "52", "53", "54", "55", "56", "57", "58", "59", "60",
+   "61", "62", "63", "64", "65", "66", "69", "70",
+   "71", "72", "73", "74", "75", "76", "77", "78", "79", "80",
+   "82", "83", "84", "85", "86", "87", "88", "89", "90",
+   "91", "92", "93", "94", "95", "96", "97", "98", "99"
+   );
+
+   private static final String REGEX_SPLIT_BAR = "\\|";
+   private static final String FILE_SEPARATOR = "/";
+   private static final String RESULT_SUFFIX = ".ans";
+   private static final double TOLERATED_DOUBLE_DEVIATION = 0.01d;
+
+   public static void main(String[] args) {
+   ParameterTool params = ParameterTool.fromArgs(args);
+   String expectedDir = params.getRequired("expectedDir");
+   String actualDir = params.getRequired("actualDir");
+   int passCnt = 0;
+   for (String queryId : VALIDATE_QUERIES) {
+   File expectedFile = new File(expectedDir + 
FILE_SEPARATOR + queryId + RESULT_SUFFIX);
+   File actualFile = new File(actualDir + FILE_SEPARATOR + 
queryId + RESULT_SUFFIX);
+
+   if (compareResult(expectedFile, actualFile)) {
+   passCnt++;
+   System.out.println("[INFO] validate success, 
file: " + expectedFile.getName() + " cnt:" + passCnt);
+   } else {
+   System.out.println("[WARN] validate fail, file: 
" + expectedFile.getName() + "\n");
 
 Review comment:
   Output the content when there is a mismatch? A possible shape is sketched below.
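
   For illustration, a hedged sketch of printing the first differing line when 
validation fails. The helper below is hypothetical and assumes the expected and 
actual files have already been read into `List<String>`s of lines; it is not 
code from the PR:

    // Hypothetical helper: print the first pair of lines that differ.
    private static void printFirstDiff(List<String> expected, List<String> actual) {
        int max = Math.max(expected.size(), actual.size());
        for (int i = 0; i < max; i++) {
            String e = i < expected.size() ? expected.get(i) : "<missing>";
            String a = i < actual.size() ? actual.get(i) : "<missing>";
            if (!e.equals(a)) {
                System.out.println("[WARN] line " + (i + 1) + " expected: " + e);
                System.out.println("[WARN] line " + (i + 1) + " actual:   " + a);
                return;
            }
        }
    }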




[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347718057
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/schema/TpcdsSchemaProvider.java
 ##
 @@ -0,0 +1,526 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.schema;
+
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.types.DataType;
+
+import java.sql.Date;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * Class to provide all TPC-DS tables' schema information.
+ * The data type of each column uses {@link DataType}.
+ */
+public class TpcdsSchemaProvider {
+
+   private static int tpcdsTableNums = 24;
+   private static Map<String, TpcdsSchema> schemaMap = new HashMap<>(tpcdsTableNums);
+
+   static {
+   schemaMap.put("catalog_sales", new TpcdsSchema(
+   Arrays.asList(
+   new Column("cs_sold_date_sk", 0, 
DataTypes.BIGINT()),
+   new Column("cs_sold_time_sk", 1, 
DataTypes.BIGINT()),
+   new Column("cs_ship_date_sk", 2, 
DataTypes.BIGINT()),
+   new Column("cs_bill_customer_sk", 3, 
DataTypes.BIGINT()),
+   new Column("cs_bill_cdemo_sk", 4, 
DataTypes.BIGINT()),
+   new Column("cs_bill_hdemo_sk", 5, 
DataTypes.BIGINT()),
+   new Column("cs_bill_addr_sk", 6, 
DataTypes.BIGINT()),
+   new Column("cs_ship_customer_sk", 7, 
DataTypes.BIGINT()),
+   new Column("cs_ship_cdemo_sk", 8, 
DataTypes.BIGINT()),
+   new Column("cs_ship_hdemo_sk", 9, 
DataTypes.BIGINT()),
+   new Column("cs_ship_addr_sk", 10, 
DataTypes.BIGINT()),
+   new Column("cs_call_center_sk", 11, 
DataTypes.BIGINT()),
+   new Column("cs_catalog_page_sk", 12, 
DataTypes.BIGINT()),
+   new Column("cs_ship_mode_sk", 13, 
DataTypes.BIGINT()),
+   new Column("cs_warehouse_sk", 14, 
DataTypes.BIGINT()),
+   new Column("cs_item_sk", 15, 
DataTypes.BIGINT()),
+   new Column("cs_promo_sk", 16, 
DataTypes.BIGINT()),
+   new Column("cs_order_number", 17, 
DataTypes.BIGINT()),
+   new Column("cs_quantity", 18, DataTypes.INT()),
+   new Column("cs_wholesale_cost", 19, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_list_price", 20, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_sales_price", 21, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_ext_discount_amt", 22, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_ext_sales_price", 23, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_ext_wholesale_cost", 24, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_ext_list_price", 25, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_ext_tax", 26, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_coupon_amt", 27, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_ext_ship_cost", 28, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_net_paid", 29, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_net_paid_inc_tax", 30, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_net_paid_inc_ship", 31, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_net_paid_inc_ship_tax", 32, 
DataTypes.DECIMAL(7, 2)),
+   new Column("cs_net_profit", 33, 
DataTypes.DECIMAL(7, 2))
+   )));
+   schemaMap.put("catalog_returns", new TpcdsSchema(
+  

[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347718616
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/stats/CatalogTableStats.java
 ##
 @@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.stats;
+
+import org.apache.flink.table.api.TableEnvironment;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+
+/**
+ * Class to save catalog table statistics.
+ * Consists of {@link CatalogTableStatistics} and {@link CatalogColumnStatistics}.
+ */
+public class CatalogTableStats {
+   private CatalogTableStatistics catalogTableStatistics;
+   private CatalogColumnStatistics catalogColumnStatistics;
+
+   public CatalogTableStats(CatalogTableStatistics catalogTableStatistics, 
CatalogColumnStatistics catalogColumnStatistics) {
+   this.catalogTableStatistics = catalogTableStatistics;
+   this.catalogColumnStatistics = catalogColumnStatistics;
+   }
+
+   public void register2Catalog(TableEnvironment tEnv, String table) {
+   try {
+   tEnv.getCatalog(tEnv.getCurrentCatalog()).get()
+   .alterTableStatistics(new 
ObjectPath(tEnv.getCurrentDatabase(), table), catalogTableStatistics, false);
+   tEnv.getCatalog(tEnv.getCurrentCatalog()).get()
+   .alterTableColumnStatistics(new 
ObjectPath(tEnv.getCurrentDatabase(), table), catalogColumnStatistics, false);
+   } catch (Exception e) {
+   e.printStackTrace();
 
 Review comment:
   Don't eat the exception; rethrow it, as sketched below.
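
   For illustration, a minimal sketch of propagating the failure instead of 
swallowing it (wrapping in `RuntimeException` is just one option; the PR may 
prefer a more specific exception type):

    } catch (Exception e) {
        // Fail fast so the test does not continue with unregistered statistics.
        throw new RuntimeException(
            "Failed to register statistics for table " + table + ".", e);
    }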




[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347722729
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/utils/AnswerFormatter.java
 ##
 @@ -0,0 +1,206 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.utils;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileReader;
+import java.io.FileWriter;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Answer set formatting tool. Converts the delimiter in TPC-DS answer sets
+ * from spaces or tabs to a bar ('|').
+ * Before converting, the TPC-DS results need to be formatted as follows:
+ * 1. Split answer sets that contain multiple query results (queries 14, 23,
+ * 24 and 39) into one answer set per query.
+ * 2. Replace tabs with spaces in the answer sets using vim:
+ * (1) cd answer_set directory
+ * (2) vim 1.ans with command model,
+ * :set ts=8
+ * :set noexpandtab
+ * :%retab!
+ * :args ./*.ans
+ * :argdo %retab! |update
+ * (3) save and quit vim.
+ */
+public class AnswerFormatter {
+
+   private static final int SPACE_BETWEEN_COL = 1;
+   private static final String RESULT_HEAD_STRING_BAR = "|";
+   private static final String RESULT_HEAD_STRING_DASH = "--";
+   private static final String RESULT_HEAD_STRING_SPACE = " ";
+   private static final String COL_DELIMITER = "|";
+   private static final String ANSWER_FILE_SUFFIX = ".ans";
+   private static final String REGEX_SPLIT_BAR = "\\|";
+
+   /**
+* 1. Flink keeps NULLS_FIRST in ASC order and NULLS_LAST in DESC order;
+* the corresponding answer set file is chosen here accordingly.
+* 2. For queries 8, 14a, 18, 70 and 77, the decimal precision of the answer
+* set is too low and unreasonable; compared with results from SQL Server,
+* they match strictly.
+*/
+   private static final List<String> ORIGIN_ANSWER_FILE = Arrays.asList(
+   "1", "2", "3", "4", "5_NULLS_FIRST", "6_NULLS_FIRST", "7", 
"8_SQL_SERVER", "9", "10",
+   "11", "12", "13", "14a_SQL_SERVER", "14b_NULLS_FIRST", 
"15_NULLS_FIRST", "16",
+   "17", "18_SQL_SERVER", "19", "20_NULLS_FIRST", 
"21_NULLS_FIRST", "22_NULLS_FIRST",
+   "23a_NULLS_FIRST", "23b_NULLS_FIRST", "24a", "24b", "25", "26", 
"27_NULLS_FIRST",
+   "28", "29", "30", "31", "32", "33", "34_NULLS_FIRST", 
"35_NULLS_FIRST", "36_NULLS_FIRST",
+   "37", "38", "39a", "39b", "40", "41", "42", "43", "44", "45", 
"46_NULLS_FIRST",
+   "47", "48", "49", "50", "51", "52", "53", "54", "55", 
"56_NULLS_FIRST", "57",
+   "58", "59", "60", "61", "62_NULLS_FIRST", "63", "64", 
"65_NULLS_FIRST", "66_NULLS_FIRST",
+   "67_NULLS_FIRST", "68_NULLS_FIRST", "69", "70_SQL_SERVER", 
"71_NULLS_LAST", "72_NULLS_FIRST", "73",
+   "74", "75", "76_NULLS_FIRST", "77_SQL_SERVER", "78", 
"79_NULLS_FIRST", "80_NULLS_FIRST",
+   "81", "82", "83", "84", "85", "86_NULLS_FIRST", "87", "88", 
"89", "90",
+   "91", "92", "93_NULLS_FIRST", "94", "95", "96", "97", 
"98_NULLS_FIRST", "99_NULLS_FIRST"
+   );
+
+   public static void main(String[] args) throws Exception {
+   ParameterTool params = ParameterTool.fromArgs(args);
+   String originDir = params.getRequired("originDir");
+   String destDir = params.getRequired("destDir");
+   for (int i = 0; i < ORIGIN_ANSWER_FILE.size(); i++) {
+   String file = ORIGIN_ANSWER_FILE.get(i);
+   String originFileName = file + ANSWER_FILE_SUFFIX;
+   String destFileName = file.split("_")[0] + ANSWER_FILE_SUFFIX;
+   File originFile = new File(originDir + "/" + originFileName);
+   File destFile = new File(destDir + "/" + destFileName);
+   convert(originFile, destFile);
+  

[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347722561
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/utils/AnswerFormatter.java
 ##
 @@ -0,0 +1,206 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.utils;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileReader;
+import java.io.FileWriter;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Answer set formatting tool. Converts the delimiter in TPC-DS answer sets
+ * from spaces or tabs to a bar ('|').
+ * Before converting, the TPC-DS results need to be formatted as follows:
+ * 1. Split answer sets that contain multiple query results (queries 14, 23,
+ * 24 and 39) into one answer set per query.
+ * 2. Replace tabs with spaces in the answer sets using vim:
+ * (1) cd answer_set directory
+ * (2) vim 1.ans with command model,
+ * :set ts=8
+ * :set noexpandtab
+ * :%retab!
+ * :args ./*.ans
+ * :argdo %retab! |update
+ * (3) save and quit vim.
+ */
+public class AnswerFormatter {
+
+   private static final int SPACE_BETWEEN_COL = 1;
+   private static final String RESULT_HEAD_STRING_BAR = "|";
+   private static final String RESULT_HEAD_STRING_DASH = "--";
+   private static final String RESULT_HEAD_STRING_SPACE = " ";
+   private static final String COL_DELIMITER = "|";
+   private static final String ANSWER_FILE_SUFFIX = ".ans";
+   private static final String REGEX_SPLIT_BAR = "\\|";
+
+   /**
+* 1. Flink keeps NULLS_FIRST in ASC order and NULLS_LAST in DESC order;
+* the corresponding answer set file is chosen here accordingly.
+* 2. For queries 8, 14a, 18, 70 and 77, the decimal precision of the answer
+* set is too low and unreasonable; compared with results from SQL Server,
+* they match strictly.
+*/
+   private static final List<String> ORIGIN_ANSWER_FILE = Arrays.asList(
+   "1", "2", "3", "4", "5_NULLS_FIRST", "6_NULLS_FIRST", "7", 
"8_SQL_SERVER", "9", "10",
+   "11", "12", "13", "14a_SQL_SERVER", "14b_NULLS_FIRST", 
"15_NULLS_FIRST", "16",
+   "17", "18_SQL_SERVER", "19", "20_NULLS_FIRST", 
"21_NULLS_FIRST", "22_NULLS_FIRST",
+   "23a_NULLS_FIRST", "23b_NULLS_FIRST", "24a", "24b", "25", "26", 
"27_NULLS_FIRST",
+   "28", "29", "30", "31", "32", "33", "34_NULLS_FIRST", 
"35_NULLS_FIRST", "36_NULLS_FIRST",
+   "37", "38", "39a", "39b", "40", "41", "42", "43", "44", "45", 
"46_NULLS_FIRST",
+   "47", "48", "49", "50", "51", "52", "53", "54", "55", 
"56_NULLS_FIRST", "57",
+   "58", "59", "60", "61", "62_NULLS_FIRST", "63", "64", 
"65_NULLS_FIRST", "66_NULLS_FIRST",
+   "67_NULLS_FIRST", "68_NULLS_FIRST", "69", "70_SQL_SERVER", 
"71_NULLS_LAST", "72_NULLS_FIRST", "73",
+   "74", "75", "76_NULLS_FIRST", "77_SQL_SERVER", "78", 
"79_NULLS_FIRST", "80_NULLS_FIRST",
+   "81", "82", "83", "84", "85", "86_NULLS_FIRST", "87", "88", 
"89", "90",
+   "91", "92", "93_NULLS_FIRST", "94", "95", "96", "97", 
"98_NULLS_FIRST", "99_NULLS_FIRST"
+   );
+
+   public static void main(String[] args) throws Exception {
+   ParameterTool params = ParameterTool.fromArgs(args);
+   String originDir = params.getRequired("originDir");
+   String destDir = params.getRequired("destDir");
+   for (int i = 0; i < ORIGIN_ANSWER_FILE.size(); i++) {
+   String file = ORIGIN_ANSWER_FILE.get(i);
+   String originFileName = file + ANSWER_FILE_SUFFIX;
+   String destFileName = file.split("_")[0] + ANSWER_FILE_SUFFIX;
+   File originFile = new File(originDir + "/" + originFileName);
+   File destFile = new File(destDir + "/" + destFileName);
+   convert(originFile, destFile);
+  

[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347720572
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/utils/AnswerFormatter.java
 ##
 @@ -0,0 +1,206 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.utils;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileReader;
+import java.io.FileWriter;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Answer set formatting tool. Converts the delimiter in TPC-DS answer sets
+ * from spaces or tabs to a bar ('|').
+ * Before converting, the TPC-DS results need to be formatted as follows:
+ * 1. Split answer sets that contain multiple query results (queries 14, 23,
+ * 24 and 39) into one answer set per query.
+ * 2. Replace tabs with spaces in the answer sets using vim:
+ * (1) cd answer_set directory
+ * (2) vim 1.ans with command model,
+ * :set ts=8
+ * :set noexpandtab
+ * :%retab!
+ * :args ./*.ans
+ * :argdo %retab! |update
+ * (3) save and quit vim.
+ */
+public class AnswerFormatter {
+
+   private static final int SPACE_BETWEEN_COL = 1;
+   private static final String RESULT_HEAD_STRING_BAR = "|";
+   private static final String RESULT_HEAD_STRING_DASH = "--";
+   private static final String RESULT_HEAD_STRING_SPACE = " ";
+   private static final String COL_DELIMITER = "|";
+   private static final String ANSWER_FILE_SUFFIX = ".ans";
+   private static final String REGEX_SPLIT_BAR = "\\|";
+
+   /**
+* 1. Flink keeps NULLS_FIRST in ASC order and NULLS_LAST in DESC order,
+*    so the corresponding answer set file is chosen here.
+* 2. For queries 8, 14a, 18, 70 and 77 the decimal precision of the official
+*    answer set is too low to compare against; their results are instead
+*    compared with results from SQL Server, which they match exactly.
+*/
+   private static final List&lt;String&gt; ORIGIN_ANSWER_FILE = Arrays.asList(
+   "1", "2", "3", "4", "5_NULLS_FIRST", "6_NULLS_FIRST", "7", 
"8_SQL_SERVER", "9", "10",
+   "11", "12", "13", "14a_SQL_SERVER", "14b_NULLS_FIRST", 
"15_NULLS_FIRST", "16",
+   "17", "18_SQL_SERVER", "19", "20_NULLS_FIRST", 
"21_NULLS_FIRST", "22_NULLS_FIRST",
+   "23a_NULLS_FIRST", "23b_NULLS_FIRST", "24a", "24b", "25", "26", 
"27_NULLS_FIRST",
+   "28", "29", "30", "31", "32", "33", "34_NULLS_FIRST", 
"35_NULLS_FIRST", "36_NULLS_FIRST",
+   "37", "38", "39a", "39b", "40", "41", "42", "43", "44", "45", 
"46_NULLS_FIRST",
+   "47", "48", "49", "50", "51", "52", "53", "54", "55", 
"56_NULLS_FIRST", "57",
+   "58", "59", "60", "61", "62_NULLS_FIRST", "63", "64", 
"65_NULLS_FIRST", "66_NULLS_FIRST",
+   "67_NULLS_FIRST", "68_NULLS_FIRST", "69", "70_SQL_SERVER", 
"71_NULLS_LAST", "72_NULLS_FIRST", "73",
+   "74", "75", "76_NULLS_FIRST", "77_SQL_SERVER", "78", 
"79_NULLS_FIRST", "80_NULLS_FIRST",
+   "81", "82", "83", "84", "85", "86_NULLS_FIRST", "87", "88", 
"89", "90",
+   "91", "92", "93_NULLS_FIRST", "94", "95", "96", "97", 
"98_NULLS_FIRST", "99_NULLS_FIRST"
+   );
+
+   public static void main(String[] args) throws Exception {
+   ParameterTool params = ParameterTool.fromArgs(args);
+   String originDir = params.getRequired("originDir");
+   String destDir = params.getRequired("destDir");
+   for (int i = 0; i < ORIGIN_ANSWER_FILE.size(); i++) {
+   String file = ORIGIN_ANSWER_FILE.get(i);
+   String originFileName = file + ANSWER_FILE_SUFFIX;
+   String destFileName = file.split("_")[0] + ANSWER_FILE_SUFFIX;
+   File originFile = new File(originDir + "/" + originFileName);
+   File destFile = new File(destDir + "/" + destFileName);
+   convert(originFile, destFile);
+  
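
[Editor's note] The quoted diff is cut off before the body of `convert`. For orientation only — this is a hedged sketch, not the code from the PR — converting a space-aligned answer file into a '|'-delimited one could look roughly like this, assuming the first two lines of each file are a header row and a dash ruler whose dash runs (separated by single spaces) encode the column widths:

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.File;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.util.ArrayList;
    import java.util.List;

    public class ConvertSketch {
        // Hypothetical stand-in for the truncated convert(originFile, destFile):
        // derives column boundaries from the dash ruler line and rewrites every
        // data row as a '|'-delimited line.
        static void convert(File origin, File dest) throws Exception {
            try (BufferedReader in = new BufferedReader(new FileReader(origin));
                 BufferedWriter out = new BufferedWriter(new FileWriter(dest))) {
                String header = in.readLine();   // column names
                String ruler = in.readLine();    // e.g. "----- -------- ---"
                if (header == null || ruler == null) {
                    return; // nothing to convert
                }
                // Column widths are the lengths of the dash runs.
                List<Integer> widths = new ArrayList<>();
                for (String dashes : ruler.trim().split(" +")) {
                    widths.add(dashes.length());
                }
                String line;
                while ((line = in.readLine()) != null) {
                    StringBuilder row = new StringBuilder();
                    int pos = 0;
                    for (int i = 0; i < widths.size(); i++) {
                        int end = Math.min(pos + widths.get(i), line.length());
                        row.append(line.substring(Math.min(pos, line.length()), end).trim());
                        if (i < widths.size() - 1) {
                            row.append('|');
                        }
                        pos = end + 1; // skip the single separator space
                    }
                    out.write(row.toString());
                    out.newLine();
                }
            }
        }
    }

This matches the SPACE_BETWEEN_COL = 1 constant declared above (one separator space between columns), but the ruler-based width detection is an assumption about the answer set layout.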

[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347720758
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/utils/AnswerFormatter.java
 ##
 @@ -0,0 +1,206 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.utils;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileReader;
+import java.io.FileWriter;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Answer set formatting tool. Converts the column delimiter in the TPC-DS answer sets
+ * from spaces or tabs to a bar ('|'). Before converting, the TPC-DS results need to be
+ * prepared as follows:
+ * 1. Split each answer set that contains multiple query results (queries 14, 23, 24
+ *    and 39) into one answer set per query.
+ * 2. Replace tabs with spaces in the answer sets using vim:
+ * (1) cd into the answer_set directory.
+ * (2) open vim (e.g. vim 1.ans) and, in command mode, run:
+ * :set ts=8
+ * :set noexpandtab
+ * :%retab!
+ * :args ./*.ans
+ * :argdo %retab! |update
+ * (3) save and quit vim.
+ */
+public class AnswerFormatter {
+
+   private static final int SPACE_BETWEEN_COL = 1;
+   private static final String RESULT_HEAD_STRING_BAR = "|";
+   private static final String RESULT_HEAD_STRING_DASH = "--";
+   private static final String RESULT_HEAD_STRING_SPACE = " ";
+   private static final String COL_DELIMITER = "|";
+   private static final String ANSWER_FILE_SUFFIX = ".ans";
+   private static final String REGEX_SPLIT_BAR = "\\|";
+
+   /**
+* 1. Flink keeps NULLS_FIRST in ASC order and NULLS_LAST in DESC order,
+*    so the corresponding answer set file is chosen here.
+* 2. For queries 8, 14a, 18, 70 and 77 the decimal precision of the official
+*    answer set is too low to compare against; their results are instead
+*    compared with results from SQL Server, which they match exactly.
+*/
+   private static final List&lt;String&gt; ORIGIN_ANSWER_FILE = Arrays.asList(
+   "1", "2", "3", "4", "5_NULLS_FIRST", "6_NULLS_FIRST", "7", 
"8_SQL_SERVER", "9", "10",
+   "11", "12", "13", "14a_SQL_SERVER", "14b_NULLS_FIRST", 
"15_NULLS_FIRST", "16",
+   "17", "18_SQL_SERVER", "19", "20_NULLS_FIRST", 
"21_NULLS_FIRST", "22_NULLS_FIRST",
+   "23a_NULLS_FIRST", "23b_NULLS_FIRST", "24a", "24b", "25", "26", 
"27_NULLS_FIRST",
+   "28", "29", "30", "31", "32", "33", "34_NULLS_FIRST", 
"35_NULLS_FIRST", "36_NULLS_FIRST",
+   "37", "38", "39a", "39b", "40", "41", "42", "43", "44", "45", 
"46_NULLS_FIRST",
+   "47", "48", "49", "50", "51", "52", "53", "54", "55", 
"56_NULLS_FIRST", "57",
+   "58", "59", "60", "61", "62_NULLS_FIRST", "63", "64", 
"65_NULLS_FIRST", "66_NULLS_FIRST",
+   "67_NULLS_FIRST", "68_NULLS_FIRST", "69", "70_SQL_SERVER", 
"71_NULLS_LAST", "72_NULLS_FIRST", "73",
+   "74", "75", "76_NULLS_FIRST", "77_SQL_SERVER", "78", 
"79_NULLS_FIRST", "80_NULLS_FIRST",
+   "81", "82", "83", "84", "85", "86_NULLS_FIRST", "87", "88", 
"89", "90",
+   "91", "92", "93_NULLS_FIRST", "94", "95", "96", "97", 
"98_NULLS_FIRST", "99_NULLS_FIRST"
+   );
+
+   public static void main(String[] args) throws Exception {
+   ParameterTool params = ParameterTool.fromArgs(args);
+   String originDir = params.getRequired("originDir");
+   String destDir = params.getRequired("destDir");
+   for (int i = 0; i < ORIGIN_ANSWER_FILE.size(); i++) {
+   String file = ORIGIN_ANSWER_FILE.get(i);
+   String originFileName = file + ANSWER_FILE_SUFFIX;
+   String destFileName = file.split("_")[0] + ANSWER_FILE_SUFFIX;
+   File originFile = new File(originDir + "/" + originFileName);
+   File destFile = new File(destDir + "/" + destFileName);
+   convert(originFile, destFile);
+  

[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347723729
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/utils/TpcdsResultComparator.java
 ##
 @@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.utils;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * Result comparator for the TPC-DS test, according to the TPC-DS standard specification v2.11.0.
+ * Validation of queries 6, 19, 30, 31, 46, 67, 68 and 81 is skipped for now,
+ * because their results do not yet match the answer sets exactly and
+ * it will take some effort to address that.
+ */
+public class TpcdsResultComparator {
+
+   private static final int VALIDATE_QUERY_NUM = 95;
+   private static final List&lt;String&gt; VALIDATE_QUERIES = Arrays.asList(
+   "1", "2", "3", "4", "5", "7", "8", "9", "10",
+   "11", "12", "13", "14a", "14b", "15", "16", "17", "18", "20",
+   "21", "22", "23a", "23b", "24a", "24b", "25", "26", "27", "28", 
"29",
+   "32", "33", "34", "35", "36", "37", "38", "39a", "39b", "40",
+   "41", "42", "43", "44", "45", "47", "48", "49", "50",
+   "51", "52", "53", "54", "55", "56", "57", "58", "59", "60",
+   "61", "62", "63", "64", "65", "66", "69", "70",
+   "71", "72", "73", "74", "75", "76", "77", "78", "79", "80",
+   "82", "83", "84", "85", "86", "87", "88", "89", "90",
+   "91", "92", "93", "94", "95", "96", "97", "98", "99"
+   );
+
+   private static final String REGEX_SPLIT_BAR = "\\|";
+   private static final String FILE_SEPARATOR = "/";
+   private static final String RESULT_SUFFIX = ".ans";
+   private static final double TOLERATED_DOUBLE_DEVIATION = 0.01d;
+
+   public static void main(String[] args) {
+   ParameterTool params = ParameterTool.fromArgs(args);
+   String expectedDir = params.getRequired("expectedDir");
+   String actualDir = params.getRequired("actualDir");
+   int passCnt = 0;
+   for (String queryId : VALIDATE_QUERIES) {
+   File expectedFile = new File(expectedDir + FILE_SEPARATOR + queryId + RESULT_SUFFIX);
+   File actualFile = new File(actualDir + FILE_SEPARATOR + queryId + RESULT_SUFFIX);
+
+   if (compareResult(expectedFile, actualFile)) {
+   passCnt++;
+   System.out.println("[INFO] validate success, file: " + expectedFile.getName() + " cnt:" + passCnt);
+   } else {
+   System.out.println("[WARN] validate fail, file: " + expectedFile.getName() + "\n");
+   }
+   }
+   if (passCnt == VALIDATE_QUERY_NUM) {
+   System.exit(0);
+   }
+   System.exit(1);
+   }
+
+   private static boolean compareResult(File expectedFile, File actualFile) {
+   try {
+   BufferedReader expectedReader = new BufferedReader(new FileReader(expectedFile));
+   BufferedReader actualReader = new BufferedReader(new FileReader(actualFile));
+
+   int expectedLineNum = 0;
+   int actualLineNum = 0;
+
+   String expectedLine, actualLine;
+   while ((expectedLine = expectedReader.readLine()) != null &&
+   (actualLine = actualReader.readLine()) != null) {
+   expectedLineNum++;
+   actualLineNum++;
+
+   // the top 8 result lines of query 34,
+   // result lines 2 and 3 of query 77,
+   // result lines 18 and 19 of query 79
+ 
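
[Editor's note] The quoted diff is cut off before the per-row comparison itself. As a hedged sketch only (not the PR's code), the constants above — REGEX_SPLIT_BAR and TOLERATED_DOUBLE_DEVIATION — suggest a check along these lines: split each line on '|', compare tokens that parse as doubles numerically within the tolerance, and compare everything else as trimmed strings. The class and method names are illustrative:

    public class RowComparatorSketch {
        private static final double TOLERATED_DOUBLE_DEVIATION = 0.01d;

        // Compare one expected/actual row pair column by column.
        static boolean rowsMatch(String expectedLine, String actualLine) {
            String[] expected = expectedLine.split("\\|");
            String[] actual = actualLine.split("\\|");
            if (expected.length != actual.length) {
                return false;
            }
            for (int i = 0; i < expected.length; i++) {
                String e = expected[i].trim();
                String a = actual[i].trim();
                try {
                    double de = Double.parseDouble(e);
                    double da = Double.parseDouble(a);
                    // relative deviation, guarding against division by zero
                    double scale = Math.max(1.0, Math.abs(de));
                    if (Math.abs(de - da) / scale > TOLERATED_DOUBLE_DEVIATION) {
                        return false;
                    }
                } catch (NumberFormatException ignored) {
                    // non-numeric columns must match exactly
                    if (!e.equals(a)) {
                        return false;
                    }
                }
            }
            return true;
        }
    }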

[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347717927
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/schema/Schema.java
 ##
 @@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.schema;
+
+import org.apache.flink.table.types.DataType;
+
+import java.util.List;
+
+/** The schema interface. */
+public interface Schema {
 
 Review comment:
   unnecessary interface here. 
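
  [Editor's note] To illustrate what the reviewer is suggesting (a hypothetical sketch, not code from the PR): a single-implementation interface like this can usually be folded into a plain class in the same package, e.g.:

      package org.apache.flink.table.tpcds.schema;

      import java.util.List;

      /**
       * Hypothetical sketch: a concrete schema class replacing the Schema
       * interface, holding the table's columns directly. Column is the PR's
       * own column descriptor in the same package.
       */
      public class TpcdsSchema {
          private final List<Column> columns;

          public TpcdsSchema(List<Column> columns) {
              this.columns = columns;
          }

          public List<Column> getColumns() {
              return columns;
          }
      }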


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] Support all TPC-DS queries

2019-11-18 Thread GitBox
KurtYoung commented on a change in pull request #10239: [Flink-11491][Test] 
Support all TPC-DS queries
URL: https://github.com/apache/flink/pull/10239#discussion_r347717782
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-tpcds-test/src/main/java/org/apache/flink/table/tpcds/schema/Column.java
 ##
 @@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.tpcds.schema;
+
+import org.apache.flink.table.types.DataType;
+
+/** Class to define column schema of TPS-DS table. */
+public class Column {
+   private String name;
+   private int index;
 
 Review comment:
   this field is useless? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14164) Add a metric to show failover count regarding fine grained recovery

2019-11-18 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977102#comment-16977102
 ] 

Zhu Zhu commented on FLINK-14164:
-

Hi [~stevenz3wu], we have to inform you that `numberOfRestarts` has been added as a 
gauge rather than a meter.
This is because we found that a meter can be inaccurate when the measured events 
happen at a very low frequency (see 
[discussion|https://github.com/apache/flink/pull/10082#discussion_r343562150]). 
Such a meter could only be used to build alerts like "restarts > 0" and would not be 
able to accurately trigger other alerts, like "restarts > 10 in the past hour". 
So it is not a good fit as a meter.

The current idea is that it is better to have users rely on time-series 
databases, which can derive the rate at whatever granularity they desire, and thus 
build flexible and accurate monitoring/alerting for low-frequency events.

Would that work for you? Feel free to share your concerns.
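
[Editor's note] For readers wiring up such monitoring themselves, a minimal sketch of exposing a restart count as a gauge. The class, field and method names here are illustrative; only MetricGroup#gauge and Gauge are actual Flink API:

    import org.apache.flink.metrics.Gauge;
    import org.apache.flink.metrics.MetricGroup;

    import java.util.concurrent.atomic.AtomicLong;

    // Sketch: expose a monotonically increasing restart count as a gauge. A
    // time-series backend scraping this value can derive rates over any window,
    // which is exactly what a low-frequency Meter cannot do reliably.
    public class RestartMetrics {
        private final AtomicLong numberOfRestarts = new AtomicLong();

        public void register(MetricGroup jobMetricGroup) {
            jobMetricGroup.gauge("numberOfRestarts", (Gauge<Long>) numberOfRestarts::get);
        }

        public void onRestart() {
            numberOfRestarts.incrementAndGet();
        }
    }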


> Add a metric to show failover count regarding fine grained recovery
> ---
>
> Key: FLINK-14164
> URL: https://issues.apache.org/jira/browse/FLINK-14164
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / Metrics
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Previously, Flink used the restart-all strategy to recover jobs from failures, and 
> the metric "fullRestart" showed the count of failovers.
> However, with fine grained recovery introduced in 1.9.0, the "fullRestart" 
> metric only reveals how many times the entire graph has been restarted, not 
> including the number of fine grained failure recoveries.
> As many users want to build their job alerting on failovers, I'd 
> propose adding a new metric {{numberOfRestarts}} which also respects 
> fine grained recoveries. The metric should be a Gauge.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

