[jira] [Created] (FLINK-15379) JDBC connector returns wrong values if the defined DataType contains precision

2019-12-23 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-15379:
--

 Summary: JDBC connector returns wrong values if the defined 
DataType contains precision
 Key: FLINK-15379
 URL: https://issues.apache.org/jira/browse/FLINK-15379
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.10.0
Reporter: Leonard Xu


Given the following MySQL table:

 
{code:sql}
CREATE TABLE `currency` (
  `currency_id` bigint(20) NOT NULL,
  `currency_name` varchar(200) DEFAULT NULL,
  `rate` double DEFAULT NULL,
  `currency_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `country` varchar(100) DEFAULT NULL,
  `timestamp6` timestamp(6) NULL DEFAULT NULL,
  `time6` time(6) DEFAULT NULL,
  `gdp` decimal(10,4) DEFAULT NULL,
  PRIMARY KEY (`currency_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

+-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+
| currency_id | currency_name | rate | currency_time       | country | timestamp6                 | time6           | gdp      |
+-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+
|           1 | US Dollar     | 1020 | 2019-12-20 17:23:00 | America | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|           2 | Euro          |  114 | 2019-12-20 12:22:00 | Germany | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|           3 | RMB           |   16 | 2019-12-20 12:22:00 | China   | 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|           4 | Yen           |    1 | 2019-12-20 12:22:00 | Japan   | 2019-12-20 12:22:00.123456 | 12:22:00.123456 | 100.4112 |
+-------------+---------------+------+---------------------+---------+----------------------------+-----------------+----------+{code}
 

If the user defines a JDBC dimension table like:

 
{code:java}
public static final String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
"  currency_id BIGINT,\n" +
"  currency_name STRING,\n" +
"  rate DOUBLE,\n" +
"  currency_time TIMESTAMP(3),\n" +
"  country STRING,\n" +
"  timestamp6 TIMESTAMP(6),\n" +
"  time6 TIME(6),\n" +
"  gdp DECIMAL(10, 4)\n" +
") WITH (\n" +
"   'connector.type' = 'jdbc',\n" +
"   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
"   'connector.username' = 'root'," +
"   'connector.table' = 'currency',\n" +
"   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
"   'connector.lookup.cache.max-rows' = '500', \n" +
"   'connector.lookup.cache.ttl' = '10s',\n" +
"   'connector.lookup.max-retries' = '3'" +
")";
{code}
 

The user gets wrong values in the columns `timestamp6`, `time6`, and `gdp`:
{code:java}
// Lookup output columns: c.currency_id, c.currency_name, c.rate, c.currency_time, c.country, c.timestamp6, c.time6, c.gdp

1,US Dollar,1020.0,2019-12-20T17:23,America,2019-12-20T12:22:00.023456,12:22,-0.0001
2,Euro,114.0,2019-12-20T12:22,Germany,2019-12-20T12:22:00.023456,12:22,-0.0001
4,Yen,1.0,2019-12-20T12:22,Japan,2019-12-20T12:22:00.123456,12:22,-0.0001{code}
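
For contrast, a minimal standalone JDBC probe (an illustrative sketch, assuming the MySQL instance and credentials from the DDL above; not part of the report) shows the driver itself returning full precision, which suggests the truncation happens in the connector's type conversion rather than in MySQL:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CurrencyPrecisionProbe {
    public static void main(String[] args) throws Exception {
        // Connection details are taken from the DDL above; the empty root
        // password is an assumption -- adjust to your local setup.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "root", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT timestamp6, time6, gdp FROM currency")) {
            while (rs.next()) {
                // Timestamp keeps its fractional seconds, the TIME(6) value is
                // read as text to show all six digits, and BigDecimal keeps scale 4.
                System.out.printf("%s | %s | %s%n",
                    rs.getTimestamp(1), rs.getString(2), rs.getBigDecimal(3));
            }
        }
    }
}
{code}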






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15125) PROCTIME() computed column defined in CREATE TABLE doesn't work

2019-12-23 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15125:
---

Assignee: Danny Chen

> PROCTIME() computed column defined in CREATE TABLE doesn't work
> ---
>
> Key: FLINK-15125
> URL: https://issues.apache.org/jira/browse/FLINK-15125
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Danny Chen
>Priority: Major
> Fix For: 1.10.0
>
>
> {{CatalogTableITCase#testStreamSourceTableWithProctime}} is ignored for now. 
> We should enable it and fix the problem. The exception stack:
> {code}
> scala.MatchError: PROCTIME() (of class org.apache.calcite.rex.RexCall)
>   at 
> org.apache.flink.table.planner.plan.rules.logical.BatchLogicalWindowAggregateRule.getTimeFieldReference(BatchLogicalWindowAggregateRule.scala:59)
>   at 
> org.apache.flink.table.planner.plan.rules.logical.LogicalWindowAggregateRuleBase.translateWindow(LogicalWindowAggregateRuleBase.scala:249)
>   at 
> org.apache.flink.table.planner.plan.rules.logical.LogicalWindowAggregateRuleBase.onMatch(LogicalWindowAggregateRuleBase.scala:72)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:319)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:560)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:419)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:256)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:215)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:202)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
>   at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:221)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:148)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:661)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:620)
>   at 
> org.apache.flink.table.planner.catalog.CatalogTableITCase.execJob(CatalogTableITCase.scala:89)
>   at 
> 

[jira] [Assigned] (FLINK-15066) Cannot run multiple `insert into csvTable values ()`

2019-12-23 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young reassigned FLINK-15066:
--

Assignee: Jingsong Lee  (was: Danny Chen)

> Cannot run multiple `insert into csvTable values ()`
> 
>
> Key: FLINK-15066
> URL: https://issues.apache.org/jira/browse/FLINK-15066
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Kurt Young
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.10.0
>
>
> I created a csv table in the SQL client and tried to insert some data into 
> this table.
> The first INSERT INTO succeeded, but the second one failed with an exception:
> {code:java}
> Caused by: java.io.IOException: File or directory /.../xxx.csv already 
> exists. Existing files and directories are not overwritten in NO_OVERWRITE 
> mode. Use OVERWRITE mode to overwrite existing files and directories.
> at 
> org.apache.flink.core.fs.FileSystem.initOutPathLocalFS(FileSystem.java:817)
> {code}
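
For reference, a hedged sketch of the overwrite semantics the message refers to, written against the DataSet API with illustrative values (the SQL client path that triggers the bug is different):

{code:java}
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.core.fs.FileSystem.WriteMode;

public class OverwriteCsvExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Tuple2<Integer, String>> data = env.fromElements(Tuple2.of(1, "a"));
        // WriteMode.OVERWRITE replaces an existing file; the default
        // NO_OVERWRITE fails with the IOException quoted above.
        data.writeAsCsv("/tmp/xxx.csv", WriteMode.OVERWRITE);
        env.execute("overwrite csv");
    }
}
{code}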



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10674: [FLINK-15220][Connector/Kafka][Table] Add startFromTimestamp in KafkaTableSource

2019-12-23 Thread GitBox
flinkbot commented on issue #10674: [FLINK-15220][Connector/Kafka][Table] Add 
startFromTimestamp in KafkaTableSource
URL: https://github.com/apache/flink/pull/10674#issuecomment-568683622
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit f59106d121f9dd0fc40b23640ec2ad8a663d6020 (Tue Dec 24 
07:45:25 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10667: [FLINK-15313][table] Fix can't insert decimal data into sink using TypeInformation

2019-12-23 Thread GitBox
JingsongLi commented on a change in pull request #10667: [FLINK-15313][table] 
Fix can't insert decimal data into sink using TypeInformation
URL: https://github.com/apache/flink/pull/10667#discussion_r361082465
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/sinks/TableSinkUtils.scala
 ##
 @@ -18,67 +18,99 @@
 
 package org.apache.flink.table.planner.sinks
 
-import org.apache.flink.table.api.ValidationException
-import org.apache.flink.table.catalog.ObjectIdentifier
+import org.apache.flink.api.java.tuple.{Tuple2 => JTuple2}
+import org.apache.flink.api.java.typeutils.{GenericTypeInfo, TupleTypeInfo}
+import org.apache.flink.api.scala.typeutils.CaseClassTypeInfo
+import org.apache.flink.table.api.{TableException, TableSchema, Types, 
ValidationException}
+import org.apache.flink.table.catalog.{CatalogTable, ObjectIdentifier}
+import org.apache.flink.table.dataformat.BaseRow
 import org.apache.flink.table.operations.CatalogSinkModifyOperation
-import 
org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter.fromDataTypeToLogicalType
-import org.apache.flink.table.runtime.types.PlannerTypeUtils
-import org.apache.flink.table.sinks.{PartitionableTableSink, TableSink}
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory
+import org.apache.flink.table.planner.plan.utils.RelOptUtils
+import org.apache.flink.table.sinks._
+import org.apache.flink.table.types.DataType
+import 
org.apache.flink.table.types.inference.TypeTransformations.{legacyDecimalToDefaultDecimal,
 toNullable}
+import org.apache.flink.table.types.logical.utils.{LogicalTypeCasts, 
LogicalTypeChecks}
+import org.apache.flink.table.types.logical.{LegacyTypeInformationType, 
RowType}
+import org.apache.flink.table.types.utils.DataTypeUtils
+import 
org.apache.flink.table.types.utils.TypeConversions.{fromLegacyInfoToDataType, 
fromLogicalToDataType}
+import org.apache.flink.table.utils.{TableSchemaUtils, TypeMappingUtils}
+import org.apache.flink.types.Row
+import org.apache.calcite.rel.RelNode
 
 import scala.collection.JavaConversions._
 
 object TableSinkUtils {
 
   /**
-* Checks if the given [[CatalogSinkModifyOperation]]'s query can be 
written to
-* the given [[TableSink]]. It checks if the names & the field types match. 
If the table
-* sink is a [[PartitionableTableSink]], also check that the partitions are 
valid.
+* Checks if the given query can be written into the given sink. It checks 
the field types
+* should be compatible (types should equal including precisions). If types 
are not compatible,
+* but can be implicitly casted, a cast projection will be applied. 
Otherwise, an exception will
+* be thrown.
+*
+* @param query the query to be checked
+* @param sinkSchema the schema of sink to be checked
+* @param typeFactory type factory
+* @return the query RelNode which may be applied the implicitly cast 
projection.
+*/
+  def validateSchemaAndApplyImplicitCast(
+  query: RelNode,
+  sinkSchema: TableSchema,
+  typeFactory: FlinkTypeFactory,
+  sinkIdentifier: Option[String] = None): RelNode = {
+
+val queryLogicalType = DataTypeUtils
+  // convert type to nullable, because we ignore nullability when writing 
query into sink
 
 Review comment:
   Why does this need to be nullable?
   Does it not work without this?
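
A hedged illustration of the rationale behind that conversion (using Flink's public logical type classes, not the PR's code): equality of LogicalType includes nullability, so without normalizing both sides to nullable, a NOT NULL query type would never equal the nullable sink type even though writing it is safe.

```java
import org.apache.flink.table.types.logical.IntType;

public class NullabilityCompare {
    public static void main(String[] args) {
        IntType intNotNull = new IntType(false); // INT NOT NULL
        IntType intNullable = new IntType(true); // INT
        System.out.println(intNotNull.equals(intNullable));            // false: nullability differs
        System.out.println(intNotNull.copy(true).equals(intNullable)); // true once normalized
    }
}
```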


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10667: [FLINK-15313][table] Fix can't insert decimal data into sink using TypeInformation

2019-12-23 Thread GitBox
JingsongLi commented on a change in pull request #10667: [FLINK-15313][table] 
Fix can't insert decimal data into sink using TypeInformation
URL: https://github.com/apache/flink/pull/10667#discussion_r361055773
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/transforms/LegacyDecimalTypeTransformation.java
 ##
 @@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.transforms;
+
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.TypeTransformation;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+
+/**
+ * This type transformation transforms the legacy decimal type (usually 
converted from
+ * {@link org.apache.flink.api.common.typeinfo.Types#BIG_DEC}) to DECIMAL(38, 
18).
+ */
+public class LegacyDecimalTypeTransformation implements TypeTransformation {
+
+   public static final TypeTransformation INSTANCE = new 
LegacyDecimalTypeTransformation();
+
+   @Override
+   public DataType transform(DataType typeToTransform) {
+   LogicalType logicalType = typeToTransform.getLogicalType();
+   if (logicalType instanceof LegacyTypeInformationType && 
logicalType.getTypeRoot() == LogicalTypeRoot.DECIMAL) {
+   DataType decimalType = DataTypes
+   .DECIMAL(DecimalType.MAX_PRECISION, 18)
+   
.bridgedTo(typeToTransform.getConversionClass());
+   if (!logicalType.isNullable()) {
 
 Review comment:
   return logicalType.isNullable() ? decimalType : decimalType.notNull();
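
   Applied to the snippet above, the suggestion would read roughly as follows (a sketch; the merged code may differ):

```java
// Sketch of the suggested form in context (names as in the diff above):
DataType decimalType = DataTypes
    .DECIMAL(DecimalType.MAX_PRECISION, 18)
    .bridgedTo(typeToTransform.getConversionClass());
return logicalType.isNullable() ? decimalType : decimalType.notNull();
```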


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-15377) Mesos WordCount test fails on travis

2019-12-23 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002699#comment-17002699
 ] 

Yangze Guo edited comment on FLINK-15377 at 12/24/19 7:43 AM:
--

BTW, since the correctness of this test case is already verified by the 
*check_result_hash* function, I think we could also skip the exception check.

At least, it could be a quick fix without breaking the test. WDYT?


was (Author: karmagyz):
BTW, since the correctness of this test case is already verified by the 
*check_result_hash* function, I think we could also skip the exception check.

> Mesos WordCount test fails on travis
> 
>
> Key: FLINK-15377
> URL: https://issues.apache.org/jira/browse/FLINK-15377
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Mesos
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The "Run Mesos WordCount test" fails nightly run on travis with below error:
> {code}
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.INFO':
>  Permission denied
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-fetcher.INFO':
>  Permission denied
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.4a4fda410c57.invalid-user.log.INFO.20191224-031307.1':
>  Permission denied
> ...
> [FAIL] 'Run Mesos WordCount test' failed after 5 minutes and 26 seconds! Test 
> exited with exit code 0 but the logs contained errors, exceptions or 
> non-empty .out files
> {code}
> https://api.travis-ci.org/v3/job/628795106/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15220) Add startFromTimestamp in KafkaTableSource

2019-12-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15220:
---
Labels: pull-request-available  (was: )

> Add startFromTimestamp in KafkaTableSource
> --
>
> Key: FLINK-15220
> URL: https://issues.apache.org/jira/browse/FLINK-15220
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Paul Lin
>Assignee: Paul Lin
>Priority: Major
>  Labels: pull-request-available
>
> KafkaTableSource supports all startup modes in DataStream API except 
> `startFromTimestamp`, but `startFromTimestamp` is a common and valid use case 
> in Table/SQL API as well.
>  
> The proposed changes are as follows:
> h3. Table Descriptor
> A new method should be added to Kafka table descriptor:
> ```
> new Kafka().startFromTimestamp(long millisFromEpoch)
> ```
> And the parameter would be milliseconds from epoch to stay aligned with 
> FlinkKafkaConsumerBase#setStartFromTimestamp(long startupOffsetsTimestamp).
> Since Kafka 0.8/0.9, which don't support timestamps, will likely be 
> deprecated, we can assume by default that users are on a Kafka version that 
> supports timestamps, and throw an exception during the property validation 
> phase if users try to use the timestamp startup mode with a deprecated Kafka 
> version.
> h3. YAML & DDL
> YAML and DDL use string-based properties to describe tables, and the proposed 
> keys are as follows:
> ```
> 'connector.startup-mode' = 'timestamp',
> 'connector.startup-timestamp-millis' = '157614541',
> 'connector.startup-timestamp' = '2019-12-12 10:11:23.123'
> ```
> The timestamp would need to be in the form of milliseconds from epoch or 
> "yyyy-MM-dd HH:mm:ss[.SSS]". If both are provided, a validation exception 
> would be thrown.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] link3280 opened a new pull request #10674: [FLINK-15220][Connector/Kafka][Table] Add startFromTimestamp in KafkaTableSource

2019-12-23 Thread GitBox
link3280 opened a new pull request #10674: 
[FLINK-15220][Connector/Kafka][Table] Add startFromTimestamp in KafkaTableSource
URL: https://github.com/apache/flink/pull/10674
 
 
   ## What is the purpose of the change
   
   KafkaTableSource supports all startup modes in DataStream API except 
`startFromTimestamp`, but `startFromTimestamp` is a common and valid use case 
in Table/SQL API as well. 
   
   ## Brief change log
   
   - Add `startFromTimestamp(long)` to Kafka descriptor API.
   - Add new keys `connector.startup-timestamp-millis` and 
`connector.startup-timestamp` to Kafka connector properties.
   - Add new value `timestamp` to `connector.startup-mode` in Kafka connector 
properties.
   - Add startup timestamp parameter to KafkaTableSource and 
KafkaTableSourceSinkFactory.
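
   A hedged usage sketch of the proposed descriptor method (`startFromTimestamp` is the addition from this PR; the surrounding calls are the existing Kafka descriptor API):

```java
import org.apache.flink.table.descriptors.Kafka;

public class TimestampStartupExample {
    public static Kafka kafkaDescriptor() {
        return new Kafka()
            .version("universal")
            .topic("orders")
            // Proposed in this PR: start reading from the given epoch milliseconds.
            .startFromTimestamp(1576145410000L);
    }
}
```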
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   - Added tests to verify Kafka descriptor API generated properties.
   - Added tests to verify the validation of new properties of timestamp 
startup mode in KafkaValidator. 
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? not documented
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15081) Translate "Concepts & Common API" page of Table API into Chinese

2019-12-23 Thread ShijieZhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002708#comment-17002708
 ] 

ShijieZhang commented on FLINK-15081:
-

Thanks a lot. I'll pay attention next time.

> Translate "Concepts & Common API" page of Table API into Chinese
> 
>
> Key: FLINK-15081
> URL: https://issues.apache.org/jira/browse/FLINK-15081
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation
>Reporter: Steve OU
>Assignee: Steve OU
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html]
> The markdown file is located in flink/docs/dev/table/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15378) StreamFileSystemSink should support multiple HDFS plugins

2019-12-23 Thread ouyangwulin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ouyangwulin updated FLINK-15378:

Description: 
Request 1: FileSystem plugins should not affect the default YARN dependencies.

Request 2: StreamFileSystemSink should support multiple HDFS plugins.

Problem description:

    When I put a filesystem plugin into FLINK_HOME/plugins, the class 
'*com.filesystem.plugin.FileSystemFactoryEnhance*' implements 
'*FileSystemFactory*'. When the JM starts, it calls 
FileSystem.initialize(configuration, 
PluginUtils.createPluginManagerFromRootFolder(configuration)) to load factories 
into the map FileSystem#FS_FACTORIES, whose key is only the scheme. When the 
TM/JM use local Hadoop conf A but the user code in the filesystem plugin uses 
Hadoop conf B, and conf A and conf B point to different Hadoop clusters, the JM 
fails to start, because the blob server in the JM loads conf B to get the 
filesystem. The full log is attached.

Proposed fix:

    Use scheme and authority as the key for FileSystem#FS_FACTORIES.
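
A hedged sketch of that keying scheme (illustrative names, not Flink's actual internals):

{code:java}
import java.net.URI;
import java.util.Objects;

// Composite map key: plugins registered for hdfs://clusterA and
// hdfs://clusterB no longer collide on the bare "hdfs" scheme.
final class FsFactoryKey {
    final String scheme;
    final String authority;

    FsFactoryKey(URI uri) {
        this.scheme = uri.getScheme();
        this.authority = uri.getAuthority(); // e.g. namenode host:port
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof FsFactoryKey)) {
            return false;
        }
        FsFactoryKey other = (FsFactoryKey) o;
        return Objects.equals(scheme, other.scheme)
            && Objects.equals(authority, other.authority);
    }

    @Override
    public int hashCode() {
        return Objects.hash(scheme, authority);
    }
}
{code}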

 

 

  was:
Request 1: FileSystem plugins should not affect the default YARN dependencies.

Request 2: StreamFileSystemSink should support multiple HDFS plugins.

Problem description:

    When I put a filesystem plugin into FLINK_HOME/plugins, the class 
'*com.filesystem.plugin.FileSystemFactoryEnhance*' implements 
'*FileSystemFactory*'. When the JM starts, it calls 
FileSystem.initialize(configuration, 
PluginUtils.createPluginManagerFromRootFolder(configuration)) to load factories 
into the map FileSystem#FS_FACTORIES, whose key is only the scheme. When the 
TM/JM use local Hadoop conf A but the user code in the filesystem plugin uses 
Hadoop conf B, and conf A and conf B point to different Hadoop clusters, the JM 
fails to start, because the blob server in the JM loads conf B to get the 
filesystem. The full log is attached.

Proposed fix:

    Use scheme and authority as the key for FileSystem#FS_FACTORIES.

 

 


> StreamFileSystemSink should support multiple HDFS plugins
> --
>
> Key: FLINK-15378
> URL: https://issues.apache.org/jira/browse/FLINK-15378
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Affects Versions: 1.9.2, 1.11.0
>Reporter: ouyangwulin
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: jobmananger.log
>
>
> Request 1: FileSystem plugins should not affect the default YARN dependencies.
> Request 2: StreamFileSystemSink should support multiple HDFS plugins.
> Problem description:
>     When I put a filesystem plugin into FLINK_HOME/plugins, the class 
> '*com.filesystem.plugin.FileSystemFactoryEnhance*' implements 
> '*FileSystemFactory*'. When the JM starts, it calls 
> FileSystem.initialize(configuration, 
> PluginUtils.createPluginManagerFromRootFolder(configuration)) to load 
> factories into the map FileSystem#FS_FACTORIES, whose key is only the 
> scheme. When the TM/JM use local Hadoop conf A but the user code in the 
> filesystem plugin uses Hadoop conf B, and conf A and conf B point to 
> different Hadoop clusters, the JM fails to start, because the blob server 
> in the JM loads conf B to get the filesystem. The full log is attached.
>  
> Proposed fix:
>     Use scheme and authority as the key for FileSystem#FS_FACTORIES.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15081) Translate "Concepts & Common API" page of Table API into Chinese

2019-12-23 Thread Steve OU (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002705#comment-17002705
 ] 

Steve OU commented on FLINK-15081:
--

Hi [~CrazyTomatoOo], never mind.

[~jark] as ShijieZhang has completed the translation task, would you please 
help to review his PR and transfer the assignee to him? I will help to 
translate other docs.

> Translate "Concepts & Common API" page of Table API into Chinese
> 
>
> Key: FLINK-15081
> URL: https://issues.apache.org/jira/browse/FLINK-15081
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation
>Reporter: Steve OU
>Assignee: Steve OU
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html]
> The markdown file is located in flink/docs/dev/table/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15378) StreamFileSystemSink should support multiple HDFS plugins

2019-12-23 Thread ouyangwulin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002704#comment-17002704
 ] 

ouyangwulin commented on FLINK-15378:
-

Please assign this issue to me!

> StreamFileSystemSink should support multiple HDFS plugins
> --
>
> Key: FLINK-15378
> URL: https://issues.apache.org/jira/browse/FLINK-15378
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Affects Versions: 1.9.2, 1.11.0
>Reporter: ouyangwulin
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: jobmananger.log
>
>
> Request 1: FileSystem plugins should not affect the default YARN dependencies.
> Request 2: StreamFileSystemSink should support multiple HDFS plugins.
> Problem description:
>     When I put a filesystem plugin into FLINK_HOME/plugins, the class 
> '*com.filesystem.plugin.FileSystemFactoryEnhance*' implements 
> '*FileSystemFactory*'. When the JM starts, it calls 
> FileSystem.initialize(configuration, 
> PluginUtils.createPluginManagerFromRootFolder(configuration)) to load 
> factories into the map FileSystem#FS_FACTORIES, whose key is only the 
> scheme. When the TM/JM use local Hadoop conf A but the user code in the 
> filesystem plugin uses Hadoop conf B, and conf A and conf B point to 
> different Hadoop clusters, the JM fails to start, because the blob server 
> in the JM loads conf B to get the filesystem. The full log is attached.
>  
> Proposed fix:
>     Use scheme and authority as the key for FileSystem#FS_FACTORIES.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-15355) Nightly streaming file sink fails with unshaded hadoop

2019-12-23 Thread PengFei Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002703#comment-17002703
 ] 

PengFei Li edited comment on FLINK-15355 at 12/24/19 7:35 AM:
--

This test began to fail after FLINK-11956 with commit 
8ec545d56f007645ca8f2a2374386882132ffc7a. The name of this test is "e2e - misc 
- hadoop 2.8", whose profile contains "-Dinclude-hadoop -Dhadoop.version=2.8.3", 
and flink-shaded-hadoop-2-uber-2.8.3-9.0.jar is put into the lib directory. 
After enabling the JVM option "-verbose:class" in jobmanager.sh, we can see 
that "org.apache.hadoop.conf.Configuration" is loaded from 
flink-shaded-hadoop-2-uber-2.8.3-9.0.jar rather than from hadoop-s3. 

"Configuration#getTimeDuration(String name, String defaultValue, TimeUnit 
unit)" only exists in hadoop-3.1.0, which hadoop-s3 depends on, so a 
NoSuchMethodError is thrown. The purpose of the filesystem plugin is to solve 
class conflicts, but it seems something doesn't work as expected. I'm not 
familiar with the plugin implementation, so I need some time to find the root 
cause. Any feedback is welcome. [~arvid heise]
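
A quick way to double-check both observations (an illustrative probe, not part of any fix; run it with the same classpath as the JobManager):

{code:java}
import java.util.concurrent.TimeUnit;

public class HadoopConfProbe {
    public static void main(String[] args) throws Exception {
        Class<?> conf = Class.forName("org.apache.hadoop.conf.Configuration");
        // Which jar was the class loaded from (shaded uber jar vs. s3 plugin)?
        System.out.println(conf.getProtectionDomain().getCodeSource().getLocation());
        // This overload exists in Hadoop 3.x but not in 2.8: a
        // NoSuchMethodException here mirrors the NoSuchMethodError in the job.
        conf.getMethod("getTimeDuration", String.class, String.class, TimeUnit.class);
        System.out.println("getTimeDuration(String, String, TimeUnit) is present");
    }
}
{code}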

  


was (Author: banmoy):
This test began to fail after FLINK-11956 with commit 
8ec545d56f007645ca8f2a2374386882132ffc7a. The name of this test is "e2e - misc 
- hadoop 2.8", whose profile contains "-Dinclude-hadoop -Dhadoop.version=2.8.3", 
and flink-shaded-hadoop-2-uber-2.8.3-9.0.jar is put into the lib directory. 
After enabling the JVM option "-verbose:class" in jobmanager.sh, we can see 
that "org.apache.hadoop.conf.Configuration" is loaded from 
flink-shaded-hadoop-2-uber-2.8.3-9.0.jar rather than from hadoop-s3. 

"Configuration#getTimeDuration(String name, String defaultValue, TimeUnit 
unit)" only exists in hadoop-3.1.0, which hadoop-s3 depends on, so a 
NoSuchMethodError is thrown. The purpose of the filesystem plugin is to solve 
class conflicts, but it seems something doesn't work as expected. I was not 
familiar with the plugin implementation before, so I need some time to find the 
root cause. Any feedback is welcome. [~arvid heise]

  

> Nightly streaming file sink fails with unshaded hadoop
> --
>
> Key: FLINK-15355
> URL: https://issues.apache.org/jira/browse/FLINK-15355
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Arvid Heise
>Assignee: PengFei Li
>Priority: Blocker
> Fix For: 1.10.0
>
>
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>  at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1751)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:94)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:63)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1628)
>  at StreamingFileSinkProgram.main(StreamingFileSinkProgram.java:77)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at 

[GitHub] [flink] flinkbot edited a comment on issue #10673: [FLINK-15374][core][config] Update descriptions for jvm overhead config options

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10673: [FLINK-15374][core][config] Update 
descriptions for jvm overhead config options
URL: https://github.com/apache/flink/pull/10673#issuecomment-568663453
 
 
   
   ## CI report:
   
   * 3a8fcf0e8936f3c4115ff4771b52e060064676af Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142190071) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3875)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10672: [FLINK-15373][core][config] Update descriptions for framework / task off-heap memory config options

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10672: [FLINK-15373][core][config] Update 
descriptions for framework / task off-heap memory config options
URL: https://github.com/apache/flink/pull/10672#issuecomment-568663425
 
 
   
   ## CI report:
   
   * 31a2ccbabb6675673d445b6a9d258e6622d295d8 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142190064) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3874)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-15355) Nightly streaming file sink fails with unshaded hadoop

2019-12-23 Thread PengFei Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002703#comment-17002703
 ] 

PengFei Li edited comment on FLINK-15355 at 12/24/19 7:34 AM:
--

This test began to fail after FLINK-11956 with commit 
8ec545d56f007645ca8f2a2374386882132ffc7a. The name of this test is "e2e - misc 
- hadoop 2.8", whose profile contains "-Dinclude-hadoop -Dhadoop.version=2.8.3", 
and flink-shaded-hadoop-2-uber-2.8.3-9.0.jar is put into the lib directory. 
After enabling the JVM option "-verbose:class" in jobmanager.sh, we can see 
that "org.apache.hadoop.conf.Configuration" is loaded from 
flink-shaded-hadoop-2-uber-2.8.3-9.0.jar rather than from hadoop-s3. 

"Configuration#getTimeDuration(String name, String defaultValue, TimeUnit 
unit)" only exists in hadoop-3.1.0, which hadoop-s3 depends on, so a 
NoSuchMethodError is thrown. The purpose of the filesystem plugin is to solve 
class conflicts, but it seems something doesn't work as expected. I was not 
familiar with the plugin implementation before, so I need some time to find the 
root cause. Any feedback is welcome. [~arvid heise]

  


was (Author: banmoy):
This test began to fail after FLINK-11956 with commit 
8ec545d56f007645ca8f2a2374386882132ffc7a. The name of this test is "e2e - misc 
- hadoop 2.8", whose profile contains "-Dinclude-hadoop -Dhadoop.version=2.8.3", 
and flink-shaded-hadoop-2-uber-2.8.3-9.0.jar is put into the lib directory. 
After enabling the JVM option "-verbose:class" in jobmanager.sh, we can see 
that "org.apache.hadoop.conf.Configuration" is loaded from 
flink-shaded-hadoop-2-uber-2.8.3-9.0.jar rather than from hadoop-s3. 

"Configuration#getTimeDuration(String name, String defaultValue, TimeUnit 
unit)" only exists in hadoop-3.1.0, so a NoSuchMethodError is thrown. The 
purpose of the filesystem plugin is to solve class conflicts, but it seems 
something doesn't work as expected. I was not familiar with the plugin 
implementation before, so I need some time to find the root cause. Any feedback 
is welcome. [~arvid heise]

  

> Nightly streaming file sink fails with unshaded hadoop
> --
>
> Key: FLINK-15355
> URL: https://issues.apache.org/jira/browse/FLINK-15355
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Arvid Heise
>Assignee: PengFei Li
>Priority: Blocker
> Fix For: 1.10.0
>
>
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>  at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1751)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:94)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:63)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1628)
>  at StreamingFileSinkProgram.main(StreamingFileSinkProgram.java:77)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  

[GitHub] [flink] flinkbot edited a comment on issue #10670: [FLINK-15370][state backends] Make sure sharedResources takes effect in RocksDBResourceContainer

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10670: [FLINK-15370][state backends] Make 
sure sharedResources takes effect in RocksDBResourceContainer
URL: https://github.com/apache/flink/pull/10670#issuecomment-568658193
 
 
   
   ## CI report:
   
   * deed8274c86783bb2068acbcd9c6ad8d40b63f83 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142188770) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3871)
 
   * 88d1a9ecbca46c2d452e058b3b9efaed1de8f6ec Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142193211) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3878)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-15355) Nightly streaming file sink fails with unshaded hadoop

2019-12-23 Thread PengFei Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002703#comment-17002703
 ] 

PengFei Li edited comment on FLINK-15355 at 12/24/19 7:33 AM:
--

This test began to fail after FLINK-11956 with commit 
8ec545d56f007645ca8f2a2374386882132ffc7a. The name of this test is "e2e - misc 
- hadoop 2.8", whose profile contains "-Dinclude-hadoop -Dhadoop.version=2.8.3", 
and flink-shaded-hadoop-2-uber-2.8.3-9.0.jar is put into the lib directory. 
After enabling the JVM option "-verbose:class" in jobmanager.sh, we can see 
that "org.apache.hadoop.conf.Configuration" is loaded from 
flink-shaded-hadoop-2-uber-2.8.3-9.0.jar rather than from hadoop-s3. 

"Configuration#getTimeDuration(String name, String defaultValue, TimeUnit 
unit)" only exists in hadoop-3.1.0, so a NoSuchMethodError is thrown. The 
purpose of the filesystem plugin is to solve class conflicts, but it seems 
something doesn't work as expected. I was not familiar with the plugin 
implementation before, so I need some time to find the root cause. Any feedback 
is welcome. [~arvid heise]

  


was (Author: banmoy):
This test began to fail after FLINK-11956 with commit 
8ec545d56f007645ca8f2a2374386882132ffc7a. The name of this test is "e2e - misc 
- hadoop 2.8", whose profile contains "-Dinclude-hadoop -Dhadoop.version=2.8.3", 
and flink-shaded-hadoop-2-uber-2.8.3-9.0.jar is put into the lib directory. 
After enabling the JVM option "-verbose:class" in jobmanager.sh, we can see 
that "org.apache.hadoop.conf.Configuration" is loaded from 
flink-shaded-hadoop-2-uber-2.8.3-9.0.jar rather than from hadoop-s3. 
"Configuration#getTimeDuration(String name, String defaultValue, TimeUnit 
unit)" only exists in hadoop-3.1.0, so a NoSuchMethodError is thrown. The 
purpose of the filesystem plugin is to solve class conflicts, but it seems 
something doesn't work as expected. I was not familiar with the plugin 
implementation before, so I need some time to find the root cause. Any feedback 
is welcome. [~arvid heise]

  

> Nightly streaming file sink fails with unshaded hadoop
> --
>
> Key: FLINK-15355
> URL: https://issues.apache.org/jira/browse/FLINK-15355
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Arvid Heise
>Assignee: PengFei Li
>Priority: Blocker
> Fix For: 1.10.0
>
>
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>  at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1751)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:94)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:63)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1628)
>  at StreamingFileSinkProgram.main(StreamingFileSinkProgram.java:77)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> 

[jira] [Comment Edited] (FLINK-15355) Nightly streaming file sink fails with unshaded hadoop

2019-12-23 Thread PengFei Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002703#comment-17002703
 ] 

PengFei Li edited comment on FLINK-15355 at 12/24/19 7:33 AM:
--

This test began to fail after FLINK-11956 with commit 
8ec545d56f007645ca8f2a2374386882132ffc7a. The name of this test is "e2e - misc 
- hadoop 2.8", whose profile contains "-Dinclude-hadoop -Dhadoop.version=2.8.3", 
and flink-shaded-hadoop-2-uber-2.8.3-9.0.jar is put into the lib directory. 
After enabling the JVM option "-verbose:class" in jobmanager.sh, we can see 
that "org.apache.hadoop.conf.Configuration" is loaded from 
flink-shaded-hadoop-2-uber-2.8.3-9.0.jar rather than from hadoop-s3. 
"Configuration#getTimeDuration(String name, String defaultValue, TimeUnit 
unit)" only exists in hadoop-3.1.0, so a NoSuchMethodError is thrown. The 
purpose of the filesystem plugin is to solve class conflicts, but it seems 
something doesn't work as expected. I was not familiar with the plugin 
implementation before, so I need some time to find the root cause. Any feedback 
is welcome. [~arvid heise]

  


was (Author: banmoy):
This test began to fail after 
[FLINK-11956|https://issues.apache.org/jira/browse/FLINK-11956] with commit 
8ec545d56f007645ca8f2a2374386882132ffc7a. The name of this test is "e2e - misc 
- hadoop 2.8", whose profile contains "-Dinclude-hadoop -Dhadoop.version=2.8.3", 
and flink-shaded-hadoop-2-uber-2.8.3-9.0.jar is put into the lib directory. 
After enabling the JVM option "-verbose:class" in jobmanager.sh, we can see 
that "org.apache.hadoop.conf.Configuration" is loaded from 
flink-shaded-hadoop-2-uber-2.8.3-9.0.jar rather than from hadoop-s3. 

"Configuration#getTimeDuration(String name, String defaultValue, TimeUnit 
unit)" only exists in hadoop-3.1.0, so a NoSuchMethodError is thrown. The 
purpose of the filesystem plugin is to solve class conflicts, but it seems 
something doesn't work as expected. I was not familiar with the plugin 
implementation before, so I need some time to find the root cause. Any feedback 
is welcome. [~arvid heise]

  

> Nightly streaming file sink fails with unshaded hadoop
> --
>
> Key: FLINK-15355
> URL: https://issues.apache.org/jira/browse/FLINK-15355
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Arvid Heise
>Assignee: PengFei Li
>Priority: Blocker
> Fix For: 1.10.0
>
>
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>  at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1751)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:94)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:63)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1628)
>  at StreamingFileSinkProgram.main(StreamingFileSinkProgram.java:77)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at 

[GitHub] [flink] flinkbot edited a comment on issue #10655: [FLINK-15356][metric] Add applicationId to flink metrics running on yarn

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10655: [FLINK-15356][metric] Add 
applicationId to flink metrics running on yarn
URL: https://github.com/apache/flink/pull/10655#issuecomment-568172387
 
 
   
   ## CI report:
   
   * 21a9da4826fee2cbf8e79168e854a605ea0a5ef3 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142005364) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3842)
 
   * 4c18b1fcec87cefa647a011b9c78d7791d89b372 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142079726) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3851)
 
   * ae4d5b34f19b18fdc5696444f60bc0e342c0153b UNKNOWN
   * dc0a50ffd0c588e82a96cd7a60f2bce7a5b9fe36 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142193196) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3877)
 
   * 03ee4fa1f29c549803559df77ab6d9321192beed UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15355) Nightly streaming file sink fails with unshaded hadoop

2019-12-23 Thread PengFei Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002703#comment-17002703
 ] 

PengFei Li commented on FLINK-15355:


This test began to fail after 
[FLINK-11956|https://issues.apache.org/jira/browse/FLINK-11956] with commit 
8ec545d56f007645ca8f2a2374386882132ffc7a. The failing test is "e2e - misc 
- hadoop 2.8", whose profile contains "-Dinclude-hadoop -Dhadoop.version=2.8.3" 
and puts flink-shaded-hadoop-2-uber-2.8.3-9.0.jar into the lib directory. 
After enabling the JVM option "-verbose:class" in jobmanager.sh, we can see 
that "org.apache.hadoop.conf.Configuration" is loaded from 
flink-shaded-hadoop-2-uber-2.8.3-9.0.jar rather than from the hadoop-s3 plugin. 

"Configuration#getTimeDuration(String name, String defaultValue, TimeUnit 
unit)" only exists in hadoop-3.1.0, so a NoSuchMethodError is thrown. The 
purpose of the filesystem plugin mechanism is to isolate class conflicts, but 
something does not work as expected here. I'm not familiar with the plugin 
implementation yet, so I need some time to find the root cause. Any feedback is 
welcome. [~arvid heise]
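
For anyone reproducing this diagnosis: besides "-verbose:class", the jar a 
class was loaded from can also be checked programmatically. A minimal, 
self-contained sketch (not part of the actual test program):
{code:java}
import java.security.CodeSource;

public class ClassOriginCheck {
    public static void main(String[] args) throws Exception {
        // Pass e.g. "org.apache.hadoop.conf.Configuration" to see which jar it
        // comes from (flink-shaded-hadoop-2-uber vs. the s3 plugin jar).
        Class<?> clazz = Class.forName(args[0]);
        CodeSource source = clazz.getProtectionDomain().getCodeSource();
        System.out.println(source == null ? "bootstrap classpath" : source.getLocation());
    }
}
{code}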

  

> Nightly streaming file sink fails with unshaded hadoop
> --
>
> Key: FLINK-15355
> URL: https://issues.apache.org/jira/browse/FLINK-15355
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Arvid Heise
>Assignee: PengFei Li
>Priority: Blocker
> Fix For: 1.10.0
>
>
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>  at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1751)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:94)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:63)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1628)
>  at StreamingFileSinkProgram.main(StreamingFileSinkProgram.java:77)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
>  ... 11 more
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
>  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>  at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1746)
>  ... 20 more
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>  at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$7(RestClusterClient.java:326)
>  at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>  at 
> 

[jira] [Commented] (FLINK-15081) Translate "Concepts & Common API" page of Table API into Chinese

2019-12-23 Thread ShijieZhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002702#comment-17002702
 ] 

ShijieZhang commented on FLINK-15081:
-

Sorry, I should have informed you first. It's my fault. What should I do next?

> Translate "Concepts & Common API" page of Table API into Chinese
> 
>
> Key: FLINK-15081
> URL: https://issues.apache.org/jira/browse/FLINK-15081
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation
>Reporter: Steve OU
>Assignee: Steve OU
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html]
> The markdown file is located in flink/docs/dev/table/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-13662) FlinkKinesisProducerTest.testBackpressure failed on Travis

2019-12-23 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-13662.
---
Resolution: Fixed

Fixed via

master:
a62641a0918aaedbac6312293cf8826e4d11f300
20041cafbfe500ed386e11da5d09f116e7a45b81

1.10.0:
a57da33111ba6f9155fef0dde8635ae54e641507
b9b422e2f282e374aba6f207ffbe280bd5f91c9e

> FlinkKinesisProducerTest.testBackpressure failed on Travis
> --
>
> Key: FLINK-13662
> URL: https://issues.apache.org/jira/browse/FLINK-13662
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis, Tests
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The {{FlinkKinesisProducerTest.testBackpressure}} failed on Travis with
> {code}
> 14:45:50.489 [ERROR] Failures: 
> 14:45:50.489 [ERROR]   FlinkKinesisProducerTest.testBackpressure:298 Flush 
> triggered before reaching queue limit
> {code}
> https://api.travis-ci.org/v3/job/569262823/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15378) StreamFileSystemSink supported mutil hdfs plugins.

2019-12-23 Thread ouyangwulin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ouyangwulin updated FLINK-15378:

Description: 
Request 1:  FileSystem plugins should not affect the default YARN dependencies.

Request 2:  StreamFileSystemSink should support multiple HDFS plugins.

Problem description:

    When I put a filesystem plugin into FLINK_HOME/plugins, with the class 
'*com.filesystem.plugin.FileSystemFactoryEnhance*' implementing 
'*FileSystemFactory*', the JM on startup calls 
FileSystem.initialize(configuration, 
PluginUtils.createPluginManagerFromRootFolder(configuration)) to load the 
factories into the FileSystem#FS_FACTORIES map, whose key is only the scheme. 
When the TM/JM uses local Hadoop conf A while the user code uses Hadoop conf B 
from the filesystem plugin, and conf A and conf B point to different Hadoop 
clusters, the JM fails to start because the blob server in the JM loads conf B 
to get the filesystem. The full log is attached.

Proposed resolution:

    Use scheme and authority as the key for FileSystem#FS_FACTORIES.
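
A minimal sketch of the proposed composite key (the helper is illustrative, 
not actual Flink code):
{code:java}
import java.net.URI;

// Hypothetical helper: key FS_FACTORIES by scheme *and* authority, so that
// hdfs://clusterA and hdfs://clusterB resolve to different factories.
public final class FsFactoryKey {

    static String keyFor(URI uri) {
        String authority = uri.getAuthority() == null ? "" : uri.getAuthority();
        return uri.getScheme() + "://" + authority;
    }

    public static void main(String[] args) {
        System.out.println(keyFor(URI.create("hdfs://clusterA:8020/path"))); // hdfs://clusterA:8020
        System.out.println(keyFor(URI.create("hdfs://clusterB:8020/path"))); // hdfs://clusterB:8020
    }
}
{code}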

 

 

  was:
Request 1:  FileSystem plugins not effect the default yarn dependecies.

Request 2:  StreamFileSystemSink supported mutil hdfs plugins.    

 

As Problem describe :

    when I put a ' filesystem plugin to FLINK_HOME/pulgins in flink', and the 
clas{color:#172b4d}s '*com.filesystem.plugin.FileSystemFactoryEnhance*' 
implements '*FileSystemFactory*', when jm start, It will call 
FileSystem.initialize(configuration, 
PluginUtils.createPluginManagerFromRootFolder(configuration)) to load factories 
to map  FileSystem#**{color}FS_FACTORIES, and the key is only schema. When 
tm/jm use local hadoop conf A ,   the user code use hadoop conf Bin 'filesystem 
plugin',  Conf A and Conf B is used to different hadoop cluster. and The Jm 
will start failed, beacuse of the blodserver in JM will load Conf B to get 
filesystem. the full log add appendix.

 

 


> StreamFileSystemSink supported mutil hdfs plugins.
> --
>
> Key: FLINK-15378
> URL: https://issues.apache.org/jira/browse/FLINK-15378
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Affects Versions: 1.9.2, 1.11.0
>Reporter: ouyangwulin
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: jobmananger.log
>
>
> Request 1:  FileSystem plugins not effect the default yarn dependecies.
> Request 2:  StreamFileSystemSink supported mutil hdfs plugins.    
> As Problem describe :
>     when I put a ' filesystem plugin to FLINK_HOME/pulgins in flink', and the 
> clas{color:#172b4d}s '*com.filesystem.plugin.FileSystemFactoryEnhance*' 
> implements '*FileSystemFactory*', when jm start, It will call 
> FileSystem.initialize(configuration, 
> PluginUtils.createPluginManagerFromRootFolder(configuration)) to load 
> factories to map  FileSystem#**{color}FS_FACTORIES, and the key is only 
> schema. When tm/jm use local hadoop conf A ,   the user code use hadoop conf 
> Bin 'filesystem plugin',  Conf A and Conf B is used to different hadoop 
> cluster. and The Jm will start failed, beacuse of the blodserver in JM will 
> load Conf B to get filesystem. the full log add appendix.
>  
> AS reslove method:
>     use  schema and authority as key for ' FileSystem#**FS_FACTORIES key'
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache commented on a change in pull request #10620: [FLINK-15239][table-planner-blink] TM Metaspace memory leak

2019-12-23 Thread GitBox
lirui-apache commented on a change in pull request #10620: 
[FLINK-15239][table-planner-blink] TM Metaspace memory leak
URL: https://github.com/apache/flink/pull/10620#discussion_r361088455
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/generated/CompileUtils.java
 ##
 @@ -45,7 +48,7 @@
 * number of Meta zone GC (class unloading), resulting in performance 
bottlenecks. So we add
 * a cache to avoid this problem.
 */
-   protected static final Cache, Class> 
COMPILED_CACHE = CacheBuilder
+   protected static final Cache> 
COMPILED_CACHE = CacheBuilder
.newBuilder()
.maximumSize(100)   // estimated cache size
 
 Review comment:
   My hunch is that using soft values in the inner cache gives us fine-grained 
garbage collection. Instead of removing the whole inner cache, we may want to 
only remove the least recently used class instances.
   According to this 
[reading](http://jeremymanson.blogspot.com/2009/07/how-hotspot-decides-to-clear_07.html),
 the timestamp at which a soft value was last accessed plays a part when the 
JVM decides whether to clear the reference.
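   For reference, a minimal Guava sketch of a soft-value cache (illustrative 
only, not the change under review):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    public class SoftValueCacheExample {
        public static void main(String[] args) {
            // Values are held via SoftReference, so the JVM can reclaim individual
            // entries under memory pressure instead of dropping a whole inner cache.
            Cache<String, Class<?>> compiled = CacheBuilder.newBuilder()
                    .maximumSize(100)   // estimated cache size, as in CompileUtils
                    .softValues()
                    .build();
            compiled.put("GeneratedClass$1", Object.class);
            System.out.println(compiled.getIfPresent("GeneratedClass$1"));
        }
    }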


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15377) Mesos WordCount test fails on travis

2019-12-23 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002699#comment-17002699
 ] 

Yangze Guo commented on FLINK-15377:


BTW, since the correctness of this test case is already verified by the 
*check_result_hash* function, I think we could also skip the exception check.

> Mesos WordCount test fails on travis
> 
>
> Key: FLINK-15377
> URL: https://issues.apache.org/jira/browse/FLINK-15377
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Mesos
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The "Run Mesos WordCount test" fails nightly run on travis with below error:
> {code}
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.INFO':
>  Permission denied
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-fetcher.INFO':
>  Permission denied
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.4a4fda410c57.invalid-user.log.INFO.20191224-031307.1':
>  Permission denied
> ...
> [FAIL] 'Run Mesos WordCount test' failed after 5 minutes and 26 seconds! Test 
> exited with exit code 0 but the logs contained errors, exceptions or 
> non-empty .out files
> {code}
> https://api.travis-ci.org/v3/job/628795106/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15377) Mesos WordCount test fails on travis

2019-12-23 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002697#comment-17002697
 ] 

Yangze Guo commented on FLINK-15377:


[~gjy] That's why I added _sudo_ to the clean-up function in the first place. 
However, this is a sporadic error and I could not reproduce it locally, so I 
could not figure out what is wrong with the permissions. I'd prefer to add the 
sudo privilege. WDYT?

cc [~trohrmann]

> Mesos WordCount test fails on travis
> 
>
> Key: FLINK-15377
> URL: https://issues.apache.org/jira/browse/FLINK-15377
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Mesos
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The "Run Mesos WordCount test" fails nightly run on travis with below error:
> {code}
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.INFO':
>  Permission denied
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-fetcher.INFO':
>  Permission denied
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.4a4fda410c57.invalid-user.log.INFO.20191224-031307.1':
>  Permission denied
> ...
> [FAIL] 'Run Mesos WordCount test' failed after 5 minutes and 26 seconds! Test 
> exited with exit code 0 but the logs contained errors, exceptions or 
> non-empty .out files
> {code}
> https://api.travis-ci.org/v3/job/628795106/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15378) StreamFileSystemSink supported mutil hdfs plugins.

2019-12-23 Thread ouyangwulin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ouyangwulin updated FLINK-15378:

Attachment: jobmananger.log

> StreamFileSystemSink supported mutil hdfs plugins.
> --
>
> Key: FLINK-15378
> URL: https://issues.apache.org/jira/browse/FLINK-15378
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Affects Versions: 1.9.2, 1.11.0
>Reporter: ouyangwulin
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: jobmananger.log
>
>
> Request 1:  FileSystem plugins not effect the default yarn dependecies.
> Request 2:  StreamFileSystemSink supported mutil hdfs plugins.    
>  
> As Problem describe :
>     when I put a ' filesystem plugin to FLINK_HOME/pulgins in flink', and the 
> clas{color:#172b4d}s '*com.filesystem.plugin.FileSystemFactoryEnhance*' 
> implements '*FileSystemFactory*', when jm start, It will call 
> FileSystem.initialize(configuration, 
> PluginUtils.createPluginManagerFromRootFolder(configuration)) to load 
> factories to map  FileSystem#**{color}FS_FACTORIES, and the key is only 
> schema. When tm/jm use local hadoop conf A ,   the user code use hadoop conf 
> Bin 'filesystem plugin',  Conf A and Conf B is used to different hadoop 
> cluster. and The Jm will start failed, beacuse of the blodserver in JM will 
> load Conf B to get filesystem. the full log add appendix.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15378) StreamFileSystemSink supported mutil hdfs plugins.

2019-12-23 Thread ouyangwulin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ouyangwulin updated FLINK-15378:

Description: 
Request 1:  FileSystem plugins should not affect the default YARN dependencies.

Request 2:  StreamFileSystemSink should support multiple HDFS plugins.

Problem description:

    When I put a filesystem plugin into FLINK_HOME/plugins, with the class 
'*com.filesystem.plugin.FileSystemFactoryEnhance*' implementing 
'*FileSystemFactory*', the JM on startup calls 
FileSystem.initialize(configuration, 
PluginUtils.createPluginManagerFromRootFolder(configuration)) to load the 
factories into the FileSystem#FS_FACTORIES map, whose key is only the scheme. 
When the TM/JM uses local Hadoop conf A while the user code uses Hadoop conf B 
from the filesystem plugin, and conf A and conf B point to different Hadoop 
clusters, the JM fails to start because the blob server in the JM loads conf B 
to get the filesystem. The full log is attached.

 

 

  was:
Request 1:  FileSystem plugins not effect the default yarn dependecies.

Request 2:  StreamFileSystemSink supported mutil hdfs plugins.    

 

 


> StreamFileSystemSink supported mutil hdfs plugins.
> --
>
> Key: FLINK-15378
> URL: https://issues.apache.org/jira/browse/FLINK-15378
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Affects Versions: 1.9.2, 1.11.0
>Reporter: ouyangwulin
>Priority: Major
> Fix For: 1.11.0
>
>
> Request 1:  FileSystem plugins not effect the default yarn dependecies.
> Request 2:  StreamFileSystemSink supported mutil hdfs plugins.    
>  
> As Problem describe :
>     when I put a ' filesystem plugin to FLINK_HOME/pulgins in flink', and the 
> clas{color:#172b4d}s '*com.filesystem.plugin.FileSystemFactoryEnhance*' 
> implements '*FileSystemFactory*', when jm start, It will call 
> FileSystem.initialize(configuration, 
> PluginUtils.createPluginManagerFromRootFolder(configuration)) to load 
> factories to map  FileSystem#**{color}FS_FACTORIES, and the key is only 
> schema. When tm/jm use local hadoop conf A ,   the user code use hadoop conf 
> Bin 'filesystem plugin',  Conf A and Conf B is used to different hadoop 
> cluster. and The Jm will start failed, beacuse of the blodserver in JM will 
> load Conf B to get filesystem. the full log add appendix.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Myasuka removed a comment on issue #10670: [FLINK-15370][state backends] Make sure sharedResources takes effect in RocksDBResourceContainer

2019-12-23 Thread GitBox
Myasuka removed a comment on issue #10670: [FLINK-15370][state backends] Make 
sure sharedResources takes effect in RocksDBResourceContainer
URL: https://github.com/apache/flink/pull/10670#issuecomment-568677004
 
 
   LGTM
   
   It would be better to state clearly that once `sharedResources` is not 
null, we set the write buffer manager extracted from `sharedResources`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on issue #10670: [FLINK-15370][state backends] Make sure sharedResources takes effect in RocksDBResourceContainer

2019-12-23 Thread GitBox
Myasuka commented on issue #10670: [FLINK-15370][state backends] Make sure 
sharedResources takes effect in RocksDBResourceContainer
URL: https://github.com/apache/flink/pull/10670#issuecomment-568677004
 
 
   LGTM
   
   It would be better to state clearly that once `sharedResources` is not 
null, we set the write buffer manager extracted from `sharedResources`.
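   Something along these lines (a hedged sketch; the names are assumed, not 
the actual PR code):

    import org.rocksdb.DBOptions;
    import org.rocksdb.WriteBufferManager;

    // Hypothetical illustration of the behavior described above: the write buffer
    // manager is wired up only when shared resources were actually acquired.
    class SharedResourceWiring {
        static void configure(DBOptions dbOptions, WriteBufferManager sharedWriteBufferManager) {
            if (sharedWriteBufferManager != null) {
                dbOptions.setWriteBufferManager(sharedWriteBufferManager);
            }
        }
    }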


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15368) Add end-to-end test for controlling RocksDB memory usage

2019-12-23 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002693#comment-17002693
 ] 

Yun Tang commented on FLINK-15368:
--

The basic idea of the end-to-end test for controlling RocksDB memory usage is 
to expose RocksDB native metrics and print them in the logs to track memory 
usage, just like the {{wait_oper_metric_num_in_records}} bash function used in 
many end-to-end tests. Some work is still needed, like introducing the [block 
cache 
properties|https://github.com/facebook/rocksdb/blob/afa2420c2bf0304a4b8796cab219e859146cc031/include/rocksdb/db.h#L790]
 into Flink's RocksDB native metrics.

I have implemented exposing the block cache metrics in my private branch. 
However, I found that with a simplified version of 
{{DataStreamAllroundTestProgram}}, the block cache usage easily exceeds the 
capacity due to the large pinned usage.

I tried to avoid pinning L0 and the top-level index & filter blocks, but the 
problem persisted. I then tried to allocate the LRUCache with the 
{{strictCapacityLimit=true}} property. However, the task manager then easily 
crashes due to a core dump in RocksDB. Still investigating.
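
For reference, the strict-capacity cache mentioned above can be created via 
the RocksJava LRUCache constructor; a minimal usage sketch (not the test code):
{code:java}
import org.rocksdb.LRUCache;
import org.rocksdb.RocksDB;

public class StrictCacheExample {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        // capacity = 64 MB, numShardBits = -1 (auto), strictCapacityLimit = true:
        // insertions that would exceed the capacity fail instead of over-allocating,
        // which is the configuration involved in the crash described above.
        LRUCache cache = new LRUCache(64 * 1024 * 1024, -1, true);
        System.out.println("created strict-capacity LRUCache");
        cache.close();
    }
}
{code}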

> Add end-to-end test for controlling RocksDB memory usage
> 
>
> Key: FLINK-15368
> URL: https://issues.apache.org/jira/browse/FLINK-15368
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Yun Tang
>Priority: Critical
> Fix For: 1.10.0
>
>
> We need to add an end-to-end test to make sure the RocksDB memory usage 
> control works well, especially under the slot sharing case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14843) Streaming bucketing end-to-end test can fail with Output hash mismatch

2019-12-23 Thread PengFei Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002690#comment-17002690
 ] 

PengFei Li edited comment on FLINK-14843 at 12/24/19 7:04 AM:
--

I think it is not a bug, but how the test works. The output of 
test_streaming_bucketing.sh tells us that the number of produced values is 
62530, which is more than the expected 6, so the checksum fails. The 
duplicated data comes from pending files that aren't included in a checkpoint 
and can't be truncated to remove the duplicates when the job is restored. The 
purpose of "sleep 10" is to wait for at least one completed checkpoint before 
triggering another failover, so that the pending files generated while the job 
is closing are covered by the restored checkpoint. 10 seconds is enough because 
the checkpoint interval is set to 4s in BucketingSinkTestProgram. Maybe we need 
to change the script to wait for a completed checkpoint explicitly. What do you 
think? [~gjy] [~kkl0u]


was (Author: banmoy):
I think it is not a bug, but how the test works. The output of 
test_streaming_bucketing.sh tells us that number of produced values is 62530, 
which is more than the expected 6, so checksum fails. The duplicated data 
is from those pending files which isn't included in a checkpoint, and can't be 
truncated to remove duplicated data when job is restored. The meaning of "sleep 
10" is waiting for at least one completed checkpoint before triggering another 
failover, so that pending files generated when job is closing are in the 
restored checkpoint. 10 seconds is enough because checkpoint interval is set to 
4s in BucketingSinkTestProgram. Maybe we need to change the script to wait a 
completed checkpoint explicitly . What do you think? [~gjy] [~kkl0u]

> Streaming bucketing end-to-end test can fail with Output hash mismatch
> --
>
> Key: FLINK-14843
> URL: https://issues.apache.org/jira/browse/FLINK-14843
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.10.0
> Environment: rev: dcc1330375826b779e4902176bb2473704dabb11
>Reporter: Gary Yao
>Assignee: PengFei Li
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
> Attachments: complete_result, 
> flink-gary-standalonesession-0-gyao-desktop.log, 
> flink-gary-taskexecutor-0-gyao-desktop.log, 
> flink-gary-taskexecutor-1-gyao-desktop.log, 
> flink-gary-taskexecutor-2-gyao-desktop.log, 
> flink-gary-taskexecutor-3-gyao-desktop.log, 
> flink-gary-taskexecutor-4-gyao-desktop.log, 
> flink-gary-taskexecutor-5-gyao-desktop.log, 
> flink-gary-taskexecutor-6-gyao-desktop.log
>
>
> *Description*
> Streaming bucketing end-to-end test ({{test_streaming_bucketing.sh}}) can 
> fail with Output hash mismatch.
> {noformat}
> Number of running task managers has reached 4.
> Job (e0b7a86e4d4111f3947baa3d004e083a) is running.
> Waiting until all values have been produced
> Truncating buckets
> Number of produced values 26930/6
> Truncating buckets
> Number of produced values 30890/6
> Truncating buckets
> Number of produced values 37340/6
> Truncating buckets
> Number of produced values 41290/6
> Truncating buckets
> Number of produced values 46710/6
> Truncating buckets
> Number of produced values 52120/6
> Truncating buckets
> Number of produced values 57110/6
> Truncating buckets
> Number of produced values 62530/6
> Cancelling job e0b7a86e4d4111f3947baa3d004e083a.
> Cancelled job e0b7a86e4d4111f3947baa3d004e083a.
> Waiting for job (e0b7a86e4d4111f3947baa3d004e083a) to reach terminal state 
> CANCELED ...
> Job (e0b7a86e4d4111f3947baa3d004e083a) reached terminal state CANCELED
> Job e0b7a86e4d4111f3947baa3d004e083a was cancelled, time to verify
> FAIL Bucketing Sink: Output hash mismatch.  Got 
> 9e00429abfb30eea4f459eb812b470ad, expected 01aba5ff77a0ef5e5cf6a727c248bdc3.
> head hexdump of actual:
> 000   (   2   ,   1   0   ,   0   ,   S   o   m   e   p   a   y
> 010   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0   ,   1
> 020   ,   S   o   m   e   p   a   y   l   o   a   d   .   .   .
> 030   )  \n   (   2   ,   1   0   ,   2   ,   S   o   m   e   p
> 040   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0
> 050   ,   3   ,   S   o   m   e   p   a   y   l   o   a   d   .
> 060   .   .   )  \n   (   2   ,   1   0   ,   4   ,   S   o   m   e
> 070   p   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,
> 080   1   0   ,   5   ,   S   o   m   e   p   a   y   l   o   a
> 090   d   .   .   .   )  \n   (   2   ,   1   0   ,   6   ,   S   o
> 0a0   m   e   p   a   y   l   o   a   d   .   .   .   )  \n   (
> 0b0   2  

[jira] [Comment Edited] (FLINK-14843) Streaming bucketing end-to-end test can fail with Output hash mismatch

2019-12-23 Thread PengFei Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002690#comment-17002690
 ] 

PengFei Li edited comment on FLINK-14843 at 12/24/19 7:03 AM:
--

I think it is not a bug, but how the test works. The output of 
test_streaming_bucketing.sh tells us that the number of produced values is 
62530, which is more than the expected 6, so the checksum fails. The 
duplicated data comes from pending files that aren't included in a checkpoint 
and can't be truncated to remove the duplicates when the job is restored. The 
purpose of "sleep 10" is to wait for at least one completed checkpoint before 
triggering another failover, so that the pending files generated while the job 
is closing are covered by the restored checkpoint. 10 seconds is enough because 
the checkpoint interval is set to 4s in BucketingSinkTestProgram. Maybe we need 
to change the script to wait for a completed checkpoint explicitly. What do you 
think? [~gjy] [~kkl0u]


was (Author: banmoy):
I think it is not a bug, but how the test works. The output of 
test_streaming_bucketing.sh tells us that number of produced values is 62530, 
which is more than the expected 6, so checksum fails. The duplicated data 
is from those pending files which isn't included in a checkpoint, and can't be 
truncated to remove duplicated data when job is restored. The meaning of "sleep 
10" is waiting for at least one completed checkpoint before triggering another 
failover, so that pending files generated when job is closing are in the 
restored checkpoint. 10 seconds is enough because checkpoint interval is set to 
4s in BucketingSinkTestProgram. Maybe we need to add a comment on "sleep 10". 
What do you think? [~gjy] [~kkl0u]

> Streaming bucketing end-to-end test can fail with Output hash mismatch
> --
>
> Key: FLINK-14843
> URL: https://issues.apache.org/jira/browse/FLINK-14843
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.10.0
> Environment: rev: dcc1330375826b779e4902176bb2473704dabb11
>Reporter: Gary Yao
>Assignee: PengFei Li
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
> Attachments: complete_result, 
> flink-gary-standalonesession-0-gyao-desktop.log, 
> flink-gary-taskexecutor-0-gyao-desktop.log, 
> flink-gary-taskexecutor-1-gyao-desktop.log, 
> flink-gary-taskexecutor-2-gyao-desktop.log, 
> flink-gary-taskexecutor-3-gyao-desktop.log, 
> flink-gary-taskexecutor-4-gyao-desktop.log, 
> flink-gary-taskexecutor-5-gyao-desktop.log, 
> flink-gary-taskexecutor-6-gyao-desktop.log
>
>
> *Description*
> Streaming bucketing end-to-end test ({{test_streaming_bucketing.sh}}) can 
> fail with Output hash mismatch.
> {noformat}
> Number of running task managers has reached 4.
> Job (e0b7a86e4d4111f3947baa3d004e083a) is running.
> Waiting until all values have been produced
> Truncating buckets
> Number of produced values 26930/6
> Truncating buckets
> Number of produced values 30890/6
> Truncating buckets
> Number of produced values 37340/6
> Truncating buckets
> Number of produced values 41290/6
> Truncating buckets
> Number of produced values 46710/6
> Truncating buckets
> Number of produced values 52120/6
> Truncating buckets
> Number of produced values 57110/6
> Truncating buckets
> Number of produced values 62530/6
> Cancelling job e0b7a86e4d4111f3947baa3d004e083a.
> Cancelled job e0b7a86e4d4111f3947baa3d004e083a.
> Waiting for job (e0b7a86e4d4111f3947baa3d004e083a) to reach terminal state 
> CANCELED ...
> Job (e0b7a86e4d4111f3947baa3d004e083a) reached terminal state CANCELED
> Job e0b7a86e4d4111f3947baa3d004e083a was cancelled, time to verify
> FAIL Bucketing Sink: Output hash mismatch.  Got 
> 9e00429abfb30eea4f459eb812b470ad, expected 01aba5ff77a0ef5e5cf6a727c248bdc3.
> head hexdump of actual:
> 000   (   2   ,   1   0   ,   0   ,   S   o   m   e   p   a   y
> 010   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0   ,   1
> 020   ,   S   o   m   e   p   a   y   l   o   a   d   .   .   .
> 030   )  \n   (   2   ,   1   0   ,   2   ,   S   o   m   e   p
> 040   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0
> 050   ,   3   ,   S   o   m   e   p   a   y   l   o   a   d   .
> 060   .   .   )  \n   (   2   ,   1   0   ,   4   ,   S   o   m   e
> 070   p   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,
> 080   1   0   ,   5   ,   S   o   m   e   p   a   y   l   o   a
> 090   d   .   .   .   )  \n   (   2   ,   1   0   ,   6   ,   S   o
> 0a0   m   e   p   a   y   l   o   a   d   .   .   .   )  \n   (
> 0b0   2   ,   1   0   ,   7   ,   S   o   m   

[GitHub] [flink] flinkbot edited a comment on issue #10671: [FLINK-15372][core][config] Use shorter config keys for FLIP-49 total memory config options

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10671: [FLINK-15372][core][config] Use 
shorter config keys for FLIP-49 total memory config options
URL: https://github.com/apache/flink/pull/10671#issuecomment-568658217
 
 
   
   ## CI report:
   
   * b4b8fe5b7c9d469f4fcd1bd83317e1dbaae7c534 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142188774) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3872)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10669: [FLINK-15192][docs][table] Split 'SQL' page into multiple sub pages for better readability

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10669: [FLINK-15192][docs][table] Split 
'SQL' page into multiple sub pages for better readability
URL: https://github.com/apache/flink/pull/10669#issuecomment-568653494
 
 
   
   ## CI report:
   
   * aac6bab3b96e3d721234030ce35e24006b6bf7c5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142187553) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3870)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10670: [FLINK-15370][state backends] Make sure sharedResources takes effect in RocksDBResourceContainer

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10670: [FLINK-15370][state backends] Make 
sure sharedResources takes effect in RocksDBResourceContainer
URL: https://github.com/apache/flink/pull/10670#issuecomment-568658193
 
 
   
   ## CI report:
   
   * deed8274c86783bb2068acbcd9c6ad8d40b63f83 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142188770) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3871)
 
   * 88d1a9ecbca46c2d452e058b3b9efaed1de8f6ec UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10655: [FLINK-15356][metric] Add applicationId to flink metrics running on yarn

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10655: [FLINK-15356][metric] Add 
applicationId to flink metrics running on yarn
URL: https://github.com/apache/flink/pull/10655#issuecomment-568172387
 
 
   
   ## CI report:
   
   * 21a9da4826fee2cbf8e79168e854a605ea0a5ef3 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142005364) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3842)
 
   * 4c18b1fcec87cefa647a011b9c78d7791d89b372 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142079726) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3851)
 
   * ae4d5b34f19b18fdc5696444f60bc0e342c0153b UNKNOWN
   * dc0a50ffd0c588e82a96cd7a60f2bce7a5b9fe36 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10643: [FLINK-15342][hive] Verify querying Hive view

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10643: [FLINK-15342][hive] Verify querying 
Hive view
URL: https://github.com/apache/flink/pull/10643#issuecomment-567843043
 
 
   
   ## CI report:
   
   * 1051e0c0c2cf1ffc4925ab2db4489ccf615686b0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141883442) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3809)
 
   * d0f0a3aa44051bd8526b9daddf7a28ee7fba19ac Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142190049) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3873)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-14843) Streaming bucketing end-to-end test can fail with Output hash mismatch

2019-12-23 Thread PengFei Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002690#comment-17002690
 ] 

PengFei Li edited comment on FLINK-14843 at 12/24/19 7:01 AM:
--

I think it is not a bug, but how the test works. The output of 
test_streaming_bucketing.sh tells us that the number of produced values is 
62530, which is more than the expected 6, so the checksum fails. The 
duplicated data comes from pending files that aren't included in a checkpoint 
and can't be truncated to remove the duplicates when the job is restored. The 
purpose of "sleep 10" is to wait for at least one completed checkpoint before 
triggering another failover, so that the pending files generated while the job 
is closing are covered by the restored checkpoint. 10 seconds is enough because 
the checkpoint interval is set to 4s in BucketingSinkTestProgram. Maybe we need 
to add a comment explaining the "sleep 10". What do you think? [~gjy] [~kkl0u]


was (Author: banmoy):
I think it is not a bug, but how the test works. The output of 
test_streaming_bucketing.sh tells us that number of produced values is 62530, 
which is more than the expected 6, so checksum fails. The duplicated data 
is from those pending files which isn't included in a checkpoint, and can't be 
truncated to remove duplicated data when job is restored. The meaning of "sleep 
10" is waiting for at least one completed checkpoint before triggering another 
failover, so that pending files generated when job is closing are in the 
restored checkpoint. 10 seconds is enough because checkpoint interval is set to 
4s in BucketingSinkTestProgram. Maybe we just need to add a comment on "sleep 
10". What do you think? [~gjy] [~kkl0u]

> Streaming bucketing end-to-end test can fail with Output hash mismatch
> --
>
> Key: FLINK-14843
> URL: https://issues.apache.org/jira/browse/FLINK-14843
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.10.0
> Environment: rev: dcc1330375826b779e4902176bb2473704dabb11
>Reporter: Gary Yao
>Assignee: PengFei Li
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
> Attachments: complete_result, 
> flink-gary-standalonesession-0-gyao-desktop.log, 
> flink-gary-taskexecutor-0-gyao-desktop.log, 
> flink-gary-taskexecutor-1-gyao-desktop.log, 
> flink-gary-taskexecutor-2-gyao-desktop.log, 
> flink-gary-taskexecutor-3-gyao-desktop.log, 
> flink-gary-taskexecutor-4-gyao-desktop.log, 
> flink-gary-taskexecutor-5-gyao-desktop.log, 
> flink-gary-taskexecutor-6-gyao-desktop.log
>
>
> *Description*
> Streaming bucketing end-to-end test ({{test_streaming_bucketing.sh}}) can 
> fail with Output hash mismatch.
> {noformat}
> Number of running task managers has reached 4.
> Job (e0b7a86e4d4111f3947baa3d004e083a) is running.
> Waiting until all values have been produced
> Truncating buckets
> Number of produced values 26930/6
> Truncating buckets
> Number of produced values 30890/6
> Truncating buckets
> Number of produced values 37340/6
> Truncating buckets
> Number of produced values 41290/6
> Truncating buckets
> Number of produced values 46710/6
> Truncating buckets
> Number of produced values 52120/6
> Truncating buckets
> Number of produced values 57110/6
> Truncating buckets
> Number of produced values 62530/6
> Cancelling job e0b7a86e4d4111f3947baa3d004e083a.
> Cancelled job e0b7a86e4d4111f3947baa3d004e083a.
> Waiting for job (e0b7a86e4d4111f3947baa3d004e083a) to reach terminal state 
> CANCELED ...
> Job (e0b7a86e4d4111f3947baa3d004e083a) reached terminal state CANCELED
> Job e0b7a86e4d4111f3947baa3d004e083a was cancelled, time to verify
> FAIL Bucketing Sink: Output hash mismatch.  Got 
> 9e00429abfb30eea4f459eb812b470ad, expected 01aba5ff77a0ef5e5cf6a727c248bdc3.
> head hexdump of actual:
> 000   (   2   ,   1   0   ,   0   ,   S   o   m   e   p   a   y
> 010   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0   ,   1
> 020   ,   S   o   m   e   p   a   y   l   o   a   d   .   .   .
> 030   )  \n   (   2   ,   1   0   ,   2   ,   S   o   m   e   p
> 040   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0
> 050   ,   3   ,   S   o   m   e   p   a   y   l   o   a   d   .
> 060   .   .   )  \n   (   2   ,   1   0   ,   4   ,   S   o   m   e
> 070   p   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,
> 080   1   0   ,   5   ,   S   o   m   e   p   a   y   l   o   a
> 090   d   .   .   .   )  \n   (   2   ,   1   0   ,   6   ,   S   o
> 0a0   m   e   p   a   y   l   o   a   d   .   .   .   )  \n   (
> 0b0   2   ,   1   0   ,   7   ,   S   o   m   e   p   a   y   l
> 

[GitHub] [flink] danny0405 commented on a change in pull request #10620: [FLINK-15239][table-planner-blink] TM Metaspace memory leak

2019-12-23 Thread GitBox
danny0405 commented on a change in pull request #10620: 
[FLINK-15239][table-planner-blink] TM Metaspace memory leak
URL: https://github.com/apache/flink/pull/10620#discussion_r361083233
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/generated/CompileUtils.java
 ##
 @@ -45,7 +48,7 @@
 * number of Meta zone GC (class unloading), resulting in performance 
bottlenecks. So we add
 * a cache to avoid this problem.
 */
-   protected static final Cache, Class> 
COMPILED_CACHE = CacheBuilder
+   protected static final Cache> 
COMPILED_CACHE = CacheBuilder
.newBuilder()
.maximumSize(100)   // estimated cache size
 
 Review comment:
   Why not invoke `.softValues()` here instead of a new `Cache` value ? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14843) Streaming bucketing end-to-end test can fail with Output hash mismatch

2019-12-23 Thread PengFei Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002690#comment-17002690
 ] 

PengFei Li commented on FLINK-14843:


I think it is not a bug, but how the test works. The output of 
test_streaming_bucketing.sh tells us that the number of produced values is 
62530, which is more than the expected 6, so the checksum fails. The 
duplicated data comes from pending files that aren't included in a checkpoint 
and can't be truncated to remove the duplicates when the job is restored. The 
purpose of "sleep 10" is to wait for at least one completed checkpoint before 
triggering another failover, so that the pending files generated while the job 
is closing are covered by the restored checkpoint. 10 seconds is enough because 
the checkpoint interval is set to 4s in BucketingSinkTestProgram. Maybe we just 
need to add a comment explaining the "sleep 10". What do you think? [~gjy] 
[~kkl0u]

> Streaming bucketing end-to-end test can fail with Output hash mismatch
> --
>
> Key: FLINK-14843
> URL: https://issues.apache.org/jira/browse/FLINK-14843
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.10.0
> Environment: rev: dcc1330375826b779e4902176bb2473704dabb11
>Reporter: Gary Yao
>Assignee: PengFei Li
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
> Attachments: complete_result, 
> flink-gary-standalonesession-0-gyao-desktop.log, 
> flink-gary-taskexecutor-0-gyao-desktop.log, 
> flink-gary-taskexecutor-1-gyao-desktop.log, 
> flink-gary-taskexecutor-2-gyao-desktop.log, 
> flink-gary-taskexecutor-3-gyao-desktop.log, 
> flink-gary-taskexecutor-4-gyao-desktop.log, 
> flink-gary-taskexecutor-5-gyao-desktop.log, 
> flink-gary-taskexecutor-6-gyao-desktop.log
>
>
> *Description*
> Streaming bucketing end-to-end test ({{test_streaming_bucketing.sh}}) can 
> fail with Output hash mismatch.
> {noformat}
> Number of running task managers has reached 4.
> Job (e0b7a86e4d4111f3947baa3d004e083a) is running.
> Waiting until all values have been produced
> Truncating buckets
> Number of produced values 26930/6
> Truncating buckets
> Number of produced values 30890/6
> Truncating buckets
> Number of produced values 37340/6
> Truncating buckets
> Number of produced values 41290/6
> Truncating buckets
> Number of produced values 46710/6
> Truncating buckets
> Number of produced values 52120/6
> Truncating buckets
> Number of produced values 57110/6
> Truncating buckets
> Number of produced values 62530/6
> Cancelling job e0b7a86e4d4111f3947baa3d004e083a.
> Cancelled job e0b7a86e4d4111f3947baa3d004e083a.
> Waiting for job (e0b7a86e4d4111f3947baa3d004e083a) to reach terminal state 
> CANCELED ...
> Job (e0b7a86e4d4111f3947baa3d004e083a) reached terminal state CANCELED
> Job e0b7a86e4d4111f3947baa3d004e083a was cancelled, time to verify
> FAIL Bucketing Sink: Output hash mismatch.  Got 
> 9e00429abfb30eea4f459eb812b470ad, expected 01aba5ff77a0ef5e5cf6a727c248bdc3.
> head hexdump of actual:
> 000   (   2   ,   1   0   ,   0   ,   S   o   m   e   p   a   y
> 010   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0   ,   1
> 020   ,   S   o   m   e   p   a   y   l   o   a   d   .   .   .
> 030   )  \n   (   2   ,   1   0   ,   2   ,   S   o   m   e   p
> 040   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,   1   0
> 050   ,   3   ,   S   o   m   e   p   a   y   l   o   a   d   .
> 060   .   .   )  \n   (   2   ,   1   0   ,   4   ,   S   o   m   e
> 070   p   a   y   l   o   a   d   .   .   .   )  \n   (   2   ,
> 080   1   0   ,   5   ,   S   o   m   e   p   a   y   l   o   a
> 090   d   .   .   .   )  \n   (   2   ,   1   0   ,   6   ,   S   o
> 0a0   m   e   p   a   y   l   o   a   d   .   .   .   )  \n   (
> 0b0   2   ,   1   0   ,   7   ,   S   o   m   e   p   a   y   l
> 0c0   o   a   d   .   .   .   )  \n   (   2   ,   1   0   ,   8   ,
> 0d0   S   o   m   e   p   a   y   l   o   a   d   .   .   .   )
> 0e0  \n   (   2   ,   1   0   ,   9   ,   S   o   m   e   p   a
> 0f0   y   l   o   a   d   .   .   .   )  \n
> 0fa
> Stopping taskexecutor daemon (pid: 55164) on host gyao-desktop.
> Stopping standalonesession daemon (pid: 51073) on host gyao-desktop.
> Stopping taskexecutor daemon (pid: 51504) on host gyao-desktop.
> Skipping taskexecutor daemon (pid: 52034), because it is not running anymore 
> on gyao-desktop.
> Skipping taskexecutor daemon (pid: 52472), because it is not running anymore 
> on gyao-desktop.
> Skipping taskexecutor daemon (pid: 52916), because it is not running anymore 
> on gyao-desktop.
> 

[GitHub] [flink] zhuzhurk closed pull request #10551: [FLINK-13662][kinesis] Relax timing requirements

2019-12-23 Thread GitBox
zhuzhurk closed pull request #10551: [FLINK-13662][kinesis] Relax timing 
requirements
URL: https://github.com/apache/flink/pull/10551
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15378) StreamFileSystemSink supported mutil hdfs plugins.

2019-12-23 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-15378:
---

 Summary: StreamFileSystemSink supported mutil hdfs plugins.
 Key: FLINK-15378
 URL: https://issues.apache.org/jira/browse/FLINK-15378
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Affects Versions: 1.9.2, 1.11.0
Reporter: ouyangwulin
 Fix For: 1.11.0


Request 1:  FileSystem plugins should not affect the default YARN dependencies.

Request 2:  StreamFileSystemSink should support multiple HDFS plugins.

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14393) add an option to enable/disable cancel job in web ui

2019-12-23 Thread Kaibo Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002689#comment-17002689
 ] 

Kaibo Zhou commented on FLINK-14393:


This problem has been around for a long time. I think this feature will be 
useful until we have general access control for the REST endpoint.

I am glad to do the backend work. Could anyone assign it to me?

> add an option to enable/disable cancel job in web ui
> 
>
> Key: FLINK-14393
> URL: https://issues.apache.org/jira/browse/FLINK-14393
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Affects Versions: 1.10.0
>Reporter: Yadong Xie
>Priority: Major
> Fix For: 1.11.0
>
>
> add the option to enable/disable cancel job in web ui
> when disabled, user can not cancel a job through the web ui



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15377) Mesos WordCount test fails on travis

2019-12-23 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-15377:
--
Description: 
The "Run Mesos WordCount test" fails nightly run on travis with below error:
{code}
rm: cannot remove 
'/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.INFO':
 Permission denied
rm: cannot remove 
'/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-fetcher.INFO':
 Permission denied
rm: cannot remove 
'/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.4a4fda410c57.invalid-user.log.INFO.20191224-031307.1':
 Permission denied
...
[FAIL] 'Run Mesos WordCount test' failed after 5 minutes and 26 seconds! Test 
exited with exit code 0 but the logs contained errors, exceptions or non-empty 
.out files
{code}

https://api.travis-ci.org/v3/job/628795106/log.txt

  was:
The "Run Mesos WordCount test" fails nightly run on travis with below error:
{code}
rm: cannot remove 
'/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.INFO':
 Permission denied
rm: cannot remove 
'/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-fetcher.INFO':
 Permission denied
rm: cannot remove 
'/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.4a4fda410c57.invalid-user.log.INFO.20191224-031307.1':
 Permission denied
...
[FAIL] 'Run Mesos WordCount test' failed after 5 minutes and 26 seconds! Test 
exited with exit code 0 but the logs contained errors, exceptions or non-empty 
.out files
{code}


> Mesos WordCount test fails on travis
> 
>
> Key: FLINK-15377
> URL: https://issues.apache.org/jira/browse/FLINK-15377
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Mesos
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The "Run Mesos WordCount test" fails nightly run on travis with below error:
> {code}
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.INFO':
>  Permission denied
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-fetcher.INFO':
>  Permission denied
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.4a4fda410c57.invalid-user.log.INFO.20191224-031307.1':
>  Permission denied
> ...
> [FAIL] 'Run Mesos WordCount test' failed after 5 minutes and 26 seconds! Test 
> exited with exit code 0 but the logs contained errors, exceptions or 
> non-empty .out files
> {code}
> https://api.travis-ci.org/v3/job/628795106/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15377) Mesos WordCount test fails on travis

2019-12-23 Thread Yu Li (Jira)
Yu Li created FLINK-15377:
-

 Summary: Mesos WordCount test fails on travis
 Key: FLINK-15377
 URL: https://issues.apache.org/jira/browse/FLINK-15377
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Mesos
Affects Versions: 1.10.0
Reporter: Yu Li
 Fix For: 1.10.0


The "Run Mesos WordCount test" fails nightly run on travis with below error:
{code}
rm: cannot remove 
'/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.INFO':
 Permission denied
rm: cannot remove 
'/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-fetcher.INFO':
 Permission denied
rm: cannot remove 
'/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.4a4fda410c57.invalid-user.log.INFO.20191224-031307.1':
 Permission denied
...
[FAIL] 'Run Mesos WordCount test' failed after 5 minutes and 26 seconds! Test 
exited with exit code 0 but the logs contained errors, exceptions or non-empty 
.out files
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15377) Mesos WordCount test fails on travis

2019-12-23 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002688#comment-17002688
 ] 

Yu Li commented on FLINK-15377:
---

[~karmagyz] FYI.

> Mesos WordCount test fails on travis
> 
>
> Key: FLINK-15377
> URL: https://issues.apache.org/jira/browse/FLINK-15377
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Mesos
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The "Run Mesos WordCount test" fails nightly run on travis with below error:
> {code}
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.INFO':
>  Permission denied
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-fetcher.INFO':
>  Permission denied
> rm: cannot remove 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test-data/log/mesos-sl/mesos-slave.4a4fda410c57.invalid-user.log.INFO.20191224-031307.1':
>  Permission denied
> ...
> [FAIL] 'Run Mesos WordCount test' failed after 5 minutes and 26 seconds! Test 
> exited with exit code 0 but the logs contained errors, exceptions or 
> non-empty .out files
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15206) support dynamic catalog table for truly unified SQL job

2019-12-23 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-15206:
-
Description: 
Currently, if users have both an online and an offline job with the same business 
logic in Flink SQL, their codebase is still not unified. They have to keep two 
SQL statements whose only difference is the source (and/or sink) table (with 
different params). E.g.


{code:java}
// online job
insert into x select * from kafka_table (starting time) ...;

// offline backfill job
insert into x select * from hive_table  (starting and ending time) ...;
{code}

We can introduce a "dynamic catalog table". The dynamic catalog table acts as a 
view: it is an abstract table over multiple actual tables behind it that can be 
switched via configuration flags. When executing a job, depending on the 
configuration, the dynamic catalog table points to one of the actual source 
tables.

A use case for this is the example given above - when executed in streaming 
mode, {{my_source_dynamic_table}} should point to a kafka catalog table with a 
new starting position, and in batch mode, {{my_source_dynamic_table}} should 
point to a hive catalog table with starting/ending positions.
 
One thing to note is that the starting position of kafka_table and the 
starting/ending positions of hive_table are different every time. This needs 
more thought on how we can accommodate it.
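
As a purely illustrative sketch (the DDL syntax and the property keys below are 
hypothetical, not an agreed design), a dynamic catalog table might be declared 
once and resolved per execution mode:

{code:sql}
-- Hypothetical syntax: 'CREATE DYNAMIC TABLE' and the property keys are
-- illustrative only. The table resolves to kafka_table in streaming mode
-- and to hive_table in batch mode.
CREATE DYNAMIC TABLE my_source_dynamic_table WITH (
  'mode.streaming.table' = 'kafka_table',
  'mode.batch.table' = 'hive_table'
);

-- A single statement then serves both the online and the backfill job:
INSERT INTO x SELECT * FROM my_source_dynamic_table;
{code}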

  was:
currently if users have both an online and an offline job with same business 
logic in Flink SQL, their codebase is still not unified. They would keep two 
SQL statements whose only difference is the source (or/and sink) table (with 
different params). E.g.


{code:java}
// online job
insert into x select * from kafka_table (starting time) ...;

// offline backfill job
insert into x select * from hive_table  (starting and ending time) ...;
{code}

We would like to introduce a "dynamic catalog table". The dynamic catalog table 
acts as a view, and is just an abstract table of multiple actual tables behind 
it that can be switched under some configuration flags. When execute a job, 
depending on the configuration, the dynamic catalog table can point to an 
actual source table.

A use case for this is the example given above - when executed in streaming 
mode, {{my_source_dynamic_table}} should point to a kafka catalog table with a 
new starting position, and in batch mode, {{my_source_dynamic_table}} should 
point to a hive catalog table with starting/ending positions.
 
One thing to note is that the starting position of kafka_table, and 
starting/ending position of hive_table are different every time. needs more 
thinking of how can we accommodate that


> support dynamic catalog table for truly unified SQL job
> ---
>
> Key: FLINK-15206
> URL: https://issues.apache.org/jira/browse/FLINK-15206
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>
> Currently, if users have both an online and an offline job with the same business 
> logic in Flink SQL, their codebase is still not unified. They have to keep two 
> SQL statements whose only difference is the source (and/or sink) table (with 
> different params). E.g.
> {code:java}
> // online job
> insert into x select * from kafka_table (starting time) ...;
> // offline backfill job
> insert into x select * from hive_table  (starting and ending time) ...;
> {code}
> We can introduce a "dynamic catalog table". The dynamic catalog table acts as 
> a view: it is an abstract table over multiple actual tables behind it that 
> can be switched via configuration flags. When executing a job, depending on 
> the configuration, the dynamic catalog table points to one of the actual 
> source tables.
> A use case for this is the example given above - when executed in streaming 
> mode, {{my_source_dynamic_table}} should point to a kafka catalog table with 
> a new starting position, and in batch mode, {{my_source_dynamic_table}} 
> should point to a hive catalog table with starting/ending positions.
>  
> One thing to note is that the starting position of kafka_table and the 
> starting/ending positions of hive_table are different every time. This needs 
> more thought on how we can accommodate it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-14609) Add doc for Flink SQL computed columns

2019-12-23 Thread Danny Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chen closed FLINK-14609.
--
Release Note: Closed because it is a duplicate of FLINK-15277.
  Resolution: Duplicate

> Add doc for Flink SQL computed columns
> --
>
> Key: FLINK-14609
> URL: https://issues.apache.org/jira/browse/FLINK-14609
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.9.1
>Reporter: Danny Chen
>Assignee: Danny Chen
>Priority: Blocker
> Fix For: 1.10.0
>
>
> 1. Add doc to describe the syntax of computed column.
> 2. Add some demo on the website.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ commented on issue #10668: [hotfix] Align the parameter pattern of retry_times with retry_times_…

2019-12-23 Thread GitBox
KarmaGYZ commented on issue #10668: [hotfix] Align the parameter pattern of 
retry_times with retry_times_…
URL: https://github.com/apache/flink/pull/10668#issuecomment-568669941
 
 
   Travis gives green light to relevant e2e tests 
[here](https://travis-ci.org/KarmaGYZ/flink/builds/628975565)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15359) Remove unused YarnConfigOptions

2019-12-23 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen closed FLINK-15359.
-
Resolution: Fixed

master via dcb02507c4d07bef2df1d0fb790bba84c5b07727

> Remove unused YarnConfigOptions
> ---
>
> Key: FLINK-15359
> URL: https://issues.apache.org/jira/browse/FLINK-15359
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Zili Chen
>Assignee: Yan Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There are several unused {{YarnConfigOptions}}. Remove them to prevent 
> misunderstanding.
> - {{yarn.appmaster.rpc.address}}
> - {{yarn.appmaster.rpc.port}}
> - {{yarn.maximum-failed-containers}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] TisonKun merged pull request #10658: [FLINK-15359] Remove unused YarnConfigOptions, Tests, Docs

2019-12-23 Thread GitBox
TisonKun merged pull request #10658: [FLINK-15359] Remove unused 
YarnConfigOptions, Tests, Docs
URL: https://github.com/apache/flink/pull/10658
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10673: [FLINK-15374][core][config] Update descriptions for jvm overhead config options

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10673: [FLINK-15374][core][config] Update 
descriptions for jvm overhead config options
URL: https://github.com/apache/flink/pull/10673#issuecomment-568663453
 
 
   
   ## CI report:
   
   * 3a8fcf0e8936f3c4115ff4771b52e060064676af Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142190071) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3875)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10671: [FLINK-15372][core][config] Use shorter config keys for FLIP-49 total memory config options

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10671: [FLINK-15372][core][config] Use 
shorter config keys for FLIP-49 total memory config options
URL: https://github.com/apache/flink/pull/10671#issuecomment-568658217
 
 
   
   ## CI report:
   
   * b4b8fe5b7c9d469f4fcd1bd83317e1dbaae7c534 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142188774) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3872)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15376) support "CREATE TABLE AS" in Flink SQL

2019-12-23 Thread Bowen Li (Jira)
Bowen Li created FLINK-15376:


 Summary: support "CREATE TABLE AS" in Flink SQL
 Key: FLINK-15376
 URL: https://issues.apache.org/jira/browse/FLINK-15376
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: Bowen Li
Assignee: Kurt Young
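
For context, a sketch of the statement shape this feature would enable. The CTAS 
form below follows standard SQL; the table names and the WITH clause for 
connector properties are placeholders, not an agreed Flink syntax:

{code:sql}
-- Illustrative sketch only: 'sink_table', 'src_table' and the connector
-- properties are hypothetical placeholders.
CREATE TABLE sink_table
WITH (
  'connector.type' = 'filesystem',
  'connector.path' = '/tmp/sink_table',
  'format.type' = 'csv'
)
AS SELECT id, name FROM src_table;
{code}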






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10672: [FLINK-15373][core][config] Update descriptions for framework / task off-heap memory config options

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10672: [FLINK-15373][core][config] Update 
descriptions for framework / task off-heap memory config options
URL: https://github.com/apache/flink/pull/10672#issuecomment-568663425
 
 
   
   ## CI report:
   
   * 31a2ccbabb6675673d445b6a9d258e6622d295d8 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142190064) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3874)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on issue #10658: [FLINK-15359] Remove unused YarnConfigOptions, Tests, Docs

2019-12-23 Thread GitBox
TisonKun commented on issue #10658: [FLINK-15359] Remove unused 
YarnConfigOptions, Tests, Docs
URL: https://github.com/apache/flink/pull/10658#issuecomment-568669127
 
 
   @xintongsong Yes, it was used in the pre-FLIP-6 framework. With the current 
dynamic container allocation mechanism it is effectively meaningless. We can 
always add the configuration back if we implement similar logic in the future, 
but given it has no effect now, removing it reduces the risk of misleading users.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10655: [FLINK-15356][metric] Add applicationId to flink metrics running on yarn

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10655: [FLINK-15356][metric] Add 
applicationId to flink metrics running on yarn
URL: https://github.com/apache/flink/pull/10655#issuecomment-568172387
 
 
   
   ## CI report:
   
   * 21a9da4826fee2cbf8e79168e854a605ea0a5ef3 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142005364) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3842)
 
   * 4c18b1fcec87cefa647a011b9c78d7791d89b372 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142079726) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3851)
 
   * ae4d5b34f19b18fdc5696444f60bc0e342c0153b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10643: [FLINK-15342][hive] Verify querying Hive view

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10643: [FLINK-15342][hive] Verify querying 
Hive view
URL: https://github.com/apache/flink/pull/10643#issuecomment-567843043
 
 
   
   ## CI report:
   
   * 1051e0c0c2cf1ffc4925ab2db4489ccf615686b0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141883442) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3809)
 
   * d0f0a3aa44051bd8526b9daddf7a28ee7fba19ac Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142190049) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3873)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15175) syntax not supported in SQLClient for TPCDS queries

2019-12-23 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-15175.
--
Fix Version/s: 1.10.0
   Resolution: Fixed

master: fc4927e41989be75866218c2a60aa914e1eedcd3

1.10.0: 3af8e1ef31aa61cf08ed910df32a1a26dd26f892

> syntax  not supported in SQLClient for TPCDS queries
> 
>
> Key: FLINK-15175
> URL: https://issues.apache.org/jira/browse/FLINK-15175
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: liupengcheng
>Assignee: liupengcheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> Flink SQL> WITH customer_total_return AS
> > ( SELECT
> > sr_customer_sk AS ctr_customer_sk,
> > sr_store_sk AS ctr_store_sk,
> > sum(sr_return_amt) AS ctr_total_return
> >   FROM store_returns, date_dim
> >   WHERE sr_returned_date_sk = d_date_sk AND d_year = 2000
> >   GROUP BY sr_customer_sk, sr_store_sk)
> > SELECT c_customer_id
> > FROM customer_total_return ctr1, store, customer
> > WHERE ctr1.ctr_total_return >
> >   (SELECT avg(ctr_total_return) * 1.2
> >   FROM customer_total_return ctr2
> >   WHERE ctr1.ctr_store_sk = ctr2.ctr_store_sk)
> >   AND s_store_sk = ctr1.ctr_store_sk
> >   AND s_state = 'TN'
> >   AND ctr1.ctr_customer_sk = c_customer_sk
> > ORDER BY c_customer_id
> > LIMIT 100;
> [ERROR] Unknown or invalid SQL statement.
> {code}
> It seems that the newest branch already supports all TPC-DS queries, but 
> the SQL client parser does not support this syntax yet. 
> Is anyone already working on this? If not, I can try it.
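
A minimal statement of the same shape as the query above, which may be handy for 
quickly verifying a SQL client fix (illustrative only):

{code:sql}
-- If the client parser accepts this, basic WITH (CTE) support works.
WITH t AS (SELECT 1 AS x)
SELECT x FROM t;
{code}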



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15175) syntax not supported in SQLClient for TPCDS queries

2019-12-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15175:
---
Labels: pull-request-available  (was: )

> syntax  not supported in SQLClient for TPCDS queries
> 
>
> Key: FLINK-15175
> URL: https://issues.apache.org/jira/browse/FLINK-15175
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: liupengcheng
>Assignee: liupengcheng
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> Flink SQL> WITH customer_total_return AS
> > ( SELECT
> > sr_customer_sk AS ctr_customer_sk,
> > sr_store_sk AS ctr_store_sk,
> > sum(sr_return_amt) AS ctr_total_return
> >   FROM store_returns, date_dim
> >   WHERE sr_returned_date_sk = d_date_sk AND d_year = 2000
> >   GROUP BY sr_customer_sk, sr_store_sk)
> > SELECT c_customer_id
> > FROM customer_total_return ctr1, store, customer
> > WHERE ctr1.ctr_total_return >
> >   (SELECT avg(ctr_total_return) * 1.2
> >   FROM customer_total_return ctr2
> >   WHERE ctr1.ctr_store_sk = ctr2.ctr_store_sk)
> >   AND s_store_sk = ctr1.ctr_store_sk
> >   AND s_state = 'TN'
> >   AND ctr1.ctr_customer_sk = c_customer_sk
> > ORDER BY c_customer_id
> > LIMIT 100;
> [ERROR] Unknown or invalid SQL statement.
> {code}
> It seems that the newest branch already supports all TPC-DS queries, but 
> the SQL client parser does not support this syntax yet. 
> Is anyone already working on this? If not, I can try it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KurtYoung closed pull request #10619: [FLINK-15175]Fix CTES not supported in SQL CLI

2019-12-23 Thread GitBox
KurtYoung closed pull request #10619: [FLINK-15175]Fix CTES not supported in 
SQL CLI
URL: https://github.com/apache/flink/pull/10619
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15289) Run sql appear error of "Zero-length character strings have no serializable string representation".

2019-12-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15289:
-
Fix Version/s: (was: 1.10.0)
   1.11.0

> Run sql appear error of "Zero-length character strings have no serializable 
> string representation".
> ---
>
> Key: FLINK-15289
> URL: https://issues.apache.org/jira/browse/FLINK-15289
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> *The sql is:*
>  CREATE TABLE `INT8_TBL` (
>  q1 BIGINT,
>  q2 BIGINT
>  ) WITH (
>  'format.field-delimiter'=',',
>  'connector.type'='filesystem',
>  'format.derive-schema'='true',
>  
> 'connector.path'='/defender_test_data/daily_regression_batch_postgres_1.10/test_bigint/sources/INT8_TBL.csv',
>  'format.type'='csv'
>  );
> SELECT '' AS five, q1 AS plus, -q1 AS xm FROM INT8_TBL;
> *The error detail is:*
>  2019-12-17 15:35:07,026 ERROR org.apache.flink.table.client.SqlClient - SQL 
> Client must stop. Unexpected exception. This is a bug. Please consider filing 
> an issue.
>  org.apache.flink.table.api.TableException: Zero-length character strings 
> have no serializable string representation.
>  at 
> org.apache.flink.table.types.logical.CharType.asSerializableString(CharType.java:116)
>  at 
> org.apache.flink.table.descriptors.DescriptorProperties.putTableSchema(DescriptorProperties.java:218)
>  at 
> org.apache.flink.table.catalog.CatalogTableImpl.toProperties(CatalogTableImpl.java:75)
>  at 
> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSink(TableFactoryUtil.java:85)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersistInternal(LocalExecutor.java:688)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersist(LocalExecutor.java:488)
>  at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:601)
>  at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:385)
>  at java.util.Optional.ifPresent(Optional.java:159)
>  at 
> org.apache.flink.table.client.cli.CliClient.submitSQLFile(CliClient.java:271)
>  at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:125)
>  at org.apache.flink.table.client.SqlClient.start(SqlClient.java:104)
>  at org.apache.flink.table.client.SqlClient.main(SqlClient.java:180)
> *The input data is:*
>  123,456
>  123,4567890123456789
>  4567890123456789,123
>  4567890123456789,4567890123456789
>  4567890123456789,-4567890123456789
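
Until the zero-length CHAR issue is fixed, a possible workaround sketch 
(assumption: casting the empty literal yields a VARCHAR/STRING type instead of 
CHAR(0), avoiding the failing serialization path):

{code:sql}
-- Workaround sketch, not verified against this exact build:
SELECT CAST('' AS STRING) AS five, q1 AS plus, -q1 AS xm FROM INT8_TBL;
{code}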



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15356) Add applicationId to existing flink metrics running on yarn

2019-12-23 Thread Forward Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002676#comment-17002676
 ] 

Forward Xu commented on FLINK-15356:


Hi [~fly_in_gis], thank you very much. I will use the clusterID as the 
applicationId to pass into the metrics.

> Add applicationId to existing flink metrics running on yarn
> ---
>
> Key: FLINK-15356
> URL: https://issues.apache.org/jira/browse/FLINK-15356
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Reporter: Forward Xu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When sending metrics to Prometheus, such systems only have the Flink job ID, 
> which is a UUID and cannot be associated with the application running on 
> Yarn. Therefore, we need to include the applicationId in the metrics when 
> running on Yarn. This helps us accurately find the corresponding Yarn job 
> when a metric is abnormal.
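
As a user-level illustration of what an exposed application id enables (a 
minimal sketch; resolving {{applicationId}} from the cluster id is an 
assumption, as discussed above):

{code:java}
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.MetricGroup;

// Sketch: inside a RichFunction, expose the Yarn application id as a metric
// scope variable so reporters such as Prometheus emit it as a label.
MetricGroup group = getRuntimeContext()
    .getMetricGroup()
    .addGroup("application_id", applicationId);
Counter requests = group.counter("requests");
{code}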



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14764) The final Explicit conversion matrix we should support in our planner

2019-12-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-14764:
-
Fix Version/s: (was: 1.10.0)
   1.11.0

> The final Explicit conversion matrix we should support in our planner
> -
>
> Key: FLINK-14764
> URL: https://issues.apache.org/jira/browse/FLINK-14764
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: SQL_2011_CAST_Matrix.png
>
>
> The SQL standard defines the cast specification with an explicit conversion 
> matrix (SQL 2011 Part 2 Section 6.13 Syntax Rules 6)). But neither the legacy 
> planner nor the blink planner follows it. IMO we should determine a final 
> explicit/implicit conversion matrix before 1.10 (at least in the blink planner).
>  
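
To make the explicit/implicit distinction concrete (illustrative standard-SQL 
examples; the table and column names are placeholders, and this is not a 
statement of what either planner currently does):

{code:sql}
-- Explicit conversion: the user requests the cast.
SELECT CAST('42' AS INT) FROM t;

-- Implicit conversion: the planner decides whether to coerce str_col
-- to a numeric type in order to evaluate the comparison.
SELECT * FROM t WHERE str_col = 42;
{code}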



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10673: [FLINK-15374][core][config] Update descriptions for jvm overhead config options

2019-12-23 Thread GitBox
flinkbot commented on issue #10673: [FLINK-15374][core][config] Update 
descriptions for jvm overhead config options
URL: https://github.com/apache/flink/pull/10673#issuecomment-568663453
 
 
   
   ## CI report:
   
   * 3a8fcf0e8936f3c4115ff4771b52e060064676af UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10672: [FLINK-15373][core][config] Update descriptions for framework / task off-heap memory config options

2019-12-23 Thread GitBox
flinkbot commented on issue #10672: [FLINK-15373][core][config] Update 
descriptions for framework / task off-heap memory config options
URL: https://github.com/apache/flink/pull/10672#issuecomment-568663425
 
 
   
   ## CI report:
   
   * 31a2ccbabb6675673d445b6a9d258e6622d295d8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10669: [FLINK-15192][docs][table] Split 'SQL' page into multiple sub pages for better readability

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10669: [FLINK-15192][docs][table] Split 
'SQL' page into multiple sub pages for better readability
URL: https://github.com/apache/flink/pull/10669#issuecomment-568653494
 
 
   
   ## CI report:
   
   * aac6bab3b96e3d721234030ce35e24006b6bf7c5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142187553) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3870)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10671: [FLINK-15372][core][config] Use shorter config keys for FLIP-49 total memory config options

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10671: [FLINK-15372][core][config] Use 
shorter config keys for FLIP-49 total memory config options
URL: https://github.com/apache/flink/pull/10671#issuecomment-568658217
 
 
   
   ## CI report:
   
   * b4b8fe5b7c9d469f4fcd1bd83317e1dbaae7c534 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142188774) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3872)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10670: [FLINK-15370][state backends] Make sure sharedResources takes effect in RocksDBResourceContainer

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10670: [FLINK-15370][state backends] Make 
sure sharedResources takes effect in RocksDBResourceContainer
URL: https://github.com/apache/flink/pull/10670#issuecomment-568658193
 
 
   
   ## CI report:
   
   * deed8274c86783bb2068acbcd9c6ad8d40b63f83 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142188770) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3871)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10643: [FLINK-15342][hive] Verify querying Hive view

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10643: [FLINK-15342][hive] Verify querying 
Hive view
URL: https://github.com/apache/flink/pull/10643#issuecomment-567843043
 
 
   
   ## CI report:
   
   * 1051e0c0c2cf1ffc4925ab2db4489ccf615686b0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141883442) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3809)
 
   * d0f0a3aa44051bd8526b9daddf7a28ee7fba19ac UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-14980) add documentation and example for function DDL

2019-12-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-14980.

Resolution: Fixed

> add documentation and example for function DDL
> --
>
> Key: FLINK-14980
> URL: https://issues.apache.org/jira/browse/FLINK-14980
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Bowen Li
>Assignee: Zhenqiu Huang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-14849) Documentation wrong hive dependents

2019-12-23 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-14849:
---

Assignee: Rui Li  (was: Jingsong Lee)

> Documentation wrong hive dependents
> ---
>
> Key: FLINK-14849
> URL: https://issues.apache.org/jira/browse/FLINK-14849
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Documentation
>Reporter: Jingsong Lee
>Assignee: Rui Li
>Priority: Critical
> Fix For: 1.10.0
>
>
> {code:java}
> With:
> <dependency>
>     <groupId>org.apache.hive</groupId>
>     <artifactId>hive-exec</artifactId>
>     <version>3.1.1</version>
> </dependency>
> Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory 
> cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
>   at 
> org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
>   at 
> org.codehaus.commons.compiler.CompilerFactoryFactory.getDefaultCompilerFactory(CompilerFactoryFactory.java:79)
>   at 
> org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:432)
>   ... 68 more
> {code}
> After https://issues.apache.org/jira/browse/FLINK-13749 , the flink-client 
> uses the default child-first resolution order.
> If the user jar has conflicting dependencies, problems like the above occur.
> Maybe we should update the documentation to add some exclusions for the Hive 
> dependencies.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-15204) add documentation for Flink-Hive timestamp conversions in table and udf

2019-12-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-15204.

Resolution: Fixed

> add documentation for Flink-Hive timestamp conversions in table and udf
> ---
>
> Key: FLINK-15204
> URL: https://issues.apache.org/jira/browse/FLINK-15204
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Major
> Fix For: 1.10.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache commented on issue #10643: [FLINK-15342][hive] Verify querying Hive view

2019-12-23 Thread GitBox
lirui-apache commented on issue #10643: [FLINK-15342][hive] Verify querying 
Hive view
URL: https://github.com/apache/flink/pull/10643#issuecomment-568658747
 
 
   Hi @bowenli86 , I just added a view that involves joining two tables.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10673: [FLINK-15374][core][config] Update descriptions for jvm overhead config options

2019-12-23 Thread GitBox
flinkbot commented on issue #10673: [FLINK-15374][core][config] Update 
descriptions for jvm overhead config options
URL: https://github.com/apache/flink/pull/10673#issuecomment-568658483
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3a8fcf0e8936f3c4115ff4771b52e060064676af (Tue Dec 24 
05:32:18 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10671: [FLINK-15372][core][config] Use shorter config keys for FLIP-49 total memory config options

2019-12-23 Thread GitBox
flinkbot commented on issue #10671: [FLINK-15372][core][config] Use shorter 
config keys for FLIP-49 total memory config options
URL: https://github.com/apache/flink/pull/10671#issuecomment-568658217
 
 
   
   ## CI report:
   
   * b4b8fe5b7c9d469f4fcd1bd83317e1dbaae7c534 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10670: [FLINK-15370][state backends] Make sure sharedResources takes effect in RocksDBResourceContainer

2019-12-23 Thread GitBox
flinkbot commented on issue #10670: [FLINK-15370][state backends] Make sure 
sharedResources takes effect in RocksDBResourceContainer
URL: https://github.com/apache/flink/pull/10670#issuecomment-568658193
 
 
   
   ## CI report:
   
   * deed8274c86783bb2068acbcd9c6ad8d40b63f83 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15374) Update descriptions for jvm overhead config options

2019-12-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15374:
---
Labels: pull-request-available  (was: )

> Update descriptions for jvm overhead config options
> ---
>
> Key: FLINK-15374
> URL: https://issues.apache.org/jira/browse/FLINK-15374
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> Update descriptions for "taskmanager.memory.jvm-overhead.[min|max|fraction]" 
> to remove "I/O direct memory" and explicitly state that it's not counted into 
> MaxDirectMemorySize.
> Detailed discussion can be found in this [ML 
> thread|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Some-feedback-after-trying-out-the-new-FLIP-49-memory-configurations-td36129.html].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10669: [FLINK-15192][docs][table] Split 'SQL' page into multiple sub pages for better readability

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10669: [FLINK-15192][docs][table] Split 
'SQL' page into multiple sub pages for better readability
URL: https://github.com/apache/flink/pull/10669#issuecomment-568653494
 
 
   
   ## CI report:
   
   * aac6bab3b96e3d721234030ce35e24006b6bf7c5 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142187553) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3870)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] xintongsong opened a new pull request #10673: [FLINK-15374][core][config] Update descriptions for jvm overhead config options

2019-12-23 Thread GitBox
xintongsong opened a new pull request #10673: [FLINK-15374][core][config] 
Update descriptions for jvm overhead config options
URL: https://github.com/apache/flink/pull/10673
 
 
   ## What is the purpose of the change
   
   This PR updates descriptions for JVM overhead config options, to remove "I/O 
direct memory" and explicitly state that it's not counted into 
MaxDirectMemorySize.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15189) add documentation for catalog view and hive view

2019-12-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-15189.

Resolution: Fixed

> add documentation for catalog view and hive view
> 
>
> Key: FLINK-15189
> URL: https://issues.apache.org/jira/browse/FLINK-15189
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Blocker
> Fix For: 1.10.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10637: [FLINK-15319][e2e] flink-end-to-end-tests-common-kafka fails due to t…

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10637: [FLINK-15319][e2e] 
flink-end-to-end-tests-common-kafka fails due to t…
URL: https://github.com/apache/flink/pull/10637#issuecomment-567778216
 
 
   
   ## CI report:
   
   * eec2a87ddc2fa8bd32e598d4f715259e09792a73 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141861783) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3793)
 
   * 6427c34b6bbef5ea52963c380cb9fa255e330a01 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142183261) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3869)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15333) Using FlinkCostFactory rather than RelOptCostImpl in blink planner

2019-12-23 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002634#comment-17002634
 ] 

Jingsong Lee commented on FLINK-15333:
--

[~Leonard Xu] Which case will fail?

> Using FlinkCostFactory  rather than RelOptCostImpl in blink planner
> ---
>
> Key: FLINK-15333
> URL: https://issues.apache.org/jira/browse/FLINK-15333
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Assignee: godfrey he
>Priority: Major
> Fix For: 1.10.0
>
>
> While testing Flink SQL in Flink 1.10, I found an exception that is a bug and 
> needs to be fixed.
> {code:java}
> // Some comments here
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.calcite.plan.RelOptCostImpl$Factory cannot be cast to 
> org.apache.flink.table.planner.plan.cost.FlinkCostFactory
>   at 
> org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalExchange.computeSelfCost(CommonPhysicalExchange.scala:50)
>   at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdNonCumulativeCost.getNonCumulativeCost(FlinkRelMdNonCumulativeCost.scala:41)
>   at 
> GeneratedMetadataHandler_NonCumulativeCost.getNonCumulativeCost_$(Unknown 
> Source)
>   at 
> GeneratedMetadataHandler_NonCumulativeCost.getNonCumulativeCost(Unknown 
> Source)
>   at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getNonCumulativeCost(RelMetadataQuery.java:301)
>   at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdCumulativeCost.getCumulativeCost(FlinkRelMdCumulativeCost.scala:38)
>   at GeneratedMetadataHandler_CumulativeCost.getCumulativeCost_$(Unknown 
> Source)
>   at GeneratedMetadataHandler_CumulativeCost.getCumulativeCost(Unknown 
> Source)
>   at GeneratedMetadataHandler_CumulativeCost.getCumulativeCost_$(Unknown 
> Source)
>   at GeneratedMetadataHandler_CumulativeCost.getCumulativeCost(Unknown 
> Source)
>   at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getCumulativeCost(RelMetadataQuery.java:282)
>   at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdCumulativeCost$$anonfun$getCumulativeCost$1.apply(FlinkRelMdCumulativeCost.scala:41)
>   at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdCumulativeCost$$anonfun$getCumulativeCost$1.apply(FlinkRelMdCumulativeCost.scala:40)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdCumulativeCost.getCumulativeCost(FlinkRelMdCumulativeCost.scala:39)
>   at GeneratedMetadataHandler_CumulativeCost.getCumulativeCost_$(Unknown 
> Source)
>   at GeneratedMetadataHandler_CumulativeCost.getCumulativeCost(Unknown 
> Source)
>   at GeneratedMetadataHandler_CumulativeCost.getCumulativeCost_$(Unknown 
> Source)
>   at GeneratedMetadataHandler_CumulativeCost.getCumulativeCost(Unknown 
> Source)
>   at 
> org.apache.calcite.rel.metadata.RelMetadataQuery.getCumulativeCost(RelMetadataQuery.java:282)
>   at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdCumulativeCost$$anonfun$getCumulativeCost$1.apply(FlinkRelMdCumulativeCost.scala:41)
>   at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdCumulativeCost$$anonfun$getCumulativeCost$1.apply(FlinkRelMdCumulativeCost.scala:40)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.metadata.FlinkRelMdCumulativeCost.getCumulativeCost(FlinkRelMdCumulativeCost.scala:39)
>   at 

[GitHub] [flink] flinkbot commented on issue #10672: [FLINK-15373][core][config] Update descriptions for framework / task off-heap memory config options

2019-12-23 Thread GitBox
flinkbot commented on issue #10672: [FLINK-15373][core][config] Update 
descriptions for framework / task off-heap memory config options
URL: https://github.com/apache/flink/pull/10672#issuecomment-568656301
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 31a2ccbabb6675673d445b6a9d258e6622d295d8 (Tue Dec 24 
05:17:59 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15373) Update descriptions for framework / task off-heap memory config options

2019-12-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15373:
---
Labels: pull-request-available  (was: )

> Update descriptions for framework / task off-heap memory config options
> ---
>
> Key: FLINK-15373
> URL: https://issues.apache.org/jira/browse/FLINK-15373
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> Update descriptions for "taskmanager.memory.framework.off-heap.size" and 
> "taskmanager.memory.task.off-heap.size" to explicitly state that:
> * Both direct and native memory are accounted
> * Will be fully counted into MaxDirectMemorySize
> Detailed discussion can be found in this [ML 
> thread|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Some-feedback-after-trying-out-the-new-FLIP-49-memory-configurations-td36129.html].
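
For reference, the two options as they would appear in flink-conf.yaml (the 
values shown are assumed defaults, for illustration only):

{code}
taskmanager.memory.framework.off-heap.size: 128m
taskmanager.memory.task.off-heap.size: 0m
{code}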



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xintongsong opened a new pull request #10672: [FLINK-15373][core][config] Update descriptions for framework / task off-heap memory config options

2019-12-23 Thread GitBox
xintongsong opened a new pull request #10672: [FLINK-15373][core][config] 
Update descriptions for framework / task off-heap memory config options
URL: https://github.com/apache/flink/pull/10672
 
 
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10671: [FLINK-15372][core][config] Use shorter config keys for FLIP-49 total memory config options

2019-12-23 Thread GitBox
flinkbot commented on issue #10671: [FLINK-15372][core][config] Use shorter 
config keys for FLIP-49 total memory config options
URL: https://github.com/apache/flink/pull/10671#issuecomment-568654185
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b4b8fe5b7c9d469f4fcd1bd83317e1dbaae7c534 (Tue Dec 24 
05:03:54 UTC 2019)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15372) Use shorter config keys for FLIP-49 total memory config options

2019-12-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15372:
---
Labels: pull-request-available  (was: )

> Use shorter config keys for FLIP-49 total memory config options
> ---
>
> Key: FLINK-15372
> URL: https://issues.apache.org/jira/browse/FLINK-15372
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> We propose to use shorter keys for the total flink / process memory config 
> options, to make them less clumsy without loss of expressiveness.
> To be specific, we propose to:
> * Change the config option key "taskmanager.memory.total-flink.size" to 
> "taskmanager.memory.flink.size"
> * Change the config option key "taskmanager.memory.total-process.size" to 
> "taskmanager.memory.process.size"
> Detailed discussion can be found in this [ML 
> thread|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Some-feedback-after-trying-out-the-new-FLIP-49-memory-configurations-td36129.html].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xintongsong opened a new pull request #10671: [FLINK-15372][core][config] Use shorter config keys for FLIP-49 total memory config options

2019-12-23 Thread GitBox
xintongsong opened a new pull request #10671: [FLINK-15372][core][config] Use 
shorter config keys for FLIP-49 total memory config options
URL: https://github.com/apache/flink/pull/10671
 
 
   ## What is the purpose of the change
   
   This PR updates the config keys for the total flink / process memory sizes 
by removing the "total-" prefix, making them less clumsy without loss of 
expressiveness.
   
   ## Brief change log
   
   - Change the config option key "taskmanager.memory.total-flink.size" to 
"taskmanager.memory.flink.size"
   - Change the config option key "taskmanager.memory.total-process.size" to 
"taskmanager.memory.process.size"
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10669: [FLINK-15192][docs][table] Split 'SQL' page into multiple sub pages for better readability

2019-12-23 Thread GitBox
flinkbot commented on issue #10669: [FLINK-15192][docs][table] Split 'SQL' page 
into multiple sub pages for better readability
URL: https://github.com/apache/flink/pull/10669#issuecomment-568653494
 
 
   
   ## CI report:
   
   * aac6bab3b96e3d721234030ce35e24006b6bf7c5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10637: [FLINK-15319][e2e] flink-end-to-end-tests-common-kafka fails due to t…

2019-12-23 Thread GitBox
flinkbot edited a comment on issue #10637: [FLINK-15319][e2e] 
flink-end-to-end-tests-common-kafka fails due to t…
URL: https://github.com/apache/flink/pull/10637#issuecomment-567778216
 
 
   
   ## CI report:
   
   * eec2a87ddc2fa8bd32e598d4f715259e09792a73 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141861783) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3793)
 
   * 6427c34b6bbef5ea52963c380cb9fa255e330a01 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142183261) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3869)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15344) Update limitations in hive udf document

2019-12-23 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-15344.

Fix Version/s: 1.11.0
   Resolution: Fixed

master: 5ec3feac0ecf4991faf1ba9e6d45125ae8536e75
1.10: 324537a107ffb8f3c9ceed4b6d8a3a3bdb3c5711

> Update limitations in hive udf document
> ---
>
> Key: FLINK-15344
> URL: https://issues.apache.org/jira/browse/FLINK-15344
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The limitation is no longer valid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10670: [FLINK-15370][state backends] Make sure sharedResources takes effect in RocksDBResourceContainer

2019-12-23 Thread GitBox
flinkbot commented on issue #10670: [FLINK-15370][state backends] Make sure 
sharedResources takes effect in RocksDBResourceContainer
URL: https://github.com/apache/flink/pull/10670#issuecomment-568651457
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit deed8274c86783bb2068acbcd9c6ad8d40b63f83 (Tue Dec 24 
04:46:14 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 closed pull request #10647: [FLINK-15344][documentation] Update limitations in hive udf document

2019-12-23 Thread GitBox
bowenli86 closed pull request #10647: [FLINK-15344][documentation] Update 
limitations in hive udf document
URL: https://github.com/apache/flink/pull/10647
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15370) Configured write buffer manager actually not take effect in RocksDB's DBOptions

2019-12-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15370:
---
Labels: pull-request-available  (was: )

> Configured write buffer manager actually not take effect in RocksDB's 
> DBOptions
> ---
>
> Key: FLINK-15370
> URL: https://issues.apache.org/jira/browse/FLINK-15370
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Yun Tang
>Assignee: Yu Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>
> Currently, we call {{DBOptions#setWriteBufferManager}} after we extract the 
> {{DBOptions}} from {{RocksDBResourceContainer}}; however, we extract a 
> new {{DBOptions}} when creating the RocksDB instance. In other words, the 
> configured write buffer manager does not take effect in the {{DBOptions}} 
> that is finally used in the target RocksDB instance.
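
For illustration, a hedged sketch of the problematic pattern (identifiers are 
stand-ins for the real Flink classes, not the actual code):
{code:java}
import org.rocksdb.DBOptions;
import org.rocksdb.WriteBufferManager;

class WriteBufferManagerBugSketch {

    // Illustrative stand-in for RocksDBResourceContainer#getDbOptions,
    // which builds a fresh DBOptions on every call.
    static DBOptions extractDbOptions() {
        return new DBOptions();
    }

    static void sketch(WriteBufferManager sharedWriteBufferManager) {
        // The manager is applied to one extracted DBOptions instance...
        DBOptions configured = extractDbOptions();
        configured.setWriteBufferManager(sharedWriteBufferManager);

        // ...but the code that actually opens RocksDB extracts a fresh
        // DBOptions, which never saw the call above, so the shared write
        // buffer manager is silently dropped.
        DBOptions usedToOpenDb = extractDbOptions();
    }
}
{code}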



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] carp84 commented on issue #10670: [FLINK-15370][state backends] Make sure sharedResources takes effect in RocksDBResourceContainer

2019-12-23 Thread GitBox
carp84 commented on issue #10670: [FLINK-15370][state backends] Make sure 
sharedResources takes effect in RocksDBResourceContainer
URL: https://github.com/apache/flink/pull/10670#issuecomment-568651196
 
 
   @Myasuka @StephanEwen Please help review when time allows, thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] carp84 opened a new pull request #10670: [FLINK-15370][state backends] Make sure sharedResources takes effect in RocksDBResourceContainer

2019-12-23 Thread GitBox
carp84 opened a new pull request #10670: [FLINK-15370][state backends] Make 
sure sharedResources takes effect in RocksDBResourceContainer
URL: https://github.com/apache/flink/pull/10670
 
 
   
   ## What is the purpose of the change
   
   Currently we never use the `sharedResources` in `RocksDBResourceContainer`, 
and in `RocksDBStateBackend.createKeyedStateBackend` we set the shared write 
buffer manager on the `dbOptions` but never pass/use it in 
`RocksDBKeyedStateBackendBuilder`. As a result, different RocksDB backends 
actually use separate write buffer managers, so the total memory control is 
ineffective.
   
   
   ## Brief change log
   
   In `RocksDBResourceContainer.getDbOptions`, always set the write buffer 
manager to the one from `sharedResources` if it is non-null.
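   
   A minimal sketch of the resulting behavior, with illustrative names (the 
real code wraps the handle in an `OpaqueMemoryResource`; the accessors here 
are simplified assumptions):
   
```java
import org.rocksdb.DBOptions;
import org.rocksdb.WriteBufferManager;

class ResourceContainerSketch {

    // Simplified stand-in for the shared resource handle held by
    // RocksDBResourceContainer; may be null if no memory sharing is active.
    private final WriteBufferManager sharedWriteBufferManager;

    ResourceContainerSketch(WriteBufferManager sharedWriteBufferManager) {
        this.sharedWriteBufferManager = sharedWriteBufferManager;
    }

    DBOptions getDbOptions() {
        DBOptions opt = new DBOptions();
        if (sharedWriteBufferManager != null) {
            // Applied on every extraction, so the DBOptions actually used
            // to open RocksDB always carries the shared memory limit.
            opt.setWriteBufferManager(sharedWriteBufferManager);
        }
        return opt;
    }
}
```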
   
   
   ## Verifying this change
   
   This change added a new `testSharedResources` test case to cover the 
problematic case.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

