[jira] [Created] (FLINK-15127) rename CreateFunctionOperation, DropFunctionOperation, AlterFunctionOperation to CreateCatalogFunctionOperation, DropCatalogFunctionOperation, AlterCatalogFunctionOperation

2019-12-07 Thread Bowen Li (Jira)
Bowen Li created FLINK-15127:


 Summary: rename CreateFunctionOperation, DropFunctionOperation, 
AlterFunctionOperation to CreateCatalogFunctionOperation, 
DropCatalogFunctionOperation, AlterCatalogFunctionOperation
 Key: FLINK-15127
 URL: https://issues.apache.org/jira/browse/FLINK-15127
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Bowen Li
Assignee: Zhenqiu Huang


Rename these operations since they should only support operations related to
catalog functions (both temporary and persistent).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15126) migrate "show functions" from sql cli to sql parser

2019-12-07 Thread Bowen Li (Jira)
Bowen Li created FLINK-15126:


 Summary: migrate "show functions" from sql cli to sql parser
 Key: FLINK-15126
 URL: https://issues.apache.org/jira/browse/FLINK-15126
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Client, Table SQL / Planner
Reporter: Bowen Li
Assignee: Zhenqiu Huang
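
For context, the statement in question as it appears in the SQL CLI today
(a sketch; migrating it to the SQL parser means the parser, rather than the
client's own command matching, would recognize the statement):

{noformat}
Flink SQL> SHOW FUNCTIONS;
{noformat}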






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15125) PROCTIME() computed column defined in CREATE TABLE doesn't work

2019-12-07 Thread Jark Wu (Jira)
Jark Wu created FLINK-15125:
---

 Summary: PROCTIME() computed column defined in CREATE TABLE 
doesn't work
 Key: FLINK-15125
 URL: https://issues.apache.org/jira/browse/FLINK-15125
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Jark Wu
 Fix For: 1.10.0


{{CatalogTableITCase#testStreamSourceTableWithProctime}} is ignored for now. We 
should enable it and fix the problem. The exception stack:


{code}
scala.MatchError: PROCTIME() (of class org.apache.calcite.rex.RexCall)
    at org.apache.flink.table.planner.plan.rules.logical.BatchLogicalWindowAggregateRule.getTimeFieldReference(BatchLogicalWindowAggregateRule.scala:59)
    at org.apache.flink.table.planner.plan.rules.logical.LogicalWindowAggregateRuleBase.translateWindow(LogicalWindowAggregateRuleBase.scala:249)
    at org.apache.flink.table.planner.plan.rules.logical.LogicalWindowAggregateRuleBase.onMatch(LogicalWindowAggregateRuleBase.scala:72)
    at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:319)
    at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:560)
    at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:419)
    at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:256)
    at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
    at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:215)
    at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:202)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
    at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
    at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
    at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:221)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:148)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:661)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:620)
    at org.apache.flink.table.planner.catalog.CatalogTableITCase.execJob(CatalogTableITCase.scala:89)
    at org.apache.flink.table.planner.catalog.CatalogTableITCase.testStreamSourceTableWithProctime(CatalogTableITCase.scala:607)
{code}
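
For reference, a minimal sketch of the kind of DDL that exercises this code
path (table name, fields and connector properties are illustrative, not taken
from the test):

{code}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

EnvironmentSettings settings =
    EnvironmentSettings.newInstance().useBlinkPlanner().build();
TableEnvironment tEnv = TableEnvironment.create(settings);

// A table with a PROCTIME() computed column defined via DDL.
tEnv.sqlUpdate(
    "CREATE TABLE src (" +
    "  a INT," +
    "  proc AS PROCTIME()" +
    ") WITH (" +
    "  'connector.type' = 'filesystem'," +
    "  'connector.path' = '/tmp/src.csv'," +
    "  'format.type' = 'csv'" +
    ")");
// A group-window aggregate over `proc` then fails during planning with the
// MatchError above: getTimeFieldReference does not expect a RexCall
// (PROCTIME()) where it matches on the time field.
{code}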




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15124) types with precision can't be executed in sql client with blink planner

2019-12-07 Thread Kurt Young (Jira)
Kurt Young created FLINK-15124:
--

 Summary: types with precision can't be executed in sql client with 
blink planner
 Key: FLINK-15124
 URL: https://issues.apache.org/jira/browse/FLINK-15124
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client, Table SQL / Planner
Affects Versions: 1.10.0
Reporter: Kurt Young


I created a table in sql client with blink planner:  
{noformat}
create table t (
  a int,
  b varchar,
  c decimal(10, 5)
) with (
  'connector.type' = 'filesystem',
  'format.type' = 'csv',
  'format.derive-schema' = 'true',
  'connector.path' = 'xxx'
);
{noformat}
The table description looks good:
{noformat}
Flink SQL> describe t;
root
  |-- a: INT
  |-- b: STRING
  |-- c: DECIMAL(10, 5)
{noformat}
But the select query failed:
{noformat}
Flink SQL> select * from t;
[ERROR] Could not execute SQL statement. Reason: 
org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types of expression and result type. Expression[GeneratedExpression(field$3,isNull$3,,DECIMAL(38, 18),None)] type is [DECIMAL(38, 18)], result type is [DECIMAL(10, 5)]
{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15123) remove uniqueKeys from FlinkStatistic in blink planner

2019-12-07 Thread godfrey he (Jira)
godfrey he created FLINK-15123:
--

 Summary: remove uniqueKeys from FlinkStatistic in blink planner 
 Key: FLINK-15123
 URL: https://issues.apache.org/jira/browse/FLINK-15123
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: godfrey he


{{uniqueKeys}} is a kind of constraint, so it is unreasonable for it to be
treated as a kind of statistic. We should therefore remove uniqueKeys from
{{FlinkStatistic}} in the blink planner. Some temporary solutions (e.g.
{{RichTableSourceQueryOperation}}) should also be resolved after primaryKey
is introduced in {{TableSchema}}.
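
A sketch of the distinction (the {{FlinkStatistic}} builder shape below
reflects the current blink planner API as I understand it; the
{{primaryKey}} call is hypothetical, since as noted above it is not yet
introduced in {{TableSchema}}):

{code}
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.planner.plan.stats.FlinkStatistic;

// Today: the unique-key constraint travels with the table's statistics.
FlinkStatistic statistic = FlinkStatistic.builder()
    .uniqueKeys(Collections.singleton(
        new HashSet<>(Arrays.asList("user_id"))))
    .build();

// After this ticket: the constraint would live with the schema instead
// (primaryKey(...) is a hypothetical future builder method here).
TableSchema schema = TableSchema.builder()
    .field("user_id", DataTypes.BIGINT())
    .primaryKey("user_id")
    .build();
{code}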



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15122) Reusing record object in StreamTaskNetworkInput

2019-12-07 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-15122:


 Summary: Reusing record object in StreamTaskNetworkInput
 Key: FLINK-15122
 URL: https://issues.apache.org/jira/browse/FLINK-15122
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
Reporter: Jingsong Lee
 Fix For: 1.11.0


Currently, blink's batch mode forces object reuse to be enabled, but records
read from the network are not reused, which leads to heavy GC pressure in
batch jobs.
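
For context, object reuse is the mode toggled through {{ExecutionConfig}};
a minimal sketch:

{code}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env =
    StreamExecutionEnvironment.getExecutionEnvironment();
// With object reuse enabled, the runtime may hand the same record instance
// to successive operators instead of making a deep copy per record.
env.getConfig().enableObjectReuse();
{code}

The point of this ticket is that records deserialized in
{{StreamTaskNetworkInput}} do not yet participate in this reuse, so each
incoming network record still allocates a new object.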



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Adding e2e tests for Flink's Mesos integration

2019-12-07 Thread Yangze Guo
Thanks for your feedback!

@Till
Regarding the time overhead, I think it mainly comes from the network
transmission. Building the image locally downloads about 260MB in total,
including the base image and packages, while the compressed size of the
image pulled from DockerHub is 347MB. Thus, I agree that it is ok to
build the image locally.

@Piyush
Thank you for offering the help and sharing your usage scenario. At the
current stage, I think it would be really helpful if you could shrink
the custom image[1] or reduce the time overhead of building it locally.
Any ideas for improving test coverage would also be appreciated.

[1]https://hub.docker.com/layers/karmagyz/mesos-flink/latest/images/sha256-4e1caefea107818aa11374d6ac8a6e889922c81806f5cd791ead141f18ec7e64

Best,
Yangze Guo

On Sat, Dec 7, 2019 at 3:17 AM Piyush Narang  wrote:
>
> +1 from our end as well. At Criteo, we are running some Flink jobs on Mesos 
> in production to compute short-term features for machine learning. We’d love 
> to help out and contribute to this initiative.
>
> Thanks,
> -- Piyush
>
>
> From: Till Rohrmann 
> Date: Friday, December 6, 2019 at 8:10 AM
> To: dev 
> Cc: user 
> Subject: Re: [DISCUSS] Adding e2e tests for Flink's Mesos integration
>
> Big +1 for adding a fully working e2e test for Flink's Mesos integration. 
> Ideally we would have it ready for the 1.10 release. The lack of such a test 
> has bitten us already multiple times.
>
> In general I would prefer to use the official image if possible since it 
> frees us from maintaining our own custom image. Since Java 9 is no longer 
> officially supported and we opted for supporting Java 11 (LTS) instead, it 
> might not be feasible, though. How much longer would building the custom 
> image take compared to downloading it from DockerHub? Maybe it is ok to 
> build the image locally. Then we would not have to maintain the image.
>
> Cheers,
> Till
>
> On Fri, Dec 6, 2019 at 11:05 AM Yangze Guo <karma...@gmail.com> wrote:
> Hi, all,
>
> Currently, there is no end-to-end test or IT case for Mesos deployment,
> while common deployment-related development would inevitably touch the
> logic of this component. Thus, some work needs to be done to guarantee
> the experience for both Mesos users and contributors. After offline
> discussion with Till and Xintong, we have some basic ideas and would
> like to start a discussion thread on adding end-to-end tests for
> Flink's Mesos integration.
>
> As a first step, we would like to keep the scope of this contribution
> relatively small. This may also help us quickly get some basic test
> cases in place that might be helpful for the upcoming 1.10 release.
>
> As far as we can think of, what needs to be done is to set up a Mesos
> framework during the testing and to determine which tests need to be
> included.
>
>
> ** Regarding the Mesos framework: after trying out several approaches,
> I find that setting up Mesos in Docker is probably what we want. The
> resources needed for building and setting up Mesos from source are
> probably not affordable in most scenarios. So, the one open question
> worth discussing is the choice of Docker image. We have come up with
> two options.
>
> - Using the official Mesos image[1]
> The official image was the first alternative that came to our mind,
> but we ran into a Java version compatibility problem that led to
> failures when launching task executors. Flink has supported Java 9
> since version 1.9.0 [2]. However, the official Docker image of Mesos
> is built with a development version of JDK 9, which probably caused
> this problem. Unless we want to make Flink compatible with the JDK
> development version used by the official Mesos image, this option
> does not work out. Besides, according to the official roadmap[5],
> Java 9 is not a long-term support version, which may bring stability
> risks in the future.
>
> - Building a custom image
> I've already tried building a custom image[3] and successfully run
> most of the existing end-to-end test cases with it. The image is
> built with Ubuntu 16.04, JDK 8 and Mesos 1.7.1. For the Mesos e2e
> test framework, we could either build the image from a Dockerfile or
> pull the pre-built image from DockerHub (or another hub service)
> during the testing.
> If we decide to publish an image on DockerHub, we probably need an
> official Flink repository/account to hold it.
>
>
> ** Regarding the test coverage, we think the following three tests
> could be a good starting point, covering an essential set of
> behaviors for Mesos deployment:
> - Wordcount end-to-end test. For verifying the basic process of Mesos
> deployment.
> - Multiple submissions of the same job. For preventing resource
> management problems on Mesos, such as [4].
> - State TTL RocksDb backend end-to-end test. For verifying memory
> configuration behaviors, since Mesos has its own config options and
> logic.
>
> Unfortunately, neither of us who participated in the