[jira] [Comment Edited] (FLINK-13400) Remove Hive and Hadoop dependencies from SQL Client

2021-07-19 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17382162#comment-17382162
 ] 

frank wang edited comment on FLINK-13400 at 7/19/21, 10:05 AM:
---

[~lirui] I have submitted a PR for this issue: [FLINK-13400|https://github.com/apache/flink/pull/16532].
I removed the Hive/Hadoop-related packages and some code from sql-client, created a new module, flink-end-to-end-tests-hive, and added catalog tests covering sql-client and the hive-connector. I did not implement e2e tests for sql-client with Hive, because there is no Hive test container yet, unlike [SQLClientHBaseITCase|https://issues.apache.org/jira/browse/FLINK-21519].
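
For reference, a minimal sketch of what such a Hive test container could look like with Testcontainers; the Docker image, port, and SERVICE_NAME variable are assumptions for illustration, not something the PR ships:

{code:java}
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class HiveContainerSketch {
    public static void main(String[] args) {
        // Hypothetical image and port; a real ITCase would pick a maintained Hive image.
        try (GenericContainer<?> hive =
                new GenericContainer<>(DockerImageName.parse("apache/hive:3.1.3"))
                        .withEnv("SERVICE_NAME", "hiveserver2")
                        .withExposedPorts(10000)) {
            hive.start();
            // Tests would connect to HiveServer2 through the mapped port.
            String jdbcUrl = "jdbc:hive2://" + hive.getHost()
                    + ":" + hive.getMappedPort(10000) + "/default";
            System.out.println("HiveServer2 reachable at " + jdbcUrl);
        }
    }
}
{code}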


was (Author: frank wang):
[~lirui] I have submitted a PR for this issue: [FLINK-13400|https://github.com/apache/flink/pull/16517].
I removed the Hive/Hadoop-related packages and some code from sql-client, created a new module, flink-end-to-end-tests-hive, and added catalog tests covering sql-client and the hive-connector. I did not implement e2e tests for sql-client with Hive, because there is no Hive test container yet, unlike [SQLClientHBaseITCase|https://issues.apache.org/jira/browse/FLINK-21519].

> Remove Hive and Hadoop dependencies from SQL Client
> ---
>
> Key: FLINK-13400
> URL: https://issues.apache.org/jira/browse/FLINK-13400
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Timo Walther
>Assignee: frank wang
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> stale-major
> Fix For: 1.14.0
>
>
> 340/550 lines in the SQL Client {{pom.xml}} are just around Hive and Hadoop
> dependencies. Hive has nothing to do with the SQL Client and it will be hard
> to maintain the long list of exclusions there. Some dependencies are even in
> a {{provided}} scope and not {{test}} scope.
> We should remove all dependencies on Hive/Hadoop and replace catalog-related
> tests with a testing catalog, similar to how we test sources/sinks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-13400) Remove Hive and Hadoop dependencies from SQL Client

2021-07-16 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17382162#comment-17382162
 ] 

frank wang commented on FLINK-13400:


[~lirui] I have submitted a PR for this issue: [FLINK-13400|https://github.com/apache/flink/pull/16517].
I removed the Hive/Hadoop-related packages and some code from sql-client, created a new module, flink-end-to-end-tests-hive, and added catalog tests covering sql-client and the hive-connector. I did not implement e2e tests for sql-client with Hive, because there is no Hive test container yet, unlike [SQLClientHBaseITCase|https://issues.apache.org/jira/browse/FLINK-21519].

> Remove Hive and Hadoop dependencies from SQL Client
> ---
>
> Key: FLINK-13400
> URL: https://issues.apache.org/jira/browse/FLINK-13400
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Timo Walther
>Assignee: frank wang
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> stale-major
> Fix For: 1.14.0
>
>
> 340/550 lines in the SQL Client {{pom.xml}} are just around Hive and Hadoop
> dependencies. Hive has nothing to do with the SQL Client and it will be hard
> to maintain the long list of exclusions there. Some dependencies are even in
> a {{provided}} scope and not {{test}} scope.
> We should remove all dependencies on Hive/Hadoop and replace catalog-related
> tests with a testing catalog, similar to how we test sources/sinks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-13400) Remove Hive and Hadoop dependencies from SQL Client

2021-07-16 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17382162#comment-17382162
 ] 

frank wang edited comment on FLINK-13400 at 7/16/21, 3:51 PM:
--

[~lirui] I have submitted a PR for this issue: [FLINK-13400|https://github.com/apache/flink/pull/16517].
I removed the Hive/Hadoop-related packages and some code from sql-client, created a new module, flink-end-to-end-tests-hive, and added catalog tests covering sql-client and the hive-connector. I did not implement e2e tests for sql-client with Hive, because there is no Hive test container yet, unlike [SQLClientHBaseITCase|https://issues.apache.org/jira/browse/FLINK-21519].


was (Author: frank wang):
[~lirui] I have submitted a PR for this issue: [FLINK-13400|https://github.com/apache/flink/pull/16517].
I removed the Hive/Hadoop-related packages and some code from sql-client, created a new module, flink-end-to-end-tests-hive, and added catalog tests covering sql-client and the hive-connector. I did not implement e2e tests for sql-client with Hive, because there is no Hive test container yet, unlike [SQLClientHBaseITCase|https://issues.apache.org/jira/browse/FLINK-21519].

> Remove Hive and Hadoop dependencies from SQL Client
> ---
>
> Key: FLINK-13400
> URL: https://issues.apache.org/jira/browse/FLINK-13400
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Timo Walther
>Assignee: frank wang
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> stale-major
> Fix For: 1.14.0
>
>
> 340/550 lines in the SQL Client {{pom.xml}} are just around Hive and Hadoop
> dependencies. Hive has nothing to do with the SQL Client and it will be hard
> to maintain the long list of exclusions there. Some dependencies are even in
> a {{provided}} scope and not {{test}} scope.
> We should remove all dependencies on Hive/Hadoop and replace catalog-related
> tests with a testing catalog, similar to how we test sources/sinks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2021-07-15 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17381785#comment-17381785
 ] 

frank wang commented on FLINK-14055:


[~ZhenqiuHuang] what is the problem? Is it that the class cannot be loaded? Maybe we can solve it together :D
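
For reference, the ticket's proposed syntax (quoted below) could be exercised through the Table API roughly like this; the jar path and class name are made up, and the USING clause is only a proposal at this point, so this would not run on current Flink:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FunctionDdlSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().build());
        // Hypothetical: register a UDF whose implementation lives in a remote jar.
        tEnv.executeSql(
                "CREATE FUNCTION my_upper AS 'com.example.MyUpper' "
                        + "USING JAR 'hdfs:///udfs/my-udfs.jar'");
    }
}
{code}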

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Major
>  Labels: auto-unassigned, sprint
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-13400) Remove Hive and Hadoop dependencies from SQL Client

2021-07-08 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377733#comment-17377733
 ] 

frank wang commented on FLINK-13400:


Hi [~lirui], maybe you can assign this to me.

> Remove Hive and Hadoop dependencies from SQL Client
> ---
>
> Key: FLINK-13400
> URL: https://issues.apache.org/jira/browse/FLINK-13400
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Timo Walther
>Priority: Major
>  Labels: auto-deprioritized-critical, auto-unassigned, stale-major
> Fix For: 1.14.0
>
>
> 340/550 lines in the SQL Client {{pom.xml}} are just around Hive and Hadoop
> dependencies. Hive has nothing to do with the SQL Client and it will be hard
> to maintain the long list of exclusions there. Some dependencies are even in
> a {{provided}} scope and not {{test}} scope.
> We should remove all dependencies on Hive/Hadoop and replace catalog-related
> tests with a testing catalog, similar to how we test sources/sinks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-22402) Add hive dialect test in sql-client

2021-07-08 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377481#comment-17377481
 ] 

frank wang edited comment on FLINK-22402 at 7/8/21, 4:18 PM:
-

OK, I can do it as part of [FLINK-13400|https://issues.apache.org/jira/browse/FLINK-13400].
Thanks [~lirui]


was (Author: frank wang):
OK, I can do it as part of [FLINK-13400|https://issues.apache.org/jira/browse/FLINK-13400] [~lirui]

> Add hive dialect test in sql-client
> ---
>
> Key: FLINK-22402
> URL: https://issues.apache.org/jira/browse/FLINK-22402
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Tests
>Reporter: Rui Li
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-22402) Add hive dialect test in sql-client

2021-07-08 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377481#comment-17377481
 ] 

frank wang edited comment on FLINK-22402 at 7/8/21, 4:09 PM:
-

OK, I can do it as part of [FLINK-13400|https://issues.apache.org/jira/browse/FLINK-13400] [~lirui]


was (Author: frank wang):
OK, I can do it as part of https://issues.apache.org/jira/browse/FLINK-13400 [~lirui]

> Add hive dialect test in sql-client
> ---
>
> Key: FLINK-22402
> URL: https://issues.apache.org/jira/browse/FLINK-22402
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Tests
>Reporter: Rui Li
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22402) Add hive dialect test in sql-client

2021-07-08 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377481#comment-17377481
 ] 

frank wang commented on FLINK-22402:


OK, I can do it as part of https://issues.apache.org/jira/browse/FLINK-13400 [~lirui]

> Add hive dialect test in sql-client
> ---
>
> Key: FLINK-22402
> URL: https://issues.apache.org/jira/browse/FLINK-22402
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Tests
>Reporter: Rui Li
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2021-07-08 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377351#comment-17377351
 ] 

frank wang edited comment on FLINK-14055 at 7/8/21, 12:22 PM:
--

[~gelimusong], yes, the path can be a remote path, e.g. in HDFS; we can either download the jar to a local path or support an HDFS class loader that loads from the remote path.
[~ZhenqiuHuang] in your code, have you already finished that? If so, why did you write a FLIP to discuss it?


was (Author: frank wang):
[~gelimusong], yes, the path can be a remote path, e.g. in HDFS; we can either download the jar to a local path or support an HDFS class loader that loads from the remote path.
[~ZhenqiuHuang] in your code, have you already finished that?

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Major
>  Labels: auto-unassigned, sprint
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2021-07-08 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377351#comment-17377351
 ] 

frank wang commented on FLINK-14055:


[~gelimusong], yes, the path can be a remote path, e.g. in HDFS; we can either download the jar to a local path or support an HDFS class loader that loads from the remote path.
[~ZhenqiuHuang] in your code, have you already finished that?
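
A minimal sketch of the "download the jar to a local path, then load it" option, using only standard JDK classes; the URL and class name are illustrative:

{code:java}
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class RemoteUdfLoaderSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical remote location of the UDF jar.
        Path localJar = Files.createTempFile("udf-", ".jar");
        try (InputStream in =
                new URL("https://repo.example.org/udfs/my-udf.jar").openStream()) {
            Files.copy(in, localJar, StandardCopyOption.REPLACE_EXISTING);
        }
        try (URLClassLoader loader = new URLClassLoader(
                new URL[] {localJar.toUri().toURL()},
                Thread.currentThread().getContextClassLoader())) {
            // Load and instantiate the function class from the downloaded jar.
            Object udf = loader.loadClass("com.example.MyScalarFunction")
                    .getDeclaredConstructor().newInstance();
            System.out.println("Loaded " + udf.getClass().getName());
        }
    }
}
{code}

An HDFS class loader would follow the same shape, with the download step replaced by reads through the Hadoop FileSystem API.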

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Major
>  Labels: auto-unassigned, sprint
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22402) Add hive dialect test in sql-client

2021-07-08 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377345#comment-17377345
 ] 

frank wang commented on FLINK-22402:


Hi [~lirui], is this finished? If not, can you assign it to me?

> Add hive dialect test in sql-client
> ---
>
> Key: FLINK-22402
> URL: https://issues.apache.org/jira/browse/FLINK-22402
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Tests
>Reporter: Rui Li
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2021-07-07 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17376670#comment-17376670
 ] 

frank wang commented on FLINK-14055:


[~ZhenqiuHuang], we implemented the CatalogFactory and FunctionDefinitionFactory interfaces,
and the core code looks like this (HdfsClassLoader is our own class):

{code:java}
@Override
public FunctionDefinition createFunctionDefinition(
        String name, CatalogFunction catalogFunction) {
    // Currently only handles Java class-based functions.
    String className = catalogFunction.getClassName();
    // Rewrite "a.b.Outer.Inner" to "a.b.Outer$Inner" in case the function is an inner class.
    String innerClassName =
            className.substring(0, className.lastIndexOf('.'))
                    + "$" + className.substring(className.lastIndexOf('.') + 1);

    Object func;
    HdfsClassLoader hdfsClassLoader = null;
    try {
        // Build a class loader over the jar referenced by the catalog function.
        hdfsClassLoader =
                HdfsClassLoader.getClazz(catalogFunction.getProperties().get("JAR_PATH"));
        func = hdfsClassLoader.loadClass(innerClassName).newInstance();
    } catch (Exception e) {
        try {
            // Fall back to the plain class name on the HDFS class loader.
            // (If getClazz failed above, this NPEs and we fall through again.)
            func = hdfsClassLoader.loadClass(className).newInstance();
        } catch (Exception ie) {
            try {
                // Last resort: the context class loader.
                func = Thread.currentThread()
                        .getContextClassLoader()
                        .loadClass(className)
                        .newInstance();
            } catch (Exception iie) {
                throw new IllegalStateException(
                        String.format("Failed instantiating '%s'", className));
            }
        }
    }

    UserDefinedFunction udf = (UserDefinedFunction) func;

    if (udf instanceof ScalarFunction) {
        return new ScalarFunctionDefinition(name, (ScalarFunction) udf);
    } else if (udf instanceof TableFunction) {
        TableFunction t = (TableFunction) udf;
        return new TableFunctionDefinition(name, t, t.getResultType());
    } else if (udf instanceof AggregateFunction) {
        AggregateFunction a = (AggregateFunction) udf;
        return new AggregateFunctionDefinition(
                name, a, a.getAccumulatorType(), a.getResultType());
    } else if (udf instanceof TableAggregateFunction) {
        TableAggregateFunction a = (TableAggregateFunction) udf;
        return new TableAggregateFunctionDefinition(
                name, a, a.getAccumulatorType(), a.getResultType());
    } else {
        throw new UnsupportedOperationException(
                String.format(
                        "Function %s should be of ScalarFunction, TableFunction, "
                                + "AggregateFunction, or TableAggregateFunction",
                        className));
    }
}
{code}

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Major
>  Labels: auto-unassigned, sprint
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23032) Refactor HiveSource to make it usable in data stream job

2021-07-06 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17376233#comment-17376233
 ] 

frank wang commented on FLINK-23032:


[~lirui] can you assign this feature to me? Maybe I can help get it done.

> Refactor HiveSource to make it usable in data stream job
> 
>
> Key: FLINK-23032
> URL: https://issues.apache.org/jira/browse/FLINK-23032
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
> Fix For: 1.14.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-10851) sqlUpdate support complex insert grammar

2021-07-06 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17376230#comment-17376230
 ] 

frank wang commented on FLINK-10851:


[~dwysakowicz] yes, we can close this ticket

> sqlUpdate support complex insert grammar
> 
>
> Key: FLINK-10851
> URL: https://issues.apache.org/jira/browse/FLINK-10851
> Project: Flink
>  Issue Type: Bug
>Reporter: frank wang
>Priority: Major
>  Labels: pull-request-available
>
> my code is
> {{tableEnv.sqlUpdate("insert into kafka.sdkafka.product_4 select filedName1, 
> filedName2 from kafka.sdkafka.order_4");}}
> but Flink gives me an error saying "No table was registered under the
> name kafka".
> I modified the code, and it is OK now.
> TableEnvironment.scala
> {code:java}
> def sqlUpdate(stmt: String, config: QueryConfig): Unit = {
>   val planner = new FlinkPlannerImpl(getFrameworkConfig, getPlanner, 
> getTypeFactory)
>   // parse the sql query
>   val parsed = planner.parse(stmt)
>   parsed match {
> case insert: SqlInsert =>
>   // validate the SQL query
>   val query = insert.getSource
>   val validatedQuery = planner.validate(query)
>   // get query result as Table
>   val queryResult = new Table(this, 
> LogicalRelNode(planner.rel(validatedQuery).rel))
>   // get name of sink table
>   val targetTableName = 
> insert.getTargetTable.asInstanceOf[SqlIdentifier].names.get(0)
>   // insert query result into sink table
>   insertInto(queryResult, targetTableName, config)
> case _ =>
>   throw new TableException(
> "Unsupported SQL query! sqlUpdate() only accepts SQL statements of 
> type INSERT.")
>   }
> }
> {code}
> should be modified to this ({{names.get(0)}} returns only the first segment of
> the qualified identifier {{kafka.sdkafka.product_4}}, i.e. {{kafka}}, so the
> full identifier must be used instead):
> {code:java}
> def sqlUpdate(stmt: String, config: QueryConfig): Unit = {
>   val planner = new FlinkPlannerImpl(getFrameworkConfig, getPlanner, 
> getTypeFactory)
>   // parse the sql query
>   val parsed = planner.parse(stmt)
>   parsed match {
> case insert: SqlInsert =>
>   // validate the SQL query
>   val query = insert.getSource
>   val validatedQuery = planner.validate(query)
>   // get query result as Table
>   val queryResult = new Table(this, 
> LogicalRelNode(planner.rel(validatedQuery).rel))
>   // get name of sink table
>   //val targetTableName = 
> insert.getTargetTable.asInstanceOf[SqlIdentifier].names.get(0)
>   val targetTableName = insert.getTargetTable.toString
>   // insert query result into sink table
>   insertInto(queryResult, targetTableName, config)
> case _ =>
>   throw new TableException(
> "Unsupported SQL query! sqlUpdate() only accepts SQL statements of 
> type INSERT.")
>   }
> }
> {code}
>  
> I hope this can be accepted, thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-12031) the registerFactory method of TypeExtractor Should not be private

2021-07-06 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17376231#comment-17376231
 ] 

frank wang commented on FLINK-12031:


[~twalthr] I can help solve this problem; can you assign it to me?
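
For context, since {{registerFactory}} is private (see the quoted code below), the supported way to attach a custom factory today is the {{@TypeInfo}} annotation on the type itself; a minimal sketch with made-up type names:

{code:java}
import java.lang.reflect.Type;
import java.util.Map;

import org.apache.flink.api.common.typeinfo.TypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.GenericTypeInfo;

@TypeInfo(MyType.MyTypeInfoFactory.class)
public class MyType {
    public static class MyTypeInfoFactory extends TypeInfoFactory<MyType> {
        @Override
        public TypeInformation<MyType> createTypeInfo(
                Type t, Map<String, TypeInformation<?>> genericParameters) {
            // Illustrative only; a real factory would build a proper TypeInformation.
            return new GenericTypeInfo<>(MyType.class);
        }
    }
}
{code}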

> the registerFactory method of TypeExtractor  Should not be private
> --
>
> Key: FLINK-12031
> URL: https://issues.apache.org/jira/browse/FLINK-12031
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Reporter: frank wang
>Priority: Minor
>
> [https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java]
> {code:java}
> /**
>  * Registers a type information factory globally for a certain type. Every 
> following type extraction
>  * operation will use the provided factory for this type. The factory will 
> have highest precedence
>  * for this type. In a hierarchy of types the registered factory has higher 
> precedence than annotations
>  * at the same level but lower precedence than factories defined down the 
> hierarchy.
>  *
>  * @param t type for which a new factory is registered
>  * @param factory type information factory that will produce {@link 
> TypeInformation}
>  */
> private static void registerFactory(Type t, Class<? extends TypeInfoFactory> factory) {
>Preconditions.checkNotNull(t, "Type parameter must not be null.");
>Preconditions.checkNotNull(factory, "Factory parameter must not be null.");
>if (!TypeInfoFactory.class.isAssignableFrom(factory)) {
>   throw new IllegalArgumentException("Class is not a TypeInfoFactory.");
>}
>if (registeredTypeInfoFactories.containsKey(t)) {
>   throw new InvalidTypesException("A TypeInfoFactory for type '" + t + "' 
> is already registered.");
>}
>registeredTypeInfoFactories.put(t, factory);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2021-07-06 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17376208#comment-17376208
 ] 

frank wang edited comment on FLINK-14055 at 7/7/21, 3:55 AM:
-

Previously, our team implemented a custom class loader for functions, supporting loading from HDFS as well as locally; maybe I can join this work and help out. [~zhouqi] [~jark] [~ZhenqiuHuang]


was (Author: frank wang):
Previously, our team implemented a custom class loader for functions, supporting loading from HDFS as well as locally; maybe I can join this work and help out. [~zhouqi] [~jark]

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Major
>  Labels: auto-unassigned, sprint
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2021-07-06 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17376208#comment-17376208
 ] 

frank wang edited comment on FLINK-14055 at 7/7/21, 3:53 AM:
-

Previously, our team implemented a custom class loader for functions, supporting loading from HDFS as well as locally; maybe I can join this work and help out. [~zhouqi] [~jark]


was (Author: frank wang):
Previously, our team implemented a custom class loader for functions, supporting loading from HDFS as well as locally; maybe I can join this work and help out.

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Major
>  Labels: auto-unassigned, sprint
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14055) Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"

2021-07-06 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17376208#comment-17376208
 ] 

frank wang commented on FLINK-14055:


Previously, our team implemented a custom class loader for functions, supporting loading from HDFS as well as locally; maybe I can join this work and help out.

> Add advanced function DDL syntax "USING JAR/FILE/ACHIVE"
> 
>
> Key: FLINK-14055
> URL: https://issues.apache.org/jira/browse/FLINK-14055
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Major
>  Labels: auto-unassigned, sprint
>
> As FLINK-7151 adds basic function DDL to Flink, this ticket is to support 
> dynamically loading functions from external source in function DDL with 
> advanced syntax like 
>  
> {code:java}
> CREATE FUNCTION func_name as class_name USING JAR/FILE/ACHIEVE 'xxx' [, 
> JAR/FILE/ACHIEVE 'yyy'] ;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-21 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17366704#comment-17366704
 ] 

frank wang commented on FLINK-22735:


I tested the code and found that the code that finally goes wrong is the following:

{code:java}
public class CollectResultFetcher {

    private static final int DEFAULT_RETRY_MILLIS = 100;

    private void sleepBeforeRetry() {
        if (retryMillis <= 0) {
            return;
        }

        try {
            // TODO a more proper retry strategy?
            Thread.sleep(retryMillis);
        } catch (InterruptedException e) {
            LOG.warn("Interrupted when sleeping before a retry", e);
        }
    }
}
{code}

retryMillis = 100; why is the interval fixed at 100 ms? Can we make it configurable, or reduce it?
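
If the interval were to change, one option is a configurable, capped backoff instead of the fixed 100 ms; an illustrative sketch only, not the actual Flink implementation:

{code:java}
/** Illustrative sketch: exponential backoff for the retry sleep. */
class BackoffRetrySketch {
    private long retryMillis = 100; // initial interval, currently fixed
    private static final long MAX_RETRY_MILLIS = 1_000;

    void sleepBeforeRetry() throws InterruptedException {
        if (retryMillis <= 0) {
            return;
        }
        Thread.sleep(retryMillis);
        // Double the interval on each retry, up to a cap.
        retryMillis = Math.min(retryMillis * 2, MAX_RETRY_MILLIS);
    }
}
{code}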

 

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 120000 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-15 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17364101#comment-17364101
 ] 

frank wang commented on FLINK-22735:


I checked the build tests; indeed, the problem appears on 1.12. Sorry, I will check out 1.12 and try that.

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 120000 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-15 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17363655#comment-17363655
 ] 

frank wang commented on FLINK-22735:


[~dwysakowicz] so far I haven't reproduced it locally. I will continue to try locally and ask [~lirui] for help; maybe we can increase the timeout.
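
If raising the timeout is the route taken, a sketch of what that looks like, assuming the test uses a JUnit 4 per-test timeout as the FailOnTimeout frames in the trace suggest (the new value is arbitrary):

{code:java}
import org.junit.Test;

public class TimeoutSketch {
    // Assumption: the ITCase uses a JUnit 4 per-test timeout.
    @Test(timeout = 240_000) // raised from the 120 s that timed out
    public void testStreamPartitionReadByCreateTime() throws Exception {
        // test body elided
    }
}
{code}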

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 120000 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-15 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17363655#comment-17363655
 ] 

frank wang edited comment on FLINK-22735 at 6/15/21, 1:53 PM:
--

[~dwysakowicz] so far I haven't reproduced it locally. I will continue to try locally and ask [~lirui] for help; maybe we can increase the timeout to avoid the problem.


was (Author: frank wang):
[~dwysakowicz] so far I haven't reproduced it locally. I will continue to try locally and ask [~lirui] for help; maybe we can increase the timeout.

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 120000 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-10 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17361360#comment-17361360
 ] 

frank wang edited comment on FLINK-22735 at 6/11/21, 3:19 AM:
--

[~lirui] I checked the code and found that in [FLINK-22890|https://issues.apache.org/jira/browse/FLINK-22890] the code you modified has already been merged to master; with that change, my test of the code shows no problem.


was (Author: frank wang):
[~lirui] I checked the code and found [FLINK-22890|https://issues.apache.org/jira/browse/FLINK-22890]

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 120000 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-10 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17361360#comment-17361360
 ] 

frank wang commented on FLINK-22735:


[~lirui] I checked the code and found [FLINK-22890|https://issues.apache.org/jira/browse/FLINK-22890]

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 120000 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-10 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360973#comment-17360973
 ] 

frank wang edited comment on FLINK-22735 at 6/10/21, 2:37 PM:
--

[~lirui] I tested the two classes HiveTableSourceITCase and HiveTableSinkITCase and have not found any problem; can you help me? Maybe it is an operation mistake on my side, thanks.

!wx20210610-222...@2x.png!

!wx20210610-222...@2x.png!

 


was (Author: frank wang):
[~lirui] I tested the two classes HiveTableSourceITCase and HiveTableSinkITCase and have not found any problem; can you help me? Maybe it is an operation mistake on my side, thanks.

!wx20210610-222...@2x.png!

!wx20210610-222...@2x.png!

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 120000 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-10 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360973#comment-17360973
 ] 

frank wang edited comment on FLINK-22735 at 6/10/21, 2:36 PM:
--

[~lirui] I tested the two classes HiveTableSourceITCase and HiveTableSinkITCase and have not found any problem; can you help me? Maybe it is an operation mistake on my side, thanks.

!wx20210610-222...@2x.png!

!wx20210610-222...@2x.png!


was (Author: frank wang):
!wx20210610-222...@2x.png! [~lirui] I tested the two classes HiveTableSourceITCase and HiveTableSinkITCase and have not found any problem; can you help me? Maybe it is an operation mistake on my side, thanks. !wx20210610-222...@2x.png!

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 12 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-10 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360973#comment-17360973
 ] 

frank wang edited comment on FLINK-22735 at 6/10/21, 2:36 PM:
--

!wx20210610-222...@2x.png![~lirui] I tested the two classes HiveTableSourceITCase and HiveTableSinkITCase and could not find any problem. Can you help me? Maybe I made a mistake somewhere, thanks. !wx20210610-222...@2x.png!


was (Author: frank wang):
[~lirui] I tested the two classes HiveTableSourceITCase and HiveTableSinkITCase and could not find any problem. Can you help me? Maybe I made a mistake somewhere, thanks. !wx20210610-222...@2x.png!

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 12 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-10 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360973#comment-17360973
 ] 

frank wang commented on FLINK-22735:


[~lirui] I tested the two classes HiveTableSourceITCase and HiveTableSinkITCase and could not find any problem. Can you help me? Maybe I made a mistake somewhere, thanks. !wx20210610-222...@2x.png!

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 12 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-10 Thread frank wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang updated FLINK-22735:
---
Attachment: wx20210610-222...@2x.png
wx20210610-222...@2x.png

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 12 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of times out

2021-06-08 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359720#comment-17359720
 ] 

frank wang commented on FLINK-22735:


I can help solve this problem; please assign it to me, [~maguowei] [~xintongsong].

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354&l=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 12 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22443) can not be execute an extreme long sql under batch mode

2021-05-22 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17349715#comment-17349715
 ] 

frank wang commented on FLINK-22443:


I have initialized some table schemas; is this OK, [~macdoor615]?
{code:java}
--source
create table raw_restapi_rotated.r_biz_product
(
  code varchar(50),
  type varchar(20)
);

create table raw_restapi_rotated.p_hcd
(
  dt timestamp,
  businessplatform varchar(50),
  coltime date,
  indicatornumber varchar(50),
  indicatorvalue bigint
);
{code}

> can not be execute an extreme long sql under batch mode
> ---
>
> Key: FLINK-22443
> URL: https://issues.apache.org/jira/browse/FLINK-22443
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
> Environment: execute command
>  
> {code:java}
> bin/sql-client.sh embedded -d conf/sql-client-batch.yaml 
> {code}
> content of conf/sql-client-batch.yaml
>  
> {code:java}
> catalogs:
> - name: bnpmphive
>   type: hive
>   hive-conf-dir: /home/gum/hive/conf
>   hive-version: 3.1.2
> execution:
>   planner: blink
>   type: batch
>   #type: streaming
>   result-mode: table
>   parallelism: 4
>   max-parallelism: 2000
>   current-catalog: bnpmphive
>   #current-database: snmpprobe 
> #configuration:
> #  table.sql-dialect: hivemodules:
>- name: core
>  type: core
>- name: myhive
>  type: hivedeployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
> {code}
>  
>Reporter: macdoor615
>Priority: Blocker
>  Labels: stale-blocker, stale-critical
> Attachments: flink-gum-taskexecutor-8-hb3-prod-hadoop-002.log.4.zip
>
>
> 1. execute an extreme long sql under batch mode
>  
> {code:java}
> select
> 'CD' product_name,
> r.code business_platform,
> 5 statisticperiod,
> cast('2021-03-24 00:00:00' as timestamp) coltime,
> cast(r1.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_2,
> cast(r2.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_7,
> cast(r3.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_5,
> cast(r4.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_6,
> cast(r5.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00029,
> cast(r6.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00028,
> cast(r7.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00015,
> cast(r8.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00014,
> cast(r9.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00011,
> cast(r10.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00010,
> cast(r11.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00013,
> cast(r12.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00012,
> cast(r13.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00027,
> cast(r14.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00026,
> cast(r15.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00046,
> cast(r16.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00047,
> cast(r17.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00049,
> cast(r18.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00048,
> cast(r19.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00024,
> cast(r20.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00025,
> cast(r21.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00022,
> cast(r22.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00023,
> cast(r23.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00054,
> cast(r24.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00055,
> cast(r25.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00033,
> cast(r26.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00032,
> cast(r27.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00053,
> cast(r28.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00052,
> cast(r29.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00051,
> cast(r30.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00050,
> cast(r31.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00043,
> cast(r32.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00042,
> cast(r33.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00017,
> cast(r34.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00016,
> cast(r35.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_3,
> cast(r36.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00045,
> cast(r37.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00044,
> cast(r38.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00038,
> cast(r39.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00039,
> cast(r40.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00037,
> cast(r41.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00036,
> cast(r42.indicatorvalue as double) 

[jira] [Comment Edited] (FLINK-22443) can not be execute an extreme long sql under batch mode

2021-05-22 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17349714#comment-17349714
 ] 

frank wang edited comment on FLINK-22443 at 5/22/21, 11:17 AM:
---

Could you share your source table schema? Thanks, [~macdoor615]


was (Author: frank wang):
Could you share your source table schema? Thanks.

> can not be execute an extreme long sql under batch mode
> ---
>
> Key: FLINK-22443
> URL: https://issues.apache.org/jira/browse/FLINK-22443
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
> Environment: execute command
>  
> {code:java}
> bin/sql-client.sh embedded -d conf/sql-client-batch.yaml 
> {code}
> content of conf/sql-client-batch.yaml
>  
> {code:java}
> catalogs:
> - name: bnpmphive
>   type: hive
>   hive-conf-dir: /home/gum/hive/conf
>   hive-version: 3.1.2
> execution:
>   planner: blink
>   type: batch
>   #type: streaming
>   result-mode: table
>   parallelism: 4
>   max-parallelism: 2000
>   current-catalog: bnpmphive
>   #current-database: snmpprobe 
> #configuration:
> #  table.sql-dialect: hivemodules:
>- name: core
>  type: core
>- name: myhive
>  type: hivedeployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
> {code}
>  
>Reporter: macdoor615
>Priority: Blocker
>  Labels: stale-blocker, stale-critical
> Attachments: flink-gum-taskexecutor-8-hb3-prod-hadoop-002.log.4.zip
>
>
> 1. execute an extreme long sql under batch mode
>  
> {code:java}
> select
> 'CD' product_name,
> r.code business_platform,
> 5 statisticperiod,
> cast('2021-03-24 00:00:00' as timestamp) coltime,
> cast(r1.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_2,
> cast(r2.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_7,
> cast(r3.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_5,
> cast(r4.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_6,
> cast(r5.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00029,
> cast(r6.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00028,
> cast(r7.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00015,
> cast(r8.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00014,
> cast(r9.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00011,
> cast(r10.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00010,
> cast(r11.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00013,
> cast(r12.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00012,
> cast(r13.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00027,
> cast(r14.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00026,
> cast(r15.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00046,
> cast(r16.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00047,
> cast(r17.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00049,
> cast(r18.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00048,
> cast(r19.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00024,
> cast(r20.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00025,
> cast(r21.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00022,
> cast(r22.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00023,
> cast(r23.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00054,
> cast(r24.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00055,
> cast(r25.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00033,
> cast(r26.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00032,
> cast(r27.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00053,
> cast(r28.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00052,
> cast(r29.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00051,
> cast(r30.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00050,
> cast(r31.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00043,
> cast(r32.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00042,
> cast(r33.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00017,
> cast(r34.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00016,
> cast(r35.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_3,
> cast(r36.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00045,
> cast(r37.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00044,
> cast(r38.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00038,
> cast(r39.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00039,
> cast(r40.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00037,
> cast(r41.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00036,
> cast(r42.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00040,
> cast(r43.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00041,
> cast(r44.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00034,
> cast(r4

[jira] [Commented] (FLINK-22443) can not be execute an extreme long sql under batch mode

2021-05-22 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17349714#comment-17349714
 ] 

frank wang commented on FLINK-22443:


Could you share your source table schema? Thanks.

> can not be execute an extreme long sql under batch mode
> ---
>
> Key: FLINK-22443
> URL: https://issues.apache.org/jira/browse/FLINK-22443
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
> Environment: execute command
>  
> {code:java}
> bin/sql-client.sh embedded -d conf/sql-client-batch.yaml 
> {code}
> content of conf/sql-client-batch.yaml
>  
> {code:java}
> catalogs:
> - name: bnpmphive
>   type: hive
>   hive-conf-dir: /home/gum/hive/conf
>   hive-version: 3.1.2
> execution:
>   planner: blink
>   type: batch
>   #type: streaming
>   result-mode: table
>   parallelism: 4
>   max-parallelism: 2000
>   current-catalog: bnpmphive
>   #current-database: snmpprobe 
> #configuration:
> #  table.sql-dialect: hivemodules:
>- name: core
>  type: core
>- name: myhive
>  type: hivedeployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
> {code}
>  
>Reporter: macdoor615
>Priority: Blocker
>  Labels: stale-blocker, stale-critical
> Attachments: flink-gum-taskexecutor-8-hb3-prod-hadoop-002.log.4.zip
>
>
> 1. execute an extreme long sql under batch mode
>  
> {code:java}
> select
> 'CD' product_name,
> r.code business_platform,
> 5 statisticperiod,
> cast('2021-03-24 00:00:00' as timestamp) coltime,
> cast(r1.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_2,
> cast(r2.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_7,
> cast(r3.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_5,
> cast(r4.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_6,
> cast(r5.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00029,
> cast(r6.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00028,
> cast(r7.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00015,
> cast(r8.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00014,
> cast(r9.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00011,
> cast(r10.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00010,
> cast(r11.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00013,
> cast(r12.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00012,
> cast(r13.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00027,
> cast(r14.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00026,
> cast(r15.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00046,
> cast(r16.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00047,
> cast(r17.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00049,
> cast(r18.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00048,
> cast(r19.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00024,
> cast(r20.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00025,
> cast(r21.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00022,
> cast(r22.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00023,
> cast(r23.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00054,
> cast(r24.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00055,
> cast(r25.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00033,
> cast(r26.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00032,
> cast(r27.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00053,
> cast(r28.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00052,
> cast(r29.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00051,
> cast(r30.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00050,
> cast(r31.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00043,
> cast(r32.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00042,
> cast(r33.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00017,
> cast(r34.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00016,
> cast(r35.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_3,
> cast(r36.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00045,
> cast(r37.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00044,
> cast(r38.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00038,
> cast(r39.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00039,
> cast(r40.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00037,
> cast(r41.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00036,
> cast(r42.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00040,
> cast(r43.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00041,
> cast(r44.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00034,
> cast(r45.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00035,
> cast(r46.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00030,
> cast(r47.i

[jira] [Comment Edited] (FLINK-22443) can not be execute an extreme long sql under batch mode

2021-05-22 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17349714#comment-17349714
 ] 

frank wang edited comment on FLINK-22443 at 5/22/21, 11:14 AM:
---

Could you share your source table schema? Thanks.


was (Author: frank wang):
Could you share your source table schema? Thanks.

> can not be execute an extreme long sql under batch mode
> ---
>
> Key: FLINK-22443
> URL: https://issues.apache.org/jira/browse/FLINK-22443
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
> Environment: execute command
>  
> {code:java}
> bin/sql-client.sh embedded -d conf/sql-client-batch.yaml 
> {code}
> content of conf/sql-client-batch.yaml
>  
> {code:java}
> catalogs:
> - name: bnpmphive
>   type: hive
>   hive-conf-dir: /home/gum/hive/conf
>   hive-version: 3.1.2
> execution:
>   planner: blink
>   type: batch
>   #type: streaming
>   result-mode: table
>   parallelism: 4
>   max-parallelism: 2000
>   current-catalog: bnpmphive
>   #current-database: snmpprobe 
> #configuration:
> #  table.sql-dialect: hivemodules:
>- name: core
>  type: core
>- name: myhive
>  type: hivedeployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
> {code}
>  
>Reporter: macdoor615
>Priority: Blocker
>  Labels: stale-blocker, stale-critical
> Attachments: flink-gum-taskexecutor-8-hb3-prod-hadoop-002.log.4.zip
>
>
> 1. execute an extreme long sql under batch mode
>  
> {code:java}
> select
> 'CD' product_name,
> r.code business_platform,
> 5 statisticperiod,
> cast('2021-03-24 00:00:00' as timestamp) coltime,
> cast(r1.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_2,
> cast(r2.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_7,
> cast(r3.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_5,
> cast(r4.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_6,
> cast(r5.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00029,
> cast(r6.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00028,
> cast(r7.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00015,
> cast(r8.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00014,
> cast(r9.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00011,
> cast(r10.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00010,
> cast(r11.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00013,
> cast(r12.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00012,
> cast(r13.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00027,
> cast(r14.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00026,
> cast(r15.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00046,
> cast(r16.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00047,
> cast(r17.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00049,
> cast(r18.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00048,
> cast(r19.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00024,
> cast(r20.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00025,
> cast(r21.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00022,
> cast(r22.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00023,
> cast(r23.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00054,
> cast(r24.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00055,
> cast(r25.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00033,
> cast(r26.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00032,
> cast(r27.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00053,
> cast(r28.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00052,
> cast(r29.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00051,
> cast(r30.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00050,
> cast(r31.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00043,
> cast(r32.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00042,
> cast(r33.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00017,
> cast(r34.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00016,
> cast(r35.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_3,
> cast(r36.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00045,
> cast(r37.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00044,
> cast(r38.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00038,
> cast(r39.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00039,
> cast(r40.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00037,
> cast(r41.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00036,
> cast(r42.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00040,
> cast(r43.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00041,
> cast(r44.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00034,
> cast(r45.indicatorval

[jira] [Commented] (FLINK-13400) Remove Hive and Hadoop dependencies from SQL Client

2021-05-22 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17349708#comment-17349708
 ] 

frank wang commented on FLINK-13400:


Does this issue need anything else done? If so, maybe I can help with something.

Anyway, I found one Hadoop/Hive dependency that has not been changed from {{provided}} to {{test}} scope:
{code:xml}
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>${hivemetastore.hadoop.version}</version>
    <scope>provided</scope>
</dependency>
{code}
If it needs to be modified, maybe you can assign it to me, [~twalthr]
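A minimal sketch of that change, assuming the goal is simply to align this dependency with the others already moved to test scope (not a confirmed patch):
{code:xml}
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>${hivemetastore.hadoop.version}</version>
    <!-- scope switched from provided to test, matching the other Hive/Hadoop dependencies -->
    <scope>test</scope>
</dependency>
{code}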

> Remove Hive and Hadoop dependencies from SQL Client
> ---
>
> Key: FLINK-13400
> URL: https://issues.apache.org/jira/browse/FLINK-13400
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Timo Walther
>Priority: Critical
>  Labels: auto-unassigned, stale-critical
> Fix For: 1.14.0
>
>
> 340/550 lines in the SQL Client {{pom.xml}} are just around Hive and Hadoop 
> dependencies.  Hive has nothing to do with the SQL Client and it will be hard 
> to maintain the long list of  exclusion there. Some dependencies are even in 
> a {{provided}} scope and not {{test}} scope.
> We should remove all dependencies on Hive/Hadoop and replace catalog-related 
> tests by a testing catalog. Similar to how we tests source/sinks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22443) can not be execute an extreme long sql under batch mode

2021-05-21 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17349599#comment-17349599
 ] 

frank wang commented on FLINK-22443:


Can you share your table schema? It may help me build a mock test to solve this problem. Thanks, [~macdoor615]

> can not be execute an extreme long sql under batch mode
> ---
>
> Key: FLINK-22443
> URL: https://issues.apache.org/jira/browse/FLINK-22443
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
> Environment: execute command
>  
> {code:java}
> bin/sql-client.sh embedded -d conf/sql-client-batch.yaml 
> {code}
> content of conf/sql-client-batch.yaml
>  
> {code:java}
> catalogs:
> - name: bnpmphive
>   type: hive
>   hive-conf-dir: /home/gum/hive/conf
>   hive-version: 3.1.2
> execution:
>   planner: blink
>   type: batch
>   #type: streaming
>   result-mode: table
>   parallelism: 4
>   max-parallelism: 2000
>   current-catalog: bnpmphive
>   #current-database: snmpprobe 
> #configuration:
> #  table.sql-dialect: hivemodules:
>- name: core
>  type: core
>- name: myhive
>  type: hivedeployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
> {code}
>  
>Reporter: macdoor615
>Priority: Blocker
>  Labels: stale-blocker, stale-critical
> Attachments: flink-gum-taskexecutor-8-hb3-prod-hadoop-002.log.4.zip
>
>
> 1. execute an extreme long sql under batch mode
>  
> {code:java}
> select
> 'CD' product_name,
> r.code business_platform,
> 5 statisticperiod,
> cast('2021-03-24 00:00:00' as timestamp) coltime,
> cast(r1.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_2,
> cast(r2.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_7,
> cast(r3.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_5,
> cast(r4.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_6,
> cast(r5.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00029,
> cast(r6.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00028,
> cast(r7.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00015,
> cast(r8.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00014,
> cast(r9.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00011,
> cast(r10.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00010,
> cast(r11.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00013,
> cast(r12.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00012,
> cast(r13.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00027,
> cast(r14.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00026,
> cast(r15.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00046,
> cast(r16.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00047,
> cast(r17.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00049,
> cast(r18.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00048,
> cast(r19.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00024,
> cast(r20.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00025,
> cast(r21.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00022,
> cast(r22.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00023,
> cast(r23.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00054,
> cast(r24.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00055,
> cast(r25.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00033,
> cast(r26.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00032,
> cast(r27.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00053,
> cast(r28.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00052,
> cast(r29.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00051,
> cast(r30.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00050,
> cast(r31.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00043,
> cast(r32.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00042,
> cast(r33.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00017,
> cast(r34.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00016,
> cast(r35.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_3,
> cast(r36.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00045,
> cast(r37.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00044,
> cast(r38.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00038,
> cast(r39.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00039,
> cast(r40.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00037,
> cast(r41.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00036,
> cast(r42.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00040,
> cast(r43.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00041,
> cast(r44.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00034,
> cast(r45.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00035,
> cast(r46.indi

[jira] [Commented] (FLINK-8093) flink job fail because of kafka producer create fail of "javax.management.InstanceAlreadyExistsException"

2021-05-21 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-8093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17349590#comment-17349590
 ] 

frank wang commented on FLINK-8093:
---

Can we set a client id ourselves when the user has not set one? If so, maybe I can solve this problem, [~NicoK]
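For illustration, a minimal sketch of that idea (the factory class and naming scheme here are hypothetical; only {{ProducerConfig.CLIENT_ID_CONFIG}} is Kafka's real setting): pick a unique client id before constructing the producer so the JMX MBean names cannot collide within one JVM.
{code:java}
import java.util.Properties;
import java.util.UUID;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class UniqueClientIdProducerFactory {
    public static KafkaProducer<byte[], byte[]> create(Properties userProps) {
        Properties props = new Properties();
        props.putAll(userProps);
        // If the user did not configure client.id, generate a unique one so the MBean
        // "kafka.producer:type=producer-metrics,client-id=..." is never registered twice.
        if (!props.containsKey(ProducerConfig.CLIENT_ID_CONFIG)) {
            props.setProperty(
                    ProducerConfig.CLIENT_ID_CONFIG, "producer-" + UUID.randomUUID());
        }
        return new KafkaProducer<>(
                props, new ByteArraySerializer(), new ByteArraySerializer());
    }
}
{code}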

> flink job fail because of kafka producer create fail of 
> "javax.management.InstanceAlreadyExistsException"
> -
>
> Key: FLINK-8093
> URL: https://issues.apache.org/jira/browse/FLINK-8093
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.3.2, 1.10.0
> Environment: flink 1.3.2, kafka 0.9.1
>Reporter: dongtingting
>Priority: Critical
>  Labels: auto-unassigned, stale-critical, usability
>
> one taskmanager has multiple taskslot, one task fail because of create 
> kafkaProducer fail,the reason for create kafkaProducer fail is 
> “javax.management.InstanceAlreadyExistsException: 
> kafka.producer:type=producer-metrics,client-id=producer-3”。 the detail trace 
> is :
> {noformat}
> 2017-11-04 19:41:23,281 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: Custom Source -> Filter -> Map -> Filter -> Sink: 
> dp_client_**_log (7/80) (99551f3f892232d7df5eb9060fa9940c) switched from 
> RUNNING to FAILED.
> org.apache.kafka.common.KafkaException: Failed to construct kafka producer
> at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:321)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:181)
> at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.getKafkaProducer(FlinkKafkaProducerBase.java:202)
> at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.open(FlinkKafkaProducerBase.java:212)
> at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:111)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:375)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.kafka.common.KafkaException: Error registering mbean 
> kafka.producer:type=producer-metrics,client-id=producer-3
> at 
> org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:159)
> at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:77)
> at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
> at org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:255)
> at org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:239)
> at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.registerMetrics(RecordAccumulator.java:137)
> at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.(RecordAccumulator.java:111)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:261)
> ... 9 more
> Caused by: javax.management.InstanceAlreadyExistsException: 
> kafka.producer:type=producer-metrics,client-id=producer-3
> at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at 
> org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:157)
> ... 16 more
> {noformat}
> I doubt that task in different taskslot of one taskmanager use different 
> classloader, and taskid may be  the same in one process。 So this lead to 
> create kafkaProducer fail in one taskManager。 
> Does anybody encountered the same problem? 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-22644) Translate "Native Kubernetes" page into Chinese

2021-05-12 Thread frank wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang updated FLINK-22644:
---
Comment: was deleted

(was: Hi, I can translate it. Can you assign it to me?)

> Translate "Native Kubernetes" page into Chinese
> ---
>
> Key: FLINK-22644
> URL: https://issues.apache.org/jira/browse/FLINK-22644
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Yuchen Cheng
>Priority: Major
>  Labels: pull-request-available
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/native_kubernetes/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22644) Translate "Native Kubernetes" page into Chinese

2021-05-12 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17343116#comment-17343116
 ] 

frank wang commented on FLINK-22644:


Hi, I can translate it. Can you assign it to me?

> Translate "Native Kubernetes" page into Chinese
> ---
>
> Key: FLINK-22644
> URL: https://issues.apache.org/jira/browse/FLINK-22644
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Yuchen Cheng
>Priority: Major
>  Labels: pull-request-available
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/native_kubernetes/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22525) The zone id in exception message should be GMT+08:00 instead of GMT+8:00

2021-04-29 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17335375#comment-17335375
 ] 

frank wang commented on FLINK-22525:


I checked the code; maybe we should remove some of the validation code.

 
{code:java}
// Flink code
Instant instant = null;
if (timestampField instanceof java.time.Instant) {
    instant = ((Instant) timestampField);
} else if (timestampField instanceof java.sql.Timestamp) {
    Timestamp timestamp = ((Timestamp) timestampField);
    // conversion between java.sql.Timestamp and TIMESTAMP_WITH_LOCAL_TIME_ZONE
    instant =
            TimestampData.fromEpochMillis(
                            timestamp.getTime(), timestamp.getNanos() % 1000_000)
                    .toInstant();
} else if (timestampField instanceof TimestampData) {
    instant = ((TimestampData) timestampField).toInstant();
} else if (timestampField instanceof Integer) {
    instant = Instant.ofEpochSecond((Integer) timestampField);
} else if (timestampField instanceof Long) {
    instant = Instant.ofEpochMilli((Long) timestampField);
}
if (instant != null) {
    return timestampToString(
            instant.atZone(sessionTimeZone).toLocalDateTime(),
            getPrecision(fieldType));
} else {
    return timestampField;
}
{code}
I tested the following code and found that it prints the time I expect:
{code:java}
// test code
Instant instant = Instant.now();
System.out.println(instant.atZone(ZoneId.of("UTC+8")).toLocalDateTime().toString());
{code}
So can we remove some of the validation code, as follows?
{code:java}
// Flink time zone validation code
private void validateTimeZone(String zone) {
    final String zoneId = zone.toUpperCase();
    if (zoneId.startsWith("UTC+")
            || zoneId.startsWith("UTC-")
            || SHORT_IDS.containsKey(zoneId)) {
        throw new IllegalArgumentException(
                String.format(
                        "The supported Zone ID is either a full name such as 'America/Los_Angeles',"
                                + " or a custom timezone id such as 'GMT-8:00',"
                                + " but configured Zone ID is '%s'.",
                        zone));
    }
}
{code}
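For comparison, a minimal sketch of a relaxed check that simply delegates to {{java.time}} (a hypothetical replacement, not the actual Flink fix): {{ZoneId.of}} already understands offset-style ids such as 'UTC+8'.
{code:java}
import java.time.DateTimeException;
import java.time.ZoneId;

public class ZoneIdCheck {
    // Hypothetical relaxed validation: accept any id that java.time can parse,
    // including offset-style ids such as "UTC+8" or "GMT+08:00".
    static ZoneId parseZone(String zone) {
        try {
            return ZoneId.of(zone, ZoneId.SHORT_IDS);
        } catch (DateTimeException e) {
            throw new IllegalArgumentException("Unsupported Zone ID '" + zone + "'", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseZone("UTC+8")); // prints UTC+08:00
    }
}
{code}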
 

> The zone id in exception message should be GMT+08:00 instead of GMT+8:00
> 
>
> Key: FLINK-22525
> URL: https://issues.apache.org/jira/browse/FLINK-22525
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.13.0
>Reporter: Leonard Xu
>Priority: Minor
> Fix For: 1.14.0, 1.13.1
>
>
> {code:java}
> Flink SQL> SET table.local-time-zone=UTC+3;
> Flink SQL> select current_row_timestamp();
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.IllegalArgumentException: The supported Zone ID is either a full 
> name such as 'America/Los_Angeles', or a custom timezone id such as 
> 'GMT-8:00', but configured Zone ID is 'UTC+3'.
> {code}
> The valid zoned should  be 'GMT-08:00'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19777) java.lang.NullPointerException

2020-10-22 Thread frank wang (Jira)
frank wang created FLINK-19777:
--

 Summary: java.lang.NullPointerException
 Key: FLINK-19777
 URL: https://issues.apache.org/jira/browse/FLINK-19777
 Project: Flink
  Issue Type: Bug
 Environment: jdk 1.8.0_262
flink 1.11.1
Reporter: frank wang


I use Flink SQL to run a job; the SQL and metadata are:

1> source: kafka
create table metric_source_window_table (
  `metricName` String,
  `namespace` String,
  `timestamp` BIGINT,
  `doubleValue` DOUBLE,
  `longValue` BIGINT,
  `metricsValue` String,
  `tags` MAP<String, String>,
  `meta` MAP<String, String>,
  t as TO_TIMESTAMP(FROM_UNIXTIME(`timestamp`/1000, 'yyyy-MM-dd HH:mm:ss')),
  WATERMARK FOR t AS t
) WITH (
  'connector' = 'kafka',
  'topic' = 'ai-platform',
  'properties.bootstrap.servers' = 'xxx',
  'properties.group.id' = 'metricgroup',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json',
  'json.fail-on-missing-field' = 'false',
  'json.ignore-parse-errors' = 'true')

2> sink to ClickHouse (the clickhouse-connector was developed by ourselves)

create table flink_metric_window_table (
  `timestamp` BIGINT,
  `longValue` BIGINT,
  `metricName` String,
  `metricsValueSum` DOUBLE,
  `metricsValueMin` DOUBLE,
  `metricsValueMax` DOUBLE,
  `tag_record_id` String,
  `tag_host_ip` String,
  `tag_instance` String,
  `tag_job_name` String,
  `tag_ai_app_name` String,
  `tag_namespace` String,
  `tag_ai_type` String,
  `tag_host_name` String,
  `tag_alarm_domain` String
) WITH (
  'connector.type' = 'clickhouse',
  'connector.property-version' = '1',
  'connector.url' = 'jdbc:clickhouse://xxx:8123/dataeye',
  'connector.cluster' = 'ck_cluster',
  'connector.write.flush.max-rows' = '6000',
  'connector.write.flush.interval' = '1000',
  'connector.table' = 'flink_metric_table_all')

My SQL is:

insert into
 hive.temp_vipflink.flink_metric_window_table
select
 cast(HOP_ROWTIME(t, INTERVAL '60' SECOND, INTERVAL '15' MINUTE) AS BIGINT) AS 
`timestamps`,
 sum(COALESCE( `longValue`, 0)) AS longValue,
 metricName,
 sum(IF(IS_DIGIT(metricsValue), cast(metricsValue AS DOUBLE), 0)) AS 
metricsValueSum,
 min(IF(IS_DIGIT(metricsValue), cast(metricsValue AS DOUBLE), 0)) AS 
metricsValueMin,
 max(IF(IS_DIGIT(metricsValue), cast(metricsValue AS DOUBLE), 0)) AS 
metricsValueMax,
 tags ['record_id'],
 tags ['host_ip'],
 tags ['instance'],
 tags ['job_name'],
 tags ['ai_app_name'],
 tags ['namespace'],
 tags ['ai_type'],
 tags ['host_name'],
 tags ['alarm_domain']
from
 hive.temp_vipflink.metric_source_window_table
 group by 
 metricName,
 tags ['record_id'],
 tags ['host_ip'],
 tags ['instance'],
 tags ['job_name'],
 tags ['ai_app_name'],
 tags ['namespace'],
 tags ['ai_type'],
 tags ['host_name'],
 tags ['alarm_domain'],
 HOP(t, INTERVAL '60' SECOND, INTERVAL '15' MINUTE)

 

When I run this SQL for several hours, an exception like this appears:

[2020-10-22 20:54:52.089] [ERROR] [GroupWindowAggregate(groupBy=[metricName, 
$f1, $f2, $f3, $f4, $f5, $f6, $f7, $f8, $f9], window=[SlidingGroupWindow('w$, 
t, 90, 6)], properties=[w$start, w$end, w$rowtime, w$proctime], 
select=[metricName, $f1, $f2, $f3, $f4, $f5, $f6, $f7, $f8, $f9, SUM($f11) AS 
longValue, SUM($f12) AS metricsValueSum, MIN($f12) AS metricsValueMin, 
MAX($f12) AS metricsValueMax, start('w$) AS w$start, end('w$) AS w$end, 
rowtime('w$) AS w$rowtime, proctime('w$) AS w$proctime]) -> 
Calc(select=[CAST(CAST(w$rowtime)) AS timestamps, longValue, metricName, 
metricsValueSum, metricsValueMin, metricsValueMax, $f1 AS EXPR$6, $f2 AS 
EXPR$7, $f3 AS EXPR$8, $f4 AS EXPR$9, $f5 AS EXPR$10, $f6 AS EXPR$11, $f7 AS 
EXPR$12, $f8 AS EXPR$13, $f9 AS EXPR$14]) -> SinkConversionToTuple2 -> Sink: 
JdbcUpsertTableSink(timestamp, longValue, metricName, metricsValueSum, 
metricsValueMin, metricsValueMax, tag_record_id, tag_host_ip, tag_instance, 
tag_job_name, tag_ai_app_name, tag_namespace, tag_ai_type, tag_host_name, 
tag_alarm_domain) (23/44)] 
[org.apache.flink.streaming.runtime.tasks.StreamTask] >>> Error during disposal 
of stream operator. java.lang.NullPointerException: null at 
org.apache.flink.table.runtime.operators.window.WindowOperator.dispose(WindowOperator.java:318)
 ~[flink-table-blink_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT] at 
org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:729)
 [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT] at 
org.apache.flink.streaming.runtime.tasks.StreamTask.cleanUpInvoke(StreamTask.java:645)
 [flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT] at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:549) 
[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT] at 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721) 
[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT] at 
org.apache.flink.runtime.taskmanager.Task.run(Task.java:546) 
[flink-dist_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT] at 
java.lang.Thread.run(Thread.java:748) [?:1.8.0_262]
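The trace points at WindowOperator.dispose; a minimal sketch of the kind of defensive guard that would avoid such an NPE (the field name is hypothetical, not the actual Flink code):

{code:java}
// Hypothetical sketch: dispose() can run for a task that failed before open()
// completed, so members initialized in open() may still be null here.
@Override
public void dispose() throws Exception {
    super.dispose();
    if (windowAggregator != null) { // assumed field name for illustration
        windowAggregator.close();
    }
}
{code}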

 

Finally, this job fails, and thi

[jira] [Comment Edited] (FLINK-16198) FileUtilsTest fails on Mac OS

2020-04-28 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17094297#comment-17094297
 ] 

frank wang edited comment on FLINK-16198 at 4/28/20, 8:42 AM:
--

I tested the example.
1. If I delete some code, the test passes:

{quote}@Test
public void testCompressionOnRelativePath() throws IOException {
    final java.nio.file.Path compressDir = tmp.newFolder("compressDir").toPath();
    // delete this code
    /*
    final java.nio.file.Path relativeCompressDir =
            Paths.get(new File("").getAbsolutePath()).relativize(compressDir);
    */
    verifyDirectoryCompression(compressDir);
}{quote}

2. I think the problem is due to concurrency. In FileUtils.java, the guardIfWindows method:
{quote}private static void guardIfWindows(ThrowingConsumer<File, IOException> toRun, File file) throws IOException {
    if (!OperatingSystem.isWindows()) {
        toRun.accept(file);
    }
    else {
        // for windows, we synchronize on a global lock, to prevent concurrent delete issues
        // in the future, we may want to find either a good way of working around file visibility
        // in Windows under concurrent operations (the behavior seems completely unpredictable)
        // or make this locking more fine grained, for example on directory path prefixes
        synchronized (WINDOWS_DELETE_LOCK) {
            for (int attempt = 1; attempt <= 10; attempt++) {
                try {
                    toRun.accept(file);
                    break;
                }
                catch (AccessDeniedException e) {
                    // ah, windows...
                }

                // briefly wait and fall through the loop
                try {
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    // restore the interruption flag and error out of the method
                    Thread.currentThread().interrupt();
                    throw new IOException("operation interrupted");
                }
            }
        }
    }
}{quote}
Because there is no concurrency control on macOS or Linux, maybe we can modify the code as follows:
{quote}private static void guardIfWindows(ThrowingConsumer<File, IOException> toRun, File file) throws IOException {
    // for windows, we synchronize on a global lock, to prevent concurrent delete issues
    // in the future, we may want to find either a good way of working around file visibility
    // in Windows under concurrent operations (the behavior seems completely unpredictable)
    // or make this locking more fine grained, for example on directory path prefixes
    synchronized (WINDOWS_DELETE_LOCK) {
        for (int attempt = 1; attempt <= 10; attempt++) {
            try {
                toRun.accept(file);
                break;
            }
            catch (AccessDeniedException e) {
                // ah, windows...
            }

            // briefly wait and fall through the loop
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                // restore the interruption flag and error out of the method
                Thread.currentThread().interrupt();
                throw new IOException("operation interrupted");
            }
        }
    }
}{quote}
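For context, a small self-contained sketch of why the relativized path can misbehave on macOS (assumption: the /var -> /private/var symlink is involved):

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// On macOS, temp dirs live under /var/folders/..., and /var is a symlink to
// /private/var. Relativizing such a path against the working directory can
// produce a ../../..-style path that no longer resolves to the same location,
// which would explain the NoSuchFileException seen in this test.
public class RelativizeOnMac {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempDirectory("compressDir");
        Path relative = Paths.get("").toAbsolutePath().relativize(tmp);
        System.out.println(tmp + " -> " + relative);
    }
}
{code}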



was (Author: frank wang):
I tested the example.
1. If I delete some code, the test passes:

 @Test
public void testCompressionOnRelativePath() throws IOException {
    final java.nio.file.Path compressDir = tmp.newFolder("compressDir").toPath();
    // delete this code
    /*
    final java.nio.file.Path relativeCompressDir =
            Paths.get(new File("").getAbsolutePath()).relativiz

[jira] [Comment Edited] (FLINK-16198) FileUtilsTest fails on Mac OS

2020-04-28 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17094297#comment-17094297
 ] 

frank wang edited comment on FLINK-16198 at 4/28/20, 8:41 AM:
--

I tested the example.
1. If I delete some code, the test passes:

 @Test
public void testCompressionOnRelativePath() throws IOException {
    final java.nio.file.Path compressDir = tmp.newFolder("compressDir").toPath();
    // delete this code
    /*
    final java.nio.file.Path relativeCompressDir =
            Paths.get(new File("").getAbsolutePath()).relativize(compressDir);
    */
    verifyDirectoryCompression(compressDir);
}

2. I think the problem is due to concurrency. In FileUtils.java, the guardIfWindows method:
{quote}private static void guardIfWindows(ThrowingConsumer<File, IOException> toRun, File file) throws IOException {
    if (!OperatingSystem.isWindows()) {
        toRun.accept(file);
    }
    else {
        // for windows, we synchronize on a global lock, to prevent concurrent delete issues
        // in the future, we may want to find either a good way of working around file visibility
        // in Windows under concurrent operations (the behavior seems completely unpredictable)
        // or make this locking more fine grained, for example on directory path prefixes
        synchronized (WINDOWS_DELETE_LOCK) {
            for (int attempt = 1; attempt <= 10; attempt++) {
                try {
                    toRun.accept(file);
                    break;
                }
                catch (AccessDeniedException e) {
                    // ah, windows...
                }

                // briefly wait and fall through the loop
                try {
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    // restore the interruption flag and error out of the method
                    Thread.currentThread().interrupt();
                    throw new IOException("operation interrupted");
                }
            }
        }
    }
}{quote}
Because there is no concurrency control on macOS or Linux, maybe we can modify the code as follows:
{quote}private static void guardIfWindows(ThrowingConsumer<File, IOException> toRun, File file) throws IOException {
    // for windows, we synchronize on a global lock, to prevent concurrent delete issues
    // in the future, we may want to find either a good way of working around file visibility
    // in Windows under concurrent operations (the behavior seems completely unpredictable)
    // or make this locking more fine grained, for example on directory path prefixes
    synchronized (WINDOWS_DELETE_LOCK) {
        for (int attempt = 1; attempt <= 10; attempt++) {
            try {
                toRun.accept(file);
                break;
            }
            catch (AccessDeniedException e) {
                // ah, windows...
            }

            // briefly wait and fall through the loop
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                // restore the interruption flag and error out of the method
                Thread.currentThread().interrupt();
                throw new IOException("operation interrupted");
            }
        }
    }
}{quote}



was (Author: frank wang):
I tested the example.
1. If I delete some code, the test passes:

 @Test
public void testCompressionOnRelativePath() throws IOException {
    final java.nio.file.Path compressDir = tmp.newFolder("compressDir").toPath();
    // delete this code
    /*
    final java.nio.file.Path relativeCompressDir =
            Paths.get(new File("").getAbsolutePath()).relativize(compressDir)

[jira] [Comment Edited] (FLINK-16198) FileUtilsTest fails on Mac OS

2020-04-28 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17094297#comment-17094297
 ] 

frank wang edited comment on FLINK-16198 at 4/28/20, 8:39 AM:
--

I tested the example.
1. If I delete some code, the test passes:

 @Test
public void testCompressionOnRelativePath() throws IOException {
    final java.nio.file.Path compressDir = tmp.newFolder("compressDir").toPath();
    // delete this code
    /*
    final java.nio.file.Path relativeCompressDir =
            Paths.get(new File("").getAbsolutePath()).relativize(compressDir);
    */
    verifyDirectoryCompression(compressDir);
}

2. I think the problem is due to concurrency:
{quote}private static void guardIfWindows(ThrowingConsumer<File, IOException> toRun, File file) throws IOException {
    // for windows, we synchronize on a global lock, to prevent concurrent delete issues
    // in the future, we may want to find either a good way of working around file visibility
    // in Windows under concurrent operations (the behavior seems completely unpredictable)
    // or make this locking more fine grained, for example on directory path prefixes
    synchronized (WINDOWS_DELETE_LOCK) {
        for (int attempt = 1; attempt <= 10; attempt++) {
            try {
                toRun.accept(file);
                break;
            }
            catch (AccessDeniedException e) {
                // ah, windows...
            }

            // briefly wait and fall through the loop
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                // restore the interruption flag and error out of the method
                Thread.currentThread().interrupt();
                throw new IOException("operation interrupted");
            }
        }
    }
}{quote}
Because there is no concurrency control on macOS or Linux, maybe we can modify the code accordingly.



was (Author: frank wang):
I tested the example.
1. If I delete some code, the test passes:

 @Test
public void testCompressionOnRelativePath() throws IOException {
    final java.nio.file.Path compressDir = tmp.newFolder("compressDir").toPath();
    // delete this code
    /*
    final java.nio.file.Path relativeCompressDir =
            Paths.get(new File("").getAbsolutePath()).relativize(compressDir);
    */
    verifyDirectoryCompression(compressDir);
}

2. I think the problem is due to concurrency:
private static void guardIfWindows(ThrowingConsumer<File, IOException> toRun, File file) throws IOException {
    // for windows, we synchronize on a global lock, to prevent concurrent delete issues
    // in the future, we may want to find either a good way of working around file visibility
    // in Windows under concurrent operations (the behavior seems completely unpredictable)
    // or make this locking more fine grained, for example on directory path prefixes
    synchronized (WINDOWS_DELETE_LOCK) {
        for (int attempt = 1; attempt <= 10; attempt++) {
            try {
                toRun.accept(file);
                break;
            }
            catch (AccessDeniedException e) {
                // ah, windows...
            }

            // briefly wait and fall through the loop
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                // restore the interruption flag and error out of the method
                Thread.currentThread().interrupt();
                throw new IOException("operation interrupted");
            }
        }
    }
}
Because there is no concurrency control on macOS or Linux, maybe we can modify the code accordingly.


> FileUtilsTest fails on Mac OS
> -
>
> Key: FLINK-16198
> URL: https://issues.apache.org/jira/browse/FLINK-16198
> P

[jira] [Commented] (FLINK-16198) FileUtilsTest fails on Mac OS

2020-04-28 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17094297#comment-17094297
 ] 

frank wang commented on FLINK-16198:


I tested the example.
1. If I delete some code, the test passes:

 @Test
public void testCompressionOnRelativePath() throws IOException {
    final java.nio.file.Path compressDir = tmp.newFolder("compressDir").toPath();
    // delete this code
    /*
    final java.nio.file.Path relativeCompressDir =
            Paths.get(new File("").getAbsolutePath()).relativize(compressDir);
    */
    verifyDirectoryCompression(compressDir);
}

2. I think the problem is due to concurrency:
private static void guardIfWindows(ThrowingConsumer<File, IOException> toRun, File file) throws IOException {
    // for windows, we synchronize on a global lock, to prevent concurrent delete issues
    // in the future, we may want to find either a good way of working around file visibility
    // in Windows under concurrent operations (the behavior seems completely unpredictable)
    // or make this locking more fine grained, for example on directory path prefixes
    synchronized (WINDOWS_DELETE_LOCK) {
        for (int attempt = 1; attempt <= 10; attempt++) {
            try {
                toRun.accept(file);
                break;
            }
            catch (AccessDeniedException e) {
                // ah, windows...
            }

            // briefly wait and fall through the loop
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                // restore the interruption flag and error out of the method
                Thread.currentThread().interrupt();
                throw new IOException("operation interrupted");
            }
        }
    }
}
Because there is no concurrency control on macOS or Linux, maybe we can modify the code accordingly.


> FileUtilsTest fails on Mac OS
> -
>
> Key: FLINK-16198
> URL: https://issues.apache.org/jira/browse/FLINK-16198
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Tests
>Affects Versions: 1.11.0
>Reporter: Andrey Zagrebin
>Priority: Blocker
>  Labels: starter
> Fix For: 1.11.0
>
>
> The following tests fail if run on Mac OS (IDE/maven).
>  
> FileUtilsTest.testCompressionOnRelativePath: 
> {code:java}
> java.nio.file.NoSuchFileException: 
> ../../../../../var/folders/67/v4yp_42d21j6_n8k1h556h0cgn/T/junit6496651678375117676/compressDir/rootDirjava.nio.file.NoSuchFileException:
>  
> ../../../../../var/folders/67/v4yp_42d21j6_n8k1h556h0cgn/T/junit6496651678375117676/compressDir/rootDir
>  at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
> sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
>  at java.nio.file.Files.createDirectory(Files.java:674) at 
> org.apache.flink.util.FileUtilsTest.verifyDirectoryCompression(FileUtilsTest.java:440)
>  at 
> org.apache.flink.util.FileUtilsTest.testCompressionOnRelativePath(FileUtilsTest.java:261)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUn

[jira] [Commented] (FLINK-9953) Active Kubernetes integration

2019-08-04 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899725#comment-16899725
 ] 

frank wang commented on FLINK-9953:
---

[~till.rohrmann], how is the project going now? Can I still join?

> Active Kubernetes integration
> -
>
> Key: FLINK-9953
> URL: https://issues.apache.org/jira/browse/FLINK-9953
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Reporter: Till Rohrmann
>Assignee: Chunhui Shi
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is the umbrella issue tracking Flink's active Kubernetes integration. 
> Active means in this context that the {{ResourceManager}} can talk to 
> Kubernetes to launch new pods similar to Flink's Yarn and Mesos integration.
>  
> Document can be found here: 
> [https://docs.google.com/document/d/1Zmhui_29VASPcBOEqyMWnF3L6WEWZ4kedrCqya0WaAk/edit?usp=sharing]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-12845) Execute multiple statements in command line or sql script file

2019-07-29 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895718#comment-16895718
 ] 

frank wang commented on FLINK-12845:


[~docete], hi, can you assign this issue to me? Thanks.

> Execute multiple statements in command line or sql script file
> --
>
> Key: FLINK-12845
> URL: https://issues.apache.org/jira/browse/FLINK-12845
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Reporter: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> User may copy multiple statements and paste them on command line GUI of SQL 
> Client, or User may pass a script file(using SOURCE command or -f option), we 
> should parse and execute them one by one(like other sql cli applications)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-11696) Avoid to send mkdir requests to DFS from task side

2019-07-22 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890622#comment-16890622
 ] 

frank wang commented on FLINK-11696:


We will incorporate your code and adopt option 2 in our production environment.

> Avoid to send mkdir requests to DFS from task side
> --
>
> Key: FLINK-11696
> URL: https://issues.apache.org/jira/browse/FLINK-11696
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, when we create checkpoint directory in the distributed file 
> system. Not only {{CheckpointCoordinator}} but also {{FsCheckpointStorage}} 
> in {{StreamTask}} would create the {{checkpointsDirectory}}, 
> {{sharedStateDirectory}} and {{taskOwnedStateDirectory}}. These many 
> {{mkdir}} RPC requests would cause a very high pressure on the distributed 
> file system, especially when the parallelism is large or jobs continue to 
> failover again and again.
> We could avoid these {{mkdir}} requests from task side if writing to a 
> distributed file system.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-11696) Avoid to send mkdir requests to DFS from task side

2019-07-22 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890620#comment-16890620
 ] 

frank wang commented on FLINK-11696:


We have also encountered similar problems. We originally had two options: one is to add random numbers when creating directories, so they are not created at the same time; the other is to delegate all directory creation to the jobmaster, avoiding duplicate creation.
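A minimal sketch of option 2 with Flink's FileSystem API (the call site and directory layout are assumptions; the jobmaster would run this once before tasks start):

{code:java}
import java.io.IOException;

import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

// Sketch of option 2: create the checkpoint directories once on the jobmaster,
// so individual tasks never issue their own mkdir RPCs against the DFS.
public class CheckpointDirInitializer {

    // 'checkpointsDir' stands in for the checkpoints/shared-state/task-owned
    // directories; the real code would cover all three.
    public static void createOnce(Path checkpointsDir) throws IOException {
        FileSystem fs = checkpointsDir.getFileSystem();
        if (!fs.exists(checkpointsDir)) {
            fs.mkdirs(checkpointsDir);
        }
    }
}
{code}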

> Avoid to send mkdir requests to DFS from task side
> --
>
> Key: FLINK-11696
> URL: https://issues.apache.org/jira/browse/FLINK-11696
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, when we create checkpoint directory in the distributed file 
> system. Not only {{CheckpointCoordinator}} but also {{FsCheckpointStorage}} 
> in {{StreamTask}} would create the {{checkpointsDirectory}}, 
> {{sharedStateDirectory}} and {{taskOwnedStateDirectory}}. These many 
> {{mkdir}} RPC requests would cause a very high pressure on the distributed 
> file system, especially when the parallelism is large or jobs continue to 
> failover again and again.
> We could avoid these {{mkdir}} requests from task side if writing to a 
> distributed file system.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-13086) add Chinese documentation for catalogs

2019-07-18 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887589#comment-16887589
 ] 

frank wang edited comment on FLINK-13086 at 7/18/19 12:11 PM:
--

https://github.com/apache/flink/pull/9163

Please review it, thanks.


was (Author: frank wang):
https://github.com/apache/flink/pull/9157

Please review it, thanks.

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> the ticket for corresponding English documentation is FLINK-12277



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-18 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887588#comment-16887588
 ] 

frank wang edited comment on FLINK-12894 at 7/18/19 12:10 PM:
--

https://github.com/apache/flink/pull/9164

Please review it, thanks.


was (Author: frank wang):
[https://github.com/apache/flink/pull/91|https://github.com/apache/flink/pull/9164]64

Please review it, thanks.

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-18 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887588#comment-16887588
 ] 

frank wang edited comment on FLINK-12894 at 7/18/19 12:10 PM:
--

[https://github.com/apache/flink/pull/91|https://github.com/apache/flink/pull/9164]64

Please review it, thanks.


was (Author: frank wang):
https://github.com/apache/flink/pull/9156

Please review it, thanks.

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-13086) add Chinese documentation for catalogs

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887589#comment-16887589
 ] 

frank wang edited comment on FLINK-13086 at 7/18/19 5:07 AM:
-

https://github.com/apache/flink/pull/9157

Please review it, thanks.


was (Author: frank wang):
[https://github.com/apache/flink/pull/9155|https://github.com/apache/flink/pull/9157]

Please review it, thanks.

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> the ticket for corresponding English documentation is FLINK-12277



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887588#comment-16887588
 ] 

frank wang edited comment on FLINK-12894 at 7/18/19 5:07 AM:
-

https://github.com/apache/flink/pull/9156

Please review it, thanks.


was (Author: frank wang):
[https://github.com/apache/flink/pull/9155]

Please review it, thanks.

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-13086) add Chinese documentation for catalogs

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887589#comment-16887589
 ] 

frank wang edited comment on FLINK-13086 at 7/18/19 5:06 AM:
-

[https://github.com/apache/flink/pull/9155|https://github.com/apache/flink/pull/9157]

Please review it, thanks.


was (Author: frank wang):
[https://github.com/apache/flink/pull/9155]

Please review it, thanks.

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> the ticket for corresponding English documentation is FLINK-12277



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13086) add Chinese documentation for catalogs

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887589#comment-16887589
 ] 

frank wang commented on FLINK-13086:


[https://github.com/apache/flink/pull/9155]

Please review it, thanks.

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0
>
>
> the ticket for corresponding English documentation is FLINK-12277



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887588#comment-16887588
 ] 

frank wang commented on FLINK-12894:


[https://github.com/apache/flink/pull/9155]

Please review it, thanks.

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887587#comment-16887587
 ] 

frank wang commented on FLINK-12894:


I have submitted it.

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13086) add Chinese documentation for catalogs

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887586#comment-16887586
 ] 

frank wang commented on FLINK-13086:


I have submitted it.

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0
>
>
> the ticket for corresponding English documentation is FLINK-12277



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-16 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886629#comment-16886629
 ] 

frank wang commented on FLINK-12894:


OK, I have done it.

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0
>
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13086) add Chinese documentation for catalogs

2019-07-16 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886628#comment-16886628
 ] 

frank wang commented on FLINK-13086:


I have done it.

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0
>
>
> the ticket for corresponding English documentation is FLINK-12277



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-03 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang reassigned FLINK-12894:
--

Assignee: frank wang

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0
>
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-13086) add Chinese documentation for catalogs

2019-07-03 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang reassigned FLINK-13086:
--

Assignee: frank wang

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-12558) Yarn application can't stop when flink job finished

2019-05-20 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16844070#comment-16844070
 ] 

frank wang commented on FLINK-12558:


From your picture, the job was finished, and the log shows:

Caused by: org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find Flink job (5ca911be525811c0035c48d994dfb6f5)
    at org.apache.flink.runtime.dispatcher.Dispatcher.getJobMasterGatewayFuture(Dispatcher.java:766)
    at org.apache.flink.runtime.dispatcher.Dispatcher.requestJob(Dispatcher.java:485)

So the job could not be found because it had already finished.

> Yarn application can't stop when flink job finished
> ---
>
> Key: FLINK-12558
> URL: https://issues.apache.org/jira/browse/FLINK-12558
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / REST
>Affects Versions: 1.6.3
>Reporter: lamber-ken
>Assignee: frank wang
>Priority: Major
> Attachments: image-2019-05-20-18-47-12-497.png, jobmanager.txt
>
>
> I run a flink +SocketWindowWordCount+ job on yarn cluste mode, when I kill 
> the socket, the flink job can't stopped. and I can't reproduct the bug again.
>  
> *Steps 1*
> {code:java}
> nc -lk 
> {code}
> *Steps 2*
> {code:java}
> bin/flink run -m yarn-cluster -yn 2 
> examples/streaming/SocketWindowWordCount.jar --hostname 10.101.52.12 --port 
> 
> {code}
> *Steps 3*
>  cancel the above nc command
> *Steps 4*
>  every thing gone
>     !image-2019-05-20-18-47-12-497.png!
>  ** 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-12558) Yarn application can't stop when flink job finished

2019-05-20 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang reassigned FLINK-12558:
--

Assignee: frank wang

> Yarn application can't stop when flink job finished
> ---
>
> Key: FLINK-12558
> URL: https://issues.apache.org/jira/browse/FLINK-12558
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / REST
>Affects Versions: 1.6.3
>Reporter: lamber-ken
>Assignee: frank wang
>Priority: Major
> Attachments: image-2019-05-20-18-47-12-497.png, jobmanager.txt
>
>
> I run a flink +SocketWindowWordCount+ job on yarn cluste mode, when I kill 
> the socket, the flink job can't stopped. and I can't reproduct the bug again.
>  
> *Steps 1*
> {code:java}
> nc -lk 
> {code}
> *Steps 2*
> {code:java}
> bin/flink run -m yarn-cluster -yn 2 
> examples/streaming/SocketWindowWordCount.jar --hostname 10.101.52.12 --port 
> 
> {code}
> *Steps 3*
>  cancel the above nc command
> *Steps 4*
>  every thing gone
>     !image-2019-05-20-18-47-12-497.png!
>  ** 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-12031) the registerFactory method of TypeExtractor Should not be private

2019-03-27 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802778#comment-16802778
 ] 

frank wang commented on FLINK-12031:


I cannot understand this: I found that many methods can get the TypeInfoFactory from registeredTypeInfoFactories, so if registerFactory is private, maybe we should delete this field.
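For reference, the public way to attach a factory without registerFactory is the @TypeInfo annotation, which TypeExtractor picks up from the annotated type itself. A minimal sketch (the type and factory names are made up):

{code:java}
import java.lang.reflect.Type;
import java.util.Map;

import org.apache.flink.api.common.typeinfo.TypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

// Sketch: instead of calling the private registerFactory, annotate the type.
@TypeInfo(MyTypeFactory.class)
class MyType {
    public String value;
}

class MyTypeFactory extends TypeInfoFactory<MyType> {
    @Override
    public TypeInformation<MyType> createTypeInfo(Type t, Map<String, TypeInformation<?>> genericParameters) {
        return Types.POJO(MyType.class); // assumed mapping, for illustration only
    }
}
{code}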

> the registerFactory method of TypeExtractor  Should not be private
> --
>
> Key: FLINK-12031
> URL: https://issues.apache.org/jira/browse/FLINK-12031
> Project: Flink
>  Issue Type: Bug
>Reporter: frank wang
>Priority: Minor
>
> [https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java]
> {code:java}
> /**
>  * Registers a type information factory globally for a certain type. Every 
> following type extraction
>  * operation will use the provided factory for this type. The factory will 
> have highest precedence
>  * for this type. In a hierarchy of types the registered factory has higher 
> precedence than annotations
>  * at the same level but lower precedence than factories defined down the 
> hierarchy.
>  *
>  * @param t type for which a new factory is registered
>  * @param factory type information factory that will produce {@link 
> TypeInformation}
>  */
> private static void registerFactory(Type t, Class<? extends TypeInfoFactory> factory) {
>Preconditions.checkNotNull(t, "Type parameter must not be null.");
>Preconditions.checkNotNull(factory, "Factory parameter must not be null.");
>if (!TypeInfoFactory.class.isAssignableFrom(factory)) {
>   throw new IllegalArgumentException("Class is not a TypeInfoFactory.");
>}
>if (registeredTypeInfoFactories.containsKey(t)) {
>   throw new InvalidTypesException("A TypeInfoFactory for type '" + t + "' 
> is already registered.");
>}
>registeredTypeInfoFactories.put(t, factory);
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-12031) the registerFactory method of TypeExtractor Should not be private

2019-03-27 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang updated FLINK-12031:
---
Description: 
[https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java]
{code:java}
/**
 * Registers a type information factory globally for a certain type. Every 
following type extraction
 * operation will use the provided factory for this type. The factory will have 
highest precedence
 * for this type. In a hierarchy of types the registered factory has higher 
precedence than annotations
 * at the same level but lower precedence than factories defined down the 
hierarchy.
 *
 * @param t type for which a new factory is registered
 * @param factory type information factory that will produce {@link 
TypeInformation}
 */
private static void registerFactory(Type t, Class<? extends TypeInfoFactory> factory) {
   Preconditions.checkNotNull(t, "Type parameter must not be null.");
   Preconditions.checkNotNull(factory, "Factory parameter must not be null.");

   if (!TypeInfoFactory.class.isAssignableFrom(factory)) {
  throw new IllegalArgumentException("Class is not a TypeInfoFactory.");
   }
   if (registeredTypeInfoFactories.containsKey(t)) {
  throw new InvalidTypesException("A TypeInfoFactory for type '" + t + "' 
is already registered.");
   }
   registeredTypeInfoFactories.put(t, factory);
}
{code}
 

  was:
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java

/**
 * Registers a type information factory globally for a certain type. Every 
following type extraction
 * operation will use the provided factory for this type. The factory will have 
highest precedence
 * for this type. In a hierarchy of types the registered factory has higher 
precedence than annotations
 * at the same level but lower precedence than factories defined down the 
hierarchy.
 *
 * @param t type for which a new factory is registered
 * @param factory type information factory that will produce \{@link 
TypeInformation}
 */
private static void registerFactory(Type t, Class<? extends TypeInfoFactory> factory) {
 Preconditions.checkNotNull(t, "Type parameter must not be null.");
 Preconditions.checkNotNull(factory, "Factory parameter must not be null.");

 if (!TypeInfoFactory.class.isAssignableFrom(factory)) {
 throw new IllegalArgumentException("Class is not a TypeInfoFactory.");
 }
 if (registeredTypeInfoFactories.containsKey(t)) {
 throw new InvalidTypesException("A TypeInfoFactory for type '" + t + "' is 
already registered.");
 }
 registeredTypeInfoFactories.put(t, factory);
}


> the registerFactory method of TypeExtractor  Should not be private
> --
>
> Key: FLINK-12031
> URL: https://issues.apache.org/jira/browse/FLINK-12031
> Project: Flink
>  Issue Type: Bug
>Reporter: frank wang
>Priority: Minor
>
> [https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java]
> {code:java}
> /**
>  * Registers a type information factory globally for a certain type. Every 
> following type extraction
>  * operation will use the provided factory for this type. The factory will 
> have highest precedence
>  * for this type. In a hierarchy of types the registered factory has higher 
> precedence than annotations
>  * at the same level but lower precedence than factories defined down the 
> hierarchy.
>  *
>  * @param t type for which a new factory is registered
>  * @param factory type information factory that will produce {@link 
> TypeInformation}
>  */
> private static void registerFactory(Type t, Class<? extends TypeInfoFactory> factory) {
>Preconditions.checkNotNull(t, "Type parameter must not be null.");
>Preconditions.checkNotNull(factory, "Factory parameter must not be null.");
>if (!TypeInfoFactory.class.isAssignableFrom(factory)) {
>   throw new IllegalArgumentException("Class is not a TypeInfoFactory.");
>}
>if (registeredTypeInfoFactories.containsKey(t)) {
>   throw new InvalidTypesException("A TypeInfoFactory for type '" + t + "' 
> is already registered.");
>}
>registeredTypeInfoFactories.put(t, factory);
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12031) the registerFactory method of TypeExtractor Should not be private

2019-03-27 Thread frank wang (JIRA)
frank wang created FLINK-12031:
--

 Summary: the registerFactory method of TypeExtractor  Should not 
be private
 Key: FLINK-12031
 URL: https://issues.apache.org/jira/browse/FLINK-12031
 Project: Flink
  Issue Type: Bug
Reporter: frank wang


https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java

/**
 * Registers a type information factory globally for a certain type. Every 
following type extraction
 * operation will use the provided factory for this type. The factory will have 
highest precedence
 * for this type. In a hierarchy of types the registered factory has higher 
precedence than annotations
 * at the same level but lower precedence than factories defined down the 
hierarchy.
 *
 * @param t type for which a new factory is registered
 * @param factory type information factory that will produce \{@link 
TypeInformation}
 */
private static void registerFactory(Type t, Class<? extends TypeInfoFactory> factory) {
 Preconditions.checkNotNull(t, "Type parameter must not be null.");
 Preconditions.checkNotNull(factory, "Factory parameter must not be null.");

 if (!TypeInfoFactory.class.isAssignableFrom(factory)) {
 throw new IllegalArgumentException("Class is not a TypeInfoFactory.");
 }
 if (registeredTypeInfoFactories.containsKey(t)) {
 throw new InvalidTypesException("A TypeInfoFactory for type '" + t + "' is 
already registered.");
 }
 registeredTypeInfoFactories.put(t, factory);
}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11982) BatchTableSourceFactory support Json Format File

2019-03-26 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802361#comment-16802361
 ] 

frank wang commented on FLINK-11982:


Can you provide your test.json file? Let me try it, thanks.
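For example, a made-up test.json matching the declared schema (id/name/age), presumably one JSON object per line:

{code}
{"id": "1", "name": "alice", "age": 20}
{"id": "2", "name": "bob", "age": 30}
{code}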

> BatchTableSourceFactory support Json Format File
> 
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> java code :
> {code:java}
> val connector = FileSystem().path("data/in/test.json")
> val desc = tEnv.connect(connector)
> .withFormat(
> new Json()
> .schema(
> Types.ROW(
> Array[String]("id", "name", "age"),
> Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT))
> )
> .failOnMissingField(true)
> ).registerTableSource("persion")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11982) BatchTableSourceFactory support Json Format File

2019-03-22 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16799515#comment-16799515
 ] 

frank wang commented on FLINK-11982:


Do you want to develop this function? You haven't provided that.

> BatchTableSourceFactory support Json Format File
> 
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> java code :
> {code:java}
> val connector = FileSystem().path("data/in/test.json")
> val desc = tEnv.connect(connector)
> .withFormat(
> new Json()
> .schema(
> Types.ROW(
> Array[String]("id", "name", "age"),
> Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT))
> )
> .failOnMissingField(true)
> ).registerTableSource("persion")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11982) BatchTableSourceFactory support Json Format File

2019-03-21 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798651#comment-16798651
 ] 

frank wang commented on FLINK-11982:


In META-INF/services/org.apache.flink.table.factories.TableFactory, just add this line:
org.apache.flink.formats.json.JsonRowFormatFactory
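A minimal sketch of the service file (ServiceLoader treats '#' lines as comments); this assumes the flink-json jar is also on the classpath:

{code}
# src/main/resources/META-INF/services/org.apache.flink.table.factories.TableFactory
org.apache.flink.formats.json.JsonRowFormatFactory
{code}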

> BatchTableSourceFactory support Json Format File
> 
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> java code :
> {code:java}
> val connector = FileSystem().path("data/in/test.json")
> val desc = tEnv.connect(connector)
> .withFormat(
> new Json()
> .schema(
> Types.ROW(
> Array[String]("id", "name", "age"),
> Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT))
> )
> .failOnMissingField(true)
> ).registerTableSource("persion")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11982) BatchTableSourceFactory support Json Format File

2019-03-21 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798628#comment-16798628
 ] 

frank wang commented on FLINK-11982:


format.type and format.property-version are required.

> BatchTableSourceFactory support Json Format File
> 
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> java code :
> {code:java}
> val connector = FileSystem().path("data/in/test.json")
> val desc = tEnv.connect(connector)
> .withFormat(
> new Json()
> .schema(
> Types.ROW(
> Array[String]("id", "name", "age"),
> Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT))
> )
> .failOnMissingField(true)
> ).registerTableSource("persion")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11900) Flink on Kubernetes sensitive about arguments placement

2019-03-21 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797991#comment-16797991
 ] 

frank wang commented on FLINK-11900:


{code:java}
job-cluster --job-classname  --fromSavepoint  
--allowNonRestoredState does not pick the savepoint path and does not start 
from it

job-cluster --fromSavepoint    -n --job-classname  -p 
works for savepoint retrieval{code}
maybe you should keep the argument order consistent with the working form; try that
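As an aside, here is a minimal sketch of order-insensitive argument handling with Flink's ParameterTool, for contrast with the behavior reported in this issue; this is not the job-cluster entrypoint's actual parser.

{code:java}
import org.apache.flink.api.java.utils.ParameterTool;

public class ArgsOrderSketch {
    public static void main(String[] args) {
        // ParameterTool parses --key value pairs regardless of their order,
        // so --fromSavepoint is found wherever it appears on the command line.
        ParameterTool params = ParameterTool.fromArgs(args);
        String savepointPath = params.get("fromSavepoint", null);
        boolean allowNonRestored = params.has("allowNonRestoredState");
        System.out.println("savepoint=" + savepointPath
                + ", allowNonRestoredState=" + allowNonRestored);
    }
}
{code}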

> Flink on Kubernetes sensitive about arguments placement
> ---
>
> Key: FLINK-11900
> URL: https://issues.apache.org/jira/browse/FLINK-11900
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.7.2
>Reporter:  Mario Georgiev
>Assignee: frank wang
>Priority: Major
>
> Hello guys,
> I've discovered that when deploying the job cluster on Kubernetes, the Job 
> Cluster Manager seems sensitive about the placement of arguments. 
> For instance, if I put the savepoint argument last, it is never read.
> For instance, if the arguments are:
> {code:java}
> job-cluster --job-classname  --fromSavepoint  
> --allowNonRestoredState does not pick the savepoint path and does not start 
> from it
> job-cluster --fromSavepoint    -n --job-classname  
> -p works for savepoint retrieval{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11900) Flink on Kubernetes sensitive about arguments placement

2019-03-21 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang reassigned FLINK-11900:
--

Assignee: frank wang

> Flink on Kubernetes sensitive about arguments placement
> ---
>
> Key: FLINK-11900
> URL: https://issues.apache.org/jira/browse/FLINK-11900
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.7.2
>Reporter:  Mario Georgiev
>Assignee: frank wang
>Priority: Major
>
> Hello guys,
> I've discovered that when deploying the job cluster on Kubernetes, the Job 
> Cluster Manager seems sensitive about the placement of arguments. 
> For instance, if I put the savepoint argument last, it is never read.
> For instance, if the arguments are:
> {code:java}
> job-cluster --job-classname  --fromSavepoint  
> --allowNonRestoredState does not pick the savepoint path and does not start 
> from it
> job-cluster --fromSavepoint    -n --job-classname  
> -p works for savepoint retrieval{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11982) org.apache.flink.table.api.NoMatchingTableFactoryException

2019-03-20 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797728#comment-16797728
 ] 

frank wang commented on FLINK-11982:


it looks like you have not provided enough properties
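For background, "No context matches" means no factory on the classpath declared a required context covering connector.type=filesystem together with format.type=json for a batch source. A hedged sketch of how a factory advertises what it matches (the TableFactory interface is real, the factory itself is hypothetical):

{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.table.factories.TableFactory;

// Hypothetical factory, shown only to illustrate the matching step:
// TableFactoryService compares the requested properties against each
// factory's requiredContext(), and fails with "No context matches" when
// no factory covers the requested connector/format combination.
public class JsonFileSystemFactorySketch implements TableFactory {

    @Override
    public Map<String, String> requiredContext() {
        Map<String, String> context = new HashMap<>();
        context.put("connector.type", "filesystem");
        context.put("connector.property-version", "1");
        context.put("format.type", "json");
        context.put("format.property-version", "1");
        return context;
    }

    @Override
    public List<String> supportedProperties() {
        return Arrays.asList(
            "connector.path",
            "format.fail-on-missing-field",
            "format.derive-schema");
    }
}
{code}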

> org.apache.flink.table.api.NoMatchingTableFactoryException
> --
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> Scala code:
> {code:scala}
> val desc = tEnv.connect(connector)
>   .withFormat(
>     new Json()
>       .schema(
>         Types.ROW(
>           Array[String]("id", "name", "age"),
>           Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT)))
>       .failOnMissingField(true)
>       .deriveSchema())
>   .registerTableSource("person")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11982) org.apache.flink.table.api.NoMatchingTableFactoryException

2019-03-20 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang reassigned FLINK-11982:
--

Assignee: frank wang

> org.apache.flink.table.api.NoMatchingTableFactoryException
> --
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> Scala code:
> {code:scala}
> val desc = tEnv.connect(connector)
>   .withFormat(
>     new Json()
>       .schema(
>         Types.ROW(
>           Array[String]("id", "name", "age"),
>           Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT)))
>       .failOnMissingField(true)
>       .deriveSchema())
>   .registerTableSource("person")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11848) Delete outdated kafka topics caused UNKNOWN_TOPIC_EXCEPTIION

2019-03-19 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796694#comment-16796694
 ] 

frank wang commented on FLINK-11848:


props.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "120");
Set the interval to a small value and try that.
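A minimal runnable sketch of this suggestion, assuming a local broker and an illustrative 30-second interval: a pattern-subscribed consumer with partition discovery enabled, so the consumer's topic list follows topic creation and deletion without restarting the job.

{code:java}
import java.util.Properties;
import java.util.regex.Pattern;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase;

public class PatternDiscoveryJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker
        props.setProperty("group.id", "test10");
        // Re-scan the subscribed pattern every 30s (illustrative value) so that
        // newly created and deleted topics are picked up while the job runs.
        props.setProperty(
            FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "30000");

        FlinkKafkaConsumer011<String> consumer = new FlinkKafkaConsumer011<>(
            Pattern.compile("^test.*$"), new SimpleStringSchema(), props);

        DataStream<String> stream = env.addSource(consumer);
        stream.print();
        env.execute("pattern-discovery-example");
    }
}
{code}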

> Delete outdated kafka topics caused UNKNOWN_TOPIC_EXCEPTIION
> 
>
> Key: FLINK-11848
> URL: https://issues.apache.org/jira/browse/FLINK-11848
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.6.4
>Reporter: Shengnan YU
>Assignee: frank wang
>Priority: Major
>
> Recently we have been running some streaming jobs with Apache Flink. There are
> multiple Kafka topics named with a pattern like xx_yy-mm-dd. We use a topic
> regex pattern to let one consumer consume those topics. However, if we delete
> some older topics, the metadata in the consumer does not update properly, so
> it still remembers those outdated topics in its topic list, which leads to
> *UNKNOWN_TOPIC_EXCEPTION*. We must restart the consumer job to recover. The
> same seems to occur in the producer as well. Any idea how to solve this
> problem? Thank you very much!
>  
> Example to reproduce the problem:
> There are multiple Kafka topics, for instance "test20190310", "test20190311"
> and "test20190312". I run the job and everything is OK. Then, if I delete the
> topic "test20190310", the consumer does not notice that the topic was deleted
> and keeps fetching its metadata; unknown-topic errors show up in the
> TaskManager's log.
> {code:java}
> public static void main(String[] args) throws Exception {
>     StreamExecutionEnvironment env =
>         StreamExecutionEnvironment.getExecutionEnvironment();
>     Properties props = new Properties();
>     props.put("bootstrap.servers", "localhost:9092");
>     props.put("group.id", "test10");
>     props.put("enable.auto.commit", "true");
>     props.put("auto.commit.interval.ms", "1000");
>     props.put("auto.offset.reset", "earliest");
>     props.put("key.deserializer",
>         "org.apache.kafka.common.serialization.StringDeserializer");
>     props.put("value.deserializer",
>         "org.apache.kafka.common.serialization.StringDeserializer");
>     props.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS,
>         "120");
>
>     Pattern topics = Pattern.compile("^test.*$");
>     FlinkKafkaConsumer011<String> consumer =
>         new FlinkKafkaConsumer011<>(topics, new SimpleStringSchema(), props);
>     DataStream<String> stream = env.addSource(consumer);
>     stream.writeToSocket("localhost", 4, new SimpleStringSchema());
>     env.execute("test");
> }
> {code}
>   
>   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11848) Delete outdated kafka topics caused UNKNOWN_TOPIC_EXCEPTIION

2019-03-11 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789269#comment-16789269
 ] 

frank wang commented on FLINK-11848:


can you provide an example showing how you use it? thx

> Delete outdated kafka topics caused UNKNOWN_TOPIC_EXCEPTIION
> 
>
> Key: FLINK-11848
> URL: https://issues.apache.org/jira/browse/FLINK-11848
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.6.4
>Reporter: Shengnan YU
>Assignee: frank wang
>Priority: Major
>
> Recently we have been running some streaming jobs with Apache Flink. There are
> multiple Kafka topics named with a pattern like xx_yy-mm-dd. We use a topic
> regex pattern to let one consumer consume those topics. However, if we delete
> some older topics, the metadata in the consumer does not update properly, so
> it still remembers those outdated topics in its topic list, which leads to
> *UNKNOWN_TOPIC_EXCEPTION*. We must restart the consumer job to recover. The
> same seems to occur in the producer as well. Any idea how to solve this
> problem? Thank you very much!
>   
>   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11848) Delete outdated kafka topics caused UNKNOWN_TOPIC_EXCEPTIION

2019-03-10 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang reassigned FLINK-11848:
--

Assignee: frank wang

> Delete outdated kafka topics caused UNKNOWN_TOPIC_EXCEPTIION
> 
>
> Key: FLINK-11848
> URL: https://issues.apache.org/jira/browse/FLINK-11848
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.6.4
>Reporter: Shengnan YU
>Assignee: frank wang
>Priority: Major
>
> Recently we have been running some streaming jobs with Apache Flink. There are
> multiple Kafka topics named with a pattern like xx_yy-mm-dd. We use a topic
> regex pattern to let one consumer consume those topics. However, if we delete
> some older topics, the metadata in the consumer does not update properly, so
> it still remembers those outdated topics in its topic list, which leads to
> *UNKNOWN_TOPIC_EXCEPTION*. We must restart the consumer job to recover. The
> same seems to occur in the producer as well. Any idea how to solve this
> problem? Thank you very much!
>   
>   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11871) Introduce LongHashTable to improve performance when join key fits in long

2019-03-10 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang reassigned FLINK-11871:
--

Assignee: (was: frank wang)

> Introduce LongHashTable to improve performance when join key fits in long
> -
>
> Key: FLINK-11871
> URL: https://issues.apache.org/jira/browse/FLINK-11871
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Operators
>Reporter: Kurt Young
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11871) Introduce LongHashTable to improve performance when join key fits in long

2019-03-10 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang reassigned FLINK-11871:
--

Assignee: frank wang

> Introduce LongHashTable to improve performance when join key fits in long
> -
>
> Key: FLINK-11871
> URL: https://issues.apache.org/jira/browse/FLINK-11871
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Operators
>Reporter: Kurt Young
>Assignee: frank wang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11490) Add an initial Blink SQL batch runtime

2019-03-10 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang reassigned FLINK-11490:
--

Assignee: (was: frank wang)

> Add an initial Blink SQL batch runtime
> --
>
> Key: FLINK-11490
> URL: https://issues.apache.org/jira/browse/FLINK-11490
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Table SQL
>Reporter: Timo Walther
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This issue is an umbrella issue for tasks related to the merging of Blink 
> batch runtime features. The goal is to provide minimum viable product (MVP) 
> to batch users.
> An exact list of batch features, their properties, and dependencies needs to 
> be defined.
> The type system might not have been reworked at this stage. Operations might 
> not be executed with the full performance until changes in other Flink core 
> components have taken place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11490) Add an initial Blink SQL batch runtime

2019-03-10 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang reassigned FLINK-11490:
--

Assignee: frank wang  (was: Jingsong Lee)

> Add an initial Blink SQL batch runtime
> --
>
> Key: FLINK-11490
> URL: https://issues.apache.org/jira/browse/FLINK-11490
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Table SQL
>Reporter: Timo Walther
>Assignee: frank wang
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This issue is an umbrella issue for tasks related to the merging of Blink 
> batch runtime features. The goal is to provide minimum viable product (MVP) 
> to batch users.
> An exact list of batch features, their properties, and dependencies needs to 
> be defined.
> The type system might not have been reworked at this stage. Operations might 
> not be executed with the full performance until changes in other Flink core 
> components have taken place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10851) sqlUpdate support complex insert grammar

2018-11-12 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684717#comment-16684717
 ] 

frank wang commented on FLINK-10851:


OK, it can work, thx

> sqlUpdate support complex insert grammar
> 
>
> Key: FLINK-10851
> URL: https://issues.apache.org/jira/browse/FLINK-10851
> Project: Flink
>  Issue Type: Bug
>Reporter: frank wang
>Priority: Major
>  Labels: pull-request-available
>
> my code is
> {{tableEnv.sqlUpdate("insert into kafka.sdkafka.product_4 select filedName1, 
> filedName2 from kafka.sdkafka.order_4");}}
> but Flink gives me an error: "No table was registered under the
> name kafka".
> I modified the code, and it works now:
> TableEnvironment.scala
> {code:java}
> def sqlUpdate(stmt: String, config: QueryConfig): Unit = {
>   val planner = new FlinkPlannerImpl(getFrameworkConfig, getPlanner, 
> getTypeFactory)
>   // parse the sql query
>   val parsed = planner.parse(stmt)
>   parsed match {
> case insert: SqlInsert =>
>   // validate the SQL query
>   val query = insert.getSource
>   val validatedQuery = planner.validate(query)
>   // get query result as Table
>   val queryResult = new Table(this, 
> LogicalRelNode(planner.rel(validatedQuery).rel))
>   // get name of sink table
>   val targetTableName = 
> insert.getTargetTable.asInstanceOf[SqlIdentifier].names.get(0)
>   // insert query result into sink table
>   insertInto(queryResult, targetTableName, config)
> case _ =>
>   throw new TableException(
> "Unsupported SQL query! sqlUpdate() only accepts SQL statements of 
> type INSERT.")
>   }
> }
> {code}
> it should be modified to this:
> {code:java}
> def sqlUpdate(stmt: String, config: QueryConfig): Unit = {
>   val planner = new FlinkPlannerImpl(getFrameworkConfig, getPlanner, 
> getTypeFactory)
>   // parse the sql query
>   val parsed = planner.parse(stmt)
>   parsed match {
> case insert: SqlInsert =>
>   // validate the SQL query
>   val query = insert.getSource
>   val validatedQuery = planner.validate(query)
>   // get query result as Table
>   val queryResult = new Table(this, 
> LogicalRelNode(planner.rel(validatedQuery).rel))
>   // get name of sink table
>   //val targetTableName = 
> insert.getTargetTable.asInstanceOf[SqlIdentifier].names.get(0)
>   val targetTableName = insert.getTargetTable.toString
>   // insert query result into sink table
>   insertInto(queryResult, targetTableName, config)
> case _ =>
>   throw new TableException(
> "Unsupported SQL query! sqlUpdate() only accepts SQL statements of 
> type INSERT.")
>   }
> }
> {code}
>  
> I hope this can be accepted, thx



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10851) sqlUpdate support complex insert grammar

2018-11-12 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang closed FLINK-10851.
--
Resolution: Resolved

> sqlUpdate support complex insert grammar
> 
>
> Key: FLINK-10851
> URL: https://issues.apache.org/jira/browse/FLINK-10851
> Project: Flink
>  Issue Type: Bug
>Reporter: frank wang
>Priority: Major
>  Labels: pull-request-available
>
> my code is
> {{tableEnv.sqlUpdate("insert into kafka.sdkafka.product_4 select filedName1, 
> filedName2 from kafka.sdkafka.order_4");}}
> but Flink gives me an error: "No table was registered under the
> name kafka".
> I modified the code, and it works now:
> TableEnvironment.scala
> {code:java}
> def sqlUpdate(stmt: String, config: QueryConfig): Unit = {
>   val planner = new FlinkPlannerImpl(getFrameworkConfig, getPlanner, 
> getTypeFactory)
>   // parse the sql query
>   val parsed = planner.parse(stmt)
>   parsed match {
> case insert: SqlInsert =>
>   // validate the SQL query
>   val query = insert.getSource
>   val validatedQuery = planner.validate(query)
>   // get query result as Table
>   val queryResult = new Table(this, 
> LogicalRelNode(planner.rel(validatedQuery).rel))
>   // get name of sink table
>   val targetTableName = 
> insert.getTargetTable.asInstanceOf[SqlIdentifier].names.get(0)
>   // insert query result into sink table
>   insertInto(queryResult, targetTableName, config)
> case _ =>
>   throw new TableException(
> "Unsupported SQL query! sqlUpdate() only accepts SQL statements of 
> type INSERT.")
>   }
> }
> {code}
> it should be modified to this:
> {code:java}
> def sqlUpdate(stmt: String, config: QueryConfig): Unit = {
>   val planner = new FlinkPlannerImpl(getFrameworkConfig, getPlanner, 
> getTypeFactory)
>   // parse the sql query
>   val parsed = planner.parse(stmt)
>   parsed match {
> case insert: SqlInsert =>
>   // validate the SQL query
>   val query = insert.getSource
>   val validatedQuery = planner.validate(query)
>   // get query result as Table
>   val queryResult = new Table(this, 
> LogicalRelNode(planner.rel(validatedQuery).rel))
>   // get name of sink table
>   //val targetTableName = 
> insert.getTargetTable.asInstanceOf[SqlIdentifier].names.get(0)
>   val targetTableName = insert.getTargetTable.toString
>   // insert query result into sink table
>   insertInto(queryResult, targetTableName, config)
> case _ =>
>   throw new TableException(
> "Unsupported SQL query! sqlUpdate() only accepts SQL statements of 
> type INSERT.")
>   }
> }
> {code}
>  
> I hope this can be accepted, thx



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10851) sqlUpdate support complex insert grammar

2018-11-12 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16683800#comment-16683800
 ] 

frank wang commented on FLINK-10851:


I don't think so.

tableEnv.sqlUpdate("insert into `kafka.sdkafka.product_4` select filedName1, 
filedName2 from `kafka.sdkafka.order_4`")

The "kafka.sdkafka.product_4" will be split into new 
String[]{"kafka","sdkafka","product_4"}, and each part carries its own meaning.
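A tiny illustrative sketch of the point above; the catalog/database/table interpretation of the three parts is an assumption based on this discussion, not Flink's exact parsing code.

{code:java}
public class IdentifierSplitSketch {
    public static void main(String[] args) {
        // Illustrative only: a dotted identifier is a multi-part name, so
        // keeping just names.get(0) ("kafka") drops the rest of the path.
        String[] parts = "kafka.sdkafka.product_4".split("\\.");
        String catalog  = parts[0]; // "kafka"      (assumed: catalog)
        String database = parts[1]; // "sdkafka"    (assumed: database)
        String table    = parts[2]; // "product_4"  (assumed: table)
        System.out.println(catalog + " / " + database + " / " + table);
    }
}
{code}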

> sqlUpdate support complex insert grammar
> 
>
> Key: FLINK-10851
> URL: https://issues.apache.org/jira/browse/FLINK-10851
> Project: Flink
>  Issue Type: Bug
>Reporter: frank wang
>Priority: Major
>  Labels: pull-request-available
>
> my code is
> {{tableEnv.sqlUpdate("insert into kafka.sdkafka.product_4 select filedName1, 
> filedName2 from kafka.sdkafka.order_4");}}
> but Flink gives me an error: "No table was registered under the
> name kafka".
> I modified the code, and it works now:
> TableEnvironment.scala
> {code:java}
> def sqlUpdate(stmt: String, config: QueryConfig): Unit = {
>   val planner = new FlinkPlannerImpl(getFrameworkConfig, getPlanner, 
> getTypeFactory)
>   // parse the sql query
>   val parsed = planner.parse(stmt)
>   parsed match {
> case insert: SqlInsert =>
>   // validate the SQL query
>   val query = insert.getSource
>   val validatedQuery = planner.validate(query)
>   // get query result as Table
>   val queryResult = new Table(this, 
> LogicalRelNode(planner.rel(validatedQuery).rel))
>   // get name of sink table
>   val targetTableName = 
> insert.getTargetTable.asInstanceOf[SqlIdentifier].names.get(0)
>   // insert query result into sink table
>   insertInto(queryResult, targetTableName, config)
> case _ =>
>   throw new TableException(
> "Unsupported SQL query! sqlUpdate() only accepts SQL statements of 
> type INSERT.")
>   }
> }
> {code}
> it should be modified to this:
> {code:java}
> def sqlUpdate(stmt: String, config: QueryConfig): Unit = {
>   val planner = new FlinkPlannerImpl(getFrameworkConfig, getPlanner, 
> getTypeFactory)
>   // parse the sql query
>   val parsed = planner.parse(stmt)
>   parsed match {
> case insert: SqlInsert =>
>   // validate the SQL query
>   val query = insert.getSource
>   val validatedQuery = planner.validate(query)
>   // get query result as Table
>   val queryResult = new Table(this, 
> LogicalRelNode(planner.rel(validatedQuery).rel))
>   // get name of sink table
>   //val targetTableName = 
> insert.getTargetTable.asInstanceOf[SqlIdentifier].names.get(0)
>   val targetTableName = insert.getTargetTable.toString
>   // insert query result into sink table
>   insertInto(queryResult, targetTableName, config)
> case _ =>
>   throw new TableException(
> "Unsupported SQL query! sqlUpdate() only accepts SQL statements of 
> type INSERT.")
>   }
> }
> {code}
>  
> I hope this can be accepted, thx



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10851) sqlUpdate support complex insert grammar

2018-11-12 Thread frank wang (JIRA)
frank wang created FLINK-10851:
--

 Summary: sqlUpdate support complex insert grammar
 Key: FLINK-10851
 URL: https://issues.apache.org/jira/browse/FLINK-10851
 Project: Flink
  Issue Type: Bug
Reporter: frank wang


my code is
{{tableEnv.sqlUpdate("insert into kafka.sdkafka.product_4 select filedName1, 
filedName2 from kafka.sdkafka.order_4");}}

but Flink gives me an error: "No table was registered under the 
name kafka".
I modified the code, and it works now:
TableEnvironment.scala


{code:java}
def sqlUpdate(stmt: String, config: QueryConfig): Unit = {
  val planner = new FlinkPlannerImpl(getFrameworkConfig, getPlanner, 
getTypeFactory)
  // parse the sql query
  val parsed = planner.parse(stmt)
  parsed match {
case insert: SqlInsert =>
  // validate the SQL query
  val query = insert.getSource
  val validatedQuery = planner.validate(query)

  // get query result as Table
  val queryResult = new Table(this, 
LogicalRelNode(planner.rel(validatedQuery).rel))

  // get name of sink table
  val targetTableName = 
insert.getTargetTable.asInstanceOf[SqlIdentifier].names.get(0)

  // insert query result into sink table
  insertInto(queryResult, targetTableName, config)
case _ =>
  throw new TableException(
"Unsupported SQL query! sqlUpdate() only accepts SQL statements of type 
INSERT.")
  }
}
{code}
It should be modified to this:
{code:java}
def sqlUpdate(stmt: String, config: QueryConfig): Unit = {
  val planner = new FlinkPlannerImpl(getFrameworkConfig, getPlanner, 
getTypeFactory)
  // parse the sql query
  val parsed = planner.parse(stmt)
  parsed match {
case insert: SqlInsert =>
  // validate the SQL query
  val query = insert.getSource
  val validatedQuery = planner.validate(query)

  // get query result as Table
  val queryResult = new Table(this, 
LogicalRelNode(planner.rel(validatedQuery).rel))

  // get name of sink table
  //val targetTableName = 
insert.getTargetTable.asInstanceOf[SqlIdentifier].names.get(0)
  val targetTableName = insert.getTargetTable.toString

  // insert query result into sink table
  insertInto(queryResult, targetTableName, config)
case _ =>
  throw new TableException(
"Unsupported SQL query! sqlUpdate() only accepts SQL statements of type 
INSERT.")
  }
}
{code}
 

I hope this can be accepted, thx



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)